Flashcards in Management Deck (19):

What is Wildlife Management? Wildlife management is the application of ecological knowledge to populations of vertebrate animals and their plant and animal associates in a manner that strikes a balance between the needs of those populations and the needs of people. (Eric Bolen, William L. Robinson)

This ecological knowledge is applied in three basic management approaches: 1) preservation, when nature is allowed to take its course without human intervention (custodial); 2) direct manipulation, when animal populations are trapped, shot, poisoned, or stocked; and 3) indirect manipulation, when vegetation, water, or other key components of wildlife habitats are altered.

-Game species – hunting and other consumptive recreation
-Non-game species – wildlife viewing and other non-consumptive recreational activities (camping, hiking, etc.)

Historically, wildlife management has followed this course: 1) restrictions (hunting, harvest, access); 2) control of mortality factors (e.g. predator control); 3) reservation of lands; 4) artificial replenishment; 5) habitat manipulation & control (of food, water, cover, disease, special factors).

Levels of Management
-Individual – wildlife rehabilitation, endangered species
-Single species (population) management – emphasis on one species, e.g. white-tailed deer
-Ecosystem (community) management – maintenance of ecological diversity; Strategic Habitat Conservation; surrogate species

Strategic Habitat Management
-Ecologically meaningful scales
-Work in partnerships
-Adaptive management framework (monitoring and research; science and tools)
-Healthy permanent populations (healthy population parameters)
-Wildlife and human needs in balance (minimize human impact)
-Conditions which allow wildlife populations to tolerate human presence and recreational activities (recreational hunting, wildlife watching, and other outdoor activities that involve human disturbance)

Steps in Management
1- Baseline information on the natural history and ecology of the species (population dynamics, habitat use, food habits, etc.)
2- Population estimates & demographics (males : females; females : offspring, etc.)
3- Survey & evaluation of important habitat components
4- Evaluation of recreation impacts (disturbance from hiking, camping, etc.)
5- Management actions: protection (from human impact); supply of limiting factors (food, cover, shelter, water, etc.)

Management of Game Species for the Hunter
-Predator control in sensitive areas
-Supplemental foods (feeds & plantings)
-Hunting improvements and accessories
-Monitoring habitat destruction
-Acquisition of nesting habitat

Game mgmt: Sustainable Harvest
-Additive or compensatory mortality
-Corrective/control harvest (males, females, or both)
-Habitat manipulations (ensure quality habitat)
(A minimal numerical sketch of the sustainable-harvest idea appears after this deck.)

Game mgmt: White-tailed Deer
-Size, reproduction, trends; estimate of CC (carrying capacity)
-Harvest management strategies to control populations and manage for desirable sex ratios, age structure, and trophy characteristics
-Carried out by state/federal agencies and private landholders

Game mgmt: the Hunter
-Population levels should be a compromise between the ideal level for the animal and the ideal level for the hunter
-Determine harvest levels: seasons, bag limits, hunter restrictions

Game mgmt: Migrating Waterfowl
-Production estimates – nest surveys
-Season limits, bag limits, point system
-Monitoring habitat and disturbance
-Federal & international agencies; treaties with other nations (Canada, Mexico)
-Impact – where and what? (Predator and pest control – e.g. cowbirds)

Rattlesnake Round-ups (Rodeos)
Festive affairs in which the capture, display, handling, milking, and killing of the animals are celebrated – the total exploitation of a wildlife species ("Texas Snake Man"). Round-ups date from the 1930s–1940s era; ~50 are held annually in 10 western states, and 60,000–101,000 snakes may be harvested. Sweetwater, TX, alone took in 90 tons of snakes from 1958 to 1991. The focus is on the western diamondback rattlesnake, Crotalus atrox, but non-target species are impacted as well. Collections are made by professional and recreational collectors in the most traditional ways: searching likely habitats, road riding, as well as gassing den sites. Animal welfare concerns are high on the list. (See also: Effects of Rattlesnake Roundups on the Eastern Diamondback Rattlesnake.)

Management Strategies for Roundups
-Total protection granted; round-ups abolished until a complete assessment is made (?)
-Dependable, scientific information gathered
-Game status granted? Allows for season limits, legal methods of capture, etc.

Introduction or Restoration
-Suitability of the animal for introduction or reintroduction
-Legitimate source of animals
-Why was the species extirpated? Is that issue still a problem?
-Habitat suitability & quality
-Protection – moratorium on hunting; establish conservation guidelines.
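To put a number on the "sustainable harvest" and carrying-capacity cards above, here is a minimal sketch using the standard logistic growth model. This is an illustration of the general concept, not a method taken from the deck; the growth rate and carrying capacity below are hypothetical.

```python
# Sustainable-harvest arithmetic under logistic growth. The parameters
# r (intrinsic growth rate) and K (carrying capacity) are hypothetical,
# not estimates for any real population.

def logistic_step(n, r, k, harvest):
    """One year of logistic growth followed by a fixed annual harvest."""
    growth = r * n * (1 - n / k)
    return max(n + growth - harvest, 0)

r, k = 0.4, 10_000
msy = r * k / 4          # maximum sustainable yield, achieved at N = K/2
print(f"MSY is roughly {msy:.0f} animals/year")

n = k / 2                # hold the population at half of carrying capacity
for year in range(5):
    n = logistic_step(n, r, k, msy)
    print(f"year {year + 1}: population {n:.0f}")
```

Harvesting exactly the annual growth (here, MSY at N = K/2) leaves the population unchanged year over year; harvesting more than the population adds each year drives it down, which is the arithmetic behind setting seasons and bag limits against a population estimate.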
Name: _________________________    Period: ___________________

This quiz consists of 5 multiple choice and 5 short answer questions through Chapter 9.

Multiple Choice Questions
1. How does Mr. Lanman kill one of two slaves?
2. When Mr. Lanman brags about the two killings, what happens?
(a) He goes to jail for one night.
(b) He has to pay a fine.
(d) He has to set two slaves free.
3. In what time period of his life do Douglass's detailed descriptions in Chapter Two take place?
(a) His twenties.
(b) Two years of his childhood.
(c) His teenage years.
(d) When he meets his first love.
4. Near what river is the main farm Douglass works on?
(a) Mississippi River.
(b) Chippewa River.
(c) Miles River.
(d) Maryland River.
5. What is the name of the slave who is killed in the water when he refuses to get out after being beaten?

Short Answer Questions
1. What does Douglass describe in detail in Chapter Seven?
2. Where are the punishment of slaves and the food given out?
3. Phillips compares the Mississippi Valley to a story in what publication?
4. What does the slaveholder use as a way to justify slavery?
5. According to Douglass, what do the slaves strive for?
Spangled cotingas are striking tropical birds, living in the forests and jungles of the north of South America, from Venezuela and the Guianas to Colombia, Bolivia and the Amazon region of Brazil. They eat mainly fruit, and the males and females are quite distinct: the males have spectacularly bright plumage, while the females are a much duller brownish colour.

What they are like
The Spangled cotinga is an eye-catching, colourful tropical bird, standing out for its distinct sexual dimorphism: males have spectacularly coloured plumage—blue and turquoise, with purple hues on the wings, tail and scapular feathers, and a bright purple gular area—while the females are much more discreet, a brownish colour with greys on the back and lighter brown on the dorsal and belly areas. They live in the forests and jungles of northern South America, from Venezuela and the Guianas to Colombia, Bolivia and the Amazon region of Brazil, at altitudes of up to 600 metres; they rarely go higher than this. The species is fundamentally frugivorous, though during the breeding season it adds several insect species to its diet. It nests in trees of some height, building a cup-shaped nest padded with all types of plant matter, where it lays a single egg that is incubated for no longer than 20 days. When a predator approaches the nest, cotingas fly up to the treetops, performing aerial acrobatics and spreading their tails like a fan to attract the intruder and draw it away. Their antics to distract intruders are conspicuous and simply wonderful. It is resident and sedentary, although nomadic movements of this species are periodically seen in Venezuela, depending on the climate and rainfall. It is not endangered and is abundant and common in forests and jungles across its entire area of distribution.
The Statute of Anne: "An Act for the Encouragement of Learning"
By Kristopher A. Nelson in April 2010

300 years ago Saturday, the Statute of Anne created the first modern system of copyright. A few fun facts about the Act:
- Violating copyright was defined as "infringement," not "theft" (and remains so today).
- Before the Act, printers, not authors, were the ones granted monopoly rights over works.
- The United States, before and after the Act, was the source of many illicit reprints of British texts, since America did not get similar copyright rules until much later.
- Copyright was designed to create an incentive to create, but still to permit an eventual public benefit by expanding the public domain.

Want more discussion on how copyright ought to function? To commemorate the anniversary, the British Council asked just that question:

The world's first copyright law was passed by the English Parliament on 10 April 1710 as 'An Act for the Encouragement of Learning'. Its 300th anniversary provides a unique opportunity to review copyright's purposes and principles. If today we were starting from scratch, but with the same aim of encouraging learning, what kind of copyright would we want?

via Copyright 1710-2010 « Counterpoint.

There are a number of interesting ideas in there that are worth thinking about, including:
- Cory Doctorow's proposal that copyright law ought to "recognize and celebrate the wonderful thing that is copying."
- Mark Shuttleworth suggests, "It's time to get creative about the incentives for creation."
- Alex Fleetwood writes, "Copyright has become synonymous with the protection of endangered cultural industries."
- Lawrence Lessig believes that the "problem that we are confronting is the result of a law that has been rendered hopelessly out-of-date by new technologies."
Sturdy? Prod? Glisten? Infer the Meaning of the Words
Lesson 16 of 16
Objective: SWBAT determine unknown words by creating inferences based on schema and text evidence.

- Eaglet's World by Don Freeman
- Lesson vocabulary words from the Reading/Writing word wall: inferring, evidence, schema, words, illustration, informational text
- Set up the whiteboard
- Eaglet's World powerpoint
- 'Infer the Meaning' worksheet
- Screen and projector to show the videos (look at the websites beforehand to familiarize yourself - I only showed about 30 seconds of each one)

I chose this story because it has great information about eaglets hatching, which is related to our science unit. Reading about informational topics represents a shift towards building knowledge in the disciplines, a push for the Common Core Standards. The story line in the book is engaging and the illustrations are beautiful. I also think the kids can identify with growing up. Because this story is about an informational text topic, there are lots of opportunities for vocabulary work. The kids loved this book and the idea of a baby growing up to fly free.

Let's Get Excited!
Underlined words below are lesson vocabulary words that are emphasized and written on sentence strips for my Reading & Writing word wall. I pull the words off the wall for each lesson, helping students understand that this key 'reading and writing' vocabulary can be generalized across texts and topics. The focus on acquiring and using these words is part of a shift in the Common Core Standards towards building students' academic vocabulary. (My words are color coded: 'pink' for literature, 'blue' for reading strategies, 'orange' for informational text, 'yellow' for writing, and 'green' for all other words.)

Common starting point
- Show the first powerpoint slide. "We are doing more inferring with informational text today. Take a look at the title and picture. What can you infer?"
- Take ideas - it's about... an eagle, baby eagles, eggs...
- "How did you come to such good inferences? It sounds like you used your evidence (title and picture) and your schema (I know that eagles build nests, they have baby eagles, the eagles hatch from eggs)."

My goal here is to engage the kids in the discussion about the text and give them a start on inferencing. This is a lesson at the end of my inferencing unit, so my kids have a good idea of how to use evidence and schema to make inferences.

Give the purpose of the lesson
- "Inferring can help us in many ways - describing a character, making predictions, determining cause and effect, and defining vocabulary. Take a look at some of the vocabulary in this story." - Show powerpoint slides 2-3.
- "When we read a word we don't know, we use evidence from the text (words and illustrations) and our schema (what we know) to make a good inference." Slide 4
- "Let's talk about that more:" (slide 5)
- text clues
- picture clues
- rereading of the text
- thinking about it
- "We're going to use this organizer today to help us define the words." (last slide)

Introduce strategy - teacher models
- Read through the first few pages to set the pace of the book (stop at the page that starts with 'where eaglet was...')
- "Here's a word I don't know - 'sturdy'. I'll write that word. I'm going to infer that it means 'strong' (write that), and I know that because the words say 'twigs made it strong'. I'll write that too. Since I used 'text', I'll write a 'T' in the box."
- "Here's a word I don't know - 'down'. I know one meaning, but it doesn't fit here.
The picture shows a nest with feathers, so I'll infer that it means 'feathers' because of the evidence in the picture. I'll write an 'I' for 'illustration' in the box."

Practice strategy - guided practice
- "Help me with another one." Read to the page 'Sometimes their wings...'. "Is there a word you don't know?" ('whirred') "What do we infer that means? How do we know that?" (schema - I think I know what bird wings sound like). Write 'sound' and an 'S' in the box for schema.
- This is what the whiteboard looked like after our discussion.

Students are using a variety of ways in this lesson to determine or clarify the meaning of unknown words in grade 2 reading content (L.2.4a). The focus is using sentence-level context as clues for meaning, but if they offer other ways of defining words (such as root word, prefix, synonym, etc.), encourage this. The kids come with a variety of language levels, and there are words that some will know and some will not. The goal of determining the meaning of unknown words is to find what works for each student - those who can use root words, others who use schema, and some who use evidence.

Students Take a Turn
- "Now I'll continue reading and see how well you can define the words. I'll pause after each page and write a word or two. If you know what it means, you don't have to write it."
- Read slowly and help students with spelling. Challenge those that say they 'know' the word.

Read and let students work
- Continue reading and write some words on the board.
- Remind students about using schema and evidence.
- I did help with spelling by listing words on the board.
- Challenge them to use good definitions. We did discuss a good number of the definitions, and I encouraged the kids to go beyond the 'boring words', such as 'good' and 'like', to more descriptive words.
- Here's an example of a completed worksheet that one student completed.

As students ask and answer questions to demonstrate understanding of key details in the text, they are reading closely to determine what the text says explicitly and cite specific textual evidence to support their definitions (RI.2.1). The study of text goes beyond enjoying a simple story. Students are interacting with the text, questioning, inferring, and using other reading strategies to bring their own schema and build upon the evidence that the author presents.

Use What We've Learned
- "We have some great words now that describe baby eaglets. I have 2 videos to show you, and we can use those words to describe what is happening. Here's a list of words that we can look for in the videos."
- "This is a video of baby eaglets. Take a look and then we can talk about what we see, using our new words (sturdy, down, grubs, rim, talon, wobbling)." - I just showed a minute or so of this one.
- "Let's talk about what we just saw. Who can tell me what was in the video, using the words that we learned?" My kids were able to describe 'the mom gives the baby grubs' and 'when she flies the wings are whirring'.
- "Here is another video of eaglets starting to fly (whirring, nudge, scrambled, glisten, glint). What can you say about the eagles in this video - use the words again."

Scaffolding and Special Education: This lesson could be easily scaffolded up or down, depending on student ability. Students with language challenges may struggle more in this lesson due to the advanced vocabulary. I would suggest they sit with a partner or you write words on the whiteboard.
They should try to make inferences, but may need help 'writing' out their schema or inference. Students with higher vocabulary should be able to make good inferences. I would set an expectation of how many words they need to define because they may know most of the words. It's still a great expectation for them to cite the evidence or schema to support the inference. That is an expectation of the Common Core State Standards.
When was the book of Job written? How do we know when it was written, since we don't know who wrote the book or when Job lived?

Top Ten Reasons Why We Believe the Book of Job was Written During the Time of the Patriarchs
1. Job lived 140 years after his calamities (42:16). This corresponds with the lifespans of the patriarchs. For example, Abraham lived 175 years.
2. Job's wealth was reckoned in livestock (1:3; 42:12), which was also true of Abraham (Gen. 12:16) and Jacob (Gen. 30:43).
3. The Sabeans and Chaldeans (Job 1:15, 17) were nomads in Abraham's time, but in later years were not.
4. The Hebrew word (qesitah) translated "piece of silver" (42:11) is used elsewhere only twice (Gen. 33:19; Josh. 24:32). Both times are in reference to Jacob.
5. Job's daughters were heirs of his estate along with their brothers (Job 42:15). This was not possible later under the Mosaic Law if a daughter's brothers were still living (Num. 27:8).
6. Literary works similar in some ways to the Book of Job were written in Egypt and Mesopotamia around the time of the patriarchs.
7. The Book of Job includes no references to the Mosaic institutions (priesthood, laws, tabernacle, special religious days and feasts).
8. The name Shaddai (šadday) is used of God 31 times in Job (compared with 17 times elsewhere in the Old Testament) and was a name familiar to the patriarchs.
9. Several personal and place names in the book were also associated with the patriarchal period. Examples include (a) Sheba – a grandson of Abraham, (b) Tema – another grandson of Abraham, (c) Eliphaz – a son of Esau, (d) Uz – a nephew of Abraham.
10. Job was a common West Semitic name in the second millennium B.C. Job was also the name of a 19th-century-B.C. prince in the Egyptian Execration texts.
How Do Bean Seeds Grow Into Plants? by ECSDM
Biology/Living Environment, Math, Science & Technology
Elementary, 1st Grade, 2nd Grade

Students should have some previous experience with the concept of life cycles prior to introducing this unit. Students will be able to tell or draw to describe the life cycle of the bean plant. Students will be able to put the pictures of a developing bean plant into its life cycle order. Students will be able to participate in experiments and document their findings by drawing pictures. Students will demonstrate their understanding of sorting and classifying during bean identification activities.

One set of the following materials per student: coloring and drawing materials, a wide variety of bean seeds, cups to sort bean seeds, a 10 oz. see-through plastic cup, some potting soil, lima beans for planting.

- Students will sort and classify a variety of bean seeds and record their observations.
- Students will conduct an experiment and place lima beans in water to observe the effects water has on the seeds. They will also set up a control group with no water for comparison.
- Students will conduct the "Paper Towel Plants" experiment to begin the opening of the lima bean and growth of roots, and record their observations.
- Students will plant the baby lima bean plants in soil to observe any further growth.

Use the SMART Notebook file to guide the lesson. (This can be found in the Support Materials section below.)
- Slide 1: Begin by asking students to list some of the things that living things need to grow and change.
- Slide 2: Have students define what they think a bean seed is and what it would need to grow and change. Write these on the idea web on Slide 2.
- Slide 3: Explain that we are going to do a study to see how bean seeds grow and change, beginning with a little research. We will view the short video "Flowers and Seeds" with Timothy Littleboot to get some ideas and some information about seeds.
- Slide 4: Discuss the video and what they observed and learned about seeds. Explain that we are going to study bean seeds very closely to see how they grow and change.
- Slide 5: Model the song lyrics for "Seeds Grow Into Plants" using the tune of "The Farmer in the Dell."
- Slides 6, 7, and 8: Use the Enchanted Learning book "A Sprouting Bean" and its illustrations to show and discuss the stages of bean growth. Ask students to respond to the information and tell what they think plants may need to stay alive.
- Slides 9, 10 and 11: Discuss what plants need to live, reading the text for discussion.
- Slide 12: Invite students to identify the parts of plants they recognize.
- Slide 13: Return to the idea web to add new ideas and facts we have learned.
- Slide 14: Ask students to participate in the activity of placing the pictures to demonstrate the life cycle of the bean plant.
- Slide 15: Read the text to the students. Introduce the first bean seed experiment of observing and exploring dried bean seeds. Provide a wide variety of dried beans, in many sizes and colors, for the students to explore and observe with a handheld magnifying glass. Demonstrate how they can be sorted and grouped according to how they look.
- Students should use small cups to group each type of bean. Ask students to summarize their findings by telling the class about the seeds they studied and what they saw.
- Have students draw pictures of the seeds and record a sentence about their findings.
- Slide 16: Discuss how the beans look, feel and smell.
Explain that each seed has a coat that protects it and that water helps the bean seed get ready to do its growing. State that we are going to do an experiment to see if it is true that water will help our lima bean seeds to grow and change.
- Do the second experiment of placing about 7 to 15 lima beans in clear cups of water to soak overnight. Try to have at least one bean per two students to allow for close study. Prepare other cups without any water and put the same number of beans in them to create a comparison group.
- Record students' observations as you do the experiment. Briefly explain that a hypothesis is a prediction of what we think may happen, and record their predictions on a chart. You can evaluate the results of the experiment and refer back to their predictions, confirming or negating them based on their observations of what has happened to the bean seeds.
- Slide 17: Distribute, if possible, "The Bean Sprout" handout (from EnchantedLearning.com), or one like it, detailing the stages of growth of the bean seed to help children summarize the steps both verbally and in written form.
- Slide 18: Describe for students the last experiment, which is in two stages. In the first stage, students will view and follow the steps described to establish their own paper towel plants.
- Ask students to record their hypotheses of what may occur during the experiment in class. Continue to record observations during the experiment using both pictures and words.
- After the first part of the experiment has been conducted, review how the lima bean seeds grew and changed, naming the parts that grew while the seed was in the bag. Refer back to the SMART Board diagrams to help students recall the names of the parts that they will begin observing.
- Have students record the changes daily, discuss what their own bean seeds are doing, and report these to the class. It is recommended that you make extra paper towel plants in case of casualties.
- Slide 19: Once the beans begin opening and creating some roots, the second part of the experiment begins.
- Plant each student's budding beans in a cup with soil. Have students record their daily progress and spray water on their plants every day. Stand back and watch the show… Encourage students to use the new vocabulary they have learned as their bean plants grow and change. Send the bean plants home for further growing and changing.
- Have students discuss their data and findings, using the notes and pictures they have taken during the process.
- Summarize the conclusions they have drawn by writing dictated sentences in simple patterns on chart paper with each student's name. Use this as a discussion and reference point to help students prepare for their own writing about the growth and change of seeds. Please see the assessment section for the writing activity.

Materials and Resources
- Using a limited vocabulary in pattern sentences gives students an opportunity to grasp and practice new words.
- Using visual representations while discussing the topic helps students make immediate connections to vocabulary.
- Repeated opportunities to view the SMART Board and video presentations reinforce vocabulary and provide further opportunities to understand it.
- Accepting limited answers and drawings as responses encourages student participation.
- Read text aloud and encourage students to participate in 'reading' photos to gain information.
Use a variety of posters and picture books (both fiction and non-fiction) to visually demonstrate how seeds grow and change. Some book suggestions are listed below:
- From Seed to Plant by Gail Gibbons
- From Seed to Plant (Rookie Read-About Science) by Allan Fowler
- The Magic School Bus Plants Seeds: A Book About How Living Things Grow (Magic School Bus) by Joanna Cole, John Speirs, and Bruce Degen
- Curious George Plants a Seed (Curious George Early Readers) by H.A. and Margret Rey
- The Carrot Seed 60th Anniversary Edition by Ruth Krauss and Crockett Johnson
- How A Seed Grows, or the Spanish edition Cómo crece una semilla (Let's-Read-and-Find-Out Science 1), by Helene J. Jordan and Loretta Krupinski
- The Tiny Seed by Eric Carle
- One Watermelon Seed by Celia Barker Lottridge and Karen Patkau
- Seeds! Seeds! Seeds! by Nancy Elizabeth Wallace
- From Seed to Pumpkin by Jan Kottke
- From Seed to Sunflower by Gerald Legg
- Plant a Seed of Peace by Rebecca Seiling
- A Seed in Need: A First Look at the Plant Cycle (First Look: Science) by Sam Godwin and Simone Abel
- The Sun Seed by Jan Schubert
- From Seed to Pumpkin (Let's-Read-and-Find-Out Science 1) by Wendy Pfeffer and James Graham Hale
- Spring Is Here! A Story About Seeds (Ready-to-Read, Pre-Level 1) by Joan Holub and Will Terry
- The Reason for a Flower by Ruth Heller
- How a Seed Grows Into a Sunflower (Amaze) by David Stewart
- All About Seeds - Pbk (Now I Know) by Susan Kuchalla and Jane McBee
- From Seed to Dandelion (Scholastic News Nonfiction Readers: How Things Grow) by Ellen Weiss
- I'm A Seed (Level 1) (Hello Reader) by Jean Marzollo and Judith Moffatt

Some chart or poster recommendations might include:
- Scholastic magazine and National Geographic Magazine for Children have posters that are always attractive and informative.
- How Seeds Grow Chart; How Plants Make Food; Plants on the Grow Bulletin Board Set. Manufacturer: FRANK SCHAFFER
- Plant Kingdom Poster Set. Manufacturer: MCDONALD PUBLISHING
- PLANTS THEME CHART. Manufacturer: CREATIVE TEACHING PRESS
- CHARTLET THE LIFE CYCLE OF A PLANT. Manufacturer: CARSON DELLOSA
- Vegetables & Fruits Poster Cards. Manufacturer: WORLDCLASS LEARNING MATERIALS
- Photo Fun Plants 8/Pk 8-1/2'' X 11''. Manufacturer: EDUPRESS
- Plants Thematic Unit. Manufacturer: TEACHER CREATED RESOURCES

Help students write and illustrate a book telling about the sequential development of seeds growing into plants. Use four sheets of scaffolded paper in growing sizes, stapled at the top.
Peer Effects in the Classroom

"Students who are exposed to unusually low achieving cohorts tend to score lower themselves."

How can advanced economies get the biggest increase in human capital for their education dollar? That is, how productive are their investments in education? In answering these questions, one tricky problem is "peer effects": students are "good" peers if they produce positive learning spillovers, so that students exposed to them gain more for each dollar spent on their education, or "bad" peers if they have the reverse effect. It is hard to know whether such peer effects exist, but if they do, they are crucial to current debates on which policies maximize the productivity of a country's education spending. The United States is debating school choice; European countries are discussing whether to eliminate ability tracks from their education systems; Latin American countries are debating whether to devolve control and funding of education to localities. Many arguments against school choice, decentralized funding, and ability tracking rest on the belief that peer effects are important and have a particular asymmetry: that is, bad peers gain more by being exposed to good peers than good peers lose by being exposed to bad peers. If this asymmetry is strong, then investments in human capital are maximized when students are forced to attend schools with a broad array of abilities and backgrounds. Such coercion is obviously impossible with ability tracking and can be hard to achieve with choice or local funding.

In Peer Effects in the Classroom: Learning From Gender and Race Variation (NBER Working Paper No. 7867), NBER Research Associate Caroline Hoxby tries to determine whether peer effects exist and, if they do, what form they take (for instance, are they asymmetric?). She begins by noting that true peer effects are hard to measure. Parents who provide home environments that are good for learning tend to select the same schools. Even within a school, interested parents lobby to have their children assigned to particular teachers. Thus, if high achievers tend to be clumped in some classrooms and low achievers in other classrooms, we should not assume that the achievement differences are caused by peer effects. Most of the achievement differences probably are due to parents, who would influence their children a lot even if they could not get them into classrooms with particular groups of peers. It is not just parents' activities that make peer effects hard to measure, though; it is also schools' activities. Students with similar abilities may be assigned to the same classroom in order to make it easier to teach. Teachers with a knack for handling unruly students may have classes full of them. Thus, classroom achievement could differ because the initial student composition differs, not because peers influence one another.

To identify true peer effects, Hoxby compares groups within a given school that differ randomly in peer composition. To illustrate: suppose that a family shows up for kindergarten with their older son and finds that, simply because of random variation in local births, that son's cohort is 80 percent female. The next year, they show up with their younger son and find that, also because of random variation, that son's cohort is 30 percent female. Their two sons will now go through elementary school consistently experiencing classrooms that have different peer composition on average.
Their older son will be exposed to more female students (who tend to be higher achievers and less disruptive in elementary school). Their younger son will be exposed to more male students. Because the two boys have the same parents and the same school, the main difference in their experience will be peers. If it turns out that male students systematically do better (or worse) when exposed to more female students, then that systematic difference must be attributable to peer effects.

Hoxby also compares school cohorts that differ in racial composition or initial achievement, rather than in gender composition. She uses data from the entire population of elementary students in Texas from 1990 to 1999 (the Texas Schools Microdata Sample). Her measure of achievement is a student's score on the Texas Assessment of Academic Skills, which is administered in all Texas public schools.

Hoxby finds that peer effects do exist. For instance, her results suggest that having a more female peer group raises both male and female scores in reading and math. She points out that only some of the "good" peer effect of females can be direct learning spillovers, because females do not know math better than males on average, although they are better readers. The fact that females raise math scores, therefore, must be due to phenomena more general than direct learning spillovers -- for instance, females' lower tendency to disrupt.

In Texas, black and Hispanic students tend to enter school with lower initial achievement. Does this matter? Hoxby finds that it does. Students who are exposed to unusually low achieving cohorts tend to score lower themselves. Interestingly enough, black students appear to be particularly affected by the achievement of other black students. Hispanic students appear to be particularly affected by the achievement of other Hispanic students. In fact, Hispanic students do better when in majority Hispanic cohorts, even though the additional Hispanic students tend to have lower initial achievement. It may be that in classes with more Hispanics, a student who is learning English is more likely to find a bilingual student who helps him out.

Hoxby finds little evidence of a general asymmetry, though, such as low achievers gaining more from being with high achievers than high achievers lose from being with low achievers. After taking steps to eliminate changes in achievement that could be caused by general time trends or unusual events -- such as the appearance of an especially good teacher in one school -- Hoxby concludes that, on average, a student's own test score rises by 0.10 to 0.55 points when he or she is surrounded by peers who score one point higher.

-- Linda Gorman
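To make the identification strategy concrete, here is a minimal simulation sketch of a within-school (fixed effects) estimator. It is not code from the paper: the parameter values, variable names, and simulated population are all hypothetical, and the sketch uses synthetic data rather than the Texas Schools Microdata Sample.

```python
# Sketch of the within-school identification idea: cohort-to-cohort
# variation in gender mix inside a school is as-good-as-random, so a
# school fixed-effects regression of mean scores on cohort female share
# isolates the peer effect. All magnitudes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_schools, n_cohorts = 500, 10
true_peer_effect = 2.0  # hypothetical effect of female share on scores

school_quality = rng.normal(0, 5, n_schools)   # persistent school/parent factors
female_share = rng.uniform(0.3, 0.7, (n_schools, n_cohorts))  # random cohort mix
noise = rng.normal(0, 1, (n_schools, n_cohorts))
scores = school_quality[:, None] + true_peer_effect * female_share + noise

# Demeaning by school removes school_quality, which in real data would be
# correlated with peer composition through parental sorting; only the
# random within-school variation in cohort composition remains.
y = scores - scores.mean(axis=1, keepdims=True)
x = female_share - female_share.mean(axis=1, keepdims=True)
beta = (x * y).sum() / (x ** 2).sum()
print(f"within-school peer-effect estimate: {beta:.2f}")  # close to 2.0
```

In this synthetic setup the pooled and within-school estimates happen to coincide because cohort composition is drawn independently of school quality; the fixed-effects step matters in real data, where sorting ties composition to family and school characteristics.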
T and O map

A T and O map, or O-T or T-O map (orbis terrarum, orb or circle of the lands, with the letter T inside an O), is a type of medieval world map, sometimes also called a Beatine map or a Beatus map because one of the earliest known representations of this sort is attributed to Beatus of Liébana, an 8th-century Spanish monk. The map appeared in the prologue to his twelve books of commentaries on the Apocalypse.

History and description

Latin: Orbis a rotunditate circuli dictus, quia sicut rota est [...] Undique enim Oceanus circumfluens eius in circulo ambit fines. Divisus est autem trifarie: e quibus una pars Asia, altera Europa, tertia Africa nuncupatur.

English: The [inhabited] mass of solid land is called round after the roundness of a circle, because it is like a wheel [...] Because of this, the Ocean flowing around it is contained in a circular limit, and it is divided in three parts, one part being called Asia, the second Europe, and the third Africa.

Although Isidore taught in the Etymologiae that the Earth was "round", his meaning was ambiguous, and some writers think he referred to a disc-shaped Earth. However, other writings by Isidore make it clear that he considered the Earth to be globular. Indeed, the theory of a spherical earth had always been the prevailing assumption among the learned since at least Aristotle, who had divided the spherical earth into zones of climate, with a frigid clime at the poles, a deadly torrid clime near the equator, and a mild and habitable temperate clime between the two.

The T and O map represents only the top half of the spherical Earth. It was presumably tacitly considered a convenient projection of the inhabited parts, the northern temperate half of the globe. Since the southern temperate clime was considered uninhabited or unattainable, there was no need to depict it on a world map. It was then believed that no one could cross the torrid equatorial clime and reach the unknown lands on the other half of the globe. These imagined lands were called antipodes.

The T is the Mediterranean, the Nile, and the Don (formerly called the Tanais), dividing the three continents, Asia, Europe and Africa, and the O is the encircling ocean. Jerusalem was generally represented in the center of the map. Asia was typically the size of the other two continents combined. Because the sun rose in the east, Paradise (the Garden of Eden) was generally depicted as being in Asia, and Asia was situated at the top portion of the map.

This qualitative and conceptual type of medieval cartography could yield extremely detailed maps in addition to simple representations. The earliest maps had only a few cities and the most important bodies of water noted. The four sacred rivers of the Holy Land were always present. More useful tools for the traveler were the itinerary, which listed in order the names of towns between two points, and the periplus, which did the same for harbors and landmarks along a seacoast. Later maps of this same conceptual format featured many rivers and cities of Eastern as well as Western Europe, and other features encountered during the Crusades. Decorative illustrations were also added in addition to the new geographic features. The most important cities would be represented by distinct fortifications and towers in addition to their names, and the empty spaces would be filled with mythical creatures.

[Image: The world map from the Saint-Sever Beatus, dating to ca. AD 1050.]

- Isidore of Seville (c. 630). Etymologiae, Book 14.
- Stevens, Wesley M.
(1980). "The Figure of the Earth in Isidore's ‘De natura rerum’". Isis 71 (2): 268–277. doi:10.1086/352464. JSTOR 230175. - Michael Livingston, Modern Medieval Map Myths: The Flat World, Ancient Sea-Kings, and Dragons, 2002. - Hiatt, Alfred (2002). "Blank Spaces on the Earth". The Yale Journal of Criticism 15 (2): 223–250. doi:10.1353/yale.2002.0019. - Crosby, Alfred W. (1996). The Measure of Reality: Quantification in Western Europe, 1250-1600. Cambridge: Cambridge University Press. ISBN 0-521-55427-6. - Lester, Toby (2009). The fourth part of the world: the race to the ends of the Earth, and the epic story of the map that gave America its name. New York, NY: Free Press. ISBN 9781416535317. - Carlo Zaccagnini, ‘Maps of the World’, in Giovanni B. Lanfranchi et al., Leggo! Studies Presented to Frederick Mario Fales on the occasion of his 65th birthday, Wiesbaden, Harrassowitz Verlag, 2012, pp. 865–874. ISBN 9783447066594 - Mode, PJ. "The History and Academic Literature of Persuasive Cartography". Persuasive Cartography, The PJ Mode Collection. Cornell University Library. Retrieved 22 October 2015. - Brigitte Englisch, Ordo orbis terrae. Die Weltsicht in den Mappae mundi des frühen und hohen Mittelalters. Berlin 2002, ISBN 3-05-003635-4
Gasification of carbonaceous fuels - Gasification is a process of converting carbonaceous fuel into a gaseous product with a useable heating value. Carbonaceous fuels include solids, liquids and gases such as coals, biomass, residual oils and natural gas. Gasification is not a single-step process, but involves multiple sub-processes and reactions. The generated synthesis gas or 'syngas' has a wide range of applications, from direct household use to power generation. Of all gasification processes, coal gasification has attracted the most interest over the last three decades. Coal can be gasified by reacting it with a mixture of air and steam at various temperature and pressure conditions. Biomass gasification is also gaining attention in recent times as it represents an attractive sustainable energy alternative.

Gasification Process Reactions - Gasification is not a single-step process and involves the following sub-processes/stages and corresponding reactions: drying of fuel, devolatilization, oxidation of char, and reduction in the absence of oxygen. Syngas is produced at the end of the reduction sub-process in the gasifier. All these sub-steps generally take place in different zones of the gasifier, depending on the type of gasifier used. (Representative reactions for the oxidation and reduction stages are sketched at the end of this article.)

Gasification Process Types - Gasification processes can be categorized into three groups: entrained flow, fluidised bed and moving bed. These gasification processes differ from each other in fuel particle size, residence time in the gasifier and operating temperatures. Due to these differences, the composition of the product gases and the percent conversion of coal also differ for each process. When selecting a specific gasification process for syngas generation, different factors such as fuel type (coal, biomass, etc.), plant size, reactivity of the fuel and composition of the air-oxygen oxidant mixture have to be taken into consideration.

Selection of a gasification process - In selecting a specific gasification process, the following factors are evaluated: characteristics of the coal or biomass used (such as particle size and ash content), reactivity of the fuel, type and composition of the oxidant (air/oxygen), and plant size. General guidelines for gasification process selection are discussed briefly in this article.

Gasification Fuels - Characteristics of the various fuels used for gasification are discussed, including different types of coal, liquid fuels and a variety of biomass feedstocks such as wood biomass, agricultural waste, municipal organic waste, etc. The suitability of these fuels for syngas generation by gasification depends on characteristics such as ash content, heating value and voidage.

Gasification Products - Synthesis gas or syngas is the primary product of gasification. The main constituents of syngas are carbon monoxide and hydrogen. If the syngas contains some non-combustible gases such as carbon dioxide and nitrogen, the mixture is commonly known as producer gas. If the generated synthesis gas contains only carbon monoxide and hydrogen, the mixture is known as water gas. Water gas is commonly used for producing pure hydrogen for fuel cell applications, hence the purity of the carbon monoxide and hydrogen mixture is important for water gas. Applications of syngas, producer gas and water gas are also discussed here.

Coal analysis for the coal gasification process - Since coal is a widely used feed for gasification, knowledge of the different ways to analyse coal, and interpretation of that analysis, becomes important.
The main purpose of coal analysis is to determine the rank of the coal along with its intrinsic characteristics. For the different types of coal, the following properties are important in analysis: moisture, volatile matter content, ash content and fixed carbon content. Different analysis techniques such as proximate coal analysis, ultimate coal analysis and ash analysis are briefly discussed.

Types of gasifier equipment - Gasifier equipment is generally classified into upward draft, downward draft and cross draft gasifiers, based on the direction of air/oxygen flow in the equipment. Note that these gasifier equipment types are distinct from the types of gasification processes. The choice of gasifier type is mostly determined by the fuel, its final available form, its size, moisture content and ash content. These draft gasifier types are mostly used in entrained flow and moving bed gasification processes, while the fluidised bed gasification process uses its own class of equipment.
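For reference, the oxidation and reduction stages named above are commonly summarized by a handful of reactions. The block below lists standard textbook gasification stoichiometry; it is supplementary material, not equations taken from this article.

```latex
% Representative gasification reactions (standard textbook stoichiometry)
\begin{align*}
\text{Combustion:}        && \mathrm{C} + \mathrm{O_2}             &\rightarrow \mathrm{CO_2} \\
\text{Partial oxidation:} && \mathrm{C} + \tfrac{1}{2}\mathrm{O_2} &\rightarrow \mathrm{CO} \\
\text{Water-gas:}         && \mathrm{C} + \mathrm{H_2O}            &\rightarrow \mathrm{CO} + \mathrm{H_2} \\
\text{Boudouard:}         && \mathrm{C} + \mathrm{CO_2}            &\rightarrow 2\,\mathrm{CO} \\
\text{Water-gas shift:}   && \mathrm{CO} + \mathrm{H_2O}           &\rightleftharpoons \mathrm{CO_2} + \mathrm{H_2} \\
\text{Methanation:}       && \mathrm{C} + 2\,\mathrm{H_2}          &\rightarrow \mathrm{CH_4}
\end{align*}
```

The endothermic water-gas and Boudouard reactions dominate the reduction zone and produce the CO and H2 that make up syngas, while the exothermic oxidation reactions supply the heat that drives them.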
Seed dormancy is a condition of plant seeds that prevents germination under optimal environmental conditions. Living, non-dormant seeds germinate when soil temperatures and moisture conditions are suited for cellular processes and division; dormant seeds do not. One important function of most seeds is delayed germination, which allows time for dispersal and prevents germination of all the seeds at the same time. The staggering of germination safeguards some seeds and seedlings from suffering damage or death from short periods of bad weather or from transient herbivores; it also allows some seeds to germinate when competition from other plants for light and water might be less intense. Another form of delayed seed germination is seed quiescence, which is different from true seed dormancy and occurs when a seed fails to germinate because the external environmental conditions are too dry, too warm or too cold for germination. Many species of plants have seeds that delay germination for many months or years, and some seeds can remain in the soil seed bank for more than 50 years before germination. Some seeds have a very long viability period, and the oldest documented germinating seed was nearly 2000 years old based on radiocarbon dating.

True dormancy or innate dormancy is caused by conditions within the seed that prevent germination under normally ideal conditions. Often seed dormancy is divided into two major categories based on what part of the seed produces dormancy: exogenous and endogenous. There are three types of dormancy based on their mode of action: physical, physiological and morphological. There have been a number of classification schemes developed to group different dormant seeds, but none has gained universal usage. Dormancy occurs for a wide range of reasons that often overlap, producing conditions in which definitive categorization is not clear. Compounding this problem is that the same seed that is dormant for one reason at a given point may be dormant for another reason at a later point. Some seeds fluctuate between periods of dormancy and non-dormancy, and although a dormant seed appears to be static or inert, in reality it is still receiving and responding to environmental cues.

Exogenous dormancy is caused by conditions outside the embryo and is often broken down into three subgroups: physical, mechanical and chemical.

Physical dormancy occurs when seeds are impermeable to water or the exchange of gases. Legumes are typical examples of physically dormant seeds; they have low moisture content and are prevented from imbibing water by the seed coat. Chipping or cracking of the seed coat or any other coverings allows water intake. Impermeability is often caused by an outer cell layer composed of macrosclereid cells, or an outer layer composed of a mucilaginous cell layer. A third cause of seed coat impermeability is a hardened endocarp. Seed coats that are impermeable to water and gases form during the last stages of seed development.

Mechanical dormancy occurs when seed coats or other coverings are too hard to allow the embryo to expand during germination. In the past this mechanism of dormancy was ascribed to a number of species that have since been found to have endogenous factors for their dormancy instead, including physiological dormancy caused by low embryo growth potential.

Chemical dormancy involves growth regulators, etc., that are present in the coverings around the embryo.
They may be leached out of the tissues by washing or soaking the seed, or deactivated by other means. Other chemicals that prevent germination are washed out of the seeds by rainwater or snow melt.

Endogenous dormancy is caused by conditions within the embryo itself, and it is also often broken down into three subgroups: physiological dormancy, morphological dormancy and combined dormancy; each of these groups may also have subgroups.

Physiological dormancy prevents embryo growth and seed germination until chemical changes occur. These chemicals include inhibitors that often retard embryo growth to the point where it is not strong enough to break through the seed coat or other tissues. Physiological dormancy is indicated when an increase in germination rate occurs after an application of gibberellic acid (GA3) or after dry after-ripening or dry storage. It is also indicated when dormant seed embryos are excised and produce healthy seedlings; when up to 3 months of cold (0–10°C) or warm (≥15°C) stratification increases germination; or when dry after-ripening shortens the cold stratification period required. In some seeds physiological dormancy is indicated when scarification increases germination. Physiological dormancy is broken when inhibiting chemicals are broken down or are no longer produced by the seed, often by a period of cool moist conditions, normally below +4°C (39°F), or, in the case of many species in Ranunculaceae and a few others, −5°C (23°F). Abscisic acid is usually the growth inhibitor in seeds, and its production can be affected by light. Some plants, like Peony species, have multiple types of physiological dormancy: one affects radicle (root) growth while the other affects plumule (shoot) growth. Seeds with physiological dormancy most often do not germinate even after the seed coat or other structures that interfere with embryo growth are removed. Conditions that affect physiological dormancy of seeds include:
- Drying; some plants, including a number of grasses and those from seasonally arid regions, need a period of drying before they will germinate. The seeds are released but need to have a lower moisture content before germination can begin. If the seeds remain moist after dispersal, germination can be delayed for many months or even years. Many herbaceous plants from temperate climate zones have physiological dormancy that disappears with drying of the seeds. Other species will germinate after dispersal only under very narrow temperature ranges, but as the seeds dry they are able to germinate over a wider temperature range.
- Photodormancy or light sensitivity affects germination of some seeds. These photoblastic seeds need a period of darkness or light to germinate. In species with thin seed coats, light may be able to penetrate into the dormant embryo. The presence or absence of light may trigger the germination process, inhibiting germination in some seeds buried too deeply or in others not buried in the soil.
- Thermodormancy is seed sensitivity to heat or cold. Some seeds, including cocklebur and amaranth, germinate only at high temperatures (30°C or 86°F); many plants whose seeds germinate in early to mid summer have thermodormancy and germinate only when the soil temperature is warm. Other seeds need cool soils to germinate, while others, like celery, are inhibited when soil temperatures are too warm.
A cotyledon (/ˌkɒtɪˈliːdən/; "seed leaf", from Greek: κοτυληδών kotylēdōn, gen.: κοτυληδόνος kotylēdonos, from κοτύλη kotýlē "cup, bowl") is a significant part of the embryo within the seed of a plant. Upon germination, the cotyledon may become the embryonic first leaves of a seedling. The number of cotyledons present is one characteristic used by botanists to classify the flowering plants (angiosperms). Species with one cotyledon are called monocotyledonous (or "monocots") and placed in the class Liliopsida. Plants with two embryonic leaves are termed dicotyledonous ("dicots") and placed in the class Magnoliopsida.

In the case of dicot seedlings whose cotyledons are photosynthetic, the cotyledons are functionally similar to leaves. However, true leaves and cotyledons are developmentally distinct. Cotyledons are formed during embryogenesis, along with the root and shoot meristems, and are therefore present in the seed prior to germination. True leaves, however, are formed post-embryonically (i.e. after germination) from the shoot apical meristem, which is responsible for generating subsequent aerial portions of the plant.

The cotyledon of grasses and many other monocotyledons is a highly modified leaf composed of a scutellum and a coleoptile. The scutellum is a tissue within the seed that is specialized to absorb stored food from the adjacent endosperm. The coleoptile is a protective cap that covers the plumule (precursor to the stem and leaves of the plant).

Gymnosperm seedlings also have cotyledons, and these are often variable in number (multicotyledonous), with from 2 to 24 cotyledons forming a whorl at the top of the hypocotyl (the embryonic stem) surrounding the plumule. Within each species there is often still some variation in cotyledon numbers, e.g. Monterey Pine (Pinus radiata) seedlings have 5–9 and Jeffrey Pine (Pinus jeffreyi) 7–13 (Mirov 1967), but other species are more fixed, with e.g. Mediterranean Cypress always having just two cotyledons. The highest number reported is for Big-cone Pinyon (Pinus maximartinezii), with 24 (Farjon & Styles 1997).

The cotyledons may be ephemeral, lasting only days after emergence, or persistent, enduring a year or more on the plant. The cotyledons contain (or, in the case of gymnosperms and monocotyledons, have access to) the stored food reserves of the seed. As these reserves are used up, the cotyledons may turn green and begin photosynthesis, or may wither as the first true leaves take over food production for the seedling.

Epigeal versus hypogeal development

Cotyledons may be either epigeal, expanding on the germination of the seed, throwing off the seed shell, rising above the ground, and perhaps becoming photosynthetic; or hypogeal, not expanding, remaining below ground and not becoming photosynthetic. The latter is typically the case where the cotyledons act as a storage organ, as in many nuts and acorns.

Hypogeal plants have (on average) significantly larger seeds than epigeal ones. They are also capable of surviving if the seedling is clipped off, as meristem buds remain underground (with epigeal plants, the meristem is clipped off if the seedling is grazed). The tradeoff is whether the plant should produce a large number of small seeds, or a smaller number of seeds which are more likely to survive. Related plants show a mixture of hypogeal and epigeal development, even within the same plant family.
Groups which contain both hypogeal and epigeal species include, for example, the Araucariaceae family of Southern Hemisphere conifers, the Fabaceae (pea family), and the genus Lilium (see Lily seed germination types). The term cotyledon was coined by Marcello Malpighi. John Ray was the first botanist to recognise that some plants have two and others only one, and eventually the first to recognise the immense importance of this fact to systematics.

The embryo, contained within the seed, is the next generation of plant. Thus successful seed germination is vital for a species to perpetuate itself. By definition, germination commences when the dry seed, shed from its parent plant, takes up water (imbibition), and is completed when the embryonic root visibly emerges through the outer structures of the seed (usually the seed or fruit coat). Thereafter, there is seedling establishment, utilizing reserves stored within the seed, followed by vegetative and reproductive growth of the plant, supported by photosynthesis.

The seed is metabolically inactive (quiescent) in the mature, dry state and can withstand extremes of drought and cold. For example, dry seeds can be stored over liquid nitrogen at −150 degrees Celsius (−238 degrees Fahrenheit) for many years without harm. Upon hydration of a seed, metabolism commences as water enters its cells, using enzymes and structural components present when the seed was dry. Respiration to provide energy has been observed within minutes of water uptake. Mitochondria that were stored in the dry seed are involved, although initially they are somewhat inefficient because of damage sustained during drying and rehydration. During germination they are repaired and new organelles are also synthesized.

Protein synthesis also commences rapidly in the imbibing seed. Early during germination, stored messenger ribonucleic acids (mRNAs) are used as templates for protein synthesis, but later in germination these are replaced with newly transcribed messages, some of which code for a different set of proteins. Although the pattern of seed protein synthesis changes during germination, no proteins have been identified as being essential for this event to be completed.

Elongation of cells of the radicle (embryonic root) is responsible for its emergence from the seed. This is a turgor-driven process and is achieved through increased elasticity of the radicle cell walls, by a process which is not known. Cell division and deoxyribonucleic acid (DNA) replication occur after germination, as the radicle grows, and reserves of protein, carbohydrate, and oil, stored in the dry seed, are used to support seedling growth.

Mature seeds of some species are incapable of germinating, even under ideal conditions of temperature and hydration, unless they receive certain environmental stimuli; such seeds are dormant. Breaking of this dormancy may be achieved in several ways, depending upon the species. Frequently, dormancy is lost from seeds as they are stored in the dry state for several weeks to years, a phenomenon called dry after-ripening. But many seeds remain dormant in a fully imbibed state; they are as metabolically active as nondormant seeds, but yet fail to complete germination.
Dormancy of these seeds may be broken by one or more of the following: (1) light, sunlight being the most effective; (2) low temperatures (1 to 5 degrees Celsius [33.8 to 41 degrees Fahrenheit]) for several weeks; (3) day/night fluctuating temperatures of 1 to 10 degrees Celsius (33.8 to 50 degrees Fahrenheit); (4) chemicals, such as nitrate in the soil, or applied hormones (gibberellins) in the laboratory; and (5) fire.

Dormancy mechanisms operate to control the germination of seeds in their natural environment and to optimize the conditions under which the resultant seedling can become established. Dormant seeds that require light will not germinate unless they are close to the soil surface; hence germinating seeds will not expend their stored reserves before they can reach the surface and become photosynthetically independent seedlings. This is particularly important for small, wind-dispersed weed seeds. The light-perception mechanism in light-requiring seeds involves a receptor protein, phytochrome, which is activated by red wavelengths of light and inactivated by far-red (near-infrared). Far-red light from sunlight penetrates farther into soil than does red, and light penetrating through a leaf canopy is also richer in far-red than red, since the latter is absorbed by photosynthetic pigments in the leaf. Hence, germination of light-sensitive seeds is advantageously inhibited under a leaf canopy, which helps explain why germination and subsequent plant growth are so profuse in forest clearings.

Seeds that need a period of low temperature cannot germinate immediately after dispersal in the summer or early autumn but will do so after being subjected to the cold of winter, conditions that may cause the parent plant to die and thus remove competition for space in the spring. The requirement for alternating temperatures will prevent germination of seeds beneath dense vegetation, because the latter dampens the day/night temperature fluctuations; these seeds will germinate only when there is little vegetation cover, again reducing competition with established plants.

Seed dormancy is also important in relation to agricultural and horticultural crops. Its presence causes delayed and sporadic germination, which is undesirable. On the other hand, the absence of dormancy from cereals, for example, can result in germination of the seed on the ear, causing spoilage of the crop. Thus mild dormancy that prevents this, and that is lost during dry storage of the seed (dry after-ripening), is desirable.

see also Fire Ecology; Reproduction in Plants; Seeds

J. Derek Bewley

Bewley, J. Derek. "Seed Germination and Dormancy." Plant Cell 9 (1997): 1055–1066.

Bewley, J. Derek, and Michael Black. Seeds: Physiology of Development and Germination, 2nd ed. New York: Plenum Press, 1994.

From Yahoo Answers

Answers: Grass is the classic example of a monocotyledon plant. The first shoot sent up on germination is a single leaf. A pea is a good example of a dicotyledon. The cotyledon is the first leaf that appears on germination. 'Mono' means one, 'di' means two. The traditional differences between monocots and dicots are: Flowers: In monocots, flowers are trimerous (the number of flower parts in a whorl is in threes), while in dicots the flowers are tetramerous or pentamerous (flower parts are in fours or fives). Pollen: In monocots, pollen has one furrow or pore, while in dicots it has three. Seeds: In monocots, the embryo has one cotyledon, while the embryo of the dicot has two.
Stems: In monocots, vascular bundles in the stem are scattered; in dicots they are arranged in a ring. Roots: In monocots, roots are adventitious, while in dicots they develop from the radicle. Leaves: In monocots, the major leaf veins are parallel (as seen in a slice of onion, which shows parallel veins in cross section), while in dicots they are reticulate. Not all of these, though, are necessarily definitive. The leaves of most pine trees (which are multicotyledonous) have parallel veins, for example. There is a good picture of a monocot and a dicot seedling side by side here: http://en.wikipedia.org/wiki/Cotyledon

Answers: There's a very easy way to tell whether a plant is dicot or monocot. Look at the veins in the leaf. If they branch out like a tree (dendritic pattern), it's a dicot. If the veins are parallel, like the veins in a blade of grass or a corn leaf, it's a monocot. The first link is a pic of an oak leaf, which is a dicot. The second link is a pic of the parallel veins of a corn leaf. Also, dicot flowers tend to have flower parts in multiples of four or five (petals, stamens, sepals). Monocot flowers tend to have flower parts in multiples of three.
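Since the answers above amount to a small decision procedure, here is a minimal Python sketch of it (my own illustration; the trait labels are made up for this example, and, as the answers note, these are heuristics rather than definitive tests):

```python
# A tiny decision procedure encoding the rules of thumb above.
# Trait labels ("parallel", "reticulate") are chosen for this sketch.

def classify_leaf_and_flower(vein_pattern: str, flower_parts: int) -> str:
    """Guess monocot vs dicot from venation and flower-part counts."""
    votes = {"monocot": 0, "dicot": 0}

    # Venation: parallel veins suggest a monocot, branching veins a dicot.
    if vein_pattern == "parallel":
        votes["monocot"] += 1
    elif vein_pattern == "reticulate":
        votes["dicot"] += 1

    # Flower parts: threes suggest a monocot; fours or fives a dicot.
    if flower_parts % 3 == 0:
        votes["monocot"] += 1
    elif flower_parts % 4 == 0 or flower_parts % 5 == 0:
        votes["dicot"] += 1

    if votes["monocot"] == votes["dicot"]:
        return "unclear: check more traits (pollen, roots, stem bundles)"
    return max(votes, key=votes.get)

print(classify_leaf_and_flower("parallel", 6))    # grass-like -> monocot
print(classify_leaf_and_flower("reticulate", 5))  # pea-like   -> dicot
```

Pine needles are the caveat mentioned above: they have parallel veins, yet pines are multicotyledonous gymnosperms rather than monocots, so this heuristic would guess wrong for them.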
Hemophilia B, also called factor IX (FIX) deficiency or Christmas disease, is a genetic disorder caused by missing or defective factor IX, a clotting protein. Although it is passed down from parents to children, about 1/3 of cases are caused by a spontaneous mutation, a change in a gene. According to the US Centers for Disease Control and Prevention, hemophilia occurs in approximately 1 in 5,000 live births. There are about 20,000 people with hemophilia in the US. All races and ethnic groups are affected. Hemophilia B is four times less common than hemophilia A. The X and Y chromosomes are called sex chromosomes. The gene for hemophilia is carried on the X chromosome. Hemophilia is inherited in an X-linked recessive manner. Females inherit two X chromosomes, one from their mother and one from their father (XX). Males inherit an X chromosome from their mother and a Y chromosome from their father (XY). That means if a son inherits an X chromosome carrying hemophilia from his mother, he will have hemophilia. It also means that fathers cannot pass hemophilia on to their sons. But because daughters have two X chromosomes, even if they inherit the hemophilia gene from their mother, most likely they will inherit a healthy X chromosome from their father and not have hemophilia. A daughter who inherits an X chromosome that contains the gene for hemophilia is called a carrier. She can pass the gene on to her children. Hemophilia can occur in daughters, but is rare. For a female carrier, there are four possible outcomes for each pregnancy: - A girl who is not a carrier - A girl who is a carrier - A boy without hemophilia - A boy with hemophilia People with hemophilia B bleed longer than other people. Bleeds can occur internally, into joints and muscles, or externally, from minor cuts, dental procedures or trauma. How frequently a person bleeds and how serious the bleeds are depends on how much FIX is in the plasma, the straw-colored fluid portion of blood. Normal plasma levels of FIX range from 50% to 150%. Levels below 50%, or half of what is needed to form a clot, determine a person’s symptoms. - Mild hemophilia B. 6% up to 49% of FIX in the blood. People with mild hemophilia B typically experience bleeding only after serious injury, trauma or surgery. In many cases, mild hemophilia is not diagnosed until an injury, surgery or tooth extraction results in prolonged bleeding. The first episode may not occur until adulthood. Women with mild hemophilia often experience menorrhagia, heavy menstrual periods, and can hemorrhage after childbirth. - Moderate hemophilia B. 1% up to 5% of FIX in the blood. People with moderate hemophilia B tend to have bleeding episodes after injuries. Bleeds that occur without obvious cause are called spontaneous bleeding episodes. - Severe hemophilia B. <1% of FIX in the blood. People with severe hemophilia B experience bleeding following an injury and may have frequent spontaneous bleeding episodes, often into their joints and muscles. The best place for patients with hemophilia to be diagnosed and treated is at one of the federally-funded hemophilia treatment centers (HTCs) that are spread throughout the country. HTCs provide comprehensive care from skilled hematologists and other professional staff, including nurses, physical therapists, social workers and sometimes dentists, dieticians and other healthcare providers. A medical health history is important to help determine if other relatives have been diagnosed with a bleeding disorder or have experienced symptoms. 
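As a worked illustration of the X-linked inheritance pattern described above, this short Python sketch (my own example, not from the National Hemophilia Foundation) enumerates the children of a carrier mother and an unaffected father, reproducing the four equally likely outcomes listed:

```python
from itertools import product

# X-linked recessive inheritance, as described above.
# "Xh" = X chromosome carrying the hemophilia gene; "X" = healthy X.
mother = ["Xh", "X"]   # a carrier passes one of her two X chromosomes
father = ["X", "Y"]    # an unaffected father passes either his X or his Y

def describe(from_mom: str, from_dad: str) -> str:
    if from_dad == "Y":  # a boy: his only X comes from his mother
        return "boy with hemophilia" if from_mom == "Xh" else "boy without hemophilia"
    # a girl: one X from each parent; dad's healthy X masks the gene
    return "girl who is a carrier" if from_mom == "Xh" else "girl who is not a carrier"

# Each of the four combinations is equally likely (25% per pregnancy).
for m, d in product(mother, father):
    print(f"{m} + {d} -> {describe(m, d)}")
```

Running this prints exactly the four outcomes listed above, and also makes visible why fathers cannot pass hemophilia to their sons: a son never receives his father's X chromosome.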
Tests that evaluate clotting time and a patient's ability to form a clot may be ordered. A clotting factor test, called an assay, will determine the type of hemophilia and its severity.

The main medication to treat hemophilia B is concentrated FIX product, called clotting factor or simply factor. Recombinant factor products, which are developed in a lab through the use of DNA technology, preclude the use of human-derived pools of donor-sourced plasma. And while plasma-derived FIX products are still available, approximately 75% of the hemophilia community takes a recombinant FIX product. These factor therapies are infused intravenously through a vein in the arm or a port in the chest.

The Medical and Scientific Advisory Council (MASAC) of the National Hemophilia Foundation encourages the use of recombinant clotting factor products because they are safer. Your doctor or your HTC will help you decide which is right for you. Patients with severe hemophilia may be on a routine treatment regimen, called prophylaxis, to maintain enough clotting factor in their bloodstream to prevent bleeds. MASAC recommends prophylaxis as optimal therapy for children with severe hemophilia B.

Aminocaproic acid is an antifibrinolytic, preventing the breakdown of blood clots. It is often recommended before dental procedures, and to treat nose and mouth bleeds. It is taken orally, as a tablet or liquid. MASAC recommends that a dose of clotting factor be taken first to form a clot, then aminocaproic acid, to preserve the clot and keep it from being broken down prematurely.

Living with Hemophilia B

There's a lot to know about living with a bleeding disorder like hemophilia B. Visit NHF's Steps for Living to explore resources, tools, tips and videos on living with hemophilia B through all life stages. Organized by life stages, Steps for Living provides information on recognizing the signs of bleeds in children, help on navigating school issues, how to exercise safely, helping teens manage their bleeding disorder, information on workplace accommodations, and much more. There are downloadable checklists, toolkits, videos, and more.

Copyright National Hemophilia Foundation. Last Updated January 2022.
Figure: Cambial cells of a plant tissue

Plants are composed of three major organ groups: roots, stems, and leaves. As we know from other areas of biology, these organs are comprised of tissues working together for a common goal (function). In turn, tissues are made of a number of cells, which are made of elements and atoms at the most fundamental level. In this section, we will look at the various types of plant tissue and their place and purpose within a plant. It is important to realize that there may be slight variations and modifications to the basic tissue types in special plants.

Plant tissues are characterized and classified according to their structure and function. The organs that they form will be organized into patterns within a plant, which will aid in further classifying the plant. A good example of this is the three basic tissue patterns found in roots and stems, which serve to delineate between woody dicot, herbaceous dicot and monocot plants. We will look at these classifications later on in the Fruits, Flowers and Seeds tutorial.

Tissues where cells are constantly dividing are called meristems or meristematic tissues. These regions produce new cells. These new cells are generally small, six-sided boxlike structures with a number of tiny vacuoles and a comparatively large nucleus. Sometimes there are no vacuoles at all. As the cells mature, the vacuoles will grow to many different shapes and sizes, depending on the needs of the cell. It is possible that the vacuole may fill 95% or more of the cell's total volume. There are three types of meristems: (1) apical meristems, (2) lateral meristems, and (3) intercalary meristems.

Apical meristems are located at or near the tips of roots and shoots. As new cells form in the meristems, the roots and shoots increase in length. This vertical growth is also known as primary growth. A good example would be the growth of a tree in height. Each apical meristem will produce embryo leaves and buds as well as three types of primary meristems: protoderm, ground meristem, and procambium. These primary meristems will produce the cells that form the primary tissues.

Lateral meristems account for secondary growth in plants. Secondary growth is generally horizontal growth. A good example would be the growth of a tree trunk in girth. There are two types of lateral meristems to be aware of in the study of plants. The vascular cambium, the first type of lateral meristem, is sometimes just called the cambium. The cambium is a thin, branching cylinder that, except for the tips where the apical meristems are located, runs the length of the roots and stems of most perennial plants and many herbaceous annuals. The cambium is responsible for the production of cells and tissues that increase the thickness, or girth, of the plant. The cork cambium, the second type of lateral meristem, is much like the vascular cambium in that it is also a thin cylinder that runs the length of roots and stems. The difference is that it is only found in woody plants, as it produces the outer bark. Both the vascular cambium and the cork cambium, if present, will begin to produce cells and tissues only after the primary tissues produced by the apical meristems have begun to mature.

Intercalary meristems are found in grasses and related plants that do not have a vascular cambium or a cork cambium, as they do not increase in girth.
These plants do have apical meristems, and in areas of leaf attachment, called nodes, they have the third type of meristematic tissue. This meristem also actively produces new cells and is responsible for increases in length. The intercalary meristem is responsible for the regrowth of cut grass.

There are other tissues in plants that do not actively produce new cells. These tissues are called nonmeristematic tissues. Nonmeristematic tissues are made of cells that are produced by the meristems and are formed to various shapes and sizes depending on their intended function in the plant. Sometimes the tissues are composed of the same type of cells throughout, or sometimes they are mixed. There are simple tissues and complex tissues to consider, but we will start with the simple tissues for the sake of discussion. There are three basic types, named for the type of cell that makes up their composition: (1) parenchyma tissue, (2) collenchyma tissue, and (3) sclerenchyma tissue.

Parenchyma cells form parenchyma tissue. Parenchyma cells are the most abundant of cell types and are found in almost all major parts of higher plants. These cells are basically sphere-shaped when they are first made. However, these cells have thin walls, which flatten at the points of contact when many cells are packed together. Generally, they have many sides, with the majority having 14 sides. These cells have large vacuoles and may contain various secretions including starch, oils, tannins, and crystals. Some parenchyma cells have many chloroplasts and form the tissues found in leaves. This type of tissue is called chlorenchyma. The chief function of this type of tissue is photosynthesis, while parenchyma tissues without chloroplasts are generally used for food or water storage. Additionally, some groups of cells are loosely packed together with connected air spaces, such as in water lilies; this tissue is called aerenchyma tissue. These types of cells can also develop irregular extensions of the inner wall, which increase the overall surface area of the plasma membrane and facilitate the transfer of dissolved substances between adjacent cells. Parenchyma cells can divide if they are mature, and this is vital in repairing damage to plant tissues. Parenchyma cells and tissues comprise most of the edible portions of fruit.

Collenchyma cells form collenchyma tissue. These cells have a living protoplasm, like parenchyma cells, and may also stay alive for a long period of time. Their main distinguishing difference from parenchyma cells is the increased thickness of their walls. In cross-section, the walls look uneven. Collenchyma cells are found just beneath the epidermis; generally, they are elongated and their walls are pliable in addition to being strong. As a plant grows, these cells and the tissues they form provide flexible support for organs such as leaves and flower parts. Good examples of collenchyma plant cells are the 'strings' from celery that get stuck in our teeth.

Sclerenchyma cells form sclerenchyma tissue. These cells have thick, tough secondary walls that are embedded with lignin. At maturity, most sclerenchyma cells are dead and function in structure and support. Sclerenchyma cells can occur in two forms: fibers, which are long, slender cells, and sclereids, which are shorter and often irregularly shaped.

As a result of cellular processes, substances that are left to accumulate within the cell can sometimes damage the protoplasm. Thus it is essential that these materials are either isolated from the protoplasm in which they originate, or moved outside the plant body.
Although most of these substances are waste products, some are vital to normal plant functions. Examples: oils in citrus, pine resin, latex, opium, nectar, perfumes, and plant hormones. Generally, secretory cells are derived from parenchyma cells and may function on their own or as a tissue. They sometimes have great commercial value.

Tissues composed of more than one cell type are generically referred to as complex tissues. Xylem and phloem are the two most important complex tissues in a plant, as their primary functions include the transport of water, ions, and soluble food substances throughout the plant. While some complex tissues are produced by apical meristems, most in woody plants are produced by the vascular cambium and are often referred to as vascular tissue. Other complex tissues include the epidermis and the periderm. The epidermis consists primarily of parenchyma-like cells and forms a protective covering for all plant organs. The epidermis includes specialized cells that allow for the movement of water and gases in and out of the plant, secretory glands, various hairs, cells in which crystals are accumulated and isolated, and other cells that increase absorption in the roots. The periderm is mostly cork cells and therefore forms the outer bark of woody plants. It is considered to be a complex tissue because of the pockets of parenchyma cells scattered throughout.

Xylem is an important plant tissue as it is part of the 'plumbing' of a plant. Think of bundles of pipes running along the main axis of stems and roots. It carries water and dissolved substances throughout and consists of a combination of parenchyma cells, fibers, vessels, tracheids, and ray cells. Vessels are long tubes made up of individual cells called vessel members, which are open at each end. Internally, there may be bars of wall material extending across the open space. These cells are joined end to end to form long tubes. Vessel members and tracheids are dead at maturity. Tracheids have thick secondary cell walls and are tapered at the ends. They do not have end openings as the vessels do. Instead, the ends of tracheids overlap with each other, with pairs of pits present. The pit pairs allow water to pass from cell to cell.

While most conduction in the xylem is up and down, there is some side-to-side, or lateral, conduction via rays. Rays are horizontal rows of long-living parenchyma cells that arise out of the vascular cambium. In trees and other woody plants, rays will radiate out from the center of stems and roots, and in cross-section they will look like the spokes of a wheel.

Phloem is an equally important plant tissue, as it also is part of the 'plumbing' of a plant. Primarily, phloem carries dissolved food substances throughout the plant. This conduction system is composed of sieve-tube members and companion cells, which are without secondary walls. The parent cells of the vascular cambium produce both xylem and phloem. This usually also includes fibers, parenchyma, and ray cells. Sieve tubes are formed from sieve-tube members laid end to end. Unlike those of vessel members in xylem, the end walls do not have large openings; instead, they are full of small pores where cytoplasm extends from cell to cell. These porous connections are called sieve plates. In spite of the fact that their cytoplasm is actively involved in the conduction of food materials, sieve-tube members do not have nuclei at maturity.
It is the companion cells, nestled between sieve-tube members, that function in some manner to bring about the conduction of food. Sieve-tube members that are alive contain a polymer called callose. Callose stays in solution as long as the cell contents are under pressure. As a repair mechanism, if an insect injures a cell and the pressure drops, the callose will precipitate. The callose and a phloem protein will then be moved through the nearest sieve plate, where they will form a plug. This prevents further leakage of sieve tube contents, and the injury is not necessarily fatal to overall plant turgor pressure.

The epidermis is also a complex plant tissue, and an interesting one at that. Officially, the epidermis is the outermost layer of cells on all plant organs (roots, stems, leaves). The epidermis is in direct contact with the environment and therefore is subject to environmental conditions and constraints. Generally, the epidermis is one cell layer thick; however, there are exceptions, such as tropical plants, where the layer may be several cells thick and thus acts as a sponge.

Cutin, a fatty substance secreted by most epidermal cells, forms a waxy protective layer called the cuticle. The thickness of the cuticle is one of the main determiners of how much water is lost by evaporation. Additionally, at no extra charge, the cuticle provides some resistance to bacteria and other disease organisms. Some plants, such as the wax palm, produce enough cuticle to have commercial value: carnauba wax. Other wax products are used as polishes, candles, and even phonograph records.

Epidermal cells are important for increasing the absorptive surface area in root hairs. Root hairs are essentially tubular extensions of the main root body composed entirely of epidermal cells. Leaves are not left out. They have many small pores called stomata that are surrounded by pairs of specialized epidermal cells called guard cells. Guard cells are unique epidermal cells because they are of a different shape and contain chloroplasts. (Guard cells are discussed in greater detail in the Leaves tutorial.) There are other modified epidermal cells that may be glands or hairs that repel insects or reduce water loss.

In woody plants, when the cork cambium begins to produce new tissues to increase the girth of the stem or root, the epidermis is sloughed off and replaced by a periderm. The periderm is made of semi-rectangular and boxlike cork cells. This will be the outermost layer of bark. These cells are dead at maturity. However, before the cells die, the protoplasm secretes a fatty substance called suberin into the cell walls. Suberin makes the cork cells waterproof and aids in protecting the tissues beneath the bark. There are parts of the cork cambium that produce pockets of loosely packed cork cells. These cork cells do not have suberin embedded in their cell walls. These loose areas extend through the surface of the periderm and are called lenticels. Lenticels function in gas exchange between the air and the stem interior. At the bottom of the deep fissures in tree bark are the lenticels.
Thinking from a different perspective is a critical thinking skill that is essential for kids learning computer coding. In fact, it is essential to many jobs. There are many ways to help kids develop the skill. Today we share a coding game we developed that can get you started on training kids in the critical skill of thinking from a different perspective.

Today's game is an extension of the Hot Dog Coding Game. The story behind the game is that the restaurant runs out of hot dogs. Jacob is asked to go to the hot dog store to buy more hot dogs. Since he doesn't know how to get there, he calls friends for directions. The players' task is to be the friend and give a good direction that Jacob can follow to get to the store. There are 4 friends at 4 different locations. Depending on the friend's location, you can get different directions. This is where the game starts.

To play, kids need to give a good driving direction going from the restaurant to the hot dog store. Depending on the location of the friend, you will give different directions. Through the game, kids will:

1. Think from different perspectives

2. Anticipate all possible scenarios of a project

3. Learn Abstraction: a solution that is independent of the factors that differ between scenarios, i.e., a solution that can be used for all different scenarios

Questions to ask kids: What caused the confusion and the different directions? What can you do to avoid the differences?

Step 0. Download the coding game. You can find the game at the bottom of this post. After you download the game, you can cut out the moving blocks for kids to use as code blocks to design their directions. You are free to put the friend at any location on the game board. However, we have marked 4 locations on the game board to maximize the differences in the directions you can give. Note: it is not just the location; the side you are facing also matters. When you are facing North, West is your left; when you are facing East, North is your left.

Step 1. Build direction as someone at Location 1, using the coding blocks. Ask kids to use the coding blocks (Left, Right, Up, Down, and Numbers) to lay out the directions, assuming they are at Location 1, marked on the game board as L1. Remember, L1 is NOT the starting point of the trip. The trip always starts from the restaurant. After kids finish giving directions with the coding blocks, you can ask them to follow the direction, from the restaurant to the hot dog store. Does the direction lead them to the store? Or somewhere else? If the direction leads them to a different place, ask kids to check the direction, identify mistakes, and make necessary changes to the codes. If you have more than one child, after the kids finish the direction, you can ask them to pass it to a different child to execute. Make sure to keep the direction given by L1 for future comparisons.

Step 2. Build direction as someone at Location 2 with the coding blocks. Before moving on to the direction building from the Location 2 perspective, ask kids if they think the coding will be the same as L1. If they answer NO, ask them what differences they anticipate. Now repeat the process in Step 1. After finishing executing the direction codes, ask kids to compare the direction codes from L1 and L2.

Step 3. Build direction as someone at Location 3 with the coding blocks. Repeat the process in Step 2, except this time from Location 3. It is important to ask kids to think about the differences they anticipate before they start building direction codes.
It is also important to compare the 3 sets of direction codes at the end of Step 3.

Step 4. Find the Abstraction Solution. Ask kids if they can find a way to write codes so that no matter where you are, you always give the same code direction. This is the process called Abstraction. It is the process of removing the factors that make the solutions differ between scenarios, so that one solution can work for all different scenarios. When we write computer codes, we want to be able to see all different scenarios and cover all different perspectives. However, we also want to make our codes simple and short. With this game, we can definitely write 4 sets of codes to cover all 4 scenarios, but it is much more efficient if we can find a way to use one set of codes to cover all scenarios. There is more than one way to achieve abstraction. One answer could be using South, North, East, West instead of Left, Right, Up, Down. Another answer could be to ask the friend to give directions as if he or she were at the restaurant, instead of at his or her own location.

Step 5. Design Code Direction Using the Abstraction Method. Once kids pick one abstraction solution, ask them to put together the code direction with this new method. After they finish designing the codes, ask them to execute the code direction from L4. If it doesn't work, kids need to find out which parts are wrong and correct the mistakes; if it works, test it on the other locations, one location at a time. (For grown-ups who are curious how this looks in real code, see the short sketch after the takeaways below.)

To conclude the activity:

1. It is important for a programmer or a coder to be able to anticipate all different scenarios and be able to think from all different perspectives for any project.

2. A good coding project is one that covers all scenarios with the shortest possible code.

3. To achieve #2, programmers also need to be good at Abstraction.

To download the printable coding game: Would you like to have more coding games and activities for kids? Join the DIY Coding Camp at Home. We have games specially designed for each of the essential skills, plus extension activity ideas. All games are printable at home, and you don't need to know coding to help kids gain coding skills. You don't even need a computer.
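For grown-ups who do code, here is a minimal Python sketch of the abstraction idea (my own illustration, not part of the printable game; the grid layout and facing names are assumptions made for this example). It shows why relative directions depend on which way the speaker faces, while compass directions do not:

```python
# Relative directions depend on the speaker's orientation; compass
# directions do not. That orientation-dependence is the factor that
# differs between scenarios, which abstraction removes.

# Compass moves on a grid as (dx, dy).
COMPASS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

# How each facing maps the relative words onto compass directions.
RELATIVE = {
    "N": {"Up": "N", "Down": "S", "Left": "W", "Right": "E"},
    "E": {"Up": "E", "Down": "W", "Left": "N", "Right": "S"},
    "S": {"Up": "S", "Down": "N", "Left": "E", "Right": "W"},
    "W": {"Up": "W", "Down": "E", "Left": "S", "Right": "N"},
}

def follow(start, directions, facing=None):
    """Walk from `start`. Directions are compass words, or relative
    words interpreted according to the speaker's `facing`."""
    x, y = start
    for word in directions:
        compass_word = RELATIVE[facing][word] if facing else word
        dx, dy = COMPASS[compass_word]
        x, y = x + dx, y + dy
    return (x, y)

route = ["Up", "Up", "Right"]             # relative: depends on facing
print(follow((0, 0), route, facing="N"))  # (1, 2)
print(follow((0, 0), route, facing="E"))  # (2, -1): a different place!

abstract_route = ["N", "N", "E"]          # compass: same for everyone
print(follow((0, 0), abstract_route))     # (1, 2)
```

The compass version is the abstraction: it strips out the one factor (which way the friend is facing) that made the four friends' answers differ.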
To gain information on the microhabitat use, home range and movement of a species, it is often necessary to remotely track individuals in the field. Radio telemetry is commonly used to track amphibians, but can only be used on relatively large individuals. Harmonic direction finding can be used to track smaller animals, but its effectiveness has not been fully evaluated. Tag attachment can alter the behaviour of amphibians, suggesting that data obtained using either technique may be unreliable. We investigated the effects of external tag attachment on behaviour in the laboratory by observing 12 frogs for five nights before and five nights after tag attachment, allowing one night to recover from handling. Tag attachment did not affect distance moved or number of times moved, indicating that the effects of tag attachment are unlikely to persist after the first night following attachment. We then compared harmonic direction finding and radio-telemetry using data collected in the field. We fitted rainforest stream frogs of three species with tags of either type, located them diurnally and nocturnally for approximately two weeks, and compared movement parameters between techniques. In the field, we obtained fewer fixes on frogs using harmonic direction finding, but measures of movement and habitat use did not differ significantly between techniques. Because radio telemetry makes it possible to locate animals more consistently, it should be preferred for animals large enough to carry radio tags. If harmonic direction finding is necessary, it can produce reliable data, particularly for relatively sedentary species. Keywords: amphibian; behaviour; ecology; techniques; radio tracking; harmonic radar; harmonic direction finding
Each activity will allow your children to practice putting letters and words in order, so be a super hero with the letters in the alphabet.

ABC order worksheets for kindergarten. Learning to alphabetize is an important life skill. Looking up a word in a dictionary, finding a book at the bookstore, or looking through our phone contacts are all done in alphabetical order. Check out our collection of free ABC order worksheets, which will help with teaching students how to place words in alphabetical order. We have a variety to choose from, including ABC dot-to-dot, ABC maze, missing alphabetical-order letters, identifying which letters aren't in order, and more.

To start, put the words in ABC order (alphabetical order) by looking at the first letter of the given words, for example: campfire, camping, canoe, smore, hammock (recommended for grades 3 and up). One set of 20 worksheets uses the 52 Dolch words for kindergarten: 14 of the pages have 6 words each that need to be cut out and pasted in alphabetical order, and 6 pages have 9 words each. Other worksheets ask students to circle the word on each line that comes first in alphabetical order.

ABC order is a big skill for students to initially grasp, but with the right scaffolding and some fun learning tools, it is a concept that can be introduced at the end of kindergarten, especially to those shining stars who are ready for an extra challenge. These are approximately the skills tested in first grade, although kindergarten standards vary widely. A core skill students in grade 1 need to learn is the order of letters in the alphabet. Intermediate-level worksheets, such as the Summer ABC Order set (write each word list in alphabetical order), require students to look at the 2nd, 3rd or 4th letter in each word; these are recommended for 2nd through 4th grades.

If you are working on teaching your kindergartner their ABCs, you will love all of our kindergarten alphabet resources. With our fun and free kindergarten alphabet activities and ABC order worksheets, your children learn how to properly write each of the letters of the alphabet in uppercase and lowercase, along with matching upper- and lowercase letters, phonics, phonemic awareness, beginning-sounds activities, alphabet crafts and handwriting.
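For the adults: the "look at the 2nd, 3rd or 4th letter" rule the intermediate worksheets teach is exactly lexicographic ordering, which programming languages implement directly. A minimal Python illustration using the campfire word list above:

```python
# Lexicographic (ABC) order: sorted() compares letter by letter,
# moving to the 2nd/3rd/4th letter only when earlier letters tie --
# the same skill the intermediate worksheets practice.
words = ["campfire", "camping", "canoe", "smore", "hammock"]
print(sorted(words))
# ['campfire', 'camping', 'canoe', 'hammock', 'smore']
# "campfire" beats "camping" only at the 5th letter: 'f' < 'i'.
```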
The spotted lanternfly was first detected in Pennsylvania in 2014 and has since spread to 26 counties in that state and at least six other eastern states. It's moving into southern New England, Ohio and Indiana. This approximately 1-inch-long species from Asia has attractive polka-dotted front wings but can infest and kill trees and plants. Professor Frank Hale is an entomologist who is tracking this species.

How did the spotted lanternfly get to the U.S., and how quickly is it spreading?

It is native to India, China and Vietnam and probably arrived in a cut stone shipment in 2012. The first sighting was in 2014 in Berks County, Pennsylvania, on a tree of heaven — a common invasive tree brought to North America from China in the late 1700s. By July 2021 the lanternfly had spread to about half of Pennsylvania, large areas of New Jersey, parts of New York state, Maryland, Delaware and Virginia. It also had been found in western Connecticut, eastern Ohio, and now Indiana. To give an idea of how fast these lanternflies spread, they were introduced into South Korea in 2004 and spread throughout that entire country – which is approximately the size of Pennsylvania – in only three years.

How do they spread so fast?

The lanternflies lay egg masses in late summer and autumn on the trunks of trees and any smooth-surfaced item sitting outdoors. The egg masses, which resemble smears of dry mud, can also be laid on the smooth surfaces of cars, trucks and trains. Then, they can be unintentionally transported to any part of the country in just a few days. Once the eggs hatch, the young crawl to nearby host plants to start a new infestation.

How do they damage trees and plants? What do they feed on?

They feed by piercing the bark of trees and vines to tap into the plant's vascular system and feast on sap. For a sucking insect, lanternflies are relatively big. They remove large amounts of sap and excrete copious amounts of clear, sticky "honeydew" that can coat the tree and anything beneath it. A black sooty mold grows wherever the honeydew has been deposited. While unsightly, sooty mold isn't harmful when growing on the bark of the tree or beneath it. Lanternfly feeding seriously stresses trees and vines, which lose carbohydrates and other nutrients meant for storage in the roots and eventually for new growth. Infested trees and vines grow more slowly, exhibit dieback – begin to die from the branch tips – and can even die.

How are scientists and officials trying to stop their spread?

Biological control shows some promise for the future. Two naturally occurring fungal pathogens of spotted lanternflies have been identified in the U.S. Also, U.S. labs are testing two parasitoid insects – insects that grow by feeding on lanternflies and killing them in the process – that have been brought from China for testing and possible future release.

How worried should people be about this lanternfly?

Very worried. Lanternflies easily build to high numbers. Their range of host trees is relatively wide, and lanternflies damage crops, the forest and the landscape. They damage many plants and cause a major nuisance to the general public. The heavy flow of honeydew and the resulting sooty mold make a mess of the landscape. The adults start to aggregate on plants and structures to lay their egg masses in September. Their sudden, mass appearance can be alarming to people, the way periodical cicada populations shock people when they come out of the ground.
But lanternflies are more shocking because the few predators that could feed on them, like wheel bugs and predatory stink bugs, do not seem to control the infestations. That is why the introduction of parasitoids from Asia is important for achieving some meaningful level of biological control.

Lanternflies can be a serious pest of grapes, and where found, they have reduced grape yields and damaged or killed vines. Multiple applications of insecticides are often needed to kill them, but this increases the cost of crop production. The pest threatens the major wine-producing regions in the East, such as the Finger Lakes and Long Island in New York; parts of Virginia; and Newport, Rhode Island.

Have any other pests similarly damaged trees?

Yes, the emerald ash borer, which arrived in the U.S. from China by accident and was discovered in 2002. It has killed millions of ash trees in North America. The Asian longhorned beetle, which feeds on and kills many species of trees, has turned up in multiple locations, most recently near Charleston, South Carolina. Maple, buckeye, horse chestnut, willow and elm would be threatened if this pest ever got widely established. The box tree moth damages boxwoods and is known to live in Canada. It has been seen in Connecticut, Michigan and South Carolina. It possibly was spread accidentally into the U.S. in shipments of boxwoods from Canada. It is not known to be established in any state, but a federal government order has halted importing host plants like boxwood, euonymus and holly from Canada.

What should I do if I see one?

If it has already infested the region where you live and you find spotted lanternflies on your property, contact your local county extension office for control recommendations. But if it has not been found in your county or state, report it to your state department of agriculture. If the infestation is caught early, before it can become established in your area, hopefully it can be eradicated there. Eventually, it will spread to many parts of the country. We can slow the spread by identifying and eradicating new infestations wherever they arise.
Creating Universal Blood

Researchers at UBC are one step closer to creating a universal blood type. In a study recently published in JACS, they demonstrated the ability to convert different types of blood into universal type O.

There are four major blood types (A, B, AB and O) that differ based on the presence or absence of antigens on the surface of red blood cells. Type O blood is considered to be a universal donor type, as it lacks the A and B antigens and doesn't trigger the unwanted, potentially life-threatening immune responses which can arise if type A, B, or AB blood is given to an incompatible recipient.

Enzymatic removal of the terminal sugar on the A and B antigens has been proposed as a way to convert red blood cells into a universal, antigen-null donor type. However, the enzymes discovered up to this point in time have not been efficient enough to make this idea practical. The researchers at UBC screened over 20,000 DNA samples derived from human gut bacteria and discovered a novel class of enzyme that was able to effectively cleave the sugar from A and B antigens. Using directed evolution, they were able to engineer the gene product to produce an enzyme that is approximately 30 times more efficient than the previous best candidate, suggesting that enzymatic removal of blood group antigens is an attractive method for allowing the transfusion of blood from an otherwise incompatible donor.

More research and safety testing are needed before this approach could be applied in clinical practice. However, the results from this study are very promising, and bring us one step closer to alleviating the chronic shortages of universal type O blood.
Glass blowing is a process that is used to shape glass. Limestone, sand, potash, and soda ash must first be combined and heated in a furnace at over 2000°F (1093.3°C). While the glass is in a molten state, it is shaped. In order to perform glass blowing, the artist must have a blowpipe. The tip of the blowpipe is preheated by dipping it in the molten glass as it sits in the furnace. A ball of the molten glass is accumulated on the blowpipe and rolled onto a tool called a marver, which is usually a thick sheet of steel that lies flat. The marver is important to the glass blowing process, because it creates a cool exterior layer on the glass and makes it possible to shape it. The artist blows air into the blowpipe in order to form a bubble with the molten glass. If the project calls for making a large piece, the artist can create additional bubbles over the original. With the glass blowing process, a variety of shapes can be created. By using a tool called tweezers, the glass blower can pull the glass or create detail. He or she can also use special paddles made of either graphite or wood to design flat areas in the glass. In order to manipulate the glass into various shapes, the glass blower uses tools called jacks. If he or she needs to make cuts in the piece, he or she uses what are called straight shears. Diamond shears, on the other hand, cut off large portions of glass. Once he or she has created a piece of the appropriate size, he or she moves the piece to a tool called a punty. Here, the glass blower can finish the top of the piece. Glass blowing has a history dating back to approximately 200 BCE. In these early years, the glass shaped was formed around a core made of dung or mud. Typically, the process was used to create containers capable of holding liquids. Today, it is used to create works of art and craft projects. In fact, it is one of the fastest growing hobbies in the United States.
According to recent research by the University of the West of Scotland (UWS), novel antiviral drugs—including Covid treatments—can be produced from previously overlooked substances found in marine plants.

The research examined a set of substances known as marine sulfated polysaccharides—a kind of carbohydrate containing sulfur—mostly found in the cell walls of marine algae, or seaweeds. The study demonstrated that nine of the substances analyzed showed promising results against Covid, with five of the substances reported as more effective still.

Identifying valuable biologically active compounds among marine plants throws considerable light on the significant role that marine life can play in creating efficient antiviral treatments, and a large proportion of these ecosystems is yet to be analyzed. Despite the advancement of vaccines as a protective measure, the emergence of newly evolved strains of Covid means that its threat to public health continues, so finding other suitable treatments remains a priority.

"This is a very exciting discovery and shows great potential for these substances to be developed into effective medicines." Milan Radosavljevic, Professor and Vice Principal, Research, Innovation and Engagement, University of the West of Scotland

Professor Radosavljevic also adds, "This study highlights our internationally-renowned research expertise, and I am proud of the commitment from our academics and their research students to find solutions to some of the world's most urgent issues."

"I'm pleased to be involved in this research, which shows extremely positive results for the future development of antiviral drugs. Scientists often look to nature as a potential source for new drug discoveries: for example, the natural environment provided us with penicillin – the first naturally occurring antibiotic to be used therapeutically. Thanks to the development of this drug, various diseases caused by bacteria ceased to be life-threatening." Abdalla Mohamedsalih, PhD Student, School of Computing, Engineering and Physical Sciences, University of the West of Scotland

Abdalla Mohamedsalih adds, "I look forward to continuing this research, as we seek to find effective treatments for Covid."

Scientists from UWS examined earlier research spanning 25 years and concentrated on work that referred to marine plant substances harboring some effect against various viruses. A shortlist was created of 45 substances exhibiting potential antiviral effects. These shortlisted substances were from different marine sources, including various kinds of microalgae, algae, squid cartilage, and sea cucumbers.

Computer-generated versions of these molecules were created, along with the spike protein—the protein covering the outside of the coronavirus. Through these computer models, scientists simulated how well each substance would attach to the spike protein. Molecular docking—the process of using computer simulations to test binding—is widely employed by researchers. Nine substances from the shortlist revealed real promise as avenues for the future development of therapeutic drugs.

Although the ocean has been looked to as a source of biologically active materials for many years, the marine environment is still largely under-investigated as far as medicines are concerned.
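To illustrate the kind of post-processing such a docking screen involves (not the UWS team's actual pipeline; the compound names and scores below are entirely made up for this sketch), here is a minimal Python example of ranking candidates by predicted binding energy:

```python
# Purely illustrative: rank docking results the way a screening study
# might, keeping the strongest predicted binders for follow-up. In
# real studies these scores come from docking software output; by
# convention, a more negative binding energy means stronger binding.
docking_scores = {
    "compound_A": -9.1,   # hypothetical kcal/mol values
    "compound_B": -6.3,
    "compound_C": -8.7,
    "compound_D": -5.2,
}

ranked = sorted(docking_scores.items(), key=lambda kv: kv[1])
shortlist = [name for name, score in ranked if score <= -8.0]
print(ranked)      # strongest predicted binders first
print(shortlist)   # candidates worth experimental follow-up
```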
"Effective new antiviral drugs are critically needed for treating a wide range of diseases, including Covid, which continues to be a threat globally." Dr Mostafa Rateb, Lecturer, School of Computing, Engineering and Physical Sciences, University of the West of Scotland

"This is a very interesting discovery and I look forward to seeing where these initial findings lead," remarks Dr Mostafa Rateb.

Discovering novel medicines appropriate for treating Covid and other viruses is a priority for scientists globally. They are striving to produce drugs that target life-threatening viruses and prevent them from developing in the body. As of now, only limited options are available for treating Covid, with only a handful of existing medicines approved for use against the disease; hence the focus is on identifying other suitable treatments. The initial results from the current study show promise that marine plants have great potential for use as Covid inhibitors, even though further experimental research is needed.

Salih, A. E. M., et al. (2021) Marine Sulfated Polysaccharides as Promising Antiviral Agents: A Comprehensive Report and Modeling Study Focusing on SARS-CoV-2. Marine Drugs. doi.org/10.3390/md19080406.
Fundamental Interactions – Fundamental Forces

In physics, the fundamental interactions, also known as fundamental forces, are interactions among elementary particles that do not appear to be reducible to more basic interactions. These interactions govern how particles and also macroscopic objects interact, and how certain particles decay. Generally, they can be classified into one of four fundamental forces:

- Gravitational force. Gravity was the first force to be investigated scientifically. The gravitational force was described systematically by Isaac Newton in the 17th century. Newton stated that the gravitational force acts between all objects having mass (including objects ranging from atoms and photons to planets and stars) and is directly proportional to the masses of the bodies and inversely proportional to the square of the distance between the bodies. Since energy and mass are equivalent, all forms of energy (including light) cause gravitation and are under the influence of it. Gravity is the weakest of the four fundamental forces of physics, approximately 10^38 times weaker than the strong force. On the other hand, gravity is additive: every speck of matter that you put into a lump contributes to the overall gravity of the lump. Since it is also a very long-range force, it is the dominant force at the macroscopic scale, and is the cause of the formation, shape and trajectory (orbit) of astronomical bodies.

- Electromagnetic force. The electromagnetic force is the force responsible for all electromagnetic processes. It acts between electrically charged particles. It is an infinite-ranged force, much stronger than the gravitational force, and obeys the inverse square law, but neither electricity nor magnetism adds up in the way that gravitational force does. Since there are positive and negative charges (poles), these charges tend to cancel each other out.

- Weak force. The weak interaction or weak force is one of the four fundamental forces and involves the exchange of the intermediate vector bosons, the W and the Z. Since these bosons are very massive (on the order of 80 GeV), the uncertainty principle dictates a range of about 10^-18 meters, which is less than the diameter of a proton. As a result, the weak interaction takes place only at very small, sub-atomic distances.

- Strong force. The strong interaction or strong force is one of the four fundamental forces and involves the exchange of the vector gauge bosons known as gluons. In general, the strong interaction is a very complicated interaction, because it varies significantly with distance. The strong nuclear force holds most ordinary matter together because it confines quarks into hadron particles such as the proton and neutron. Moreover, the force that can hold a nucleus together against the enormous repulsion between its protons must be strong indeed.

These fundamental interactions are characterized on the basis of the following four criteria:

- the types of particles that experience the force
- the range over which the force is effective
- the relative strength of the force
- the nature of the particles that mediate the force
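As a quick numerical illustration of these relative strengths (my own worked example, using standard physical constants), the following Python snippet compares the electrostatic and gravitational forces between two protons. Both follow an inverse square law, so the separation distance cancels out of the ratio:

```python
# Compare electromagnetic vs gravitational force for two protons.
# Both forces scale as 1/r^2, so their ratio is independent of distance.
G   = 6.674e-11     # gravitational constant, N m^2 / kg^2
k_e = 8.988e9       # Coulomb constant, N m^2 / C^2
m_p = 1.673e-27     # proton mass, kg
q_p = 1.602e-19     # proton charge, C

ratio = (k_e * q_p**2) / (G * m_p**2)
print(f"F_electric / F_gravity = {ratio:.2e}")   # ~1.2e36
```

So even between two protons, gravity is roughly 10^36 times weaker than the electromagnetic force; the 10^38 figure quoted above compares gravity with the strong force instead.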
This Plain English fact sheet outlines the work done by the EPA in monitoring aquatic plants in Irish lakes.

Summary: Aquatic plants are good at showing whether the quality of the water is good or bad, and they play an important role in lake ecology by providing food and a habitat for many smaller plants, animals and birds.

Aquatic plants play an important role in lake ecology by providing food and a habitat for many smaller plants, animals and birds. They:

• provide shelter for young fish,
• help to improve the clarity of the water,
• help stabilise lake shore banks, and
• reduce the amount of sediment being suspended in the water.

The Environmental Protection Agency (EPA) monitors these aquatic plants at more than 10,000 sites in over 200 lakes once every three years.
Differentiation: audit tools and resources Download our audit to invite teachers to reflect on their use of differentiation in areas such as questioning, resources and tasks set. See if these examples of differentiation strategies could work for you. Share our audit tool with your teachers Use our tool to help your teachers reflect on their use of differentiation. The audit asks about things such as: - Which resources the teachers use, and if they have a range of texts at different difficulty levels available when using texts - How they structure activities and tasks - How they develop the use of key vocabulary and language so it's accessible for all groups, but also challenging where appropriate - What questions they ask, and to whom For each question, the teacher is asked to consider: - Low attainers - The most able pupils - Pupils with special educational needs (SEN) - Pupils with English as an additional language (EAL) See examples of differentiation strategies Differentiation strategies across the Key Stages Sign up to receive Mike Gershon's free 'Differentiation Deviser', which has 80 examples of differentiation strategies, activities and techniques, across the Key Stages and the curriculum. The Bell Foundation
Myocardial infarction is the medical term for a heart attack. A heart attack essentially means that there has been a death of heart cells. This results from a complete blockage of one of the coronary arteries, the blood vessels that feed the heart muscle.

Atherosclerosis is the main cause of myocardial infarction. Atherosclerotic plaque is a substance made up of cells, cholesterol, and other fatty substances. This substance develops in the wall of the coronary artery and, over time, becomes large enough to start narrowing the channel through which the blood travels. This can lead to the forming of a blood clot, which can cause complete obstruction of the artery and a subsequent heart attack.

The most common symptom of a heart attack is chest pain. The pain is typically described as a burning or pressure sensation beneath the mid- or upper breastbone. The pain may also radiate into the mid-upper back, neck, jaw, or arms. Although the pain can be severe, it's often only moderate, and can be accompanied by shortness of breath or sweating.

Many individuals are at risk for having a heart attack. The risk factors include:
• Heredity—especially early heart disease
• High cholesterol
• High blood pressure
• Sedentary lifestyle
• Cigarette smoking

The best treatment for a heart attack is prevention, which means controlling or eliminating risk factors. Exercising is also an important part of treatment, as are regular examinations by your doctor.
The ultimate size of a fruit tree - its mature height and spread - is affected by many characteristics. Local climate, soil conditions, and the species (apple, plum, cherry and so on) all play a part. Within species, some varieties naturally tend to grow more vigorously than others. Bramley's Seedling apple trees, for example, tend to be bigger and stronger than Rubinette apple trees. However, the most significant factor in the ultimate size of your fruit tree is its rootstock. A fruit tree supplied from our nursery usually consists of two parts: the scion (the fruiting variety), which makes up most of the tree that you see above ground-level, and the rootstock, which - as the name suggests - is the roots. The join or "union" is easy to spot in a young tree - it is the kink a few inches above the ground where the scion was budded or grafted on to the rootstock. This marriage works because rootstocks are very closely related to scions - thus apple rootstocks are apple varieties in their own right, where the main attribute is not fruit flavour but tree size. Plum rootstocks can also be used for apricots and peaches, which shows just how closely these species are related. Most rootstocks will produce edible fruit if left to grow naturally, but the fruit is usually small and poorly flavoured. The rootstock not only influences the size of the tree, it also provides other characteristics such as precocity (the age of the tree when it will first start to bear fruit), some disease resistance attributes, and resistance to harsh winters. However, with some exceptions, the rootstock has little influence over the size of the fruit - so an apple from a tree growing on a rootstock which limits tree height to just 2m or so is going to be roughly the same size as an apple from a tree growing on a rootstock which allows the height to be 5m or more. In other words, even if you have a very small garden and want to grow very small trees, the fruit is going to be the standard size. Whilst "seedling" trees (grafted on to roots of the same variety grown from seed) were the norm until the 20th century, the technique of grafting a fruiting variety on to a rootstock to control the size, vigour and precocity of apple trees in particular had been known for several centuries. In the late nineteenth and early twentieth centuries, researchers began to formalise the use of size-controlling rootstocks for apple trees and later for other fruit tree species. Although rootstocks are invariably given cryptic numeric reference codes, they are essentially fruit tree varieties in their own right. New rootstocks are developed using the same techniques that are used for developing fruit varieties - a combination of chance, observation, and scientific crossing of varieties with desirable characteristics. The most widely-used rootstock in Europe in the 19th century was called Paradise, although there were dozens of different forms of this rootstock with a range of vigours and little standardisation. Beginning around 1912, researchers at East Malling Research Station in the UK were the first to classify rootstocks and develop new ones for specific purposes. In about 1917 they released "M IX", or "M9" as we now know it, which was derived from a form of the Paradise rootstock called "Jaune de Metz" (it is possible that "Jaune" refers to the golden-yellow bark of this rootstock).
Apple trees grown on M9 rootstocks are very small and fruit very early in life, making this an ideal rootstock for commercial apple orchards; it is indeed probably the most widely-planted of all rootstocks. East Malling Research Station, in conjunction with other UK research stations at Merton and Long Ashton, developed a range of virus-free rootstocks, of which M27, M9, M26, MM106, M7, MM111, and M25 are in widespread use today. Note that the numbers in the East Malling series have no relation to the size of the tree - M27 and M26 produce trees which are respectively smaller and larger than M9! In the USA, resistance to the disease fireblight and the ability to survive extreme cold are more important than they are in Europe. Work on rootstocks able to resist fireblight and cold was initially carried out at the New York Agricultural Experimental Station in Geneva, NY, and this led to the G-series rootstocks. These rootstocks are often better suited to the climate and disease regime of North America than the East Malling series - although M9 and M7 are also widely used in North America. Given the commercial importance of M9, many researchers have developed improved versions or alternatives. Whilst most scientific attention has focussed on developing rootstocks for apple trees, rootstocks are also important for growing pears, plums and cherries. Rootstocks for Fruit Trees supplied by Orange Pippin For the gardener or home orchardist, the most important choice is usually the mature size of the tree rather than the rootstock, but we include rootstock references in our ordering section. For more information about the rootstocks we use, see the following pages:
Most aircraft require some form of electrical power to operate navigation, taxi, landing, and strobe lights, one or more COM and NAV radios, a transponder, an intercom and other advanced electronic systems of your choice. The electrical system consists of a battery and an alternator (or a generator on older aircraft). All of this is connected through several meters (kilometers/miles in large aircraft) of wire. All matter on Earth is made up of molecules, which in turn consist of atoms. These atoms are made of electrons, protons and neutrons. Electricity is about the flow of free electrons, attracted to protons and repelled by other electrons. Aircraft used to use generators to generate electrical energy, but modern designs use an alternator, which is lighter, has more capacity and can generate more power at lower RPMs than the good old generator could. This page will be a bit technical, but for a good understanding of electricity generation it is necessary that you work your way through. When a current flows through a wire, a weak but detectable magnetic field exists around that wire. If that wire is then formed into a coil, the resultant magnetic field is concentrated: the lines of magnetic force of the separate wires all line up together, creating a stronger magnetic field, or flux. The same principle works in reverse too: when a wire passes through a magnetic field, a voltage is generated. Form that wire into a coil and rotate a magnet through it, and an even higher alternating voltage is generated. Other forms of generating electricity are: friction (static electricity), heat (thermocouple with two dissimilar metals), pressure (piezoelectric crystals), light (photo/light-sensitive voltaic cells) and chemical (battery). These are listed in order of the amount of usable power obtainable from these sources, from lowest to highest. In a generator, the magnetic field is generated by a stationary permanent magnet and a coil is rotated within the field (the other way around works too, see Rotax engines). Two slip rings are used to pick up the AC voltage. If a DC voltage is required, the slip rings are replaced by a commutator. A commutator makes sure that the same polarity voltage is picked up by the brushes at the same angular position. This rectifies the alternating voltage for use in the aircraft's DC system. In the real world the permanent magnet is assisted by a field coil, which strengthens the field of the permanent magnet; the generator is then said to be self-exciting. A drawback with this generator-type system is that the aircraft engine RPM must be above 1200 for the generator to charge the battery at a sufficient rate. During taxi and other low-RPM activity the battery will be the main power source, so keeping a watchful eye on the ammeter and/or voltmeter is important. In contrast to the generator, an alternator uses a rotating magnetic field within a stationary coil to generate electricity. This rotating magnetic field can be supplied by a magnet, but normally a coil with an iron core is used, which is therefore called an electromagnet. The ALT part of the main switch energizes the field coil of the alternator with power from the battery until the alternator comes online. The generated voltage is alternating and is rectified by internal diodes to a usable DC voltage. This explains why, if the battery fails while in flight, the pilot switches the ALT switch off and back on to attempt to 'reset' the system.
However, the magnetic field cannot be rebuilt by the field coil (dead battery), and as a result the alternator will not produce any power, leaving the aircraft without a long-term source of electricity. One of the advantages of the alternator is that it generates more power, even when the engine is idling, and it even weighs less than the generator! The lower weight can be explained by the absence of a heavy permanent magnet inside the alternator. Both types need a voltage regulator to keep their output constant at 13.8 volts (or 28 volts in those systems); current regulation is by design in the alternator, but the generator needs an external regulator combined with reverse-current-flow protection (diodes). There are two types of batteries: primary and secondary cells. Primary cells cannot be recharged, whereas secondary cells can be. Primary cells are: zinc-carbon, lithium and alkaline type batteries. Some examples of secondary cells are: lead acid, nickel cadmium, nickel metal-hydride, silver-zinc, lithium-ion (Li-Ion), lithium-polymer (LiPo) and lithium-iron-phosphate (LiFePO4). These are all rechargeable, but each chemistry demands its own charge characteristic, and if you do not follow that strictly, the results are more than interesting! Some electrical and mechanical specifications of lithium primary cells (like in the image: lithium-iron-disulfide (LiFeS2)) can be found in this PDF from Energizer: Product Datasheet. The principle of a lead acid battery is as follows: two dissimilar electrodes are placed in an electrolyte; all three are conductors. The chemicals react with the electrodes, electrons accumulate at the negative electrode while a shortage of electrons exists at the positive terminal, and a voltage of 2.1 volts builds up in each cell. A 12-volt battery is made up of six such cells. Each cell in a NiCad/NiMH battery has a voltage of 1.2 volts, so you will need 10 cells for a 12-volt battery. NiCads use a strong alkaline electrolyte. The cells in a lithium battery are between 1.5 and 4 volts, depending on the chemistry and charge level of the cell. They have a long life and a higher charge density, but cost more than ordinary lead-acid batteries and need a special charger. The other advantage is that they weigh much less! Also known as solar cells, these devices convert photons from any light source hitting the semiconductor material in the cell into electricity. The most commonly known cell is made from silicon in layers (a p-n junction), where one layer is doped with boron and the other with phosphorus. The efficiency of practical solar cells is around 20 - 25%; the physical limit for a single p-n junction cell is known as the Shockley-Queisser limit, at 33.7%. This means the power in sunlight (1000 W/m²) can ideally generate a maximum of 337 watts per square metre when the sun is at a 90° angle to the cell. Gliders sometimes use solar panels to recharge their batteries during flight (one example is the Pipistrel Taurus, see image). Large-scale solar application for electric aircraft (solar cells on wings and fuselage) is not feasible due to the previously mentioned limitations. The Pipistrel aircraft Alpha Electro and Velis use a related energy-recovery technique to regain some of the energy used to climb to altitude. With this process they convert potential (altitude) and kinetic (speed) energy back into chemical energy in the batteries.
Our E6B Pilot Tools app can calculate exactly how much energy can be recovered down to zero speed and altitude; follow the next link to the potential/kinetic calculator, which assumes no losses in the conversion from airspeed to mechanical to electrical to chemical energy. Remember that you have to reverse this path to use this stored energy again at the propeller as thrust, so your resulting final efficiency is not that high. For example: if every step is 80% efficient, then the final efficiency is about 26% back into thrust. Written by EAI.
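To make that arithmetic concrete, here is a minimal sketch of the calculation in Python. It is not the E6B app itself: the aircraft mass, altitude, and airspeed are invented example inputs, and the 80% per-step efficiency is simply the figure used in the example above.

```python
# Sketch: energy recoverable from altitude and airspeed, and the round-trip
# efficiency of storing it in the battery and turning it back into thrust.
# Mass, altitude, and airspeed below are invented example inputs.

G = 9.81  # gravitational acceleration, m/s^2

def ideal_energy(mass_kg, altitude_m, airspeed_ms):
    """Potential plus kinetic energy, before any conversion losses."""
    return mass_kg * G * altitude_m + 0.5 * mass_kg * airspeed_ms ** 2

def round_trip_efficiency(step_eff=0.8, steps_each_way=3):
    """Airspeed/altitude -> mechanical -> electrical -> chemical is three
    steps; using the stored energy again reverses the same path."""
    return step_eff ** (2 * steps_each_way)

if __name__ == "__main__":
    e = ideal_energy(mass_kg=550, altitude_m=1000, airspeed_ms=50)
    eff = round_trip_efficiency()
    print(f"Ideal recoverable energy: {e / 3.6e6:.2f} kWh")
    print(f"Back into thrust: {e * eff / 3.6e6:.2f} kWh ({eff:.0%})")
```

Chaining six 80% steps gives 0.8 to the sixth power, about 0.26, which is where the "about 26%" figure above comes from.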
People with Down syndrome have an extra or irregular chromosome in some or all of their body's cells. The chromosomal abnormalities impair physical and mental development. Most people with Down syndrome have distinctive physical features. The extra or irregular chromosomes related to Down syndrome result from abnormal cell division in the egg before or after it is fertilized by sperm. Less often, the abnormal cell division occurs in sperm before conception. It is not known why the cells divide abnormally. Signs of Down syndrome usually appear at birth or shortly thereafter. Many children with the condition have a flat face, small ears and mouth, and broad hands and feet, although these features vary from person to person. Most young children have a lack of muscle tone (hypotonia), which generally improves by late childhood. Developmental disabilities often result from the combination of a lower intelligence level and the physical limitations related to Down syndrome. Heart defects, intestinal abnormalities, and irregular ear and respiratory tract structures can also occur and cause additional symptoms or lead to complications. Your child’s treatment for Down syndrome will be directed by a team of health professionals. This treatment is guided by the identification of your child’s unique symptoms and physical problems. You can help your child become as independent as possible and lead a healthy, productive life by working closely with these health professionals and other care providers. It is normal to experience a wide range of emotions when your baby is born with Down syndrome. Even if you learned about your baby’s condition while pregnant, the first few weeks after birth often are very difficult as you learn to cope with the diagnosis. A confirmed diagnosis of Down syndrome requires karyotyping. This test usually is done on a sample of your baby’s blood if it is done after birth. It may take 2 to 3 weeks to get the complete results of this test. This waiting period can be extremely difficult, especially if earlier test results were uncertain and your baby has only subtle characteristics of Down syndrome. Your newborn with Down syndrome will have regular checkups and various tests during the first month. These tests are used to monitor his or her condition and to help health professionals look for early signs of common diseases associated with Down syndrome and other health conditions. These checkups also are a good time to begin discussing issues of concern about your newborn. As a parent of a child with Down syndrome, you play an important role in helping your child reach his or her full potential. Most families choose to raise their child, while some consider foster care or adoption. Support groups and organizations can assist you in making the right decision for your family. Being a parent of a child with Down syndrome is full of challenges and frustrations and frequent highs and lows—all of which can lead to exhaustion. Take good care of yourself so you have the energy to enjoy your child and attend to his or her needs. Be patient and encouraging with your young child as he or she learns to walk and master other developmental skills, such as turning over, sitting, standing, and talking. Your child will likely take more time than other children to reach these milestones, but the achievements are just as significant and exciting to watch. Enroll your young child (infant through age 3) in an early-intervention program.
These programs have staff who are trained to monitor and encourage your child’s development. Talk with a health professional about programs available in your area. Basic skills, such as learning to feed oneself and dress independently, also take longer for children with Down syndrome to accomplish. Maintain a positive attitude when helping your child learn these tasks. Provide opportunities to practice, and recognize that it is okay for your child to be challenged and sometimes fail. You also can promote your child’s development by having a positive attitude and providing him or her with learning and socialization opportunities. You can stimulate your child’s thinking skills without making tasks too difficult. A child with Down syndrome may need additional therapy, counseling, or training. Parents and other caregivers may also need assistance in planning a secure future for their family member with Down syndrome. Different types of therapy, such as occupational and speech therapy, are used frequently to help people with Down syndrome learn essential skills and achieve as much independence as possible. Family counseling. This therapy involves regular sessions with a qualified counselor who has experience working with families who have children with Down syndrome.
Technology is a ubiquitous part of children's lives. It is transparent. Most homes have connected computers or Internet-enabled devices. As prices of technology drop, computers and digital devices may replace television as we know it. When pioneering educational technology advocate Jan Hawkins wrote an essay for Edutopia in 1997, "The World at Your Fingertips: Education Technology Opens Doors," about how technology brings the tools of empowerment into the hands and minds of those who use them, she couldn't have known her words would be even more relevant today. Now, walk into a classroom. Are there computers, and if so, how are they being used? Are they being used at all? Technology has revolutionized the way we think, work, and play. Technology, when integrated into the curriculum, revolutionizes the learning process. More and more studies show that technology integration in the curriculum improves students' learning processes and outcomes. Teachers who recognize computers as problem-solving tools change the way they teach. They move from a behavioral approach to a more constructivist approach. Technology and interactive multimedia are more conducive to project-based learning. Students are engaged in their learning using these powerful tools, and can become creators and critics instead of just consumers. NatureMapping, for example, brings real science to the classroom with hand-held data collection devices. Another reason for technology integration is the necessity of today's students to have 21st century skills. These 21st century skills include:
- personal and social responsibility
- planning, critical thinking, reasoning, and creativity
- strong communication skills, both for interpersonal and presentation needs
- visualizing and decision making
- knowing how and when to use technology and choosing the most appropriate tool for the task
The Edutopia article "Why Integrate Technology into the Curriculum?: The Reasons Are Many" offers this summary: "Integrating technology into classroom instruction means more than teaching basic computer skills and software programs in a separate computer class. Effective tech integration must happen across the curriculum in ways that research shows deepen and enhance the learning process. In particular, it must support four key components of learning: active engagement, participation in groups, frequent interaction and feedback, and connection to real-world experts." Technology helps change the student/teacher roles and relationships: students take responsibility for their learning outcomes, while teachers become guides and facilitators. Technology lends itself as the multidimensional tool that assists that process. For economically disadvantaged students, the school may be the only place where they will have the opportunity to use a computer and integrate technology into their learning (for more about equity, access, and digital inclusion, check out our Digital Divide Resource Roundup).
Tests and procedures used to diagnose mouth cancer include:
- Physical exam. Your doctor or dentist will examine your lips and mouth to look for abnormalities — areas of irritation, such as sores and white patches (leukoplakia). Leukoplakia appears as thick, white patches on the inside surfaces of your mouth. It has a number of possible causes, including repeated injury or irritation, and it can also be a sign of precancerous changes in the mouth or mouth cancer.
- Removal of tissue for testing (biopsy). If a suspicious area is found, your doctor or dentist may remove a sample of cells for laboratory testing in a procedure called a biopsy. The doctor might use a cutting tool to cut away a sample of tissue or use a needle to remove a sample. In the laboratory, the cells are analyzed for cancer or precancerous changes that indicate a risk of future cancer.
Determining the extent of the cancer
Once mouth cancer is diagnosed, your doctor works to determine the extent (stage) of your cancer. Mouth cancer staging tests may include:
- Using a small camera to inspect your throat. During a procedure called endoscopy, your doctor may pass a small, flexible camera equipped with a light down your throat to look for signs that cancer has spread beyond your mouth.
- Imaging tests. A variety of imaging tests may help determine whether cancer has spread beyond your mouth. Imaging tests may include X-ray, CT, MRI and positron emission tomography (PET) scans, among others. Not everyone needs each test. Your doctor will determine which tests are appropriate based on your condition.
Mouth cancer stages are indicated using Roman numerals I through IV. A lower stage, such as stage I, indicates a smaller cancer confined to one area. A higher stage, such as stage IV, indicates a larger cancer, or that cancer has spread to other areas of the head or neck or to other areas of the body. Your cancer's stage helps your doctor determine your treatment options. Treatment for mouth cancer depends on your cancer's location and stage, as well as your overall health and personal preferences. You may have just one type of treatment, or you may undergo a combination of cancer treatments. Treatment options include surgery, radiation and chemotherapy. Discuss your options with your doctor. Surgery for mouth cancer may include:
- Surgery to remove the tumor. Your surgeon may cut away the tumor and a margin of healthy tissue that surrounds it to ensure all of the cancer cells have been removed. Smaller cancers may be removed through minor surgery. Larger tumors may require more-extensive procedures. For instance, removing a larger tumor may involve removing a section of your jawbone or a portion of your tongue.
- Surgery to remove cancer that has spread to the neck. If cancer cells have spread to the lymph nodes in your neck or if there's a high risk that this has happened based on the size or depth of your cancer, your surgeon may recommend a procedure to remove lymph nodes and related tissue in your neck (neck dissection). Neck dissection removes any cancer cells that may have spread to your lymph nodes. It's also useful for determining whether you will need additional treatment after surgery.
- Surgery to reconstruct the mouth. After an operation to remove your cancer, your surgeon may recommend reconstructive surgery to rebuild your mouth to help you regain the ability to talk and eat. Your surgeon may transplant grafts of skin, muscle or bone from other parts of your body to reconstruct your mouth.
Dental implants also may be used to replace your natural teeth. Surgery carries a risk of bleeding and infection. Surgery for mouth cancer often affects your appearance, as well as your ability to speak, eat and swallow. You may need a tube to help you eat, drink and take medicine. For short-term use, the tube may be inserted through your nose and into your stomach. Longer term, a tube may be inserted through your skin and into your stomach. Your doctor may refer you to specialists who can help you cope with these changes. Radiation therapy uses high-energy beams, such as X-rays and protons, to kill cancer cells. Radiation therapy is most often delivered from a machine outside of your body (external beam radiation), though it can also come from radioactive seeds and wires placed near your cancer (brachytherapy). Radiation therapy is often used after surgery. But sometimes it might be used alone if you have an early-stage mouth cancer. In other situations, radiation therapy may be combined with chemotherapy. This combination increases the effectiveness of radiation therapy, but it also increases the side effects you may experience. In cases of advanced mouth cancer, radiation therapy may help relieve signs and symptoms caused by the cancer, such as pain. The side effects of radiation therapy to your mouth may include dry mouth, tooth decay and damage to your jawbone. Your doctor will recommend that you visit a dentist before beginning radiation therapy to be sure your teeth are as healthy as possible. Any unhealthy teeth may need treatment or removal. A dentist can also help you understand how best to care for your teeth during and after radiation therapy to reduce your risk of complications. Chemotherapy is a treatment that uses chemicals to kill cancer cells. Chemotherapy drugs can be given alone, in combination with other chemotherapy drugs or in combination with other cancer treatments. Chemotherapy may increase the effectiveness of radiation therapy, so the two are often combined. The side effects of chemotherapy depend on which drugs you receive. Common side effects include nausea, vomiting and hair loss. Ask your doctor which side effects are likely for the chemotherapy drugs you'll receive. Targeted drug therapy Targeted drugs treat mouth cancer by altering specific aspects of cancer cells that fuel their growth. Targeted drugs can be used alone or in combination with chemotherapy or radiation therapy. Cetuximab (Erbitux) is one targeted therapy used to treat mouth cancer in certain situations. Cetuximab stops the action of a protein that's found in many types of healthy cells, but is more prevalent in certain types of cancer cells. Side effects include skin rash, itching, headache, diarrhea and infections. Other targeted drugs might be an option if standard treatments aren't working. Immunotherapy uses your immune system to fight cancer. Your body's disease-fighting immune system may not attack your cancer because the cancer cells produce proteins that blind the immune system cells. Immunotherapy works by interfering with that process. Immunotherapy treatments are generally reserved for people with advanced mouth cancer that's not responding to standard treatments. Explore Mayo Clinic studies testing new treatments, interventions and tests as a means to prevent, detect, treat or manage this disease. Lifestyle and home remedies Quit using tobacco Mouth cancers are closely linked to tobacco use, including cigarettes, cigars, pipes, chewing tobacco and snuff, among others. 
Not everyone who is diagnosed with mouth cancer uses tobacco. But if you do, now is the time to stop because: - Tobacco use makes treatment less effective. - Tobacco use makes it harder for your body to heal after surgery. - Tobacco use increases your risk of a cancer recurrence and of getting another cancer in the future. Quitting smoking or chewing can be very difficult. And it's that much harder when you're trying to cope with a stressful situation, such as a cancer diagnosis and treatment. Your doctor can discuss all of your options, including medications, nicotine replacement products and counseling. Quit drinking alcohol Alcohol, particularly when combined with tobacco use, greatly increases the risk of mouth cancer. If you drink alcohol, stop drinking all types of alcohol. This may help reduce your risk of a second cancer. No complementary or alternative medicine treatments can cure mouth cancer. But complementary and alternative medicine treatments may help you cope with mouth cancer and the side effects of cancer treatment, such as fatigue. Many people undergoing cancer treatment experience fatigue. Your doctor can treat underlying causes of fatigue, but the feeling of being utterly worn out may persist despite treatments. Complementary therapies can help you cope with fatigue. Ask your doctor about trying: - Exercise. Try gentle exercise for 30 minutes on most days of the week. Moderate exercise, such as brisk walking, during and after cancer treatment reduces fatigue. Talk to your doctor before you begin exercising, to make sure it's safe for you. - Massage therapy. During a massage, a massage therapist uses his or her hands to apply pressure to your skin and muscles. Some massage therapists are specially trained to work with people who have cancer. Ask your doctor for names of massage therapists in your community. - Relaxation. Activities that help you feel relaxed may help you cope. Try listening to music or writing in a journal. - Acupuncture. During an acupuncture session, a trained practitioner inserts thin needles into precise points on your body. Some acupuncturists are specially trained to work with people with cancer. Ask your doctor to recommend someone in your community. Coping and support As you discuss your mouth cancer treatment options with your doctor, you may feel overwhelmed. It can be a confusing time, as you're trying to come to terms with your new diagnosis, and also being pressed to make treatment decisions. Cope with this uncertainty by taking control of what you can. For instance, try to: - Learn enough about mouth cancer to make treatment decisions. Make a list of questions to ask at your next appointment. Bring a recorder or a friend to help you take notes. Ask your doctor about reliable books or websites to turn to for accurate information. The more you know about your cancer and your treatment options, the more confident you'll feel as you make treatment decisions. - Talk to other mouth cancer survivors. Connect with people who understand what you're going through. Ask your doctor about support groups for people with cancer in your community. Or contact your local chapter of the American Cancer Society. Another option is online message boards, such as those run by the Oral Cancer Foundation. - Take time for yourself. Set aside time for yourself each day. Use this time to take your mind off your cancer and do what makes you happy. Even a short break for some relaxation in the middle of a day full of tests and scans may help you cope. 
- Keep family and friends close. Friends and family can provide both emotional and practical support as you go through treatment. Your friends and family will likely ask you what they can do to help. Take them up on their offers. Think ahead to ways you might like help, whether it's asking a friend to prepare a meal for you or asking a family member to be there when you need someone to talk with. Preparing for your appointment Make an appointment with your doctor or dentist if you have signs or symptoms that worry you. If your doctor or dentist feels you may have mouth cancer, you may be referred to a dentist who specializes in diseases of the gums and related tissue in the mouth (periodontist) or to a doctor who specializes in diseases that affect the ears, nose and throat (otolaryngologist). Because appointments can be brief, and because there's often a lot of ground to cover, it's a good idea to be well-prepared. Here's some information to help you get ready, and what to expect from your doctor. What you can do - Be aware of any pre-appointment restrictions. At the time you make the appointment, be sure to ask if there's anything you need to do in advance, such as restrict your diet. - Write down any symptoms you're experiencing, including any that may seem unrelated to the reason for which you scheduled the appointment. - Write down key personal information, including any major stresses or recent life changes. - Make a list of all medications, vitamins or supplements that you're taking. - Consider taking a family member or friend along. Sometimes it can be difficult to remember all the information provided during an appointment. Someone who accompanies you may remember something that you missed or forgot. - Write down questions to ask your doctor. Your time with your doctor is limited, so preparing a list of questions can help you make the most of your time together. List your questions from most important to least important in case time runs out. For mouth cancer, some basic questions to ask include: - What is likely causing my symptoms or condition? - What are other possible causes for my symptoms or condition? - What kinds of tests do I need? - Is my condition likely temporary or chronic? - What is the best course of action? - What are the alternatives to the primary approach that you're suggesting? - I have these other health conditions. How can I best manage them together? - Are there any restrictions that I need to follow? - Should I see a specialist? What will that cost, and will my insurance cover it? - Are there brochures or other printed material that I can take with me? What websites do you recommend? - What will determine whether I should plan for a follow-up visit? In addition to the questions that you've prepared to ask your doctor, don't hesitate to ask other questions that occur to you. What to expect from your doctor Your doctor is likely to ask you a number of questions. Being ready to answer them may allow more time later to cover points you want to address. Your doctor may ask: - When did you first begin experiencing symptoms? - Have your symptoms been continuous, or occasional? - How severe are your symptoms? - What, if anything, seems to improve your symptoms? - What, if anything, appears to worsen your symptoms? - Do you now or have you ever used tobacco? - Do you drink alcohol? - Have you ever received radiation therapy to your head or neck area? What you can do in the meantime Avoid doing things that worsen your signs and symptoms. 
If you have pain in your mouth, avoid foods that are spicy, hard or acidic and that may cause further irritation. If you're having trouble eating because of pain, consider drinking nutritional supplement beverages. These can give you the nutrition you need until you can meet with your doctor or your dentist.
A low-flush toilet (or low-flow toilet or high-efficiency toilet) is a flush toilet that uses significantly less water than traditional high-flow toilets. Before the early 1990s in the United States, standard flush toilets typically required at least 3.5 gallons (13.2 litres) per flush, and they used float valves that often leaked, increasing their total water use. In the early 1990s, because of concerns about water shortages and because of improvements in toilet technology, some states and then the federal government began to develop water-efficiency standards for appliances, including toilets, mandating that new toilets use less water. The first standards required low-flow toilets of 1.6 gallons (6.0 litres) per flush. Further improvements in the technology, made to overcome concerns about the poor performance of early models, have further cut the water use of toilets: while the federal standard remains at 1.6 gallons per flush, certain states have toughened their standards to require that new toilets use no more than 1.28 gallons (4.8 litres) per flush, while working far better than older models. Low-flush toilets include single-flush models and dual-flush toilets, which typically use 1.6 US gallons per flush for the full flush and 1.28 US gallons or less for a reduced flush. The US Environmental Protection Agency's WaterSense program provides certification that toilets meet the goal of using less than 1.6 US gallons per flush. Units that meet or exceed this standard can carry the WaterSense sticker. The EPA estimates that the average US home will save US$90 per year, and $2,000 over the lifetime of the toilets. Dry toilets can lead to even more water savings in private homes as they use no water for flushing. The early low-flush toilets in the US often had a poor design that required more than one flush to rid the bowl of solid waste, resulting in limited water savings. In response, US Congressman Joe Knollenberg from Michigan tried unsuccessfully to get Congress to repeal the law, and the industry worked to redesign and improve toilet functioning. Some reductions in sewer flows have caused slight backups or required redesign of wastewater pipes, but overall, very substantial residential water savings have resulted from the change over time to more efficient toilets. In 1988 Massachusetts became the first state in the US to mandate the use of low-flush toilets in new construction and remodeling. In 1992 US President George H. W. Bush signed the Energy Policy Act. This law made 1.6 gallons per flush a mandatory federal maximum for new toilets. This law went into effect on January 1, 1994 for residential buildings and January 1, 1997 for commercial buildings. The first generation of low-flush toilets were simple modifications of traditional toilets: a valve would open and the water would passively flow into the bowl, and the resulting water pressure was often inadequate to carry away waste. Improvements in design now make modern models not only more water-efficient but more effective than old models. In addition to tank-type toilets that "pull" waste down, there are also now pressure-assist models, which use water pressure to effectively "push" waste.
See also:
- Low-flow fixtures
- Dual flush toilet
- Sewer dosing unit
- Waterless urinal
- Residential water use in the US and Canada
"A Brief History of Water Conservation in America and Europe" (web). Rate My Toilet. Retrieved November 10, 2014. - "WaterSense An EPA Partnership Program". US EPA. Retrieved 23 December 2012. - Gleick, Peter; Haasz, Dana; Henges-Jeck, Christine; Srinivasan, Veena; Wolff, Gary; Cushing, Katherine Kao; Mann, Aamardip (2003). Waste Not, Want Not: The Potential for Urban Water Conservation in California. Oakland, California: Pacific Institute. p. 176. ISBN 1-893790-09-6. Retrieved 16 April 2021.
Old Earth Ministries Online Dinosaur Curriculum
Free online curriculum for homeschools and private schools
From Old Earth Ministries (We Believe in an Old Earth...and God!)
Lesson 12 - Majungasaurus
Majungasaurus ("Mahajanga lizard") is a genus of abelisaurid theropod dinosaur that lived in Madagascar from 70 to 65 million years ago, at the end of the Cretaceous Period. Only one species (M. crenatissimus) has been identified. This dinosaur was briefly called Majungatholus, a name which is now considered a junior synonym of Majungasaurus. Like other abelisaurids, Majungasaurus was a bipedal predator with a short snout. Although the forelimbs are not completely known, they were very short, while the hindlimbs were longer and very stocky.
Length: 23 feet
Weight: 2,400 lbs
Date Range: 70 - 65 Ma, Maastrichtian Age, Cretaceous Period
It can be distinguished from other abelisaurids by its wider skull, the very rough texture and thickened bone on the top of its snout, and the single rounded horn on the roof of its skull, which was originally mistaken for the dome of a pachycephalosaur. It also had more teeth in both upper and lower jaws than most abelisaurids. Known from several well-preserved skulls and abundant skeletal material, Majungasaurus has recently become one of the best-studied theropod dinosaurs from the Southern Hemisphere. It appears to be most closely related to abelisaurids from India rather than South America or continental Africa, a fact which has important biogeographical implications. Majungasaurus was the apex predator in its ecosystem, mainly preying on sauropods like Rapetosaurus, and is also the only dinosaur for which direct evidence of cannibalism is known. Majungasaurus was a medium-sized theropod that typically measured 6–7 meters (20–23 ft) in length, including its tail. Fragmentary remains of larger individuals indicate that some adults reached lengths of more than 8 meters (26 ft). Scientists estimate that an average adult Majungasaurus weighed more than 1100 kilograms (2400 lb), although the largest animals would have weighed more. Its 8–9 meter (26–30 ft) relative Carnotaurus has been estimated to weigh 1500 kilograms (3300 lb). The skull of Majungasaurus is exceptionally well-known compared to most theropods and generally similar to that of other abelisaurids. Like other abelisaurid skulls, its length was proportionally short for its height, although not as short as in Carnotaurus. The skulls of large individuals measured 60–70 centimeters (24–28 in) long. However, the skull of Majungasaurus was markedly wider than in other abelisaurids. All abelisaurids had a rough, sculptured texture on the outside faces of the skull bones, and Majungasaurus was no exception. This was carried to an extreme on the nasal bones of Majungasaurus, which were extremely thick and fused together, with a low central ridge running along the half of the bone closest to the nostrils. A distinctive dome-like horn protruded from the fused frontal bones on top of the skull as well. In life, these structures would have been covered with some sort of integument, possibly made of keratin. The postcranial skeleton of Majungasaurus closely resembles those of Carnotaurus and Aucasaurus, the only other abelisaurid genera for which complete skeletal material is known. Majungasaurus was bipedal, with a long tail to balance out the head and torso, putting the center of gravity over the hips.
The humerus (upper arm bone) was short and curved, closely resembling those of Aucasaurus and Carnotaurus. Also like related dinosaurs, Majungasaurus had very short forelimbs with four extremely reduced digits, with only two very short external fingers and no claws. The hand and finger bones of Majungasaurus, like other carnotaurines, lacked the characteristic pits and grooves where claws and tendons would normally attach, and its finger bones were fused together, indicating that the hand was immobile. Like other abelisaurids, the hindlimbs were stocky and short compared to body length.
Discovery and Naming
French paleontologist Charles Depéret described the first theropod remains from northwestern Madagascar in 1896. These included two teeth, a claw, and some vertebrae discovered along the Betsiboka River by a French army officer and deposited in the collection of what is now the Université Claude Bernard Lyon 1. Depéret referred these fossils to the genus Megalosaurus, which at the time was a wastebasket taxon containing any number of unrelated large theropods, as the new species M. crenatissimus. Numerous fragmentary remains from Mahajanga Province in northwestern Madagascar were recovered by French collectors over the next 100 years, many of which were deposited in the Muséum National d'Histoire Naturelle in Paris. In 1955, René Lavocat described a theropod dentary with teeth from the Maevarano Formation in the same region where the original material was found. The teeth matched those first described by Depéret, but the strongly curved jaw bone was very different from both Megalosaurus and Dryptosaurus. Lavocat renamed the genus Majungasaurus, using an older spelling of Mahajanga as well as the Greek word meaning "lizard", and made this jaw bone the type specimen. In the 1990s, more specimens were found, which helped describe Majungasaurus to a greater extent. Majungasaurus is perhaps most distinctive for its skull ornamentation, including the swollen and fused nasals and the frontal horn. Other ceratosaurs, including Carnotaurus, Rajasaurus, and Ceratosaurus itself bore crests on the head. These structures are likely to have played a role in intraspecific competition, although their exact function within that context is unknown. The hollow cavity inside the frontal horn of Majungasaurus would have weakened the structure and probably precluded its use in direct physical combat, although the horn may have served a display purpose. While there is variation in the ornamentation of Majungasaurus individuals, there is no evidence for sexual dimorphism. Scientists have suggested that the unique skull shape of Majungasaurus and other abelisaurids indicate different predatory habits than other theropods. Whereas most theropods were characterized by long, low skulls of narrow width, abelisaurid skulls were taller and wider, and often shorter in length as well. The narrow skulls of other theropods were well-equipped to withstand the vertical stress of a powerful bite, but not as good at withstanding torsion (twisting). In comparison to modern mammalian predators, most theropods may have used a strategy similar in some ways to that of long- and narrow-snouted canids, with the delivery of many bites weakening the prey animal. Abelisaurids, especially Majungasaurus, may instead have been adapted for a feeding strategy more similar to modern felids, with short and broad snouts, that bite once and hold on until the prey is subdued.
Majungasaurus had an even broader snout than other abelisaurids, and other aspects of its anatomy may also support the bite-and-hold hypothesis. The neck was strengthened, with robust vertebrae, interlocking ribs and ossified tendons, as well as reinforced muscle attachment sites on the vertebrae and the back of the skull. These muscles would have been able to hold the head steady despite the struggles of its prey. Abelisaurid skulls were also strengthened in many areas by bone mineralized out of the skin, creating the characteristic rough texture of the bones. This is particularly true of Majungasaurus, where the nasal bones were fused and thickened for strength. On the other hand, the lower jaw of Majungasaurus sported a large fenestra (opening) on each side, as seen in other ceratosaurs, as well as synovial joints between certain bones that allowed a high degree of flexibility in the lower jaw, although not to the extent seen in snakes. This may have been an adaptation to prevent the fracture of the lower jaw when holding onto a struggling prey animal. The front teeth of the upper jaw were more robust than the rest, to provide an anchor point for the bite, while the low crown height of Majungasaurus teeth prevented them from breaking off during a struggle. Finally, unlike the teeth of Allosaurus and most other theropods, which were curved on both the front and back, abelisaurids like Majungasaurus had teeth curved on the front edge but straighter on the back (cutting) edge. This structure may have served to prevent slicing and instead to hold the teeth in place when biting. Majungasaurus was the largest predator in its environment, while the only known large herbivores at the time were sauropods like Rapetosaurus. Scientists have suggested that Majungasaurus, and perhaps other abelisaurids, specialized in hunting sauropods. Majungasaurus tooth marks on Rapetosaurus bones confirm that it at least fed on these sauropods, whether or not it actually killed them. Although sauropods may have been the prey of choice for Majungasaurus, recent discoveries in Madagascar indicate another surprising component of its diet: other Majungasaurus. Numerous bones of Majungasaurus have been discovered bearing tooth marks identical to those found on sauropod bones from the same localities. These marks have the same spacing as teeth in Majungasaurus jaws, are of the same size as Majungasaurus teeth, and contain smaller notches consistent with the serrations on those teeth. As Majungasaurus is the only large theropod known from the area, the simplest explanation is that it was feeding on other members of its own species. Suggestions that the Triassic Coelophysis was a cannibal have recently been disproven, leaving Majungasaurus as the only non-avian theropod with confirmed cannibalistic tendencies, although there is some evidence that cannibalism may have occurred in other species as well. It is unknown if Majungasaurus actively hunted their own kind or only scavenged their carcasses. Scientists have reconstructed the respiratory system of Majungasaurus based on a superbly preserved series of vertebrae (UA 8678) recovered from the Maevarano Formation. Most of these vertebrae and some of the ribs contained cavities (pneumatic foramina) that may have resulted from the infiltration of avian-style lungs and air sacs. In birds, the neck vertebrae and ribs are hollowed out by the cervical air sac, the upper back vertebrae by the lung, and the lower back and sacral (hip) vertebrae by the abdominal air sac.
Similar features in Majungasaurus vertebrae imply the presence of these air sacs. These air sacs may have allowed for a basic form of avian-style 'flow-through ventilation,' where air flow through the lungs is one-way, so that oxygen-rich air inhaled from outside the body is never mixed with exhaled air laden with carbon dioxide. This method of respiration, while complicated, is highly efficient. The recognition of pneumatic foramina in Majungasaurus, besides providing an understanding of its respiratory biology, also has larger-scale implications for evolutionary biology. The split between the ceratosaur line, which led to Majungasaurus, and the tetanuran line, to which birds belong, occurred very early in the history of theropods. The avian respiratory system, present in both lines, must therefore have evolved before the split, and well before the evolution of birds themselves. This provides further evidence of the dinosaurian origin of birds.
Brain and inner ear structure
Computed tomography, also known as CT scanning, of a complete Majungasaurus skull (FMNH PR 2100) allowed a rough reconstruction of its brain and inner ear structure. Overall, the brain was very small relative to body size, but otherwise similar to many other non-coelurosaurian theropods, with a very conservative form closer to modern crocodilians than to birds. One difference between Majungasaurus and other theropods was its smaller flocculus, a region of the cerebellum that helps to coordinate movements of the eye with movements of the head. This suggests that Majungasaurus and other abelisaurids like Indosaurus, which also had a small flocculus, did not rely on quick head movements to sight and capture prey. Inferences about behavior can also be drawn from examination of the inner ear. The semicircular canals within the inner ear aid in balance, and the lateral semicircular canal is usually parallel to the ground when the animal holds its head in an alert posture. When the skull of Majungasaurus is rotated so that its lateral canal is parallel to the ground, the entire skull is nearly horizontal. This contrasts with many other theropods, where the head was more strongly downturned when in the alert position. The lateral canal is also significantly longer in Majungasaurus than in its more basal relative Ceratosaurus, indicating a greater sensitivity to side-to-side motions of the head. A 2007 report described pathologies in the bones of Majungasaurus. Scientists examined the remains of at least 21 individuals and discovered four with noticeable pathologies. While pathology had been studied in large tetanuran theropods like allosaurids and tyrannosaurids, this was the first time an abelisauroid had been examined in this manner. No wounds were found on any skull elements, in contrast to tyrannosaurids where sometimes gruesome facial bites were common. One of the specimens was a phalanx (toe bone) of the foot, which had apparently been broken and subsequently healed. Most of the pathologies occurred on the vertebrae. For example, a dorsal (back) vertebra from a juvenile animal showed an exostosis (bony growth) on its underside. The growth probably resulted from the conversion of cartilage or a ligament to bone during development, but the cause of the ossification was not determined. Hypervitaminosis A and bone spurs were ruled out, and an osteoma (benign bone tumor) was deemed unlikely.
Another specimen, a small caudal (tail) vertebra, was also found to have an abnormal growth, this time on the top of its neural spine, which projects upwards from the vertebrae, allowing muscle attachment. Similar growths from the neural spine have been found in specimens of Allosaurus and Masiakasaurus, probably resulting from the ossification of a ligament running either between the neural spines (interspinal ligament) or along their tops (supraspinal ligament). The most serious pathology discovered was in a series of five large tail vertebrae. The first two vertebrae showed only minor abnormalities with the exception of a large groove that extended along the left side of both bones. However, the next three vertebrae were completely fused together at many different points, forming a solid bony mass. There is no sign of any other vertebrae after the fifth in the series, indicating that the tail ended there prematurely. From the size of the last vertebrae, scientists judged that about ten vertebrae were lost. One explanation for this pathology is severe physical trauma resulting in the loss of the tail tip, followed by osteomyelitis (infection) of the last remaining vertebrae. Alternatively, the infection may have come first and led to the end of the tail becoming necrotic and falling off. This is the first example of tail truncation known in a non-avian theropod dinosaur. All specimens of Majungasaurus have been recovered from the Maevarano Formation in the Mahajanga Province in northwestern Madagascar. Most of these, including all of the most complete material, came from the Anembalemba Member, although Majungasaurus teeth have also been found in the underlying Masorobe Member and the overlying Miadana Member. While these sediments have not been dated radiometrically, evidence from biostratigraphy and paleomagnetism suggest that they were deposited during the Maastrichtian stage, which lasted from 70 to 65 Ma (million years ago). Majungasaurus teeth are found up until the very end of the Maastrichtian, when all non-avian dinosaurs went extinct. Then as now, Madagascar was an island, having separated from the Indian subcontinent less than 20 million years earlier. It was drifting northwards but still 10–15° more southerly in latitude than it is today. The prevailing climate of the time was semi-arid, with pronounced seasonality in temperature and rainfall. Majungasaurus inhabited a coastal flood plain cut by many sandy river channels. Strong geological evidence suggests the occurrence of periodic debris flows through these channels at the beginning of the wet season, burying the carcasses of organisms killed during the preceding dry season and providing for their exceptional preservation as fossils. Sea levels in the area were rising throughout the Maastrichtian, and would continue to do so into the Paleocene Epoch, so Majungasaurus may have roamed coastal environments like tidal flats as well. The neighboring Berivotra Formation represents the contemporaneous marine environment. Besides Majungasaurus, fossil taxa recovered from the Maevarano include fish, frogs, lizards, snakes, seven distinct species of crocodylomorphs, five or six species of mammals, Vorona and several other birds, the possibly flighted dromaeosaurid Rahonavis, the noasaurid Masiakasaurus and two titanosaurian sauropods, including Rapetosaurus. 
Majungasaurus was by far the largest carnivore and probably the dominant predator on land, although large crocodylomorphs like Mahajangasuchus and Trematochampsa might have competed with it closer to water. If you ordered the Test Pack, it is now time to take Test 2.
Is Lung Infection Contagious? The nasal passage, pharynx, larynx, trachea, bronchi, and the lungs are all components of the respiratory system. While all of these components work together to help us breathe, it is within the lungs that the essential exchange of oxygen and carbon dioxide takes place. This process is required for our survival, so you can imagine the consequences lung diseases would have on the human body. Since the air that we inhale might contain damaging germs or environmental pollutants, there is a great need to protect the respiratory system.
- Viruses that cause cold and flu often cause pneumonia, which is a serious infection characterized by accumulation of fluid in the lungs.
- Contact with adenovirus, the flu virus, respiratory syncytial virus (RSV), or parainfluenza virus may also lead to this kind of infection.
- Fungi such as Aspergillus or Pneumocystis carinii could also cause a fungal infection.
- Physical contact with an infected person can transmit the causal pathogen, causing one to develop an infection.
- Thus, one must follow preventive measures.
Pathogens that Cause Lung Infection As pointed out earlier, a lung infection happens when bacteria, viruses, or environmental pollutants enter the lungs. The pathogens that are likely to cause such an infection include bacteria such as Streptococcus pneumoniae or methicillin-resistant Staphylococcus aureus (MRSA). These bacteria generally live on the human body itself, but when they find a way into the lungs and multiply, they cause a bacterial infection in the lungs. Depending on the results of these diagnostic tests, doctors will recommend the use of antibiotics, antiviral drugs, or antifungal medicines for treating a respiratory tract infection. Antibiotics such as azithromycin or clarithromycin are often prescribed for managing a bacterial infection. Taking anti-flu drugs such as amantadine, rimantadine, or zanamivir can also prevent the flu from deteriorating into viral pneumonia. Administration of vaccines or immunization shots is one of the best preventive measures. Since pathogens can spread to other people through physical contact, refrain from maintaining close contact with an infected person.
- Apart from these treatment options, it is also very important to bring some lifestyle changes into your everyday routine.
- Give up smoking tobacco and steer clear of exposure to allergens and pollutants.
- Maintain a healthy diet and exercise regularly (as suggested by the doctor).
- This will help you build up a healthy immune system.
- Breathing exercises and yoga will also make the respiratory system stronger.
- And last but not the least, it is very important to get yourself examined on a regular basis by your healthcare professional.
- Maintaining personal hygiene is essential in avoiding the spread of bacterial infection in the lungs.
- A person suffering from a lung infection should also make sure that his/her meals follow the prescribed diet.
- A poor diet may nullify, or even oppose, the effects of the treatment.
- Because infection in the lungs is largely caused by inhaling polluted air, it is recommended to use nose filters when traveling.
- Treating bacterial infection in the lungs requires a holistic approach that takes into consideration all facets of the disease.
A lung infection is not contagious by itself, but certain causal organisms that lead to such infections can be passed on to others through physical contact. This is why it is vital that such infections are treated at the earliest opportunity. Viruses causing cold and flu spread easily, and a virus may worsen into a lung infection. Thus, treating a cold or flu promptly may reduce the risk of a lung infection too. If one experiences symptoms associated with cold, flu, or pneumonia, it is best to seek a medical checkup. Blood culture, chest X-rays, sputum analysis, or other diagnostic tests are generally conducted to examine the condition of the lungs and identify the causal pathogen responsible for the infection.
Pneumonia is the medical term for lung inflammation, particularly affecting the alveoli. It can occur as a result of several different causes, one of which is bacterial infection in the lungs. Bacterial pneumonia is triggered by the Streptococcus pneumoniae bacteria. The principal symptoms are cough and fever together with shaking chills, fatigue, shortness of breath, and chest pain. In the elderly, pneumonia can also cause confusion.
Definition - What does Compressed Gas mean? Compressed gases are gases that are stored under pressure in cylinders. The three major types of compressed gases are liquefied gases, non-liquefied gases, and dissolved gases. The pressure of the gas in a cylinder is usually recorded as pounds per square inch gauge (psig) or in kilopascals. Safeopedia explains Compressed Gas Liquefied gases are liquid at normal temperatures when they are inside cylinders under pressure. Common liquefied gases include ammonia, chlorine, propane, and nitrous oxide. Non-liquefied gases are also known as compressed, pressurized, or permanent gases. Oxygen, nitrogen, helium, and argon are all examples of non-liquefied gases. Acetylene is the only common dissolved gas and is chemically very unstable.
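Since cylinder pressures may be quoted either in psig or in kilopascals, a quick conversion is often handy. Here is a minimal sketch in Python; the function name is mine, and the use of a 14.696 psi standard atmosphere for the gauge-to-absolute correction is an assumption, not something stated in the definition above:

```python
# Convert a gauge pressure in psi (psig) to absolute pressure in kilopascals.
# Gauge pressure is measured relative to the ambient atmosphere, so we add
# one standard atmosphere (14.696 psi) before converting to kPa.

PSI_TO_KPA = 6.894757            # 1 psi expressed in kilopascals
STANDARD_ATMOSPHERE_PSI = 14.696

def psig_to_kpa_absolute(psig: float) -> float:
    """Return the absolute pressure in kPa for a gauge reading in psig."""
    return (psig + STANDARD_ATMOSPHERE_PSI) * PSI_TO_KPA

# Example: a cylinder gauge reading 2000 psig
print(f"{psig_to_kpa_absolute(2000):.0f} kPa")  # ~13891 kPa
```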
The Crookes radiometer, also known as a light mill, consists of an airtight glass bulb containing a partial vacuum. Inside are a set of vanes mounted on a spindle. The vanes rotate when exposed to light, with faster rotation for more intense light, providing a quantitative measurement of electromagnetic radiation intensity. The reason for the rotation was a cause of much scientific debate in the years following the invention of the device, but in 1879 the currently accepted explanation for the rotation was published. Today the device is mainly used in physics education as a demonstration of a heat engine run by light energy.
It was invented in 1873 by the chemist Sir William Crookes as the by-product of some chemical research. In the course of very accurate quantitative chemical work, he was weighing samples in a partially evacuated chamber to reduce the effect of air currents, and noticed the weighings were disturbed when sunlight shone on the balance. Investigating this effect, he created the device named after him. It is still manufactured and sold as an educational aid or curiosity.
The radiometer is made from a glass bulb from which much of the air has been removed to form a partial vacuum. Inside the bulb, on a low-friction spindle, is a rotor with several (usually four) vertical lightweight metal vanes spaced equally around the axis. The vanes are polished or white on one side and black on the other. When exposed to sunlight, artificial light, or infrared radiation (even the heat of a hand nearby can be enough), the vanes turn with no apparent motive power, the dark sides retreating from the radiation source and the light sides advancing. Cooling the radiometer causes rotation in the opposite direction. The effect begins to be observed at partial vacuum pressures of a few torr (several hundred pascals), reaches a peak at around 10⁻² torr (1 pascal), and has disappeared by the time the vacuum reaches 10⁻⁶ torr (10⁻⁴ pascal). At these very high vacuums the effect of photon radiation pressure on the vanes can be observed in very sensitive apparatus (see Nichols radiometer), but this is insufficient to cause rotation.
Origin of the name
- The prefix "radio-" in the title originates from the combining form of Latin radius, a ray: here it refers to electromagnetic radiation.
- A Crookes radiometer, consistent with the suffix "-meter" in its title, can provide a quantitative measurement of electromagnetic radiation intensity. This can be done, for example, by visual means (e.g., a spinning slotted disk, which functions as a simple stroboscope) without interfering with the measurement itself.
Radiometers are now commonly sold worldwide as a novelty ornament, needing no batteries, but only light to get the vanes to turn. They come in various forms and are often used in science museums to illustrate "radiation pressure" – a scientific principle that they do not in fact demonstrate.
Movement with black-body absorption
When a radiant energy source is directed at a Crookes radiometer, the radiometer becomes a heat engine.
The operation of a heat engine is based on a difference in temperature that is converted to a mechanical output. In this case, the black side of the vane becomes hotter than the other side, as radiant energy from a light source warms the black side by black-body absorption faster than the silver or white side. The internal air molecules are heated up when they touch the black side of the vane. The details of exactly how this moves the warmer side of the vane forward are given in the section below.
The internal temperature rises as the black vanes impart heat to the air molecules, but the molecules are cooled again when they touch the bulb's glass surface, which is at ambient temperature. This heat loss through the glass keeps the internal bulb temperature steady so that the two sides of the vanes can develop a temperature difference. The white or silver side of the vanes is slightly warmer than the internal air temperature but cooler than the black side, as some heat conducts through the vane from the black side. The two sides of each vane must be thermally insulated to some degree so that the silver or white side does not immediately reach the temperature of the black side. If the vanes are made of metal, then the black or white paint can be the insulation. The glass stays much closer to ambient temperature than the temperature reached by the black side of the vanes. The higher external air pressure helps conduct heat away from the glass.
The air pressure inside the bulb needs to strike a balance between too low and too high. A strong vacuum inside the bulb does not permit motion, because there are not enough air molecules to cause the air currents that propel the vanes and transfer heat to the outside before both sides of each vane reach thermal equilibrium by heat conduction through the vane material. High inside pressure inhibits motion because the temperature differences are not enough to push the vanes through the higher concentration of air: there is too much air resistance for "eddy currents" to occur, and any slight air movement caused by the temperature difference is damped by the higher pressure before the currents can "wrap around" to the other side.
Movement with black-body radiation
When the radiometer is heated in the absence of a light source, it turns in the forward direction (i.e., black sides trailing). If a person's hands are placed around the glass without touching it, the vanes will turn slowly or not at all, but if the glass is touched to warm it quickly, they will turn more noticeably. Directly heated glass gives off enough infrared radiation to turn the vanes, but glass blocks much of the far-infrared radiation from a source of warmth not in contact with it. However, near-infrared and visible light more easily penetrate the glass.
If the glass is cooled quickly in the absence of a strong light source by putting ice on the glass or placing it in the freezer with the door almost closed, it turns backwards (i.e., the silver sides trail). This demonstrates black-body radiation from the black sides of the vanes rather than black-body absorption. The wheel turns backwards because the net exchange of heat between the black sides and the environment initially cools the black sides faster than the white sides. Upon reaching equilibrium, typically after a minute or two, reverse rotation ceases. This contrasts with sunlight, with which forward rotation can be maintained all day.
Explanations for the force on the vanes
Over the years, there have been many attempts to explain how a Crookes radiometer works:
- Crookes incorrectly suggested that the force was due to the pressure of light. This theory was originally supported by James Clerk Maxwell, who had predicted this force. This explanation is still often seen in leaflets packaged with the device. The first experiment to test this theory was done by Arthur Schuster in 1876, who observed that there was a force on the glass bulb of the Crookes radiometer that was in the opposite direction to the rotation of the vanes. This showed that the force turning the vanes was generated inside the radiometer. If light pressure were the cause of the rotation, then the better the vacuum in the bulb, the less air resistance to movement, and the faster the vanes should spin. In 1901, with a better vacuum pump, Pyotr Lebedev showed that in fact the radiometer only works when there is low-pressure gas in the bulb, and that the vanes stay motionless in a hard vacuum. Finally, if light pressure were the motive force, the radiometer would spin in the opposite direction, since photons reflected from the shiny side would deposit more momentum than photons absorbed on the black side. The actual pressure exerted by light is far too small to move these vanes, but it can be measured with devices such as the Nichols radiometer.
- Another incorrect theory was that the heat on the dark side was causing the material to outgas, which pushed the radiometer around. This was effectively disproved by both Schuster's and Lebedev's experiments.
- A partial explanation is that gas molecules hitting the warmer side of the vane pick up some of its heat, bouncing off the vane with increased speed. Giving the molecule this extra boost effectively means that a minute pressure is exerted on the vane. The imbalance of this effect between the warmer black side and the cooler silver side means the net pressure on the vane is equivalent to a push on the black side, and as a result the vanes spin round with the black side trailing. The problem with this idea is that while the faster-moving molecules produce more force, they also do a better job of stopping other molecules from reaching the vane, so the net force on the vane should be exactly the same — the greater temperature causes a decrease in local density which results in the same force on both sides. Years after this explanation was dismissed, Albert Einstein showed that the two pressures do not cancel out exactly at the edges of the vanes, because of the temperature difference there. The force predicted by Einstein would be enough to move the vanes, but not fast enough to account for the observed rotation.
- The final piece of the puzzle, thermal transpiration, was theorized by Osborne Reynolds in an unpublished paper that was refereed by Maxwell, who then published his own paper containing a critique of the mathematics in Reynolds's unpublished work. Maxwell died that year, and the Royal Society refused to publish Reynolds's reply to Maxwell's critique, as it was felt that this would be an inappropriate argument when one of the people involved had already died. Reynolds found that if a porous plate is kept hotter on one side than the other, the interactions between gas molecules and the plates are such that gas will flow through from the cooler to the hotter side.
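Reynolds's transpiration result can be written compactly. As a sketch in standard kinetic-theory notation (the symbols are mine, not the article's), gas creeps from the cold side to the hot side of a channel whenever

$$ \frac{p_\mathrm{hot}}{p_\mathrm{cold}} < \sqrt{\frac{T_\mathrm{hot}}{T_\mathrm{cold}}} $$

where $p$ and $T$ are the pressure and absolute temperature on each side. This is the same pressure-ratio condition restated for the vane edges in the next paragraph.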
The vanes of a typical Crookes radiometer are not porous, but the space past their edges behaves like the pores in Reynolds's plate. On average, the gas molecules move from the cold side toward the hot side whenever the pressure ratio is less than the square root of the (absolute) temperature ratio. The pressure difference causes the vane to move, cold (white) side forward, due to the tangential force of the rarefied gas moving from the colder edge to the hotter edge.
All-black light mill
To rotate, a light mill does not have to be coated with different colors across each vane. In 2009, researchers at the University of Texas, Austin created a monocolored light mill with four curved vanes; each vane forms a convex and a concave surface. The light mill is uniformly coated with gold nanocrystals, which are strong light absorbers. Upon exposure, due to a geometric effect, the convex side of each vane receives more photon energy than the concave side does, and the gas molecules subsequently receive more heat from the convex side than from the concave side. At rough vacuum, this asymmetric heating effect generates a net gas movement across each vane, from the concave side to the convex side, as shown by the researchers' Direct Simulation Monte Carlo (DSMC) modeling. The gas movement causes the light mill to rotate with the concave side moving forward, due to Newton's Third Law. This monocolored design promotes the fabrication of micrometer- or nanometer-scale light mills, as it is difficult to pattern materials of distinct optical properties within a very narrow, three-dimensional space.
Nanoscale light mill
In 2010 researchers at the University of California, Berkeley succeeded in building a nanoscale light mill that works on an entirely different principle from the Crookes radiometer. A swastika-shaped gold light mill, only 100 nanometers in diameter, was built and illuminated by laser light that had been tuned to carry angular momentum. The possibility of doing this had been suggested by the Princeton physicist Richard Beth in 1936. The torque was greatly enhanced by the resonant coupling of the incident light to plasmonic waves in the gold structure.
Citations and notes
- Worrall, J. (1982), "The pressure of light: The strange case of the vacillating 'crucial experiment'", Studies in History and Philosophy of Science (Elsevier), doi:10.1016/0039-3681(82)90023-1
- The Electrical Engineer, London: Biggs & Co., 1884, p. 158
- Gibbs, Philip (1996). "How does a light-mill work?". Usenet Physics FAQ. http://math.ucr.edu/home/baez/physics/index.html. Retrieved 8 August 2014.
- Crookes, William (1 January 1874). "On Attraction and Repulsion Resulting from Radiation". Philosophical Transactions of the Royal Society of London 164: 501–527. doi:10.1098/rstl.1874.0015.
- Reynolds, Osborne (1 January 1879). "On certain dimensional properties of matter in the gaseous state …". Philosophical Transactions of the Royal Society of London 170: 727–845. doi:10.1098/rstl.1879.0078. Part 2.
- Maxwell, J. Clerk (1 January 1879). "On stresses in rarefied gases arising from inequalities of temperature". Philosophical Transactions of the Royal Society of London 170: 231–256. doi:10.1098/rstl.1879.0067.
- Han, Li-Hsin; Shaomin Wu; J. Christopher Condit; Nate J. Kemp; Thomas E. Milner; Marc D. Feldman; Shaochen Chen (2010). "Light-Powered Micromotor Driven by Geometry-Assisted, Asymmetric Photon-heating and Subsequent Gas Convection".
Applied Physics Letters 96: 213509(1–3). Bibcode:2010ApPhL..96u3509H. doi:10.1063/1.3431741.
- Han, Li-Hsin; Shaomin Wu; J. Christopher Condit; Nate J. Kemp; Thomas E. Milner; Marc D. Feldman; Shaochen Chen (2011). "Light-Powered Micromotor: Design, Fabrication, and Mathematical Modeling". Journal of Microelectromechanical Systems 20 (2): 487–496. doi:10.1109/JMEMS.2011.2105249.
- Yarris, Lynn. "Nano-sized light mill drives micro-sized disk". Physorg. Retrieved 6 July 2010.
General information
- Loeb, Leonard B. (1934). The Kinetic Theory of Gases (2nd ed.). McGraw-Hill Book Company. pp. 353–386.
- Kennard, Earle H. (1938). Kinetic Theory of Gases. McGraw-Hill Book Company. pp. 327–337.
- US 182172, Crookes, William, "Improvement In Apparatus For Indicating The Intensity Of Radiation", published 10 August 1876, issued 12 September 1876
External links
- Crookes Radiometer applet
- How does a light-mill work? (Physics FAQ)
- The Cathode Ray Tube site
- Bell, Mary; Green, S. E. (1933), "On Radiometer Action and the Pressure of Radiation", Proceedings of the Physical Society 45 (2): 320–357, Bibcode:1933PPS....45..320B, doi:10.1088/0959-5309/45/2/315. The 1933 Bell and Green experiment describes the effect of different gas pressures on the vanes.
- The Properties of the Force Exerted in a Radiometer
During the summer, tree leaves produce all the pigments we see in fall, but they make so much chlorophyll that the green masks the underlying reds, oranges, and yellows. In fall, days get shorter and cooler, and trees stop producing chlorophyll. As a result, the green color fades, revealing the vibrant colors we love. Eventually, these colors also fade, and the leaves turn brown, wither, and drop. Then the trees become dormant for winter. There are four pigments responsible for leaf colors:
- Chlorophyll (pronounced KLOR-a-fill) – green
- Xanthophyll (pronounced ZAN-tho-fill) – yellow
- Carotene (pronounced CARE-a-teen) – gold, orange
- Anthocyanin (pronounced an-tho-SIGH-a-nin) – red, violet, can also be bluish
Leaves are brown when there are no more photo-sensitive pigments; only the tannins are left. Color these leaves according to the pigments they produce: Leaves turn color early in the season; the lighter carotenes glow warmly against the blue sky and green grass. The fading chlorophyll, combined with xanthophyll, carotene, and anthocyanin, produce the spectacular show we anticipate every year. Leaves change slowly and over time may be any combination of the four pigments, ending in a brilliant flame of anthocyanin. Like the maple, this tree puts on an awe-inspiring display of xanthophyll, carotene, and anthocyanin all together. Light filtering through the xanthophyll and lighter carotene of these leaves creates an ethereal glow. The ginkgo drops all of its leaves in a day or two. Carotenes recede quickly around the edges of the leaves as they prepare to parachute to the ground. A pale hint of chlorophyll mixes with xanthophyll and a touch of carotene as this tree shuts down for winter. Facts about fall leaf colors:
- Trees use the sugars they produce through photosynthesis to make all of the pigments we see.
- The best fall color display comes in years when there has been a warm, wet spring; a summer without drought or excessive heat; and a fall with warm, sunny days and cool nights.
- Chlorophyll, carotene, xanthophyll, and anthocyanin are also responsible for the coloring of all fruits and vegetables, including corn, pumpkins, beans, peppers, tomatoes, and berries.
- Peak fall color comes earlier in northern latitudes than southern latitudes, so if you miss the best of the sugar maples in Chicago, take a trip south to get your color fix.
- You can preserve a leaf by ironing it between sheets of wax paper.
Illustrations by Maria Ciacco ©2016 Chicago Botanic Garden and my.chicagobotanic.org
Venus may have had life, but it has a runaway greenhouse atmosphere of carbon dioxide and sulfuric acid, with surface temperatures hot enough to melt lead. Some have proposed injecting blue-green algae into the air, which would metabolize the CO2 into oxygen and water, dropping the temperature and making it rain for the first time. After some period of time, Venus might become habitable for us. Mars is too small to keep its atmosphere, most of which has escaped into space. All that remains is a thin, sparse covering of carbon dioxide, but water once ran freely on the surface, and may still be there, frozen at the poles and under the Martian soil. Mars may have hosted life at one time, but not intelligent life - there are no canals, and no ruined cities. Earth is the Goldilocks planet - neither too hot, nor too cold. Once it too was covered in a reducing atmosphere, but anaerobic life evolved and turned the sky oxygen-blue, while comets deposited oceans of water. Oxygen-breathing life evolved, became multicellular, and eventually became what passes for intelligent (the US Congress notwithstanding). What is the likelihood this is unique? Intelligence itself doesn't appear to be unique - chimps, dolphins, and elephants seem to have at least some self-awareness, and creatures such as octopi, crows, and apes can use tools and solve puzzles. Our sample of one suggests that life eventually gives way to intelligent tool users. So far it looks like planets are common, and life may be too. So if the universe is full of planets teeming with intelligent tool users - where are they? The Sun (Sol) is a very common type of G-class yellow dwarf, a third-generation star that has shone for 5 billion years. The universe is about 13 billion years old - while it took some time to make the heavier elements we need, such as iron, silicon, and carbon, there has been plenty of time for civilizations to arise before ours. If star travel is possible, why have they not been here? Why are they not here now? We certainly will be out among the stars as soon as we get the technology down - that's our way. The fundamental problem is that the Universe is too old, and too big. Our galaxy, the Milky Way, is one of hundreds of billions or even trillions of galaxies, and holds around 200 billion stars - that's 30 stars for every man, woman, and child alive today. In this galaxy alone. Anything that can happen, has happened - somewhere. If star-travelling species can exist, they do exist. And if they do exist, why did they not colonize our solar system already? There are a few possibilities, none of them very pleasant:
- Life, and especially tool-using intelligent life, is actually very rare. Maybe we are unique - or civilizations are so spread out as to almost never make contact with each other. What evidence we have so far is rather against this.
- Technology is a fatal disease - all civilizations that develop it die, from pollution, nuclear holocaust, or self-made pathogens. None make it as far as communication with other civilizations, or to star travel.
- Star travel isn't possible, and the planet-bound civilizations either don't communicate with each other, or they don't use radio. Perhaps we are too young to have developed sub-space based communications which are instantaneous and efficient, and they are watching our TV signals and shaking their heads (or whatever they shake) over our youthful stupidity. And poor production values.
- Everyone is hunkered down, or dead.
Advanced machine civilizations silently cruise the interstellar starways, and when they capture the radio signals from an ignorant and wasteful emerging biological infestation, they send out the clean-up crew. "You are gods; you are all children of The Highest!" - The Bible, Psalm 82 vs 6. At least that one's hopeful…
There are times when children just can't seem to concentrate. This isn't a huge problem for most kids—they can regain their focus and get back on task fairly easily. But it's a serious problem for others. Attention-deficit/hyperactivity disorder (ADHD) is a real illness that makes it difficult for children to sit still, concentrate and complete their work correctly and on time. Of course, it's normal for children to want to run around or play loudly on occasion, and no one would expect a young child to sit quietly for a long time. But with ADHD, these behaviours happen often for a long time and in different environments (for example, at home and at school), and interfere a lot with the child's life. ADHD is a mental illness that affects the way a child behaves or acts. ADHD starts to cause a lot of problems before a child is seven years old. If your child is living with this illness, they might have a hard time paying attention to what's going on around them. Or they might make careless mistakes at school or struggle to organize things. This group of symptoms is called inattention. Your child may also find it impossible to sit still. They may fidget often or look very restless. This group of symptoms is called hyperactivity. Your child might also have a hard time waiting in line or waiting for their turn. This group of symptoms is called impulsivity. There are different types of ADHD based on the group of symptoms that causes the most problems. But most children have at least some symptoms from all of the groups. It's normal for any child to sometimes get distracted, restless or disorganized. But if you feel that many of the above concerns apply to your school-aged child and they've been happening often for a long time and causing a lot of problems, talk to your doctor. ADHD affects about 5% of school-age children. It's usually diagnosed during the elementary school years because it's normal for younger children to have a lot of energy and less ability to pay attention.
- Boys—ADHD, particularly the hyperactivity type, affects boys more often than girls
- Family members—ADHD seems to run in families, so a child is more likely to have ADHD if a close biological relative has it
- Other mental illnesses—About half of children diagnosed with ADHD also have another behaviour disorder. They may also experience a mood disorder or anxiety disorder
- Other disorders or conditions—ADHD may be associated with learning problems or communication problems. In a few cases, ADHD may occur with Tourette's Disorder
Different illnesses and medical conditions can look like ADHD. Some of these include learning disabilities, vision or hearing problems, fetal alcohol syndrome and mental illnesses like bipolar disorder. That's why it's so important for a doctor to rule out other problems before they diagnose a child with ADHD. Researchers aren't sure what causes ADHD. Like other mental illnesses, it's likely caused or influenced by many different things. A few examples include your genes, the environment you live in, and your life experiences. We do know that researchers haven't found a concrete link between ADHD and factors like parenting style or watching TV. ADHD also seems to happen more often in children of women who smoked cigarettes while they were pregnant. When a child is diagnosed with ADHD, the child and their family members should first learn about ADHD. This reinforces that the illness is a difficulty that the child can overcome and helps the entire family understand the illness.
A combination of counselling, changes at home, changes at school and medication help children living with ADHD. Counselling, changes at home and changes at school may be the best first-line treatments and supports for mild to moderate ADHD symptoms. Medication may be needed if symptoms are severe or don't improve with other treatments or supports.
Counselling—The most common type of counselling for children living with ADHD is training to help them learn and understand positive behaviours. This is called behaviour skills training. It also helps children make positive choices that help them reach their goals, and it helps them work well with the people around them. Other kinds of counselling might also be helpful. Counselling may include the child, their parents and the entire family. Common types of counselling include:
- Cognitive-behavioural therapy (CBT). It has been adapted to help children understand the thoughts behind their urges
- Parenting skills training. It teaches parents how to cope with their child's ADHD symptoms and how to guide a child living with ADHD. This may include learning how to predict problem situations, solve problems, enforce rules and give constructive feedback
- Family counselling and support. This helps all family members, including siblings, learn how to cope with disruptive behaviour and encourage positive behaviour
Changes at home—Changes at home can help a child cope with ADHD symptoms. Helpful changes may include:
- Maintaining a consistent daily schedule, including a regular bedtime
- Using lists, charts, schedules or notes to help your child remember important tasks or information
- Making sure your child is getting exercise
- Helping your child try structured social activities. Sports, dance or community volunteer work may help improve social skills, demonstrate the child's strengths and boost self-esteem
Your mental health clinician can suggest changes at home to help your child's specific problems.
Changes at school—A child's school may provide changes to classroom activities and learning material. For example, the school may allow a child to move their desk to a quieter, less distracting area. These small changes help many children living with ADHD. But if your child still struggles, the school may make bigger changes, like providing different kinds of learning materials. It's best if parents and schools work together to help a child living with ADHD.
Medication—There are two different types of ADHD medication: stimulant and non-stimulant medication. It may seem odd to treat a hyperactive child with a stimulant, but stimulants are very effective for children who have been properly screened and diagnosed with ADHD. There is also a non-stimulant medication for ADHD. Children may be prescribed other types of medication, such as antidepressants, if they can't take ADHD medication. However, the kind of medication your child is prescribed will depend on many factors, such as the type of ADHD and any other medical or mental health problems. Medication can help manage ADHD symptoms and improve your child's quality of life, but it won't solve all behaviour problems or social skills problems. That's why it's important to include counselling and changes at home or school in the treatment plan.
In addition to talking to your family doctor, check out the resources below for more information about attention-deficit/hyperactivity disorder:
FORCE Society for Kids' Mental Health
Visit www.forcesociety.com or call 1-855-887-8004 (toll-free in BC) or 604-878-3400 (in the Lower Mainland) for information and resources that support parents of a young person with mental illness.
Kelty Mental Health
Contact Kelty Mental Health at www.keltymentalhealth.ca or 1-800-665-1822 (toll-free in BC) or 604-875-2084 (in Greater Vancouver) for information, referrals and support for children, youth and their families in all areas of mental health and addictions.
BC Partners for Mental Health and Addictions Information
Visit www.heretohelp.bc.ca for the Managing Mental Illness series of info sheets, which is full of information, tips and self-tests to help you understand mental illnesses. The website also has many publications for family members, including parents of younger children. The Family Toolkit and info sheets can help parents work better with mental health services and the school system.
Centre for ADHD Awareness, Canada (CADDAC)
Visit www.caddac.ca for information and resources, tips for working with your doctor and child's school, information for educators, parenting strategies, support groups, and more.
Resources available in many languages:
*For each service below, if English is not your first language, say the name of your preferred language in English to be connected to an interpreter. More than 100 languages are available.
Call 811 or visit www.healthlinkbc.ca to access free, non-emergency health information for anyone in your family, including mental health information. Through 811, you can also speak to a registered nurse about symptoms you're worried about, or talk with a pharmacist about medication questions.
Crisis lines aren't only for people in crisis. You can call for information on local services or if you just need someone to talk to. If you are in distress, call 310-6789 (do not add 604, 778 or 250 before the number) 24 hours a day to connect to a BC crisis line, without a wait or busy signal. The crisis lines linked in through 310-6789 are staffed by workers who have received advanced training in mental health issues and services from members of the BC Partners for Mental Health and Addictions Information.
Scarcity refers to a gap between limited resources and theoretically limitless wants. The notion of scarcity is that there is never enough (of something) to satisfy all conceivable human wants, even at advanced states of human technology. Scarcity involves making a sacrifice—giving something up, or making a trade-off—in order to obtain more of the scarce resource that is wanted. The condition of scarcity in the real world necessitates competition for scarce resources, and competition occurs "when people strive to meet the criteria that are being used to determine who gets what" (p. 105). The price system, or market prices, is one way to allocate scarce resources. "If a society coordinates economic plans on the basis of willingness to pay money, members of that society will [strive to compete] to make money" (p. 105). If other criteria are used, we would expect to see competition in terms of those other criteria. For example, although air is more important to us than gold, it is less scarce simply because the production cost of air is zero. Gold, on the other hand, has a high production cost: it has to be found and processed, both of which require a great deal of resources. Additionally, scarcity implies that not all of society's goals can be pursued at the same time; trade-offs are made of one goal against others. In an influential 1932 essay, Lionel Robbins defined economics as "the science which studies human behavior as a relationship between ends and scarce means which have alternative uses". In cases of monopoly or monopsony an artificial scarcity can be created. Scarcity can also occur through stockpiling, either as an attempt to corner the market or for other reasons. Temporary scarcity can be caused by (and cause) panic buying. A scarce good is a good for which quantity demanded exceeds quantity supplied; according to economic laws, such a good by nature commands a price. The term scarcity refers to the possible existence of conflict over the possession of a finite good: for any scarce good, one person's ownership and control excludes another person's control. Scarcity falls into three distinct categories: demand-induced, supply-induced, and structural. Demand-induced scarcity happens when demand for the resource increases while the supply stays the same. Supply-induced scarcity happens when supply is very low in comparison to demand, mostly due to environmental degradation like deforestation and drought. Lastly, structural scarcity occurs when part of a population doesn't have equal access to resources due to political conflicts or location. This happens, for example, in African desert countries that lack access to water; to get water they have to travel and make agreements with countries that have water resources. In some countries political groups hold necessary resources hostage for concessions or money. Supply-induced and structural scarcity are the forms most likely to cause conflict within a country. On the opposite side of the coin are nonscarce goods. These goods need not be valueless, and some can even be indispensable for one's existence. As Frank Fetter explains in his Economic Principles: "Some things, even such as are indispensable to existence, may yet, because of their abundance, fail to be objects of desire and of choice. Such things are called free goods. They have no value in the sense in which the economist uses that term.
Free goods are things which exist in superfluity; that is, in quantities sufficient not only to gratify but also to satisfy all the desires which may depend on them." As compared with scarce goods, nonscarce goods are those over which there can be no contest of ownership: the fact that someone is using something does not prevent anyone else from using it. For a good to be considered nonscarce, it can have an infinite existence, carry no sense of possession, or be infinitely replicable.
- Siddiqui, A.S. (2011). Comprehensive Economics XII. Laxmi Publications Pvt Limited. ISBN 978-81-318-0368-4. Retrieved 2017-11-20.
- "Scarcity". Investopedia. Retrieved 2017-11-20.
- Heyne, Paul; Boettke, Peter J.; Prychitko, David L. (2014). The Economic Way of Thinking (13th ed.). Pearson. pp. 5–8. ISBN 978-0-13-299129-2.
- Robbins, Lionel (2014). An Essay on the Nature and Significance of Economic Science (2nd ed.). London: Macmillan. p. 16.
- Tucker, Jeffrey A.; Kinsella, Stephan. "Goods, scarce and nonscarce". Mises. Retrieved 25 Aug 2010.
- Kennedy, Bingham (January 2001). "Environmental Scarcity and the Outbreak of Conflict". PRB.
- Milgate, Murray (March 2008). "goods and commodities". In Steven N. Durlauf; Lawrence E. Blume (eds.). The New Palgrave Dictionary of Economics (2nd ed.). Palgrave Macmillan. pp. 546–48. doi:10.1057/9780230226203.0657. Retrieved 2010-03-24.
- Montani, Guido (1987). "Scarcity". In Eatwell, J.; Millgate, M.; Newman, P. (eds.). The New Palgrave: A Dictionary of Economics. 4. Palgrave, Houndsmill. pp. 253–54.
- Malthus, Thomas R. (1960). Gertrude Himmelfarb (ed.). On Population (An Essay on the Principle of Population, as It Affects the Future Improvement of Society. With Remarks on the Speculations of Mr. Godwin, M. Condorcet, and Other Writers). New York: Modern Library. p. 601. Retrieved 2010-03-24.
- Burke, Edmund (1990). E. J. Payne (ed.). Thoughts and Details on Scarcity. Indianapolis, IN: Liberty Fund, Inc. Retrieved 2019-07-30.
Sunday, November 8 marks the 120th anniversary of one of the greatest moments in the history of science: an obscure German physics professor's discovery of the X-ray. His name was Wilhelm Roentgen, and in the six weeks that followed, he devoted nearly every waking hour to exploring the properties of the new rays before announcing his discovery to the world. Within just months, scientists worldwide were experimenting with the newly discovered rays. Roentgen's discovery and its subsequent revolutionary impact represent one of science's greatest stories. Roentgen's discovery opened a window on the previously invisible interior of the human body and spawned an entirely new medical specialty, radiology. As a practicing radiologist, I've been amazed to see how X-rays have changed our views of ourselves and our world – so much so that I even wrote a book about it, X-ray Vision. The story is vast, but some of its key features can be summarized here.
How Roentgen Discovered X-Rays
Born in Germany in 1845, Roentgen had a somewhat lackluster career as a student, but he eventually earned a PhD and took a position at the University of Würzburg. There he investigated the effects of passing electrical currents through vacuum tubes. On a fateful day in November 1895, he observed that, despite the presence of a cardboard barrier, emissions from a tube caused a nearby screen to fluoresce (glow). Said Roentgen, "I did not think. I investigated." Soon he hypothesized that the tube was emitting a new kind of ray, invisible to the eye, which could penetrate solid objects. Borrowing the physicist's traditional term for the unknown, X, Roentgen dubbed the new form of radiation "X-rays." Within two weeks of his discovery, he produced the first X-ray image of a human being, showing the bones of his wife Bertha's ringed hand. So unexpected was his discovery that when he published his initial paper on the properties of the new ray, the most renowned physicist in the world, Lord Kelvin, after whom one of the best-known temperature scales is named, pronounced the new rays a hoax. Of course, scientists around the world soon discovered otherwise. Thanks to his discovery, Roentgen received numerous accolades, including the very first Nobel Prize in Physics in 1901. Despite the urging of friends and colleagues who wanted him to grow rich from his discovery, Roentgen refused to file a patent application and even donated the entire monetary component of his Nobel Prize (the equivalent today of US$1.2 million) to his university. He firmly believed that such discoveries are the property of all mankind. When Roentgen died in 1923, he had fallen into penury, his savings consumed by post-World War I inflation.
What Are X-Rays?
X-rays are a form of electromagnetic radiation, composed of the same photons as visible light, microwaves, and radio waves, only vibrating at much shorter wavelengths and much higher frequencies. This enables them to penetrate solid objects, such as wood, clothing and human tissues. When a medical X-ray image is created, an X-ray beam passes through the patient and is picked up by a detector on the other side. Portions of the beam are absorbed, but others pass all the way through, and the "shadows" cast by different tissues create the image. X-rays have revolutionized our view of the world. They cast light on the previously invisible realm of the very small through X-ray crystallography.
This technique for imaging molecular structures was pioneered by the father-son team of William H. and William L. Bragg, who shared the 1915 Nobel Prize in Physics. Decades later, X-ray crystallography provided the images James Watson and Francis Crick used to deduce the double-helix structure of DNA. X-rays have also been used to reveal the structure of the universe itself. X-ray astronomy was impossible until the 1960s, because nearly all extraterrestrial X-rays are absorbed by the Earth's atmosphere. But thanks to rocket- and satellite-mounted detectors, astronomers began detecting high-energy X-ray emissions from long-visible stars and galaxies, as well as previously unknown and truly bizarre objects such as neutron stars and black holes, which have densities trillions of times that of the sun. But perhaps the most notable of all objects revealed by X-rays has been the human body itself. Almost immediately after the discovery of the new rays, scientists and physicians began using them to peer inside the body without cutting it open, revealing not only normal structures but also fractures, pneumonias and even foreign objects, such as swallowed coins. US President James Garfield died in 1881 largely because his doctors could not locate an assassin's bullet in his body, while a century later, X-rays revealed the bullet in President Ronald Reagan's chest in minutes, helping to save his life.
From X-Rays To CT Scanning And Beyond
Newer medical imaging techniques such as CT scanning rely on Roentgen's discovery, but instead of sending X-rays through the patient's body from one direction only, beams are directed from many different angles, making it possible to create a much sharper two-dimensional image of the body's interior. CT scans now play a huge role in medical diagnosis. One recent study in a major emergency department showed that after a CT scan of the chest, the treating doctor's leading diagnosis changes 42% of the time, and after a CT scan of the abdomen it changes 51% of the time. This is testament to the huge impact of X-ray technology on how medicine is practiced, sparing many patients unnecessary surgery and getting urgent treatment to others faster than would otherwise be possible. When leading internists in the US were challenged to name the medical innovations without which it would be most difficult to imagine practicing medicine, CT scanning received the greatest number of votes by far. Thanks to CT's wide availability and great speed, doctors can determine within minutes whether or not a patient's abdominal pain is due to appendicitis, chest pain reflects a tear in the aorta, or a severe headache is due to the rupture of a blood vessel in the brain. It is no wonder that about 80 million CT scans are performed each year in the US. Of course, the use of X-rays in medicine extends beyond diagnosis. In the years immediately following Roentgen's discovery, some investigators discovered that the same X-rays used to detect cancers in organs such as the lung and breast could also aid in treating such tumors, by damaging the DNA of cancerous cells. Today, radiation oncology is a distinct medical specialty, often called upon to help kill residual cancer cells after a tumor is surgically removed. It also plays an important role in palliative medicine, enabling physicians to relieve pain by shrinking inoperable tumors.
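The image formation described above follows the standard attenuation law, a textbook relation rather than something stated in this article. For a beam of initial intensity $I_0$ passing through a thickness $x$ of tissue with linear attenuation coefficient $\mu$, the transmitted intensity is

$$ I = I_0 \, e^{-\mu x} $$

Denser materials such as bone have a larger $\mu$, absorb more of the beam, and therefore cast the sharper "shadows" that form the radiograph.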
Thanks to Roentgen’s invisible light, we now operate with a much deeper understanding of the universe we inhabit, the molecules and cells of which we are composed and the diseases that threaten our lives. Roentgen himself would no doubt be astounded by the novel purposes to which X-rays have been adapted in the decades since his death. Yet he would also remind us that, over the next 120 years, many new X-ray discoveries almost certainly remain to be made.
Until the 1960s many historians believed that ethnic group influence on foreign policy would gradually diminish as the population became an American population with fewer immediate ethnic ties. Beginning in the 1920s, the flow of immigration slowed dramatically with laws reflecting anti-immigration public sentiment. Depression and war continued the trend to the point where under 5 percent of the population had been born in a foreign country. The political and cultural trends that lasted through the 1950s stressed the need for assimilation and conformity to mainstream norms. The Cold War, with its emphasis on the righteousness of the American position and its constant invocation of national patriotism, further inhibited criticism of mainstream American values and institutions. School textbooks extolled the virtues of the "melting pot" to which other cultures contributed, but nonetheless stressed the importance of unity and adaptation to the national norm. As with so much else in national life, these concepts and the national ethnic and racial makeup changed dramatically in the 1960s. An event that unmistakably precipitated the changes, and one whose legacy might well include significantly altering the policies of American diplomacy in the decades ahead, is the Immigration Act of 1965. Sponsored by liberal Democrats in Congress and Presidents John Kennedy and Lyndon Johnson, this law changed the rules of entry, and thereby opened the doors to the greatest influx of immigrants in history. The Hart-Celler Act, as it was then known, received little attention when it was passed, and continues to be relatively ignored in histories recounting Johnson's reform program, the Great Society. Nonetheless, it may be the single most significant piece of legislation of that era as far as its impact on the nation's future. Rejecting national origin quotas as the basis for admittance, as had previously been the case, the 1965 law made family reunification the basis for admittance for nearly two-thirds of those who would immigrate. Three subsequent laws increased the impact of the 1965 law: the Refugee Act of 1980, which recognized a separate category of those fleeing political oppression; the Immigration Reform and Control Act of 1986, which provided amnesty for three million immigrants who had entered the United States illegally before 1982; and a 1990 amendment to the 1965 law that substantially raised the number who could enter as legal immigrants. The flow of immigrants has risen steadily, particularly after 1990, when the annual totals for legal immigration peaked for several years at approximately 1.5 million. Between 1965 and 2000, approximately 23 million immigrants legally entered the United States. Adding estimates that there are also from 8 to 12 million illegal immigrants, the total attests to a massive foreign-born influence. Twenty-five percent of California's population is foreign born, and New York state is not far behind, with about 20 percent. As important as the sheer number of immigrants is, it is also significant that 85 percent of the legal immigrants are from non-European backgrounds without the traditional foreign policy interests of most Americans. Europeans comprise approximately 15 percent of legal immigrants, Asian Americans about one-third, and Latin Americans most of the rest.
The foreign policy issues that concern these new Americans have already begun to shape the direction of diplomacy, notably on trade and on immigration issues, such as amnesty for undocumented workers already in the country. Future diplomacy is likely to concern such issues more directly. One reason is that demographers project that, because of higher Hispanic birthrates and the origins of future immigrants, the United States will see a decline in its population of European background and a substantial rise in its Hispanic and Asian population.
description of the Earth's surface and the people and processes that shape its landscape. social science and way of thinking.
the romans and geography
Ptolemy wrote Guide to Geography (also known as Geographica), which gave detailed descriptions of the cities and peoples of the Earth. during this time, maps became more symbols of art and decoration than mathematical representations of the Earth's surface.
proposes that cultures are a direct result of where they exist. concluded that warmer climates tend to cause inhabitants to have a more relaxed attitude toward work and progress. this philosophy led some people to believe that Europeans and those from more temperate climates were more motivated, intelligent, and culturally advanced than those of warmer climates.
suggests that humans are not a product of their environment but possess the skills necessary to modify their environment to fit human needs. people can determine their outcomes.
today and beyond and geography
two new technologies that have impacted how we study the earth and geography: GIS and GPS
GPS (global positioning system): in cars and cellphones today. uses latitude and longitude coordinates to determine the exact location on earth.
GIS (geographic information system): uses geographic information and layers it into a new map showing specific types of geographic data. allows geographers to see land use changing over time by comparing pictures of places from years past to current photographs. ex. Google Earth
basic tools that geographers use to convey information. the problem of representing the Earth (3D) on flat paper causes distortion.
the size of the map relative to the amount of area it represents on the planet. Large-scale: shows more detail but a smaller area, ex. a map of your city. Small-scale: shows less detail but a larger area, ex. a map of the world.
any azimuthal map shows true direction and examines the Earth from one point - usually a pole, as in polar projections.
puts a cone over the Earth and tries to keep distance intact but loses directional qualities.
puts data into a spatial format and is useful for determining demographic data, such as infant mortality rates, by assigning colors or patterns to areas.
5 Themes of Geography
Place, Region, Location, Human-environment interaction, and Spatial interaction or movement
description of what and how we see and experience a certain aspect of the Earth's surface. places define and refine what we are. it is the description of what the location is like. ex. hot, cold, and busy can all be used when describing a certain location.
the linking of places together using any parameter the geographer chooses. ex. the Midwest region of the United States or the Corn Belt of the United States
where anything and everything inside has the same characteristic or phenomena. these characteristics might be religious, linguistic, or cultural. ex. Corn Belt: same crops
can be defined around a certain point or node. are most intense around the center but lose their characteristics the farther the distance from the focal point. ex. radio station, shopping mall
relative and absolute location
relative: location in reference to something else on the Earth's surface. absolute: latitude and longitude coordinates
internal, physical characteristics of a place. ex. New Orleans' site is 8 feet below sea level (making it a poor site for human habitation and prone to flooding in times of high precipitation)
the route someone travels to a familiar location.
a map someone believes to exist. ex.
map inside your head of the school.
describes how people modify or alter the environment to fit individual or societal needs. ex. Las Vegas, Nevada, built in the middle of a desert, where the environment was modified to meet water needs.
humans cannot live in the 5 toos
too hot, too cold, too wet, too dry, and too hilly. any of these environmental conditions taken to the extreme makes land uninhabitable; however, as human engineering and invention continue to improve, humans can adjust to survive in conditions they previously could not.
Spatial Interaction or Movement
how linked a place is to the outside world. determines the importance of an area. ex. airports, transportation systems, communication
movement of any characteristic; relates to the spatial interaction theme. the place where the characteristic began is known as the hearth.
Relocation Diffusion / Migration Diffusion
physical spread of cultures, ideas, and diseases through people. when people migrate they often bring with them aspects of their culture, such as language. ex. when Hmong refugees came to the United States from Laos, they brought with them their language, religion, and customs
spread of a characteristic from a central node or a hearth through various means. can be broken down into three types of diffusion: hierarchical, contagious, and stimulus
spreads as a result of a group, usually a social elite, spreading ideas or patterns in a society. the social elite may be political leaders, entertainment leaders, or sports stars
usually associated with the spread of disease, such as influenza. diseases spread without regard to race, social status, or economic status, and the spread is often rapid. ex. the Internet leads to contagious diffusion
takes a part of an idea and spreads that idea to create an innovative product. ex. changing hamburger meat to veggie burgers in India, where many people do not eat meat
how things on the Earth's surface have a physical location and are organized in space in some fashion
arithmetic population density
the number of people divided by the amount of land in an area, giving the number of people per square mile/kilometer
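The arithmetic density card above reduces to a one-line formula. Here is a minimal sketch in Python; the region and its figures are invented purely for illustration:

```python
# Arithmetic population density: total people divided by total land area.

def arithmetic_density(population: int, land_area_sq_km: float) -> float:
    """Return people per square kilometer."""
    return population / land_area_sq_km

# Hypothetical region: 1.2 million people living on 40,000 square kilometers
print(arithmetic_density(1_200_000, 40_000))  # 30.0 people per sq km
```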
Converting a fraction to its decimal format is a very simple and easy thing to do. In this article, we'll show you exactly how to convert the fraction 1/29 to a decimal and give you lots and lots of examples to help you. Looking for fraction to decimal worksheets? Click here to see all of our free fraction to decimal worksheets. The two main ways to express a fraction as a decimal are:
- With a calculator!
- Using long division.
The simplest method is obviously to use a calculator. It's quick and easy. To show a fraction as a decimal you divide the top number of the fraction (the numerator) by the bottom number (the denominator) and the result is the fraction as a decimal. Let's look at a quick example of this by using the fraction 1/29 and converting it to decimal using a calculator. As you can see, in one quick calculation, we've converted the fraction 1/29 into its decimal expression, 0.03448275862069. If you don't have a calculator, you can show a fraction as a decimal using good old-fashioned long division instead. (Note: for the purpose of this article, we always calculate to 3 decimal places.) With the long division method, the whole number at the top is the answer, and the bottom number is the remainder. There are other methods for converting fractions into a decimal version, but it's very unlikely you will ever use anything other than a simple calculator or the long division method. Why Convert 1/29 to a Decimal? We often find ourselves wanting to convert a fraction like 1/29 into a decimal because it allows you to represent the fraction in a way that can be easily understood. In your daily life you will find yourself working with decimals much more frequently than fractions, and this teaches your brain to understand decimal numbers. So, if you need to do any form of common arithmetic like addition, subtraction, division, or multiplication, converting 1/29 into a decimal is a good way to perform those calculations. Another benefit of showing 1/29 as a decimal is for comparison. It's very easy to compare two decimal numbers and see which is greater and which is smaller, but when fractions have different numerators and denominators it is not always immediately clear how they compare. Both fractions and decimal numbers have a place in math, though, because fractions are easy to multiply and can express larger decimal numbers more easily, and it's important to learn and understand how to convert both from fraction to decimal and from decimal to fraction. Practice Fraction to Decimal Worksheets Like most math problems, converting fractions to decimals is something that will get much easier for you the more you practice the problems, and the more you practice, the more you understand. Whether you are a student, a parent, or a teacher, you can create your own fractions to decimals worksheets using our fractions to decimals worksheet generator. This completely free tool will let you create completely randomized, differentiated, fraction to decimal problems to help you with your learning and understanding of fractions. Practice Fractions to Decimals Using Examples If you want to continue learning about how to convert fractions to decimals, take a look at the quick calculations and random calculations in the sidebar to the right of this blog post. We have listed some of the most common fractions in the quick calculation section, and a selection of completely random fractions as well, to help you work through a number of problems.
Each article will show you, step-by-step, how to convert a fraction into a decimal and will help students to really learn and understand this process.
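As a companion to the long-division method described above, here is a minimal sketch of the digit-by-digit division (the function name and the number of digits are my own choices, not from the article):

```python
# Long division: repeatedly bring down a zero and divide by the denominator.
def fraction_to_decimal(numerator: int, denominator: int, places: int = 14) -> str:
    """Convert a fraction to a decimal string with `places` decimal digits (truncated)."""
    whole, remainder = divmod(numerator, denominator)
    digits = []
    for _ in range(places):
        remainder *= 10                              # bring down a zero
        digit, remainder = divmod(remainder, denominator)
        digits.append(str(digit))
    return f"{whole}." + "".join(digits)

# -> 0.03448275862068 (truncated; the article's 0.03448275862069 is the rounded form)
print(fraction_to_decimal(1, 29))
```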
At this point, you’ve learned dozens of CSS properties that allow you to change the appearance of text elements and the boxes they generate. But so far, we’ve merely been decorating elements as they appear in the flow of the document. In this chapter, we’ll look at floating and positioning, the CSS methods for breaking out of the normal flow and arranging elements on the page. Floating an element moves it to the left or right, and allows the following text to wrap around it. Positioning is a way to specify the location of an element anywhere on the page with pixel precision. We’ll start by examining the properties responsible for floating and positioning, so you’ll get a good feel for how the CSS layout tools work. In Chapter 16, we’ll broaden the scope and see how these properties are used to create common multicolumn page layouts. Before we start moving elements around, let’s be sure we are well acquainted with how they behave in the normal flow. We’ve covered the normal flow in previous chapters, but it’s worth a refresher. In the CSS layout model, text elements are laid out from top to bottom in the order in which they appear in the source, and from left to right (in left-to-right reading languages). Block elements stack up on top of one another and fill the available width of the browser window or other containing ...
HARVESTING THE ASTEROIDS by Steve Williams From L5 News, August 1982
There are three sources of raw materials available for constructing facilities in space: the Earth, the Moon, and the asteroids. Being of small relative size and mass, the asteroids are unique. The largest asteroid is Ceres, which was also the first to be discovered, back in 1801. It has a diameter of 1018 km, less than 1/3 that of the Moon. The asteroids range in size from Ceres down to fine-grained micrometer-size dust. Most of the asteroids lie at distances of between 2.1 and 3.3 AU from the sun, though some come much closer. (AU = astronomical unit, which is simply the semi-major axis of the Earth's orbit, about 150 million km. It's often used by astronomers to measure distances within the Solar System.) The combined mass of the asteroid belt is something like 2 to 4% of the mass of the Moon, this mass being roughly distributed over some 60 trillion cubic kilometers. An average asteroid we might be interested in mining would have a diameter of about 200 meters. A "carbonaceous asteroid" of this size would have enough mass to build about 50 ten-gigawatt solar power satellites.
Before we talk about mining these pieces of cosmic debris, we must first ask: what is their composition, and how well do we know it? Because of the Apollo program and six trips to the Moon, we have a very good idea of what is available to us from lunar "dirt." Our knowledge of the asteroids is a bit more indirect. Astronomers have been plagued by the asteroids for over a hundred years. While taking a long time-exposure photo of one area of the sky, they would occasionally get a long white streak on their plate where some asteroid had moved across the field of view. Eventually they began studying these objects and have amassed a catalog of slightly over 2000. Only in the last 25 years have astronomers been seriously studying the physical characteristics of the asteroids. Information comes from three main sources: photometry, or the variation of brightness with time, yielding a light curve; colorimetry, or how the brightness varies with the wavelength of the light reflected from the asteroid; and the polarization of the light. From the light curve astronomers determine the shape of the asteroid, how fast it is spinning, and the uniformity of its surface composition. (Occasionally an object will be darker on one side than the other.) Colorimetry determines whether the asteroid is silicaceous (like the Moon), metallic, or carbonaceous. Polarization determines the texture of the surface. (i.e., is it smooth, rocky, fractured? Is there a regolith?)
The validity of the astronomical data is very much enhanced by examinations of meteorites (asteroids which have made their way to the surface of the Earth). It is not too surprising that asteroids and meteorites have similar compositions: silicate-rich assemblages, metal-rich assemblages, and carbonaceous assemblages. Though asteroids/meteorites are often separated into several dozen different categories, the majority fall into the three above-named divisions. Of particular interest are the carbonaceous asteroids. They contain in abundance such items as water, carbon, nitrogen, and various hydrocarbons. These items are completely absent or, at best, are only minor trace elements on the Moon. They are quite necessary to support life, as well as being important in many manufacturing processes. In supporting a space habitat/manufacturing facility, most studies currently propose to bring these items up from the Earth.
This transportation is costly and pushes up the cost of moving into space. When moving things around within the Solar System, the most important consideration by far is the energy required; per unit mass it grows with the square of the change in velocity, the familiar "delta-V" (a rough worked comparison follows the reference list). We pay a considerable penalty in delta-V when bringing things up out of the deep "gravitational well" of the Earth. Because of this fact we want to look for asteroids which "approach" the Earth and would require less energy to transport to our manufacturing facility in Earth orbit. Quite conveniently there is just such a class of asteroids, known as Earth approachers. They generally don't approach the actual vicinity of the Earth, but their orbits are at some point near the Earth's orbit. Only about 40 are known, though there are estimated to be some 800 with diameters of at least one kilometer. They originate from two sources: genuine main-belt asteroids which have been gravitationally perturbed into their current orbits, and "burnt out" short-period comets. The majority are probably carbonaceous. As discussed in the March 1980 L-5 News, there are a number of ways to transport an asteroid to the Earth-Moon system. Three of the most prominent are a rotary pellet launcher, a linear mass driver, and a light sail.
There are a number of other advantages the Earth-approaching asteroids have over the Moon. As mentioned above, we can tap resources in the asteroids not available on the Moon. Solar energy is available only half the time at any given spot on the surface of the Moon to drive a solar power source, whereas it would be constantly available on an asteroid. The logistics of transportation to and from an asteroid are simpler, because there is no need to build a large mass driver, develop a mass catcher, a fleet of lunar launch vehicles, or any of the supporting facilities. The gravitational well of the Moon, though much smaller than Earth's, must still be overcome. It has been estimated by Brian O'Leary that a 200 meter asteroid could be transported to a space manufacturing facility in high Earth orbit for a comparatively reasonable cost of one billion 1976 dollars. Another interesting proposal would be to mine a metallic asteroid strictly for its precious metals. A 200 meter asteroid containing only 8.5 parts per million of platinum would contain enough of this metal alone to sell for about $1.4 billion on today's market. This is particularly interesting when one considers that the world's primary sources of platinum today are in the Soviet Union and South Africa.
Certainly no one will make a large investment in an asteroid until we have hard evidence of its exact composition. For this reason we should first send an unmanned probe to survey 10 or more of the most promising asteroids, followed by, or done in conjunction with, a sample-retrieval mission. Right now such a mission would be far more beneficial than those to Saturn, Jupiter, Venus, or Mars. Much could be learned about early conditions in the Solar System, and therefore such a mission would satisfy not only "space industrialists," but also the pure scientists. A number of studies have gone into considerable detail on the mining and processing of lunar material to turn out a completed SPS. I would like to see a similar study done on mining and retrieving an Earth-approaching asteroid.
1. Chapman, C.R. "The Nature of Asteroids." Scientific American, Jan. 1975, Vol. 232, pp. 24-33. 2. Gehrels, T.
Asteroids, University of Arizona Press, 1980. 3. Hartmann, W.K. Moons and Planets: An Introduction to Planetary Science, 1972, Chapters 8-9. 4. Miller, R.A. and Smith, D.B.S., et al. "Extraterrestrial Processing and Manufacturing of Large Space Systems," Final Report, NASA Contractor Report CR-161293, Sept. 1979. 5. Nichiporuk, W. and Brown, H. "The Distribution of Platinum and Palladium Metals in Iron Meteorites and in the Metal Phase of Ordinary Chondrites." Journal of Geophysical Research, Jan. 1975, pp. 459-470. 6. O'Leary, B. "Mining the Apollo and Amor Asteroids." Science, July 1977, pp. 363-366. 7. Shoemaker, E.M. and Helin, E.F. "Earth Approaching Asteroids: Population and Compositional Types," NASA Conference Publication 2053, Jan. 1978, pp. 161-175.
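A rough worked footnote to the delta-V discussion above, using standard textbook values rather than figures from the article: kinetic energy per unit mass is half the square of the velocity change, Earth escape requires about 11.2 km/s, and transfers to the most accessible Earth-approaching asteroids have been estimated at only a few km/s.

```latex
\[
\frac{E}{m}=\tfrac{1}{2}(\Delta v)^{2}:\qquad
\tfrac{1}{2}(11.2\ \mathrm{km/s})^{2}\approx 6.3\times 10^{7}\ \mathrm{J/kg}
\quad\text{vs.}\quad
\tfrac{1}{2}(5\ \mathrm{km/s})^{2}\approx 1.3\times 10^{7}\ \mathrm{J/kg}
\]
```

That is roughly a factor of five in favor of the asteroid, before the rocket-equation propellant penalty compounds the difference further.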
A list of key resources for teachers reviewed by Dr Sandra Lynch. In response to the requests of teachers, the NSW Association has compiled an annotated bibliography of texts that it recommends to teachers practising philosophy in the classroom. They provide teachers with practical models as well as some of the theoretical underpinnings of the inquiry model used by the Philosophy in Schools Association. Using these materials, teachers can become acquainted with and develop facility in the methodology and techniques of philosophical inquiry, and are also given the support necessary to enable them to choose and devise their own classroom materials. Prepared by Dr Sandra Lynch.
The texts are listed under two headings:
- those suitable for use in the classroom
- those valuable as reference material or background reading.
** Cam, Philip (ed.), Thinking Stories 1 & 2: Philosophical Inquiry for Children and Thinking Stories: Teacher Resource/Activity Book (Sydney: Hale & Iremonger, 1994). Thinking Stories 1 & 2 are collections of stories for Middle & Upper Primary students, which encourage philosophical inquiry about topics such as 'truth', 'goodness', 'friendship', 'fairness', our experience of time and change, and our relationship with the environment. The activity books accompanying the collections complement and supplement the stories, providing discussion plans, activities and exercises to stimulate inquiry and giving practical advice on the best use of the manuals. This series provides an excellent model for teachers introducing philosophy to the classroom.
*de Haan, Chris, MacColl, San and McCutcheon, Lucy. Philosophy With Kids Books 1, 2 & 3 and Philosophy With Kids: More Ideas and Activities (Melbourne: Longman House, 1995). This series consists of four practical teacher resource books. Books 1, 2 and 3 are suited to Infants and Middle Primary classes and offer practical advice on how to begin the process of philosophical inquiry. They use numerous familiar children's storybooks and poems, many of which are Australian, to stimulate discussion within a community of inquiry. The fourth book contains ideas and activities suitable for all Infants and Primary classes. This series is an abundant resource, which provides a valuable introduction to philosophy within the classroom.
*Golding, Clinton, Connecting Concepts (Melbourne: ACER Press, 2003). This book is a valuable classroom resource to use with students 12 years of age and above. It contains detailed instructions for turning a class into a community of inquiry, exploring concepts like violence, the mind, culture, knowledge and justice. Connecting Concepts includes discussion ideas and exercises suitable for whole-class, group and individual activities using a wide variety of learning styles. Clear guidelines, examples and sample questions provide a step-by-step introduction to conceptual analysis in the classroom, and blackline masters introduce concept games to students.
Jackson, Thomas F. and Oho, Linda. (Draft) Getting Started in Philosophy: A Start-Up Kit for K-1 (1993). This kit aims to provide Kindergarten and Year 1 students with a firm foundation in the skills requisite to participation in the Community of Inquiry.
It consists of a good introductory set of activities for the philosophy classroom and is available from the Philosophy in Schools Association of NSW [email protected]
Parker, Michael, The Quest for the Stone of Wisdom (Sydney: Scholastic, 1996). This book and its accompanying teacher's guide are an introduction to philosophical ideas and concepts using critical and creative thinking techniques. It is designed for upper primary and lower secondary students and has a comic-based format that appeals to adolescents and pre-adolescents. The books include lots of practical exercises and activities designed to enhance conceptual exploration and are available from Scholastic Australia (Customer Service: Email: [email protected])
Sprod, Tim. Books Into Ideas (Cheltenham, Vic.: Hawker Brownlow Education, 1993). This book begins with a short, practical discussion of the methodology of Philosophy in Schools. It deals with particular children's picture books, presenting discussion plans and activities based on these books, which demonstrate how teachers can facilitate and encourage thinking in young learners. Included among the stories are The Bunyip of Berkeley's Creek, Bill and Pete, Wombats, Where The Wild Things Are, and A Pet for Mrs. Arbuckle. This book is a particularly valuable resource for the K-2 classroom and is available from Hawker Brownlow Education, 1123A Nepean Hwy, Highett, VIC. 3190 (Telephone in Sydney 02 634 6969 or Toll free to place orders direct 008 33 4603).
Wilks, Susan F. Critical & Creative Thinking: Strategies for Classroom Inquiry (Armadale, VIC.: Eleanor Curtain Publishing, 1995). This is an accessible and practical text that provides units of work on philosophical issues found in everyday literature. It includes fables, e.g. The Boy Who Cried Wolf and Jack and the Beanstalk, Alice's Adventures in Wonderland, and a story by Roald Dahl. It also provides checklists to help in monitoring progress and reviewing teaching styles, and includes a special focus on ESL.
The Institute For The Advancement Of Philosophy For Children (IAPC) at Montclair State University, New Jersey, U.S.A. also produces a range of programs designed for use in the philosophy classroom. These programs are presented in the form of children's novels, each accompanied by a teacher's manual, and are available from the Australian Council for Educational Research (ACER), Private Bag 55, Camberwell, VIC. 3124. The Philosophy in Schools Association of New South Wales has in general tended to prefer the use of materials with Australian content within the classroom; however, a number of the IAPC materials are highly regarded by members. The IAPC teacher's manuals help guide philosophical exploration and provide hundreds of pages of suggestions, discussion plans, exercises, and activities to aid teachers.
Cam, Philip. Thinking Together: Philosophical Inquiry For The Classroom (Sydney: Hale & Iremonger/PETA, 1995). Thinking Together shows how story-based material can be used to help children raise and consider philosophical puzzles and problems so as to develop thinking skills and concepts which are applicable across the curriculum. It discusses how a community of inquiry is built, how to encourage discussion, and the use of questioning techniques, and includes many activities designed to develop the tools of effective thinking. This book provides invaluable practical advice to teachers in guiding philosophical discussion.
Haynes, Joanna, Children as Philosophers (London: Routledge, 2002).
This book explores theoretical and practical issues associated with using philosophy in the classroom. It provides examples of children working as philosophers from a young age and offers lots of practical suggestions for teachers.
Lipman, Matthew, Sharp, Ann M., and Oscanyan, Frederick S., Philosophy in the Classroom (2nd edition, Philadelphia: Temple University Press, 1980). This textbook assumes that education is a matter of teaching ways of thinking. It demonstrates how the classroom can be converted into a community of inquiry and discusses the skills requisite to this process. It includes an excellent chapter on guiding philosophical discussion.
Matthews, Gareth, Philosophy and the Young Child (Cambridge, Mass.: Harvard University Press, 1980). Matthews begins with a series of anecdotes which give a good sense of children's philosophical thinking, focusing on topics such as 'puzzlement', 'fantasy', 'play' and 'dialogue'. It is a delightful book, enjoyable and easy to read, and is particularly suited for K-2.
Matthews, Gareth, Dialogues with Children (Cambridge, Mass.: Harvard University Press, 1984). This book is a record of Matthews' experiences with a class in Scotland. It deals with topics such as 'happiness', 'desire', 'cheese' and 'time travel' and provides insight into philosophical dialogue with children.
Reed, Ronald F., Talking with Children (Denver, Colorado: Arden Press, 1983). This book deals with the art and role of conversation with children, discussing and demystifying children's conversation with parents, in groups and at school.
Splitter, Laurance J. and Sharp, Ann M. Teaching for Better Thinking: The Classroom Community of Inquiry (Melbourne: Australian Council For Educational Research, 1995). This book deals with the Community of Inquiry, dialogue, questioning techniques, the relationship between thinking, philosophy and Philosophy for Children, and ethical enquiry in relation to the Personal Development curriculum. It is a lengthy but valuable text.
Ordering Classroom Materials
Texts marked with a single asterisk can be purchased from ACER Press Customer Service, Private Bag 55, Camberwell, VIC. 3124. (Tel: 1800 338 402 (Toll Free); + 61 3 9835 7447; Fax: + 61 3 9835 7499; Email: [email protected])
Loan resources for teachers (members only)
Thinking: The Journal of Philosophy for Children (Montclair State University, New Jersey, U.S.A.)
Analytic Teaching: Community of Inquiry Journal (Viterbo College, La Crosse, Wisconsin, U.S.A.)
Studies In Philosophy And Education (Kluwer Academic Publishers, Netherlands). Volume 13 contains 20 good, brief and practical articles on Philosophy in Schools by North American scholars in Education.
Two volumes, entitled The Philosophy of John Dewey and The Structure of Experience, edited by John J. McDermott (New York: G.P. Putnam & Sons), might also be useful in that they call attention to a view of the child as an active and curious explorer and to education as a process which is sensitive to this active dimension, seeking to guide children, through their participation in a variety of experiences, in such a way that their creativity and autonomy are developed and enhanced. Similarly, Dewey's book My Pedagogic Creed (New York, 1897) would also be worthwhile reading in the context of the philosophy classroom.
Publications available for purchase
A range of Australian published materials available for purchase
ACER PRESS: The Australian Council for Educational Research Press is the principal distributor of philosophy in schools material in Australia.
These fine motor activities are provided for the Melissa & Doug blog by Cindy Utzinger, pediatric Occupational Therapist. What do kicking a soccer ball, hitting a baseball, copying from the blackboard (or smart board), and reading all have in common? These activities all require a child to have good visual motor and visual perceptual skills. By this I do not mean visual acuity. Acuity is sharpness of vision; whether or not your child can read the smallest row of letters off the chart on the doctor's wall that has a big E on the top. Visual motor skills and visual perceptual skills encompass much more than just seeing clearly.
- Visual motor skills can also be referred to as eye-hand or eye-foot coordination. Visual motor skills allow your child to coordinate their body movements in response to what they are seeing. These skills are required for so many of the activities that our children participate in, including coloring, using scissors, handwriting, copying work, solving mazes or word finds, catching a ball, batting a ball, doing crafts, tying shoes, completing puzzles, and playing an instrument, and they are crucial for academic performance.
- Visual perceptual skills allow a child to gather visual information from the environment and integrate it with their other senses. These skills allow them to derive meaning and understanding from what they see and experience. Visual perceptual skills are important for so many things a child does, and especially so when it comes to learning. These skills help them to learn to read, copy from a board or a book, avoid letter reversals, understand directional concepts such as left and right, remember things that they have seen, use both hands together in a coordinated manner, have good visual motor skills, and visualize objects or experiences. Visual perceptual skills also help a child to integrate what they see with their other senses to be able to do things such as ride a bike, play ball, or hear a sound (such as a fire truck) and visually recognize where it is coming from.
Another very important aspect of a child's vision is their eye movements. There are several very important movements that the eyes need to be able to make. One of those is the ability of the eyes to move smoothly to track an object as it moves within the child's visual field (smooth pursuits). Two other important movements of the eyes are convergence and divergence: the ability of the eyes to watch an object as it comes close to them and then as it moves out away from them. Just like the visual motor and visual perceptual skills, these eye movements are important for so many things, with motor coordination and academic skills being pretty high up on that list.
By now you can probably tell that there is so much more to vision than just having 20/20 vision. The visual system is so much more complex than I think most people would ever realize, and good vision is something most of us with good vision probably take for granted (I know I do!). As an Occupational Therapist, I am seeing a growing number of children who are struggling in these areas. What I want to do, then, is give you some ideas for activities that you can do with your children to help them develop strong visual perceptual and visual motor skills and eye movements.
- Fine motor activities
- Encourage your child to do paper and pencil activities such as word finds, finding the hidden picture, mazes, and dot-to-dots.
- Make a design out of toothpicks, pretzel sticks, popsicle sticks or dry noodles for your child and then have them copy your design.
- Encourage your child to do lacing activities such as the Lace and Trace Shapes.
- Have your child string beads with the Wooden Stringing Beads or household items such as dry noodles. You can have your child make up a design or try to duplicate yours.
- Sit a few feet away from your child on the floor with legs spread out in a V shape. Use the Froggy Kickball and simply roll it back and forth to each other. This is great for younger kids or kids who can't do the activities listed below.
- Roll the Froggy Kickball to your child and have them try to kick it.
- Bounce the Froggy Kickball with your child.
- Give your child a spot to aim for on the wall a few inches to a few feet above their head and have them gently toss the Froggy Kickball against the wall and then catch it. Make it more challenging by setting a goal for them to reach with consecutive tosses and catches without dropping the ball.
- Play catch with the Froggy Toss and Grip game or the Froggy Toss and Catch Net and Ball game. Modify it to your child's level by adjusting how close or far away you stand, or by having them switch throwing and catching hands to make it more challenging or for more laughs.
- Hit the ball back and forth with your child using the Tootle Turtle Racquet and Ball Set. To make this more interesting or more challenging for older kids, have them hold both racquets and hit the ball back and forth from right to left.
- A simple game of balloon volleyball can be so much fun. To make it more fun, I like to use the racquets from the Tootle Turtle Racquet and Ball Set. You can hit the balloon back and forth with your child or have them hold both racquets and hit it from right to left by themselves.
Please make sure that if you're concerned that your child may have visual motor or visual perceptual problems, you talk to their doctor and perhaps seek out an evaluation from a Developmental Optometrist.
* * *
Cindy Utzinger is a pediatric Occupational Therapist, handwriting tutor, and founder of Building Write Foundations LLC. She lives in North Carolina with her husband and two young children (a son and a daughter). In her free time she can be found running through the streets of her neighborhood to get some exercise or enjoying time on the lake with family and friends. Through her website (www.cindyutzinger.com) she provides parents, teachers, and caregivers with information regarding the importance of building each and every child's sensory foundation and provides ways to help build their sensory foundation and their foundation for learning. Through her website she also blogs and tackles issues dealing with handwriting problems, ADD/ADHD, Sensory Processing Disorder, and diagnoses on the autism spectrum.
* * *
Most physical properties of enantiomers, e.g., melting point, boiling point, refractive index, etc., are identical. However, they differ in a property called optical activity, in which a sample rotates the plane of polarization of a polarized light beam passing through it. This effect was first discovered in 1808 by E.L. Malus, who passed light through reflective glass surfaces. Four years later, J.B. Biot found that the extent of rotation of the light depends on the thickness of the quartz plates that he used. He also discovered that other compounds, e.g., sucrose solutions, were capable of rotating the light. He attributed this "optical activity" to certain features of their molecular structure (asymmetry). Building on his research, he designed one of the first polariscopes and formulated the basic quantitative laws of polarimetry. In 1850, Wilhelmy used polarimetry to study the reaction rate of the hydrolysis of sucrose. In 1874, van 't Hoff proposed that a tetrahedral environment of the carbon atom could explain the phenomenon of optical activity.
Today, polarimetry is used routinely for quality and process control in the pharmaceutical industry, the flavor, fragrance and essential oil industry, the food industry, and the chemical industry. The optical purity of a product can be determined by measuring the specific rotation of compounds like amino acids, antibiotics, steroids, vitamins, lemon oil, various sugars, and polymers and comparing it with the reference value (if the specific rotation of the pure enantiomer is known).
How does it work? Normal monochromatic light contains light that possesses oscillations of the electrical field in all possible planes perpendicular to the direction of propagation. When light is passed through a polarizer (e.g., a Nicol prism or Polaroid film), only light oscillating in one plane leaves the polarizer (the "picket fence" model). This linearly polarized light can be described as a superposition of two counter-rotating components, which propagate with different velocities in an optically active medium. If one component interacts more strongly than the other with a chiral molecule, it will slow down and therefore arrive later at the observer. The result is that the plane of the light appears to be rotated.
In a polarimeter (figure 2), plane-polarized light is introduced into a tube (typically 10 cm in length, figure 3) containing a solution of the substance to be measured. If the substance is optically inactive, the plane of the polarized light will not change in orientation and the observer will read an angle of [α] = 0°. If the compound in the polarimetry cell is optically active, the plane of the light will be rotated on its way through the tube. The observed rotation is a result of the different components of the plane-polarized light interacting differently with the chiral center. In order to observe the maximum brightness, the observer (person or instrument) has to rotate the axis of the analyzer back, in either the clockwise or the counterclockwise direction depending on the nature of the compound. For the clockwise direction, the rotation (in degrees) is defined as positive ("+") and called dextrorotatory (from the Latin dexter = right). In contrast, the counterclockwise direction is defined as negative ("-") and called levorotatory (from the Latin laevus = left). Unfortunately, there is no direct correlation between the configuration [(D/L) in Rosanoff, (R/S) in Cahn-Ingold-Prelog nomenclature] of an enantiomer and the direction [(+) or (-)] in which it rotates plane-polarized light.
This means that the R-enantiomer can exhibit a positive or a negative value for the optical rotation depending on the compound. In some cases, the solvent has an impact on the magnitude and the sign as well; e.g., (S)-lactic acid exhibits an optical rotation of [α] = +3.9° in water and [α] = +13.7° using 2 M sodium hydroxide solution as solvent, because the observer looks at a different species (lactate). The specific rotation [α] depends on the length of the tube, the wavelength used for the acquisition, the concentration of the optically active compound (enantiomer), and to a certain degree on the temperature as well. However, the temperature effect is very difficult to specify since it differs for each compound. For instance, the [α]-value for α-pinene increases only slightly in the range from 0 °C to 100 °C (at λ = 589.3 nm), while it is almost cut in half for β-pinene. These two compounds differ only in the position of the alkene function.
Generally the following equation is used to calculate the specific optical rotation from experimental data:
[α] = a / (l × c)
where a = the observed optical rotation (in degrees), c = the concentration of the solution in grams per milliliter, and l = the length of the tube in decimeters (1 dm = 10 cm).
A student obtained the specific optical rotation [α]D25 = +3.5° from his measurement. This notation means that the measurement was conducted at 25 °C using the D-line of the sodium lamp (λ = 589.3 nm). A sample containing 1.00 g/mL of the compound in a 1 dm tube exhibits an optical rotation of 3.5° in the clockwise direction. Note that the instrument used in Chem 30CL can provide the specific optical rotation directly, which already corrects the optical rotation for the cell dimensions and the concentration. The optical rotation is raw data, which does not include these corrections. It is very important to pay attention to which mode was used to acquire the data!
As mentioned earlier, polarimetry can be used to determine the optical purity of enantiomers. Suppose the observed specific optical rotation of a compound is [α] = +7.00° and the specific optical rotation of the pure enantiomer is [α] = +28.0°.
Percent optical purity = ([α]observed / [α]pure) × 100% = (7.00° / 28.0°) × 100% = 25%
The sample consists of 75% of the racemic form (an equimolar mixture of both enantiomers, α = 0°) and an excess of 25% of the enantiomer in question (62.5% and 37.5%).
The instrument shown below allows you to calculate the specific rotation if you know the concentration of the solution. The cell used for the measurement has a pathlength of 10.0 cm.
Actual polarimeter used in the lab (Autopol IV), located in YH 6104
Polarimetry cell (5 cm stainless steel cell shown here)
The cell has to be handled carefully since it costs more than $1000 to manufacture. It has to be cleaned thoroughly after the measurement is performed and returned to the teaching assistant or instructor. Special attention should be given to the inlets, which have already been broken off several times due to negligence on the students' part!
1. The instrument has to warm up for at least 10-15 minutes if it is not already turned on. The switch is located at the back of the instrument. The proper wavelength is chosen. 2. A solution with a known concentration (~0.5-3%) of the compound in the proper solvent is prepared. 3. The polarimetry cell is filled with the solvent. After filling the cell, the path through the cell should be clear (if the path is not clear, the air bubbles in the path have to be removed prior to the measurement). The cell is placed on the rails inside the instrument, all the way to either the right or the left side. 4.
The "Zero" button is pressed to zero the instrument. The screen should show 0.000 and not fluctuate too much. If this is not the case, make sure that the light can pass through; if this does not solve the problem, inform the teaching assistant or instructor immediately. 5. Then the solvent is removed and the cell is dried. The solution of the compound is filled into the dry polarimeter cell, making sure that the entire inner part is filled without any air bubbles or particulate matter. 6. The "I" button on the keypad is pressed and specific rotation is selected. 7. The proper cell dimension is selected: 100 mm (the cell provided is 100.0 mm = 1 dm long). 8. Next, the proper concentration in % is entered (the actual concentration of your solution, not the one recommended, since they will most likely differ slightly!). 9. The reading on the display (the specific optical rotation) is recorded, including the sign. (The experimenter has to research the literature data before performing the measurement in order to see whether the result is in the correct ballpark!) 10. The cell is taken out, cleaned thoroughly with the solvent used for the measurement, and returned to your teaching assistant or instructor. If the student is the last one to perform a measurement for the day, the instrument has to be turned off as well. 11. The sample from the optical rotation measurement can be recovered after the measurement if needed (e.g., the Jacobsen ligand) by removing the solvent. 12. It is entirely unacceptable for a student to lock up the cells somewhere they are not available to others, because all the students in the course use the polarimetry cells. Doing so will result in a significant penalty for the student at fault.
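The two calculations in this handout are simple enough to script. A minimal sketch (the helper names are my own, not part of the lab handout):

```python
# Specific rotation and percent optical purity, per the formulas above.
def specific_rotation(observed_deg: float, path_dm: float, conc_g_per_ml: float) -> float:
    """[alpha] = a / (l * c): observed rotation corrected for cell length and concentration."""
    return observed_deg / (path_dm * conc_g_per_ml)

def percent_optical_purity(observed_specific: float, pure_specific: float) -> float:
    """Observed specific rotation as a percentage of the pure enantiomer's value."""
    return 100.0 * observed_specific / pure_specific

print(specific_rotation(3.5, 1.0, 1.00))    # +3.5 deg: the worked example above
print(percent_optical_purity(7.00, 28.0))   # 25.0% -> a 62.5 : 37.5 enantiomer ratio
```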
Monday, March 26, 2012
The actions of the rare 'flu susceptibility gene' are uncertain
Scientists have discovered a "gene flaw linked to serious flu risk", according to BBC News. The broadcaster says the mutation results in a malformed protein, which makes cells more susceptible to viral infection. This story is based on a series of experiments that examined the influence of a gene on our susceptibility to flu. In its normal form the gene is thought to play a protective role when it comes to viral infections, although a small proportion of people may carry a mutated version of the gene. To examine the mutation's effects, researchers conducted animal experiments and genetic scans to look at the DNA of people severely affected by the flu virus in the past. The researchers found that mice lacking this gene had a much greater risk of developing severe flu than mice that had a functioning copy of the gene. Furthermore, they found that approximately 0.3% of people in the general population carry a mutated copy of the gene, but that approximately 5.7% of those hospitalised with flu carried a mutated copy.
This research points to a genetic influence on flu severity. It is still at a relatively early stage, and larger studies will need to be carried out before we know precisely how this relationship works. In the longer term, this could lead to the identification of people who are likely to develop a bad case of the flu and allow them to take preventative action, such as ensuring they get the flu vaccine.
Where did the story come from? The study was carried out by researchers from the Wellcome Trust Sanger Institute, University College London and other institutions throughout the UK and the US. The research was supported by the Wellcome Trust and other UK funding bodies. The study was published in the peer-reviewed scientific journal Nature. This research was covered appropriately by the media, with the BBC providing a detailed description of the study while also emphasising the early stage of the research.
What kind of research was this? This research used a series of studies to examine the impact of a specific gene on flu susceptibility. The gene is normally responsible for producing a family of proteins called "interferon-inducible transmembrane" proteins (IFITM), which have previously been shown to keep viruses from replicating. The researchers thought that mice that did not carry a copy of the gene, and therefore did not produce the IFITM proteins, would be more likely to show signs of severe flu infection. They also thought that people with severe flu would be more likely than the general population to carry mutated versions of the gene.
The research included an animal study and a case-control study in humans. Animal studies are typically conducted early in the research process. In this case, the first study in mice allowed the researchers to examine the effects that absence of the gene had on the development of flu. It would not have been appropriate to perform this experiment in humans. Once they confirmed that the gene had a role in the susceptibility of mice to flu, they moved on to a case-control study in humans, which allowed them to examine the likelihood of people with severe flu carrying a mutated version of the gene and compare this with the likelihood among the general population. Case-control studies can show associations between two factors, but cannot confirm that one thing causes the other.
In this case, the study could only show an association between carrying a mutant gene and having a severe case of flu, not that the gene made people more susceptible to the flu.
What did the research involve? For the animal-based phase of the study, the researchers took two groups of mice. The mice in one group had a copy of the IFITM gene and the mice in the other group lacked a copy of the gene. They then exposed both groups to a strain of the flu virus that is not considered to be highly virulent and does not generally produce severe disease. They compared the two groups on several outcomes, including the proportion of the mice that became infected, the proportion of the mice that survived, their weight and the amount of virus present in their lungs.
For the case-control phase of the study, the researchers sequenced the genes of 53 patients who had been hospitalised with severe H1N1 or severe seasonal flu in 2009-10. They then used an existing database, from the Wellcome Trust Case Control Consortium, to determine the proportion of the general population who carry similar IFITM mutations. By comparing these proportions, the researchers determined how much more likely the severe flu patients were to have IFITM mutations. The database included genetic sequence information from people in the UK, Netherlands and Germany.
What were the basic results? The researchers found that, upon infection with the flu virus, mice that did not carry a copy of the IFITM gene:
- lost more than 25% of their body weight, compared with 20% in mice with a functioning copy of the gene
- showed several signs of severe illness, such as rapid breathing and goose bumps, after six days
- had levels of virus in their lungs after six days that were 10 times higher than in the mice with the gene
- showed signs of sudden and severe pneumonia and substantial lung damage
When analysing data from the case-control study, the researchers found that patients hospitalised with severe flu were significantly more likely than the general European population to carry a specific mutation of the IFITM gene (5.7% of the hospitalised group compared with 0.3% of the population database).
How did the researchers interpret the results? The researchers conclude that the IFITM protein family acts as "an essential barrier to influenza A virus infection". This research suggests that a family of proteins has a protective role against flu infection. The mouse study indicates that when these proteins are not produced, the flu virus replicates at higher levels and can lead to more severe forms of the flu. The case-control study strengthens these findings, and indicates that there is an association between severe flu and the presence of a mutated IFITM gene in humans. However, it is important to note several limitations of the study:
- The animal study can suggest mechanisms through which the genetic mutation may influence the development of severe flu, but the results cannot be generalised as applying to humans.
- The case-control study was small, with only 53 patients with severe flu included. Larger studies will be needed to confirm the findings.
- Case-control studies can only show an association between two factors and cannot prove that the genetic mutations cause more severe disease.
- If the genetic association can indeed cause flu susceptibility, it is unlikely that it is the sole factor governing the way people react to the flu virus.
There are several known risk factors for developing severe flu (for example, being very young or old, having other medical illnesses or a weakened immune system, and socio-economic deprivation), and the manner in which these interact with genetic susceptibility is unknown. The researchers say that IFITM is likely to be important when new flu viruses occur, as people will not have encountered the virus before and therefore will not have acquired immunity to such infections. They further say that their research shows that when IFITM proteins are not present, normally mild strains of the virus may inflict very severe symptoms. They suggest that people with IFITM mutations, and populations with a higher proportion of these mutations, may be more vulnerable to infection with the flu virus. It is important to remember that the genetic variant seen in this research is extremely rare in Europeans. Even if it is shown to increase the odds of developing severe flu in a few people, it is unlikely to lead to severe disease on a large scale. Analysis by Bazian
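For readers who want the case-control comparison as a single number, here is a back-of-the-envelope calculation from the two reported percentages (my arithmetic, not the paper's analysis):

```python
# Mutation frequency: 5.7% among hospitalised flu patients vs 0.3% in the
# population database, as reported above.
p_cases, p_controls = 0.057, 0.003

relative_freq = p_cases / p_controls  # mutation ~19x more frequent among cases
odds_ratio = (p_cases / (1 - p_cases)) / (p_controls / (1 - p_controls))  # ~20
print(f"relative frequency: {relative_freq:.0f}x, odds ratio: {odds_ratio:.1f}")
```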
The Bronze Age, a period that lasted roughly three thousand years, saw major social, economic, and technological advances that made Greece the hub of activity in the Mediterranean. Historians have identified three distinct civilizations among the people of the time. These civilizations overlap in time and coincide with the major geographic regions of Greece. The Cycladic civilization developed in the islands of the Aegean, and more specifically around the Cyclades, while the Minoans occupied the large island of Crete. At the same time, the civilization of the Greek mainland is classified as "Helladic". The Mycenaean era describes Helladic civilization towards the end of the 11th c. BCE and is also called the "Age of Heroes" because it is the source of mythological heroes like Hercules and epics like the Iliad and the Odyssey. All three civilizations of the Bronze Age had many characteristics in common, while at the same time each was distinct in its culture and disposition. The Minoans are considered to be the first advanced civilization of Europe, while Mycenaean culture, with its legends and Greek language, had a great deal of influence on what later became the splendor of Classical Greece. "The Mycenaeans are the first 'Greeks'" (Martin, Ancient Greece 16). Either by fortune or by force, the Mycenaeans outlasted both the people of the Cyclades and the Minoans, and by the end of the 10th c. BCE had expanded their influence over the Greek mainland, the islands of the Aegean and Ionian seas, Crete, and the coast of Asia Minor. However, after 1100 BCE Mycenaean civilization ceased, whether through internal strife, outside invasions (the Dorian invasions have been proposed as a possible explanation), or a combination of the two; it is not known for sure. What is known is that the extensive damage done to Mycenaean civilization took three hundred years to reverse. We call this period "the Dark Ages", partly because the people of Greece fell into a period of basic sustenance with no significant evidence of cultural development, and partly because the incomplete historical record renders our own view of the era rather incomplete.
[As always, don't forget to give proper attribution when using the following in the classroom or elsewhere, as indicated in the sidebar]
The cone in the sphere problem led me to an interesting relationship in the corresponding 2-dimensional case, with a surprise ending. (Only a math person would compare a math problem to a mystery novel!) The following investigation allows the student to explore a myriad of possibilities: from similar triangles, to the altitude-on-hypotenuse theorems, to the Pythagorean theorem, to the chord-chord or secant-tangent power theorems, coordinate methods, the draw-the-radius technique, etc. Sounds like this one problem might review over 50% of a geometry course? You decide for yourself! Just remember: one person is not likely to think of every method. Open this up to student discovery and watch miracles unfold...
STUDENT ACTIVITY OR READER CHALLENGE
In the diagram above, segment AF is a diameter of the circle whose center is O, BC is a tangent segment (F is the point of tangency), BC = AF and BF = FC. Segments AB and AC intersect the circle at D and E, respectively. Lots of givens there! Perhaps some unnecessary information?
(a) If AF = 40, show that DE = 32.
Notes: To encourage depth of reasoning, consider requiring teams of students to find at least two methods.
(b) Let's generalize (of course!). This time no numerical values are given. Everything else is the same. Prove, in general, that DE/BC = 4/5.
(c) So where's the 3-4-5 triangle (one similar to it, that is)? Find it and prove that it is indeed similar to a 3-4-5.
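For checking purposes only (the problem is meant for discovery), here is one coordinate-method sketch of my own working: put $F$ at the origin with $A = (0, 40)$, $B = (-20, 0)$, $C = (20, 0)$, so $O = (0, 20)$ and the radius is 20.

```latex
\[
\text{(a) On } AB:\ (x,y)=(0,40)+t(-1,-2),\quad
x^{2}+(y-20)^{2}=400 \;\Rightarrow\; 5t^{2}-80t=0 \;\Rightarrow\; t=16,
\]
\[
\text{so } D=(-16,8),\ E=(16,8),\ DE=32. \qquad
\text{(b) With } AF=d:\ t=\tfrac{2d}{5},\ DE=\tfrac{4d}{5},\ \tfrac{DE}{BC}=\tfrac{4}{5}.
\]
\[
\text{(c) Let } M=(0,8) \text{ be the midpoint of } DE:\quad
OM=12,\quad ME=16,\quad OE=20\ (\text{a radius}).
\]
```

So triangle $OME$ is a 12-16-20 right triangle, similar to 3-4-5, with a radius as its hypotenuse: perhaps the surprise ending.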
Scientists at the American Museum of Natural History were funded by NASA to use satellite data and data from museum collections (documented locations of species) to test a computer model that would predict geographic locations of chameleons. The study predicted the geographic distribution of 11 known chameleon species and also helped researchers to find seven new species. The chameleon prediction project was supported by NASA under award No. NAG5-8543 and by the Center for Biodiversity and Conservation at the American Museum of Natural History. NASA also provided support for the RS/GIS Facility. To learn more visit:
Scientists Use Satellites and Museum Collections to Locate Lizards in Madagascar
NASA Helps Forecast Reptile Distributions In Madagascar
NASA Earth Observatory
American Museum of Natural History
AMNH Biologist and Colleagues Develop Computer Models That Accurately Predict Where Reptile Species Live in Madagascar
Photo courtesy of Christopher Raxworthy
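The article gives no modeling details, so purely as an illustration of the general idea (presence records plus environmental layers feeding a predictive model), here is a toy sketch with synthetic data; the variables, values, and choice of logistic regression are my own assumptions, not the AMNH team's method:

```python
# Toy species-distribution model: fit occurrence records against environmental
# variables, then predict the probability of occurrence at unsurveyed sites.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic environmental layers at 200 candidate sites:
# columns = [annual rainfall (mm), mean temperature (C), elevation (m)]
sites = np.column_stack([
    rng.uniform(500, 3000, 200),
    rng.uniform(10, 30, 200),
    rng.uniform(0, 2000, 200),
])

# Synthetic "museum records": presence (1) at wetter, cooler sites.
presence = ((sites[:, 0] > 1800) & (sites[:, 1] < 22)).astype(int)

model = LogisticRegression(max_iter=1000).fit(sites, presence)

# Predicted probability of occurrence at two hypothetical unsurveyed sites.
new_sites = np.array([[2500, 18, 900], [700, 28, 100]])
print(model.predict_proba(new_sites)[:, 1])
```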
This model holly leaf is made in sections and joined together. Like a real holly leaf, it will not lie flat: it has negative curvature. To make the holly leaf, a circle centre $C$ of radius 5 cm and radii $CA$ and $CB$ with $\angle ACB = 125$ degrees are drawn. The tangents to the circle at $A$ and $B$ meet at the point $P$. Eight identical 3-sided shapes are made by cutting along $PA$ and $PB$ and around the arc $AB$ to make a 3-sided shape with 2 straight edges and one edge along the minor arc of the circle (the rest of each circle is thrown away). Two identical 4-sided shapes are made by drawing a circle with radius 5 cm, a diameter $B$*$D$* and tangents $B$*$P$* and $D$*$Q$* equal in length to $PB$. These shapes have edges $B$*$P$*, $P$*$Q$*, $Q$*$D$* and the semicircular arc (inside the rectangle) from $B$* to $D$*. The sketch shows (on a smaller scale) how the ten pieces are joined together to make the "holly leaf". Find the length of the boundary of the yellow area around $P$, which is bounded by six arcs centred at $P$, each of radius $r$ cm. All points on the boundary of the yellow region are equidistant from the point $P$. If the surface at $P$ were flat, the boundary of the region would be a circle and its length would be $2\pi r$. In this case the length of the boundary is greater than $2\pi r$, and the surface of the "holly leaf" has negative curvature at $P$. Compare the perimeter and area of this "holly leaf" with the similar flat leaf for which $\angle ACB = 135$ degrees. See the problem for the flat version of this problem. What happens to the holly leaves as the angle $\angle ACB$ changes? [For positive curvature the boundary is less than $2\pi r$ in length.]
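A hedged sketch of the boundary computation (the assembly at $P$ is not visible without the sketch, so the piece count is my assumption, read off from the "six arcs": say four of the 3-sided pieces, each with vertex angle $180^\circ - 125^\circ = 55^\circ$ between the two tangents from $P$, plus the two 4-sided pieces, each contributing a right angle):

```latex
\[
\theta_P = 4(180^\circ - 125^\circ) + 2(90^\circ) = 400^\circ,
\qquad
L = \frac{400^\circ}{360^\circ}\,(2\pi r) = \frac{20\pi r}{9} \approx 6.98\,r > 2\pi r
\]
```

This total angle exceeding $360^\circ$ is consistent with the negative curvature claimed at $P$; if the actual assembly differs from this assumption, replace the angle counts accordingly.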
HTML has changed a lot in the last decade. As vendors release new versions of popular browsers, each seeks to implement the most compelling new features. However, the standard-setting process does not move as fast as the browser vendors do. We have a set of standards for HTML. These standards are specified in recommendations of the HTML Working Group of the W3C (see the sidebar titled "The World Wide Web Consortium").
The World Wide Web Consortium
The World Wide Web Consortium (W3C) is the source of standards for HTML and for many other Internet-related technologies. The W3C was created in 1994 by Tim Berners-Lee at the MIT Laboratory for Computer Science, in cooperation with the European Center for Nuclear Research (CERN), the Defense Advanced Research Projects Agency (DARPA), and the European Commission. In the years since its founding, the W3C has grown to include more than 400 member organizations. The W3C is dedicated to developing technical specifications for the infrastructure of the World Wide Web that promote interoperability through common standards. Toward this end, the W3C organizes working groups, each of which focuses on a specific technology, such as HTML. W3C working groups publish sets of technical specifications, known as "recommendations," which have been approved by a majority vote of the W3C members. A W3C recommendation indicates that the members have reached a consensus that a given specification is appropriate for widespread use.
Discover the secret colors hidden in the Black Magic Marker.
What You Need: a white coffee filter, a black water-based marker, scissors, a clear cup, water, and a saucer (this list is inferred from the steps below).
What To Do: Cut a circle out of the coffee filter. (It doesn't have to be a perfect circle, just a round shape that's about as big as your spread-out hand.) With the black marker, draw a line across the circle, about 1 inch up from the bottom. Put some water in the cup, enough to cover the bottom. Curl the paper circle so it fits inside the cup. Make sure the bottom of the circle is in the water. Watch as the water flows up the paper. When it touches the black line, you'll start to see some different colors. Leave the paper in the water until the colors go all the way to the top edge. How many colors can you see? If you have another black marker, draw a line on a clean, dry coffee filter circle. Put the circle in some fresh water. Does this marker make different colors than the first one?
Another Way to See a Rainbow
Use a clean, dry coffee filter circle. Use your marker to draw a black spot in the center. Put the circle on a saucer, and put a few drops of water on the spot. In a few minutes you'll see rings of color that go out from the center of the circle to the edges.
Most nonpermanent markers use inks that are made of colored pigments and water. On a coffee filter, the water in the ink carries the pigment onto the paper. When the ink dries, the pigment remains on the paper. When you dip the paper in water, the dried pigments dissolve. As the water travels up the paper, it carries the pigments along with it. Different-colored pigments are carried along at different rates; some travel farther and faster than others. How fast each pigment travels depends on the size of the pigment molecule and on how strongly the pigment is attracted to the paper. Since the water carries the different pigments at different rates, the black ink separates to reveal the colors that were mixed to make it.
In this experiment, you're using a technique called chromatography. The name comes from the Greek words chroma and graphein, for "color writing." The technique was developed in 1910 by Russian botanist Mikhail Tsvet. He used it for separating the pigments that made up plant dyes. There are many different types of chromatography. In all of them, a gas or liquid (like the water in your experiment) flows through a stationary substance (like your coffee filter). Since different ingredients in a mixture are carried along at different rates, they end up in different places. By examining where all the ingredients ended up, scientists can figure out what was combined to make the mixture.
Chromatography is one of the most valuable techniques biochemists have for separating mixtures. It can be used to determine the ingredients that make up a particular flavor or scent, to analyze the components of pollutants, to find traces of drugs in urine, and to separate blood proteins in various species of animals (a technique that's used to determine evolutionary relationships).
Why does mixing many colors of ink make black? Ink and paint get their colors by absorbing some of the colors in white light and reflecting others. Green ink looks green because it reflects the green part of white light and absorbs all the other colors. Red ink looks red because it reflects red light and absorbs all the other colors. When you mix green, red, blue, and yellow ink, each ink that you add absorbs more light. That leaves less light to reflect to your eye. Since the mixture absorbs light of many colors and reflects very little, you end up with black.
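The "different rates" idea has a standard quantitative form that is not part of the original activity: chromatographers report a retention factor, the ratio of how far a pigment travels to how far the water (the solvent front) travels. A small sketch with made-up measurements:

```python
# Retention factor (Rf): how far a pigment moved relative to the solvent front.
def retention_factor(pigment_distance_cm: float, solvent_front_cm: float) -> float:
    """Rf between 0 (stayed put) and 1 (moved with the water)."""
    return pigment_distance_cm / solvent_front_cm

# Hypothetical measurements: pigment moved 3.2 cm while the water rose 8.0 cm.
print(retention_factor(3.2, 8.0))  # -> 0.4
```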
Lithium-sulfur batteries have been shown to be capable of producing up to 10 times more energy than conventional batteries, which makes them ideal for applications in energy-demanding electric vehicles. But commercializing sulfur batteries has proved difficult, one of the main problems being the tendency for lithium and sulfur reaction products, called lithium polysulfides, to dissolve in the battery's electrolyte and migrate permanently to the opposite electrode. The result is that the battery's capacity decreases over its lifetime. The scientists have been investigating a strategy to prevent this "polysulfide shuttling" phenomenon by creating nano-sized sulfur particles and coating them in silica (SiO2), otherwise known as glass. The work is reported in a paper entitled "SiO2-Coated Sulfur Particles as a Cathode Material for Lithium-Sulfur Batteries," published online in the journal Nanoscale. The researchers have been working on a cathode material in which silica cages "trap" polysulfides: each sulfur particle is given a thin shell of silica, so that its polysulfide products face a trapping barrier, a glass cage. The team used an organic precursor to construct the trapping barrier. "Our biggest challenge was to optimize the process to deposit SiO2 – not too thick, not too thin, about the thickness of a virus," said Mihri Ozkan.
[Figure: A schematic illustration of the process to synthesize silica-coated sulfur particles.]
Graduate students Brennan Campbell, Jeffrey Bell, Hamed Hosseini Bay, Zachary Favors, and Robert Ionescu found that silica-caged sulfur particles provided higher battery performance, but felt further improvement was necessary because the SiO2 shell tended to break. "We have decided to incorporate mildly reduced graphene oxide (mrGO), a close relative of graphene, as a conductive additive in cathode material design, to provide mechanical stability to the glass caged structures," said Cengiz Ozkan. The new generation cathode provided more improvement than the first design, since the team engineered both a polysulfide-trapping barrier and a flexible graphene oxide blanket that
Gravity, the new space thriller starring George Clooney and Sandra Bullock, is being hailed as one of the greatest movies of modern science fiction. The film, however, has a stronger basis in fact than you might imagine. One of the main antagonists of the film is a cloud of space debris orbiting the Earth at tremendous speed, decimating anything in its path, including the International Space Station. Though dramatized in the movie, space junk does pose a real threat to existing satellite systems and has the potential to put future flights at risk.
The number of satellites currently in orbit is staggering. There have been some 5,000 satellite launches to date, and the potential for their orbits to become congested will only increase as that number climbs. Modern society now depends on satellites for a large number of essential services like communications, weather forecasting, navigation, television and scientific research. The vast majority of these are concentrated in low Earth orbit (an altitude between 160 and 2,000 km) near the poles, where roughly two-thirds of all known artificial objects can be found. Experts project an exponential increase in the number of artificial objects in orbit throughout the rest of the century.
Composed of spent rockets, dead satellites and fragments from previous collisions, the debris cloud currently in Earth's orbit has become a major concern among space officials in recent years. There are currently some 17,000 artificial objects in orbit being monitored, 10,000 of which are fragments created by explosions or collisions. NASA, however, estimates that there are around 500,000 pieces of space debris larger than a marble, 22,000 as large as a softball and potentially hundreds of millions of pieces at least 1 mm in diameter. Even these tiny fragments have the potential to knock out one of the 1,000 or so operational satellites currently in orbit and generate even more debris. For example, the thought of an errant piece of junk, no larger than a marble, striking a GPS Block IIR(M) satellite and disrupting critical GPS systems, or compromising something like HughesNet's EchoStar XVII and blacking out internet service to an entire region, has generated a significant amount of concern. Collision warnings for operational satellites are on the rise, and roughly 10 objects pass within two kilometers of vital communication satellites every week.
There have already been several major collisions in low Earth orbit. The most significant event occurred in 2009, when a defunct Russian satellite, Cosmos 2251, slammed into an active U.S. communications satellite at a closing speed of 42,000 km/hour. The incident resulted in the creation of an extra 2,000 items of space junk. A similar incident was narrowly avoided in 2012, when a dead Cold War-era spy satellite and a NASA satellite were expected to pass within 213 m of one another. The collision was averted when the Fermi Gamma-ray Space Telescope engaged its thrusters briefly to give the two objects a wider berth of six miles.
So what can be done to combat the threat of space junk to the satellites we depend on? One solution starts from the fact that most of the debris in orbit is the direct result of accidental explosions of unspent fuel. By venting pressure tanks, depleting all fuel reserves and disconnecting batteries, dead satellites would remain intact and would be easier to remove from orbit.
Satellites could also be removed from high-traffic orbits while they can still be maneuvered, gradually reducing their altitude until they reenter Earth’s atmosphere. The third solution calls for active removal of uncontrollable objects using retrieval craft. These would attach to dead satellites or larger debris using robotic arms or by deploying special nets, and could then induce a controlled reentry using powerful thrusters. Whichever solution turns out to be most effective, the message from the scientific community is clear: something has to be done. Though catastrophic events like those portrayed in Gravity are still fiction for now, the dangers posed by uncontrollable space junk to satellites, unmanned missions, and manned space flights will only increase as humanity sends more and more objects up into orbit. Robyn Johnston has an unequalled passion for science and technology, and a desire to share it through her writing. When she isn’t writing, you’ll find her focusing on her thesis for a Master’s degree in Material Science. You can follow her on Twitter @robynkjohnston Image caption: Gravity’s Dr. Ryan Stone (Sandra Bullock) takes respite in a Soyuz (Credits: Warner Bros.).
Threatened & Endangered Species Threatened and endangered species of the Everglades include:
- American Alligator
- American Crocodile
- Sea Turtles
- Florida Panther
Threatened wildlife includes species, subspecies, or isolated populations that are likely to become endangered in the near future unless steps are taken to protect and manage the species and/or its habitat for its survival. A species, subspecies, or isolated population is considered endangered if it is, or soon may be, in immediate danger of extinction unless it or its habitat is fully protected. Each species must be listed on the Federal list of endangered and threatened species before it can receive protection under the Endangered Species Act. The ESA was enacted in 1973 to conserve listed species and associated habitats and to set up recovery plans for them. Threatened and endangered plant life of the hardwood hammocks and rocky pinelands includes the brittle thatch palm (Thrinax morrisii), buccaneer palm (Pseudophoenix sargentii), Florida thatch palm (Thrinax parviflora), Krug's holly (Ilex krugiana), lignum-vitae (Guaiacum sanctum), manchineel (Hippomane mancinella), silver thatch palm (Coccothrinax argentata), and tree cactus (Cereus robinii). American alligator and American crocodile The American alligator (Alligator mississippiensis) is also found in the freshwater marshes of the Everglades. It was first listed as endangered in 1966 under the legislation that preceded the Endangered Species Act. However, populations quickly recovered, resulting in its delisting as an endangered species except for purposes of its similarity of appearance to the American crocodile (Crocodylus acutus), with which it shares habitat. Marine and estuarine waters of the Everglades also provide habitat for threatened and endangered species. The Florida population of green sea turtles (Chelonia mydas) has been considered endangered since 1978. The declining population has fallen victim to commercial harvesting for eggs and food as well as incidental by-catch in the shrimp fishery. The hawksbill sea turtle (Eretmochelys imbricata), Atlantic ridley sea turtle (Lepidochelys kempii), and leatherback sea turtle (Dermochelys coriacea) are all listed as endangered species, while the loggerhead sea turtle (Caretta caretta) is considered threatened under the protection of the ESA. Recovery plans have been established for all listed sea turtle species. Many threatened and endangered species live throughout the Everglades. Threatened and endangered birds include the Everglades snail kite (Rostrhamus sociabilis), wood stork (Mycteria americana), Cape Sable seaside sparrow (Ammodramus maritimus mirabilis), red-cockaded woodpecker (Picoides borealis), piping plover (Charadrius melodus), bald eagle (Haliaeetus leucocephalus), and roseate tern (Sterna dougallii). Federally listed as endangered, the manatee (Trichechus manatus latirostris) is a large, slow-moving, plant-eating aquatic mammal. Its distribution is determined primarily by water temperature, as manatees cannot survive long in water below about 63 °F (17 °C). In Florida, manatees often migrate into warm spring-fed rivers or near the heated discharges of power plants during winter months. As offshore waters warm in late spring and summer, manatees move out into shallow fresh, brackish, and seawater habitats. The primary threats to the Florida panther's (Felis concolor coryi) survival are loss and degradation of habitat.
An initial recovery plan is currently being implemented which identifies, protects, and enhances the existing range and habitats; establishes positive public opinion and support; and reintroduces panthers into areas of suitable habitat. It is estimated there are 70-100 individuals living in the hardwood hammocks of the Everglades.
Order - Carnivora
Family - Felidae
Genus - Puma
Species - concolor coryi
Historically, the Florida panther occurred throughout the southeastern United States, from Texas, Louisiana, and the lower Mississippi River valley north and east to the Atlantic Ocean, including Arkansas, Alabama, Florida, Georgia, and parts of Tennessee and South Carolina. Today approximately 70 panthers remain in parks and nearby private lands in southwest Florida. The Florida panther is one of the most endangered animals in the world. The only known wild breeding population occurs in south Florida within the Big Cypress Swamp region. Radio telemetry has also tracked panthers into the St. Johns River drainage, from Okeechobee County to Putnam County.
Habitat
Florida panthers reside in upland dry habitats such as hardwood hammock, pine flatwoods, and saw palmetto and cabbage palm thickets, and in wetland areas including cypress forests, mangrove forests, and freshwater marshes. They often den and sleep in the drier scrub and saw palmetto environments. In search of food and safer resting locations, panthers are known to wade and swim through canals and swamps. Preferring secluded habitats away from human activity, panthers rarely visit agricultural lands. They require large remote tracts of land with plenty of prey and cover, along with low levels of human disturbance. Home ranges of panthers in southwest Florida average 200 square miles for resident males and 75 square miles for resident females. These territories are maintained by each animal as hunting grounds. Males will not tolerate other males and will fight, sometimes inflicting deadly wounds. However, male territories tend to overlap those of potential mates. Panthers mark territories by leaving scat and urine on piles of dirt and leaves. The social structure consists of mature resident animals that hold territorial ranges, along with the transient and subdominant individuals that live on the peripheries. These latter panthers have suboptimal hunting grounds and an increased chance of human encounters. This panther is smaller than its western cousins. It has longer legs, smaller feet, and a shorter coat. The Florida panther also often has a right-angle crook between the second and third vertebrae from the end of its tail and a cowlick (whorl of hair) in the middle of its back. These two distinguishing features may be a result of inbreeding rather than defining characteristics of this subspecies. The skull of the Florida panther differs from that of other subspecies of panther. It is characterized by an exaggerated rise of the nasal arch, giving the face a Roman-nose appearance, as well as being relatively flat and broad overall. The paw prints are asymmetrical, consisting of a three-lobed pad surrounded by four toes. During walking, the hind paw is often placed in the print of the front paw. The front paws are wider (~70 mm) than the hind ones (~60 mm) in adult animals. The subspecies name concolor refers to the single, uniform body color of the Florida panther. Adults are tawny, with lighter fur on the lower chest, belly, and inner legs. This tawny body coloration may vary considerably, ranging from grayish to reddish or yellowish.
This uniform coloration camouflages panthers in a variety of environments. The tail tip, back of the ears, and sides of the nose are dark brown or black. White spotting also occurs on the fur, possibly resulting from tick bites.
Size, Age & Growth
Adult male panthers in south Florida weigh up to 154 pounds (70 kg), measuring nearly 7 feet (2 m) from nose to tip of tail. Considerably smaller, females weigh between 50 and 108 pounds (23-49 kg) and measure about 6 feet (1.8 m) in length. The shoulder height ranges from 23-28 inches (60-70 cm). Panthers can live 12 years or more in wild populations. Kittens weigh between one and two pounds (0.5-1.0 kg) at the age of 12-14 days. They weigh from 33 to 66 pounds (15-30 kg) (males) and 33-49 pounds (15-22 kg) (females) by 10 months of age. Panthers feed and travel during the cooler temperatures and low light levels of night. White-tailed deer, feral hogs, raccoons, armadillos, small alligators, and other small rodents and fowl make up this strict carnivore's diet. Deer or hogs are the preferred prey, taken once a week or so and supplemented with smaller prey. Able to run up to 35 mph for only a short distance, panthers prefer to sneak up close to their prey and launch an ambush attack. Panthers kill prey with a strong bite to the throat, back of the neck, or base of the skull. The average breeding age is 2-3 years for both males and females, although panthers have been known to conceive as early as 18 months. The breeding season falls between October and March, with a gestation period of 92-96 days. One litter is produced every other year, with a typical litter size of 1-3 kittens. The female prepares the den in a dry, sheltered area for protection from the harsh environmental elements. When the kittens are born, they have a spotted coat and blue eyes, and they are blind. The eyes open within 2-3 weeks, at about the same time the kittens begin to walk. During the first two months of life, the kittens are helpless and remain close to the den. At approximately 6-8 weeks the kittens are weaned and introduced to meat. Eventually the kittens travel with their mother, learning hunting and survival skills. Around 6 months of age, the spotted markings begin to fade, with the coat becoming the tawny adult color and the eyes turning from brown to pale gold. After a year and a half, the kittens will leave their mother in search of their own territories. There are many causes of death among Florida panthers, including injuries from fighting, collisions with automobiles, illegal hunting, mercury poisoning, allergic reaction to anesthesia, and various diseases. The most frequent cause of death among radio-collared panthers was intraspecific aggression - panthers killing other panthers. Mature males often kill juvenile males who wander into their territories searching for a mate. Females have also been killed by mature males, and occasionally a younger male will kill a mature male. Motor vehicle collisions are also responsible for a large number of deaths: between 1978 and 1994, 20 panther deaths and 6 injuries were documented from these collisions. Diseases that afflict the Florida panther include pseudorabies, a viral pathogen found in feral hogs. It is fatal to hogs and is believed to be transferred to panthers that eat infected hogs. Panleukopenia (feline distemper) is highly contagious and potentially extremely dangerous to the entire remaining panther population. Panthers are also susceptible to feline leukemia and the feline AIDS virus.
Problems resulting from inbreeding exist, such as abnormal semen, testicle abnormalities, and congenital heart disease. Panthers are also vulnerable to parasitic infestations of ticks, tapeworms, hookworms, ringworm, and intestinal flukes.
Management and Protection
Threats to the Florida panther's survival are loss and degradation of habitat. Other threats include inbreeding, insufficient prey numbers, motor vehicles, disease, and environmental contaminants. Experts all agree that something must be done immediately to preserve the remaining population. An initial recovery plan was prepared by the Florida Panther Recovery Team and approved by the Fish and Wildlife Service in 1981. It was revised by the Florida Panther Interagency Committee's Technical Committee and approved again in 1987 by the Fish and Wildlife Service. The objective is three self-sustaining populations within the historic range of the Florida panther, achieved through: 1) identifying, protecting, and enhancing the existing range and habitats, 2) establishing positive public opinion and support and management of the panther, and 3) reintroducing panthers into areas of suitable habitat. This recovery plan is currently being implemented. In 1993, a Habitat Preservation Plan for panther habitat in south Florida was completed. It provides the framework for reintroduction site identification and evaluation projects. The primary force behind the recovery effort is the Florida Panther Interagency Committee. This committee was established in 1986 to ensure cooperation and coordination between the principal agencies involved in the implementation of recovery efforts. These agencies include the U.S. Fish and Wildlife Service, National Park Service, Florida Fish and Wildlife Conservation Commission, and the Florida Department of Environmental Protection. The Florida panther is listed as an endangered species and is protected under the Endangered Species Act of 1973. It is unlawful to import, export, transport, sell, receive, acquire, or purchase any wild animal (alive or dead, including parts, products, eggs, or offspring): (1) in interstate or foreign commerce, if taken, possessed, transported, or sold in violation of any State law or regulation; or (2) if taken or possessed in violation of any U.S. law, treaty, or regulation or in violation of Indian tribal law. It is also unlawful to possess any wild animal (alive or dead, including parts, products, eggs, and offspring) within the U.S. territorial or special maritime jurisdiction (as defined in 18 U.S.C. 7) that is taken, possessed, transported, or sold in violation of any State law or regulation, foreign law, or Indian tribal law. The Florida panther is also listed on Appendix I of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES-I as of February 22, 1977); by the International Union for Conservation of Nature and Natural Resources (IUCN Red Data Book, as of June 1970); and under the Convention on Nature Protection and Wildlife Preservation in the Western Hemisphere, 1970. Originally the Florida panther was classified in the genus Felis, along with the domestic cat and 28 other species. It was first described by Bangs in 1899. The subspecies name coryi came from Charles B. Cory, who first described this panther as one of the more than 20 subspecies of Puma concolor. He originally named it Felis concolor floridana, but floridana was already in use for a subspecies of bobcat. It was subsequently renamed Felis concolor coryi, and in 1993 it was reassigned to the genus Puma.
For more information on the numerous threatened and endangered species in Florida, visit the Florida Fish and Wildlife Conservation Commission's Imperiled Species List.
The air is 78% nitrogen. If nitrogen were more reactive, it would not be so abundant in the atmosphere. When compounds that contain nitrogen decompose, they tend to form nitrogen molecules rather than other nitrogen compounds. When two atoms of nitrogen get near each other, the attraction is so strong that other chemicals can't compete successfully against such a pull. One atom of nitrogen has 7 protons, 7 neutrons, and 7 electrons. The two innermost electrons do not participate in chemical bonding. Of the other five electrons, two are paired, three are single. Electrons on two neighboring atoms of nitrogen are attracted to both atoms. (Fig. 1) Only the outer electrons are shown. The atoms are so strongly attracted to each other that they vibrate with great amplitude. When most of the energy is transferred to the surroundings by collisions with other molecules, the nitrogen molecule settles down with the electrons at their points of zero force. (Fig. 2) The line of six electrons should be in a three-dimensional array. Figure 3 is an attempt at a rough representation of the nitrogen molecule in three dimensions. Each atom has one unshared pair of electrons. The air is about 79% nitrogen and about 20% oxygen. Oxygen is an active element. There would be no oxygen left in the air if it weren't for the green plants on land and at sea. Oxygen is their waste product. With so much oxygen intermixed with the nitrogen, it is remarkable that the air does not consume itself in one big fire. If I put air into a furnace and supply an electric arc, the air does not ignite. Only the molecules of air that are inside the spark react. A very small quantity of nitrogen oxides results. The reactions that form the oxides require more heat from the spark than they return to the surroundings. Any free nitrogen atoms that appear are more strongly attracted to other nitrogen atoms than to oxygen atoms. The reaction to produce one of the oxides is: N2 + O2 ------> 2NO The product, nitric oxide, is surprising in that it is considered to be triply bonded. The outer electrons of the free oxygen atom are arranged as two pairs and two singles. (Fig. 4) Usually only singles participate in bonding. There should be a maximum of two bonds per oxygen atom. This is another example that illustrates that formal rules have limited scope. In nitric oxide, three of the electrons of the oxygen atom are in the space between nitrogen and oxygen, and one pair and one single remain with the oxygen atom. (Fig. 5) The molecule of nitrogen is inert. The molecule of nitric oxide is chemically active. The oxygen atom has eight protons compared with the nitrogen atom with seven protons. In the nitric oxide molecule, there is an extra single electron on the oxygen atom.
The six bonding electrons have their point of zero force closer to the oxygen than to the nitrogen. The nitrogen molecule has no polarity. (Fig. 6) The oxygen end of the nitric oxide molecule is negative relative to the nitrogen end. The polarity of the molecule of NO makes it chemically active. The accounting of the bond strengths of the reactants and the products shows that energy from the surroundings must be transferred to the reactants to keep the reaction going. N2 + O2 -----> NO + NO Bond energies are as follows: the triple bond of N2, 1.57 x 10^-11 erg; the double bond of O2, 8.20 x 10^-12 erg; and the triple bond of NO, 1.04 x 10^-11 erg. The total for the reactants is 2.39 x 10^-11 erg. The total for the products is 2.08 x 10^-11 erg. Subtract 2.08 x 10^-11 from 2.39 x 10^-11 and get 3.10 x 10^-12. The energy of the reaction as written is 3.10 x 10^-12 erg. The energy of the reaction to produce one molecule of NO is half of that: 1.55 x 10^-12 erg. This is the amount of energy that the reaction receives from the surroundings per NO molecule. When the total bond energy of the reactants is greater than the total bond energy of the products, the transfer of energy is from the surroundings to the reaction. Any reaction that accepts energy from the environment is more likely to be going in the reverse direction. At a very high temperature there is a reaction between two molecules of nitric oxide that produces nitrogen and oxygen. NO + NO -----> N2 + O2 That reaction yields energy to the environment. When reactants and products can change roles, the reaction is reversible. When the reaction stops or gives the appearance of having stopped, the product of the dominant reaction is present in greater quantity than the product of the reverse reaction. Usually, the dominant reaction is the one that yields energy to the environment. In a gasoline engine, the fuel reacts with oxygen. Instead of admitting pure oxygen into the cylinder of the motor, the engine admits a mixture of air and fuel. As a result, some of the nitrogen of the air combines with some of the oxygen of the air and forms a minute quantity of nitric oxide in the high temperature of the cylinder. This escapes in the exhaust. With hundreds of thousands of cars in a metropolitan area, the quantity of NO in the air becomes significant, and it contributes to the smog. As a molecule in the atmosphere, nitrogen is quite inert at 298 K. With one exception, nitrogen reacts with nothing at room temperature: lithium metal reacts with nitrogen slowly. Lithium can do this because it has a very weak hold on its outer electron, and it has very little mass, so it is easily accelerated. When a nitrogen molecule is close to lithium metal, it is strongly attracted by the electrons of the metal. The electrons, being free to move in the metal, gather where they are attracted by the molecule of nitrogen. The accelerated nitrogen molecule interacts strongly with the metal surface. This increases the vibration of the two nitrogen nuclei. Loosely held electrons leave the lithium to join the nitrogen. The nitrogen atoms that are held together by the electrons that they share lose their grip on each other when other electrons are available. The combination of vibration and extra electrons causes some increase in the spacing of the two nitrogen atoms. Positive lithium ions are also attracted to the extra electrons. The Li+ ions leave the metal easily because they are weakly held, and easily accelerated.
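The bond-energy bookkeeping for N2 + O2 -----> 2NO above is mechanical enough to script. The Python sketch below is an illustrative aid, not part of the original text; it uses the bond energies quoted above, and a negative net energy means the reaction must draw energy from the surroundings, matching the figure of 1.55 x 10^-12 erg per NO molecule.

# Bond energies per molecule, in erg, as quoted in the text.
BOND_ENERGY = {
    "N2 (triple)": 1.57e-11,
    "O2 (double)": 8.20e-12,
    "NO (triple)": 1.04e-11,
}

def reaction_energy(reactant_bonds, product_bonds):
    """Total product bond energy minus total reactant bond energy.

    Positive means net energy is released to the surroundings;
    negative means energy must be drawn from the surroundings.
    """
    return sum(product_bonds) - sum(reactant_bonds)

# N2 + O2 -> 2 NO
e = reaction_energy(
    reactant_bonds=[BOND_ENERGY["N2 (triple)"], BOND_ENERGY["O2 (double)"]],
    product_bonds=[BOND_ENERGY["NO (triple)"]] * 2,
)
print(f"Net energy for the reaction: {e:.2e} erg")   # about -3.1e-12 erg
print(f"Per NO molecule: {e / 2:.2e} erg")           # about -1.55e-12 erg, absorbed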
The presence of positive charges helps to weaken the attraction between nitrogen atoms. The atoms separate. Both nitrogen atoms combine with lithium atoms to form lithium nitride. 6Li + N2 ----> 2Li3N In this compound, the extra electrons are closer to the nitrogen than to the lithium. The nitrogen becomes a negative ion, N3-, and the lithium becomes a positive ion, Li+. The compound takes the form of a crystal. Lithium is the only substance that reacts with nitrogen at room temperature, because it is the metal with the lightest atoms. Several others react at high temperatures. There are nitrides of magnesium, calcium, barium, strontium, zinc, cadmium, and thorium. Whenever a nitrogen atom in any compound gets an opportunity to combine with another nitrogen atom, it breaks away to form the nitrogen molecule. In some compounds, existing bonds prevent the escape of the nitrogen atom. The great attraction of nitrogen for nitrogen is in the form of the triple bond. Single bonds and double bonds between nitrogen atoms exist in some compounds in which the nitrogen atoms cannot break away to form the triple bond. Compare the bond energies of the single, double, and triple bonds of nitrogen to nitrogen. The bond energies are in the ratio 1 : 2.6 : 5.8. When there is a N-N single bond, each atom is also bonded to two other atoms, as in H2NNH2. This molecule is short-lived because of the weakness of the N-N bond. Another possibility is O=N-N=O, also terribly unstable. The kind of attachments that the N-N atoms have to other atoms and molecules changes the strength of the N-N bond. The N-N bond is slightly stronger than 2.70 x 10^-12 erg in hydrazobenzene (structural formula not shown). The compound azobenzene has doubly bonded nitrogen, and is fairly stable (structural formula not shown). When one pair of nitrogen atoms combines, they accelerate toward each other so strongly that they set up a vibration that constitutes a very high temperature. The excess energy is transferred in collisions with other atoms. The commotion may liberate other nitrogen atoms and provoke a chain reaction. After the explosion, one of the products is nitrogen gas. An example of an explosion that is driven by the tendency of nitrogen atoms to combine as nitrogen gas is the decomposition of glyceryl trinitrate, the active ingredient in dynamite (structural formula not shown). The molecule contains 9 N-O bonds. They are very weak bonds, requiring 3.33 x 10^-12 erg to break one bond. When one pair of nitrogen atoms combine, they yield 1.57 x 10^-11 erg to form one molecule. 1.57 x 10^-11 is 4.7 times 3.33 x 10^-12. The making of one nitrogen molecule yields enough energy to break almost 5 N-O bonds. The breaking of 5 N-O bonds supplies 5 free atoms of oxygen. When one pair of oxygen atoms combine to form an oxygen molecule, they yield 8.2 x 10^-12 erg. 8.2 x 10^-12 is 2.46 times 3.33 x 10^-12. The making of one oxygen molecule breaks more than 2 N-O bonds. The original 5 free oxygen atoms form at least 2 oxygen molecules. Therefore they break 4 N-O bonds. The total of 5 plus 4 N-O bonds broken releases all of the nitrogen atoms and 6 of the oxygen atoms of the glyceryl trinitrate molecule. It is obvious that the energy of one triple bond of nitrogen is enough to start a chain reaction that explodes any quantity of glyceryl trinitrate. Even a small quantity consists of a tremendous number of molecules.
To show the proportion of reactant molecules to product molecules, I use a balanced equation: 4C3H5O9N3 -----> 12CO2 + 6N2 + 10H2O + O2 I have to use 4 molecules of glyceryl trinitrate to get enough atoms to form one molecule of oxygen. It seems remarkable that the products come out in the right proportions. This is a reaction in which all bonds are broken at the same time. Suddenly there is a tremendously high temperature. The only bonds that can exist at that temperature are the nitrogen triple bond and the carbon monoxide triple bond. As the energy transfers to the surroundings, the temperature falls gradually. Bonds can form only when the temperature is low enough. The strongest bonds form first. The rest of the bonds take their turns in order of decreasing strength as the temperature continues to fall. The order is N2, CO2, H2O, O2. Even before N2 forms, CO forms. CO is carbon monoxide. The triple bond of carbon monoxide is the strongest bond in any molecule of two atoms, 1.78 x 10^-11 erg, as compared with the 1.57 x 10^-11 erg of the nitrogen triple bond. After N2 forms, oxygen atoms react with CO molecules to form CO2. Two double bonds, O=C=O, with a total of 2.48 x 10^-11 erg, yield more energy than one triple bond, CO, with 1.78 x 10^-11 erg. Water forms before oxygen molecules because the two O-H bonds in water yield more energy than the double bond in O=O. Picture a mixture of free atoms of hydrogen and oxygen. When two oxygen atoms are close to each other, they accelerate strongly according to their bond strength of 8.2 x 10^-12 erg. As long as the vibrating O=O system retains its energy, it cannot form a bond. As some of the energy is transferred to the surroundings, the vibration has less amplitude and the O=O bond is somewhat established. In the meantime, some hydrogen atoms vibrate with some oxygen atoms. As long as their energy is greater than the bond strength of 8.1 x 10^-12 erg, the H-O bonds can't be established. Should a second hydrogen atom start to vibrate with the same oxygen atom, the total vibration in the H-O-H system can be twice 8.1 x 10^-12 erg, or 16.2 x 10^-12 erg. That doesn't raise the temperature, because it is internal energy, not translational energy. However, it tends to tie down an oxygen atom where it is unavailable to other oxygen atoms. It also stabilizes the H-O-H system, because it is harder to increase the total energy of two H-O vibrations than one H-O vibration. Meanwhile, the O=O system can be broken by a slight addition of energy. The free oxygen atoms may find hydrogen atoms to vibrate with, instead of returning to oxygen atoms. On the other hand, knocking one hydrogen atom off H-O-H does not liberate any oxygen atom. Therefore H2O forms before O2. By the same reasoning, ammonia should form before N2. There is a possible reaction: N2 + 3H2 -----> 2NH3, nitrogen plus 3 hydrogen yields 2 ammonia. The reaction doesn't take place in this instance, because of the much greater likelihood that two nitrogen atoms find each other than that three successive hydrogen atoms find the same nitrogen atom before a second nitrogen atom arrives and disrupts the process. Whereas H-O-H has more total bond energy than O=O, H-N-H does not have as much total bond energy as triple-bonded nitrogen. The total bond energy of three N-H bonds is slightly more than the bond energy of triple-bonded nitrogen. Therefore, the reaction N2 + 3H2 -----> 2NH3 should take place. The problem is that the reaction rate is so slow that no noticeable product appears.
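Since the text emphasizes that the products of the glyceryl trinitrate equation come out in the right proportions, it is easy to verify the balance mechanically by counting atoms on each side. The following Python sketch is an illustrative check, not part of the original text; the simple parser handles only formulas without parentheses, which is all this equation needs.

from collections import Counter
import re

def parse_formula(formula: str) -> Counter:
    """Count atoms in a simple formula like 'C3H5O9N3' (no parentheses)."""
    counts = Counter()
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(number) if number else 1
    return counts

def side_counts(side):
    """Total atom counts for a list of (coefficient, formula) pairs."""
    total = Counter()
    for coeff, formula in side:
        for element, n in parse_formula(formula).items():
            total[element] += coeff * n
    return total

# 4 C3H5O9N3 -----> 12 CO2 + 6 N2 + 10 H2O + O2
reactants = [(4, "C3H5O9N3")]
products = [(12, "CO2"), (6, "N2"), (10, "H2O"), (1, "O2")]
print(side_counts(reactants))  # C: 12, H: 20, O: 36, N: 12
print(side_counts(products))   # the same totals, so the equation balances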
The problem is solved by the use of a catalyst. A catalyst is anything that speeds up a reaction without yielding energy of its own. For this reaction, the catalyst is iron in a spongy structure that has a large surface area. Possibly, the molecules of reactants are attracted to the surface and gain kinetic energy that way. There may be something else that the iron does also; otherwise iron would not be better than some other metal. Even with the catalyst, the production of ammonia is very slow. Merely raising the temperature doesn't help, because the nitrogen and hydrogen occupy more volume at higher temperatures. The space between molecules increases, and the nitrogen and hydrogen molecules can't find each other. To make matters worse, ammonia molecules continue to decompose because they don't have to find anything. They break apart when they interact with anything that supplies the bond energy. The problem is solved by running the reaction at very high pressure. The nitrogen and hydrogen molecules find each other more often when they are densely packed. Meanwhile the decomposition of ammonia does not speed up. Another important contrivance for increasing the yield removes ammonia selectively. The reaction chamber is a pipe through which the reactants circulate. At one point in the circuit, the pipe passes through a refrigerated region, where the temperature is below the boiling point of ammonia, 239.8 K. The ammonia condenses, and liquid ammonia is tapped off from the lowest point in the pipe. Removing the product increases the yield, because the product, when removed, does not interfere with the reactants. Furthermore, ammonia, which decomposes at high temperature in the reacting vessel, is stable at room temperature. The value of ammonia is its convenience as a starting material in the production of compounds that contain nitrogen. Before the availability of ammonia, chemists had to rely on living things for sources of nitrogen free of the formidable triple bond. Ammonia is an industrial source of nitric acid: 4NH3 + 5O2 -----> 4NO + 6H2O ammonia + oxygen -----> nitric oxide + water This reaction is carried out at 1100 K, using platinum as a catalyst. After NO is formed, it continues to react with excess oxygen: 2NO + O2 -----> 2NO2 nitric oxide + oxygen -----> nitrogen dioxide The product is cooled to room temperature and dissolved in water: 3NO2 + H2O -----> 2HNO3 + NO nitrogen dioxide + water -----> nitric acid + nitric oxide The nitric oxide is recycled. The nitric acid is sometimes used to produce ammonium nitrate, NH4NO3, for use as a fertilizer in agriculture. Ammonium nitrate has been known to explode at times. Among the amines are many that can be produced by the reaction of ammonia with an organic compound. For example, the production of methylamine from methyl alcohol, methanol: NH3 + CH3OH -----> CH3NH2 + H2O ammonia + methanol -----> methylamine + water With the help of ammonia, many reactions are facilitated. Furthermore, there are many compounds in which nitrogen is an essential component: amines, amides, azo compounds, and many salts. In living things, proteins, the building blocks of the cells, all contain nitrogen. Compounds that are assembled in living cells are hard to produce in the laboratory because, in addition to containing the elements in the correct proportions, the molecules of organic compounds have an exact configuration. Each atom must be in a particular place in the molecule.
The first organic compound synthesized in the laboratory from inorganic materials was urea (structural formula not shown). An accounting of bond strengths of the reactants and products shows that the reaction is in the right direction. The difficulty is to get gas molecules, CO2 and NH3, to combine correctly to produce solid urea and liquid water. There is a problem of organization. If I were to raise the temperature sufficiently, I could get a mixture of free atoms of carbon, nitrogen, oxygen, and hydrogen from CO2 and NH3. On cooling, the elements would never combine as urea. I must start with CO2 and NH3 and break no more bonds than necessary. (Fig. 11)
1. As a carbon dioxide molecule and an ammonia molecule get close to each other, all of their points of zero force for the electrons and nuclei change. The parts shift in chorus. I can't describe everything at once, although it all happens at once. I show the changes as if they were in sequence. No doubt there is something that leads and something that lags, but I can't tell which leads and which lags.
4. The shifting of negative charge away from the carbon atom leaves it more positive than usual. That attracts the electron pair of the nitrogen atom. A bond forms between the nitrogen and the carbon.
5. A second molecule of ammonia arrives and orients itself with its nitrogen atom toward the carbon atom.
6. At this point, heat and pressure are applied. One of the protons of the second ammonia molecule is drawn toward the singly bonded oxygen atom.
7. The presence of an extra proton on the oxygen atom draws electrons away from the carbon atom, toward the oxygen atom.
8. Water is removed, and urea remains.
The shape of the urea molecule is the result of all the parts falling into place, at their points of zero force. The accounting of bond energies shows that the reactants have less than 1% higher bond energy than the products. That is why heat and pressure are applied. It also explains why the reaction is reversible.
eso1048-en-au — Photo Release A Swarm of Ancient Stars 8 December 2010 We know of about 150 of the rich collections of old stars called globular clusters that orbit our galaxy, the Milky Way. This sharp new image of Messier 107, captured by the Wide Field Imager on the 2.2-metre telescope at ESO’s La Silla Observatory in Chile, displays the structure of one such globular cluster in exquisite detail. Studying these stellar swarms has revealed much about the history of our galaxy and how stars evolve. The globular cluster Messier 107, also known as NGC 6171, is a compact and ancient family of stars that lies about 21 000 light-years away. Messier 107 is a bustling metropolis: thousands of stars in globular clusters like this one are concentrated into a space that is only about twenty times the distance between our Sun and its nearest stellar neighbour, Alpha Centauri, across. A significant number of these stars have already evolved into red giants, one of the last stages of a star’s life, and have a yellowish colour in this image. Globular clusters are among the oldest objects in the Universe. And since the stars within a globular cluster formed from the same cloud of interstellar matter at roughly the same time — typically over 10 billion years ago — they are all low-mass stars, as lightweights burn their hydrogen fuel supply much more slowly than stellar behemoths. Globular clusters formed during the earliest stages in the formation of their host galaxies, and therefore studying these objects can give significant insights into how galaxies, and their component stars, evolve. Messier 107 has undergone intensive observations, being one of 160 stellar fields selected for the Pre-FLAMES Survey — a preliminary survey conducted between 1999 and 2002 using the 2.2-metre telescope at ESO’s La Silla Observatory in Chile to find suitable stars for follow-up observations with the VLT’s spectroscopic instrument FLAMES. Using FLAMES, it is possible to observe up to 130 targets at the same time, making it particularly well suited to the spectroscopic study of densely populated stellar fields, such as globular clusters. M107 is not visible to the naked eye, but, with an apparent magnitude of about eight, it can easily be observed from a dark site with binoculars or a small telescope. The globular cluster is about 13 arcminutes across, which corresponds to about 80 light-years at its distance, and it is found in the constellation of Ophiuchus, north of the pincers of Scorpius. Roughly half of the Milky Way’s known globular clusters are actually found in the constellations of Sagittarius, Scorpius and Ophiuchus, in the general direction of the centre of the Milky Way. This is because they are all in elongated orbits around the central region and are on average most likely to be seen in this direction. Messier 107 was discovered by Pierre Méchain in April 1782, and it was added to the list of seven Additional Messier Objects that were originally not included in the final version of Messier’s catalogue, which was published the previous year. On 12 May 1793, it was independently rediscovered by William Herschel, who was able to resolve this globular cluster into stars for the first time. But it was not until 1947 that this globular cluster finally took its place in Messier’s catalogue as M107, making it the most recent star cluster to be added to this famous list.
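The quoted physical size follows from the small-angle relation: physical size is approximately distance times angular size in radians. The quick Python sketch below is an illustrative cross-check, not part of the original release; it confirms that 13 arcminutes at 21 000 light-years is indeed about 80 light-years.

import math

def physical_size_ly(angular_size_arcmin: float, distance_ly: float) -> float:
    """Small-angle approximation: size = distance * angle (angle in radians)."""
    angle_rad = math.radians(angular_size_arcmin / 60.0)
    return distance_ly * angle_rad

# Messier 107: about 13 arcminutes across at about 21,000 light-years.
print(f"{physical_size_ly(13, 21_000):.0f} light-years")  # ~79, matching 'about 80'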
This image is composed from exposures taken through the blue, green and near-infrared filters by the Wide Field Imager (WFI) on the MPG/ESO 2.2-metre telescope at the La Silla Observatory in Chile. ESO, the European Southern Observatory, is the foremost intergovernmental astronomy organisation in Europe and the world’s most productive astronomical observatory. It is supported by 14 countries: Austria, Belgium, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Portugal, Spain, Sweden, Switzerland and the United Kingdom. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world’s most advanced visible-light astronomical observatory, and VISTA, the world’s largest survey telescope. ESO is the European partner of ALMA, a revolutionary astronomical telescope and the largest astronomical project in existence. ESO is currently planning a 42-metre European Extremely Large optical/near-infrared Telescope, the E-ELT, which will become “the world’s biggest eye on the sky”. About the Release: Type: Star : Grouping : Cluster : Globular. Facility: MPG/ESO 2.2-metre telescope.
Methodism Articles from Britannica encyclopedias for elementary and high school students. - Methodism - Children's Encyclopedia (Ages 8-11) Methodism is a branch of Protestant Christianity. It is based on the ideas of a man named John Wesley, who lived in the 1700s. At first Wesley only wanted to reform the Church of England, but his ideas soon led to the development of a new church. - Methodism - Student Encyclopedia (Ages 11 and up) The brothers John and Charles Wesley were sons of an Anglican clergyman (see Wesley). In 1728 John became a priest, and the following year he and Charles were both at Oxford University. They became members of a club of devout students who pledged themselves to regular Bible reading, attendance at Holy Communion, and visitation of prisoners in the local jails. Their carefully ordered pattern of life earned them the derisive name of Methodists from their fellow students. Their group was also humorously called the Holy Club, the Bible Bigots, and other uncomplimentary names.
Nanotechnology: Two new routes create specific types of single-walled carbon nanotubes with high purity Scientists have speculated for decades about the potential applications of single-walled carbon nanotubes (SWNTs). With their characteristic strength, flexibility, and conductivity, these nanomaterials that resemble rolled-up chicken wire might one day feature prominently in solar cells and miniaturized electronic circuits. But efforts to make them have always produced a mix of nanotubes with varying diameters and chiralities—the carbon-atom geometry that can make a nanotube behave like a metal or a semiconductor. Two groups provided possible solutions to this long-standing problem this year when they independently reported SWNT syntheses that produce a single type of nanotube with high purity. Yan Li of Peking University and coworkers grew SWNTs that are 92% pure, improving on the previous best of 55% (Nature 2014, DOI: 10.1038/nature13434). The tubes in the pure portion have a single chirality, bestowing the tiny cylinders with metallic properties. The key, Li says, was finding the “right recipe” for making a high-temperature tungsten-cobalt alloy nanocrystal catalyst to seed nanotube growth. Scientists led by Konstantin Amsharov of the Max Planck Institute for Solid State Research, in Germany, and Roman Fasel of the Swiss Federal Laboratories for Materials Science & Technology made one—and only one—type of SWNT by starting from a polycyclic aromatic hydrocarbon seed molecule (Nature 2014, DOI: 10.1038/nature13607). This precursor folds up into a nanotube cap when heated on a platinum surface. The nanotube then elongates as ethanol is added to provide a carbon source. The resulting nanotubes are metallic and free of defects. To capitalize on these discoveries, researchers need to next figure out how to scale up these syntheses and tune them to make pure SWNTs of different sizes and chiralities. A polycyclic aromatic hydrocarbon seed folds up into a cap when heated on a platinum surface. This cap dictates the chirality of the nanotube, approximately 2 nm in diameter, that grows from it. Credit: Juan Ramon Sanchez-Valencia
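Some background on what "chirality" pins down: a single-walled nanotube rolled from graphene is labeled by two integers (n, m), which fix its diameter and, by the standard electronic-structure rule, whether it behaves as a metal (n - m divisible by 3) or a semiconductor. The Python sketch below is an illustrative aid using these textbook formulas; it is not code from either paper, and the example indices are arbitrary.

import math

A_CC = 0.246  # graphene lattice constant in nm

def swnt_diameter_nm(n: int, m: int) -> float:
    """Diameter of an (n, m) single-walled carbon nanotube in nm."""
    return A_CC * math.sqrt(n**2 + n * m + m**2) / math.pi

def is_metallic(n: int, m: int) -> bool:
    """Standard rule: an (n, m) tube is metallic when n - m is divisible by 3."""
    return (n - m) % 3 == 0

for n, m in [(6, 6), (10, 10), (13, 0), (12, 8)]:
    kind = "metallic" if is_metallic(n, m) else "semiconducting"
    print(f"({n},{m}): {swnt_diameter_nm(n, m):.2f} nm, {kind}")

Running this shows, for example, that an armchair (10,10) tube is about 1.36 nm across and metallic, while a (12,8) tube of similar size is semiconducting, which is exactly why a synthesis that cannot control (n, m) yields an electronically mixed product.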
The Bureau of Engraving and Printing has quite the task of determining just how much money should be printed and distributed each year. Once that economically important number has been figured out, the fun part of shredding old bills to compensate for the new ones takes place, and the value of a dollar is balanced out. Have you ever wondered, though, just how much United States currency is circulating through the world right now? Money in the United States changes hands constantly. Each day, millions of U.S. dollars and coins are circulated throughout the economy. Visual Economics has put together a graphic, shown below, which illustrates the amount of money in circulation, broken down by number of bills. While the graphic is from 2008, the percentage breakdown of bills is very comparable today. A more recent number, provided by the Federal Reserve Statistical Release at the end of April 2010, shows that total USD currency in circulation was $878.8 billion. Many people believe that it is the Federal Reserve that prints our money; in actuality, the Bureau of Engraving and Printing (BEP) produces the paper currency, while the U.S. Mint produces the coinage. Further, up to two-thirds of U.S. currency in circulation worldwide is held outside of the United States. After the currency is minted, the coins and paper are sent to the Federal Reserve Banks (collectively, the “Fed”), twelve in total throughout the United States, and to some large banks within each Federal Reserve Bank district. It is the responsibility of the Fed to decide the amount of money in circulation. The Fed regulates the supply of money using:
- The Discount Rate – The interest rate the Fed charges financial institutions for short-term loans of reserves.
- Reserve Requirements – The percentage of deposits in demand deposit accounts that financial institutions must hold in reserve.
- Open Market Operations – The Fed buys or sells U.S. Government securities on the open market to influence short-term interest rates and the growth of the money and credit aggregates.
The Fed establishes monetary policy with the aim of sustaining economic growth. However, changes in the money supply impact interest rates as well as exchange rates. When the economy needs to be stimulated, or an increase in the growth rate of the money supply and credit is needed, the Fed steps in and buys U.S. government securities. This increases the amount of bank reserves. As a result, the federal funds rate, the rate on overnight loans between banks, decreases as banks are more willing to lend each other reserves. Other short-term rates also decrease, as the increase in the supply of loanable funds decreases the equilibrium rate for loans. Longer-term interest rates also decrease. The decrease in rates causes the dollar to depreciate in the foreign exchange market. The reduction in interest rates at home leads to capital outflow, as money flows out to take advantage of higher interest rates abroad. The opposite occurs when the Fed is attempting to “cool” down the economy and stem inflation (too much money chasing too few goods causes an increase in prices). In this situation, the Fed sells U.S. government securities. As a result, the amount of bank reserves decreases, in turn raising the federal funds rate and other long-term rates. The increase in rates causes the dollar to appreciate, as investors demand the dollar because of its attractive yield.
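To see why reserve requirements matter for the money supply, the classic textbook "simple money multiplier" caps deposit expansion at 1 divided by the reserve ratio. The Python sketch below is a deliberately stylized illustration, not a description of actual Fed operations; real-world money creation depends on bank and depositor behavior, and the 10% reserve ratio and $1 billion purchase are assumed example figures, not quoted policy values.

def simple_money_multiplier(reserve_requirement: float) -> float:
    """Textbook upper bound on deposit expansion: 1 / reserve ratio."""
    return 1.0 / reserve_requirement

def max_deposit_expansion(new_reserves: float, reserve_requirement: float) -> float:
    """Maximum new deposits supported by an injection of reserves."""
    return new_reserves * simple_money_multiplier(reserve_requirement)

# If the Fed buys $1 billion of securities and banks must hold 10% reserves,
# the banking system could in principle support up to $10 billion in new deposits.
print(f"${max_deposit_expansion(1e9, 0.10):,.0f}")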
Keeping the economy in check has always been a tough job for the U.S. Treasury, and the more money that is put into circulation, the more difficult the job becomes. While $878 billion in circulation is a ton of money, it may not be as much as you thought. Projections put the total at over $1 trillion in circulation before 2020, and if you’ve ever tried to visualize that amount of money, you know just how large that sum is.
Establishing a field station at Emiquon has significantly advanced on-site research and education about this vital ecosystem. In addition, the Therkildsen Field Station at Emiquon is facilitating “hands-on” learning in the field and in the laboratory for students of many ages and for their teachers. The Therkildsen Field Station will have:
- Scientific significance – Alarmingly large portions of oceans and fresh water are no longer livable due to increased nutrient loading from agricultural activities. Research at Therkildsen may discover principles of nutrient export that can be immediately applied to these problems, or may pave the way for future discoveries.
- Societal significance – Better management of floodplains will likely have significant effects on Gulf hypoxia, fisheries loss and species diversity. The Emiquon restoration will become a model for improved floodplain management.
- Historical significance – The Emiquon region has supported different communities for thousands of years. Until 1923, when levees were built, Thompson Lake was famous for its abundant wildlife and attracted outdoor enthusiasts from all over the world. The restoration is another chapter in the history of humans learning to live productively on the Illinois River floodplain.
Otherwise known as the Achaemenid Empire, the First Persian Empire was founded by the ruler Cyrus the Great in the 6th century BC. The Empire gradually expanded into the largest empire the world had seen at the time, controlling regions as far as the Indus River to the east and the Balkans and Macedonia to the west. Indeed, it is estimated that at its peak, the Persian Empire controlled 44% of the world’s population, a feat which remains unmatched! Modern-day regions which were under the Persian Empire’s control include Middle Eastern nations such as Iran, Iraq, Palestine, Israel and Lebanon, North African countries such as Egypt and Libya, and territories as far as Eastern Europe, including Armenia, Azerbaijan and Georgia. The Persians were able to exert control over these vast territories through the establishment of revolutionary infrastructure, which included a complex road system and postal system, in addition to the adoption of a single administrative language, Aramaic, which enhanced the empire’s sense of unity. The various monarchs of the Persian Empire were able to consolidate their control through a complex bureaucratic system, which included a number of institutions such as the military and various civil services. However, the Empire’s compartmentalization ended up contributing to its downfall, with the power of smaller local governments growing and threatening to undermine the authority of the Empire’s leader. As a result, finances and resources became concentrated on uprooting various rebellions, leaving the Empire in a weakened state. This weakness was eventually exploited by the iconic historical figure Alexander the Great, whose armies invaded Persia in 334 BC. Himself an admirer of Persia’s founder Cyrus the Great, Alexander introduced a number of Persian customs into Macedonian culture and ensured that respect for the Persian kings was maintained within his empire. Despite the disintegration of the Persian Empire, its culture thrived for hundreds of years, with Persia eventually restoring its power by the 2nd century BC. By Louis Cross
Posted: May 10, 2011 Microwave guiding of electrons (Nanowerk News) For the first time, scientists at MPQ have achieved guiding of electrons by purely electric fields. The investigation of the properties of electrons plays a key role in the understanding of the fundamental laws of nature. However, being extremely small and quick, electrons are difficult to control. Physicists led by Dr. Peter Hommelhoff, head of the Max Planck Research Group "Ultrafast Quantum Optics" at the Max Planck Institute of Quantum Optics (Garching near Munich), have now demonstrated efficient guiding of slow electrons by applying a microwave voltage to electrodes fabricated on a planar substrate ("Microwave Guiding of Electrons on a Chip"). This new technique of electron guiding – which resembles the guiding of light waves in optical fibres – promises a variety of applications, from guided matter-wave experiments to non-invasive electron microscopy. Fig. 1: Photograph of the guide with the electron source in the back. The white lines are the substrate, visible in the space between the individual electrodes. Electrons are emitted through a tiny hole of 20 µm diameter (not visible) in the centre of the gun. The guiding minimum forms 0.5 mm above the electrodes. Guided electrons follow the direction of the electrodes and turn to the left in the foreground of the picture. Electrons were the first elementary particles to reveal their wave-like properties and have therefore been of great importance in the development of the theory of quantum mechanics. Even now, the observation of electrons leads to new insight into the fundamental laws of physics. Measurements involving confined electrons have so far mainly been performed in so-called Penning traps, which combine a static magnetic field with an oscillating electric field. For a number of experiments with propagating electrons, like interferometry with slow electrons, it would be advantageous to confine the electrons by a purely electric field. This can be done in an alternating quadrupole potential, similar to the standard technique that is used for ion trapping. These so-called Paul traps are based on four electrodes to which a radiofrequency voltage is applied. The resulting field creates a driving force which keeps the particle in the centre of the trap. Wolfgang Paul received the Nobel Prize in physics for the invention of these traps in 1989. For several years now, scientists have been realizing Paul traps with microstructured electrodes on planar substrates, using standard microelectronic chip technology. Dr. Hommelhoff and his group have now applied this method for the first time to electrons. Since the mass of these point-like particles is only about a ten-thousandth of the mass of an ion, electrons react much faster to electric fields than the rather heavy ions. Hence, in order to guide electrons, the frequency of the alternating voltage applied to the electrodes has to be much higher than for the confinement of ions, and lies in the microwave range, at around 1 GHz. In the experiment, electrons are generated in a thermal source (in which a tungsten wire is heated like in a light bulb) and the emitted electrons are collimated into a parallel beam with an energy of a few electron volts. From there the electrons are injected into the "wave-guide", which is generated by five electrodes on a planar substrate to which an alternating voltage with a frequency of about 1 GHz is applied (see Figure 1).
This introduces an oscillating quadrupole field at a distance of half a millimetre above the electrodes, which confines the electrons in the radial direction. In the longitudinal direction there is no force acting on the particles, so that they are free to travel along the "guide tube". As the confinement in the radial direction is very strong, the electrons are forced to follow even small directional changes of the electrodes. Fig. 2: Overview of the experimental setup and signals of guided and unguided electrons. (a) Picture of the setup as seen from above, with the substrate in the centre. The last element of the electron gun is visible in the top left corner. Guided electrons follow the curved orange path from the source to the detector, whereas unguided electrons travel in straight lines over the substrate (indicated in blue). (b) Guided electrons result in a bright spot at the exit of the guide (indicated by an orange circle). (c) With no guiding voltage applied, the electrons hit the detector on the right side, where a diffuse spot forms. In order to make this effect more visible, the 37 mm long electrodes are bent into a curve with a 30-degree opening angle and a 40 mm bending radius. At the end of the structure the guided electrons are ejected and registered by a detector. As shown in Figure 2 (b), a bright spot caused by guided electrons appears on the detector right at the exit of the guide tube, which is situated in the left part of the picture. When the alternating field is switched off, a more diffusely illuminated area shows up on the right side (Figure 2 (c)). It is caused by electrons spreading out from the source and propagating on straight trajectories over the substrate. "With this fundamental experiment we were able to show that electrons can be efficiently guided by purely electric fields," says Dr. Hommelhoff. "However, as our electron source yields a rather poorly collimated electron beam, we still lose many electrons." In the future the researchers plan to combine the new microwave guide with an electron source based on field emission from an atomically sharp metal tip. These devices deliver electron beams with such strong collimation that their transverse component is limited only by the Heisenberg uncertainty principle. Under these conditions it should be feasible to investigate the individual quantum mechanical oscillations of the electrons in the radial potential of the guide. "The strong confinement of electrons observed in our experiment means that a "jump" from one quantum state to the neighbouring higher state requires a lot of energy and is therefore not very likely to happen," explains Johannes Hoffrogge, a doctoral student on the experiment. "Once a single quantum state is populated it will remain so for an extended period of time and can be used for quantum experiments." This would make it possible to conduct quantum physics experiments such as interferometry with guided slow electrons. Here the wave function of an electron is first split up; later on, its two components are brought together again, whereby characteristic superpositions of quantum states of the electron can be generated. But the new method could also be applied to a new form of electron microscopy. Source: Max Planck Institute of Quantum Optics
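The mass argument above (an electron being roughly a ten-thousandth of an ion's mass, hence needing a gigahertz rather than megahertz drive) can be made concrete with the standard quadrupole-trap stability parameter, q of order 4eV/(m r0^2 Omega^2): holding voltage and electrode geometry fixed, the drive frequency must scale as one over the square root of the particle mass. The Python sketch below is an order-of-magnitude illustration only; the 10 MHz ion drive and the calcium-40 ion are assumed typical values, not parameters from this experiment.

import math

M_ELECTRON = 9.109e-31  # electron mass, kg
AMU = 1.661e-27         # atomic mass unit, kg

def drive_frequency_for_electron(ion_mass_amu: float, ion_drive_hz: float) -> float:
    """Drive frequency giving an electron the same stability parameter q
    as an ion of the given mass, with voltage and electrode geometry held
    fixed. Since q ~ 1 / (m * Omega^2), Omega must scale as 1 / sqrt(m)."""
    ion_mass = ion_mass_amu * AMU
    return ion_drive_hz * math.sqrt(ion_mass / M_ELECTRON)

# A typical ion-trap drive of ~10 MHz for a calcium-40 ion maps to a few GHz
# for an electron, consistent with the ~1 GHz microwave drive described above.
print(f"{drive_frequency_for_electron(40, 10e6) / 1e9:.1f} GHz")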
(PhysOrg.com) -- A new investigation of a fossilized tracksite in southern Africa shows how early dinosaurs made on-the-fly adjustments to their movements to cope with slippery and sloping terrain. Differences in how early dinosaurs made these adjustments provide insight into the later evolution of the group. The research, conducted by researchers at the University of Michigan, Argentina's Universidad de Buenos Aires, and the Iziko South African Museum in Cape Town, South Africa, will be published online Oct. 6 in the open-access journal PLoS ONE. The Moyeni tracksite in Lesotho contains more than 250 footprints made by a variety of four-legged animals near the beginning of the Jurassic Period (about 200 million years ago), when the Earth's landmasses were united as Pangea. The site was first discovered and described in the 1960s and 1970s by French paleontologist Paul Ellenberger but has not since been examined in detail. In their re-analysis of the fossil tracksite, the researchers created a high-resolution map of the trackway surface using a combination of traditional mapping techniques and a 3D surface scanner, which recorded millimeter-scale detail. The digital record of the site will serve as an archive and will be the source of future research, said U-M's Jeffrey Wilson, an assistant professor in the Department of Geological Sciences and an assistant curator in the Museum of Paleontology. The researchers' re-interpretation of the geology of the tracksite indicated that the dinosaurs were walking across an ancient point bar that presented the animals with varying surface conditions. Based on the map, scans, and first-hand observations at the site, Wilson and coworkers Claudia Marsicano and Roger Smith interpreted the tracks to understand how dinosaurs adjusted to changes in terrain as they moved between a wet riverbed, a sloping bank, and the flat upper surface of the point bar. "Tracks and trackways bring animals to life in a way that their bones cannot, by providing a brief but vibrant record of a living, breathing animal as it moved through its environment," Wilson said. "While fossilized bones can provide a wealth of information about extinct animals' anatomy and physiology, inferences about their locomotion and behavior are necessarily indirect." Tracks, on the other hand, are a direct record of the animal's behavior. The disadvantage, though, is that tracks preserve the impression of nothing more than the sole of the foot, rendering trackmaker identification an approximation; it is very difficult to identify species with such limited information. "Suppose you ran down the beach with a group of friends and then tried to identify each person's footprints," Wilson said. "You might use characteristics like foot size and length and even the number of toes, if someone in the group happens to be missing one. We use similar indicators to figure out what we're looking at, and while we can't identify tracks down to the species level, we can distinguish major groups, such as plant-eating ornithischians and meat-eating theropods." When they analyzed the tracks, the researchers determined that ornithischians changed their way of walking as surface conditions changed. In the river bed, they crouched low, adopted a sprawling four-legged stance, and crept along flat-footed, dragging their feet. On the slope, they narrowed their stance, still walking on all fours but picking up their feet. Once they reached the flat, stable ground on top, they switched to walking on two legs.
In contrast, the theropod that crossed the surface didn't vary its posture or gait. Remaining upright on two legs, it used the claws on its toes to grip slippery surfaces. "The tracksite is a natural laboratory," said Smith. "We have a record of how different animals reacted to the same set of ground conditions." The different walking styles also foreshadow evolutionary trends in the two dinosaur lines, Wilson said. Three separate times in their evolutionary history, ornithischians switched from walking mainly on two legs to walking exclusively on all four. "It was thought that early in their evolutionary history, they had the capacity to do both, but at Moyeni they were caught in the act, and we can analyze how and perhaps why they did it," Marsicano said. Theropods, on the other hand, never gave up their two-legged stance. But because their lineage is believed to have given rise to birds, the possibility that their gripping claws played a key role is interesting to consider. "One idea about the origins of flight is that the progenitors of birds learned to fly by flapping their wings while climbing inclined surfaces," Wilson said. "In that scenario, the ability to grip a surface with claws is important." More information: "Dynamic locomotor capabilities revealed by early dinosaur trackmakers from southern Africa", dx.plos.org/10.1371/journal.pone.0007331. Source: University of Michigan
Contrary to their undeserved reputation, only six of the sixty-seven species and subspecies of snakes in Florida are venomous. They live in just about any conceivable habitat, from coastal mangroves to freshwater wetlands and dry uplands; it is therefore probable that you will encounter them occasionally. Snakes are reptiles, just like lizards, turtles and alligators, but many people fear them more than any other animals. Snakes are strictly carnivores and play an important role in our ecology, especially because they keep in check the rodents which destroy our crops and carry diseases that affect humans. About half of our snakes bear live young, while the others lay eggs; newly born snakes usually appear by late summer. There are two types of venomous snakes in Florida: the pit vipers, which include the diamondback rattlesnake, canebrake rattlesnake, pygmy rattlesnake, cottonmouth, and copperhead; and a second group represented by the coral snake. The pit vipers are identified by their facial pits, one located between the eye and nostril on each side of the head. They also have an elliptical eye pupil and a broad, V-shaped head. Their venom is haemotoxic: it destroys the red blood cells and the walls of the blood vessels of the victim. The coral snake has a neurotoxic venom, which acts on the nervous system, paralysing its victims. Diamondback rattlesnake: The largest and deadliest venomous snake in North America can be recognized by the yellow-bordered, diamond-shaped markings on its back and the rattles on the end of its tail. When disturbed, the rattler assumes a defensive position with the body coiled upon itself and the head and neck raised in an S-position, from which it will strike its enemy. The head is wider than the neck, and the mouth has the typical fangs, lying folded against the roof of the mouth. Their food is mostly wild rabbits and cotton rats. They can grow to about eight feet in length and can be found state-wide. Canebrake rattlesnake: A large snake, not as common as the diamondback. It has a pinkish-buff color with sooty black bands and a rusty stripe down the middle of the back. The tail is brown to black and terminates in a rattle. They usually measure about four to five feet in length and are found in northern Florida and as far south as Alachua County. Pygmy rattlesnake: Also called the ground rattler, it is fairly common in Florida. It is gray in color and marked with rounded, dusky spots. Reddish spots alternate with the black along the middle of the back, from the base of the head all the way to the tail. It feeds on frogs and small rodents and can measure up to two feet in length. Cottonmouth moccasin: Also called the water moccasin, it can be found in every county of Florida, usually around stream banks, in swamps, and along the margins of lakes. Its color varies from olive-brown to black, with or without dark crossbands on the body. The head is wider than the neck, with a dark band extending from the eye to the rear of the jaw; the tail does not have rattles. When disturbed, it cocks its head upwards and opens its mouth wide to reveal the whitish interior lining which gives this snake its name. It feeds on fish, frogs, snakes, lizards and small mammals. They average about 3 1/2 feet in length. Several kinds of harmless water snakes are often mistaken for cottonmouths. Copperhead: A medium-sized snake, pinkish tan with reddish-brown crossbands.
These bands are wide along the sides and narrow along the back, forming a shape resembling an hourglass. It is found only in the Apalachicola River drainage of Gadsden, Liberty, Calhoun and Jackson Counties. Its habitat is fields and hammocks, and it is fairly rare within its range. The average length is 2 1/2 feet. Coral snake: A fairly small snake, usually less than two feet in length, with a pattern of red, yellow, and black rings. Its characteristic black nose distinguishes this snake from the red-nosed, non-venomous scarlet king snake and northern scarlet snake. The red rings of the coral snake border the yellow rings, while the red rings of the king snake border the black.
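The ring-order rule in the paragraph above lends itself to a tiny worked example. The following sketch is only a toy illustration of that rule of thumb (the function name and list representation are invented for the demo); it is no substitute for field-safe identification.

def looks_like_coral_snake(rings):
    # Toy check of the Florida ring-order rule: in a coral snake the red
    # rings border yellow; in the harmless scarlet king snake they border black.
    for i, color in enumerate(rings):
        if color == "red":
            neighbors = {rings[j] for j in (i - 1, i + 1) if 0 <= j < len(rings)}
            if "yellow" in neighbors:
                return True          # red touching yellow: coral snake pattern
    return False

print(looks_like_coral_snake(["black", "yellow", "red", "yellow", "black"]))  # True
print(looks_like_coral_snake(["red", "black", "yellow", "black", "red"]))     # False (king snake pattern)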
Morse code is a method of transmitting text information as a series of on-off tones, lights, or clicks that can be directly understood by a skilled listener or observer without special equipment. The International Morse Code encodes the ISO basic Latin alphabet, some extra Latin letters, the Arabic numerals and a small set of punctuation and procedural signals as standardized sequences of short and long signals called "dots" and "dashes", or "dits" and "dahs". Because many non-English natural languages use more than the 26 Roman letters, extensions to the Morse alphabet exist for those languages. Each character (letter or numeral) is represented by a unique sequence of dots and dashes. The duration of a dash is three times the duration of a dot. Each dot or dash is followed by a short silence, equal to the dot duration. The letters of a word are separated by a space equal to three dots (one dash), and the words are separated by a space equal to seven dots. The dot duration is the basic unit of time measurement in code transmission. For efficiency, the length of each character in Morse is approximately inversely proportional to its frequency of occurrence in English. Thus, the most common letter in English, the letter "E", has the shortest code, a single dot. Morse code is most popular among amateur radio operators, although it is no longer required for licensing in most countries. Pilots and air traffic controllers usually need only a cursory understanding. Aeronautical navigational aids, such as VORs and NDBs, constantly identify themselves in Morse code. Compared to voice, Morse code is less sensitive to poor signal conditions, yet still comprehensible to humans without a decoding device. Morse is therefore a useful alternative to synthesized speech for sending automated data to skilled listeners on voice channels. Many amateur radio repeaters, for example, identify with Morse, even though they are used for voice communications. For emergency signals, Morse code can be sent by way of improvised sources that can be easily "keyed" on and off, making it one of the simplest and most versatile methods of telecommunication. The most common distress signal is SOS (three dots, three dashes and three dots), internationally recognized by treaty. Beginning in 1836, the American artist Samuel F. B. Morse, the American physicist Joseph Henry, and Alfred Vail developed an electrical telegraph system. This system sent pulses of electric current along wires, which controlled an electromagnet located at the receiving end of the telegraph system. A code was needed to transmit natural language using only these pulses and the silence between them. Morse therefore developed the forerunner to modern International Morse code. In 1837, William Cooke and Charles Wheatstone in England began using an electrical telegraph that also used electromagnets in its receivers. However, in contrast with any system of making sounds of clicks, their system used pointing needles that rotated above alphabetical charts to indicate the letters that were being sent. In 1841, Cooke and Wheatstone built a telegraph that printed the letters from a wheel of typefaces struck by a hammer. This machine was based on their 1840 telegraph and worked well; however, they failed to find customers for this system and only two examples were ever built.
On the other hand, the three Americans' system for telegraphy, which was first used in about 1844, was designed to make indentations on a paper tape when electric currents were received. Morse's original telegraph receiver used a mechanical clockwork to move a paper tape. When an electrical current was received, an electromagnet engaged an armature that pushed a stylus onto the moving paper tape, making an indentation on the tape. When the current was interrupted, a spring retracted the stylus, and that portion of the moving tape remained unmarked. The Morse code was developed so that operators could translate the indentations marked on the paper tape into text messages. In his earliest code, Morse had planned to only transmit numerals, and use a dictionary to look up each word according to the number which had been sent. However, the code was soon expanded by Alfred Vail to include letters and special characters, so it could be used more generally. Vail determined the frequency of use of letters in the English language by counting the movable type he found in the type-cases of a local newspaper in Morristown. The shorter marks were called "dots", and the longer ones "dashes", and the letters most commonly used were assigned the shorter sequences of dots and dashes. In the original Morse telegraphs, the receiver's armature made a clicking noise as it moved in and out of position to mark the paper tape. The telegraph operators soon learned that they could translate the clicks directly into dots and dashes, and write these down by hand, thus making the paper tape unnecessary. When Morse code was adapted to radio communication, the dots and dashes were sent as short and long pulses. It was later found that people became more proficient at receiving Morse code when it is taught as a language that is heard, instead of one read from a page. To reflect the sounds of Morse code receivers, the operators began to vocalize a dot as "dit", and a dash as "dah". Dots which are not the final element of a character became vocalized as "di". For example, the letter "c" was then vocalized as "dah-di-dah-dit". In the 1890s, Morse code began to be used extensively for early radio communication, before it was possible to transmit voice. In the late nineteenth and early twentieth century, most high-speed international communication used Morse code on telegraph lines, undersea cables and radio circuits. In aviation, Morse code in radio systems started to be used on a regular basis in the 1920s. Although previous transmitters were bulky and the spark gap system of transmission was difficult to use, there had been some earlier attempts. In 1910 the U.S. Navy experimented with sending Morse from an airplane. That same year a radio on the airship America had been instrumental in coordinating the rescue of its crew. Zeppelin airships equipped with radio were used for bombing and naval scouting during World War I, and ground-based radio direction finders were used for airship navigation. Allied airships and military aircraft also made some use of radiotelegraphy. However, there was little aeronautical radio in general use during World War I, and in the 1920s there was no radio system used by such important flights as that of Charles Lindbergh from New York to Paris in 1927. Once he and the Spirit of St. Louis were off the ground, Lindbergh was truly alone and incommunicado. 
On the other hand, when the first airplane flight was made from California to Australia in 1928 on the Southern Cross, one of its four crewmen was its radio operator, who communicated with ground stations via radio telegraph. Beginning in the 1930s, both civilian and military pilots were required to be able to use Morse code, both for use with early communications systems and for identification of navigational beacons, which transmitted continuous two- or three-letter identifiers in Morse code. Aeronautical charts show the identifier of each navigational aid next to its location on the map. Radio telegraphy using Morse code was vital during World War II, especially in carrying messages between the warships and the naval bases of the belligerents. Long-range ship-to-ship communication was by radio telegraphy, using encrypted messages, because the voice radio systems on ships then were quite limited in both their range and their security. Radiotelegraphy was also extensively used by warplanes, especially by long-range patrol planes that were sent out by these navies to scout for enemy warships, cargo ships, and troop ships. In addition, rapidly moving armies in the field could not have fought effectively without radiotelegraphy, because they moved more rapidly than telegraph and telephone lines could be erected. This was seen especially in the blitzkrieg offensives of the Nazi German Wehrmacht in Poland, Belgium, France (in 1940), the Soviet Union, and in North Africa; by the British Army in North Africa, Italy, and the Netherlands; and by the U.S. Army in France and Belgium (in 1944), and in southern Germany in 1945. Morse code was used as an international standard for maritime distress until 1999, when it was replaced by the Global Maritime Distress and Safety System. When the French Navy ceased using Morse code on January 31, 1997, the final message transmitted was "Calling all. This is our last cry before our eternal silence." In the United States the final commercial Morse code transmission was on July 12, 1999, signing off with Samuel Morse's original 1844 message, "What hath God wrought", and the prosign "SK". The United States Coast Guard has ceased all use of Morse code on the radio, and no longer monitors any radio frequencies for Morse code transmissions, including the international medium frequency (MF) distress frequency of 500 kHz. However, the Federal Communications Commission still grants commercial radiotelegraph operator licenses to applicants who pass its code and written tests. Licensees have reactivated the old California coastal Morse station KPH and regularly transmit from the site under either this call sign or as KSM. Similarly, a few US museum ship stations are operated by Morse enthusiasts. Morse code speed is measured in words per minute (wpm) or characters per minute (cpm). Characters have differing lengths because they contain differing numbers of dots and dashes. Consequently, words also have different lengths in terms of dot duration, even when they contain the same number of characters. For this reason, a standard word is helpful to measure operator transmission speed. "PARIS" and "CODEX" are two such standard words. Operators skilled in Morse code can often understand ("copy") code in their heads at rates in excess of 40 wpm. International contests in code copying are still occasionally held. In July 1939, at a contest in Asheville, North Carolina in the United States, Ted R. McElroy set a still-standing record for Morse copying, 75.2 wpm.
William Pierpont N0HFF also notes that some operators may have passed 100 wpm. By this time they are "hearing" phrases and sentences rather than words. The fastest speed ever sent by a straight key was achieved in 1942 by Harry Turner W9YZE (d. 1992), who reached 35 wpm in a demonstration at a U.S. Army base. To accurately compare code copying speed records of different eras, it is useful to keep in mind that different standard words (50 dot durations versus 60 dot durations) and different interword gaps (5 dot durations versus 7 dot durations) may have been used when determining such speed records. For example, speeds run with the CODEX standard word and the PARIS standard may differ by up to 20%. Today among amateur operators there are several organizations that recognize high-speed code ability, one group consisting of those who can copy Morse at 60 wpm. Also, Certificates of Code Proficiency are issued by several amateur radio societies, including the American Radio Relay League. Its basic award starts at 10 wpm, with endorsements as high as 40 wpm, and is available to anyone who can copy the transmitted text. Members of the Boy Scouts of America may put a Morse interpreter's strip on their uniforms if they meet the standards for translating code at 5 wpm. Morse code has been in use for more than 160 years – longer than any other electrical coding system. What is called Morse code today is actually somewhat different from what was originally developed by Vail and Morse. The modern International Morse code, or continental code, was created by Friedrich Clemens Gerke in 1848 and was initially used for telegraphy between Hamburg and Cuxhaven in Germany. Gerke changed nearly half of the alphabet and all of the numerals, resulting substantially in the modern form of the code. After some minor changes, International Morse Code was standardized at the International Telegraphy Congress in 1865 in Paris and was later made the standard by the International Telecommunication Union (ITU). Morse's original code specification, largely limited to use in the United States and Canada, became known as American Morse code or railroad code. American Morse code is now seldom used except in historical re-enactments. In aviation, instrument pilots use radio navigation aids. To ensure that the stations the pilots are using are serviceable, the stations all transmit a short set of identification letters (usually a two-to-five-letter version of the station name) in Morse code. Station identification letters are shown on air navigation charts. For example, the VOR based at Manchester Airport in England is abbreviated as "MCT", and MCT in Morse code is transmitted on its radio frequency. In some countries, during periods of maintenance, the facility may radiate a T-E-S-T code (— · ··· —) or the code may be removed, which tells pilots and navigators that the station is unreliable. In Canada, the identification is removed entirely to signify that the navigation aid is not to be used. In the aviation service, Morse is typically sent at a very slow speed of about 5 words per minute. In the U.S., pilots do not actually have to know Morse to identify the transmitter because the dot/dash sequence is written out next to the transmitter's symbol on aeronautical charts. Some modern navigation receivers automatically translate the code into displayed letters.
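To see where that roughly 20% figure comes from, the short sketch below counts dot units in the two standard words using the timing rules stated earlier (dot = 1 unit, dash = 3, gap within a character = 1, gap between letters = 3, gap between words = 7):

MORSE = {"P": ".--.", "A": ".-", "R": ".-.", "I": "..", "S": "...",
         "C": "-.-.", "O": "---", "D": "-..", "E": ".", "X": "-..-"}

def word_units(word):
    # Dot units for one standard word, including the trailing word gap.
    total = 0
    for i, ch in enumerate(word):
        total += sum(1 if s == "." else 3 for s in MORSE[ch])  # dits and dahs
        total += len(MORSE[ch]) - 1                            # gaps inside the character
        total += 3 if i < len(word) - 1 else 7                 # letter gap, or final word gap
    return total

paris, codex = word_units("PARIS"), word_units("CODEX")
print(paris, codex)                # 50 60
print(f"{codex / paris - 1:.0%}")  # 20% -- a CODEX "word" takes 20% longer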
International Morse code today is most popular among amateur radio operators, where it is used as the pattern to key a transmitter on and off in the radio communications mode commonly referred to as "continuous wave" or "CW" – so called to distinguish it from spark transmissions, not because the transmission is continuous. Other keying methods are available in radio telegraphy, such as frequency-shift keying. The original amateur radio operators used Morse code exclusively, since voice-capable radio transmitters did not become commonly available until around 1920. Until 2003 the International Telecommunication Union mandated Morse code proficiency as part of the amateur radio licensing procedure worldwide. However, the World Radiocommunication Conference of 2003 made the Morse code requirement for amateur radio licensing optional, and many countries subsequently removed it from their licensing requirements. Until 1991 a demonstration of the ability to send and receive Morse code at a minimum of five words per minute (wpm) was required to receive an amateur radio license for use in the United States from the Federal Communications Commission; after that, demonstration of this ability was still required only for the privilege of using the HF bands. Until 2000 proficiency at the 20 wpm level was required to receive the highest level of amateur license (Amateur Extra Class); effective April 15, 2000, the FCC reduced the Extra Class requirement to five wpm. Finally, effective on February 23, 2007, the FCC eliminated the Morse code proficiency requirements from all amateur radio licenses. While voice and data transmissions are limited to specific amateur radio bands under U.S. rules, Morse code is permitted on all amateur bands – LF, MF, HF, VHF, and UHF. In some countries, certain portions of the amateur radio bands are reserved for transmission of Morse code signals only. The relatively limited speed at which Morse code can be sent led to the development of an extensive number of abbreviations to speed communication. These include prosigns, Q codes, and a set of Morse code abbreviations for typical message components. For example, CQ is broadcast to be interpreted as "seek you" (I'd like to converse with anyone who can hear my signal). OM (old man), YL (young lady) and XYL ("ex-YL" – wife) are common abbreviations. YL or OM is used by an operator when referring to the other operator; XYL or OM is used when referring to the operator's spouse. QTH is "location" ("My QTH" is "My location"). The use of abbreviations for common terms permits conversation even when the operators speak different languages. Although the traditional telegraph key (straight key) is still used by some amateurs, the use of mechanical semi-automatic keyers (known as "bugs") and of fully automatic electronic keyers is prevalent today. Software is also frequently employed to produce and decode Morse code radio signals. Through May 2013, the First, Second, and Third Class (commercial) Radiotelegraph Licenses, with code tests based upon the CODEX standard word, were still being issued in the United States by the Federal Communications Commission. The First Class license required 20 WPM code group and 25 WPM text code proficiency; the others required a 16 WPM code group test (five-letter blocks sent as a simulation of receiving encrypted text) and a 20 WPM code text (plain language) test. It was also necessary to pass written tests on operating practice and electronics theory.
A unique additional demand for the First Class was a requirement of a year of experience for operators of shipboard and coast stations using Morse. This allowed the holder to be chief operator on board a passenger ship. However, since 1999 the use of satellite and very-high-frequency maritime communications systems (GMDSS) has made them obsolete. (By that point, meeting the experience requirement for the First was very difficult.) Currently only one class of license, the Radiotelegraph Operator Certificate, is issued. This is granted either when the tests are passed or as the Second and First are renewed and become this lifetime license. For new applicants it requires passing a written examination on electronic theory, as well as 16 WPM code group and 20 WPM text tests. However, the code exams are currently waived for holders of Amateur Extra Class licenses who obtained their operating privileges under the old 20 WPM test requirement. Radio navigation aids such as VORs and NDBs for aeronautical use broadcast identifying information in the form of Morse code, though many VOR stations now also provide voice identification. Warships, including those of the U.S. Navy, have long used signal lamps to exchange messages in Morse code. Modern use continues, in part, as a way to communicate while maintaining radio silence. Submarine periscopes include a signal lamp. An important application is signalling for help through SOS, "· · · — — — · · ·". This can be sent many ways: keying a radio on and off, flashing a mirror, toggling a flashlight, and similar methods. SOS is not three separate characters; rather, it is the prosign SOS, and is keyed without gaps between characters. Morse code has been employed as an assistive technology, helping people with a variety of disabilities to communicate. Morse can be sent by persons with severe motion disabilities, as long as they have some minimal motor control. An original solution to the problem that caretakers have to learn to decode has been an electronic typewriter with the codes written on the keys. Codes were sung by users; see the voice typewriter employing Morse, or VOTEM (Newell and Nabarro, 1968). Morse code can also be translated by computer and used in a speaking communication aid. In some cases this means alternately blowing into and sucking on a plastic tube ("sip-and-puff" interface). An important advantage of Morse code over row-column scanning is that, once learned, it does not require looking at a display. Also, it appears faster than scanning. People with severe motion disabilities in addition to sensory disabilities (e.g. people who are also deaf or blind) can receive Morse through a skin buzzer. In one case reported in the radio amateur magazine QST, an old shipboard radio operator who had a stroke and lost the ability to speak or write could communicate with his physician (a radio amateur) by blinking his eyes in Morse. Another example occurred in 1966 when prisoner of war Jeremiah Denton, brought on television by his North Vietnamese captors, Morse-blinked the word TORTURE. In these two cases interpreters were available to understand those series of eye-blinks.
International Morse code is composed of five elements: a short mark, dot or "dit", one unit long; a longer mark, dash or "dah", three units long; a gap between the dots and dashes within a character, one unit long; a short gap between letters, three units long; and a medium gap between words, seven units long. Morse code can be transmitted in a number of ways: originally as electrical pulses along a telegraph wire, but also as an audio tone, a radio signal with short and long tones, or as a mechanical, audible or visual signal (e.g. a flashing light) using devices like an Aldis lamp or a heliograph, a common flashlight, or even a car horn. Some mine rescues have used pulling on a rope – a short pull for a dot and a long pull for a dash. Morse code is transmitted using just two states (on and off). Historians have called it the first digital code. Strictly speaking it is not binary, as there are five fundamental elements (see quinary). However, this does not mean Morse code cannot be represented as a binary code. In an abstract sense, this is the function that telegraph operators perform when transmitting messages. Working from the above definitions and further defining a "unit" as a bit, we can visualize any Morse code sequence as a combination of the following five elements: a dot as "1"; a dash as "111"; an intra-character gap as "0"; a gap between letters as "000"; and a gap between words as "0000000". Note that this method assumes that dits and dahs are always separated by dot-duration gaps, and that gaps are always separated by dits and dahs. Morse messages are generally transmitted by a hand-operated device such as a telegraph key, so there are variations introduced by the skill of the sender and receiver – more experienced operators can send and receive at faster speeds. In addition, individual operators differ slightly, for example using slightly longer or shorter dashes or gaps, perhaps only for particular characters. This is called their "fist", and experienced operators can recognize specific individuals by it alone. A good operator who sends clearly and is easy to copy is said to have a "good fist". A "poor fist" is a characteristic of sloppy or hard-to-copy Morse code. An operator must choose two speeds when sending a message in Morse code. First, the operator must choose the character speed, or how fast each individual letter is sent. Second, the operator must choose the text speed, or how fast the entire message is sent. Both speeds can be the same, but often they are not. An operator could generate the characters at a high rate, but by increasing the space between the letters, send the message more slowly. Using different character and text speeds is, in fact, a common practice, and is used in the Farnsworth method of learning Morse code. Because Morse code is usually hand generated, an operator may retain a certain comfortable character speed but vary the text speed by varying the spacing between the letters. All Morse code elements depend on the dot length. A dash is the length of 3 dots, and spacings are specified in numbers of dot lengths. Because of this, some method to standardize the dot length is useful. A simple way to do this is to send the same five-character word over and over for one minute at a speed that will allow the operator to send the correct number of words in one minute. If, for example, the operator wanted a character speed of 13 words per minute, the operator would send the five-character word 13 times in exactly one minute. From this, the operator would arrive at a dot length necessary to produce 13 words per minute while meeting all the standards. The word one chooses determines the dot length. A word with more dots, like PARIS, would be sent with longer dots to fill one minute. A word with more dashes, like CODEX, would produce a shorter dot length so everything would fit into one minute.
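A small sketch of the binary representation just described, rendering text as the on/off unit stream (one character per dot unit):

MORSE = {"M": "--", "O": "---", "R": ".-.", "S": "...", "E": "."}  # partial table for the demo

def to_bits(text):
    # dot -> "1", dash -> "111", intra-character gap -> "0",
    # letter gap -> "000", word gap -> "0000000".
    words = []
    for word in text.upper().split():
        letters = ["0".join("1" if s == "." else "111" for s in MORSE[ch])
                   for ch in word]
        words.append("000".join(letters))
    return "0000000".join(words)

bits = to_bits("MORSE")
print(bits)        # the on/off stream, one character per dot unit
print(len(bits))   # 43 units for the word MORSE (without its trailing word gap)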
The words PARIS and CODEX are frequently used as Morse code standard words. Using the word PARIS as a standard, the number of dot units is 50, and a simple calculation shows that the dot length at 20 words per minute is 60 milliseconds. Using the word CODEX, with 60 dot units, the dot length at 20 words per minute is 50 milliseconds. Because Morse code is usually sent by hand, it is unlikely that an operator could be that precise with the dot length, and the individual characteristics and preferences of the operators usually override the standards. For commercial radiotelegraph licenses in the United States, the Federal Communications Commission specifies tests for Morse code proficiency in words per minute of text speed. The commission does not specify character speeds. For proficiency at 20 words per minute, it would be impossible to generate characters at less than that speed. If, for example, the characters were generated at a rate to produce 5 words in one minute, the examiner could not send 20 words in one minute. Conversely, the examiner could generate characters at a rate to produce 24 words per minute, but increase the character spacing to send the message at 20 words per minute. The regulation, however, only specifies the number of words to be received in one minute. While the Federal Communications Commission no longer requires Morse code for amateur radio licenses, the old requirements were similar to the requirements for commercial radiotelegraph licenses. There was no requirement for any particular character speed, but the examinee had to send and receive a message at a specified text speed. A difference between amateur radio licenses and commercial radiotelegraph licenses is that commercial operators must be able to receive code groups of random characters along with plain-language text. For each class of license, the code group speed requirement is slower than the plain-language text requirement. For example, for the Radiotelegraph Operator License, the examinee must pass a 20 word per minute plain-text test and a 16 word per minute code-group test. Based upon a 50 dot duration standard word such as PARIS, the time for one dot duration, or one unit, can be computed by the formula

T = 1200 / W = 6000 / C

where T is the unit time, or dot duration, in milliseconds, W is the speed in wpm, and C is the speed in cpm. Below is an illustration of timing conventions. The phrase "MORSE CODE", in Morse code format, would normally be written something like this, where − represents dahs and · represents dits:

−− −−− ·−· ··· ·   −·−· −−− −·· ·
M  O   R   S   E   C    O   D   E

Next is the exact conventional timing for this phrase, with = representing "signal on" and . representing "signal off", each for the time length of exactly one dit:

M         O             R         S       E       C             O             D         E
===.===...===.===.===...=.===.=...=.=.=...=.......===.=.===.=...===.===.===...===.=.=...=

A single . is the symbol space within a character, ... is the letter space, and ....... is the word space. Morse code is often spoken or written with "dah" for dashes, "dit" for dots located at the end of a character, and "di" for dots located at the beginning or internally within the character. Thus, the Morse code sequence above is spoken: Dah-dah dah-dah-dah di-dah-dit di-di-dit dit, Dah-di-dah-dit dah-dah-dah dah-di-dit dit.
Note that there is little point in learning to read written Morse as above; rather, the sounds of all of the letters and symbols need to be learned, for both sending and receiving. Morse code cannot be treated as a classical radioteletype (RTTY) signal when it comes to calculating a link margin or a link budget, for the simple reason that it possesses variable-length dots and dashes as well as variable timing between letters and words. For the purposes of information theory and channel coding comparisons, the word PARIS is used to determine Morse code's properties because it has an even number of dots and dashes. Since transmitted Morse code is essentially an AM signal (even in on/off keying mode), assumptions about the signal can be made with respect to similarly timed RTTY signalling. Because Morse code transmissions employ an on-off keyed radio signal, it requires less complex transmission equipment than other forms of radio communication. Morse code is usually received as a medium-pitched audio tone (600–1000 Hz), so transmissions are easier to copy than voice through the noise on congested frequencies, and it can be used in very high noise / low signal environments. The transmitted power is concentrated into a limited bandwidth, so narrow receiver filters can be used to suppress interference from adjacent frequencies. The narrow signal bandwidth also takes advantage of the natural aural selectivity of the human brain, further enhancing weak signal readability. This efficiency makes CW extremely useful for DX (distance) transmissions, as well as for low-power transmissions (commonly called "QRP operation", from the Q-code for "reduce power"). The ARRL has a readability standard for robot encoders called ARRL Farnsworth Spacing that is supposed to have higher readability for both robot and human decoders. Some programs, like WinMorse, have implemented the standard. People learning Morse code using the Farnsworth method are taught to send and receive letters and other symbols at their full target speed, that is, with normal relative timing of the dots, dashes and spaces within each symbol for that speed. The Farnsworth method is named for Donald R. "Russ" Farnsworth, also known by his call sign, W6TTB. However, initially exaggerated spaces between symbols and words are used, to give "thinking time" to make the sound "shape" of the letters and symbols easier to learn. The spacing can then be reduced with practice and familiarity. Another popular teaching method is the Koch method, named after German psychologist Ludwig Koch, which uses the full target speed from the outset but begins with just two characters. Once strings containing those two characters can be copied with 90% accuracy, an additional character is added, and so on until the full character set is mastered. In North America, many thousands of individuals have increased their code recognition speed (after initial memorization of the characters) by listening to the regularly scheduled code practice transmissions broadcast by W1AW, the American Radio Relay League's headquarters station. In the United Kingdom many people learned Morse code by means of a series of words or phrases that have the same rhythm as a Morse character. For instance, "Q" in Morse is dah-dah-di-dah, which can be memorized by the phrase "God save the Queen", and the Morse for "F" is di-di-dah-dit, which can be memorized as "Did she like it."
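Returning to the Farnsworth method described above, the gap-stretching arithmetic can be written down directly from the 50-unit PARIS word, which contains 31 units of dits, dahs and intra-character gaps and 19 units of letter and word gaps. The function below is a sketch under those assumptions, with invented names:

def farnsworth_timing(char_wpm, text_wpm):
    # Characters keyed at char_wpm; overall text slowed to text_wpm by
    # stretching only the 19 gap units of each 50-unit PARIS-style word.
    fast_unit = 1200.0 / char_wpm               # ms per unit inside characters
    word_ms = 60000.0 / text_wpm                # total time budget per word (ms)
    gap_unit = (word_ms - 31 * fast_unit) / 19  # stretched inter-letter/word unit
    return fast_unit, gap_unit

fast, gap = farnsworth_timing(char_wpm=18, text_wpm=5)
print(f"character unit: {fast:.0f} ms, stretched gap unit: {gap:.0f} ms")
# ~67 ms vs ~523 ms: the letters sound crisp while the pauses give thinking time.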
A well-known Morse code rhythm from the Second World War period derives from Beethoven's Fifth Symphony, the opening phrase of which was regularly played at the beginning of BBC broadcasts. The timing of the notes corresponds to the Morse for "V" – di-di-di-dah – which stood for "V for Victory" (as well as being the Roman numeral for the number five).
Beyond letters and numerals, the full code table also assigns sequences to punctuation marks (question mark, exclamation point, slash/fraction bar, ampersand, double dash, plus sign, hyphen/minus sign, quotation mark, dollar sign, at sign) and to prosigns such as "end of work", "invitation to transmit" (also used for K), "wait" (also used for the ampersand) and "understood" (shared with Ŝ). Non-English extensions cover letters such as À, Ä, Å, Ą, Æ, Ć, Ĉ, Ç, CH, Đ, Ð, É, È, Ę, Ĝ, Ĥ, Ĵ, Ł, Ń, Ñ, Ó, Ö, Ø, Ś, Ŝ, Š, Þ, Ü, Ŭ, Ź and Ż; several of these share a single code (for example Ä, Æ and Ą; Ó, Ö and Ø; CH, Ĥ and Š; and Đ with É and Ę – Đ, the D with stroke, is not to be confused with Eth, Ð), and some of the signs are not in the ITU-R recommendation.
The &, $ and _ signs are not defined inside the ITU recommendation on Morse code. There is no standard representation for the exclamation mark (!), although the KW digraph (– · – · – –) was proposed in the 1980s by the Heathkit Company (a vendor of assembly kits for amateur radio equipment). While Morse code translation software prefers the Heathkit version, on-air use is not yet universal, as some amateur radio operators in North America and the Caribbean continue to prefer the older MN digraph (– – – ·) carried over from American landline telegraphy code. For Chinese, the Chinese telegraph code is used to map Chinese characters to four-digit codes, and these digits are sent out using standard Morse code. Korean Morse code uses the SKATS mapping, originally developed to allow Korean to be typed on Western typewriters. SKATS maps hangul characters to arbitrary letters of the Latin script and has no relationship to pronunciation in Korean. During early World War I (1914–1916), Germany briefly experimented with 'dotty' and 'dashy' Morse, in essence adding a dot or a dash at the end of each Morse symbol.
Each one was quickly broken by Allied SIGINT, and standard Morse was restored by spring 1916. Only a small percentage of Western Front (North Atlantic and Mediterranean Sea) traffic was in 'dotty' or 'dashy' Morse during the entire war. In popular culture, this is mostly remembered in the book The Codebreakers by Kahn and in the national archives of the UK and Australia (whose SIGINT operators copied most of this Morse variant). Kahn's cited sources come from the popular press and wireless magazines of the time. Other forms of 'Fractional Morse' or 'Fractionated Morse' have emerged. Some methods of teaching or learning Morse code use a dichotomic search table. It is possible to decode Morse code using software. The variety ranges from wide-band software-defined radio receivers coupled to the Reverse Beacon Network, where software for the Windows operating system decodes many signals at once and detects CQ messages on ham bands, to mobile apps (e.g., for the iPad).
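A minimal sketch of the software-decoding idea, assuming the signal has already been reduced to a clean dot/dash string (recovering that string from noisy audio timing is the hard part that real decoders, such as those feeding the Reverse Beacon Network, must solve):

MORSE = {"A": ".-", "C": "-.-.", "D": "-..", "E": ".", "Q": "--.-",
         "R": ".-.", "S": "...", "T": "-", "U": "..-"}
DECODE = {seq: ch for ch, seq in MORSE.items()}  # reverse lookup

def decode(signal):
    # Letters separated by single spaces, words by " / "; unknown sequences -> "?".
    return " ".join("".join(DECODE.get(s, "?") for s in w.split())
                    for w in signal.strip().split(" / "))

print(decode("-.-. --.- / -.. ."))  # -> "CQ DE", the start of a typical call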
Alfred Dreyfus, Antisemitism, and the Revolutionary Legacy
An in-depth exploration of the meanings that have been attached to the Dreyfus affair. Wistrich raises a series of challenging questions to explore how and why it achieved such international resonance, both in its own day and as a leitmotif in twentieth-century history. In examining the significance of the affair as a paradigm of modern French, European, and Jewish history, he throws a searching new light on the position of Jews in modern democratic society.
Introduction: A Chosen Affair
1 The Century of Revolutions
2 Edouard Drumont, prophet of the 'new' Judeophobia
3 Heralds of national-populism: Barrès, Rochefort, Guérin
4 The doctrines of Action Française
5 Artists and Intellectuals in the shadow of Dreyfus
6 Alfred Dreyfus, Icon and Martyr
7 J'accuse: the timebomb of Emile Zola
8 Anticlerical passions from Clemenceau to Anatole France
9 Charles Péguy and the Soul of France
10 Jewish Dreyfusards: Joseph Reinach, Bernard Lazare, Léon Blum
11 Varieties of Antisemitism during the Belle Epoque
12 Dreyfusards versus anti-Dreyfusards
13 Nationalists, Radicals and the Rights of Man
14 The Left Divided: Jaurès, Guesde, Georges Sorel
15 Colonialism, Racism and Pogroms in French Algeria
16 The Army, the Church and secular Republicanism
17 The Image of France among the Nations
18 Jewish Responses to the Affair
19 Herzl, Dreyfus and the birth of Political Zionism
20 The Seedbed of modern Totalitarianism? A Critique of Hannah Arendt
Epilogue: Dreyfus and the Consequences
Energy is the ability to do work; it is the ability to move an object. Energy is transferred when a force causes an object to move. There are two types of energy. Potential energy is the first type. This is stored energy: the energy an object holds before any movement takes place. An easy example to understand would be a kid attempting to play the violin. The kid may have a lot of potential to become a very good musician. Yet, if the kid doesn't put that potential into action by doing things like taking lessons and practicing, that's all it will ever be: potential. It will never become a reality. Another example is the mousetrap in this video: before the mousetrap springs, it holds a lot of energy that has not yet been put into action. Kinetic energy is the energy of movement. It comes after the potential energy has been put into action. Going back to the kid with the violin, kinetic energy would represent the final product after the kid decides to take lessons and practice and becomes an excellent violin player. In this video, the marble is placed at the top of the ramp. While at the top, it has potential energy. Once it starts moving, it has kinetic energy because it is moving. There are several forms that potential and kinetic energy can take. Energy can be stored in matter, produced by the motion of molecules and electrons, and generated by machines. Chemical energy is potential energy that is stored in matter, such as the energy in food or gasoline. Heat is kinetic energy created by the movement of molecules. Electrical energy is created when electrons move through matter; batteries are a perfect example of this. Sound is energy created by vibrations. Mechanical energy is the energy of machines, created by a machine's moving parts. Electromagnetic energy includes things like radio waves and microwaves. Nuclear energy is released by the splitting of the nucleus of an atom.
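The marble-and-ramp example can be made quantitative with the standard formulas for gravitational potential energy (PE = m·g·h) and kinetic energy (KE = ½·m·v²). The mass and ramp height below are made-up illustration values:

G = 9.81  # gravitational acceleration (m/s^2)

def potential_energy_j(mass_kg, height_m):
    return mass_kg * G * height_m        # PE = m*g*h, in joules

def speed_at_bottom_ms(height_m):
    # If all PE becomes KE (no friction): m*g*h = (1/2)*m*v^2, so v = sqrt(2*g*h).
    return (2 * G * height_m) ** 0.5     # note the mass cancels out

mass, height = 0.005, 0.30               # a 5 g marble atop a 30 cm ramp (assumed)
print(f"PE at the top:   {potential_energy_j(mass, height) * 1000:.1f} mJ")
print(f"speed at bottom: {speed_at_bottom_ms(height):.2f} m/s")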
Math 6, 2nd ed.
Math 6, 2nd ed. Resources
About Math 6, 2nd ed.
Math 6 (2nd edition) seeks to develop solid problem-solving skills, teach methods of estimation, and familiarize the student with the use of calculators and computers. The curriculum emphasizes the application of math to real-life situations. In addition, manipulatives are used to assist the student with the math concepts presented.
Page 67: this PDF of Student Text page 67 shows the page with all the correct answers
The femoral artery is one of the major arteries in the human body. It supplies blood to the lower limb. Near the groin, the artery runs together with the femoral vein and femoral nerve through a triangular region; for surgeons, this femoral triangle, or Scarpa's triangle, provides a crucial anatomical landmark when surgery is required in the area. In this blog, we are going to explain all about the femoral artery, the deep femoral artery, and other concepts related to them. Let's get started: This artery is used by embalmers to deliver chemicals to the body to preserve it after death. The femoral artery is subdivided into a superficial artery, a common artery, and a deep artery, which supply blood to different sections of the body. The profunda femoris is the largest branch of the femoral artery and supplies blood to the thigh and buttock area. To bring oxygen-depleted blood from these areas back to the heart, the femoral vein runs with this artery. The common femoral artery continues as the superficial femoral artery and gives off the deep femoral branch.
Within the femoral triangle
In clinical practice, the relationship of the femoral artery to other structures inside the thigh can be essential. Inside the femoral triangle, the femoral artery is situated deep to the:
- Superficial fascia
- Fascia lata
- Superficial inguinal lymph nodes
- The genitofemoral nerve's femoral branch
- Superficial circumflex iliac vein
The medial femoral cutaneous nerve crosses the artery in a lateral-to-medial direction at the apex of the femoral triangle. Inside the triangle, the tendons of the psoas major, adductor longus and pectineus pass deep to the femoral artery. At the apex of the triangle, the vein lies deep to the artery. To remember the order of the contents of the femoral triangle, you can use the mnemonic NAVY, from lateral to medial:
- Nerve (femoral nerve)
- Artery (femoral artery)
- Vein (femoral vein)
- lYmphatics (femoral canal)
Within the adductor canal
Inside the adductor canal, the femoral artery is found deep to the:
- Superficial fascia
- Deep fascia
- Sartorius muscle
The artery is superficial to the adductor longus and adductor magnus muscles. Both nerves and veins change their position with respect to the femoral artery. The saphenous nerve lies lateral to the femoral artery at first, but comes to lie anterior and then medial to it as it passes through the canal. The vastus medialis muscle and its vein are found anterolateral to the femoral artery.
Deep Femoral Artery
The deep femoral artery is a branch of the common femoral artery, a large artery that possesses multiple branches. The deep femoral artery supplies blood to the skin of the medial thigh region and to the muscles that flex, extend, and adduct the thigh. While the artery carries oxygen-rich blood to the muscles of the thigh and upper leg, a companion vein removes oxygen-depleted blood from the thigh. The deep femoral artery branches off the common femoral artery within the region referred to as the femoral triangle. Once the deep femoral artery leaves the femoral triangle, it gives off further branches that deliver blood to the back of the thigh. The medial and lateral circumflex femoral arteries, both branches of the deep femoral artery, are, together with it, essential suppliers of blood to the thigh and the bones within it. The medial circumflex in particular supplies the femur with blood.
The deep femoral artery gives off several other branches:
Lateral Circumflex Femoral Artery
This is the first branch of the deep femoral artery. It winds around the anterior aspect of the proximal femur and divides into three branches: ascending, transverse, and descending. These branches supply the proximal aspect of the femur, the adjacent skin of the thigh, and the quadriceps femoris muscle.
Medial Circumflex Femoral Artery
This branch passes around the posterior aspect of the femur, where it divides into two terminal branches: ascending and transverse. These branches supply the adductors of the thigh.
Perforating Femoral Arteries
These arteries perforate the proximal part of the adductor magnus muscle to reach the flexor compartment of the thigh. The upper three perforators are referred to as the true collateral branches of the deep femoral artery, while the fourth perforator is the terminal branch of the deep femoral artery.
The femoral artery is clinically important because it is a common site of complications of peripheral arterial disease (PAD), which can lead to intermittent claudication symptoms in the thigh.
Insect hearing biomimicry inspires new approach to small antennas Ormia ochracea is a small parasitic fly best known for its strong sense of directional hearing. A female fly tracks a male cricket by its chirps and then deposits her eggs on the unfortunate host. The larvae subsequently eat the cricket. Though it doesn't work out well for male crickets, such acute hearing in a tiny body has inspired a University of Wisconsin-Madison researcher as he studies new designs for very small, powerful antennas. For a structure like an antenna to effectively transmit or receive an electromagnetic wave at a given frequency, the size must be comparable to the wavelength at that frequency. Making the structure's aperture size physically smaller than a wavelength becomes a critical performance issue: these small antennas aren't as efficient and don't work well beyond a narrow band of frequencies. Usually, an insect's "ears" are not even located on the head, but instead are close together on its thorax or elsewhere, depending on the particular insect. Despite the small time and intensity differences between its two ears, some insects have directional hearing capabilities surpassing those of humans. The parasitic fly, which appears to be among the smallest with superb directional hearing, can detect the direction of a chirping cricket with an accuracy of one to two degrees. "These are small antennas that actually work better than large antennas," said Nader Behdad, an assistant professor of electrical and computer engineering, who took this knowledge and began designing circuits that could mimic an insect's auditory system. "There hasn't been any work done to design antennas that mimic the hearing mechanism of different insects. We've designed a basic proof-of-concept antenna and have some preliminary results. But at this point, we still need to understand what the physics are." Behdad is designing a super-resolving type of antenna, which is capable of distinguishing signals coming from different directions. If he can create very small, efficient super-resolving antennas, the technology could result in significantly more wireless bandwidth, better cell phone reception and other applications in the consumer electronics industry, as well as new radar and imaging systems. He is also interested in eventually using his research to explore small super-directive antennas, a class of antennas that could capture a lot of power coming from one direction. Though this type of antenna is still far from reality, the result could be a tiny antenna with the capabilities of a giant one.
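The size-versus-wavelength constraint described above is easy to quantify: a conventional efficient antenna needs an aperture on the order of half a wavelength, where the wavelength is the speed of light divided by the frequency. A quick sketch with a few illustrative frequencies:

C = 299_792_458  # speed of light (m/s)

def half_wavelength_cm(freq_hz):
    # Rough size of an efficient half-wave antenna at this frequency.
    return 100 * C / freq_hz / 2

for label, f in [("AM broadcast, 1 MHz", 1e6),
                 ("FM broadcast, 100 MHz", 100e6),
                 ("cell phone, 900 MHz", 900e6),
                 ("Wi-Fi, 2.4 GHz", 2.4e9)]:
    print(f"{label:22s} -> ~{half_wavelength_cm(f):8.1f} cm")

Anything much smaller than these figures is electrically small and runs into the efficiency and bandwidth penalties the article describes, which is what makes the fly's sub-wavelength directional hearing so remarkable.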
Most fish have a covering of scales, which can be divided into a variety of types, over the outer surface of their bodies. These types include the plate-like placoid scales of sharks; the diamond-shaped ganoid scales of the gars; the thin, smooth, disk-like cycloid scales of most freshwater fish and many marine species; and the ctenoid scales (with ctenii – small projections along the posterior margins) of perches and sunfish (Casteel, 1976). All the species presented in this atlas are referable to the cycloid and ctenoid types. Distinguishable scale characteristics include (1) overall scale shape; (2) position and shape of the focus; (3) circuli appearance; (4) the appearance of the lateral, anterior, and posterior fields; and (5) to some extent, thickness/robustness of the scale. As there is considerable variation in scale shape even between different areas of the same individual fish (Figure 2), scale outline is not always the best indicator for identification (Chikuni, 1968; Casteel, 1972). Size is also generally not a desirable characteristic, as scale size varies and overlap occurs not only between species and individuals, but also within a single specimen. Cycloid (Figure 3) and ctenoid (Figure 4) scales show considerable variation in their forms, although not always at either the genus or species level, permitting their use for identification purposes. The Salmonidae and, to a lesser extent, the Pleuronectidae display particularly consistent morphological characters. At this time, identification of preserved scales to higher taxonomic levels (e.g., Family) is straightforward, provided that an adequate comparative reference collection is available. While species identification is possible for some genera, species-level assignments for many groups are not possible due to the lack of distinguishing characteristics. The terminology used to describe scales usually refers to topographical features such as surface sculpturing or internal variations (Figure 3, Figure 4). Lagler (1947) established the terminology utilized here to describe various scale features as follows:
HELPING YOUNG CHILDREN LEARN LANGUAGE AND LITERACY: BIRTH THROUGH KINDERGARTEN

CHAPTER 1 EARLY LITERACY POLICY INITIATIVES
Before reading this chapter, think about… Focus Questions Language and Literacy: Definitions and Interrelationships National Literacy Policies and Initiatives The Standards Movement Good Start, Grow Smart Early Reading First Using Scientifically-based Reading Research to Make Curricular and Instructional Decisions A Continuum of Instructional Approaches Emergent Literacy Approach Scientifically-Based Reading Research (SBRR) Approach Blended Instruction — A “Value Added” Approach A Blended Literacy Instructional Program Summary Linking Knowledge to Practice

CHAPTER 2 Oral Language Development
Before reading this chapter, think about… Focus Questions Perspectives on Children’s Language Acquisition Behaviorist Perspective Linguistic Nativist Perspective Social-Interactionist Perspective A Neuro-Biological Perspective Linguistic Vocabulary Lesson Phonology Morphology Syntax Semantics Pragmatics Observing the Development of Children’s Language A Biological View of Development A Social-Interactionist View of Language Development What Is Normal Language Development? Factors Contributing to Variation in Rate of Language Acquisition Gender Differences Socioeconomic Level Cultural Influences Medical Concerns Congenital Language Disorders Disfluency Pronunciation Summary Linking Knowledge to Practice

CHAPTER 3 Facilitating Oral Language Learning
Before reading this chapter, think about… Focus Questions Home Talk: A Natural Context for Learning and Using Language Encouraging Personal Narratives Reading Storybooks Television as a Language Tool Choosing Programming for Young Children Active Viewing School Talk: A Structured Context for Learning and Using Language Language Opportunities in School Teacher Discourse Reciprocal Discussions and Conversations Contexts for Encouraging Language Group Activities Learning Centers Dramatic Play Language-Centered Activities Sharing Storytelling Language Play Songs and Finger Plays Assessment: Finding Out What Children Know and Can Do Summary Linking Knowledge to Practice

CHAPTER 4 Sharing Good Books with Young Children
Before reading this chapter, think about… Focus Questions Making Books Accessible to Young Children Classroom Library Centers Books Physical Characteristics Classroom Lending Library Sharing Literature with Children Effective Story-Reading Techniques Adult Behaviors While Reading Child Behaviors During Reading Cultural Variations in Story Reading Classroom Read-Alouds Shared Book Experience Extending Literature Creative Dramatics Puppets Cooking Felt or Flannel Boards and Characters Art Projects Writing Author Study Assessment: Discovering What Children Know and Can Do Summary Linking Knowledge to Practice

CHAPTER 5 Earlier Views: Readiness and Emergent Literacy
Before reading this chapter, think about… Focus Questions Traditional Readiness View Emergent Literacy Concepts about Print Purpose and Functions of Print Graphic Awareness
Conventions of Print Early Forms of Reading and Writing Emergent Writing Emergent Reading Home Literacy Experiences Access to Print and Books Adult Demonstrations of Literacy Behavior Supportive Adults Independent Engagements with Literacy Storybook Reading Learning Literacy in a Second Language Summary Linking Knowledge to Practice

CHAPTER 6 Emergent Literacy Strategies
Before reading this chapter, think about… Focus Questions Functional Literacy Activities Environmental Print Functional Print Labels Lists Directions Schedules Calendars Messages Sign-In and Sign-Up Lists Inventory Lists Linking Literacy and Play Curriculum Connections Shared Enactments Language Experience Approach or Shared Writing Group Experience Stories Individual Language Experience Stories Summary Linking Knowledge to Practice

CHAPTER 7 The New View: Science-Based Reading Research (SBRR) Strategies
Before reading this chapter, think about… Focus Questions Science-Based Reading Research Phonological and Phonemic Awareness Instruction Phonological Awareness RHYME ALLITERATION WORD AND SYLLABLE SEGMENTING ONSET AND RIME MANIPULATION Phonemic Awareness PHONEME ISOLATION PHONEME BLENDING PHONEME SEGMENTING PHONEME MANIPULATION Alphabet Instruction Songs Letter charts Alphabet word walls Games Phonics Instruction Print Awareness Instruction Teaching Concepts about Print Key words Assessment: Finding Out What Children Know and Can Do Summary Linking Knowledge to Practice

CHAPTER 8 Teaching Early Writing
Before reading this chapter, think about… Focus Questions Children’s Development as Writers The Context for Writing: The Writing Center Gather the needed materials Arrange the materials Computers and word processing Writing in other centers The Writing Workshop Focus lessons Writing time Group share time Journals and Interactive Forms of Writing Journals Dialogue writing Pen pals Publishing Children’s Writing Handwriting Assessment: Discovering What Children Know and Can Do Anecdotal notes Checklists Summary Linking Knowledge to Practice

CHAPTER 9 Assessing and Adapting Instruction to Meet the Needs of Diverse Learners
Before reading this chapter, think about… Focus Questions Determining What Children Know and Can Do What Is Important for Teachers to Know about Children’s Literacy Development? Two Kinds of Assessment Ongoing Assessment Ongoing Assessment Tools Addressing Storage Problems Creating a Portfolio WHAT IS A PORTFOLIO? HOW ARE ARTIFACTS SELECTED FOR INCLUSION? WHO SELECTS THE PIECES FOR INCLUSION? WHY WAS EACH ARTIFACT SELECTED FROM THE WORKING PORTFOLIO FOR INCLUSION IN THE SHOWCASE PORTFOLIO? HOW OFTEN SHOULD ARTIFACTS BE SELECTED FROM THE WORKING PORTFOLIO FOR INCLUSION IN THE SHOWCASE PORTFOLIO?
SHARING THE PORTFOLIOS WITH OTHERS On-Demand Assessment Standardized Classroom-based On-demand Assessments Adapting Instruction to Meet the Needs of Special Populations English as Second Language Learners Children with Special Needs Summary Linking Knowledge to Practice

CHAPTER 10 Integrating the Curriculum
Before reading this chapter, think about… Focus Questions The Integrated Approach to Curriculum Design Erasing the Seams: Designing Integrated Curricula Phase 1: Selecting a Topic Phase 2: Determining What the Children Already Know and What They Want to Learn about the Topic Phase 3: Determining Ways to Answer Children’s Questions: The Activities or Projects Sharing Learning with Others Integrating Literature into the Study Phase 4: Assessment and Evaluation Phase 5: Involving Parents Designing the Classroom’s Physical Environment to Support the Integrated Curriculum Carve the Large Classroom Space into Small Areas Gather Appropriate Resources to Support the Children’s Learning Place Similar or Related Centers Near Each Other Make Literacy Materials a Part of the Fabric of Each Center Organizing the Classroom’s Daily Schedule: Creating a Rhythm to the Day What Happens During Whole-Group Time? What Happens During Small-Group Activity Time? What Happens During Center or Activity Time?
Provide some examples of the const keyword and its use with pointers The const keyword is used when we want to make something – like a variable – have read-only access. Here’s a simple example of the const keyword: Simple example of the const keyword const int j = 10; In our example above, the variable j is declared with the const keyword. What does this mean? Well, it means that any attempt to change the value of j will result in a compile-time error. When using the const keyword with pointers, it’s important to note that what is actually constant changes depending on the exact location of the const keyword. Pointer to a constant examples A pointer to a const is a pointer that points to data that is constant. This simply means that you cannot change the data being pointed to through that pointer. Here are some examples in code to clarify what we mean by a pointer to a constant: // this syntax creates a pointer to a const const char* pntr = "constant data"; // this syntax also creates a pointer to a const // but note the syntax is different from the example above // although the meaning is exactly the same char const* pntr2 = "constant data"; // this will result in an error, // because the data being pointed to cannot be changed *pntr = 't'; // this is OK, because the address can be changed pntr = "testing"; So, a pointer to a constant simply means that it’s a pointer to some constant data: the data can’t be changed through the pointer, but the pointer itself can be redirected. Constant pointer examples With a constant pointer, the address held by the pointer itself is constant, but the data can still change. Because string literals are read-only, these examples point at a writable array: // a writable character array char data[] = "Some Data"; // this is a constant pointer - // the address the pointer holds can't change, // but the data can change char* const pntr = data; // this is correct and valid, // because the data can still change *pntr = 't'; // this is incorrect, because the address // held by the pointer cannot be changed pntr = data + 1; Constant pointer to a constant example Think of a constant pointer to a constant as a combination of the two examples presented above. With a constant pointer to a constant, neither the data being pointed to nor the pointer's address can be changed. Here’s an example: // this is a constant pointer to a constant - // the address the pointer holds can't change, // and the data can't be changed const char* const pntr = "Example";
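Pulling the three cases together, here is a minimal, self-contained C++ sketch; the variable names and values are invented for illustration. The commented-out lines are the ones a compiler will reject:

#include <cstdio>

int main() {
    char buffer[] = "mutable";       // writable storage (string literals are read-only)

    // 1) Pointer to const: the pointed-to data is read-only through this pointer.
    const char* ptc = buffer;
    // *ptc = 'M';                   // error: assignment of read-only location
    ptc = "elsewhere";               // OK: the pointer itself may be reseated

    // 2) Const pointer: the pointer is fixed, but the data may change.
    char* const cpt = buffer;
    *cpt = 'M';                      // OK: writes through to buffer
    // cpt = buffer + 1;             // error: assignment of read-only variable

    // 3) Const pointer to const: neither may change.
    const char* const cptc = buffer;
    // *cptc = 'x';                  // error
    // cptc = ptc;                   // error

    std::printf("%s\n", buffer);     // prints "Mutable"
    return 0;
}

A handy way to read these declarations is from right to left: cpt is "a const pointer to char," while ptc is "a pointer to char that is const."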
(RxWiki News) With recent wildfires raging through countless acres of land in the American West, it’s more important than ever to know how you can remain safe and healthy in the event of a wildfire. Wildfire smoke is a combination of gases and fine particles from burning vegetation, building materials and other materials. Children, the elderly, pregnant women, and those who have respiratory and heart conditions may be more likely to get sick if they breathe in wildfire smoke. However, wildfire smoke can harm everyone — no matter how healthy they are. Breathing in smoke can lead to coughing, trouble breathing, eye stinging, scratchy throat, runny nose and irritated sinuses. Serious health effects of smoke inhalation include the following: - Shortness of breath (wheezing) - Chest pain - Asthma attacks - Fast heartbeat Protect Yourself from Smoke If possible, limit your exposure to smoke. Be sure to pay attention to local weather forecasts, news or health warnings about smoke in your area. In addition, make sure to follow instructions given by local emergency management officials and take extra safety measures, such as avoiding spending time outdoors. How to Protect Yourself and Your Family During a Wildfire 1) Pay attention and follow visibility guidelines that help estimate air quality (if they are available). 2) Follow recommendations. If told to stay indoors, be sure to stay indoors and keep the indoor air as clean as you possibly can: - Keep windows and doors closed unless it is very hot outside. - Run an air conditioner (if you have one), but keep the fresh-air intake closed and the filter clean to prevent outdoor smoke from getting inside. - Seek shelter elsewhere if you do not have an air conditioner and it is too warm to stay inside with the windows closed. 3) Use an air filter. Use a freestanding indoor air filter that has particle removal to help protect those with heart disease, asthma or other respiratory conditions. An air filter will also help protect the elderly and children from wildfire smoke. Be sure to follow the manufacturer’s instructions on filter replacement and device placement. 4) Do not engage in activities that can add to indoor pollution. - Avoid using anything that burns, such as candles and fireplaces. - Refrain from vacuuming because it can stir up particles inside your home. - Avoid smoking tobacco or other products. 5) Follow your health care provider’s advice about your medications and your respiratory management plan if you have asthma or another respiratory condition. Contact your health care provider if your symptoms become worse. 6) Do not rely on dust masks for protection. Paper “comfort” or “dust” masks actually trap large particles, such as sawdust, and will not protect your lungs from smoke. 7) Consider purchasing an N95 mask. If properly worn, this mask will offer some protection. If you would like to have a mask on hand, see the “Respirator Fact Sheet” provided by the Centers for Disease Control and Prevention’s National Institute for Occupational Safety and Health. 8) Avoid smoke exposure while outdoors. Before you travel to a forest or park, be sure to check whether any wildfires are burning or any prescribed burns are planned. Be Aware of Other Risky Situations Large fires can lead to power outages, which can increase your risk of carbon monoxide poisoning. 
For ways to protect your family and reduce the risk of carbon monoxide poisoning, check out “Preventing Carbon Monoxide Poisoning.” It is also important to keep food safety, water supply and power line hazards in mind. Develop a family disaster plan to use in case you need to evacuate. In addition, if you must drive, make sure to drive safely. Smoke on the road can decrease visibility.
The Red (Bolshevik) side had a strong and powerful army that was controlled by Trotsky. The Reds also had many advantages, such as geographical knowledge, the size of their army, and their motivation toward their goals. Because Trotsky led the army and paid a price early on to keep his people's spirits high, the Reds had all the more reason to keep communist rule. This was their motivation for victory in the Civil War. • Sense of Mission: The Treaty of Versailles represented a sense of mission, a goal that Wilson had for America: "making the world safe for democracy." He had a strong desire to strengthen and improve other countries and, in essence, the world. In his opinion, the great blessings that America enjoyed were something every country should experience. He therefore fought for the ratification of the Treaty, since it would support his belief, with one of his strategies being the League of Nations. This was a step away from isolation and neutrality, and America would become more involved in the affairs of other A civil war is a war between two or more territories; in this case, the North and the South were the two territories fighting over one issue. The North wanted to abolish slavery, and the South wanted to keep it. The war didn't start from one man hitting another; in fact, the war had deeper causes. The Southern people were worried about the North establishing a new nation, meaning the vote for a new president. How Did The Patriots Win The War? The three main ways the Patriots won the war were their leadership, their foreign allies, and their communication and supplies. For one, leadership was a big factor in the Revolutionary War. American generals made mistakes but learned from them. They also had strong leaders like George Washington, who inspired the troops with loyalty. There were two really important generals: for the Confederates, General Robert E. Lee, and for the Union, General George B. McClellan. The two generals had been a very big threat since they had done well in school and also on the field. The Union had 75,300 soldiers and the Confederates had 52,000 soldiers. No one really won this battle, because both sides retreated at the same time after so many people had died; all around it was brutal and horrifying, a war we would surely never forget. The Overland Campaign was a turning point in the Civil War: it was a strategic victory for the Union, but it involved heavy losses on both sides. In just 40 days, the Union lost 55,000 men. The Confederates lost 36,000 men, but with an army roughly half the size of the Union's to begin with, their losses were proportionally much greater. The final battle of the campaign, Cold Harbor, led to extremely high losses on both sides but was a defensive victory for Lee. Anti-war sentiment grew in the North, and Grant was labeled "the butcher." Despite the high losses, Grant knew this was what had to happen in order to achieve the North's strategic objectives in the war. The North had beaten the South in the Civil War. The North won the war for many reasons: it had certain advantages over the South, a great leader, and the desire to win. The North and South fought many battles before the Civil War ended. Each battle had a different outcome, some encouraging further fighting and some ending in despair. The North had many advantages. From being a respected soldier to killing King Duncan, Banquo, and Macduff's family, Macbeth has turned from a noble man into a tyrant. His once noble heart and kind soul have transformed into cold and bitter ones.
You can say that Macbeth was a victim, but we must also consider the horrors of his actions and his downfall as a tragedy. At every turn of the play, he was fighting inner enemies, falling to ambition and the misanthropic spiritual world. There was no stopping after killing Duncan, and he would do anything to protect his throne, battling against the suggestion of fate and the manipulations of his wife the whole time. As they straggled back, they passed General Lee, who stated, "It is my fault" (History.net). In conclusion, this battle was the turning point of the war. With this Confederate loss, the British decided not to help the South in the war, leaving it with no other help. This battle also took the lives of half of General Lee's army. Although both sides took major casualties, the South took the worse of the two. So Macbeth has planned out Banquo's murder, and it turns very bloody, as we can tell from the twenty deep cuts. This relates to the blood motif because of how Banquo was murdered, and it relates in another way because of how the murder is described. As the play develops, we see Macbeth's ambition take over and cause him to do very bad things; in this situation, it is his decision to murder Banquo to get rid of him. With that being said, this quote relates to the theme of ambition because it was Macbeth's ambition to become king and to do whatever it took to do so. In addition, huge alliances mixing smaller and bigger countries had the potential to involve the world's strongest military powers in a small dispute between obscure countries. This situation meant the countries created increasingly effective weapons in order to be the best, which started a war that was further escalated by the Becoming the winner of a mass of land was a great reward for Britain, but this caused them to change the way they governed, especially in North America. Britain had to find a new way of controlling the Colonies. Before the war, England pursued
Welcome to The Adding and Subtracting with Facts From 1 to 9 (J) Math Worksheet from the Mixed Operations Worksheets Page at Math-Drills.com. This math worksheet was created on 2013-01-11. It may be printed, downloaded, or saved and used in your classroom, home school, or other educational environment to help someone learn math. Teachers can use math worksheets as tests, practice assignments, or teaching tools (for example, in group work, for scaffolding, or in a learning center). Parents can work with their children to give them extra practice, to help them learn a new math skill, or to keep their skills fresh over school breaks. Students can use math worksheets to master a math skill through practice, in a study group, or for peer tutoring. Teacher versions include both the question page and the answer key; student versions, if present, include only the question page.
The Mean Adequacy Ratio (MAR) is a member of the class of indicators that are used to evaluate individual intake of nutrients. This index quantifies the overall nutritional adequacy of a population based on an individual’s diet using the current recommended allowance for a group of nutrients of interest (Hatloy et al., 1998). It was first developed in the 1970s as a way to evaluate the effectiveness of food stamps in rural Pennsylvania (Madden & Yoder, 1972). The MAR is based on the Nutrient Adequacy Ratio (NAR), a measure that expresses an individual’s intake of a nutrient as a percentage (capped at 100%) of the corresponding recommended allowance for that nutrient, given the respondent’s age and sex. The MAR is then calculated by averaging the NARs. The other indicators in the Data4Diets platform that measure individual nutrient intake include: total macronutrient intake, probability of inadequate intake, total individual micronutrient intake, and total individual energy intake. Rather than quantifying caloric intake, the MAR scales data on total nutrient intake to derive a comprehensive indicator of overall dietary adequacy, although it does not capture issues related to overconsumption or under-consumption. Method of Construction The first step in estimating the MAR is to estimate the NAR for all nutrients of interest. The NAR is equal to the ratio of an individual’s nutrient intake to the current recommended allowance of the nutrient for his or her age and sex, and can be represented as a ratio or as a percentage. In the United States, this recommended allowance is referred to as the Recommended Dietary Allowance (RDA), whereas in many other countries, it is referred to as the Recommended Nutrient Intake (RNI). If the intake of a nutrient exceeds the RDA/RNI, the NAR is capped at 100% or 1, depending on whether it is expressed as a percentage or ratio. This prevents nutrients with very high intake (NAR value > 1) from masking nutrients with very low intake (low NAR value) when they are averaged to calculate the MAR (Hatloy et al., 1998). Once the NAR is calculated for each nutrient, the MAR is calculated by averaging all the NAR values together, as demonstrated in the equation below (a short code sketch follows this section): MAR = (1/n) × (NAR₁ + NAR₂ + … + NARₙ), where n is the number of nutrients considered. The MAR is reported on a scale from 0 to 100% (or 1), where 100% (or 1) indicates the requirements for all the nutrients were met. When repeated measurements of nutrient intake are available for at least a subsample of individuals, the “probability approach” can be calculated. The repeated days are required to adjust the population nutrient intake distribution to take account of intra-subject variability. This process allows the usual intake distribution to be calculated, allowing measurement of the individual probability of inadequacy for each nutrient and a mean probability of adequacy (MPA) over a range of nutrients (Arimond et al., 2010). For more information on how to calculate this indicator, please see the highly detailed Methods section of the following paper published in the European Journal of Clinical Nutrition (Hatloy et al., 1998). Data are collected at the individual level to assess nutrient adequacy of populations, and the indicator can be calculated to include or exclude nutrients depending on programmatic or research priorities. The MAR has been used to validate dietary diversity indicators, and can provide additional context when examined in conjunction with standard individual dietary diversity scores (Acham et al., 2012; Steyn et al., 2014).
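As a concrete illustration of the capping and averaging just described, here is a short C++ sketch; the nutrient intakes and allowances are hypothetical numbers for one individual, not values from any cited study:

#include <algorithm>
#include <cstdio>
#include <utility>
#include <vector>

// Nutrient Adequacy Ratio: intake / recommended allowance, capped at 1.0
double nar(double intake, double allowance) {
    return std::min(intake / allowance, 1.0);
}

int main() {
    // hypothetical one-day intakes vs. RDA/RNI values (same units per nutrient)
    std::vector<std::pair<double, double>> nutrients = {
        {45.0, 60.0},   // below the allowance -> NAR = 0.75
        {120.0, 60.0},  // above the allowance -> capped at 1.0
        {9.0, 18.0},    // half the allowance  -> NAR = 0.50
    };

    double sum = 0.0;
    for (const auto& [intake, allowance] : nutrients)
        sum += nar(intake, allowance);

    double mar = sum / nutrients.size();  // MAR = mean of the NARs
    std::printf("MAR = %.2f\n", mar);     // (0.75 + 1.0 + 0.5) / 3 = 0.75
    return 0;
}

Note how the cap keeps the overconsumed nutrient from masking the shortfalls: without it, the same data would average to about 1.08, suggesting full adequacy despite two nutrients falling short.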
As an index, it does not reveal which micro- or macronutrients are or are not consumed in adequate amounts, and instead provides a general picture of adequacy aspects of an individual’s diet quality within a population. Total intake for an individual micronutrient or macronutrient may be more appropriate if disaggregated information on specific nutrients is needed. In addition, data on individual intake can be paired with findings on individual health outcomes or demographic information, such as religion, income, education, or other characteristics of interest in order to assess differences between sub-population groups based on various other demographic characteristics. Strengths and Weaknesses One strength of this indicator is that it allows researchers to consider and communicate a population’s overall nutritional adequacy, rather than focusing on specific nutrients that may not alone indicate healthy diet composition (for example the NAR only investigates one nutrient at a time). However, this indicator is based on RDAs or RNIs, which are estimates of the necessary nutrient intake to meet the requirement of 97-98% of healthy people, and may vary for some nutrients (like zinc and iron) depending on the assumed absorption, which can differ depending on the type of food consumed (Institute of Medicine, 2006). Thus, even a MAR of 1 (meaning requirements of all nutrients are met) does not guarantee that a population’s needs are met nor that individuals within the population can properly absorb and use the nutrients. Additionally, a MAR below 1 does not necessarily indicate that a population suffers from nutritional deficiencies. Inherent in the way that the RDAs/RNIs are defined, the cut-off amount is actually above the required intake for all but 2-3% of the population (Institute of Medicine, 2000). Thus, a population’s nutritional status cannot be inferred from this measure (Institute of Medicine, 2000). Individual-level dietary data can be obtained from Weighed Food Records, quantitative 24-hour Dietary Recalls, or quantitative Food Frequency Questionnaires. The Food and Agriculture Organization/ World Health Organization Global Individual Food consumption data Tool (FAO/WHO GIFT) is a source for individual-level quantitative dietary data. The FAO/WHO GIFT aims to make publicly available existing quantitative individual food consumption data from countries all over the world. National or regional Food Composition Tables should be used to identify the nutrient contents of the foods and can be found at FAO's International Network of Food Data Systems (INFOODS) or the International Life Science Institute’s (ILSI) World Nutrient Databases for Dietary Studies (WNDDS). RDAs/RNIs can be obtained from the Institute of Medicine for the United States (Institute of Medicine, 2006), from the British Nutrition Foundation for the United Kingdom (British Nutrition Foundation, 2016), or the European Food Safety Authority of the European Union (EFSA, 2017). As an alternative to country-specific RDAs/RNIs (e.g. if they do not exist for the country of interest), the FAO/WHO global RNIs can be used (FAO/WHO, 2001).
Russian scientists have found a multicellular organism in a river in the Russian Arctic that has survived frozen in permafrost for 24,000 years; even more remarkably, once thawed, it can still reproduce. In the past, researchers had claimed that bdelloid rotifers, among the toughest, tiniest animals you've never heard of, could survive in such a frozen state "between life and death" for only up to 10 years. A new study published in Current Biology claims they can survive for thousands of years. Its main finding is that "the multicellular organisms could be frozen for thousands of years, stored and then brought back to life," states Stas Malavin of the Russian Institute of Biological Research in an interview. Bdelloid rotifers are a class of multicellular invertebrate rotifers made up entirely of females. These microscopic worm-like organisms have been labeled an "evolutionary scandal" by biologists for having thrived for millions of years without sexual reproduction. Researchers have now discovered that bdelloid rotifers can persist for at least 24,000 years in Siberian permafrost and then reproduce. The researchers calculated the age of the organisms by radiocarbon dating, which estimated it to be between 23,960 and 24,485 years. These tiny, tough little creatures have a full digestive tract, consisting of a mouth and anus, and can survive very hostile conditions (radiation, extreme acidity, starvation, hypoxia, dehydration, etc.) by ceasing all activities and completely arresting their metabolism. This behavior is commonly called 'hidden life' or cryptobiosis. The researchers working on the project noted that the rotifers found in the permafrost would have lived beneath the feet of giant woolly creatures, like the now-extinct woolly rhino. Remarkably, on thawing the sample in the lab, the rotifers were able to reproduce. Matthew Cobb, a zoology professor at the University of Manchester who is not part of this study, believes that global warming could melt the permafrost and wake up the frozen organisms. We shouldn't be terrified of this, he suggests; rather, it allows us to study the adaptation mechanisms acquired by these organisms and the harmful effects of freezing across species, both existing and ancestral. Cobb noted that because rotifers reproduce by parthenogenesis, in which the females clone themselves, their genes are copied exactly into the next generation rather than shuffled as in conventional reproduction, reducing the variability available for natural selection. The researchers can now compare this newly found bdelloid variant with the genome of the same organism found in Belgium, considered its modern equivalent, to reveal what caused them to develop these striking characteristics. The outcome of this study raises more questions than answers. It could be a way forward to improving cryopreservation of cells, tissues, and organs. Humans cannot yet preserve organs and tissues for such a considerable time, but Malavin believes that "These rotifers, together with other organisms found in the permafrost, represent a result of a big natural experiment that we can't replicate, so they are good models to study further."
- Teacher notes on measuring mass, and a student's worksheet revising the topic.
- Activity to introduce the kilogram. Includes weighing with scales and comparing the differing masses of objects.
- Compare the different units of mg, g, kg, and t. Shows the relationship between the different measures.
- Finding objects of a set mass.
- Match the correct unit with the item to be weighed. Choose from mg, g, kg, and t to measure the mass of different objects.
- Converting between metric units: g, kg.
- Converting between metric units: mg, g. (5 pp.)
- Features and functions of the Number Slide Gadget: multiply and divide numbers by powers of ten, with and without decimals. (12 pp.)
- Sequence of lesson plans for the Number Slide Gadget. (17 pp.)
- Converting between t, kg, g, and mg measures. Series sequenced from simple conversions to complex ones.
Anything and everything has a pattern to follow and a function to perform, and these functions, methods, approaches, and strategies all come together in the biggest and vastest branch of science, which is mathematics. Archimedes, often called the father of mathematics, shaped the world by introducing people to unique counting and calculating perspectives. To narrow it further, chemistry borrows two related geometric concepts, namely molecular geometry and electron geometry, which refer to the positioning and ordering of electrons and atoms so as to give molecules their characteristic angles, positions, and shapes. Molecular geometry is basically the configuration of atoms in a molecule, usually described relative to one central atom, whereas the configuration of pairs of electrons around a central atom is called electron geometry. These distinctive concepts are discussed extensively in this article. Molecular Geometry: 3-Dimensional Shape The characterization of atoms in a molecule arranged in 3-dimensional space is known as molecular geometry. Approaches like spectroscopy and diffraction analysis are usually deployed to determine molecular geometry experimentally. Certain theories are utilized to help in understanding the configuration of molecules, like hybridization theory, the Lewis concept, and VSEPR theory. Hybridization theory describes the covalent bonds in molecules. The basic process of hybridization is the mixing of atomic orbitals of similar energy but different shapes to form the same number of hybrid orbitals, all with the same shape and energy. Types of Hybridization: Some types of hybridization are as follows: - sp hybridization: one s and one p orbital - sp² hybridization: one s and two p orbitals - sp³ hybridization: one s and three p orbitals The American scientist G.N. Lewis first introduced the concept that two atoms share a pair of electrons to form a covalent bond (a bond pair), whereas a pair of electrons not involved in any type of bonding is called a lone pair. It has been observed theoretically and practically that molecular geometry does not count lone pairs on the central atom when naming a molecule's shape, even though a lone pair occupies more space around the central atom than a single bond would. This is one of the key points behind the exclusion of lone pairs from molecular geometry. Moreover, according to studies, lone pairs reside in orbitals alongside the bonding electrons; the reason they occupy more space than bonding pairs is that lone pairs are held by only one nucleus, so they spread out over a wider region near it. Valence Shell Electron Pair Repulsion Theory VSEPR stands for Valence Shell Electron Pair Repulsion; this is the leading theory, and the mechanism that has proved most helpful, for determining and predicting the shapes and structural behavior of molecules. Some benefits and key aspects of VSEPR charts are that they help with molecular shape prediction and, apart from that, provide electron-group identification and electronic structure, and assign an AXmEn formula to molecules along with the identification of bonds and angles.
VSEPR charts also provide three-dimensional geometric predictions and a deeper understanding based on the number of electron bond pairs in the valence shell of an atom in the molecule. The approach is widely used to find the arrangement of electron pairs that minimizes repulsion. The configuration of pairs of electrons revolving around a central atom is termed electron geometry. It includes the counting of both lone pairs and bond pairs, as well as the determination of the shape of a molecule, and it is specified by the electron pairs. Types of Electron Geometry Electron geometry is classified into various types according to the number of electron groups. Linear Type of Electron Geometry It involves bonding between the central atom and two electron pairs at an angle of 180 degrees, forming a straight-line shape. Trigonal Planar Type of Electron Geometry The central atom with three electron pairs at 120-degree angles is termed trigonal planar, arranged in a flat shape. Tetrahedral Type of Electron Geometry The central atom is encircled by four pairs of electrons bonding at angles of 109.5 degrees, arranged in the form of a tetrahedron. Trigonal Bipyramidal Type of Electron Geometry A central atom bonds with five pairs of electrons: three equatorial pairs form a trigonal shape at 120-degree angles, and two axial pairs sit at 90 degrees to them. Octahedral Type of Electron Geometry A central atom with six pairs of bonding electrons forms 90-degree angles. This arrangement gives the shape of two pyramids joined at a shared square base. Distinguishing Factors Between Molecular and Electron Geometry
|Features||Molecular Geometry||Electron Geometry|
|Basic Concept||Molecular geometry deals with the understanding and determination of the whole atom arrangement: the positioning of the atoms themselves within molecules, described in 3D.||Electron geometry covers the positioning and organization of all the electron groups.|
|Lone Pairs||In the process of determining the shape of molecules, this type of geometry leaves lone pairs out. However, lone pairs are considered where repulsion alters angles and bonding.||Electron geometry is the opposite of molecular geometry here: it considers both lone pairs and bond pairs in the determination of a molecule's shape.|
|Calculations||It counts the total number of bond pairs of electrons in the process.||It counts the total number of electron pairs (bonding and lone) in the process.|
|Examples||As an example, we can break down the H₂O (water) structure: it has O (oxygen) as the central atom of the molecule (with 6 valence electrons of its own), 2 electrons donated by the hydrogen atoms, and thus 8 electrons surrounding O, giving 4 electron groups with 2 lone pairs and 2 single bond pairs. This molecular shape is called bent.||As an example, we can break down CH₄ (methane): this substance has C (carbon) as the central atom of the molecule (with 4 valence electrons of its own), 4 electrons donated by the hydrogen atoms, and thus 8 electrons surrounding C, giving 4 electron pairs and 4 single bonds with no lone pairs. This type is called tetrahedral.|
Determination of Shape in Molecule and Electron Geometry As we have already come to know so far, both are beneficial in their own perspectives and conceptual studies.
Some steps are involved in the determination of shapes in both types of geometry, and they are important to know about in order to understand the ongoing mechanisms. For Molecular Geometry - As mentioned above, lone pairs are not contemplated or reviewed in the process of determining a molecule's shape, but bonding is still required to keep the process going. - Single bonds are what count here: double and triple bonds are each treated as a single bond. - In addition, one reason bond pairs rather than lone pairs define the named shape is that lone pairs often take up more space than bond pairs. - For the determination of the atoms' positions together with the shape of molecules, some geometrical parameters are taken into account, for example bond angles, bond lengths, and torsional angles. - These parameters have significant effects on properties like reactivity, color, polarity, and magnetism. For Electron Geometry - For shape prediction, the choice of the central atom in the molecule is important; by convention, the central atom is usually the least electronegative one. - Its configuration, like the number of valence electrons it possesses, should be determined. - As this is a stepwise process, the other atoms must be considered too, and the number of electrons they donate must be counted. - Calculate the total number of electrons surrounding the central atom in the molecule. - For the determination of the total number of electron groups present, divide the total count of electrons around the central atom by 2. - For the determination of the number of lone pairs on the central atom, subtract the number of single bonds around the central atom from the steric number. - Lastly, read off the electron geometry from this count; the short code sketch after this section makes these counting steps concrete. - One additional point to bring to your notice is that the two types of geometry coincide whenever the central atom carries no lone pairs. - The difference is based on what is counted, and it still matters; although molecular geometry does not consider lone pairs in its structure, electron geometry remains a critical study, as it helps us understand many substantial properties that play a prominent role in predicting molecular structures and their shapes. - The two can be considered the same if lone pairs are taken out of, or are absent from, the central atom. This is the key distinguishing factor between these two types of geometry, and it is the underlying reason for the emphasis on the presence of lone pairs of electrons on the central atom. - It can be concluded that both have many salient and essential concepts that cannot be neglected, and both are significant as long as chemistry keeps producing observable effects almost everywhere.
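To make the counting steps above concrete, here is a small C++ sketch of the standard VSEPR lookup; the function name is invented for this example, and the labels follow the five electron-geometry types listed earlier:

#include <cstdio>

// Electron geometry depends only on the total number of electron groups
// (bonding pairs + lone pairs) around the central atom.
const char* electronGeometry(int electronGroups) {
    switch (electronGroups) {
        case 2: return "linear (180 degrees)";
        case 3: return "trigonal planar (120 degrees)";
        case 4: return "tetrahedral (109.5 degrees)";
        case 5: return "trigonal bipyramidal (120 and 90 degrees)";
        case 6: return "octahedral (90 degrees)";
        default: return "outside the simple VSEPR cases";
    }
}

int main() {
    // Water: O has 2 bonding pairs + 2 lone pairs = 4 electron groups
    std::printf("H2O electron geometry: %s\n", electronGeometry(2 + 2));
    // Methane: C has 4 bonding pairs + 0 lone pairs = 4 electron groups
    std::printf("CH4 electron geometry: %s\n", electronGeometry(4 + 0));
    return 0;
}

Both calls print "tetrahedral," matching the comparison table above: water and methane share an electron geometry and differ only in molecular geometry (bent versus tetrahedral), because water's two lone pairs are excluded from the molecular shape's name.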
Cardiovascular diseases and other non-communicable diseases are the biggest killers in the world today. They cause an estimated 17 million deaths each year, which is equivalent to roughly a third of all annual global deaths. The prevalence of such non-communicable diseases (NCDs) is increasing worldwide and becoming a major public health concern in both developed and developing countries. Although there are various risk factors for cardiovascular disease, the association between excess body weight and NCDs is well established. It’s true! Not only is excess weight a risk factor for cardiovascular disease, but it can also increase your risk of diabetes, cancer, and hypertension. According to the World Health Organization (WHO), being overweight means having more body fat than is considered healthy for your height, age, and sex. The WHO defines overweight as a body mass index (BMI) between 25 and 30 kg/m2, while obesity is defined as a BMI of 30 kg/m2 or more. BMI is calculated by dividing weight in kilograms by height in meters squared (a short worked example follows this section). So, it’s not just about how much you weigh, but also how much of your body is made up of fat. You might not think your weight is a problem, but if you’re carrying around too many pounds, it could be putting you at risk of developing serious health conditions like diabetes or heart disease. What are Cardiovascular Diseases? Cardiovascular diseases (CVDs) are a group of conditions that affect the heart and blood vessels. They can lead to heart attacks, strokes, and other serious health problems. CVDs are the leading cause of death globally, accounting for 17.5 million deaths in 2015 alone. The different types of cardiovascular disease include: - Hypertension (high blood pressure) - Coronary heart disease (CHD): a condition in which the blood supply to the heart muscle is inadequate because the arteries that carry blood to the heart are narrowed or blocked. - Heart failure - Atherosclerosis, also called atherosclerotic cardiovascular disease (ASCVD), which includes coronary artery disease, peripheral arterial disease, and cerebrovascular disease such as stroke. ASCVD is due to the hardening of the arteries by plaque deposits, which restrict blood flow through them. It can lead to angina pectoris (chest pain); heart attack; transient ischemic attacks (TIAs), which cause stroke-like symptoms but no lasting damage; chronic kidney disease and end-stage renal failure due to cardiorenal syndrome; and sudden cardiac death from ventricular fibrillation. - Ischaemic stroke: caused by a blockage in one of the brain's blood vessels - Venous thromboembolism (VTE): a clot that travels through your bloodstream to your lungs, causing shortness of breath or chest pain. - Stroke: a brain attack caused by a clot blocking an artery in your brain, or by bleeding within your brain itself. Overweight and Cardiovascular Disease Being overweight is a major risk factor for cardiovascular disease, and it increases the risk of death from heart disease or stroke. Although you may think that heart disease is a disease of your grandfather's generation, it is the leading cause of death worldwide for both the young and the old. Being overweight also contributes to other noncommunicable diseases (NCDs), such as type 2 diabetes mellitus, musculoskeletal disorders, kidney disease (nephropathy), liver cancer, and some other cancers, like endometrial cancer in women or prostate cancer in men.
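Since the WHO cut-offs above do the real work in these definitions, here is a short C++ sketch of the BMI formula and classification; the weight and height are illustrative values, not taken from the article:

#include <cstdio>

// BMI = weight (kg) / height (m) squared, using the WHO cut-offs quoted above.
const char* bmiCategory(double bmi) {
    if (bmi < 25.0) return "not overweight by the WHO definition";
    if (bmi < 30.0) return "overweight (BMI 25-30)";
    return "obese (BMI 30 or more)";
}

int main() {
    double weightKg = 85.0;   // illustrative person
    double heightM = 1.75;
    double bmi = weightKg / (heightM * heightM);   // 85 / 3.0625, about 27.8
    std::printf("BMI = %.1f -> %s\n", bmi, bmiCategory(bmi));
    return 0;
}

Note that this two-threshold version mirrors only the definitions quoted above; the full WHO scale also has categories below 25 (for example, underweight below 18.5) that the article does not discuss.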
When wondering about the link between overweight and obesity and other cardiovascular diseases, it is important to note that overweight and obesity are the fifth leading risk factor for global deaths, and they are among the leading risk factors for cardiovascular diseases. If you are overweight, your risk of developing cardiovascular diseases (CVDs) is higher than if you are at a normal weight. When you have excess body fat, your heart has to work harder: it has more blood to pump, and the blood takes longer to return. As a result, less oxygen reaches your body's organs, the heart itself included. This puts you at risk of the following: - Increased risk of heart disease and stroke - Increased risk of heart failure - Sleep apnea (interrupted breathing during sleep) as well as other conditions such as diabetes and fatty liver disease. Causes of Overweight and Obesity - Genetics. The genes you inherit from your parents can play a role in whether you become overweight or obese. This may be because certain genes make it easier to gain weight, or because they affect how the body uses energy and calories. - Lifestyle habits. Your lifestyle habits can affect your weight. For example, if you eat too much and don't get enough physical activity, you're likely to gain weight over time, and this is true no matter what your genetics say about how much fat your body might carry on its frame. - Diet and physical activity patterns. Overweight and obesity tend to run in families and are influenced by diet and lifestyle factors that are learned early in life (such as poor eating habits or lack of physical activity). Managing Cardiovascular Diseases by Losing Weight To manage and prevent cardiovascular diseases, losing weight is the best way to go. A healthy diet and regular exercise are essential for maintaining a healthy body weight. By some estimates, the risk of cardiovascular disease may be reduced by as much as 80% by losing 10% of your current body weight. This means that if you're overweight or obese, losing even 5-10 kg could significantly reduce your risk of heart disease and stroke. Losing weight may also help improve other NCDs, like diabetes, high blood pressure, and high cholesterol levels. How to Lose Weight to Control Your Risk If you are overweight or obese, losing weight may be the best way to reduce your risk of developing cardiovascular disease. But it's not easy to lose weight. The average person who loses weight will regain two-thirds of it within a year and almost all of it within five years. That's why it's so important for people who are overweight or obese to find ways to change their eating habits and start exercising regularly now, before they have any serious health problems. The good news is that there are many resources available to help with losing weight, such as books written by doctors and registered dietitians, websites that offer advice on eating healthy food while still enjoying what we eat most (like desserts), and even apps that can keep track of calories consumed. With these tools at your disposal, losing weight should be much easier than ever before! To maintain a healthy weight, you'll need to eat a balanced diet and get regular exercise. Here are some tips for how to lose weight: - Eat a healthy diet: This may seem obvious, but eating a healthy diet full of fruits, vegetables, whole grains, and lean proteins will help keep your blood sugar levels balanced while reducing hunger pangs, which makes it easier to stick with an exercise plan!
A good way to lose weight is to take in less energy than you use. Your body uses the excess energy from the food you eat as fuel, so if you don't take in enough calories, your body will start burning fat instead. Try limiting your daily calorie intake to between 1,200 and 2,000 calories per day (a worked example of this arithmetic appears at the end of this article). If you're trying to cut junk food like candy bars or cookies from your diet entirely (or at least limit it), try replacing it with fruits or vegetables instead! - Exercise regularly! Exercise helps burn calories and build muscle mass, both of which contribute toward maintaining an ideal body weight, for health reasons such as cardiovascular disease prevention! Aim for 30 minutes of moderate activity on most days of the week; this means walking briskly at 4-5 miles per hour (6-8 kilometers per hour). You can also try other forms of exercise, such as swimming or cycling, if those appeal more. Just remember not to overdo it: you shouldn't feel exhausted after exercising, because then it wouldn't be considered "moderate"! - Exercising is key when trying to lose weight because it helps burn calories and fat while strengthening muscles. Choose an activity that works for you and do it regularly (at least 3 times per week). Some examples include walking, running, swimming, and biking. Check out more resources here on how to lose weight to manage and prevent cardiovascular diseases. It has been quite a learning journey! We started by going over the basics of cardiovascular disease, including its symptoms and risk factors. Then we looked at what happens when a person becomes overweight and how this can lead to obesity. Finally, we explored some resources that could help you reduce your weight and therefore your risk of developing cardiovascular disease. In conclusion, there is a strong link between overweight and cardiovascular disease. Cardiovascular disease is the leading cause of death worldwide. Being overweight can cause type 2 diabetes and high blood pressure, which is why it's very important to watch your weight. Being overweight is also a risk factor for other non-communicable diseases like stroke, certain cancers, chronic respiratory disease, and dementia. Several treatments are also available to treat obesity in children. You deserve to live a life free from heart disease, so we hope you feel encouraged and empowered by everything you've read here today. If you're feeling overwhelmed or even scared about any health issues you may have encountered in this process, don't worry: that just means it's time to reach out for help! We'd be happy to connect you with one of our knowledgeable peers who can get you set up on a path towards better health. Just click here to speak with someone now.
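To make the calorie arithmetic in the tips above concrete, here is a small, hypothetical C++ sketch. The figure of roughly 7,700 kcal per kilogram of body fat is a common rule of thumb rather than something stated in the article, and every other number is illustrative:

#include <cstdio>

int main() {
    // Rough rule of thumb: about 7,700 kcal of energy per kg of body fat.
    const double kcalPerKgFat = 7700.0;
    double dailyIntake = 1800.0;   // within the 1,200-2,000 kcal range above
    double dailyBurn = 2300.0;     // assumed maintenance needs plus exercise
    double targetLossKg = 5.0;

    double dailyDeficit = dailyBurn - dailyIntake;            // 500 kcal/day
    double daysNeeded = targetLossKg * kcalPerKgFat / dailyDeficit;
    std::printf("At a %.0f kcal/day deficit, losing %.0f kg takes about %.0f days\n",
                dailyDeficit, targetLossKg, daysNeeded);      // about 77 days
    return 0;
}

Real weight loss is messier than this straight-line estimate (metabolism adapts as weight drops), but the sketch shows why the modest 5-10 kg target mentioned above is a months-long project rather than a weeks-long one.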
Heat Transfer in Block with Cavity

This example shows how to solve for the heat distribution in a block with a cavity. Consider a block containing a rectangular crack or cavity. The left side of the block is heated to 100 degrees centigrade. At the right side of the block, heat flows from the block to the surrounding air at a constant rate, for example -10 (the value used in the boundary conditions below). All the other boundaries are insulated. The temperature in the block at the starting time is 0 degrees. The goal is to model the heat distribution during the first five seconds.

Create Thermal Analysis Model

The first step in solving a heat transfer problem is to create a thermal analysis model. This is a container that holds the geometry, thermal material properties, internal heat sources, temperature on the boundaries, heat fluxes through the boundaries, mesh, and initial conditions.

thermalmodel = createpde('thermal','transient');

Add the block geometry to the thermal model by using the geometryFromEdges function. Plot the geometry, displaying edge labels.

pdegplot(thermalmodel,'EdgeLabels','on')
ylim([-1,1])
axis equal

Specify Thermal Properties of Material

Specify the thermal conductivity, mass density, and specific heat of the material.

thermalProperties(thermalmodel,'ThermalConductivity',1,...
                               'MassDensity',1,...
                               'SpecificHeat',1);

Apply Boundary Conditions

Specify the temperature on the left edge as 100, and constant heat flow to the exterior through the right edge as -10. The toolbox uses the default insulating boundary condition for all other boundaries.

Set Initial Conditions

Set an initial value of 0 for the temperature.

Create and plot a mesh.

generateMesh(thermalmodel);
figure
pdemesh(thermalmodel)
title('Mesh with Quadratic Triangular Elements')

Specify Solution Times

Set solution times to be 0 to 5 seconds in steps of 1/2.

tlist = 0:0.5:5;

Use the solve function to calculate the solution.

thermalresults = solve(thermalmodel,tlist)

thermalresults =
  TransientThermalResults with properties:

      Temperature: [1320x11 double]
    SolutionTimes: [0 0.5000 1 1.5000 2 2.5000 3 3.5000 4 4.5000 5]
       XGradients: [1320x11 double]
       YGradients: [1320x11 double]
       ZGradients: []
             Mesh: [1x1 FEMesh]

Evaluate Heat Flux

Compute the heat flux density.

[qx,qy] = evaluateHeatFlux(thermalresults);

Plot Temperature Distribution and Heat Flux

Plot the solution at the final time step, t = 5.0 seconds, with isothermal lines using a contour plot, and plot the heat flux vector field using arrows.

pdeplot(thermalmodel,'XYData',thermalresults.Temperature(:,end), ...
                     'Contour','on', ...
                     'FlowData',[qx(:,end),qy(:,end)], ...
                     'ColorMap','hot')
This information is provided by the National Institutes of Health (NIH) Genetic and Rare Diseases Information Center (GARD). Familial atrial fibrillation is an inherited heart condition that disrupts the heart’s rhythm. It is characterized by erratic electrical activity in the heart’s upper chambers (the atria), causing an irregular response in the heart’s lower chambers (the ventricles). This causes a fast and irregular heartbeat (arrhythmia). Signs and symptoms may include dizziness, chest pain, palpitations, shortness of breath, or fainting. Affected people also have an increased risk of stroke and sudden death. While complications may occur at any age, some affected people never have associated health problems. Familial atrial fibrillation may be caused by changes (mutations) in any of various genes, some of which have not been identified. It is most often inherited in an autosomal dominant manner, but autosomal recessive inheritance has been reported. For more information, visit GARD.
While we were talking about activity ideas, I remembered a project I had done with elementary students when they were learning about maps in social studies. Students were given an unlabeled map of the library and had to fill it in - forcing them to walk around the space and really study what was there. We could definitely make this work for high school students, but I knew right away we'd need to make it a digital assessment - reviewing maps from a few fourth grade teams is one thing, but maps from 500 high school students need to be something you can handle online!

Our first step was to create a map of the library. If your space is a nice square or rectangle, this is actually not that hard. Having just completed a kitchen renovation, I was pretty familiar with the many free online floor plan tools available. Most even have furniture that you can drop into the space - they may not have circulation desks, but bookshelves and tables are pretty standard items. But there are actually tools out there designed to help librarians design spaces - this one, from The Library Store, is pretty simple to use and lets you find representations of some library-specific pieces of furniture. Magazine display racks, anyone? Remember, the goal here is not to come up with something for an architect to use, it's just to give an idea of the general layout. Our library was tricky - it's a large space with lots of nooks and crannies, a quiet study area that bends around a corner, and study rooms, classrooms, and a writing center that jut off at odd angles. So I made a crude representation using Google Draw (I majored in engineering in college and I could feel my drafting professor judging me, but sometimes you just have to make do!).

Here's how the mapping activity works:
- Students will make a digital copy of the map you created
- Using image editing software, they fill in the blank spaces
- Students share their finished work back with you

Our school Learning Management System (Schoology) actually has an assessment tool designed for labeling pictures (imagine those "label the parts of the cell" worksheets), so that was the obvious choice for us to use. But don't worry if you don't have this software! There are a few ways you can do this using basic software. Here are a few suggestions for creating the labels and answers:

Google Draw comes to the rescue again. Upload a copy of your map as an image to use as the background. Using the shapes tool, create blank label spaces. Then create labels that students can drag and drop onto the blank spaces.

Kami, a Chrome plugin I've mentioned before, allows you to edit PDF documents. Just upload your drawing as a PDF and have students type their answers in the blank spaces with a text box.

Other options to annotate images: Evernote, Powerpoint - basically any tool that allows you to draw shapes and add text. Have the students copy your picture, edit the labels, and then share back with you.

- Some of this is actually just measuring how well students can read a map! Navigating through a two-dimensional space - as opposed to following GPS navigation - is a skill most teens (and many adults) are not familiar with. To help with this, give students a way to orient themselves; after the first class got lost, I realized it would help to project the map on the classroom screen and give them a "You are Here" pointer to start.
- Think about how your map is drawn - I put the library classrooms at the top of our map.
But that meant when students left the classroom to go explore the library, they basically had to turn the computer around to get the map facing the right direction! - Some of my labels were unclear - fiction was right next to graphic novels, which students consistently mixed up. That was a bad question, not a lack of understanding.
Tell me and I forget. Teach me and I remember. Involve me and I learn.

Teaching with Intentionality

Ways to Increase Engagement
Put It to Movement:
Keep Things Moving Along (Pace):
Keep Things Unexpected:

Questioning skills definitely up the rigor, and by randomly picking students by drawing names (popsicle sticks) from a can or using an app like a random name picker, we keep all kiddos on their toes - but does this improve engagement? I'd say it does, but there are other questioning techniques we can use too. Techniques such as Think, Pair, Share or Pairs Check (students pair up to work on a problem and then check with another partnership to confirm they're correct) get 100% of students included. These techniques help students realize that the teacher expects everyone to be engaged. Here are a few variations on Think, Pair, Share that you might try out:
- Mingle Pair Share - kids move about the room to pair up
- Sticky Note Responses - pose the question and have students respond on stickies for sorting and discussion
- Huddle Up - kids form groups to discuss and respond
- Silent Partners - kids get up and find a partner; one partner is silent while the other talks out the answer, and then they switch. When time is called, kids share what their partner said.
- Scoot/Quiz Quiz Trade - students have question cards, pair up, discuss questions, and then move.
- Graffiti Walls - brainstorming techniques work well for questioning and are easy to use since all you need is a blank piece of paper and something to write with. Students record their questions, answers, or big ideas.

Choose High Interest Topics and Activities:
Creating with Technology
Things you should know about Autism:
- This month is Autism Awareness Month, a month-long event that helps raise awareness and improve support for people with autism.
- The People-First Language (PFL) mentioned in the Disability Act requires that you put the 'person' first and the 'disability' second. For example, instead of referring to a person as autistic, you would instead say 'person with autism'. This is because it implies autism is something a person has and not something they are.
- According to the National Autistic Society, approximately 700,000 children and adults in the UK have autism.

Strategies such as the SPELL and TEACCH frameworks are being used in schools which prioritise understanding the autonomy of autism, appreciating the 'strengths and uniqueness' of autism, and raising awareness among others who may not understand the culture of autism. The TEACCH method is designed for students of all ages and aims to integrate visual learning into academic activities to improve developmental and behavioural issues. The SPELL framework opts for a communication-based approach that explores the underlying issues and emotional context. It stands for Structure, Positive Approaches, Empathy, Low Arousal, and Links.

Support for Autism in Education

However, many teachers and parents believe not enough is being done to support students with autism. There are many challenges that students face that are heightened by the environment they learn in. People on the autism spectrum cope better when there is structure and an uninterrupted routine, as they can face challenges in unpredictable circumstances. Although you may think your classroom routine runs like clockwork, certain elements such as playtime and lunchtime create challenging situations for students. To support these students you can offer break- and lunchtime clubs to add more structure to playtime, supervision by staff who have had autism awareness training, and a quiet zone for students who can get overwhelmed. Bad acoustics in classrooms, dining areas, and sports halls increase the impact of sensory dysfunction, which is a common trait of autism. With budget cuts in schools increasing, it becomes more difficult for schools to cater to the needs of students living with a disability. According to the National Autistic Society, '63% of children on the autism spectrum are not in the kind of school their parents believe would best support them'. We asked the Twitter community if they think there is enough support for students with autism and whether enough is being done to raise awareness in schools; the majority (75%) answered no, with the remaining 25% saying 'no, but it is improving'.

- Some people think that people with autism aren't interested in building friendships, but this isn't the case. Some people may struggle with social skills, but like any other friendship you can learn to understand each other's needs, likes, and dislikes to help build a lasting relationship.
- Another common misconception is that people with autism have an intellectual disability; again, this is untrue. Autism is a spectrum condition, which means people are affected in different ways. It may be that people with autism face more challenges in the learning environment, but many students are able to excel in their subjects and have normal to high IQs.
- The idea that autism only affects children and young people is false.
Autism affects people throughout their whole lives, and although it is more common in men, it also affects women.

How You Can Get Involved

The National Autistic Society offers a workplace assessment as part of its Brain in Hand project, a support app that aims to 'help young people to reduce anxieties and grow vital independence skills'. You can read more about it here. With many teachers, students, and parents saying there is not enough awareness or support available for children and young people with autism, taking these steps and holding mandatory training sessions for staff are part of overcoming any challenges you may face. The Autism Education Trust has a booklet for parents on working together with your child's school that you can download for free here. This downloadable booklet helps identify information and characteristics relating to your child and their education. It gives your school information on what to prioritise - their behaviours, sensory dysfunctions, and preferred environments - to help create a safe and pleasant learning environment for students with autism.
On the second day of NUSSP, in anticipation of our energy day in week two and to collect data ahead of time, students were given a homework assignment: a Bedroom Load Curve Datasheet and Instructions Handout. Each student was given a Kill-a-Watt sensor, which allows them to measure their personal energy use. The sensor plugs directly into a power socket, and then devices plug into the sensor. This lets you see a variety of things; we were interested only in the idle load and the active load. A device's idle load is how much power it draws when it's off or not actively in use, which might range from nothing (which is ideal) to a lot (such as in a computer or Xbox, which needs to stay partially on even when it's not in use). The active load is how much power the device draws when it is turned on; the largest active loads we found were from hairdryers and air conditioners. Other devices in the house might draw more power (washing machines), but the students were only tasked with analyzing devices in their own bedrooms. Using some math (below), the students were able to calculate how much energy each device uses throughout the day; this allows us to see how much total power is needed for each person, as well as when during the day the most power is needed.

On our energy day we learned what energy is, the different types, and how we (humans) can gather and harness it. Professor Mike Kane (CEE) came in to talk to the students about energy use and to go over (analyze and explain) the students' bedroom load curve results. We did not have enough time to graph each individual student's load curve and overlap it, so we used an example load curve (see below) from one of Professor Kane's classes, which amalgamated the loads of each student (from devices in their bedroom and kitchen). Professor Kane then talked about current energy production and how it relates to how we actually use our devices. In general, there are energy use spikes during the morning (people waking up) and in the evening, with lulls during the day and at night. Energy generation from coal and nuclear power plants is consistent throughout the day. However, renewable energy production is often limited by the availability of the resource: wind energy can only be harvested when the wind is blowing, and solar energy can only be used when the sun is shining. Therefore, the energy grid needs to be carefully managed, with extra energy generated during times it's not needed as much being stored for the times it will be needed.
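The math itself is simple: a device's daily energy use in watt-hours is its active load times its active hours plus its idle load times its idle hours. A minimal sketch of that calculation, with hypothetical devices, loads, and hours (none of these numbers come from the actual handout):

% Daily energy (Wh) = active load (W) * active hours + idle load (W) * idle hours.
% All device names and numbers below are made up for illustration.
devices   = {'laptop','lamp','hairdryer'};
activeW   = [50 9 1500];      % measured active load, in watts
idleW     = [2 0 0];          % measured idle load, in watts
activeHrs = [4 5 0.25];       % hours per day each device is actually in use
idleHrs   = 24 - activeHrs;   % the rest of the day it sits idle but plugged in
dailyWh   = activeW.*activeHrs + idleW.*idleHrs;
for k = 1:numel(devices)
    fprintf('%-10s %7.1f Wh/day\n', devices{k}, dailyWh(k));
end
fprintf('Total: %.2f kWh/day\n', sum(dailyWh)/1000);

Summing each device's draw hour by hour, instead of over the whole day, gives the load curve itself and shows when during the day demand peaks.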
Vast underground reservoirs of water: the concept might have been hard to fathom for Angelenos in August 1868, but the evidence was clear enough. When workers employed by former Governor John Downey drilled a hole into the ground some two and a half miles west of the village of Compton, water came gushing out in a fountain four feet high. They had bored Los Angeles’ first free-flowing artesian well. The mechanics were straightforward. When rainfall seeps into the ground, it flows just like water above the ground, moving through the porous sedimentary strata that underlie Los Angeles until it encounters something impermeable, like harder rock uplifted by an earthquake fault. Then, the water pools behind the obstruction, building up pressure. When Downey’s workers bored their hole into the ground, the water rushed through its new escape route, just as it would if you were to halt the flow of water through a garden hose and then puncture it with a needle. Still, Yellowstone was four years away from becoming a national park, and the sight of water spouting from the ground under its own power drew curious onlookers. For a time, stagecoach drivers bound for San Pedro diverted from their route so that passengers could glimpse the hydrologic wonder. But within a decade, the sight had become commonplace. Orange growers and farmers openly tapped the region’s groundwater to irrigate their crops, while communities exploited wells for their domestic water supply. One town named itself Artesia after the practice. By 1892, Los Angeles County was withdrawing water from the ground through 627 separate artesian wells, few of them requiring pumps. Many were capped to capture the water. Others were left uncapped, spraying jets of water into the air. Unfortunately, there were limits to the region’s bounty. The underground aquifers represented centuries of accumulated, captured rainfall, and Los Angeles was draining them faster than they could be replenished. Within a few decades, water pressure had plummeted and the wells stopped flowing freely. Pumps replaced the natural hydrologic forces, and engineers eyed a new source of water in the distant Owens Valley. Above: “Water gushes out of an artesian well near Long Beach, circa 1900. Courtesy of the Photo Collection – Los Angeles Public Library.” Nathan Masters of the USC Libraries blogs here on behalf of L.A. as Subject, an association of more than 230 libraries, cultural institutions, official archives, and private collectors hosted by the USC Libraries and dedicated to preserving and telling the sometimes-hidden histories of Los Angeles.
Plant and animal communities change through time. Walk through the country and you'll see patches of vegetation in many stages of development—from open, cultivated fields through grassy shrublands to forests. Clear lakes gradually fill with sediment and become bogs. We call these changes—in which biotic communities succeed one another on the way to a stable end point—ecological succession. In general, succession forms the most complex community of organisms possible, given the physical conditions of the area.

The series of communities that follow one another is called a sere. Each of the temporary communities is referred to as a seral stage. The stable community, which is the end point of succession, is the climax. If succession begins on a newly constructed deposit of mineral sediment, it is called primary succession. If, on the other hand, succession occurs on a previously vegetated area that has been recently disturbed, perhaps by fire, flood, windstorm, or human activity, it is referred to as secondary succession.

Primary succession could happen on a sand dune, a sand beach, the surface of a new lava flow or freshly fallen layer of volcanic ash, or the deposits of silt on the inside of a river bend that is gradually shifting, for example. Such sites are often little more than deposits of coarse mineral fragments. In other cases—floodplain silt deposits, for example—the surface layer is made of redeposited soil, with substantial amounts of organic matter and nutrients.

Succession begins with the pioneer stage. It includes a few plant and animal pioneers that are unusually well adapted to otherwise inhospitable conditions that may be caused by rapid water drainage, dry soil, excessive sunlight exposure, wind, or extreme ground and lower air temperatures. As pioneer plants grow, their roots penetrate the soil. When the plants decay, their roots add organic matter directly to the soil, while their fallen leaves and stems add an organic layer to the ground surface. Large numbers of bacteria and invertebrates begin to live in the soil. Grazing mammals feed on the small plants, and birds forage the newly vegetated area for seeds and grubs.

The pioneers soon transform conditions, making them favorable for other species that invade the area and displace the pioneers. The new arrivals may be larger plants with foliage that covers the ground more extensively. If this happens, the climate near the ground will have less extreme air and soil temperatures, higher humidity, and less intense insolation. These changes allow still other species to invade and thrive. When the succession has finally run its course, a climax community of plant and animal species in a more or less stable composition will have been established.

Sand dune colonization is a good example of primary succession. Animal species also change as succession proceeds. This is especially noticeable in the insects and invertebrates, which go from sand spiders and grasshoppers on the open dunes to sowbugs and earthworms in the dune forest.

Secondary succession can occur after a disturbance alters an existing community. Old-field succession, taking place on abandoned farmland, is a good example of secondary succession (Figure 8.24).

SUCCESSION, CHANGE, AND EQUILIBRIUM

So far, we've been describing successional changes caused by the actions of the plants and animals themselves. One set of inhabitants paves the way for the next.
As long as nearby populations of species provide colonizers, the changes lead automatically from bare soil or fallow field to climax forest. This type is called autogenic (self-producing) succession. But in many cases, autogenic succession does not run its full course. Environmental disturbances, such as wind, fire, flood, or clearing for agriculture, interrupt succession temporarily or even permanently. For example, winds and waves can disturb autogenic succession on seaside dunes, or a mature forest may be destroyed by fire. In addition, inhospitable habitat conditions such as site exposure, unusual bedrock, or impeded drainage can hold back or divert the course of succession so successfully that the climax is never reached.

Introducing a new species can also greatly alter existing ecosystems and successional pathways. The parasitic chestnut blight fungus was introduced from Asia to New York City in 1904. From there, it spread across the eastern states, decimating populations of the American chestnut tree within a period of about 40 years. This tree species, which may have accounted for as many as one-fourth of the mature trees in eastern forests, is now found only as small blighted stems sprouting from old root systems.

While succession is a reasonable model to explain many of the changes that we see in ecosystems with time, we must also take into account other effects. External forces can reverse or rechannel autogenic change temporarily or permanently. The biotic landscape is a mosaic of distinctive biotic communities with different biological potentials and different histories.
Venus was Earth-like until climate disaster turned it into hell planet

New research has found that Venus may have once been habitable, until a mysterious resurfacing event transformed it into a hell-like planet.

VENUS — New research shows that Venus may have once been habitable like Earth, before it was turned into a hell-like planet by a mysterious event. According to the Europlanet Society, NASA's Pioneer Venus found evidence in 1978 that Venus may have once had shallow oceans on its surface. To see if it has ever had a stable climate that could support liquid water, researchers from NASA's Goddard Institute for Space Studies created a series of five simulations with different levels of water coverage. In all five scenarios, Venus maintained a stable temperature of between 20 and 50 degrees Celsius for about 3 billion years. This means it would have been able to support liquid water, and possibly allow life to emerge. Researchers believe a massive resurfacing event 700 million years ago triggered an outgassing of carbon dioxide that made Venus' atmosphere too hot and dense for life to survive. The exact cause of the resurfacing event is unknown, but scientists say it may be linked to volcanic activity. Magma and molten rock flowing up to the surface would have released large amounts of carbon dioxide into the atmosphere, and magma that solidified before reaching the surface would have created a barrier preventing the gas from being reabsorbed. Although more missions are needed to better understand Venus' history and evolution, the recent findings have implications for exoplanets in the "Venus zone", and how they may actually host liquid water and temperate climates.
Sea levels are rising and coastal marshes are getting wetter. In addition to affecting coastal communities, these changes have consequences for the many species that live in marshes, and recent evidence shows that several of these species are suffering rapid population declines. If we are to prevent the extinction of these species, we either need to find ways to slow the loss of suitable marsh habitat or we need to allow marshes to move inland with the rising waters. This so-called "marsh migration" is emblematic of a key adaptation strategy for ameliorating the effects of climate change. Whether it is to accommodate rising oceans, warming temperatures, or changing rainfall patterns, allowing habitats to move in conjunction with shifting physical conditions is often assumed to be the best hope that many species have for avoiding extinction. Yet we know little about the practical implications of this assumption, especially in areas where most land is privately owned.

In the northeastern United States, coastal land is largely in private ownership. For example, in coastal Connecticut there are estimated to be more than 30,000 landowners with properties in areas that are expected to be affected by sea-level rise this century. How much coastal marshes can move into these areas will depend a lot on the actions of these people. With this in mind, we set out to understand the attitudes and beliefs of these landowners towards various conservation actions that could help protect marshes and the organisms that live in them.

Over a thousand landowners responded to our survey, of whom almost half were unlikely to participate in any conservation actions, and 1 in 5 said they were likely to build some form of shoreline protection within the next 20 years. Shoreline protection, which includes building structures like sea walls, can help protect infrastructure, but it can both hamper marsh migration and cause erosion of adjacent areas, potentially shifting problems to neighbors and to unprotected natural habitats such as marshes and beaches.

Of those surveyed, few landowners (7%) expressed an interest in participating in a conservation easement – one of the most common strategies used for land protection in the US. Participation might be higher if the monetary incentives given to landowners were raised above current levels, but this would clearly increase the cost of conservation. Other conservation strategies were more favored by landowners, but these approaches are also likely to be more expensive. For example, almost a fifth of landowners expressed interest in outright purchase by a conservation organization, and even more favored future interest agreements – a novel strategy in which conservation groups would be guaranteed ownership of a property if it lost more than half its value due to flooding, but which requires that they pay the landowner the fair market value at the time the agreement is made.

These results suggest that ensuring protection of the best sites for marsh migration will require more active engagement with landowners. Perhaps the biggest impediment to landowner participation is convincing people that they will receive appropriate compensation for their property losses. Few (7%) respondents to our survey believed they would be offered an incentive to participate in conservation, and two-thirds worried about receiving a fair price for their property.
Working to ensure that landowners know the options available to them, and building trust that they will be treated fairly, is clearly important. The research also highlighted some unexpected, but in hindsight unsurprising, results. For example, conservation groups often assume that educating people about climate change will lead to decisions that benefit conservation. We found, however, that those who acknowledge the reality of climate change also say that they are more likely to build shoreline protection. Increased understanding, therefore, has potential to lead to actions that damage natural systems, unless it is coupled with education about the consequences of different responses. Even then, the potentially high costs to landowners faced with property loss due to sea-level rise will likely limit the options for ecosystem protection. Overall, the study illustrates the need to understand the human component of conservation. Ecological predictions about where species will occur in the future need to be coupled with information on societal responses to determine what is really plausible. The projected area over which marsh migration can occur, for example, is far more extensive than what society is likely to allow. Ensuring that coastal marshes, the species they support, and the other services these wetlands provide people, persist into the future will require that we understand human behavior as much as the ecological responses to climate change. This research was led by SHARP PhD student Chris Field, in collaboration with Ashley Dayer of Virginia Tech and SHARP PI Chris Elphick. It was funded by Connecticut Sea Grant, the Connecticut Department of Energy and Environmental Protection, the University of Connecticut College of Liberal Arts and Sciences, and a Robert and Patricia Switzer Foundation Environmental Fellowship to Chris Field. Photo credits: Chris Elphick.
Did you know that the combination of Math & Music is one of the most common double majors among undergraduate students? Or that the same part of the brain is utilized for each discipline? This common ground is created by the fact that music is built on a number of the fundamentals of math.

For example, musical pieces are read using fractions: the time signature is usually composed of two integers written as a ratio. The number on the bottom tells the musician which note gets a single count (beat), and the number on the top tells how many beats are in each measure. The piece is then divided into bars or measures, in which each measure represents an equivalent amount of time. To keep the beat, the musician needs to understand the numeric value of certain notes. In 4/4 time, one measure can contain one whole note, two half-notes, four quarter-notes, eight eighth-notes, or sixteen sixteenth-notes. Without the ability to count and organize numbers quickly in her head, the musician cannot keep pace with the composition.

There are more subtle connections, too. Math's popular Fibonacci sequence (1, 1, 2, 3, 5, 8, 13, 21, etc.) can actually be seen on piano scales! A one-octave scale on the piano (note that "oct" means eight, as in "octagon") has 13 keys from C to C. There are eight white keys and five black keys, and the black keys are arranged in groups of three and two. Pretty crazy, right?

Even Pythagoras, of Pythagorean Theorem fame, got in on the math and music action. He discovered that different sounds can be created with a variety of weights and vibrations. This led to an important discovery: a vibrating string's pitch depends on its length. The longer the string, the longer the period of its vibration (period and frequency are inverses, a relationship students meet again in calculus), and therefore the lower the sound. For example, pressing a string at exactly half its length makes it vibrate twice as fast, sounding the same note one octave higher; this simple 2:1 ratio is the relationship Pythagoras identified between string length and pitch. If you've ever wondered why harps and pianos have an hourglass figure to accommodate strings of different lengths, Pythagoras is your answer!

Students who take part in musical training utilize their brains just as they do for solving math problems. They sniff out patterns, enhance numerical literacy, and strengthen problem-solving abilities. Students taking music courses are often able to solve complicated math problems faster and more effectively than their non-musical peers. So if your child is struggling in math, consider encouraging him to pick up a violin or a trumpet and get to the music!

If your child needs some extra help with math before the school year begins in September, enroll in BLAST Off – Subject-specific group tutoring today!
Everyone knows that a waterfall is a place where a river or creek flows over a vertical drop-off, but did you know that there is a geologic reason why they form? A waterfall, like Cedar Falls pictured above, forms where a hard, resistant rock such as sandstone overlies a soft, easily eroded rock like shale. The difference in the rate at which each rock type weathers is what creates the waterfall. When a stream passes over a single rock type, it erodes it evenly, carving a channel with a gradual slope. However, when a stream's course passes from a hard to a soft bedrock, it scours the soft rock at a faster rate. As the supporting soft rock is eroded, the overlying harder rock progressively collapses, creating a vertical bluff over which the stream flows. As this process continues, an ever taller waterfall develops, and the location of the waterfall gradually migrates upstream. Because geologists know how landforms such as waterfalls form, they can use tools like aerial photographs and satellite images to predict what kind of rock will be in an area before ever going there.
The arrival of the autumn and winter months signals many things, including the flu season. According to the Centers for Disease Control and Prevention (CDC), flu activity peaks between December and February. It is likely that the flu virus and the virus that causes COVID-19 will both spread this autumn and winter. The flu vaccine is your best chance to prevent the disease, and it's more important than ever this year. The CDC currently recommends an annual flu vaccine for anyone over 6 months of age.

In addition to getting your vaccine, there are some other ways to protect yourself during this flu season. Avoid close contact with people who are ill and stay home when you are ill. It is important to continue good hygiene by covering coughs and sneezes and washing your hands. Safety measures had a positive effect on influenza cases earlier this year, and they will continue to be crucial as we enter the flu season.

Getting your flu vaccination

If you are unsure about getting a flu vaccine, here are some reasons why it is especially important during a pandemic:
- Reduces the risk of getting both viruses at the same time – Fighting influenza and COVID-19 infections at the same time can be much worse than fighting either one alone. No one knows what to expect until it happens – and then it's too late.
- Eliminates symptom confusion between flu and COVID-19 – You are less likely to get flu symptoms such as fever, cough, and body aches. These are symptoms that can be confused with COVID-19.
- Reduces the burden on the medical system – Influenza and COVID-19 are both respiratory diseases, so they rely on some of the same life-saving hospital equipment.

If you are worried about staying healthy this flu season, please consult your doctor.

As expected during the pandemic, any cough, sneeze, or tickle in the throat can cause you anxiety. Many symptoms of colds, flu, and COVID-19 are similar, making it difficult to distinguish between them. Different viruses cause each of these diseases, which means their symptoms differ too.
- COVID-19 – The three most common symptoms to consider are fever, dry cough, and shortness of breath. Check out the infographic below for additional symptoms.
- Flu – If you feel good one day and miserable the next day, it could be the flu. Common symptoms include cough, fatigue, fever or chills, headache, body aches, runny or stuffy nose, sore throat, vomiting, and diarrhea.
- Cold – The most important thing is that you do not get a fever with a cold. Symptoms of the common cold usually appear gradually and may begin with a sore throat or irritated sinuses.

An important difference between the diseases is a hallmark symptom of COVID-19: shortness of breath. If you have any further questions about your symptoms, ask your doctor. The only way to confirm your illness is to get tested.

Diabetes affects over 30 million Americans – and the number is increasing every day. Although type 1 diabetes cannot be prevented, you can take steps to prevent type 2 diabetes – the most common type.
- Eat healthy. Get lots of fiber and whole grains and understand how the food you eat affects your blood sugar.
- Be more active. Aim for at least 30 minutes of exercise daily and try to use both aerobic exercise and resistance training.
- Lose extra weight and keep it off. If you are overweight or obese, weight control can be an important part of diabetes prevention.

One in three American adults is at risk for type 2 diabetes, but almost 85% of those at risk do not know it.
Take control now during American Diabetes Month and see your doctor to test your blood sugar. All of us here at CoverLink wish you continued health and safety this year!
Here are 30+ Kindergarten activities for hands-on learning. Learning in kindergarten is done through movement, conversation, and play. We can support that learning at home with kindergarten activities for hands-on learning.

What our kindergarteners need is playful, hands-on activities – not computer-based learning. We don't need our children stuck in front of a screen responding to voice commands for the majority of their education. Five- and six-year-olds need to move their bodies, dig hands deep in new textures, and manipulate objects to develop a deeper understanding of how things work. Simply put, kindergarteners need opportunities to connect hands-on learning to the world around them. We can help our five- and six-year-olds learn with this list of 30+ Kindergarten activities for hands-on learning.

RELATED: Play-based learning is super important. That is why we created this list of 50 KID ACTIVITIES AT HOME.

Hands-on learning includes:
Conversation and collaboration
Opportunities to problem-solve
Creating playful environments
Encouraging imaginary play
Opportunities to construct and build
Moving their body through rough-and-tumble play
Exploring new textures
Encouraging a growth mindset and embracing challenges

Just like we look for new activities to inspire our thinking as adults, we must do the same for our 5-7-year-olds.

RELATED: New to homeschooling? You'll find these HOMESCHOOLING QUICK TIPS practical and helpful.

What a child should learn in Kindergarten can vary from state to state, but the majority of states use the Common Core Standards. According to the Common Core, these standards are to help create clear and consistent learning goals to help prepare children for life. They are to keep parents and teachers on the same page to work together for common goals. 41 out of 50 states have adopted the Kindergarten standards. That means we have access to what our kindergarteners will learn throughout the year. Furthermore, I can help you implement hands-on learning activities with this list of 30+ Kindergarten Activities.

Literacy hands-on learning activities for Kindergarten are some of my favorites. First, let's take a look at the Kindergarten literacy categories:
Oral and visual communication
Phonics, spelling, and word study
Independent reading with peers and adults

Here are my FAVORITE LITERACY MANIPULATIVES for young learners.
25+ Activities to Help Pencil Grip – Hint, it isn't tracing letters!
The Best Books for Kindergarten – A FANTASTIC collection of read-alouds.
Stamp a Story – Writing together using stamps to lead the story plot.
Alphabet Sort – Taking a closer look at the lines and curves of letters.
Building Word Families – Taking a closer look at words that rhyme.
What I Like – A conversation starter using stickers.
Drive and Park – A hands-on way to assess letter sounds or recall letter names.
Splash the Alphabet – A movement game to review letter names and sounds.
Color Word Car Park – Learning to recognize color words through movement.
40+ Back to School Books – The best booklist to build comfort and confidence!
Hands-On Story Sequence – Using blocks to retell the beginning, middle, and ending of a story.
14+ Book Activities to Retell a Familiar Story – Hands-on activities to help retell familiar Kindergarten stories.
How Balance Can Improve Reading – A little practice can go a long way!

Math Learning Activities for Kindergarten

Did you know that Kindergarten math activities typically fall under the following categories?
Counting and cardinality
Operations and algebraic thinking
Numbers and operations in base ten
Measurement and data

Next are my favorite Kindergarten math activities for hands-on learning. START HERE: my favorite math manipulatives for young learners.
Kindergarten Money Activity – A DIY coin bank you need to make.
Indoor Mini-Golf – Create a mini-golf course using train tracks.
DIY Rainbow Board Game – A simple board game using dice and markers.
Build a Shape – A hands-on 3D shape exploration.
What is a Ten Frame? – Using objects to better understand a group of ten.
Pour to the Lines – A measurement game using colored water.
Magnetic Measurement – A measurement game using magnetic tiles.
Domino Addition Track – Using dominoes to practice simple addition facts.
Hands-On Number Line – Using shoes to become familiar with the number line.
Shape Pictures – Using shapes to create pictures.
Indoor Graphing Game – Use socks to better understand Kindergarten graphing.
Count Up – A hands-on number line with cubes.
Pattern Practice – A fun way to create patterns with objects you have at home.

Let's look at growth mindset activities for Kindergarten.

A growth mindset for all children is an important aspect of learning. We can begin to implement growth mindset activities for five- and six-year-olds to encourage reflection and goal-setting. This is what allows a child the opportunity to fail and brainstorm ways to persevere and succeed. A growth mindset develops with time and strategies that adults can model. Here, we are encouraging kindergarteners to take risks and to ask questions. As parents, caregivers, and teachers, our job is to use phrases that acknowledge the work behind the product. Last, but not least, here are growth mindset activities for ages 5-7.

Rainbow Weave – Incorporating art into hand-eye coordination.
Can You Stack It? – Making predictions with blocks.
Can You Build a Home – A thinking and design game with Legos.
Block Play – An open-ended building game for strategy.
Giant Tic-Tac-Toe – Engage in a game with rules with your kindergartener.
Chalk Board Game – Outdoor game for number sequence and sportsmanship.
How to Paint with Kids – Understanding the process of art exploration and expression, how to get your kindergartener started with painting, and the supplies needed.
Introduction: Drug resistance is a serious medical problem. Indiscriminate use of antibiotics has led to a state where multidrug-resistant bacteria have become increasingly prevalent. Therefore, regular surveillance of important pathogens and their resistance patterns is mandatory. Aim: To find out the prevalence of organisms causing infection and their sensitivity patterns. Material and methods: 676 clinical samples were screened, from which 156 Gram-negative (GN) isolates were processed for their antibiotic sensitivity profile against 12 different antibiotics. Results: Escherichia coli was the most common of the 156 Gram-negative isolates. Among all antibiotics, ampicillin showed the lowest sensitivity (22%). Antibiotics with good sensitivity were imipenem and meropenem (100%), levofloxacin (94%), amikacin (89%), ciprofloxacin (79%), and gentamicin (77%). Pseudomonas was 100% sensitive to amikacin. Conclusion: Antibiotic resistance in our area is still moderate. It is essential to test for older-generation antibiotics before deciding on higher antibiotics for treatment, which will have a tremendous impact on treatment as well as cost-effectiveness. Regular surveillance helps in implementing better therapeutic strategies.