Everyone has heard how well foreign students, including those from Singapore, do in math compared to U.S. students. But few people understand why this is so. You will have a better idea of why they excel if you check out the Primary Mathematics program for the elementary grades. Primary Mathematics was first published (in English) for students in Singapore, so it was also called Singapore Math when it was first brought to the U.S. in 1998. Homeschoolers are much more likely to refer to it as Singapore Math rather than as Primary Mathematics.

Primary Mathematics has taken the homeschool market by storm, and with good reason. This program teaches children to think mathematically rather than just having them memorize the mechanics of problem solving. Primary Mathematics lays a solid foundation for conceptual understanding using a three-step process, taking children from concrete, to pictorial, then abstract approaches to learning. Concepts are addressed from a number of directions that challenge students to think and understand. Primary Mathematics is more advanced than just about every other math program used in the U.S.

There are three different versions: the U.S. Edition, adapted directly from the version used in Singapore but substituting U.S. measurements, spellings, and conventions; the Standards Edition, which aligns with the math standards for California, changing the order of presentation for some topics and adding units on topics such as probability, graphing, data analysis, and negative numbers; and the Common Core Edition, which slightly reorganizes topics to cover those required by the Common Core State Standards (CC). None of these are “dumbed down” to align with standards; the scope and sequence remains challenging in all three versions. The question that arises is which of these editions to choose.

A comparison chart at www.singaporemath.com/v/PMSS_comparison.pdf shows where each of the CC standards is taught in each edition. On that chart, you can see that Primary Mathematics continues to teach some standards at earlier grade levels than is required by the CC. Common Core Editions add a few very specific topics, and they review many topics at different levels to satisfy the Common Core. Consequently, those editions have more pages than the others.

Standards and Common Core Edition textbooks, as well as textbooks 1A through 2B in the U.S. Edition, are printed in full color, while the rest of the U.S. Edition textbooks are printed in two colors. (All workbooks are printed in black and white.) Color might be important for some learners, but the cost is significantly higher for the Standards and Common Core Editions.

All editions have periodic reviews. While U.S. and Standards Editions have cumulative reviews, the Common Core Editions do not. In the textbooks, concepts are taught thoroughly and sequentially within units rather than in a spiral fashion. The cumulative reviews are the primary means of reviewing previously taught concepts since they are not addressed again in future units. With the Common Core Editions, the publisher wanted to allow teachers to skip units if they so desire, but to do that the publisher had to limit review to only what has been taught in each unit. Parents can create their own cumulative review by having students complete selected problems from each review, then revisiting problems from previous units at a later date. The supplemental Extra Practice books might also be used to create your own cumulative review.
The Primary Mathematics series has levels 1 through 6 which cover material for approximately grades one through six and beyond. The Common Core Editions have only levels 1 through 5. Each level has two textbooks, two workbooks, and two teacher's or home instructor's guides labeled A and B—that's four student books per course. Textbooks range in length from about 80 to 190 pages each. (Common Core Editions are significantly longer than others.) However, textbooks and workbooks are each about 10 by 7½ inches, with uncrowded, large print, so they don't intimidate students.

The textbooks might be used either as consumable or non-consumable books. In the latter case, students write answers in a notebook to preserve the textbooks. There are quite a few problems to solve between textbooks and workbooks, so I generally recommend letting students write in the books to save recopying the problems. (None of these books are reproducible.) Correlated workbook exercises are indicated at the end of each textbook lesson. Children should be able to work through workbook exercises independently once they can read directions without a problem.

While each level of U.S. and Standards Editions has both teacher's guides and home instructor’s guides available (with the exception of Standards Edition levels 6A and 6B), the home instructor's guides are designed specifically for homeschoolers, are less expensive, and are what I recommend. You do not need both. Common Core Editions have only teacher's guides right now (no home instructor's guides), and these are the only teacher's guides that include reduced pictures of student pages, a very helpful feature. Both the home instructor’s guides and teacher's guides have lesson plans, teaching instructions, and answer keys.

The program requires one-on-one teaching throughout most lessons for the younger grades. Older students can be taught using activities and lesson presentations from teacher's or home instructor's guides, but some students will be able to work independently through the books on their own. The guides incorporate work with hands-on resources, but you can skip those activities if they are not needed. Some children will find the visual representations in the textbooks sufficient.

Singapore Math Inc.® carries a number of supplemental books, many of which are keyed to the Primary Math series. Extra Practice books correlate directly with each level of each edition. Check their website for more information.

Placement tests are available at their website. If your child is not starting at the beginning of the program, it is vital that you use the placement test to determine the appropriate level. Important note: It is not unusual for a child to place one or two levels below their official grade level.

Primary Mathematics 1A and 1B

Book 1A begins with an assumption that children already have a basic sense and recognition of numbers. It begins with counting to 10, but by the fourth unit of the first book, students are learning subtraction. Single-digit multiplication is introduced in 1B, with division introduced very briefly immediately after. (Students are not expected to memorize multiplication facts yet.) The text stresses conceptual understanding over math-fact drill at this level. Drill suggestions are given in the guides, but you might want to provide opportunity for more practice with math facts using other resources. Practical applications are used in lesson presentation and word problems.
In addition to the arithmetic operations, this first level teaches ordinal numbers, shapes, measurement, time telling, money, and graphs.

Primary Mathematics 2A and 2B

The second level teaches addition and subtraction with renaming (carrying and borrowing), multiplication and division, place value, measurement, money, an introduction to fractions, writing numbers in words, time telling, graphs, and very introductory geometric shapes and area.

Primary Mathematics 3A and 3B

This level has more advanced work on the four arithmetic operations including long division, fractions (equivalent fractions plus adding), measurement, graphs, time, and geometry. It also teaches two-step word problems and mental calculation. It will be challenging for most students to begin this program at the third level if they have been using a different math program. However, the pictorial lessons do help students pick up concepts they might not have been taught previously. If you are just starting this program, make sure you watch for this problem and provide the necessary teaching before expecting your child to do the lessons.

Primary Mathematics 4A and 4B

At the fourth level, students learn all four operations with both fractions and decimals. Geometry coverage is also very advanced as students compute the degrees of angles and solve complex area and perimeter questions. Students also work with advanced whole number concepts (e.g., factors, multiples, rounding off), money, other geometric concepts, graphs, and averages. Primary Mathematics introduces two-digit multipliers at this level but doesn’t really concentrate on two-digit multipliers and divisors until the fifth level. While students complete quite a few computation problems, the number of word problems gradually increases at this level.

Primary Mathematics 5A and 5B

At the fifth level, students do advanced work with decimals plus multiplication and division with two-digit multipliers and divisors. They learn to work with percents and continue with advanced work on fractions, geometry, and graphs. Time and rate word problems, as well as other types of word problems, are given a great deal of attention. There are more word problems than drill-type problems. Some of the geometry taught at this level is rarely introduced before high school. For example, students learn to calculate the degrees of angles in a parallelogram given the measurements of only two angles. Students just beginning Primary Mathematics at the fifth level, having used something else before, will generally need to catch up to its advanced scope and sequence. You might consider using Math Works!, which was designed for just that purpose.

Primary Mathematics 6A and 6B

Because of this series’ advanced scope and sequence, at the sixth level much of the work is more typical of other publishers’ high-school-level texts. Students work with fractions, but a typical problem requires students to perform three different operations on four different fractions within a single problem, much like an advanced algebra problem, although without variables. Common geometry problems are set up in proof-style format, although you need not require students to present their solutions in that format. Among other concepts covered at this level are graphs, algebraic expressions, geometry (e.g., radius, diameter, and circumference of circles plus the volume of solids), advanced fractions, ratio, percents, tessellations, and lots of word problems including time/rate/distance problems.
It might be challenging for parents with a weak math background to use this level without some assistance.

There are a number of supplemental books that you might use along with Primary Mathematics, although none of these are required for a complete course.

The Primary Mathematics series has an Extra Practice workbook for each course. These are exactly what they sound like—a source for additional practice if it is needed. There are sets of Extra Practice books for the U.S., Common Core, and Standards editions, and you need to choose them to fit the edition you are using—they are not interchangeable. All Extra Practice books are printed in black and white. They include illustrations such as those in the textbooks, even though they are not in color. Answer keys are at the back of each book.

Extra Practice workbooks for the Common Core editions are substantial, with about 200-250 pages per book. They briefly reteach the key concepts covered in the corresponding unit in a section called "Friendly Notes." The presentation in Extra Practice is different from that in the texts, so this might be especially helpful if a child hasn't really understood the lesson. Friendly Notes are followed by one or more exercises. Exercises are generally two or more pages in length. While many units have about two exercises per unit, especially at lower levels, I did spot units with five and six exercises each. There are generally more exercises per unit at higher levels. Extra Practice workbooks for the Standards editions are very similar to those for the Common Core editions. Extra Practice workbooks for the U.S. editions are about half the size of the Common Core versions. They do not include reteaching material. Instead they have lots of practice problems. Extra Practice books can be used as needed rather than on a regular basis. It is easy to identify which sections you might use either to review the presentation of the concepts or to practice those concepts.

Challenging Word Problems

See my separate review of the Primary Mathematics Challenging Word Problems series.

Singapore Math Intensive Practice

These books correlate directly with the U.S. editions of Primary Mathematics, with two books for each level corresponding to the A and B books for Primary Mathematics. These black-and-white workbooks provide additional practice on what has been taught in each unit of Primary Mathematics as well as mid-year and end-of-year reviews. For each section, there is a large set of problems of various types. There are more visual illustrations in younger levels than older levels. Word problems are presented in a separate section of their own. Word problems are followed by a set of "Take the Challenge!" problems—these are primarily puzzles of the sort you find in critical thinking skill books. While the basic problem sets and word problems can be used with all students, Take the Challenge! problems might be more difficult for some students.

Singapore Math® Live

Experienced teacher Brenda Barnett has been licensed by the publisher to provide online teaching assistance for Primary Mathematics courses. Singapore Math® Live is a teaching assistance program and solutions guide for the teacher rather than for the student. Barnett has laid out courses of study for levels 3, 4, and 5 of Primary Mathematics that require you to use workbooks A and B, Intensive Practice A and B, and Challenging Word Problems for each level. (You won't need the Primary Mathematics textbooks themselves.)
You need to purchase the books separately from other sources. The cost for Singapore Math® Live is $50 per level for 12 months. Levels 1, 2, and 6 should be available for the fall of 2016.

Your subscription gives you access to the YouTube recordings of Barnett's instruction for each week's lessons. At the beginning of each week's "class recording," Barnett writes out the pages to be covered in each of the books. The assignment can also be printed as a PDF document. Barnett does not go through the entire lesson in the videos. Instead, she highlights concepts or problems that might pose difficulty, explaining how to teach only as needed. She does work through each of the word problems in both the Intensive Practice books and Challenging Word Problems. We never see Barnett on the videos. What we see, instead, is a whiteboard-type screen on which Barnett works out problems while we listen to her voice.

Parents will need to spend some time watching Barnett's videos on their own each week with the textbooks in front of them. Once parents are familiar with how to teach each week's lesson and how to help students solve the problems, they can then teach their children. While this requires more prep time each week, it should enable parents to teach effectively as needed and assist their students without having to stop and figure things out for themselves. Sample lessons are available free at the Singapore Math® Live website. Note that Singapore Math® Live is a separate company from Singapore Math, Inc., but it operates with the permission and approval of Singapore Math, Inc.

While Primary Mathematics is one of my Top Picks, it isn't the easiest math series to teach. Singapore Math® Live should be a big help for those parents who need more guidance and assistance.
A forest and fire ecologist discusses her research on how to reduce the damage being done to BC’s forests by fires.

BY LATE AUGUST there had been over 1,100 forest fires in BC during 2017. With 1 million hectares burned, it was officially a record-breaking season. In the previous ten years, the largest area lost was in 2014, when 339,168 hectares went up in smoke. One would have to go back to 1958’s record of 855,000 hectares burned to come anywhere close. This fire season also resulted in the longest state of emergency in BC’s history. Interestingly, between 2006 and 2016 the average annual number of fires was 1,844, so this year’s 1,100 (and rising) fires were, on average, a lot bigger than in previous years. And the outlook does not look any better. Natural Resources Canada’s Canadian Forest Service predicts a potential doubling of the amount of area burned in Canada by the end of this century, compared with amounts burned in recent decades.

[Photo: One of the more than 1,000 wildfires in BC in 2017]

Besides the devastation to forests and wildlife this summer, over 45,000 people were evacuated from their homes. While residents of the Interior bore the brunt of the unpleasant and sometimes tragic consequences, even those of us on BC’s coast experienced numerous smoky days, with attendant health issues. And, of course, there’s a substantial impact on BC’s economy. In 2014, when less than one-third as much area burned, direct costs were $300 million, so this year’s direct costs will be significantly higher. And then there are all the indirect costs, from health care through impacts on tourism, small business, and agriculture.

BC, of course, is not alone. Heat waves and droughts have led to horrific wildfires in Italy, France, Spain and especially Portugal. In California, 100 million trees are expected to be casualties of drought and rising temperatures. A changing climate has been identified as increasing the intensity of these events. A recently published meta-analysis by 63 scholars in Nature Ecology and Evolution found that trees in droughty conditions shut the pores that let in carbon dioxide in order to conserve moisture. That also blocks water transport within the tree, leading to dehydration and carbon starvation—in other words, dead, dry trees that don’t absorb atmospheric carbon and easily catch fire.

Forest fires themselves are a significant contributor to greenhouse gas emissions. They function as a “feedback loop”—warmer, drier conditions caused by climate change produce more forest fires, which release carbon and thereby contribute to climate change. Forest fires are one factor reducing the ability of BC forests to act as carbon sinks (logging and insect outbreaks also contribute). According to the federal government’s Forest Service, in the past Canada’s forests absorbed about one-quarter of the carbon emitted by human activities, but in some recent years they have become carbon sources, emitting more than they absorb.

Is there anything we, or our elected governments, can do to lessen wildfires and their impact? Focus interviewed forest and fire ecologist Jill Harvey about the situation. Harvey, who graduated from UVic in 2017 with a PhD in geography and whose research was published in July in two peer-reviewed journals, looks both to the past and the future. “The mechanisms driving global climate change and ecosystem response are numerous,” she says. “Therefore, the research questions I ask target understanding changing disturbance regimes and tree growth-climate responses.
Looking back into the past and into the future, my research examines both the causes and consequences of environmental change in temperate forests, with a special interest in the outcomes for forest structure, ecosystem function and management implications.”

Focus caught up with Harvey (via email) in Greifswald, Germany, where she is doing postdoctoral research at the Institute for Botany and Landscape Ecology. She is there to gain international expertise in advanced tree-ring and climate science approaches, which she will bring back to Canada.

Q. What does your research show about the history of forest fires in British Columbia?

A. Historically, many sites in the Cariboo Forest Region burned every 15 to 25 years between 1600 and 1900 AD. These fires consumed fine fuels and maintained open forests. In the last 100 years, very few of these sites recorded a single fire. Effective and widespread fire suppression has resulted in denser forests throughout much of the Cariboo, providing more fuel for fires. For example, one of my research sites near Hanceville burned in mid-July in the Hanceville Fire Complex, which is over 200,000 hectares in size and only 25 percent contained [on August 21]. At that site, nine historic fires were recorded between 1769 and 1896, with fire occurring about every 16 years. No fires have burned at that site for over 120 years. All the fuel that has accumulated over the past 120 years is supporting the fire that is burning right now.

Q. How did you conduct your recent research?

A. The fires that are burning in the Cariboo Forest Region are intense due to the accumulation of fuels over the past century. As I mentioned, fires prior to the 20th century were more frequent and generally less severe. These lower-intensity fires oftentimes “scarred” mature Douglas-fir trees but did not cause the trees to die. These living Douglas-fir trees, which can reach over 500 years of age, are recorders of past fire activity. Fire scars are preserved in the chronology of the tree’s life, recorded annually as tree rings. Using principles of dendrochronology, tree-ring science [done by tree core sampling], I am able to date the year of the fire and sometimes even the season in which the fire occurred. When you compile the fire records from multiple trees at a site, you can gain a pretty clear picture of the history of fire activity at that site. And when you compile many sites across a region, you can identify years of widespread fire activity—like we are experiencing this summer. I then link the years when fires burned to historical records of climate to see what kind of climate conditions are associated with different types of fires. For example, I found that fires that burned in forests next to expansive grasslands are associated with wet, cool springs. Wet, cool springs promote the growth of fine fuels, an important prerequisite to the spread of fire in fuel-limited environments (e.g., grasslands). In years when widespread fires burned at many sites across the Cariboo Forest Region, I found that multiple years of drought preceded these large fire years.

Q. What changed so much 120 years ago?

A. Around the end of the 19th century and towards the 1950s, European settlement in the Cariboo Forest Region increased. As it is now, fire was dangerous in areas where people lived, cattle grazed, and transportation corridors were constructed. Fires were suppressed and care was taken not to set fires.
As stewards of the landscape, Indigenous people of the region had used fire effectively and carefully, thinning forests and promoting vegetation diversity. Indigenous burning was discouraged and forbidden in the early 20th century. Fires were perceived to “destroy” forests. That is the irony we are facing now. The measures that we have taken for over 100 years to “protect” our forests by suppressing fires have actually predisposed forests to more intense, and much more damaging, fires.

Q. What does your research show about the way a forest fire changes ecology?

A. I conducted an intensive survey of historical patterns of fire severity in the Churn Creek Protected Area, which is located in the Cariboo Forest Region. Many of my plots were in forested areas next to grasslands. When I collected data in 2013 and 2014 for this study, these forests were incredibly dense, with many young trees in the understory. I sampled hundreds of these young trees, and when I got back to the lab and determined their ages, almost all of them had established in the late 1800s over a 20-year period. Prior to the late 1800s, frequent fire in these grassland-adjacent forests eliminated seedlings and kept forests open, encouraging the growth of native grass communities and promoting habitat for many animal species. Now, these dense forests have changed the composition of the herbaceous understory and eliminated habitat for multiple ungulate and bird species.

Q. Given your research and that of others, how should forest management practices change in BC?

A. Considering the costs associated with fighting the fires of 2017 [potentially $1 billion] and the fact that scientists have already confirmed that more fire is expected in the future, more funding should be directed to fire management and research that reduces fire risk. Today’s forest management plans should continue to enhance practices such as thinning dense forests and using prescribed fire to reduce fuel loads. We also should consider expanding these activities in the province to include larger areas. Increased research directed at prescribed burning approaches, smoke dispersal and the effects of fire is crucial. If fires are to be more frequent in the future, we need to use the fires of this summer to improve our understanding of the ecological effects of fire. These insights would allow us to improve the resilience of both the forests and communities of BC.

Q. What does climate change mean for the future of BC’s fires?

A. Climate projections for the next 50 to 100 years clearly and consistently show an increase of one to three degrees Celsius or even more. Future drier and warmer climates will undoubtedly lead to more fires in the province, burning for longer periods of time. If we do not reduce the fuel load now, we can expect more intense fires across multiple locations in the future.

Q. So we can’t necessarily reduce the number of fires, but we could work on reducing their intensity?

A. Yes, I think that we can reduce the intensity with which fires burn in targeted areas, such as areas around communities. Efforts to thin forests can be focused in these settings to inhibit the spread of fire towards people’s homes and property.

Q. Wasn’t reducing the fuel load and prescribed burning recommended, among other measures, after the 2003 fire season, when 260,000 hectares burned with costs of $700 million? Were these not done—or not enough?

A.
Yes, prescribed burning and thinning were recommended following the 2003 fire season, and these treatments were conducted in some regions. However, I do think that more can be done going forward, especially after this summer.

Q. I understand the area burned annually in Canada is 2.5 times larger than the area harvested. Does that mean we should allow more logging?

A. No, I don’t think we should log more! Many of the large fires that burn every year are in the northern boreal forests of Canada, where it is very difficult and oftentimes unnecessary to suppress the fire (no people or communities nearby). Fire is also a very important part of the ecology of boreal forests, and in these environments trees are generally not targeted for logging. The tree species and/or sizes are currently considered unsuitable.

Q. What in your mind is the best path forward? Is there any good news about BC forests and fire?

A. We cannot simply hope that a fire year like 2017 won’t happen again. It will happen again, and it will likely happen more frequently. We must use this summer as a catalyst for change in forest management practices and research. There are many stakeholders to consider when we plan our path forward after this summer. We must first consider those directly affected by the fires of 2017, hear their stories, and collectively recover from a very difficult time. We need to critically review how we manage our forests and look back to the 2003 fire year to see if we have made progress. We need to integrate insights from historical fire perspectives, Indigenous land management practices, and fire behaviour and meteorology science. Immediate resources for directly reducing fire risk, such as forest thinning and prescribed fire, are essential. Fire-related research needs to occur at all scales and across all involved disciplines. The 2017 fires present an exciting opportunity for fire ecologists to examine what happens next. Understanding how landscapes recover after a fire will help us develop appropriate management strategies important for reforestation. We also need to look at how other forest agencies, such as those in the US and Australia, are managing forests and fire, and provide opportunities for interagency and international collaboration between managers and scientists.

Leslie Campbell is the editor of Focus.
1. On Nov. 5, at 4 a.m. ET, India launched a spacecraft bound for Mars.
2. The rocket, PSLV-C25, carrying the 3,000-pound Mars orbiter Mangalyaan (“Mars craft” in Hindi), took off from the island of Sriharikota in the southern state of Andhra Pradesh.
3. After 44 minutes the Mars orbiter separated from the rocket; it will have to travel 485 million miles to reach an orbit around Mars.
4. The orbiter will travel for over 300 days and is expected to reach an orbit around Mars on Sept. 24, 2014.
5. India’s Mars mission, which began in 2010, cost $72 million. That’s a fraction of the cost of NASA’s Mars project.
6. If successful, India will be only the fourth nation in the world to reach the red planet, after the US, the Soviet Union and Europe. More than half of all Mars projects by different countries have failed, including those by China and Japan.
7. The main objectives of the Mars mission are to determine how Martian weather systems work and to search for methane, an indicator of life processes on the planet. The data collected by the orbiter will help us understand what conditions could make life possible on other planets.
8. The orbiter will have at least six months to investigate Mars’ landscape and atmosphere.

[Photo: Mars’ surface captured by NASA’s Mars rover Curiosity.]
The clivus is the surface of a portion of the occipital and sphenoid bones in the base of the skull. It is surrounded by the neurovascular structures of the brainstem, as well as both internal carotid arteries. Tumors of the clivus can be benign or cancerous; they can be classified as chordomas or chondrosarcomas.

Chordomas are rare, aggressive, slow-growing, invasive, and locally destructive tumors that arise from the notochord, a structure that appears in embryonic stages and guides the growth of the bony skull and spine. Normally, notochord remnants form part of the intervertebral discs. A chordoma occurs when additional notochord cells are enclosed by the developing bones. These rare tumors are slow-growing and benign, but they may invade nearby structures, tend to recur after treatment, can destroy surrounding tissue, and may spread to other parts of the body.

Chondrosarcomas, which are even rarer than chordomas, are tumors of the cartilage that the skull replaces during development, although their exact origin is unclear. Most of these tumors are slow-growing, although in rare cases they can be aggressive and malignant. Males are affected more frequently than females, and this tumor has a propensity for local recurrence, direct extension from the primary site, and systemic and cerebrospinal fluid metastasis.

Examination of some masses may allow a physician to determine their cause based on location, size, and consistency. In other cases, however, additional tests may be required, such as:
- Neurological exam — includes evaluation of eye movements, hearing, sensation, motor function, swallowing, sense of smell, balance and coordination
- MRI — Magnetic resonance imaging best distinguishes clival chordomas and chondrosarcomas from meningiomas. It uses a magnetic field rather than x-rays (radiation).
- CT scan — Computed tomography combines a sophisticated x-ray with computer technology. CT scanning is helpful for delineating the bony involvement in clival tumors. Injections of iodine dye (contrast material) may be used to enhance the visibility of abnormal tissue during CT scans.
- Biopsy — A sample of tissue is taken and examined under a microscope. A biopsy is ultimately necessary to properly diagnose a clival tumor.

Planning and execution of clival tumor removal can be among the most complex and difficult procedures in skull-base surgery. Radical surgical resection is attempted when possible, as these tumors have a high incidence of recurrence if incompletely removed. Adjuvant radiotherapy with proton beam or gamma knife is used in cases of subtotal resection.
On the grandest scale, our universe is a network of galaxies tied together by the force of gravity. Cosmic Web, a new effort led by cosmologists and designers at Northeastern’s Center for Complex Network Research, offers a roadmap toward understanding how all of those tremendous clusters of stars connect—and the visualizations are stunning. The images below show several hypothetical architectures for our universe, built from data on 24,000 galaxies. By varying the construction algorithm, the researchers have designed cosmic webs that link up in a number of different ways, based on the size, proximity, and relative velocities of individual galaxies. I call it God View.

“Before, the cosmic web was more like a metaphor,” Kim Albrecht, the designer behind the new visualizations, told Gizmodo. “This is the first time somebody has made these calculations and thought about it as an actual network.”

The mathematical tools the researchers have developed will not only shed light on the large-scale structure of the cosmos, they could help us answer fundamental questions about the birth and evolution of the universe. But if the science sounds a little out-of-this-world, don’t worry. You don’t need a physics PhD to appreciate the beauty of it.
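To make the idea of a "construction algorithm" concrete, here is a minimal sketch of the simplest network construction described above: linking every pair of galaxies that lie within a fixed distance of each other. The random coordinates, the linking length, and the use of SciPy's k-d tree are illustrative assumptions, not the Cosmic Web team's actual code, which also weighs galaxy size and relative velocity.

```python
# A proximity-based "cosmic web" sketch: link galaxies closer than a
# fixed threshold. Coordinates and linking length are made up.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(42)
positions = rng.uniform(0.0, 100.0, size=(1000, 3))  # stand-in for 24,000 galaxies (Mpc)

LINK_LENGTH = 5.0  # assumed linking threshold, Mpc

tree = cKDTree(positions)
edges = tree.query_pairs(r=LINK_LENGTH)  # all index pairs closer than LINK_LENGTH

print(f"{len(positions)} galaxies, {len(edges)} links in the web")
```

Swapping in a different rule here, such as linking each galaxy to its k nearest neighbors instead of using a fixed radius, is exactly the kind of variation that produces the different web architectures in the visualizations.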
The factors of 50 and the prime factors of 50 differ because fifty is a composite number. Also, despite being closely related, the prime factors of 50 and the prime factorization of 50 are not exactly the same either. In any case, by reading on you can learn the answer to the question "what are the factors of 50?" and everything else you want to know about the topic.

What are the Factors of 50?

They are: 50, 25, 10, 5, 2, 1. These are all the factors of 50, and every entry in the list can divide 50 without remainder (modulo 0). That’s why the terms factors and divisors of 50 can be used interchangeably. As is the case for any natural number greater than zero, the number itself, here 50, as well as 1, are factors and divisors of 50.

Prime Factors of 50

The prime factors of 50 are the prime numbers which divide 50 exactly, without remainder, as defined by Euclidean division. In other words, a prime factor of 50 divides the number 50 without any remainder, modulo 0. For 50, the prime factors are: 2, 5. By definition, 1 is not a prime number.

Besides 1, what sets the factors and the prime factors of the number 50 apart is the word “prime”. The former list contains both composite and prime numbers, whereas the latter includes only prime numbers.

Prime Factorization of 50

The prime factorization of 50 is 2 x 5 x 5. This is a unique list of the prime factors, along with their multiplicities. Note that the prime factorization of 50 does not include the number 1, yet it does include every instance of a certain prime factor. 50 is a composite number. In contrast to prime numbers, which have only the trivial factorization, composite numbers like 50 can be written as a product of two factors in more than one way. To illustrate what that means, select the rightmost and leftmost integers in 50, 25, 10, 5, 2, 1 and multiply them to obtain 50 (50 x 1 = 50). This is the first factor pair. Next choose the second rightmost and the second leftmost entries to obtain the second pair, which also produces 50 (25 x 2 = 50).

The prime factorization or integer factorization of 50 means determining the set of prime numbers which, when multiplied together, produce the original number 50. This is also known as the prime decomposition of 50.

To sum up: The factors, the prime factors and the prime factorization of 50 mean different things, and in strict terms cannot be used interchangeably despite being closely related. The factors of fifty are: 50, 25, 10, 5, 2, 1. The prime factors of fifty are 2, 5. And the prime factorization of fifty is 2 x 5 x 5. Remember that 1 is not a prime factor of 50. With this in mind, tasks such as "write 50 as a product of prime factors" or "list the factors of 50" will no longer pose a challenge to you.
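If you like to check such lists programmatically, here is a minimal Python sketch (the function names are illustrative) that computes the divisors and the prime factorization of 50 by trial division:

```python
def divisors(n: int) -> list[int]:
    """Return all positive divisors of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

def prime_factorization(n: int) -> list[int]:
    """Return the prime factors of n with multiplicity (trial division)."""
    factors = []
    p = 2
    while p * p <= n:
        while n % p == 0:      # divide out each prime as often as it fits
            factors.append(p)
            n //= p
        p += 1
    if n > 1:                  # whatever remains is itself prime
        factors.append(n)
    return factors

print(divisors(50))                          # [1, 2, 5, 10, 25, 50]
print(prime_factorization(50))               # [2, 5, 5] -- the prime factorization
print(sorted(set(prime_factorization(50))))  # [2, 5]    -- the distinct prime factors
```

Note how the output mirrors the distinction made above: the factorization keeps every instance of each prime (2 x 5 x 5), while the set of prime factors is just {2, 5}.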
By Neepa Sevak

Anger is an instinctive human response to circumstances beyond our control, or to our inability to deal emotionally with certain situations. Each one of us has lived through anger at some moment, whether as a transitory frustration or as a progressive wrath. Depending on how it is expressed, anger can have constructive or harmful consequences. Positive angry thoughts can help you survive a crisis and resolve the situation in a practical manner, whereas negative anger feelings can trigger hostile, uncontrollable, violent behaviors. Anger can be a self-protective reaction to underlying fears, dissatisfaction, hopelessness, or frustration. Some people find it difficult to communicate their anger, some can burst out with rage, whereas for some it can surface habitually. When anger holds up your ability to think or function plainly, gets in the way of your personal or professional interactions, begets violence, or leaves others flustered by your rage, you may be suffering from an anger disorder, which should be treated without delay.

Types of Anger
- Anger from mortification: These people have a poor self-esteem, which they mask by rebuking and dishonoring others.
- Atrocious Anger: These people feel insecure and unreasonably vulnerable to others, and anger is an avenue of self-defense. They imagine that others are angry instead of recognizing their own wrath.
- Expectation Anger: These people have a negative outlook and have unrealistic expectations of themselves and of others. Their root of anger is not accepting people as they are.
- Forestalling Anger: These people are terrified of their own anger, or the anger of others. They are fearful of losing control and feel protected in peaceful situations.
- Revulsion Anger: These people have an unresolved sentiment of anger, which causes resentment, and they become hostile towards those they cannot forgive.
- Impulsive Anger: These people feel a loss of control and hence are aggressive, explode in an instant, and can be a threat to themselves and others. Their actions are impulsive, for which they are later repentant.
- Premeditated Anger: These people contemplate their anger, they like controlling others, and they get what they want by intimidating or overpowering others.
- Principled Anger: These people are fanatics, self-opinionated and uncompromising. They do not try to understand other people and get heated when others do not meet their expectations.
- Obsessed Anger: These people get psychologically thrilled by and find pleasure in their strong feelings of anger. They get angry frequently, even at insignificant trifles, which tarnishes their relationships.
- Underhanded Anger: These people by no means expose their anger. Their anger is exhibited in devious ways, like disregarding things and others, frustrating others, and neglecting their own needs.

Causes of Anger
- Fear, anxiety, depression.
- Feelings of hurt, disrespect, humiliation, embarrassment, dissatisfaction, jealousy, and sadness.
- Inability to forgive and forget.
- Lack of appreciation, feelings of rejection.
- Lack of control or a controlling nature.
- Unfriendly, violent parents or other family members.
- Physical or sexual abuse.
- Substance abuse.
- Media violence.
Symptoms of Anger
- Bad temper, rudeness, violence, loss of control
- Self-stimulation, obsession, compulsion, withdrawal, unpredictable behavior
- Anxiety, restlessness, frustration, depression or nervous breakdown
- Pessimistic and vindictive attitude
- Easily offended
- Inability to act or think rationally
- Difficulty managing personal, social and professional rapport
- Drug, alcohol, gambling, smoking or other addictions
- Eating disorders
- Flushed face
- Rapid pulse, increased blood pressure
- Shortness of breath
- Tightness of jaws and fists
- Nervous twitching or shaking of body
- Self harm
- Suicidal thoughts or suicide

Homeopathic Approach to Anger

Self Care Measures for Anger
- Learn to recognize as well as acknowledge your anger, and identify the cause of it.
- Identify the situations that provoke you.
- Take the help of a close family member or friend and share your feelings of anger with them.
- Exercise regularly and practice deep breathing exercises, meditation, and yoga.
- Develop hobbies like listening to music, reading books, painting, writing, etc.
- Reduce the intensity of your anger with a solitary period of silence and rest when you recognize the signs of anger.
- During an outburst, analyze your alternatives for behaving and envision how you may react. Be aware that you are accountable for your anger and actions.
- Release all shame and guilt and replace your negative behaviors with more positive actions.
- Develop a sense of humor.
- Avoid alcohol, drugs and other addictive substances.
- Focus on responsibilities one at a time and proceed towards larger objectives when you are ready.
- Take your time fixing your problems.
- Practice what you preach to your children.
- Always remember that you cannot control the behavior of others, and that pardoning is not overlooking; it is recalling and letting go.

Too much antagonism will jeopardize your personal, social, and professional life and your overall health. If you are noticing that your anger level is on the high side, then consider Homeopathy and be healthy, happy, calm, focused, behave better and become rage free. Control your anger before it controls you.
In the previous post, we discussed divisibility by 2. In this post, we discuss divisibility by 3.

Rule: A number is divisible by 3 if the sum of its digits is divisible by 3.

The number 321 is divisible by 3 because 3 + 2 + 1 = 6 is divisible by 3. On the other hand, the number 185 is not divisible by 3 because 1 + 8 + 5 = 14 is not divisible by 3.

Now, why does this rule work? Notice how numbers are represented in expanded notation: a number in the hundreds can be represented as 100h + 10t + u, where h, t, u are the hundreds, tens, and units digits. Now, we can represent 100h + 10t + u as 99h + h + 9t + t + u and regroup the terms as (99h + 9t) + (h + t + u). Of course, 99h + 9t = 3(33h + 3t) is divisible by 3, so the whole number is divisible by 3 exactly when the remaining part, h + t + u, is divisible by 3. But h + t + u is the sum of the digits of the 3-digit number. This proves (for three-digit numbers) that the rule above is true. Although the proof above works only for 3-digit numbers, the same argument can be extended to any number of digits.
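If you want to convince yourself numerically, here is a short Python check (a sketch using nothing beyond the standard library) that compares the digit-sum rule against direct division:

```python
def digit_sum(n: int) -> int:
    """Sum the decimal digits of n."""
    return sum(int(d) for d in str(abs(n)))

# The two examples from the post, plus a couple more.
for n in (321, 185, 999, 1001):
    print(n, "divisible by 3:", n % 3 == 0,
          "| digit sum divisible by 3:", digit_sum(n) % 3 == 0)

# The rule agrees with direct division for every number in a large range.
assert all((n % 3 == 0) == (digit_sum(n) % 3 == 0) for n in range(1, 100000))
```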
An integer is a whole number (not a fraction) that can be positive, negative, or zero. Therefore, the numbers 10, 0, -25, and 5,148 are all integers. Unlike floating point numbers, integers cannot have decimal places.

Integers are a commonly used data type in computer programming. For example, whenever a number is being incremented, such as within a "for loop" or "while loop," an integer is used. Integers are also used to determine an item's location within an array.

When two integers are added, subtracted, or multiplied, the result is also an integer. However, when one integer is divided by another, the result may be an integer or a fraction. For example, 6 divided by 3 equals 2, which is an integer, but 6 divided by 4 equals 1.5, which contains a fraction. Decimal numbers may either be rounded or truncated to produce an integer result.
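A short illustration in Python (one illustrative choice of language; statically typed languages such as C or Java use fixed-width integer types and truncate integer division by default):

```python
a, b = 6, 4

# Adding, subtracting, or multiplying integers always yields an integer.
print(a + b, a - b, a * b)   # 10 2 24

# Division may produce a fraction; Python's / always returns a float.
print(6 / 3)   # 2.0 -- mathematically an integer, but stored as a float
print(a / b)   # 1.5 -- contains a fraction

# Round or truncate to get back to an integer result.
print(round(a / b))  # 2 -- rounded
print(a // b)        # 1 -- floor division (truncates for positive operands)
print(int(1.5))      # 1 -- explicit truncation toward zero
```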
By Kiera Ford, volunteer contributor, American Red Cross

In times of uncertainty, when violations of people's rights seem to be at their height, National Human Rights Month is here to remind us of the day that our rights were codified and set in place. During World War II, in order to combat Hitler's deadly dictatorship, the Allies established a list of rights that everyone should have. National Human Rights Month actually traces back to 1948, when the United Nations General Assembly met and put into place the basic freedoms that every man and woman should have: Freedom of Speech, Freedom of Religion, Freedom from Fear, Freedom from Want, and some 30 other freedoms necessary for life. Since these articles of freedom were written, they have helped keep people safe from dictatorship, bring prisoners of war home safely, and maintain the rights and civil liberties of people across the world.

What can you do to celebrate National Human Rights Month?

The best way to acknowledge and even celebrate National Human Rights Month is by recognizing the work and diligence that those before us have put in so that we can live freely and comfortably. Because not everyone is able to enjoy these freedoms that, for most, are awarded at birth, take time this month to advocate for one another, to fight for the basic human rights that everyone should have, and to make others aware of the fight that some of us still have to endure. You can also work with other organizations throughout the nation or in your local community to spread knowledge and goodwill. This work doesn't have to be monumental. It can be as simple as a donation or volunteering for an hour once a week. Whatever you choose to do, remember to never take for granted the rights you've been given at birth.

Why is National Human Rights Month important to the Red Cross? Read through our Seven Fundamental Principles to see how humanity, impartiality, neutrality, independence, voluntary service, unity, and universality play a part in our mission.
Young children are naturally curious and passionate about learning (Raffini, 1993). In their pursuit of knowledge, they're prone to poking, pulling, tasting, pounding, shaking, and experimenting. "From birth, children want to learn and they naturally seek out problems to solve" (Lind, 1999, p. 79). Duckworth (1987) refers to "knowing the right answer" as a passive virtue and discusses some of its limitations. "Knowing the right answer," she says, "requires no decisions, carries no risks, and makes no demands. It is automatic. It is thoughtless" (p. 64). A far more important objective is to help children realize that answers about the world can be discovered through their own investigations. Developing new concepts or ideas is an active process and usually begins with child-centered inquiry, which focuses on the asking of questions relevant to the child. While inquiry involves a number of science-related activities and skills, "the focus is on the active search for knowledge or understanding to satisfy students' curiosity" (Lind, 1999, p. 79). Preschool-age children are inquisitive and open-minded, perfect traits for budding young scientists! Science at a preschool level is a lot of fun: kids are truly mesmerized by chemical reactions, love exploring nature, and jump at the chance to build things.

Challenge your kids to get creative, question, reason, and learn with this fun and engaging activity! The goal is simple: design and build a system that will protect an egg from a 1 meter (3.3 feet) drop. Eggs that smash or crack fail the test, while eggs that survive without a scratch pass! Build your egg protectors from resources such as: plastic straws, popsicle sticks, tape, recycled paper, glue, plastic bags, boxes, used material, plastic containers, cotton, or any other items you may think of! Make sure to have a lot of paper napkins at hand!

Note to the parents: The aim of the activity is to create something that can absorb the energy the egg gathers as it accelerates towards the ground. A hard surface will crack the egg, so you have to think carefully about how you can protect it. Something that will cushion the egg at the end of its fall is a good place to start: you want the egg to decelerate slowly so it doesn't crack or smash all over the ground. You'll need to run a few trials, so have some eggs ready as guinea pigs; those that don't survive will at least be comforted knowing they were smashed for a good cause, and if not, you can at least have scrambled eggs for dinner, right?
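For parents who want to put rough numbers on that advice, here is an illustrative back-of-the-envelope calculation (the egg mass and stopping times below are assumed values, not measurements). The egg hits the ground at the same speed no matter what protector it wears; what the design changes is how long the egg takes to stop, and a longer stop means a smaller average force.

```python
import math

g = 9.81   # gravitational acceleration, m/s^2
h = 1.0    # drop height, m
m = 0.06   # assumed mass of a typical egg, kg

v = math.sqrt(2 * g * h)  # impact speed: about 4.4 m/s for a 1 m drop

# Average stopping force = momentum / stopping time (m * v / dt).
for dt, surface in [(0.001, "hard floor"), (0.02, "thin padding"), (0.1, "good cushion")]:
    force = m * v / dt
    print(f"{surface:>12}: stops in {dt:5.3f} s -> average force ~ {force:7.1f} N")
```

Stretching the stop from a millisecond to a tenth of a second cuts the average force a hundredfold, which is exactly why soft, crushable materials beat rigid boxes in this challenge.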
This piano theory workbook is full of worksheets that expand on previous books to include:
- Activities ("Sound Alikes", "Cross Out", "Scale Trails") that continue to entertain yet challenge students' reasoning skills
- New facts (diminished triads, 3 forms of minor, transposition, and melody writing) that appear frequently throughout
- All books exactly follow the Texas State theory curriculum

Just The Facts is a unique, student-friendly music theory workbook series, useful as preparation for the Texas State theory test. These piano theory worksheets are designed to maximize learning and fun, which we feel is the best way to learn music theory!
- Each music theory workbook has twelve 2-page lessons plus extra pages of 'real' music analysis.
- Based on Discovery and Spiral Learning Theories.
- New concepts are introduced as a FACT, then reintroduced as a FACT REMINDER.
- Every music theory lesson has ear training, analysis activities, and a musical game.
- Written activities and analysis help students apply concepts to repertoire.
Grades 2 to 12

In the Classroom: Use digital images of lab experiments or class activities for sharing on a class wiki or blog with clickable enhancements offering additional information. Have students add links or even a blog reaction or explanation to their project or experiment image. Use the site for making a photography or art portfolio blog. Have students annotate images to explain their work or various techniques they used. World language or ESL/ELL teachers can enhance images with links to sound files or other explanations for better understanding. Use in world language classes to label items in an image with the correct words in that language. Young students could write simple sentences to practice language skills while explaining a favorite picture or activity. Use in science to explain an experiment or in a consumer science class to explain cooking or other techniques. Consider creating a class account for student groups to use together. Teachers can create a ThingLink of an image with questions and links that students must investigate to respond to as a self-directed learning activity. An image of a tree could have questions and links about types of leaves, photosynthesis, and the seasons, for example. Gifted students could create a collection of annotated images that link to sound files to add "personalities" to science objects (think of the talking trees in the Wizard of Oz) or create an annotated image of almost anything they research to go beyond the regular curriculum they have already mastered: annotate an image of a food product to link to information about its sources and potential harms; annotate an image of a campaign poster and "debunk" its claims with links to video clips that show the politician in action; annotate an advertisement with links to its propaganda techniques. Teens with a sophisticated sense of humor will especially enjoy linking to ironic examples that debunk or offer a satire of the original!

- Includes an education-only area for teachers and students
- Parent permission advised before posting student work created using this tool
- Includes interaction with general public/public galleries with unmoderated content
- Requires registration/log-in (WITH email)
- Products can be embedded
- Multiple users can collaborate on the same project
- Includes teacher tools for registering and/or monitoring students
Carbohydrates are one of three basic macronutrients needed to sustain life (the other two are proteins and fats). They are found in a wide range of foods that bring a variety of other important nutrients to the diet, such as vitamins and minerals, phytochemicals, antioxidants, and dietary fiber. Fruits, vegetables, grain foods, and many dairy products naturally contain carbohydrates in varying amounts, including sugars, which are a type of carbohydrate that can add taste appeal to a nutritious diet.

Carbohydrates, often referred to as "carbs," are your body's primary energy source, and they're a crucial part of any healthy diet. Carbs should never be avoided, but it is important to understand that not all carbs are alike. As any Personal Trainer in Alpharetta will tell you, there are two types of carbohydrates. Carbohydrates can be either simple (nicknamed "bad") or complex (nicknamed "good") based on their chemical makeup and what your body does with them. Complex carbohydrates, like whole grains and legumes, contain longer chains of sugar molecules; these usually take more time for the body to break down and use. Simple carbohydrates are composed of simple-to-digest, basic sugars with little real value for your body. The higher in sugar and lower in fiber, the worse the carbohydrate is for you — remember those leading indicators when trying to figure out if a carbohydrate is good or bad.

Fruits and vegetables are actually simple carbohydrates — still composed of basic sugars, although they are drastically different from other foods in the category, like cookies and cakes. The fiber in fruits and vegetables changes the way that the body processes their sugars and slows down their digestion, making them a bit more like complex carbohydrates.

Simple carbohydrates to limit in your diet include:
- Artificial syrups
- White rice, white bread, and white pasta
- Potatoes (which are technically a complex carb, but act more like simple carbs in the body)
- Pastries and desserts

Complex carbohydrates are considered "good" because of the longer series of sugars that make them up, which take the body more time to break down. They generally have a lower glycemic load, which means that you will get lower amounts of sugars released at a more consistent rate — instead of peaks and valleys — to keep you going throughout the day. Picking complex carbohydrates over simple carbohydrates is a matter of making some simple substitutions when it comes to your meals: have brown rice instead of white rice, have whole-wheat pasta instead of plain white pasta. These simple changes in your diet can make a world of difference in your overall health and wellness. By choosing complex carbs over simple carbs, you can lower your blood sugar and cholesterol, and even lose those stubborn last few pounds.

If you have any more questions about your diet or ways to increase your wellness, reach out to your Personal Trainer in Alpharetta or any of our other AMAZING staff here at Thrive Health Systems! We are here to help: 770-667-0099
The geography of Ancient China is often described by geographers in a system of three steps:

The first step is to the far west, near present-day Tibet. With the highest mountains on earth found here, the climate is quite cold in the winter and quite warm in the summer, ranging from -40℃ (-40 F) to 37℃ (100 F), and the region is widely considered inhospitable. Due to this, there aren't many villages, and the villages that are found are quite small.

The next step is the middle of China. It is covered with desert and a small amount of grassland. People here raise grazing cattle and yaks. There are some low hills but no snow. With cold winters and hot summers, this area was never densely populated.

The final step is the East. This area accounts for roughly 95% of the modern and Ancient Chinese population. Two long rivers flow through here, the Yellow and the Yangtze. Here there is plenty of water for crops, and agriculture flourished. In the North, wheat was the main crop, and in the South, rice was more common.
Permaculture is an ethically based design system for creating human habitats that are in harmony with the natural world. Permaculture systems are diverse, stable, resilient, and produce abundantly — usually in the form of food. But permaculture is not merely edible landscaping on steroids, as I feel it is commonly perceived. A permaculture system not only produces food but also conserves water and energy, enriches the local biosphere for both people and wildlife, and brings people together. It produces more than it consumes.

The term 'permaculture' was coined by Bill Mollison in 1978. It was a contraction of 'permanent agriculture', but it has since expanded to 'permanent culture', because true sustainability encompasses more than just farming. The governing design ethics and principles reflect permaculture's agricultural roots, but they are applicable to pretty much all industries.

The three governing design ethics of permaculture:
- Care for the earth – Without a healthy earth, humans cannot flourish.
- Care for the people – Provide the necessary resources for life.
- Return of surplus – Use only what is needed and return waste back into the system.

The twelve design principles of permaculture:
- Observe and interact – By first observing, we can create solutions that are well suited to our situation.
- Catch and store energy – Having a surplus of energy in storage is pretty much the definition of abundance.
- Obtain a yield – The system needs to produce something useful, of course.
- Apply self-regulation and accept feedback – Prudence and an open mind ensure systems will keep working.
- Use and value renewable resources and services – Make use of nature's abundance.
- Produce no waste – In nature there is no waste; our systems should be the same.
- Design from patterns to details – Use the big picture to lay the framework, then fill in the details.
- Integrate rather than segregate – Individual components support each other.
- Use small and slow solutions – Simple systems are more resilient than complex ones.
- Use and value diversity – Diversity increases resilience and better takes advantage of the natural elements present in the system.
- Use edges and value the marginal – The interfaces and edges of the system often contain valuable resources.
- Creatively use and respond to change – Recognize that challenges are opportunities.

I'll go into more detail on each of these in future posts. That is the textbook definition of permaculture in a nutshell. But what does permaculture look like in the real world? Like the Garden of Eden.
Fatigue: a condition characterized by a lessened capacity for work and reduced efficiency of accomplishment, usually accompanied by a feeling of weariness and tiredness. Fatigue can be acute and come on suddenly, or chronic and persistent. Fatigue is different from drowsiness. In general, drowsiness is feeling the need to sleep, while fatigue is a lack of energy and motivation. Drowsiness and apathy (a feeling of indifference or not caring about what happens) can be symptoms of fatigue.

While no one knows what causes chronic fatigue syndrome, doctors have reported seeing similar illnesses for more than a century. In the 1860s, Dr. George Beard named the syndrome neurasthenia because he thought it was a nervous disorder with weakness and fatigue. Since then, health experts have suggested other explanations for this baffling illness.

There are many possible physical and psychological causes of fatigue. Some of the more common are:
- An allergy that leads to hay fever or asthma
- Anemia (including iron deficiency anemia)
- Depression or grief

The sense of fatigue is believed to originate in the reticular activating system of the lower brain. Musculoskeletal structures may have co-evolved with appropriate brain structures so that the complete unit functions together in a constructive and adaptive fashion: the entire system of muscles, joints, and proprioceptive and kinesthetic functions, plus parts of the brain, evolves and functions together in a unitary way. You can easily become tired if you are depressed or experiencing emotional stress. Depression that requires medical help often shows itself through heavy fatigue.

Symptoms of fatigue include the following:
- Weakness, lack of energy, tiredness, exhaustion
- Passing out or feeling as if you are going to pass out
- Palpitations (feeling your heart beating)
- Reduced immune system function
- Short-term memory problems
- Reduced ability to pay attention to the situation at hand

Symptoms of fatigue are often caused by more than one problem. Treating a specific problem, such as anemia, may make you feel better, but other things may still need to be done. That is why many different approaches are considered, which may or may not include medicines. Treating cancer-related fatigue often involves many health professionals, including doctors, nurses, social workers, physical therapists, nutritionists, and others. Behaviour therapy, physiotherapy, occupational therapy, counselling, relaxation therapy, and graded exercise may help. Reducing stress, eating a healthy diet, rest periods, pacing, and support groups also help many people with CFS.

Anxiolytic agents: anxiolytic agents are used to treat panic disorder in CFS patients. Examples include alprazolam (Xanax), clonazepam (Klonopin), and lorazepam (Ativan). Common adverse reactions include sedation, amnesia, and withdrawal symptoms (insomnia, abdominal and muscle cramps, vomiting, sweating, tremors, and convulsions).
Kids may be overweight due to overeating, a lack of exercise, poor sleeping habits, or medical conditions. Putting kids on diets may be too restrictive or compromise nutrient intake; instead, focus on making small, progressive changes toward a healthier lifestyle. Kids can lose excess weight by eating a nutrient-rich diet, practicing moderation, using portion control, and increasing physical activity.

Focus on Variety
Serve kids nutritious meals that emphasize vegetables with lean protein, complex carbohydrates, and healthy fats. Cook a variety of foods to expose them to different tastes and discover what they like. Sometimes cooking foods with a different technique can help kids enjoy a food more. For instance, roasted sweet potatoes and Brussels sprouts are sweet and flavorful compared with plain boiled veggies. Snack on fun finger foods like crunchy carrots with hummus dip or apple slices with cheese. Kids can still enjoy desserts like a slice of cake or a few pieces of candy. By eating nutrient-rich meals, kids will more likely be satisfied with one slice of cake instead of two and not feel deprived.

Kids can lose a substantial amount of weight by making healthier choices at breakfast. Typical breakfast foods such as cereal are often high in sugar. Sugary foods cause blood sugar levels to rise and fall, leading to hunger cravings, overeating, and weight gain. Dietitian Leslie Beck, in the Globe and Mail, suggests choosing cereals that have no more than eight grams of sugar and at least five grams of fiber per serving (see the quick check at the end of this article). In addition to low-sugar cereals, look for breakfast foods that contain protein and healthy fats. Protein is satiating, while healthy fats are essential for healthy brain and bodily functioning. Breakfasts such as scrambled eggs with avocado, or smoothies made with milk, fruit, and almonds, are delicious and nutritious.

Move It and Lose It
Children can lose weight without deprivation by exercising more. Find fun ways to exercise by joining kids' recreational clubs for sports such as soccer, hockey, and basketball. The Centers for Disease Control and Prevention recommends that children engage in 60 minutes of moderate-intensity activity daily, with three of those days involving more vigorous exercise. However, kids who need to lose weight may need to exceed this guideline. Start with 60 minutes of daily activity such as brisk walking, playing soccer, swimming, or biking, and gradually add minutes to reach 70 to 80 minutes of exercise per day.

Kids can also lose weight by eating nutrient-rich vegetables that are high in fiber but low in calories. Fiber is an indigestible plant carbohydrate that aids digestion and stabilizes blood sugar levels. Eating fibrous foods prevents the hunger cravings that lead to overconsumption of sugary and fatty foods. Kids will not be as inclined to reach for extra snacks and desserts if they are already full. Most vegetables are also low in calories, low in sugar, and have little to no fat.
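Beck's cereal rule of thumb above is easy to turn into a quick check. A minimal Python sketch — the function name and example numbers are ours, purely illustrative:

```python
def meets_cereal_guideline(sugar_g: float, fiber_g: float) -> bool:
    """Rule of thumb from the article: <= 8 g sugar and >= 5 g fiber per serving."""
    return sugar_g <= 8 and fiber_g >= 5

print(meets_cereal_guideline(sugar_g=12, fiber_g=1))  # False: a typical sweetened cereal
print(meets_cereal_guideline(sugar_g=6, fiber_g=7))   # True: a bran-style cereal
```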
This task looks at how holistic societies and knowledge systems function. It focuses on the knowledge systems of the Ngunnawal people, who live around Canberra in south-eastern Australia. Two ways of working with this topic are suggested below:

1. Draw a circle on a page.
2. Write these categories around the outside of the circle, spacing them evenly around the edge: Seasons of the year
3. Read the articles below and make notes in the circle you have drawn.

This website is made by the Ngunnawal people. It introduces visitors to their culture and history. Read the "History" section of this website.

This booklet has been produced by the government for teachers to use in Australian classrooms when teaching about the Ngunnawal. The first pages of the booklet are relevant for working with the task here. Read pages 1-4.

As you read, stop each time you find a new piece of information, decide in which categories it fits, and draw a line between these categories. Make a note of which bit of information the line represents, either along the line or at the edge. For example, when you read that the Ngunnawal collect food from hunting, you draw a line between Food and Nature, and write the word "hunting" along that line.

4. When you have finished, discuss these questions:
a. How is holistic knowledge formed?
b. In what way does nature play an essential role in Ngunnawal life?
c. Think of other societies you know well. Are there holistic elements in the knowledge systems of these societies?
d. What are the advantages and disadvantages of holistic knowledge systems?

Alternative way of doing the task:
Nine pupils stand in a large circle, with each person representing one of the categories mentioned in part 2 of the task above. As the other pupils work with part 3 of the task, instead of drawing lines, they send a line of thread or a rope between the categories. When the pupils are finished, there should be lines criss-crossing the circle. Then the teacher can start cutting the lines, discussing what happens when they are broken. Categories can also be taken out (discuss how this might happen in real-life situations), and pupils can discuss how this will affect the lines in the circle as a whole. Pupils can then discuss the questions in part 4 of the task above. (When the circle is broken down, there will also be the opportunity to discuss how different government policies have worked for some indigenous societies, what sort of help is needed in holistic societies, and what challenges there are in giving the right sort of help. This could also be linked back to the differences between indigenous and western knowledge systems – see here.)

For more information on holistic systems within indigenous societies, look at http://firstpeoples.org/who-are-indigenous-peoples/how-our-societies-work
A group of engineers has proposed a novel approach to computing: computers made of billionth-of-a-meter-sized mechanical elements. Their idea combines the modern field of nanoscience with the mechanical engineering principles used to design the earliest computers. In a recent paper in the New Journal of Physics, the researchers, from the University of Wisconsin-Madison (UWM), describe how such a nanomechanical computer could be designed, built, and put to use.

Their work is a contemporary take on one of the very first computer designs: the "difference engine," a 15-ton, eight-foot-high mechanical calculator designed by English mathematician and engineer Charles Babbage beginning in 1822. UWM scientist Robert Blick, the paper's corresponding author, said that he was also inspired by the design of the Curta, a small hand-cranked mechanical calculator invented and sold in the 1950s.

The computer they envision could never be as fast as traditional semiconductor-based computers, where individual transistors can operate at 100 gigahertz (GHz). However, Blick told PhysOrg.com, "We designed the circuits in this nanomechanical computer with the idea in mind that, at the nanoscale, mechanical motion is quite fast – 100 megahertz to a few gigahertz. This should make them competitive with existing micro-processors, which are used in a variety of mundane applications." Among these applications are appliances, electronic toys, and automobiles, all of which contain basic computers in order to function but don't require ultra-fast processors.

The design's basic unit is the "nanomechanical single-electron transistor," or NEMSET, a tiny circuit component that combines a typical silicon transistor with a nanoscale mechanical switch – a tiny moving part. A full circuit composed of multiple NEMSETs could be created, the researchers say, using one step of photolithography and one step of etching, methods commonly used to create silicon-based circuits.

The nanomechanical computer has three main advantages over semiconductor-based computers. It is more resilient to electric shock, its circuits can operate at significantly higher temperatures (several hundred degrees Celsius), and it is much more energy efficient, dissipating a fraction of the energy of traditional computers. Additionally, the computer's memory structure may have an edge over standard memory. A nanomechanical form of memory need not be restricted to the "1" and "0" states that a typical computer uses to store a single bit (the most basic unit of information; the two values correspond to a memory cell that is either charged or uncharged). A nanomechanical system could have several stable states, allowing for more efficient data storage (illustrated below).

Citation: Robert H Blick, Hua Qin, Hyun-Seok Kim and Robert Marsland, "A nanomechanical computer—exploring new avenues of computing," New Journal of Physics 9 (2007) 241.
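The storage advantage of multi-state elements follows directly from information theory: an element with n distinguishable stable states holds log2(n) bits. A quick illustrative sketch (the function name is ours, not from the paper):

```python
import math

def bits_per_element(stable_states: int) -> float:
    """Information capacity of a memory element with n distinguishable stable states."""
    return math.log2(stable_states)

for n in (2, 3, 4, 8):
    print(f"{n} states -> {bits_per_element(n):.2f} bits per element")
# 2 states -> 1.00 (a conventional bit); 4 states -> 2.00; 8 states -> 3.00
```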
Jaundice is a condition which causes discolouration of the skin, mucous membranes, and whites of the eyes. It mainly occurs due to an increased amount of bilirubin in the blood. Jaundice is a sign of many underlying diseases; one often suffers from this condition in case of liver diseases like hepatitis or liver cancer.

Neonatal jaundice is yellowing of the skin and other tissues in a newborn baby. The bilirubin level is likely to rise in newborn infants because their liver is not mature enough to remove it from the blood. It is a common condition: 50-60% of babies suffer from it during the first few weeks after birth.

Signs of neonatal jaundice:
- High-pitched crying
- Poor feeding
- Change in muscle tone

Signs & symptoms: The various signs and symptoms of jaundice are:
- The bilirubin levels are likely to rise.
- A change in colour of the conjunctiva (the white part of the eyes) is one of the first visible symptoms of jaundice: the white part of the eye turns yellow.
- The skin too may look yellow.
- Pale or light-coloured stool.
- Dark yellow or brown urine.
- Itching of the skin.

However, if the jaundice is due to some underlying disease in progress, it may have additional symptoms like:
- Nausea & vomiting
- Abdominal pain
- Loss of appetite

Causes of jaundice:
- Pre-hepatic (before bile is made in the liver): jaundice is caused by a rapid increase in the breakdown and destruction of red blood cells.
- Hepatic (the problem takes place within the liver): here, the liver is unable to properly metabolize and excrete bilirubin.
- Post-hepatic (after bile has been made in the liver): here, the main cause is interruption of the normal drainage of conjugated bilirubin, in the form of bile, from the liver into the intestines.

Special diet to be taken during jaundice: Jaundice, if diagnosed at an early stage, can soon be cured with a proper, nutritious diet and the right kind of exercise. Hence, during this time, it is highly important to pay attention to your diet chart and food intake, so the doctor may suggest that you follow a 'jaundice diet' for the next few weeks. Dr Health gives you a brief on the 'jaundice diet', which may help you to recover fast.
- Intake of fluid, especially in the form of juices – orange, grape, sugar-cane, lemon, carrot, beet – is very important for patients.
- A full-fruit diet for the initial 3 to 5 days can aid a quick recovery: apples, pineapples, oranges, pears, papayas, mangoes, etc. However, avoid eating bananas.
- Your meal should include a bowl of raw vegetables (salad) and par-boiled vegetables like spinach, carrot, and fenugreek, accompanied by a glass of buttermilk.
- Barley water and coconut water, too, are good for your liver, and hence you can have them on a regular basis.
- Include a bowl of hot, strained soup at dinner with boiled and mashed potatoes.
- Try to include a glass of skimmed milk during the daytime, or have it before going to bed.

Foods to be avoided: fried, fatty and spicy food, meat, butter, drinks like tea and coffee, spices and condiments, pulses, and pickles.

For diagnosing the cause behind jaundice, the doctor may ask about your medical history. He may also ask you to get a whole-body check-up or physical examination, which may include blood tests, ultrasonography, MRI, CT scan, endoscopy, and liver biopsy. Usually, jaundice is treated according to the underlying cause of the condition. The patient may or may not be hospitalised.
What causes cold sores
Cold sores are a highly contagious common illness, caused by a strain of the herpes simplex virus called Type 1 (HSV-1). There are two types of herpes simplex virus:
- HSV-1 – the most common type, usually causing cold sores (oral herpes)
- HSV-2 – causes genital herpes

In most cases, the virus is passed on in early childhood, for example when a child is kissed by someone with an active cold sore. The cold sore virus goes through the skin and travels up the nerve, where it lies dormant until triggered.

Who gets cold sores?
- Around 80% of the UK population carry the herpes simplex virus, but for many people the virus lies dormant in the nerves and never develops into cold sores on the lips or mouth
- 1 in 5 people in the UK have frequently recurring cold sores
- Cold sores affect all age groups. The vast majority of individuals suffer their first attack between 10 and 19 years of age, but a cold sore can be triggered at any stage of your life

Cold sore triggers
Cold sores will not usually appear until after puberty, when any of a number of factors might contribute to an attack, such as:
- General stress – fatigue and tiredness
- Colds or other viruses that lower the body's immune system
- Emotional upset
- The onset of menstruation and changes in hormone levels
- Changes in weather – strong sunlight in the summer and cold winds in the winter

Did you know? The symptoms and stages of a cold sore attack can be different for everyone. You may show no symptoms at all, or you may have trivial symptoms such as a small spot that you do not realise is a cold sore. A cold sore outbreak may start with a tingling sensation around the mouth, chin, nose, or other areas of the face. For most people, cold sores disappear within a week to 10 days.

Reference: www.netdoctor.co.uk and www.nhs.uk
The first reconnaissance of all the major planets of the Solar System culminated in the Voyager 2 encounter with Neptune in August 1989. Neptune itself was revealed as a planet with gigantic active storms in its atmosphere, an off-center magnetic field, and a system of tenuous, lumpy rings. Whereas only two satellites were known prior to the encounter, Voyager discovered six more. Triton, the largest satellite, was revealed as a frozen, icy world with clouds and layers of haze, and with vertical plumes of particles reaching five miles into the thin atmosphere. This latest Space Science Series volume presents the current level of understanding of Neptune, its rings, and its satellites, derived from the data received from Voyager. The book's chapters are written by the world's leading authorities on various aspects of the Neptune system and are based on papers presented at an international conference held in January 1992. Covering details of Neptune's interior, atmosphere, rings, magnetic fields, and near-space environment—as well as the small satellites and the remarkable moon Triton—this volume is a unique resource for planetary scientists and astronomers requiring a comprehensive analysis of Neptune viewed in the context of our knowledge of the other giant planets. Until another spacecraft is sent to Neptune, Neptune and Triton will stand as the basic reference on the planet.
ethylene chloride (C2H4Cl2), also called ethylene dichloride or 1,2-dichloroethane, a colourless, toxic, volatile liquid having an odour resembling that of chloroform. It is denser than water, and it is practically insoluble in water. Ethylene chloride is produced by the reaction of ethylene and chlorine. The annual production of ethylene chloride exceeds that of all other organohalogen compounds and ranks behind only that of ethylene and propylene among all organic compounds. Almost all ethylene chloride is converted to vinyl chloride for the production of polyvinyl chloride, or PVC. The conversion of ethylene chloride to vinyl chloride is carried out at temperatures of about 500 °C (930 °F) in the presence of a catalyst.
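The two reactions described above can be written out explicitly. A sketch of the overall chemistry, with conditions simplified (the full industrial process also involves oxychlorination steps to recycle HCl, which this passage does not cover):

```latex
% Direct chlorination of ethylene to ethylene chloride (1,2-dichloroethane):
\mathrm{CH_2{=}CH_2 + Cl_2 \;\longrightarrow\; ClCH_2CH_2Cl}

% Thermal cracking at roughly 500 °C over a catalyst, giving vinyl chloride:
\mathrm{ClCH_2CH_2Cl \;\xrightarrow{\;\sim 500\,^\circ\mathrm{C}\;}\; CH_2{=}CHCl + HCl}
```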
New technologies take a while to reach maturity. 3D printing has only been of real interest for a few years now, but the rate of advancement is scary-fast. True, the printers you can buy today are mostly good for printing plastic desk flair, but the future of this technology is hot — about 4,000 °F "hot." NASA recently started experimenting with a type of 3D printing to produce a rocket engine injector, and it has worked in an early test. The procedure employed to produce this engine is known as selective laser melting. Whereas normal 3D printers melt and extrude plastics (usually ABS), selective laser melting uses a high-powered laser to melt and fuse metallic powders into the desired 3D structure. The injector is a critical component of rocket engines where the fuel is introduced and burned to produce thrust. NASA believes that this process could speed the development and production of rocket engines dramatically. The type of injector made with selective laser melting would have taken over a year to fabricate the old-fashioned way. With 3D printing, it took only four months and cost 70% less. There is still considerable additional testing to be done before a 3D-printed rocket gets strapped to any expensive equipment, let alone any craft carrying humans. However, the initial results are encouraging: the injector assembly is subjected to intense pressure and heat, but this one held up. Maybe one day some form of selective laser melting will be part of consumer 3D printers. For the time being, you'll have to settle for plastic models of rockets at home.
The Western United States, commonly referred to as the American West or simply "the West," traditionally refers to the region comprising the westernmost states of the United States. Because the U.S. expanded westward after its founding, the meaning of the West has evolved over time. Prior to about 1800, the crest of the Appalachian Mountains was seen as the western frontier. Since then, the frontier has moved further west, and the Mississippi River came to be referenced as the easternmost possible boundary of the West.

Besides being a purely geographical designation, "the West" also has anthropological connotations. While the region has its own internal diversity, there is arguably an overall shared history, culture (music, cuisine), mindset or world view, and closely interrelated dialects of English. As with any region of such large geographic extent and varied cultural histories, many subregions of the American West possess distinguishing and idiosyncratic qualities.

In its most extensive definition, the western U.S. is the largest region, covering more than half the land area of the United States. It is also the most geographically diverse, incorporating regions such as the Pacific Coast, the temperate rainforests of the Northwest, the Rocky Mountains, the Great Plains, most of the tall-grass prairie eastward to western Wisconsin and Illinois, the western Ozark Plateau, the western portions of the southern forests, the Gulf Coast, and all of the desert areas located in the United States (the Mojave, Sonoran, Great Basin, and Chihuahuan deserts). The states from the Rockies westward have something of a dual nature: semiarid steppes and arid deserts in the lowlands and plateaus, and mountains and coniferous forests in the uplands and coastal regions.

The region encompasses some of the Louisiana Purchase, most of the land ceded by Britain in 1818, some of the land acquired when the Republic of Texas joined the U.S., all of the land ceded by Britain in 1846, all of the land ceded by Mexico in 1848, and all of the Gadsden Purchase.

Arizona, New Mexico, Nevada, Colorado, and Utah are almost always considered part of the Southwest, and Texas and Oklahoma are sometimes included as well. Idaho, Montana, Oregon, Washington, and Wyoming can be considered part of the Northwest; with the addition of the Canadian province of British Columbia, they comprise the Pacific Northwest. There is also a region spanning both southwestern and northwestern states called the Mountain West: Arizona, New Mexico, Colorado, Utah, Nevada, Montana, Idaho, and Wyoming.

The West can be divided into the Pacific States — Alaska, California, Hawaii, Oregon, and Washington, with the term West Coast usually restricted to just California, Oregon, and Washington — and the Mountain States, always Arizona, Colorado, Idaho, Montana, Nevada, New Mexico, Utah, and Wyoming. Alaska and Hawaii, being detached from the other western states, have few similarities with them, but are usually also classified as part of the West. Western Texas, in the Chihuahuan Desert, is also traditionally considered part of the Western U.S., though from a climatological perspective the West might be said to begin just west of Austin, TX, where annual rainfall drops off significantly from what is typically experienced in the East, with a concurrent change in plant and animal species. Some western states are grouped into regions with eastern states.
Kansas, Nebraska, South Dakota, and North Dakota are often included in the Midwest, which also includes states like Iowa, Illinois, and Wisconsin. Arkansas, Louisiana, Oklahoma, and Texas are also considered part of the South. It is rare for any state east of the Mississippi River to be considered part of the modern West. Historically, however, the Northwest Territory was an important early territory of the U.S., comprising the modern states of Ohio, Indiana, Illinois, Michigan, and Wisconsin, as well as the northeastern part of Minnesota. Also, American sports leagues with a "Western" conference or division often have members east of the Mississippi, for reasons such as a shortage of true Western teams or a loose adherence to geographic regions. For example, the NBA and NHL each have a Western Conference with a member in Tennessee.

According to the 2000 Census, the West's population was:
- 68.5% White
- 12.1% of some other race
- 7.9% Asian
- 4.9% Black or African American
- 4.3% two or more races
- 1.8% American Indian and Alaska Native
- 0.5% Native Hawaiian and Pacific Islander
- 24.3% were Hispanic or Latino (of any race)

As defined by the United States Census Bureau, the Western region of the United States includes 13 states (with a total 2006 estimated population of 69,355,643) and is split into two smaller units, or divisions:
- The Mountain States: Montana, Wyoming, Colorado, New Mexico, Idaho, Utah, Arizona, and Nevada
- The Pacific States: Washington, Oregon, California, Alaska, and Hawaii

However, the United States Census Bureau uses only one definition of the West in its reporting system, which may not coincide with what is historically or culturally considered the West. For example, in the 2000 Census, the Census Bureau included Texas, the state with the second largest Hispanic population, in the South; included Oklahoma, the state with the second largest American Indian population, also in the South; and included the Dakotas, with their large populations of Plains Indians, in the Midwest. It should be noted, however, that the western half of Oklahoma and far West Texas are usually not culturally, geographically, or socioeconomically identified with the South.

Statistics from the 2000 United States Census, adjusted to include the second tier of states west of the Mississippi, show that, under that definition, the West would have a population of 91,457,662, including 1,611,447 Indians, or 1.8% of the total, and 22,377,288 Hispanics (the majority Mexican), or 24.5% of the total. Indians comprise 0.9% of all Americans, and Hispanics 12.5%. Asians, important from the very beginning of the history of the West, totaled 5,161,446, or 5.6%, with most living in the Far West. African-Americans totaled 5,929,968, or 6.5% — lower than the national proportion (12.8%). The highest concentrations (12%) of black residents in the West are found in Texas — which is also considered a Southern state — and in California.

The West is still one of the most sparsely settled areas in the United States, with 49.5 inhabitants per square mile (19/km²). Only Texas with 78.0 inhabitants/sq mi (30/km²), Washington with 86.0 inhabitants/sq mi (33/km²), and California with 213.4 inhabitants/sq mi (82/km²) exceed the national average of 77.98 inhabitants/sq mi (30/km²). The entire Western region has also been strongly influenced by European, Native, and Hispanic culture; it contains the largest number of minorities in the U.S.
and encompasses the only four American states where all racial groups, including Caucasians, are a minority (California, Hawaii, New Mexico, and Texas). While most studies of racial dynamics in America, such as those on the riots in Los Angeles, have been written about European and African Americans, in many cities in the West and California, European and African Americans together are less than half the population because of the preference for the region among Hispanics and Asians. African and European Americans, however, continue to wield a stronger political influence because of lower rates of citizenship and voting among Asians and Hispanics.

Because the tide of development had not yet reached most of the West when conservation became a national issue, agencies of the federal government own and manage vast areas of land. (The most important among these are the National Park Service and the Bureau of Land Management within the Interior Department, and the U.S. Forest Service within the Agriculture Department.) National parks are reserved for recreational activities such as fishing, camping, hiking, and boating, but other government lands also allow commercial activities like ranching, logging, and mining. In recent years, some local residents who earn their livelihoods on federal land have come into conflict with the land's managers, who are required to keep land use within environmentally acceptable limits.

The largest city in the region is Los Angeles, located on the West Coast. Other West Coast cities include San Diego, San Jose, San Francisco, San Bernardino, Sacramento, Seattle, and Portland. Prominent cities in the Mountain States include Denver, Colorado Springs, Phoenix, Tucson, Albuquerque, Las Vegas, Salt Lake City, and Cheyenne.

Along the Pacific Ocean coast lie the Coast Ranges, which, while not approaching the scale of the Rocky Mountains, are formidable nevertheless. They collect a large part of the airborne moisture moving in from the ocean. East of the Coast Ranges lie several cultivated fertile valleys, notably the San Joaquin Valley of California and the Willamette Valley of Oregon. Beyond the valleys lie the Sierra Nevada in the south and the Cascade Range in the north. Mount Whitney, at 14,505 feet (4,421 m) the tallest peak in the contiguous 48 states, is in the Sierra Nevada. The Cascades are volcanic: Mount Rainier, a volcano in Washington, is also over 14,000 feet (4,300 m), and Mount St. Helens, another Cascade volcano, erupted explosively in 1980. A major volcanic eruption at Mount Mazama around 4860 BCE formed Crater Lake.

These mountain ranges see heavy precipitation, capturing most of the moisture that remains after the Coast Ranges, and creating a rain shadow to the east that forms vast stretches of arid land. These dry areas encompass much of Nevada, Utah, and Arizona; the Mojave Desert, the Sonoran Desert, and other deserts are found here. Beyond the deserts lie the Rocky Mountains. In the north, they run almost immediately east of the Cascade Range, so that the desert region is only a few miles wide by the time one reaches the Canadian border. The Rockies are hundreds of miles wide and run uninterrupted from New Mexico to Alaska. The Rocky Mountain region is the highest overall area of the United States, with an average elevation above 4,000 feet. The tallest peaks of the Rockies, 54 of which are over 14,000 feet (approximately 4,250 meters), are found in central and western Colorado.
The West has several long rivers that empty into the Pacific Ocean, while the eastern rivers run into the Gulf of Mexico. The Mississippi River forms the easternmost possible boundary for the West today. The Missouri River, a tributary of the Mississippi, flows from its headwaters in the Rocky Mountains eastward across the Great Plains, a vast grassy plateau, before sloping gradually down to the forests and thence to the Mississippi. The Colorado River snakes through the Mountain States, at one point forming the Grand Canyon. The Colorado is a major source of water in the Southwest, and many dams, such as the Hoover Dam, form reservoirs along it. So much water is drawn from it for drinking throughout the West and for irrigation in California that in some years water from the Colorado no longer reaches the Gulf of California. The Columbia River, the largest river in volume flowing into the Pacific Ocean from North America, and its tributary, the Snake River, water the Pacific Northwest. The Platte runs through Nebraska and was known for being a mile (1.6 km) wide but only a half-inch (1 cm) deep. The Rio Grande forms the border between Texas and Mexico before turning due north and splitting New Mexico in half. According to the United States Coast Guard, "The Western Rivers System consists of the Mississippi, Ohio, Missouri, Illinois, Tennessee, Cumberland, Arkansas and White Rivers and their tributaries, and certain other rivers that flow towards the Gulf of Mexico."

Climate and agriculture
As a very broad generalization, the climate of the West can be described as semiarid overall; however, parts of the West get extremely high amounts of rain and/or snow, and still other parts are true desert, getting less than 10 inches of rain per year. The climate of the West is also quite unstable: areas that are normally wet can be very dry for years, and vice versa.

Seasonal temperatures vary greatly throughout the West. Low elevations on the West Coast have warm to very hot summers and get little to no snow. The Desert Southwest has very hot summers and mild winters, while the mountains in the Southwest generally receive large amounts of snow. The Inland Northwest has a continental climate of warm to hot summers and cold to bitterly cold winters. Annual rainfall is greater in the eastern portions, gradually tapering off until reaching the Pacific Coast, where it again increases. In fact, the greatest annual rainfall in the United States falls in the coastal regions of the Pacific Northwest. Drought is much more common in the West than in the rest of the United States. The driest place recorded in the U.S. is Death Valley, California. Violent thunderstorms occur east of the Rockies. Tornadoes occur every spring on the southern plains, with the most common and most destructive centered on Tornado Alley, which covers the eastern portions of the West (Texas to North Dakota) and all states in between and to the east.

Agriculture varies depending on rainfall, irrigation, soil, elevation, and temperature extremes. The arid regions generally support only livestock grazing, chiefly beef cattle. The wheat belt extends from Texas through the Dakotas, producing most of the wheat and soybeans in the U.S. and exporting more to the rest of the world. Irrigation in the Southwest allows the growing of great quantities of fruits, nuts, and vegetables as well as grain, hay, and flowers. Texas is a major cattle- and sheep-raising area, as well as the nation's largest producer of cotton.
Washington is famous for its apples, and Idaho for its potatoes. California and Arizona are major producers of citrus crops, although growing metropolitan sprawl is absorbing much of this land.

Local, state, and federal officials came to understand, after several surveys made during the latter part of the 19th century, that only action by the federal government could provide the water resources needed to support the development of the West. Starting in 1902, Congress passed a series of acts authorizing the establishment of the United States Bureau of Reclamation to oversee water development projects in seventeen western states. During the first half of the 20th century, dams and irrigation projects provided water for rapid agricultural growth throughout the West and brought prosperity to several states where agriculture had previously been only at subsistence level. Following World War II, the West's cities experienced an economic and population boom. The population growth, mostly in the Southwest states of New Mexico, Utah, Colorado, Arizona, and Nevada, has strained water and power resources, with water diverted from agricultural uses to major population centers such as Las Vegas and Los Angeles.

Plains make up most of the eastern half of the West, underlain by sedimentary rock from the Upper Paleozoic, Mesozoic, and Cenozoic eras. The Rocky Mountains expose igneous and metamorphic rock from both the Precambrian and the Phanerozoic eon. The Inter-mountain States and the Pacific Northwest have huge expanses of volcanic rock from the Cenozoic era. Salt flats and salt lakes reveal a time when great inland seas covered much of what is now the West. The Pacific states are the most geologically active areas in the United States. Earthquakes cause major damage every few years in California. While the Pacific states are the most volcanically active areas, extinct volcanoes and lava flows are found throughout most of the western half of the West.

History and culture
Facing both the Pacific Ocean and the Mexican border, the West has been shaped by a variety of ethnic groups. Hawaii is the only state in the union in which Asian Americans outnumber white American residents. Asians from many countries have settled in California and other coastal states in several waves of immigration since the 19th century, contributing to the Gold Rush, the building of the transcontinental railroad, agriculture, and, more recently, high technology.

The border states — California, Arizona, New Mexico, and Texas — all have large Hispanic populations, and their many Spanish place names attest to their history as former Spanish and Mexican territories. Other southwestern states such as Colorado, Utah, and Nevada have large Hispanic populations as well, with many place names likewise attesting to their history as former Mexican territories. Mexican-Americans also have a growing population in the northwestern states of Oregon and Washington, as well as in the southern state of Oklahoma.

The West also contains much of the Native American population of the U.S., particularly on the large reservations in the mountain and desert states. The largest concentrations of black Americans in the West can be found in Los Angeles, Oakland, Sacramento, San Francisco, Las Vegas, Denver, Colorado Springs, and parts of Arizona. Alaska — the northernmost state in the Union — is a vast land of few people, many of them native, and of great stretches of wilderness, protected in national parks and wildlife refuges.
Hawaii's location makes it a major gateway between the U.S. and Asia, as well as a center for tourism. In the Pacific Coast states, the wide areas filled with small towns, farms, and forests are supplemented by a few big port cities which have evolved into world centers for the media and technology industries. Now the second largest city in the nation, Los Angeles is best known as the home of the Hollywood film industry; the area around Los Angeles was also a major center for the aerospace industry by World War II, though Boeing, located in Washington state, would come to lead the aerospace industry. Fueled by the growth of Los Angeles — as well as the San Francisco Bay Area, including Silicon Valley, the center of America's high-tech industry — California has become the most populous of all the states. Oregon and Washington have also seen rapid growth with the rise of Boeing and Microsoft, along with agriculture and resource-based industries.

The desert and mountain states have relatively low population densities, and developed as ranching and mining areas which are only recently becoming urbanized. Most of them have highly individualistic cultures, and have worked to balance the interests of urban development, recreation, and the environment. Culturally distinctive points include the large Mormon population in the Mormon Corridor, including southeastern Idaho, Utah, northern Arizona, and Nevada; the extravagant casino resort towns of Las Vegas and Reno, Nevada; and, of course, the many Native American tribal reservations.

American Old West
Major settlement of the western territories by migrants from the states in the east developed rapidly in the 1840s, largely through the Oregon Trail and the California Gold Rush of 1849; California experienced such rapid growth in a few short months that it was admitted to statehood in 1850 without the normal transitory phase of becoming an official territory. One of the largest migrations in American history occurred in the 1840s as the Latter Day Saints left the Midwest for the safety of the West. Both Omaha, Nebraska, and St. Louis, Missouri, laid claim to the title "Gateway to the West" during this period: Omaha, home to the Union Pacific Railroad and the Mormon Trail, made its fortunes on outfitting settlers, while St. Louis built itself upon the vast fur trade in the West before its settlement.

The 1850s were marked by political controversies which were part of the national issues leading to the Civil War, though California had been established as a non-slave state in the Compromise of 1850; California played little role in the war itself due to its geographic distance from the major campaigns. In the aftermath of the Civil War, many former Confederate partisans migrated to California toward the end of the Reconstruction period.

The history of the American West in the late 19th and early 20th centuries has acquired a cultural mythos in the literature and cinema of the United States. The images of the cowboy, the homesteader, and westward expansion took real events and transmuted them into a myth of the West which has influenced American culture since at least the 1920s. Writers as diverse as Bret Harte and Zane Grey celebrated or derided cowboy culture, while artists such as Frederic Remington created western art as a method of recording the expansion into the West. American cinema, in particular, created the genre of the western movie, which in many cases uses the West as a metaphor for the virtue of self-reliance and an American ethos.
The contrast between the romanticism of culture about the West and the actuality of the history of westward expansion has been a theme of late 20th- and early 21st-century scholarship about the West. Cowboy culture has become embedded in the American experience as a common cultural touchstone, and modern forms as diverse as country-and-western music and the works of artist Georgia O'Keeffe have celebrated the supposed sense of isolation and independence of spirit inspired by the sparsely populated and relatively harsh climate of the region.

As a result of the various periods of rapid growth, many new residents were immigrants seeking to make a new start after previous histories of personal failure or of hostilities in their previous communities. Together with other migrants who harbored more commercial goals in the opening country, the area developed a strong ethos of self-determinism and individual freedom, as communities were created whose residents shared no prior connection or common set of ideals and allegiances. The open land of the region allowed residents to live at a much greater distance from neighbors than had been possible in eastern cities, and an ethic of tolerance for the different values and goals of other residents developed. California's state constitutions (in both 1849 and 1879) were largely drafted by groups which sought a strong emphasis on individual property rights and personal freedom, arguably at the expense of ideals tending toward civic community.

The 20th century
By 1890, the frontier was gone. The advent of the automobile enabled the average American to tour the West, and western businessmen promoted U.S. Route 66 as a means to bring tourism and industry to the region. In the 1950s, representatives from all the western states built the Cowboy Hall of Fame and Western Heritage Center to showcase western culture and greet travelers from the East. During the latter half of the 20th century, several transcontinental interstate highways crossed the West, bringing more trade and tourists from the East. In the news, reports spoke of oil boom towns in Texas and Oklahoma rivaling the old mining camps for their lawlessness, and of the Dust Bowl forcing children of the original homesteaders even further west. The movies replaced the dime novel as the chief entertainment source featuring western fiction. Although there has been segregation, along with accusations of racial profiling and police brutality toward minorities tied to issues such as illegal immigration and racial shifts in neighborhood demographics, sometimes leading to racially based riots, the West has a continuing reputation for being open-minded and one of the most racially progressive areas in the United States.

Major metropolitan areas
Other population centers
- The city of El Paso, Texas, although belonging to a state considered part of the Southern United States, is also considered part of the Western United States. If counted, it would rank #16.

The region's distance from the historical centers of power in the East, and the celebrated "frontier spirit" of its settlers, offer two clichés for explaining the region's independent, heterogeneous politics. Historically, the West was the first region to see widespread women's suffrage. California birthed both the property rights and conservation movements, and spawned such phenomena as the Taxpayer Revolt and the Berkeley Free Speech Movement. The region has also produced three presidents: Herbert Hoover, Richard Nixon, and Ronald Reagan.
The prevalence of libertarian political attitudes is widespread. For example, the majority of western states have legalized medicinal marijuana (all but Utah and Wyoming) and some forms of gambling (except Utah); Oregon and Washington have legalized physician-assisted suicide; and most rural counties in Nevada allow licensed brothels. There is less resistance to the legal recognition of same-sex unions: California, Hawaii, Nevada, Oregon, and Washington recognize them.

The West Coast leans toward the Democratic Party. San Francisco's two main political parties are the Green Party and the Democratic Party, and Seattle has historically been a center of radical left-wing politics. Both of the Democratic leaders of Congress are from the region: House Minority Leader Nancy Pelosi of California and Senate Majority Leader Harry Reid of Nevada. Interior areas are more Republican, with Alaska, Arizona, Idaho, Utah, and Wyoming being Republican strongholds, and Colorado, Montana, Nevada, and New Mexico being swing states. The state of Arizona has been won by the Republican presidential candidate in every election except one since 1948, while Idaho, Utah, and Wyoming have been won by the Republican presidential candidate in every election since 1964.

As the fastest-growing demographic group, Latinos are hotly contested by both parties. Immigration is an important political issue for this group. Backlash against illegal immigration led to the passage of California Proposition 187 in 1994, a ballot initiative which would have denied many public services to illegal immigrants. Association of this proposal with the California Republicans, especially incumbent governor Pete Wilson, drove many Hispanic voters to the Democrats.
About TEC mapping
The ionosphere is a significant source of error in satellite navigation systems such as GPS. In ordinary operation, the position of a GPS receiver is estimated by measuring the time delay between a radio signal transmitted from each satellite and the reception of that signal at the receiver. Assuming a constant speed of light, this time delay can be converted to a receiver-satellite distance, and by comparing the distances to multiple satellites a GPS receiver can determine its three-dimensional position. The ionosphere disrupts this approach: the GPS radio signal is slowed by the presence of free electrons, causing an additional time delay and hence an error in the distance to each satellite. The greater the total number of electrons (Total Electron Content, or TEC) on the signal path, the greater the time delay. The GPS system broadcasts on two frequencies, and since the ionosphere is a dispersive medium, the time delay on each signal, for a given TEC, depends on the frequency of that signal. This allows the TEC to be measured by examining the differential time delay between the two frequencies (a worked example is given at the end of this page).

Once the GPS receiver data has been processed into line-of-sight TEC measurements for all receiver-satellite pairs, this data needs to be combined into a regional map of TEC. To do this, an important simplifying assumption is made. If we consider the typical idealised vertical electron density profile of the ionosphere, we see that the vast majority of the electrons reside in the F2 layer. This allows us to assume that all electrons along the receiver-to-satellite path reside at a single altitude, without a significant loss of accuracy. This turns the mapping challenge from three dimensions into two, since we now only need to specify how the electron density of this model 'thin shell' ionosphere varies with latitude and longitude.

In addition to the GPS-derived TEC data, the real-time map is also fed information from the IRI-2007 ionospheric model, driven using critical frequency (foF2) measurements from the IPS regional ionosonde network. The GPS data is more accurate for specifying TEC than this model; however, GPS coverage is poor in some parts of the Australasian region under consideration (particularly over oceans), so the IRI model is used only where GPS TEC data is unavailable. We construct the TEC map using a set of Spherical Cap Harmonic (SCH) basis functions. These are related to spherical harmonics, but defined over a region rather than the entire globe. We use a Kalman filter to find the optimal set of coefficients given the GPS TEC and IRI model data. The GPS TEC data are obtained from a number of sources, including IPSnet, Geoscience Australia, LINZ, and SunPoz instruments, as well as some regional IGS sites. Data is obtained every 15 minutes. The resulting map can be used to correct for the effects of the ionosphere in real time for improved GPS positioning; the results can also be used for post-processing of data. If you are interested in this service, please contact IPS.
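To make the dual-frequency idea concrete: the ionospheric group delay on a carrier of frequency f is approximately 40.3·TEC/f² metres, so differencing the pseudoranges on the two frequencies isolates TEC, which can then be mapped to the vertical using the thin-shell assumption described above. A minimal Python sketch — the GPS L1/L2 frequencies are the standard values, but the shell height, function names, and example numbers are illustrative assumptions, and real processing must also handle inter-frequency biases and measurement noise:

```python
import math

F1 = 1575.42e6  # GPS L1 carrier frequency, Hz
F2 = 1227.60e6  # GPS L2 carrier frequency, Hz
K = 40.3        # ionospheric refraction constant, m^3/s^2

def slant_tec_tecu(p1_m: float, p2_m: float) -> float:
    """Slant TEC in TEC units (1 TECU = 1e16 el/m^2) from dual-frequency pseudoranges (m)."""
    tec = (p2_m - p1_m) * F1**2 * F2**2 / (K * (F1**2 - F2**2))
    return tec / 1e16

def vertical_tec_tecu(stec: float, elevation_deg: float,
                      shell_height_m: float = 450e3,
                      earth_radius_m: float = 6371e3) -> float:
    """Map slant TEC to vertical TEC with the single-layer ('thin shell') model."""
    z = math.radians(90.0 - elevation_deg)                        # zenith angle at the receiver
    sin_zp = earth_radius_m / (earth_radius_m + shell_height_m) * math.sin(z)
    return stec * math.sqrt(1.0 - sin_zp**2)                      # cos(zenith angle at the shell)

# Example: a 5 m L2-minus-L1 range difference observed at 40 degrees elevation
stec = slant_tec_tecu(20_000_000.0, 20_000_005.0)
print(f"slant TEC    = {stec:.1f} TECU")   # ~47.6 TECU
print(f"vertical TEC = {vertical_tec_tecu(stec, 40.0):.1f} TECU")
```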
Background on Individual Education

What are individual education strategies?
- Strategies that work to reduce tobacco use initiation (smoking and smokeless tobacco products), increase cessation of tobacco use, and reduce exposure to environmental tobacco smoke by altering attitudes and beliefs about tobacco use as well as enhancing individual knowledge and skills.
- Your intervention may focus specifically on helping people to do one or more of the following: avoid the temptation to start using tobacco by discussing the harmful effects; quit using tobacco by providing information, motivation, skill-building opportunities, and support, addressing barriers to quitting or helping people develop strategies to overcome barriers; prevent the urge to go back to using tobacco once they have quit (relapse prevention); or keep away from specific environments in order to minimize exposure to environmental tobacco smoke.

How can I use individual education strategies in tobacco interventions?
- The specific strategies used to provide information differ based on the focus of the intervention. For example, information may be provided through individual counseling sessions or self-help materials such as newsletters, brochures, posters, fact sheets, videos, or websites. Other strategies may provide cues to action (e.g., a calendar prompting the individual to enter the number of days without using tobacco) rather than education to specifically increase knowledge about tobacco use. These strategies are designed to provide information to individuals (e.g., one-way communication such as through a brochure).
- The content of the message may draw on a wide variety of materials, including: information (e.g., short- and long-term benefits of avoiding or quitting tobacco use, the relationship of tobacco use to health and quality of life), recommendations (e.g., how to stop using tobacco, how to deal with barriers to quitting), resources (e.g., health education classes, support groups), or skill-building exercises (e.g., how to respond to stress without using tobacco).

How do "tailored messages" and "targeted messages" differ? How can I use these messages in tobacco interventions?
- Individual education interventions may work best when information is matched to the individual. "Tailored messages" take into account specific individual characteristics in creating a tobacco use message designed for that individual. Materials or strategies may be developed specifically to match an individual's characteristics in terms of readiness to change, attitudes, beliefs, current tobacco use behaviors, and other lifestyle characteristics. The concept of readiness to change (drawn from the Transtheoretical Model, or Stages of Change) suggests that individuals may need different kinds of interventions to help them address tobacco use depending on how ready they are to change their behaviors. For example, some tobacco users may not have even considered quitting, while others may have thought about quitting but don't know how to begin. Still others may have made several quit attempts but have not yet been able to stop using tobacco.
- Tailored health education materials are developed based on characteristics that are unique to each individual; therefore, an individual assessment (e.g., survey, interview) is required in order to collect information specific to the individual.
- Alternatively, other materials or strategies may be geared toward a specific subgroup of the population of interest (e.g., pregnant women).
These are often called “targeted messages” because they consider the specific needs of this subpopulation. In a similar manner, these strategies can be used to influence groups of people (e.g., encourage a targeted population to cease smoking for the health of their fetus), but the messages are not specific to each individual.

What is an example of a tailored message?
- A recommendation to quit smoking may take into account the following information about the individual: Rachel, a twenty-year-old computer programmer, smokes two packs of cigarettes a day and has been thinking about quitting over the last two weeks. Her reasons for and against quitting smoking have to do with her physical appearance. She believes that quitting will decrease the yellowing of her nails and fingertips, and might help decrease her chances of getting wrinkles. That said, she is also really worried about gaining weight. While Rachel knows that in the long run smoking may impair her health, she does not really see that as a significant issue to worry about at the moment.
- A message can be designed for Rachel as follows: “Congratulations Rachel! Thinking about quitting is the first step to becoming a non-smoker. Quitting smoking can have a number of immediate positive effects. For example, the yellow in your fingernails and fingertips will go away within a few weeks. The smell of tobacco on your clothes and furniture will also start to disappear. As you know, smoking can increase your facial wrinkles and quitting now can really help. You can start by cutting down today. Keep reading to find out some specific tips you might find helpful in allowing you to cut down without gaining weight by watching what you eat and increasing your physical activity.”

What is self-management and how can I use it in tobacco interventions?
- Another common individual education approach is to build the skills to change behavior through self-management. Self-management takes individuals through a process of identifying an issue (smoking), assessing their routine through self-monitoring (keeping track of when they smoke the most, when they have the greatest cravings, etc.), making sense of their routine (what is happening that increases their cravings), identifying and setting a goal, contracting a change, developing an action plan to achieve the goal (including how to overcome barriers), developing specific skills to overcome these barriers and achieve the goal (e.g., developing alternative coping skills in general and specifically around what to do when experiencing tobacco cravings) and rewarding changes as they are made.

What are “stages of change” and why are they important in tobacco interventions?
- Previous work has shown that individuals go through different stages of readiness to change behavior and that different types of interventions can help individuals move from one stage to the next. For example, if people are not really thinking about quitting (pre-contemplation), the informational strategies described above may be particularly helpful in getting them to begin to think about quitting (contemplation). However, as individuals begin to get ready to change (preparation), providing self-management techniques and skill building may help them begin to take steps toward quitting (action).

What is skill building and how can it be used in tobacco interventions?
- Skill-building strategies can be tailored to individuals or targeted to the population of interest.
Both tailored and targeted strategies can be delivered once or at regular intervals (e.g., weekly, monthly or quarterly) and can appear in the form of print, telephone, audio, video or computer kiosk messages. They may be conducted on their own or in combination with other intervention activities (e.g., enhancing support for non-smoking or creating policies to reduce exposure to environmental tobacco smoke).

What else do I need to consider for tobacco interventions?
- Some studies describe advantages of an interactive, web-based tailored intervention over a more traditional print version, including: the ability to receive immediate feedback, an interactive nature similar to interpersonal counseling, and the ability to use graphics and other features to increase interest and attention. Furthermore, once on the web, the tailored intervention can reach a relatively larger group of people, making it more cost-effective. In addition, it can be updated continuously to include the most recent knowledge.
This is a great hands-on science activity that can promote inquiry and understanding of the scientific process. Students can explore different ecosystems (forest, meadow, beach, mountain) and gather data and evidence at each site in order to compare environmental factors such as plant life, animal life, humidity, wind speed, temperature and amount of light. It would be best if the ecosystems are close by, for instance a forest and a meadow, in order to have the data collected at the same time and under similar conditions. Have students measure and record the conditions in different ecosystems. They can choose to use instruments such as thermometers, pH meters or litmus paper, hygrometers (to measure humidity), compasses and cameras. If students have access to smartphones or tablets they can use many available apps for altitude, barometric pressure, pH and other measurements. Google Earth will also allow them to pinpoint the location of the ecosystem using latitude and longitude, and there are weather apps and resources that they can also use. Students can record their findings and display them in graphs, spreadsheets or an annotation program, such as Explain Everything, to share their results with others. I have curated a collection of Science and Math measurement apps, many of which can support these activities. These activities provide opportunities for students to make curriculum connections to the processes of Science, using scientific instruments, measurement, graphing, understanding ecosystems and the integration of Numeracy into Science. Use inquiry questions to support and guide student learning. These questions need to be both open-ended and ones that require investigation, experimentation and collaboration to answer. Smarter Science has created a question matrix to help frame inquiry questions. Some examples are:
• What evidence do you have?
• What did you expect to find and why?
• How was it different than other ecosystems?
• What patterns, similarities and differences did you notice?
• How can you explain these patterns, similarities and differences?
• Why did you choose these instruments to work with?
This type of activity helps students develop their communication skills and the ability to share their observations and results with others. This lesson is one of 5 included in my iBook - 5 Inquiry Activities. The iBook can be downloaded for free from iTunes.
Amaranth C. Borsuk

Inquires into basic elements of creative writing that occur in multiple genres and media. Studies and practices writing in a workshop atmosphere.

Beginning writers are often encouraged to find their own voice: to discover what it is that defines their unique style and perspective and to write from that place. This injunction is founded on the assumption that you know who you are and what it is you want to say. But does creative writing always have to be about the self? What about riddles and word games, personae, unreliable narrators and fictional worlds? In this introductory creative writing course, we will explore poems, stories, and dialogues, taking the conundrum of voice as an entry into the discovery of your own creative process. We will examine issues including the relationship between self and society, the dynamic between the author and the persona he or she adopts, the reliability of the subjective narrator, and how nuances of language, imagery, metaphor, tone, setting, subject matter, character development, and humor help writers create a specific voice. By learning to recognize and analyze these techniques in others’ work, you will begin to develop a "voice" of your own.

Student learning goals
- To develop daily habits of writing and reading
- To cultivate an engagement with language and a sense of play
- To learn techniques for close reading texts that can be applied to both published work and peer review
- To learn to review classmates’ work with a generous and constructive spirit
- To explore literary techniques for creating a speaker, whether self or non-self, across genres

General method of instruction
We will meet twice a week for workshops in which we will discuss the readings, critique student work, and try in-class exercises. Students will write in established forms and create some of their own, finding ways to play with the voices that permeate American literature and culture, from Amazon reviews to tweets. No previous writing experience is necessary, but students taking this class should be prepared to devote time and attention to the reading, participate actively in class, and complete assignments on time.

Class assignments and grading
Students will write weekly short responses to the readings, keep a writing journal, and complete a series of creative experiments to be workshopped in small groups in class. As a final assignment, students will turn in a portfolio of revisions along with their quarter-long journal.
Looking for Spring Activities? Explore Life Sciences/Earth Sciences with your class to discover more about rabbits through research and hands-on activities. Connects rabbits to other living things, like people, mammals, and plants.

This complete Science unit covers these curriculum topics:
• Basic needs of living things: air, water, food, shelter
• Life cycle and food chain
• Physical and behavioral adaptations
• Treating living things with care and respect

Vocabulary covered in this unit - mammal, kit, predator, prey, colony, burrow, camouflage, living thing, herbivore, adaptation

Please see the preview to see what's included.

Covers Ontario Science Curriculum: Understanding Life Systems
Grade 1 - Needs and Characteristics of Living Things
Grade 2 - Growth and Changes in Animals

Looking for more Science Units?
Maple Sugar Bush
All about Apples Unit
All about Pumpkins Unit
Simple Machines Bundle
Rube Goldberg Machines

You can save money on your next Teachers pay Teachers purchase by leaving a comment below and earning TpT credit! Your feedback is greatly appreciated.

Perfectly Imperfect in Second Grade
Squamate Fun Facts

- Squamates are a diverse group of legged and legless lizards, including snakes. There are nearly 8,000 squamate species.
- Squamates vary drastically in size and weight: the smallest living squamate, the Virgin Islands Dwarf Sphaerodactylus, is about 1 inch long and weighs less than one-tenth of an ounce, while the largest living squamate, the Komodo Dragon, has been known to reach about 10 feet long and weigh over 350 pounds.
- The longest-known squamate, a fossil called Mosasaurus, was about 56 feet and probably weighed about 18 tons.
- At almost 30 feet long, the fossil of the giant Madtsoia indicates it was big enough to have eaten a horse.
- Squamate fossils have been found on every continent, including Antarctica.
- Chameleons have tongues longer than their bodies that can shoot out at an insect at speeds of up to 16 feet per second; their turreted eyes can look in two different directions at once.
- Many geckos have a clear lower eyelid that is fused closed. They use their tongue to "wash" this "window."
- Many squamates have a third "eye." This hole in the skull between the eyes doesn't form images, but allows light to reach an organ in the brain, probably helping the squamates respond to seasonal changes in length of day.
- The Emerald and Amazonian Tree Boas from South America have 3-D infrared vision to better see their prey while hunting at night.
- Many squamates have five toes on each hand and foot, while some have fewer, or none at all.
- Because the Banded Gecko's ears are positioned below its skull, you can shine a light on one side of its head and see it on the other.
- Snakes don't have real ears; instead, they pick up vibrations with their lower jaw to "hear."
- Humans and giraffes have seven neck vertebrae, while many squamates have eight. Some fossil lizards have as many as nineteen, including the fossil "relatives" of Platecarpus featured in Lizards and Snakes: Alive!
- The Common Leaf-tail Gecko has over 300 teeth, more than any other amniote, a group made up of reptiles and mammals.
- Gabon Vipers have the longest fangs of any living snake: they can be nearly two inches long.

Diet and Defense
- To scare potential predators, the Western Hooknose Snake draws air into a vent at the base of its tail and blows it out, causing a loud "pop."
- Reticulated Pythons can eat a human.
- Red Spitting Cobras can spit venom into a person's eyes from as far as six feet away.
- The Gila Monster and the Beaded Lizard are the only two known venomous lizards; their relatives possessed venom by 80 million years ago.
- Some squamates--like Platecarpus and the Campbell's Milksnake--have rows of long, sharp teeth on the roofs of their mouths to help swallow prey and keep it from escaping.
- Venomous snakes do not always inject venom when they bite. These so-called "dry bites" are actually quite common.
- Some lizards have mildly toxic green blood that deters predators with its bad taste.
- Basilisks can run on water with the help of fringes that increase the surface area of their toes. By churning their legs to create pockets of air in the surface of the water they can seemingly defy gravity. To manage a feat like this, a 175-pound human would have to maintain a speed of 65 miles per hour.
- Some squamates can "fly." The Paradise Tree Snake flattens its body to create a more aerodynamic shape and is able to change direction in mid-flight.
- Some squamates can stick to glass, ceilings, and other smooth surfaces.
The toes of geckos, anoles, and some skinks have toe pads with microscopic filaments that are so tiny they are able to form weak bonds with the molecules of these smooth surfaces.

More About This Resource...
This online collection of fun facts was created to support the Museum's Lizards & Snakes: Alive! exhibit. It includes 25 facts about squamates, divided among these four categories:
- Squamate Anatomy
- Diet and Defense

Less than 1 period

Supplement a study of biology with an activity drawn from this online Lizards & Snakes: Alive! resource.
- Send students to the Squamate Fun Facts page or print copies of it for them to read.
- Working in small groups, have students use the Web and library resources to learn more about squamates.
- Each group should create a list of five additional fun facts. Have the groups display their fun facts in a poster.
- Assemble a classroom gallery of squamate fun facts.
About the Hebrew Language

Hebrew is a Semitic language spoken by the majority of the 7 million people in Israel. Ancient (or Classical) Hebrew flourished as a spoken language from sometime before the 10th Century BC. It faded as a spoken language around the 3rd or 4th Century BC, replaced by Aramaic, but it remained as a lingua franca among scholars and was used by the Jewish community around the world. It continued as a written form for contracts, laws, commerce, and poetry. Near the end of the 19th Century, it was revived in its present form as Modern Hebrew and replaced a score of languages spoken by Jews at this time. It was declared an official language in British-ruled Palestine in 1921, along with English and Arabic. In 1948 it became an official language of the newly-declared state of Israel. Modern Hebrew is written from right to left, using the Hebrew alphabet.
Explanation: Grand spiral galaxies often seem to get all the glory. Their young, blue star clusters and pink star forming regions along sweeping spiral arms are guaranteed to attract attention. But small irregular galaxies form stars too, like NGC 4449, about 12 million light-years distant. Less than 20,000 light-years across, the small island universe is similar in size to, and often compared with, our Milky Way's satellite galaxy, the Large Magellanic Cloud (LMC). This remarkable Hubble Space Telescope close-up of the well-studied galaxy was reprocessed to highlight the telltale reddish glow of hydrogen gas. The glow traces NGC 4449's widespread star forming regions, some even larger than those in the LMC, with enormous interstellar arcs and bubbles blown by short-lived, massive stars. NGC 4449 is a member of a group of galaxies found in the constellation Canes Venatici. Interactions with the nearby galaxies are thought to have influenced star formation in NGC 4449. Visit the NASA/JPL website to view more Astronomy Pictures of the Day.
Teenage Aggression: Causes and Prevention Strategies

According to a study by Solis and Vidal (2006), which was published in the Hermilio Valdizan Psychiatry and Mental Health Journal, adolescence is a challenging stage of life that can trigger a lot of stress, whether from school issues, uncertainty about the future, or social pressure, among other issues. Consequently, teenage aggression is something that many parents and educators have to deal with every day. There are many ways to be aggressive, and many are associated with each individual's coping mechanisms. In this article, we'll talk about the definition of aggression, focusing on adolescence as the critical period during which individuals develop their personality and acquire certain habits.

Osorio (2013) described aggression as one of the tactics of social competition, one of the normal skills in the human behavioral repertoire that are directed towards gaining the upper hand in conflict situations (in other words, wins/losses, victories/defeats). Where do you draw the line between aggression and violence? Osorio argues that physical harm is what distinguishes the two. Behaving in an aggressive way doesn't have to involve physical harm.

A study by Mestre et al. (2012) analyzed the relationship between adolescent coping mechanisms and emotions to determine how these processes relate to aggression. The results of the study showed that there were clear differences between the subjects with high and low levels of aggression and the coping mechanisms that they used. Specifically, they saw how more aggressive teens tended to rely more on unproductive coping skills. The less aggressive teens tended to use problem-solving strategies to deal with their emotions.

Coping mechanisms and teenage aggression

Thus, teenage aggression and aggression during other stages of life are associated with an individual's coping styles. Coping mechanisms are the strategies that you use when dealing with problems and adversity. A more aggressive individual is more likely to use maladaptive strategies. Frydenberg, a specialist in coping mechanisms, focused on adolescence to develop her theory, which includes 18 different coping strategies. These are grouped into three coping styles:
- Solving the problem. This strategy includes focusing on solving the problem.
- Non-productive coping. Worrying, imagining different outcomes, and ignoring the problem.
- Reference to others. This includes seeking help in a social support network.

Frydenberg (1997) argues that many of the risky behaviors that adolescents engage in, such as drug use, sexual promiscuity, violence, and aggression, are a result of their inability to deal with certain challenges or problems. Coping style and teenage aggression are closely related.

Preventing teenage aggression

One important tool in preventing this kind of behavior is early emotional education. A democratic, positive, and respectful parenting style is also very beneficial. A study by Del Barrio et al. (2009) attempted to lay the foundation for preventing aggression in children and teens through the information we have about the connection between child-rearing habits and aggression. The study found that certain aspects of parenting are especially important for preventing aggressiveness in teens. They are:
- Reducing maternal hostility.
- Moderating children's behavior.
- Increasing emotional communication.

As you can see, working on communication is essential to prevent teenage aggression. The same goes for younger children.
Fostering assertive and open communication styles can prevent aggressiveness. Remember that, in communication, aggression is one extreme, just like passivity. Being in the center is the very best option, and that's where assertiveness resides.

Alternatives to aggression

Aggression is very tempting for teens. The root causes are varied and include personality, an inability to appropriately deal with problems (a coping mechanism deficiency), family problems, conflict with peers, low frustration tolerance, an underdeveloped prefrontal cortex, etc. The intensity and the way that aggression manifests itself can also vary. It can manifest as physical violence, verbal expression, aggressive communication, etc. As always, the best tool to fight aggression is prevention. Accompanying teens at school, at home, or where they hang out and providing tools for them to work on their self-control and emotional regulation are some of the best strategies to avoid this behavior. In conclusion, the most important thing is to offer them alternative coping strategies. For example, dialogue, reflection, breathing, and putting things into perspective. There are many ways for teens to redirect their emotions and avoid turning to aggression.
There are many different types of lines. Lines are: straight, curving, angular, soft, outline, implied, etc. Lines define, enclose, connect, and/or dissect! Lines are lighter and more fluid than other design elements, such as forms and shapes. And yet lines add their unique energy to design. Technically, lines are 1-dimensional, meaning they only have a length (not width or depth), but those of us in the arts know better!

Line Movement

Lines, like shapes and forms, contribute to a design's feeling and energy level on an unconscious level. As we discussed with the rectangle (see Working with Expressive Shapes and Forms), lines are also affected by their direction or major flow. Line directions and energy are:
- Horizontal lines imply quiet and repose and have lower energy.
- Vertical lines imply strength and stability, having medium energy.
- Diagonal lines suggest motion and action, creating higher energy.

Which line(s) express? Image courtesy of Basic Design

Let's look at the word nervous. Nervous could apply to either the top line or the line that is sixth from the top. Both lines share an organic line quality, a strong diagonal flow, and multiple peaks and valleys.

Line quality is affected by the type of material used to create it. Just like shapes and forms, lines can feel either man-made or organic.
- Hard-edged lines have distinct boundaries and imply strength.
- An outline is a line that is of even thickness throughout. It is flat and tends to segment the shapes.

Example of hard-edged lines. Example of outline.

- Soft-edged lines are softly drawn and can feel more organic by nature. They imply gentleness.
- A contour line creates the illusion of being 3-dimensional: thinner where the light hits the object, thicker where the line is in shadow.

Example of soft-edged lines. Example of contour lines.

Line Quality Speaks Volumes

Now let's take a look at a piece by Saul Steinberg. Based on what you understand about lines, what do you think is happening in the illustration below? Look at the line quality and energy level while assessing.

Illustration by Saul Steinberg.

It feels like the adult is talking over the little girl's story. He seems authoritative and possibly angry, while the girl appears to tell a gentle story about her day or dreams. She talks about animals, home, and kids, while the line energy is soft and organic. The line quality of the adult is thick, implying loud. It is also organic, but the energy level is high due to the multiple diagonals.

Implied Line & Psychic Line (which are not actual lines!)

Implied lines are not actual lines! They can be formed by dashes, dots, etc. Because they are placed near one another, our mind groups them and reads them like lines. Implied lines lead the viewer's eyes into and around the picture plane.

Psychic lines are not "real lines" either, yet we feel a mental connection between two or more objects. For example, when we look at an arrow pointing to an object, first we look at the arrow and then the object, as if they are connected with a line.
Line Examples: top, actual line; middle, implied lines; bottom, psychic line.

Let's look at psychic lines in action. What is happening in The Conjurer by a follower of Hieronymus Bosch? Look at the directions of the eyes of all of the people in this piece. Next, what is happening to the purse of the man participating in the magic show (in the Medieval era, men carried sacks tied to their belts, which were called purses)?

The Conjurer, by a follower of Hieronymus Bosch

The conjurer is watching his victim. Our victim is watching the ball in the hand of the conjurer. In the meantime, the victim is being pick-pocketed! The pick-pocket looks innocently into the air, but at least a few of his accomplices keep an eye on the victim! In this humorous example of psychic lines, the eyes of the figures act as "lines." Note: whether the eyes belong to a human or an animal, we unconsciously follow the direction they point toward.

Design Basics, Seventh Edition, by David Lauer
Point and Line to Plane, by Wassily Kandinsky
Difference between Bevel Gear and Worm Gear

BEVEL GEAR
- Bevel gears are useful when the direction of a shaft's rotation needs to be changed.
- They are usually mounted on shafts that are 90 degrees apart, but can be designed to work at other angles as well.
- The teeth on bevel gears can be straight, spiral or hypoid.
- Bevel gears with equal numbers of teeth and shaft axes at 90 degrees are called miter gears.
- Applications: locomotives, marine applications, automobiles, printing presses, cooling towers, power plants, steel plants, railway track inspection machines, etc.

WORM AND WORM GEAR
- Worm gears are used when large gear reductions are needed. It is common for worm gears to have reductions of 20:1, and even up to 300:1 or greater.
- Worm gears are used widely in material handling and transportation machinery, machine tools, automobiles, etc.
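As a rough illustration of why worm drives achieve such large reductions (a minimal sketch under the usual single-enveloping assumption; the tooth counts below are arbitrary examples, not values from the text above): one turn of the worm advances the wheel by one tooth per thread start, so the ratio is simply wheel teeth divided by worm starts.

    def worm_reduction(wheel_teeth, worm_starts):
        # One revolution of the worm advances the wheel by one tooth per
        # thread start, so the reduction ratio is teeth / starts.
        return wheel_teeth / worm_starts

    def output_rpm(input_rpm, wheel_teeth, worm_starts):
        # Output shaft speed for a given input (motor) speed.
        return input_rpm / worm_reduction(wheel_teeth, worm_starts)

    print(worm_reduction(40, 1))    # -> 40.0, i.e. a 40:1 reduction
    print(output_rpm(1450, 40, 1))  # -> 36.25 rpm from a 1450 rpm motor

A 300:1 ratio from a single gear pair, as mentioned above, would simply be a 300-tooth wheel on a single-start worm.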
Winter Sniffles: Cold? Or flu? Influenza (flu) and the common cold are both contagious respiratory illnesses, but they are caused by different viruses. Flu is caused by influenza viruses only, whereas the common cold can be caused by a number of different viruses, including rhinoviruses, parainfluenza, and seasonal coronaviruses. Seasonal coronaviruses should not be confused with SARS-CoV-2, the virus that causes COVID-19. Because flu and the common cold have similar symptoms, it can be difficult to tell the difference between them based on symptoms alone. In general, flu is worse than the common cold, and symptoms are typically more intense and begin more abruptly. Colds are usually milder than flu. People with colds are more likely to have a runny or stuffy nose than people who have flu. Colds generally do not result in serious health problems, such as pneumonia, bacterial infections, or hospitalizations. What are the symptoms of flu versus the symptoms of a cold? - The symptoms of flu can include fever or feeling feverish/chills, cough, sore throat, runny or stuffy nose, muscle or body aches, headaches, and fatigue (tiredness). - Cold symptoms are usually milder than the symptoms of flu. People with colds are more likely to have a runny or stuffy nose. Colds generally do not result in serious health problems. Protect yourself by getting a yearly flu vaccine. Talk with your healthcare professional to know which vaccines are right for you to reduce the spread of germs and illness.
Human Impact on Climate Change PowerPoint

After learning about several ways in which our everyday actions impact climate change, choose one action to conduct more research on and create a PowerPoint presentation to tell us more! Use the unit material and reliable online resources to gather more information. There are several ideas with information throughout the unit, but there is even more information out in the world! Think about what you do every day, and how the activity uses energy or natural resources. Think about a product you buy—how it was made, what natural resources were used to make it. You can also do a quick Google search of "everyday activities that affect the environment" and start reading some articles for more ideas. Remember to use reliable sources from the Internet. There is a lot of misinformation out there, and finding reliable information can be difficult. The best sources of reference material for your presentation are scientific journals found in the CSU Online Library databases. Click here for a biology research tutorial that demonstrates how to locate library resources relating to biology. You can also find reliable statistics at organization websites listed in the unit under "Combat Climate Change."

Your presentation must include:
- What everyday activity or product have you chosen to present? Why did you choose this activity or product? Why is it important?
- Connect the activity/product to its impact on the environment and climate change. How does doing the activity or making the product use natural resources, disrupt habitat, impact wildlife, or otherwise affect the environment?
- Report data and statistics, with references, on how this activity/product affects the environment.
- What can people do to decrease the activity/product's impact on the environment?

Be sure to follow the formatting and guidelines provided below:
- Include at least three visual aids.
- Include three reliable references, and at least one source must come from the CSU Online Library.
- Use bulleted information on slides (five lines or fewer).
- Include details in the speaker notes (more information that you would say during an actual presentation).
- Include a separate title slide and separate reference slide.
- Use an appropriate font and background.
- Include at least 11 slides, but not more than 15 slides (not counting your title slide and reference slide).
- Use correct APA format for references and citations, and use correct grammar and spelling.
- Upload the presentation as a .ppt or .pptx file.

The following resource(s) may help you with this assignment.
In this article, we will be discussing the Python Quicksort Algorithm in complete detail. We will start with its explanation, followed by a complete solution which is then explained by breaking it down into steps and explaining each of them separately. At the end we have also included a small comparison of the Python Quicksort with other similar algorithms. As a bonus, we also have a video explanation of the Python Quicksort Algorithm, which is included at the very end of the article.

As the name implies, it's a way of quickly sorting a list of values. Typically done with a recursive solution, Quicksort uses the concept of a pivot around which the values are sorted. The pivot is any arbitrary value picked out from the list of values (usually the first value). We then divide the list of values around the pivot into two smaller lists, and then call the Quicksort function on each list. The break case is when the length of the list reaches one.

Quicksort can be a little unstable (worst case scenario), but this can be mostly avoided using Randomized Quicksort, which minimizes the chance of encountering the worst case. The idea behind Randomized Quicksort is to pick random pivots, which can reduce the chance of picking a pivot which is very far from the median value. Another common technique along the same lines is to shuffle the array first. This helps if the array is already sorted, or partially sorted. The ideal pivot is always the median value, as it results in the creation of two equal halves.

(Average) Big O notation: n * log(n)
Worst Case: n² (when values are already sorted)

QuickSort Algorithm Solution

Below is our solution to the Python Quicksort Algorithm. There are many different variants that you'll find online (same concept though), but I like this one a lot as it translates perfectly into other languages. If you do it this way, you can do the same thing in any other programming language. (Sometimes the solutions utilize a language-specific technique or function.) If you are new to the Quicksort Algorithm, pay close attention to the explanation that follows.

    def Qsort(array, low, high):
        if low < high:
            pivot = low
            i = low
            j = high
            while i < j:  # Main While Loop
                while array[i] <= array[pivot] and i < high:  # While Loop "i"
                    i += 1
                while array[j] > array[pivot]:  # While Loop "j"
                    j -= 1
                if i < j:
                    array[i], array[j] = array[j], array[i]
            # Place the pivot between the two partitions, then recurse
            array[pivot], array[j] = array[j], array[pivot]
            Qsort(array, low, j - 1)
            Qsort(array, j + 1, high)
            return array
        else:
            return array

QuickSort Algorithm – Worked Example

The following key is (generally) observed in the images below:
Red for the pivot and related numbers
Blue for numbers to be swapped
Green for numbers that have been swapped

In the images below, we are using some random data of 8 numbers, and assuming the pivot to be the first number. In the first image, we can clearly see that 4 has been selected as the pivot. The value of i is equal to low (the lower index of the array), and the value of j is equal to high (the upper index of the array).

The first thing that's going to happen is that the "i" while loop will run, until it finds a number greater than the pivot, or it reaches the end of the array. In this case, it stops at index 1, a.k.a number 8. The value of "i" increments until it finds a larger value, so it's currently 1.

The same thing happens with the "j" while loop, which runs until it finds a smaller value than the pivot, which it does at index 5, a.k.a number 3.
In the "j" while loop, the value of j decrements from its "high" value until the smaller value is found. It's currently 5.

The next step is to swap both of these numbers, which is shown in the second diagram in the image above. We then continue on, looking for the next set of numbers to be swapped.

The "i" and "j" while loops run once again, and another set of numbers is found. The same process from earlier repeats itself, and the same thing happens with numbers 9 and 1 (indexes 2 and 4). Both numbers are swapped.

Once the second swap is completed, we once again run the "i" and "j" while loops. This time the value of i becomes greater than j. Hence, the swap with the jth index does not take place. Instead, we break out of the main while loop, and swap the value at index j with the pivot.

And so, we complete one major step in the process of completing the quicksort algorithm. You will now notice that the values on the left side of the pivot (4) are all smaller than the pivot, and the values on the right side are all greater. We will now begin our recursive calls on each side of the array (excluding the pivot).

We'll begin with the left side of the array first. We do not split the array or anything; we just change the low and high values when passing it to the Qsort() function. Previously, the low and high values were 0 and 7. Now they are 0 and 2. We can basically ignore the last 5 values.

Once again, we pick the first value as the new pivot (2). We repeat the same process from before, swapping 3 and 1. Remember, 3 is greater than 2, and 1 is less than 2. Hence, the values at the ith and jth indexes are swapped.

Once again, the value of i exceeds that of j. Hence we skip the jth swap, and swap the pivot with the jth position once again. We now have a perfectly sorted array on the left side (including the pivot). The length of the array is still not 1 though, hence it will perform recursive calls on the left side and right side again. Luckily, the left array is just a single number, hence it will return immediately, and the same goes for the right. This is actually a weakness in the Quicksort algorithm, where it will continue "sorting" even if the array is already sorted.

We'll now pick up some speed, since we've covered most of the concepts. The above image shows the three steps for the right-hand side of the original array. We have a slightly unique case here as both i and j point to the same index. This is because i could not find a number greater than the pivot, and j found a smaller number at the same index it started out as (the last index).

In any case, the main while loop only runs if i is less than j, which it is not. We then exit the main while loop once again, and swap the jth position with the pivot. Since there is no "right" side of the pivot (remember, the pivot is 9), we only have to call the recursive function on the left-hand side ( [6, 8, 7] ).

We have another unique case here, as there is no value that is less than the pivot. For this reason, the jth counter keeps decrementing until it reaches the pivot, which satisfies its condition (less than or equal to the pivot). In short, we basically swap the pivot with itself in the second step of the above diagram.

In the third step of the above diagram, we once again call the recursive function on the only existing side (the right side). The first step in the image below shows what we are left with (8 and 7). And finally, you can see that we have a completely sorted array.

Here is another, Python-specific solution to the quicksort algorithm.
It's very similar (same concept), but it's more simplified due to the use of the append() function, which normally isn't available in most languages. Although this is simpler, I like the other method more, as it can be easily duplicated in other languages as well.

    def Qsort(array):
        lesser = []
        greater = []
        equal = []
        if len(array) > 1:
            pivot = array[0]
            for x in array:
                if x < pivot:
                    lesser.append(x)
                if x > pivot:
                    greater.append(x)
                if x == pivot:
                    equal.append(x)
            return Qsort(lesser) + equal + Qsort(greater)
        else:
            return array

The reason why this is Python-specific is due to the append() method, which isn't normally available. Also notice above how we never call the recursive function on the pivot. The equal list contains numbers that are the same as the pivot. (The reason why it's a list is because there can be multiple numbers that have the same value as the pivot.)

Here you can watch the video version of the Python QuickSort Algorithm. If you are having trouble understanding, or still have some doubts, be sure to check this out. It's a more interactive experience which will definitely help you.

This marks the end of the Python Quicksort Algorithm Tutorial. Any suggestions or contributions for CodersLegacy are more than welcome. Questions regarding the tutorial content can be asked in the comments section below.
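As a quick sanity check, here is a short usage sketch for the first (in-place) solution, assuming the Qsort(array, low, high) definition above; the sample data is chosen to be consistent with the worked example:

    # Demo of the in-place Qsort from the worked example.
    data = [4, 8, 9, 2, 1, 3, 7, 6]
    print(Qsort(data, 0, len(data) - 1))  # -> [1, 2, 3, 4, 6, 7, 8, 9]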
Nasa's Mars rover Curiosity has made a surprising discovery about the Red Planet's past. Findings from the exploration of a region known as the 'Marker Band' revealed evidence of ancient water ripples that formed within lakes.

The Curiosity rover has been ascending the foothills of Mount Sharp, a 5-kilometre-tall mountain thought to have once hosted lakes and streams that may have provided a rich environment for microbial life. "This is the best evidence of water and waves that we've seen in the entire mission," said Ashwin Vasavada, Curiosity's project scientist, adding that the evidence was found in a region the team thought would be dry.

What are the newly discovered clues of water on Mars?

1. In the Marker Band, a narrow band of dark rock that stands out from the rest of Mount Sharp, Curiosity discovered rippled rock textures. Scientists believe they formed billions of years ago when material on the lake's bottom was stirred up by waves on the lake's surface, leaving rippled textures in the rock. Despite numerous attempts, Curiosity has been unable to drill a sample from this layer of rock because of its extreme hardness.

2. Another clue found by the Curiosity rover is a channel in the Gediz Vallis valley, which is thought to have been eroded by a small river. Scientists believe wet landslides occurred here as well, sending car-sized boulders and debris to the valley floor.

3. An odd rock texture, probably caused by some sort of cyclical pattern in the weather or climate, has also piqued the team's curiosity. On Earth, this kind of rhythmic pattern in rock layers is frequently caused by periodic atmospheric events, which may indicate that Mars' climate has changed over time.

The Curiosity mission's findings demonstrate that Mars was once much wetter than previously believed, and future investigation is likely to yield many more fascinating findings about the planet's past.

Source: www.hindustantimes.com
February 1972 Popular Electronics

According to this 1972 article in Popular Electronics, there were as many as 50,000 computers in the world at the time using magnetic core memories. Among them was the Apollo Guidance Computer that was onboard the Apollo 11 Lunar Module that Neil Armstrong used in July 1969 to land on the moon*. Semiconductor memories were being manufactured in 1972, but believe it or not they were not as fast as the magnetic core memories. Machinery was not available with enough precision and repeatability to thread the read, write, sense, and (sometimes) inhibit wires through each ferrite core. The TPX-42** IFF (Identification Friend or Foe) secondary radar I worked on in the USAF had a 1 kByte magnetic core memory. Small women with small hands were the most adept at doing the job of manual assembly. I'm guessing there were no coffee breaks for those dedicated women. Valium breaks were more likely.

* The Apollo 11 Lunar Module was ejected after Armstrong and Aldrin re-entered the Command Module. It crashed somewhere on the moon's surface.
** See bottom of page.

Computer Core Memories Still Handmade

Aided by powerful microscopes, skilled women weave hair-like wires and tiny ferrite cores into a computer memory.

It is an ironic fact that one of the most critical and costly parts of modern computers is produced by handwork more exacting than the finest embroidery. This is the core memory, the portion of the computer that stores information for high-speed electronic calculation - and transfers it at speeds measured in billionths of seconds. The performance of this central memory, more than any other part, determines how efficiently a computer can do its job. In fact, a core memory may account for more than half the cost of a large high-speed computer.

The cores are tiny rings of an iron oxide material, some less than a fiftieth of an inch in diameter. Each core may be magnetized in a clockwise or counterclockwise direction to store a unit (bit or binary digit) of computer information. Women's deft hands string these tiny beads together with hair-like wires. Up to three wires may be run through the almost invisible center of the core. The wires carry electric current that reads, writes, or erases the information in each individual core. As many as ten million cores may be contained in a single memory.

For years, computer designers have sought ways to automate the production of high-speed memories and eliminate this handwork. Various methods have been tried at great expense, but none has yet emerged that can equal the combination of speed, economy and reliability that hand-wired cores achieve.

Common #7 needle and 00 thread dwarf cores and wires in typical section of Ampex computer memory. Each core stores a unit of computer data. Three wires go through the center of each core.

Semiconductor memories with higher operating speeds are beginning to be used in some of the newer computers. Such memories have shown up to three times the speed of core memories, though they have yet to equal their economy. But the core will continue its vital role for many years to come. Further increases in core speed, economy and compactness are certain. Besides, there are more than 50,000 computers in use in the world today that rely on core memories.
Since these computers have been designed with cores, it would require radical and expensive changes in the computer itself to replace existing core memories with semiconductors. Posted May 3, 2018
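The article describes the wiring but not the addressing, so purely as an illustrative sketch (based on the classic coincident-current scheme, which is an assumption here, not something stated above): each X and Y drive line carries half the current needed to flip a core, so only the core at the crossing of the two energized lines changes state, and reads are destructive, which is why the controller immediately rewrites the bit.

    # Toy model of a coincident-current magnetic core plane. Illustration
    # only: the addressing scheme follows the classic textbook description
    # (half-select currents, destructive reads), not this article.

    class CorePlane:
        def __init__(self, rows, cols):
            # One core per (row, col) crossing; each core stores one bit as
            # a magnetization direction, modelled here as 0 or 1.
            self.cores = [[0] * cols for _ in range(rows)]

        def write(self, x, y, bit):
            # In hardware, the X and Y drive lines each carry half the
            # current needed to flip a core, so only the core at their
            # crossing flips. Here we simply model the end result.
            self.cores[x][y] = bit

        def read(self, x, y):
            # Reads are destructive: the core is driven toward 0 and a
            # pulse on the sense wire reveals whether it held a 1, so the
            # controller must write the bit back afterwards.
            bit = self.cores[x][y]
            self.cores[x][y] = 0   # the read wipes the core...
            self.write(x, y, bit)  # ...so the bit is immediately rewritten
            return bit

    plane = CorePlane(64, 64)  # 4,096 cores storing 4,096 bits
    plane.write(3, 5, 1)
    print(plane.read(3, 5))    # -> 1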
by P. Kendall* (7/12)

- Salmonella, Campylobacter, E. coli and Listeria bacteria in food cause food infection.
- Staphylococcus and Clostridium botulinum bacteria produce a toxin (or poison) as a byproduct of growth and multiplication in food and cause food intoxication.
- Clostridium perfringens can multiply in foods to sufficient numbers to cause food poisoning.
- By following simple steps (clean, separate, cook, and chill) you can prevent most food-related illness. And, when in doubt, throw it out!

Bacterial foodborne infections occur when food that is contaminated with bacteria is eaten and the bacteria continue to grow in the intestines, setting up an infection which causes illness. Salmonella, Campylobacter, hemorrhagic E. coli and Listeria all cause infections.

Food intoxication results from consumption of toxins (or poisons) produced in food by bacterial growth. Toxins, not bacteria, cause illness. Toxins may not alter the appearance, odor or flavor of food. Common kinds of bacteria that produce toxins include Staphylococcus aureus and Clostridium botulinum. (See fact sheet 9.305, Botulism, for more information on its prevention.) In the case of Clostridium perfringens, illness is caused by toxins released in the gut when large numbers of vegetative cells are eaten.

Salmonellosis is a form of food infection that may result when foods containing Salmonella bacteria are consumed. The Salmonella family includes more than 2300 serotypes, but two types, Salmonella enteritidis and Salmonella typhimurium, are the most common in the United States and account for half of the infections. Once eaten, the bacteria may continue to live and grow in the intestine, set up an infection and cause illness. The possibility and severity of the illness depend in large part on the size of the dose, the resistance of the host and the specific strain of Salmonella causing the illness.

Salmonella bacteria are spread through indirect or direct contact with the intestinal contents or excrement of animals, including humans. For example, they may be spread to food by hands that are not washed after using the toilet. They also may be spread to raw meat during processing so that it is contaminated when brought into the kitchen. Because of this, it is important to make sure hands and working surfaces are thoroughly washed after contact with raw meat, fish and poultry and before working with foods that require no further cooking.

Salmonella bacteria grow at temperatures between 41 and 113 degrees F. They are readily destroyed by cooking to 160 F and do not grow at refrigerator or freezer temperatures. They do survive refrigeration and freezing, however, and will begin to grow again once warmed to room temperature.

Symptoms of salmonellosis include headache, diarrhea, abdominal pain, nausea, chills, fever and vomiting. These occur within 8 to 72 hours after eating contaminated food and may last four to seven days. Arthritis symptoms may follow three to four weeks after onset of acute symptoms. Infants, young children, pregnant women, the elderly or people already ill have the least resistance to disease effects.

Foods commonly involved include eggs or any egg-based food, salads (such as tuna, chicken, or potato), poultry, beef, pork, processed meats, meat pies, fish, cream desserts and fillings, sandwich fillings, raw sprouts, and milk products. These foods may be contaminated at any of the many points where the food is handled or processed from the time of slaughter or harvest until it is eaten.
Campylobacteriosis or Campylobacter enteritis is caused by consuming food or water contaminated with the bacteria Campylobacter jejuni. C. jejuni commonly is found in the intestinal tracts of healthy animals (especially chickens) and in untreated surface water. Raw and inadequately cooked foods of animal origin and non-chlorinated water are the most common sources of human infection (e.g., raw milk, undercooked chicken, raw hamburger, raw shellfish). The organism grows best in a reduced oxygen environment, is easily killed by heat (120 F), is inhibited by acid, salt and drying, and will not multiply at temperatures below 85 F. Diarrhea, nausea, abdominal cramps, muscle pain, headache and fever are common symptoms. Onset usually occurs two to ten days after eating contaminated food. Duration is two to seven days, but can be weeks with such complications as urinary tract infections and reactive arthritis. Meningitis, recurrent colitis, acute cholecystitis, and Guillain-Barre syndrome are rare complications. Deaths, also rare, have been reported. Preventive measures for Campylobacter infections include pasteurizing milk; avoiding post-pasteurization contamination; cooking raw meat, poultry and fish; and preventing cross-contamination between raw and cooked or ready-to-eat foods. Prior to the 1980s, listeriosis, the disease caused by Listeria monocytogenes, was primarily of veterinary concern, where it was associated with abortions and encephalitis in sheep and cattle. As a result of its wide distribution in the environment, its ability to survive for long periods under adverse conditions, and its ability to grow at refrigeration temperatures, Listeria has since become recognized as an important foodborne pathogen. L. monocytogenes is frequently carried by humans and animals. The organism can grow in the pH range of 4.4 to 9.6. It is salt tolerant and relatively resistant to drying, but easily destroyed by heat. (It grows between 32 F and 113 F). Listeriosis primarily affects newborn infants, pregnant women, the elderly and those with compromised immune systems. In a healthy non-pregnant person, listeriosis may occur as a mild illness with fever, headaches, nausea and vomiting. Among pregnant women, intrauterine or cervical infections may result in spontaneous abortion or still birth. Infants born alive may develop meningitis. The mortality rate in diagnosed cases is 20 to 25 percent. The incubation period is a few days to several weeks. Recent cases have involved raw milk, soft cheeses made with raw milk, and raw or refrigerated ready-to-eat meat, poultry or fish products. In 2011, a large outbreak of listeriosis was caused by cantaloupe contaminated during processing at one facility. Preventive measures for listeriosis include maintaining good sanitation, turning over refrigerated ready-to-eat foods quickly, pasteurizing milk, avoiding post-pasteurization contamination, and cooking foods thoroughly. Staphylococcus bacteria are found on the skin and in the nose and throat of most people; people with colds and sinus infections are often carriers. Infected wounds, pimples, boils and acne are generally rich sources. Staphylococcus also are widespread in untreated water, raw milk and sewage. When Staphylococcus bacteria get into warm food and multiply, they produce a toxin or poison that causes illness. The toxin is not detectable by taste or smell. 
While the bacteria itself can be killed by temperatures of 120 F, its toxin is heat resistant; therefore, it is important to keep the staph organism from growing. Foods commonly involved in staphylococcal intoxication include protein foods such as ham, processed meats, tuna, chicken, sandwich fillings, cream fillings, potato and meat salads, custards, milk products and creamed potatoes. Foods that are handled frequently during preparation are prime targets for staphylococci contamination. Symptoms include abdominal cramps, vomiting, severe diarrhea and exhaustion. These usually appear within one to eight hours after eating staph-infected food and last one or two days. The illness seldom is fatal. Keep food clean to prevent its contamination, keep it either hot (above 140 F) or cold (below 40 F) during serving time, and as quickly as possible refrigerate or freeze leftovers and foods to be served later. Clostridium Perfringens Food-Borne Illness Clostridium perfringens belong to the same genus as the botulinum organism. However, the disease produced by C. perfringens is not as severe as botulism and few deaths have occurred. Spores are found in soil, nonpotable water, unprocessed foods and the intestinal tract of animals and humans. Meat and poultry are frequently contaminated with these spores from one or more sources during processing. Spores of some strains are so heat resistant that they survive boiling for four or more hours. Furthermore, cooking drives off oxygen, kills competitive organisms and heat-shocks the spores, all of which promote germination. Once the spores have germinated, a warm, moist, protein-rich environment with little or no oxygen is necessary for growth. If such conditions exist (i.e., holding meats at warm room temperature for several hours or cooling large pots of gravy or meat too slowly in the refrigerator), sufficient numbers of vegetative cells may be produced to cause illness. Foods commonly involved in C. perfringens illness include cooked, cooled, or reheated meats, poultry, stews, meat pies, casseroles, and gravies. Symptoms occur within eight to 24 hours after contaminated food is eaten. They include acute abdominal pain and diarrhea. Nausea, vomiting and fever are less common. Recovery usually is within one to two days, but symptoms may persist for one or two weeks. E. coli Hemorrhagic Colitis Escherichia coli belong to a family of microorganisms called coliforms. Many strains of E. coli live peacefully in the gut, helping keep the growth of more harmful microorganisms in check. However, one strain, E. coli O157:H7, causes a distinctive and sometimes deadly disease. Symptoms begin with nonbloody diarrhea one to five days after eating contaminated food, and progress to bloody diarrhea, severe abdominal pain and moderate dehydration. In young children, hemolytic uremic syndrome (HUS) is a serious complication that can lead to renal failure and death. In adults, the complications sometimes lead to thrombocytopenic purpura (TPP), characterized by cerebral nervous system deterioration, seizures and strokes. Ground beef is the food most associated with E. coli O157:H7 outbreaks, but other foods also have been implicated. These include raw milk, unpasteurized apple juice and cider, dry-cured salami, homemade venison jerky, sprouts, lettuce, spinach, and untreated water. Infected food handlers and diapered infants with the disease likely help spread the bacteria. Preventive strategies for E. 
coli infections include thorough washing and other measures to reduce the presence of the microorganism on raw food, thorough cooking of raw animal products, and avoiding recontamination of cooked meat with raw meat. To be safe, cook ground meats to 160 F. Preventing Food-Borne Illness Foodborne illness can be prevented. The following food handling practices have been identified by the Food Safety and Inspection Service of USDA as essential in preventing bacterial foodborne illness. Purchase and Storage - Keep packages of raw meat and poultry separate from other foods, particularly foods to be eaten without further cooking. Use plastic bags or other packaging to prevent raw juices from dripping on other foods or refrigerator surfaces. - Buy products labeled “keep refrigerated” only if they are stored in a refrigerated case. Refrigerate promptly. - Buy dated products before the label sell-by, use-by or pull-by date has expired. - Use an appliance thermometer to make sure your refrigerator is between 35 and 40 F and your freezer is 0 F or below. Figure 1: Temperature of food for control of bacteria. - Wash hands (gloved or not) with soap and water for 20 seconds before preparing foods and after handling raw meat or poultry, touching animals, using the bathroom, changing diapers, smoking or blowing your nose. - Thaw only in refrigerator, under cold water changed every 30 minutes, or in the microwave (followed by immediate cooking). - Rinse raw produce thoroughly under running tap water before eating. - Scrub containers and utensils used in handling uncooked foods with hot, soapy water before using with ready-to-serve foods. Use separate cutting boards to help prevent contamination between raw and cooked foods. - Stuff raw products immediately before cooking, never the night before. - Don’t taste raw meat, poultry, eggs, fish or shellfish. Use pasteurized milk and milk products. - Do not eat raw eggs. This includes milk shakes with raw eggs, Caesar salad, Hollandaise sauce, and other foods like homemade mayonnaise, ice cream or eggnog made from recipes that call for uncooked eggs. - Use a meat thermometer to judge safe internal temperatures for cooked foods (see Figure 1). If your microwave has a temperature probe, use it. - When using slow cookers or smokers, start with fresh rather than frozen, chunks rather than roasts or large cuts, and recipes that include a liquid. Check internal temperature in three spots to be sure food is thoroughly cooked. - Avoid interrupted cooking. Never partially cook products, to refrigerate and finish later. Also, don’t put food in the oven with a timer set to begin cooking later in the day. - If microwave cooking instructions on the product label are not appropriate for your microwave, increase microwave time to reach a safe internal temperature. Rotate, stir and/or cover foods to promote even cooking. - Before tasting, boil all home-canned vegetables and meats 10 minutes plus one minute per 1,000 feet. - Wash hands with soap and water before preparing, serving, or eating food. Serve cooked products on clean plates with clean utensils and clean hands. - Keep hot foods hot (above 140 F) and cold foods cold (below 40 F). - In environmental temperatures of 90 F or warmer, leave cooked food out no longer than one hour before reheating, refrigerating or freezing. At temperatures below 90 F, leave out no more than two hours. - Wash hands before handling leftovers and use clean utensils and surfaces. - Remove stuffing before cooling or freezing. 
- Refrigerate or freeze cooked leftovers in small, covered, shallow containers (2 inches deep or less) within two hours after cooking. Leave airspace around containers to help ensure rapid, even cooling.
- Use cooked leftovers within 4 days. Don’t taste leftovers to determine safety.
- If reheating leftovers, cover and reheat to the appropriate temperature before serving (a rolling boil for sauces, soups, gravies and other “wet” foods; 165 F for all others).
- If in doubt, throw it out. Discard outdated, unsafe or possibly unsafe leftovers in the garbage disposal or in tightly wrapped packages so they cannot be eaten by people or animals.

Jay, J.M. Modern Food Microbiology. Gaithersburg, MD: Aspen Publishers, 2005.
Center for Food Safety and Applied Nutrition of the Food and Drug Administration (FDA), U.S. Department of Health and Human Services. 2012. Bad Bug Book: Foodborne Pathogenic Microorganisms and Natural Toxins Handbook, 2nd Ed. Available at: http://www.fda.gov.
U.S. Department of Agriculture Food Safety and Inspection Service. 2011. Basics for Handling Food Safely. Available at: www.fsis.usda.gov/PDF/Basics_for_Safe_Food_Handling.pdf.

*P. Kendall, Ph.D., R.D., Colorado State University, associate dean of research, food science and human nutrition. 8/98. Revised 7/12. Colorado State University, U.S. Department of Agriculture and Colorado counties cooperating. Extension programs are available to all without discrimination. No endorsement of products mentioned is intended nor is criticism implied of products not mentioned.
By Chris Gleason

“OK class, today is practice chart turn-in day.” Audible groans and murmurs came from the band. As I began collecting the monthly practice charts, I noticed Spencer writing “20 minutes” in every box on the chart. I moved in on his position like a stealthy cougar ready to pounce. With a triumphant “A-ha!” I snatched his paper and told him to follow me into my office. I immediately picked up the phone and called his father. “Mr. Williams, I just witnessed your son filling out his practice chart and forging your signature.” With little hesitation Mr. Williams responded, “No, I filled it out and signed it this morning.” How could this be possible? The child was lying and so was the father! My first instinct was to dock both Spencer and his father 10 points for committing a crime against musicianship. Instead, I took a long hard look at what I was doing to create an environment in which kids lied about practicing and parents covered it up. After many years of making mistakes, reflecting, and reading, I have come to a few conclusions:
- Grades tend to diminish students’ interest in whatever they’re learning.
- Grades create a preference for the easiest possible task.
- Grades tend to reduce the quality of students’ thinking.
So how do we get students to stop focusing on the grade? How is assessment different from evaluation? What role does assessment play in my classroom?

Changing the Narrative

The word “assessment” has been used in different ways throughout the years. Regardless of the exact definition, the word has become toxic in education. Visions of long, standardized multiple-choice tests flood the minds of students when the word is invoked. Similarly, educators think of testing that in most cases does not reflect what is most important in their classrooms. Tainted with the view that everything worthwhile can be measured and reduced to a number, educators pressed for data have to battle an inner war of both valuing assessment and resenting it. We need to take back this term and use it for good in our classrooms. Assessment plays a critical and vital role in the education process.

What is the Assessment Process?

Assessment is simply a strategy for gathering data that is directly linked to your outcomes. The assessment process includes three steps (as seen in the figure below). The first step is to assess, or gather information about, learning. To be honest, as educators we are always assessing students. In fact, it is impossible not to assess learners, just as it is impossible to stop assessing internal things such as hunger, mood, and energy level, or external things such as temperature and light. We are always taking the “pulse” of the class and individual students in an informal sense, just as we are gathering data about student performance, knowledge, desires, beliefs, and connections. The key to the gathering stage is to consider what information is important, since there is so much of it. It is very easy to get swallowed up by the data or to get lost in less-than-important facts and numbers. What educators need to ask themselves is “What question do I want answered?” and “Do I have a tool to capture or gather the information?” The second step in the assessment process is evaluation. Evaluation is defined as the process of analyzing or interpreting the data. When analyzing or interpreting the data, we often compare the results to a set standard, to others or to ourselves.
As most researchers would tell you, one data point does not provide a tremendous amount of valid data. Acquiring data over time can help to identify trends, yielding a clearer picture of stability, growth or decline. The question is how to collect reliable data over time and what to compare it to. The third step in the assessment process is to act. Based on the assessment and evaluation, several possible actions could result, including (but not limited to) grades, reflections, and strategy modification. It is important to note that assigning a grade is only one of the many actions that could take place. Moreover, assigning a grade or number may be the least significant action to affect student learning. For example, you finish rehearsing a technical passage with your clarinet section and ask, “Clarinets, show using your hand a number between 1 and 5 as to how proficiently you are playing that passage.” This “data” can help inform you and the student whether a sectional (or some other strategy) is needed. It doesn’t mean that you should grab a grade book and jot down responses.

How do we Assess?

Start with your outcome, as this is the destination. Ask yourself:
- What evidence is needed for me, the student, and others to conclude that the student has made it to the outcome? Does the assessment I have created really answer this question?
- How much choice or autonomy does the student have in determining how they will show understanding?
- What tool could be used to clearly communicate different levels of achievement? Also, do the students have input into creating this tool?
Next, consider where your students are starting. Ask yourself:
- What knowledge, skills or experiences do your students already possess? How could I find this information out?
- Where will you begin so that you are capturing the majority of your class? For those students who do not fit into this box, what strategies do you have to support them? How can you identify these students?
- How can you avoid the “curse of knowledge”? In other words, educators sometimes gloss over things that come easily to us. We need to have empathy for our new learners.
Great news! Every strategy you create for your classroom is already a formative assessment. The key is to craft creative, varied, and rich strategies that lead to your outcome. Ask yourself:
- What strategies will be best suited for student self-assessment?
- What strategies will be best suited for peer assessment?
- What strategies will be best suited for teacher assessment?
- For all of the above, what strategies would best be saved or documented (formal) versus just observed or “taken in” (informal)?

Teacher, Take Heart!

Courage is necessary when assessing students. The wise teacher knows that they will learn a lot about themselves and about education from their students. True authentic assessment means taking a look at what is working and what is not working. When a class does poorly on a task, is this a reflection of the teacher, the class, or a bit of both? It takes courage to look at the “data” and evaluate what went wrong. Oftentimes, if an entire class does poorly, it is mostly a reflection of the educator picking too difficult a task or not sequencing and layering skills and knowledge to get to the benchmark.
Teachers with an open mindset will learn from this, reevaluate, and try a new approach. Teachers with a closed mindset will blame the students and refuse to look at their own teaching as the potential problem. What and how we assess points to what we value. What we spend time and effort assessing ultimately tells our students what is most worthwhile about their experience in our classroom. We can speak about the importance of creativity, critical thinking, internal motivation, and process over product, but do these values shine when it comes to the assessment going on in your classroom? Do you assess what is easy to measure or what is actually most important? Do you utilize progressive teaching practices, but then run out of time for any meaningful feedback? Could your students explain your classroom assessment process to their parents?

Going Beyond the Grade

I embrace the belief that teachers can de-emphasize grades while building intrinsic motivation when we promote autonomy, mastery, and purpose. For example, I have my students take ownership of quarterly reflections and individualized self-assessments that are based on rubrics created by the student and teacher. Parents rave over the quality and depth of the multifaceted report that includes both student and teacher comments. I engage students’ distinct and diverse interests and intelligences by using authentic summative projects that are presented in a video prior to performances (or as we call them, “informances”). I educate students about their brains and myelin. Instead of demanding practice charts, I teach the value and characteristics of deep practice. I also teach the value and necessity of mistakes, something too often stigmatized in our product-focused education system. As Sir Ken Robinson stated in his 2013 TED Talk about the growth of the human mind, “Curiosity is the engine of achievement.” We need to harness the research and strategies to create schools that spark children’s imaginations. As music educators, let’s take back the term “assessment” and use it to help our students achieve and succeed.
Where are stem cells found in the body? Adult or somatic stem cells exist throughout the body after embryonic development and are found inside different types of tissue. These stem cells have been found in tissues such as the brain, bone marrow, blood, blood vessels, skeletal muscles, skin, and the liver. Embryonic stem cells come from embryos that are three to five days old. At this stage, an embryo is called a blastocyst and has about 150 cells. These are pluripotent (ploo-RIP-uh-tunt) stem cells, meaning they can divide into more stem cells or can become any type of cell in the body.
- Human pluripotent stem cell: one of the “cells that are self-replicating, are derived from human embryos or human fetal tissue, and are known to develop into cells and tissues of the three primary germ layers.”
- Pluripotent adult stem cells are rare and generally small in number, but they can be found in umbilical cord blood and other tissues. The quantity of bone marrow stem cells declines with age and is greater in males than females during reproductive years.
- Adult stem cells can be isolated from the body in different ways, depending on the tissue. Blood stem cells, for example, can be taken from a donor's bone marrow, from blood in the umbilical cord when a baby is born, or from a person's circulating blood. Pluripotent cells can give rise to all of the cell types that make up the body; embryonic stem cells are considered pluripotent. Multipotent cells can develop into more than one cell type, but are more limited than pluripotent cells; adult stem cells and cord blood stem cells are considered multipotent.
- Embryonic cells within the first couple of cell divisions after fertilization are the only cells that are totipotent.
- Induced pluripotent stem cells (iPSCs) are adult cells that have been genetically reprogrammed to an embryonic stem cell–like state by being forced to express genes and factors important for maintaining the defining properties of embryonic stem cells.
- Most embryonic stem cells are derived from embryos that develop from eggs that have been fertilized in vitro (in an in vitro fertilization clinic) and then donated for research purposes with the informed consent of the donors. They are not derived from eggs fertilized in a woman's body. Pluripotent stem cells, i.e., cells that can give rise to any fetal or adult cell type, can be found in a number of tissues, including umbilical cord blood. Using genetic reprogramming, pluripotent stem cells equivalent to embryonic stem cells have been derived from human adult skin tissue.
- Regenerative medicine is a branch of translational research in tissue engineering and molecular biology which deals with the “process of replacing, engineering or regenerating human cells, tissues or organs to restore or establish normal function.”
- The hollow blastocyst, which is where embryonic stem cells come from, contains a cluster of 20–30 cells called the inner cell mass. These are the cells that become embryonic stem cells in a lab dish. The process of extracting these cells destroys the embryo. Don't forget that the embryos were donated from IVF clinics.
- Cell therapies would use stem cells, or cells grown from stem cells, to replace or rejuvenate damaged tissue. Scientists also want to use stem cells to understand disease and find drugs that might treat it.
Embryonic stem cells could be used to make more specialized tissues that have been lost to disease and injury.
SC.6.E.6 Over geologic time, internal and external sources of energy have continuously altered the features of Earth by means of both constructive and destructive forces. All life, including human civilization, is dependent on Earth's internal and external energy and material resources.
SC.6.E.6.1 Describe and give examples of ways in which Earth's surface is built up and torn down by physical and chemical weathering, erosion, and deposition.
SC.6.E.6.2 Recognize that there are a variety of different landforms on Earth's surface, such as coastlines, dunes, rivers, mountains, glaciers, deltas, and lakes, and relate these landforms as they apply to Florida.
SC.6.E.7 The scientific theory of the evolution of Earth states that changes in our planet are driven by the flow of energy and the cycling of matter through dynamic interactions among the atmosphere, hydrosphere, cryosphere, geosphere, and biosphere, and the resources used to sustain human civilization on Earth.
SC.6.E.7.1 Differentiate among radiation, conduction, and convection, the three mechanisms by which heat is transferred through Earth's system.
SC.6.E.7.2 Investigate and apply how the cycling of water between the atmosphere and hydrosphere has an effect on weather patterns and climate.
SC.6.E.7.3 Describe how global patterns such as the jet stream and ocean currents influence local weather in measurable terms such as temperature, air pressure, wind direction and speed, and humidity and precipitation.
SC.6.L.14.2 Investigate and explain the components of the scientific theory of cells (cell theory): all organisms are composed of cells (single-celled or multi-cellular), all cells come from pre-existing cells, and cells are the basic unit of life.
SC.6.L.14.4 Compare and contrast the structure and function of major organelles of plant and animal cells, including cell wall, cell membrane, nucleus, cytoplasm, chloroplasts, mitochondria, and vacuoles.
SC.6.L.14.5 Identify and investigate the general functions of the major systems of the human body (digestive, respiratory, circulatory, reproductive, excretory, immune, nervous, and musculoskeletal) and describe ways these systems interact with each other to maintain homeostasis.
A. Scientific inquiry is a multifaceted activity; the processes of science include the formulation of scientifically investigable questions, construction of investigations into those questions, the collection of appropriate data, the evaluation of the meaning of those data, and the communication of this evaluation.
B. The processes of science frequently do not correspond to the traditional portrayal of "the scientific method."
C. Scientific argumentation is a necessary part of scientific inquiry and plays an important role in the generation and validation of scientific knowledge.
D. Scientific knowledge is based on observation and inference; it is important to recognize that these are very different things. Not only does science require creativity in its methods and processes, but also in its questions and explanations.
SC.6.N.1.1 Define a problem from the sixth grade curriculum, use appropriate reference materials to support scientific understanding, plan and carry out scientific investigations of various types, such as systematic observations or experiments, identify variables, collect and organize data, interpret data in charts, tables, and graphics, analyze information, make predictions, and defend conclusions.
SC.6.N.1.4 Discuss, compare, and negotiate methods used, results obtained, and explanations among groups of students conducting the same investigation.
SC.6.N.1.5 Recognize that science involves creativity, not just in designing experiments, but also in creating explanations that fit evidence.
SC.6.N.2 The Characteristics of Scientific Knowledge
A. Scientific knowledge is based on empirical evidence, and is appropriate for understanding the natural world, but it provides only a limited understanding of the supernatural, aesthetic, or other ways of knowing, such as art, philosophy, or religion.
B. Scientific knowledge is durable and robust, but open to change.
C. Because science is based on empirical evidence it strives for objectivity, but as it is a human endeavor the processes, methods, and knowledge of science include subjectivity, as well as creativity and discovery.
SC.6.N.2.1 Distinguish science from other activities involving thought.
SC.6.N.2.2 Explain that scientific knowledge is durable because it is open to change as new evidence or interpretations are encountered.
SC.6.N.2.3 Recognize that scientists who make contributions to scientific knowledge come from all kinds of backgrounds and possess varied talents, interests, and goals.
SC.6.N.3 The terms that describe examples of scientific knowledge, for example "theory," "law," "hypothesis," and "model," have very specific meanings and functions within science.
SC.6.N.3.1 Recognize and explain that a scientific theory is a well-supported and widely accepted explanation of nature and is not simply a claim posed by an individual. Thus, the use of the term theory in science is very different than how it is used in everyday life.
SC.6.N.3.2 Recognize and explain that a scientific law is a description of a specific relationship under given conditions in the natural world. Thus, scientific laws are different from societal laws.
SC.6.N.3.3 Give several examples of scientific laws.
SC.6.P.11.1 Explore the Law of Conservation of Energy by differentiating between potential and kinetic energy. Identify situations where kinetic energy is transformed into potential energy and vice versa.
SC.6.P.13.2 Explore the Law of Gravity by recognizing that every object exerts gravitational force on every other object and that the force depends on how much mass the objects have and how far apart they are.
Glue ear is where the empty middle part of the ear fills up with fluid. This can cause temporary hearing loss. Ear infections may be more common in children than in adults, but grown-ups are still susceptible to these infections. Unlike childhood ear infections, which are often minor and pass quickly, adult ear infections are frequently signs of a more serious health problem. There are three main types of ear infections. The middle ear is the space behind the eardrum, which is connected to the back of the throat by a passageway called the Eustachian tube. Middle ear infections, also called otitis media, can occur when congestion from an allergy or cold blocks the Eustachian tube. Fluid and pressure build up, so bacteria or viruses that have traveled up the Eustachian tube into the middle ear can multiply and cause an ear infection. The auditory tube allows fluid to drain from the ear into the back of the throat. If the auditory tube becomes clogged, fluid will become trapped in the middle ear space. This fluid is called an effusion by your healthcare providers. An ear infection, sometimes called acute otitis media, is an infection of the middle ear, the air-filled space behind the eardrum that contains the tiny vibrating bones of the ear. Children are more likely than adults to get ear infections. Because ear infections often clear up on their own, treatment may begin with managing pain and monitoring the problem. Most people are aware that middle ear infections are very common in young children. Children are known to be more prone to this condition due to their more horizontal Eustachian tubes and propensity for harboring infections in general. Pain may increase until the eardrum ruptures from fluid pressure. Ear infections are not as common in adults as they are in children, although they can be more serious. The symptoms of ear infections in adults should be closely monitored and diagnosed by a doctor to avoid any complications. Certain situations and actions put some people more at risk for ear infections than others.
A Primer On Organic Reactions
By James Ashenhurst
Curved Arrows (for reactions)
Last updated: May 21st, 2019

If you think of electrons as the currency of chemistry, reactions are transactions of electrons between atoms. Just like double-entry bookkeeping was developed to formalize how financial transactions are recorded, chemists have developed their own convention for showing transactions of electrons between atoms. It’s called the curved arrow formalism. Previously I covered how we apply the curved arrow formalism to drawing resonance forms. Here, I’m going to show how we can extend it to show reactions. The same principles that applied to resonance forms apply here to reactions, except that unlike resonance forms we’re dealing with actual reactions, not components of a resonance hybrid.

The Purpose Of The Curved Arrow Formalism

The purpose of the curved arrow is to show movement of electrons from one site to another. Electrons move from the tail to the head. Most of the arrows you’ll see have a double-barb at the head, representing the movement of a pair of electrons. (Note: there are also single-barbed arrows depicting the motion of a single electron; those are covered in detail elsewhere [see: In Summary: Free Radicals].)

The Three Legal Moves

There are only three legal moves you can do with curved arrows. These three moves depict a pair of electrons moving from:
- lone pair → bond
- bond → lone pair
- bond → bond
Other than a few problematic examples, every reaction you will encounter in Org 1/Org 2 can be described using a combination of these three “moves”! This is just like the three moves for drawing resonance curved arrows [see: How to Use Curved Arrows to Interconvert Resonance Forms]. However, unlike drawing resonance forms, which only involve changes in π bonds, the bond in question here can be either a single (sigma) bond or a π bond.

Curved Arrows Also Represent A Way of Tracking Changes In Formal Charge

Curved arrows are also a way of tracking changes in formal charge:
- Since a pair of electrons is being donated from the “tail,” the atom at the tail will have a formal “loss” of one electron, making its charge more positive by 1.
- Since a pair of electrons is being accepted at the “head,” the atom at the head will have a formal “gain” of one electron, making its charge more negative by 1.
Here are three general examples of each transaction. There’s a second layer of analysis that can be done here (sigma bond versus π bond) but that can be saved for later.

Example #1: Lone Pair → Bond

The first example shows the simplified case of a lone pair on a naked hydroxide ion attacking a proton (H+) (Note 1). The arrow shows the formation of a bond between O and H, with electrons from the lone pair on the oxygen. Note the changes in formal charge: the “tail” becomes more positive, and the “head” becomes more negative.

Example #2: Bond → Lone Pair

The second example shows the reverse reaction: dissociation of water to give H+ and hydroxide ion. Here, the arrow shows the breaking of the O–H bond, with the electron pair ending up as a lone pair on oxygen. Note the changes in formal charge: at the “tail,” hydrogen goes from “sharing” to “lacking,” thus losing an electron. At the “head,” oxygen goes from “sharing” to “owning,” gaining an electron. We adjust the charges accordingly. (Why does it break this way? Rule of thumb: bonds generally break so as to put the electrons on the atom that will best stabilize them. Oxygen, being more electronegative than hydrogen (3.5 vs.
2.2), will better stabilize the additional electrons.)

Example #3: Bond → Bond

Finally, an example of a bond breaking to form another bond. Here, the arrow shows the breaking of a π bond between C1 and C2, and the formation of a new C–H bond between C1 and H. Again, the atom at the “tail” that loses its share of the π bond (C2 in this case) becomes more positive by 1, and the atom at the head (H+) becomes more negative by 1. There’s a little problem with this type of curved arrow: the identity of the atom that is forming the new bond (C1 in this case) is somewhat ambiguous. For this reason, some instructors (myself included) occasionally draw an additional “dotted line” to remove the ambiguity. Others have developed a “bouncy arrow” technique.

Conclusion: Curved Arrows Are The Accounting System of Organic Chemistry

So to summarize, this “accounting system” lets us not only account for the bonds that are formed and broken during a reaction, it also lets us keep track of the charges. This is really useful! If you’re given a molecule with these “curved arrows” drawn on it, it’s a lot like a computer program. The arrows give you precise directions on what to do in order to obtain the product.

Note 1: I say it’s somewhat artificial because in reality, each of these ions will be accompanied by a “counter-ion” of opposite charge.
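Since the charge bookkeeping follows two fixed rules (the tail atom becomes more positive by 1, the head atom more negative by 1), it can even be written as a few lines of code. Below is a minimal illustrative sketch, not from the original article; the function name and atom labels are invented for the example.

```python
# Minimal sketch of curved-arrow "accounting": each curved arrow makes
# the tail atom more positive by 1 and the head atom more negative by 1.
# This is bookkeeping only, not a chemistry engine.

def apply_arrows(charges, arrows):
    """charges: dict of atom label -> formal charge; arrows: list of (tail, head)."""
    updated = dict(charges)
    for tail, head in arrows:
        updated[tail] += 1  # tail donates a pair: formal loss of one electron
        updated[head] -= 1  # head accepts a pair: formal gain of one electron
    return updated

# Example #1 above: hydroxide (O, -1) donates a lone pair to a bare proton (H, +1).
print(apply_arrows({"O": -1, "H": +1}, [("O", "H")]))  # {'O': 0, 'H': 0}
```

Running the single-arrow example reproduces the charge changes described in Example #1: both atoms end up neutral and the total charge is conserved, just as double-entry books must balance.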
One of the most remarkable observatories in the world does its work not on a mountaintop, not in space, but 45,000 feet high on a Boeing 747. Nick Howes took a look around this unique airliner as it made its first landing in Europe. SOFIA (Stratospheric Observatory for Infrared Astronomy) came from an idea first mooted in the mid-1980s. Imagine, said scientists, using a Boeing 747 to carry a large telescope into the stratosphere, where absorption of infrared light by atmospheric water molecules is dramatically reduced, even in comparison with the highest ground-based observatories. By 1996 that idea had taken a step closer to reality when the SOFIA project was formally agreed between NASA (who fund 80 percent of the cost of the $330 million mission, an amount comparable to a single modest space mission) and the German Aerospace Centre (DLR, who fund the other 20 percent). Research and development began in earnest using a highly modified Boeing 747SP named the ‘Clipper Lindbergh’ after the famous American pilot, where the ‘SP’ stands for ‘Special Performance’. Maiden test flights were flown in 2007, with SOFIA operating out of NASA’s Dryden Flight Research Center at Edwards Air Force Base on Rogers Dry Lake in California – a nice, dry location that helps with the instrumentation and aircraft operations. As the plane paid a visit to the European Space Agency’s astronaut training centre in Cologne, Germany, I was given a rare opportunity to look around this magnificent aircraft as part of a European Space ‘Tweetup’ (a Twitter meeting). What was immediately noticeable was the plane’s shorter length compared to the 747s you usually fly on, which enables the aircraft to stay in the air for longer, a crucial aspect for its most important passenger, the 2.7-metre SOFIA telescope. Its Hubble Space Telescope-sized primary mirror is aluminium coated and bounces light to a 0.4-metre secondary, all in an open cage framework that literally pokes out of the side of the aircraft. As we have seen, the rationale for placing a multi-tonne telescope on an aircraft is that by doing so it is possible to escape most of the absorption effects of our atmosphere. Observations in infrared are largely impossible for ground-based instruments at or near sea level and only partially possible even on high mountaintops. Water vapour in our troposphere (the lower layer of the atmosphere) absorbs so much of the infrared light that traditionally the only way to beat this was to send up a spacecraft. SOFIA fills a niche by doing nearly the same job but at far less risk and with a far longer life-span. The aircraft has sophisticated infrared monitoring cameras to check its own output, and water vapour monitoring to measure what little absorption is occurring. The 2.7-metre mirror (although only 2.5 metres is used in practice) is made from a glass-ceramic composite that is highly thermally tolerant, which is vital given the harsh conditions that the aircraft puts the isolated telescope through.
If one imagines the difficulty amateur astronomers have some nights with telescope stability in blustery conditions, spare a thought for SOFIA, whose huge f/19.9 Cassegrain reflecting telescope has to deal with an open door to 800 kilometres per hour (500 miles per hour) winds. Nominally, some operations will occur at 39,000 feet (approximately 11,880 metres) rather than the possible ceiling of 45,000 feet (13,700 metres), because while the higher altitude provides slightly better conditions in terms of lack of absorption (still above 99 percent of the water vapour that causes most of the problems), the extra fuel needed means that observation times are reduced significantly, making the 39,000 feet altitude operationally better in some instances to collect more data. The aircraft uses a cleverly designed air intake system to funnel and channel the airflow and turbulence away from the open telescope window, and speaking to the pilots and scientists, they all agreed that there was no effect caused by any output from the aircraft engines either. The cameras and electronics on all infrared observatories have to be maintained at very low temperatures to avoid thermal noise from them spilling into the image, but SOFIA has an ace up its sleeve. Unlike a space mission (with the exception of the servicing missions to the Hubble Space Telescope, which each cost $1.5 billion including the price of launching a space shuttle), SOFIA has the advantage of being able to replace or repair instruments or replenish its coolant, allowing an estimated life-span of at least 20 years, far longer than any space-based infrared mission, which runs out of coolant after a few years. Meanwhile the telescope and its cradle are a feat of engineering. The telescope is pretty much fixed in azimuth, with only a three-degree play to compensate for the aircraft, but it doesn’t need to move in that direction as the aircraft, piloted by some of NASA’s finest, performs that duty for it. It can work over a 20–60 degree elevation range during science operations. It’s all been engineered to tolerances that make the jaw drop. The bearing sphere, for example, is polished to an accuracy of less than ten microns, and the laser gyros provide angular increments of 0.0008 arcseconds. Isolated from the main aircraft by a series of pressurised rubber bumpers, which are altitude compensated, the telescope is almost completely free from the main bulk of the 747, which houses the computers and racks that not only operate the telescope but provide the base station for any observational scientists flying with the plane.

PI in the Sky

The Principal Investigator station is located around the mid-point of the aircraft, several metres from the telescope but enclosed within the plane (exposed to the air at 45,000 feet, the crew and scientists would quickly perish). Here, for ten or more hours at a time, scientists can gather data once the door opens and the telescope is pointing at the target of choice, with the pilots following a precise flight path to maintain the instrument pointing accuracy and also to best avoid the possibility of turbulence. Whilst ground-based telescopes can respond quickly to events such as a new supernova, SOFIA is more regimented in its science operations and, with proposal cycles of six months to a year, one has to plan quite accurately how best to observe an object.
Forecasting the future

Science operations started in 2010 with FORCAST (Faint Object Infrared Camera for the SOFIA Telescope) and continued into 2011 with the GREAT (German Receiver for Astronomy at Terahertz Frequencies) instrument. FORCAST is a mid/far-infrared instrument with two cameras working between five and forty microns (in tandem they can work between 10 and 25 microns) with a 3.2 arcminute field of view. It saw first light on Jupiter and the galaxy Messier 82, but will be working on imaging the galactic centre, star formation in spiral and active galaxies, and molecular clouds. One of its primary science goals is enabling scientists to accurately determine dust temperatures and more detail on the morphology of star-forming regions down to less than three-arcsecond resolution (depending on the wavelength the instrument works at). Alongside this, FORCAST is also able to perform grism (i.e., a grating prism) spectroscopy, to get more detailed information on the composition of objects under view. There is no adaptive optics system, but it doesn’t need one for the types of operations it’s doing. FORCAST and GREAT are just two of the ‘basic’ science operation instruments, which also include echelle spectrographs, far-infrared spectrometers and high-resolution wideband cameras, but already the science team are working on new instruments for the next phase of operations. Instrument switch-over, whilst complex, is relatively quick (comparable to the time it takes to switch instruments on larger ground observatories), and can be achieved in readiness for observations, which the plane aims to make on up to 160 flights per year. And whilst there are no firm plans to build a sister ship for SOFIA, there have been discussions among scientists about putting a larger telescope on an Airbus A380. With a planned science ambassador programme involving teachers flying on the aircraft to do research, SOFIA’s public profile is going to grow. The science output and possibilities from instruments that are constantly evolving, serviceable and improvable every time it lands is immeasurable in comparison to space missions. Journalists had only recently been afforded the opportunity to visit this remarkable aircraft, and it was a privilege and honour to be one of the first people to see it up close. To that end I wish to thank ESA and NASA for the invitation and chance to see something so unique.
Scientists are skeptical, and still recovering from the shock of subatomic particles, neutrinos, apparently moving through a vacuum at speeds faster than the speed of light. The scientists in Gran Sasso, Italy have been running the experiment known as OPERA (Oscillation Project with Emulsion-tRacking Apparatus) for three years. The Gran Sasso team has access to the large accelerator at CERN, near Geneva, Switzerland, from which they shot the particles using a system of magnets over a distance of 450 miles to Gran Sasso. A beam of light traveling that distance would require about 2.4 milliseconds to span the space. Light travels fast enough to circle the earth six times in a single second. The neutrinos arrived 60 billionths of a second before they were expected. The speed of light in a vacuum is 299,792,458 meters per second, but the particles were traveling at 299,798,454 meters per second. Einstein, the father of modern physics, held in his theory of special relativity that nothing can travel faster than light in a vacuum. We now believe there are exceptions. In the center of our Sun, deep inside superheated and compressed atoms, subatomic particles throb back and forth from energy to matter, and in their matter phase are exceeding the limit of light speed. And at the hypothesized “Big Bang,” existence itself far surpassed the speed of light, expanding outward in all directions. Stephen Parke, head of theoretical physics at Fermilab, has a lot to say about the unexpected new discovery. “If this is true,” Parke says, “it would rock the foundations of physics. The existence of faster-than-light particles would also wreak havoc on scientific theories of cause and effect. If things travel faster than the speed of light, A can cause B; but then, B can also cause A. … If that happens, the concept of causality becomes ambiguous, and that would cause a great deal of trouble.” Parke’s conclusion: “Your first response is it can’t possibly be true, that they must have made a mistake!” Astrophysicist Dave Goldberg at Philadelphia’s Drexel University makes the point that “If faster-than-light neutrinos did exist, they would likely have been observed in nature before now. For example, in 1987, detectors on Earth identified neutrinos and photons, light particles, from an exploding star. Both types of particles reached our planet at almost exactly the same instant.” Goldberg sums it up this way: “If neutrinos travel faster than light by the amount the OPERA team claims, then neutrinos from that supernova should have been detected in 1984, three years before the photons. It’s possible, but unlikely.” Responding to suggestions of faulty measurement, OPERA team coordinator Antonio Ereditato said, “We are competent experimentalists, we made a measurement and we believe our measurement is sound. Now it is up to the community to scrutinize it. We are not in a hurry. We are saying, tell us what we did wrong; redo the measurement if you can. There will be all sorts of science fiction writers who will give their own opinions on what this means, but we don’t want to enter that game.” Here’s an interesting interpretation. Heinrich Paes at Dortmund University believes that it may be possible for the neutrinos to travel through hidden dimensions and shortcuts in space-time. “The extra dimension is warped in a way that particles moving through it can travel faster than particles that go through the known three dimensions of space. It’s like a shortcut through this extra dimension.
So it looks like particles are going faster than light, but actually they don’t.” From University College London, Professor Jenny Thomas remarked that if the OPERA discovery were correct, “it would overturn everything we thought we understood about relativity and the speed of light.”
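The figures quoted above can be checked with a few lines of arithmetic. The sketch below (not part of the original article) recomputes the implied neutrino speed from the rounded 450-mile baseline and the 60-nanosecond early arrival; because the inputs are rounded, the result lands near, but not exactly on, the speed quoted in the story.

```python
# Sanity-check the quoted OPERA figures: a ~450-mile baseline, a light-travel
# time of ~2.4 ms, and a 60-nanosecond early arrival. Inputs are rounded,
# so the implied speed is close to (not exactly) the article's figure.

C = 299_792_458.0             # speed of light in a vacuum, m/s
distance_m = 450 * 1609.344   # 450 miles in metres

t_light = distance_m / C      # light-travel time over the baseline
t_neutrino = t_light - 60e-9  # neutrinos arrived 60 ns early

v_neutrino = distance_m / t_neutrino
print(f"light-travel time: {t_light * 1e3:.3f} ms")      # ~2.416 ms
print(f"implied neutrino speed: {v_neutrino:,.0f} m/s")  # ~299,800,000 m/s
print(f"excess over c: {v_neutrino - C:,.0f} m/s")       # several km/s
```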
The topics covered in this chapter can be summarized as follows:

6.1 Clastic Sedimentary Rocks: Sedimentary clasts are classified based on their size, and variations in clast size have important implications for transportation and deposition. Clastic sedimentary rocks range from conglomerate to mudstone. Clast size, sorting, composition, and shape are important features that allow us to differentiate clastic rocks and understand the processes that took place during their deposition.

6.2 Chemical Sedimentary Rocks: Chemical sedimentary rocks form from ions that were transported in solution, and then converted into minerals by biological and/or chemical processes. The most common chemical rock, limestone, typically forms in shallow tropical environments, where biological activity is a very important factor. Chert and banded iron formation are deep-ocean sedimentary rocks. Evaporites form where the water of lakes and inland seas becomes supersaturated due to evaporation.

6.3 Depositional Environments and Sedimentary Basins: There is a wide range of depositional environments, both on land (glaciers, lakes, rivers, etc.) and in the ocean (deltas, reefs, shelves, and the deep-ocean floor). In order to be preserved, sediments must accumulate in long-lasting sedimentary basins, most of which form through plate-tectonic processes.

6.4 Sedimentary Structures and Fossils: The deposition of sedimentary rocks takes place according to a series of important principles, including original horizontality, superposition, and faunal succession. Sedimentary rocks can also have distinctive structures that are important in determining their depositional environments. Fossils are useful for determining the age of a rock, the depositional environment, and the climate at the time of deposition.

6.5 Groups, Formations, and Members: Sedimentary sequences are classified into groups, formations, and members so that they can be referred to easily and without confusion.

Questions for Review
- What are the minimum and maximum sizes of sand grains?
- How can you easily distinguish between a silty deposit and one that has only clay-sized material?
- What factors control the rate at which a clast settles in water?
- The material that makes up a rock such as conglomerate cannot be deposited by a slow-flowing river. Why not?
- Describe the two main processes of lithification.
- What is the difference between a lithic arenite and a lithic wacke?
- How does a feldspathic arenite differ from a quartz arenite?
- What can we say about the source area lithology and the weathering and transportation history of a sandstone that is primarily composed of rounded quartz grains?
- What is the original source of the carbon that is present within carbonate deposits such as limestone?
- What long-term environmental change on Earth led to the deposition of banded iron formations?
- Name two important terrestrial depositional environments and two important marine ones.
- What is the origin of a foreland basin, and how does it differ from a forearc basin?
- Explain the origin of (a) bedding, (b) cross-bedding, (c) graded bedding, and (d) mud cracks.
- Under what conditions is reverse graded bedding likely to form?
- What are the criteria for the application of a formation name to a series of sedimentary rocks?
- Explain why some of the Nanaimo Group formations have been divided into members, while others have not.
Bacteria are living one-celled creatures that most often reproduce by an original cell splitting in half, a process known as binary fission. This process begins once a cell has grown to be big enough — about twice its original size. There are four, or sometimes five, steps to binary fission. First, the original cell (called a parent cell) has to grow large enough to begin the fission process. In the second step, the cell duplicates its chromosome so that it has two exact copies of the cell's genetic material. These copies attach themselves to the cell's plasma membrane. Then, the cell grows yet more, which pushes the two copies of genetic material even further apart. Fourth, a new wall is built across the center of the cell, dividing it into two parts, which each contain a copy of the chromosome. This cell wall is referred to as a septum. What happens next depends on the type of microbe. Some microbes stop here and remain linked together by the septum. Other types of microbes take one more step and completely separate along the septum. Either way, both cells become completely independent adult cells whether they remain attached to each other or not.
Many aquatic insects use polarized light to find water surfaces on which they reproduce, and where their larvae live and grow. Man-made objects and structures can sometimes mimic these water surfaces by polarizing light. Moreover, in some cases they can be more attractive to aquatic insects than water itself. This effect creates “ecological traps” that can lead aquatic insect populations to decline or even go extinct. Previous studies have shown that the attractiveness of polarizing synthetic surfaces can be reduced if grids of non-polarizing lines are strategically placed on them. In his senior project, Theodore Black measured the effect of line thickness on the attractiveness of polarizing non-water surfaces. Early in the morning he would install his polarizing traps near the water stream, and late at night he would collect them. Then, for days and days, he would sort and identify insects trapped in oil under the microscope, classifying them into such poetically named groups as non-biting midges (Chironomidae), black flies (Simuliidae), caddisflies (Trichoptera), and mayflies (Ephemeroptera). This work allowed Theo to analyze and describe the effect of non-polarizing line thickness on the attractiveness of traps, which will help to protect aquatic insects from human interference. Using this new information, engineers will be able to design solar panels that are efficient, yet don’t trick aquatic insects into laying eggs on them, helping the insects avoid an evolutionary trap.
Data: Data are the facts or figures obtained from various sources.
Information: Information is the processed data.
Table: A database object comprising related data entries is known as a table. Data present in a table is displayed in the form of rows and columns.
Field: A field is a piece of information about an element, which may be a person, student or employee.
Record: A collection of related fields is called a record.
Cell: A cell is the intersection of a row and a column.
Data redundancy: Data redundancy means the repetition of data in a database.
Relational database: A database management system that stores data in the form of related tables is known as a relational database.
Database model: A database model is the manner in which the data collection is stored, managed and administered.
Relationship: A relationship is a link or association between several entities. The different types of relationship are one-to-one, one-to-many and many-to-many.
A primary key is the field that uniquely identifies each record in a table. Each record in the table must be unique, and every table has only one primary key. A primary key may be a single column or a combination of columns designated to uniquely identify all table records.
A data type specifies the type of data that a field can hold. It is a form of storage which allows entry of previously defined values that each field can store. Common examples of data types are text, number, date/time and currency.
Database: A database is a collection of data organized to serve many applications at the same time by storing and managing data so they appear to be at one location.
A relational database management system (RDBMS) is a database management system that stores data in the form of related tables.
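To make these definitions concrete, here is a minimal sketch using Python's built-in sqlite3 module. The table and column names are invented for illustration; it shows a primary key rejecting a duplicate record and a one-to-many relationship expressed through a foreign key.

```python
# Minimal sketch (Python stdlib sqlite3): a primary key uniquely
# identifies each record, and a foreign key links related tables.
# Table and column names are invented for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")

# Each student record is uniquely identified by its primary key.
con.execute("""CREATE TABLE student (
    student_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL
)""")

# One student -> many marks: a one-to-many relationship.
con.execute("""CREATE TABLE marks (
    mark_id    INTEGER PRIMARY KEY,
    student_id INTEGER NOT NULL REFERENCES student(student_id),
    subject    TEXT,
    score      INTEGER
)""")

con.execute("INSERT INTO student VALUES (1, 'Asha')")
con.execute("INSERT INTO marks VALUES (NULL, 1, 'Maths', 91)")

# The primary key rejects a duplicate record:
try:
    con.execute("INSERT INTO student VALUES (1, 'Duplicate')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # UNIQUE constraint failed: student.student_id
```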
Fatal reactions to sunlight may have triggered a protective shift away from pale skin Common forms of skin cancer were Stone Age killers that prompted the evolution of black skin among human ancestors in Africa, a controversial new analysis concludes. Evidence gathered over the last 40 years shows that albinos living in tropical parts of Africa and Central America, where they are constantly exposed to high levels of ultraviolet radiation from the sun, frequently develop skin cancer and die young from it, says biologist Mel Greaves of the Institute of Cancer Research in London. Early members of the Homo genus in Africa were probably pale-skinned and spent much of their days hunting and foraging in direct sunlight, Greaves asserts. Researchers generally agree that the loss of most body hair helped hominids control body temperature in tropical savannas.
Fluorescence is a property possessed by certain materials, known as fluorophores, by which these materials absorb short-wavelength incident radiation and emit longer-wavelength radiation. Fluorescence occurs when high-energy incident radiation is absorbed by a fluorophore, causing the material to enter a vibrationally and electronically excited state. Upon relaxation to the ground state, a lower-energy photon is emitted from the material. Materials may possess fluorescent properties by virtue of their atomic, molecular, or macro-molecular structure. The shift in wavelength between light from the excitation source and emitted light is due to loss of energy (mostly in the form of heat). Unless additional energy is introduced to the system from an external source, the emitted photon will always be lower energy (longer wavelength) than the excitation source. For example, GFP (Green Fluorescent Protein) is excited by short-wavelength blue light (λmax ~488 nm) and emits longer-wavelength green light (λmax ~519 nm). This phenomenon is known as Stokes shift.

Fluorescence: at work in the life sciences

Fluorescence has always played a role in the natural world. As a byproduct of their molecular composition, numerous naturally occurring minerals will fluoresce in the visible spectrum when exposed to ultraviolet light. Certain species of marine animals, like the jellyfish Aequorea victoria, also produce fluorescent proteins that emit light in the visual spectrum. The stunning and magnificent visages produced by some of the more colorful forms of naturally occurring fluorescence have captivated countless generations of researchers and naturalists. Innovative biologists and biochemists have also developed a number of techniques that allow them to leverage this impressive phenomenon and employ it as a tool in nearly every avenue of biological research.

Fluorescent dyes and proteins

Today, fluorophores are commonly used to label biological materials in nearly every life-science discipline. Fluorophores used in the life sciences commonly fall into one of three categories: fluorescent proteins, small-molecule fluorescent dyes, and quantum dots. The three classes of fluorescent probes possess similar properties, and many fluorescent proteins have been engineered to share a nearly identical excitation and emission profile with commercially available dyes (e.g., mutations to green fluorescent protein produced eGFP, which is almost spectrally identical to the dye FITC). However, they are often applied in different scenarios. Fluorescent proteins like GFP or RFP are commonly used to label the protein product of a specific transgene. Using molecular cloning techniques, the coding sequence for a fluorescent protein is coupled to the coding sequence for the researcher's protein of interest. The resulting "fusion protein" is a combined unit containing both the researcher's target and the fluorescent protein. This powerful technique gives a researcher the opportunity to track the sub-cellular distribution and movement of their protein of interest in vivo. Small fluorescent dyes, on the other hand, are most often used in in vitro experiments. Some dyes like DAPI, DiI, or Ethidium Bromide will associate with biologically relevant molecules or structures on their own, allowing them to be used independently to label these structures.
Other dyes like Fluorescein, Cyanine, Rhodamine, or the wide variety of Alexa Fluor™ dyes are commonly used as conjugates for primary or secondary antibodies in immunolabeling experiments. Quantum dots are the most recent introduction. While these unique, synthetic nanocrystals display exceptional promise in the life-science world, they have not yet gathered wide-scale acceptance and use on the same level as fluorescent dyes and proteins. Quantum dots will be discussed in more extensive detail in our upcoming article, Advanced Fluorescence Techniques and Tools. Advancements in chemical synthesis and a consistent push for more flexible and efficient fluorophores have resulted in the development of hundreds of reactive fluorescent dyes that span the UV, visible, and infrared spectra. These dyes have often been chemically optimized for specific applications, or for use with specific instrumentation. While the development of new dyes and reagents has significantly expanded the role and flexibility of the fluorophore in life-science research, the sheer number of options often leaves a researcher perplexed as to which specific fluorophore is best suited for a given application. The matter can become particularly complex when choosing a fluorophore for a double-labeling experiment, which requires the use of multiple compatible fluorescent molecules. An Introduction to Fluorescence (Part 2) covers the specific intricacies of choosing a fluorophore for an experiment, including the most common criteria for consideration.
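The Stokes shift described earlier can be checked numerically: a photon's energy is E = hc/λ, so the 519 nm photon GFP emits must carry less energy than the 488 nm photon that excites it. Below is a minimal sketch using the GFP wavelengths quoted above.

```python
# Stokes shift check: emitted photons are longer-wavelength, and therefore
# lower-energy, than the excitation photons (E = h * c / wavelength).

H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron-volt

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV for a given wavelength in nanometres."""
    return H * C / (wavelength_nm * 1e-9) / EV

excitation = photon_energy_ev(488)  # GFP excitation maximum (blue)
emission = photon_energy_ev(519)    # GFP emission maximum (green)
print(f"excitation: {excitation:.3f} eV, emission: {emission:.3f} eV")
print(f"energy lost to the Stokes shift: {excitation - emission:.3f} eV")
# excitation: 2.541 eV, emission: 2.389 eV -> ~0.15 eV lost, mostly as heat
```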
Computer networks can be classified on the basis of various properties, including the network technology they use. There are two types of computer networks in this category.
1. Broadcast Networks. In broadcast networks, a single communication channel is shared among all the computers of the network. This means all data transportation occurs through this shared channel. The data is transmitted in the form of packets. The packets transmitted by one computer are received by all others in the network. The destination of a packet is specified by coding the address of the destination computer in the address field of the packet header. On receiving a packet, every computer checks whether it is intended for it or not. If the packet is intended for it, it is processed; otherwise, it is discarded. There is another form of broadcast network in which the packets transmitted by a computer are received by a particular group of computers. This is called "multicasting".
2. Point-to-Point or Store-and-Forward Networks. This is the other type of network on the basis of transmission technology. Store-and-forward networks consist of several interconnected computers and networking devices. The data is transmitted in the form of packets. Each packet has its own source and destination address. To go from a source to a destination, a packet on this type of network may first have to visit one or more intermediate devices or computers that are generally called "routers". A packet is stored on an intermediate router until the output line is free. When the output line is free, it is forwarded to the next router. Routing algorithms are used to find a path from the source to the destination, and they play a very important role in this type of network.
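To illustrate how a routing algorithm picks a path from source to destination, here is a minimal sketch of Dijkstra's shortest-path algorithm, one classic choice, over an invented four-router topology (the node names and link costs are made up for the example).

```python
# Minimal sketch of route finding in a store-and-forward network:
# Dijkstra's shortest-path algorithm over an invented router topology.
import heapq

def shortest_route(graph, src, dst):
    """graph: {node: {neighbour: link_cost}}; returns (total_cost, path)."""
    queue = [(0, src, [src])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, link_cost in graph[node].items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + link_cost, neighbour, path + [neighbour]))
    return float("inf"), []

routers = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 1, "D": 5},
    "C": {"A": 4, "B": 1, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(shortest_route(routers, "A", "D"))  # (3, ['A', 'B', 'C', 'D'])
```

In a real network each router runs such an algorithm over link-state or distance information it exchanges with its neighbours, then forwards each stored packet along the next hop of the computed path once the output line is free.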
According to the creation-Flood Ice Age model, glacial maximum was reached when the ocean temperature cooled to an average of 50°F (10°C). At this ocean temperature, the net melting of the glaciers would be slow. Precipitation would still be substantial, but would decrease with time. As the oceans continued to cool, the amount of water evaporating into the atmosphere would continue to decrease in proportion to the ocean’s surface temperature. Accelerated melting would mark the end of the Ice Age.

Warmer summers, colder winters

As the Ice Age waned, volcanism gradually decreased as the earth adjusted to the new configuration of land and water caused by the Flood. Less gas and ash spewed into the stratosphere, and more sunlight warmed the summers. Summers, of course, would not be as warm as they are today in the mid and high latitudes because the nearby ice sheets and increased sea ice would keep the land somewhat cool. A decrease in volcanic activity would also affect the tropics. Temperatures there would warm quite quickly and would soon approximate today’s climate. Due to the slower melting of the ice sheets at higher latitudes, the tropical-to-polar temperature difference would be greater than it is today. This difference in atmospheric temperature is very important for understanding the demise of the woolly mammoth and other animals, because such a temperature difference would cause strong, windy, dry storms. At the same time as the decreased volcanism, the ocean water would continue cooling and sea ice would gradually develop in the polar latitudes. These two factors would result in a drying atmosphere in this phase of the Ice Age. Sea ice would form quickly because meltwater from melting ice sheets would flow out over the ocean water at mid and high latitudes. Fresh water has a tendency to float on the denser salt water, making it easier to form ice. Sea ice, especially with fresh snow on top, would reinforce the winter cooling trend by reflecting sunlight back into space. It would also stop the heat of the warmer water from entering the atmosphere. Thus, sea ice would increase the cooling of the atmosphere, which would further increase ocean cooling, sort of like a chain reaction. The net effect of this climatic change would be that winters would become quite cold and the summers mild as the ice sheets melted. Winters would be significantly colder than today, and summers warmer than during glaciation, but not as warm as today. The atmosphere would also become drier and drier. The climate over mid- and high-latitude continents would become more continental, with colder winters and warmer summers. During the earlier phase, when the ice was building up, the climate was equable, having little seasonal contrast, but during deglaciation, snowfall on the ice sheets would be light and would easily melt by the time summer arrived. The winter cooling and drying would continue until the ice sheets melted. Figure 10.1 shows the generally expected temperature trend through the Ice Age to the present for the mid- and high-latitude continents of the Northern Hemisphere. Such colder winters and summers than today at the end of the Ice Age would also affect the ocean temperatures. It is likely that for a while the average ocean temperature cooled below its present average of 39°F (4°C) (see figure 9.1).

How fast would the ice sheets melt?
The summer melting rate for snow and ice can be estimated by using a heat balance equation for the snow or ice cover.1 It works similarly to the heat balance equations for the atmosphere and ocean: the heating and cooling terms are added up, with the difference being the melting rate (figure 10.2). This equation is easy to apply and is often used to estimate snowmelt today. The only difficulty in applying the equation to the melting of an ice sheet lies in estimating the summer temperatures of the atmosphere near and over the ice sheet. Here is where I made several reasonable assumptions. First, I assumed the atmosphere above the ice sheet was about 18°F (10°C) cooler than it is today. This seems reasonable from climate simulations that are run without volcanic material in the stratosphere. For the calculation, I used temperature and sunshine data from central Michigan. Michigan was chosen because it would be typical of the periphery of the Laurentide ice sheet. I assumed winters during deglaciation would be so dry and cold that little new snow would accumulate, and that the snow that did accumulate would easily melt by May 1. I also assumed melting stopped on September 30, much earlier than today. These seem like reasonable and conservative estimates of the melting season; the date of May 1 even allows for the top of the ice sheet to be "primed," warmed to 32°F (0°C), so that all the meltwater for the five warm months would flow off the ice sheet and not be refrozen within it. As with the previous equations, I used minimum and maximum values for the terms in the equation. One of the most important variables in the snowmelt equation is the reflectivity of the snow, which varies from about 80 percent of the sunlight for fresh, cold snow to 40 percent or lower for wet snow. A reflectivity of 40 percent is reached after several weeks of melting. If ice is exposed at the surface, the reflectivity is further reduced, to between 20 and 40 percent. In the low-altitude glaciers of Norway, the reflectivity in the melt zone has been observed to fall as low as 28 percent. So a reflectivity of 40 percent was taken as the maximum value for the periphery of the ice sheet during summer melting. Reflectivity can be lowered even more if dust from dry storms is added to the ice surface. The end of the Ice Age would bring huge dust storms, especially just south of the ice sheets. These storms would develop from the large temperature differences between polar latitudes and the subtropics. So the ice sheet surface along the periphery most likely accumulated a large quantity of dust. After each melt season, the dust would concentrate on the snow or ice surface. Figure 10.3a–c shows three snapshots of a pile of snow after a snowstorm: as the snow melted, the debris within it became more and more concentrated on the surface, so more sunlight was absorbed and less reflected. The reflectivity of a permanent snow cover in Japan was observed to drop as low as 15 percent in late summer due to dust from air pollution. So a 15 percent reflectivity, representing a dusty snow or ice surface, was used as the minimum. Plugging the minimum and maximum estimates for reflectivity and the other variables into the snowmelt equation, I obtained minimum and maximum estimates of melting. I averaged the two extreme melt rates for a best estimate and ended up with a melting rate of about 30 feet per year (10 m/yr).
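To make the procedure concrete, here is a minimal sketch of a snow/ice heat-balance estimate in the spirit of the text. The incoming solar flux, the net non-solar term, and the melt-season length are editorial assumptions chosen only to illustrate the min/max averaging step; they are not the author's actual inputs. The physical constants (density of ice, latent heat of fusion) are standard values.

```python
# Energy surplus at the ice surface is converted to melt via the
# latent heat of fusion: melt depth = net flux * time / (rho * L).

RHO_ICE = 917.0       # kg/m^3, density of glacier ice
L_FUSION = 334_000.0  # J/kg, latent heat of fusion

def melt_per_season(solar_in, albedo, other_fluxes, seconds):
    """Metres of ice melted over `seconds` of melt season.

    solar_in     -- mean incoming shortwave flux, W/m^2 (assumed)
    albedo       -- fraction reflected (0.15 dusty ice .. 0.40 wet snow)
    other_fluxes -- net longwave + turbulent heat terms, W/m^2 (assumed)
    """
    net = solar_in * (1.0 - albedo) + other_fluxes
    return max(net, 0.0) * seconds / (RHO_ICE * L_FUSION)

SEASON = 153 * 24 * 3600  # May 1 through September 30, as assumed above

lo = melt_per_season(300.0, 0.40, 20.0, SEASON)  # clean, wet snow (min melt)
hi = melt_per_season(300.0, 0.15, 20.0, SEASON)  # dusty ice (max melt)
best = (lo + hi) / 2.0

print(f"melt: {lo:.1f}-{hi:.1f} m per season; best estimate {best:.1f} m/yr")
print(f"700 m of ice at that rate melts in roughly {700 / best:.0f} years")
```

With these illustrative numbers the best estimate lands near 10 m/yr, the same order as the figure quoted above, and 700 m of ice disappears in roughly 70 years, comparable to the 75 years computed next.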
According to this estimate, if central Michigan had an average ice depth of 2,300 feet (700 m), the ice would melt in only 75 years! Farther north, the amount of sunlight is, of course, less, and the snow surface was probably less dusty, so the ice would melt more slowly in the interior of the ice sheets. If the ice were of average thickness in the interior, it would take in the neighborhood of 200 years for this ice to disappear. The melting rates for other ice sheets and mountain ice caps would be expected to correspond to those of the Laurentide ice sheet, so the total time for deglaciation would also be in the neighborhood of 200 years. This is surprisingly fast: the melting would be catastrophic! It is far less time than uniformitarian estimates allow. The Flood model rate of 30 feet/year (10 m/yr) at the periphery compares very closely to modern measurements in the cool, commonly cloudy melting zones of glaciers in Alaska, Iceland, and Norway. Sugden and John2 state that glacial melting can be rapid, as indicated by: … many mountaineers whose tents in the ablation [melting] areas of glaciers may rest precariously on pedestals of ice after only a few days. Present-day glaciers do not disappear at this melt rate because they are nourished by a huge amount of mountain snow in the winter that continually flows into the melting zone. The question immediately comes up as to why uniformitarian scientists believe that ice sheets took many thousands of years to melt. The reason, like many aspects of Ice Age research, lies in their dating methods and theories, especially the astronomical theory of the Ice Age, which vastly stretches out every physical process. Mainstream scientists rarely use equations for snow and ice melt; they depend instead on their assumption of a long time period. All indications are that a melting rate of 30 feet/year (10 m/yr) of ice is reasonable along the periphery. Such a melting rate in a cool Ice Age climate has ominous consequences for theories and models that depend upon the uniformitarian assumption of present processes. At such a melting rate, ice sheets could not even get started within a uniformitarian climate, even if a mechanism for cold enough temperatures could be found. An Ice Age simulation by Rind, Peteet, and Kukla3 started by placing 30 feet (about 10 m) of ice everywhere the ice sheet covered. Then they ran their Ice Age climate model, fully expecting the ice to grow with the higher reflectivity that the snow and ice would provide in the model. Instead of growing, the 30 feet of ice melted everywhere within 5 years! The main reason is that summer sunshine is very powerful at mid and high latitudes. This experiment makes one wonder how an ice sheet could develop within the uniformitarian climate. We touched on this difficulty in chapter 3. Putting it all together, I conclude that it took about 500 years for the Ice Age to reach its maximum and 200 years for the ice to melt, a total of about 700 years from start to finish, a timescale much different from uniformitarian theories. Given the unique conditions that existed after a worldwide Flood, I have also concluded that there was only one Ice Age. It was indeed a rapid, even a catastrophic, Ice Age. It could easily have occurred between the time of the Genesis flood and the time historical records were first written in northern Europe. Is there evidence for catastrophic melting at the end of the Ice Age?
Scientists have found an increasing amount of evidence of catastrophic flooding during deglaciation. One example is the Lake Missoula flood, which was rejected for over 40 years because it seemed too "biblical" (see Catastrophic deglaciation flooding by glacial Lake Missoula later in this chapter). It was finally accepted in the 1960s, since the evidence for it is overwhelming.4 With the acceptance of the Lake Missoula flood, geologists have found strong evidence for catastrophic Ice Age floods in other areas of the Northern Hemisphere.5 A flood on a par with the Lake Missoula flood was discovered coming out of the Altay Mountains of south-central Siberia.6 A glacier during the Ice Age had enclosed a large lake just over 1,600 feet (485 m) deep. The ice dam failed, and water about 1,500 feet (450 m) deep flowed down the Chuja River Valley and eventually into the Ob River of western Siberia. Another Ice Age flood was the Bonneville flood, which occurred when ancient Lake Bonneville, the largest Ice Age lake in the southwest United States, dropped about 300 feet (about 100 m) in several weeks, initiating catastrophic flooding down the Snake River of Idaho. Among the more interesting, but speculative, Ice Age floods are the subglacial (under the ice) catastrophic bursts postulated by John Shaw and collaborators.8 Shaw, in his most radical suggestion, postulates a large lake in the vicinity of Hudson Bay that discharged about 50 times more water than glacial Lake Missoula (figure 10.4). One major pathway for the subglacial flood started in the Northwest Territories of Canada, passed southwest through northern Saskatchewan, ran almost the length of Alberta, and ended in northern Montana.9 A second major pathway is believed to have started around southern Hudson Bay or Labrador and flowed south into southern Ontario, the eastern Great Lakes, and New York. This latter subglacial flood is believed to have carved the Finger Lakes of New York. Of course, Shaw's hypothesis has generated considerable controversy, especially the suggestion of a huge lake in the vicinity of Hudson Bay. After reviewing most of the evidence, I have concluded that his case is strong. If he is correct or even partially correct, the current uniformitarian Ice Age paradigm needs to be almost totally rewritten to allow for a gigantic lake near Hudson Bay. He suggests that the lake had to exist near the peak of the Ice Age, because the flooding generally occurred when the ice boundary was close to its maximum extension. Such a large lake and catastrophic flooding at a time when there was supposed to be a huge ice sheet over Canada is uniformitarian Ice Age heresy, at least currently. Mounting evidence is convincing many mainstream scientists that the Ice Age was very different from uniformitarian expectations. Catastrophic deglaciation flooding by glacial Lake Missoula. Geologist J. Harlen Bretz, while examining the geology of eastern Washington in the 1920s, discovered a most unusual phenomenon: huge, deep canyons etched into hard lava. This caused him to surmise that only a flood of heretofore unheard-of proportions could have formed them. The Grand Coulee had been gouged 900 feet (275 m) deep and 50 miles (80 km) long. The flood carved out the canyon where Palouse Falls is located in southeast Washington when water overtopped a lava ridge, forming a canyon six miles (10 km) long and 500 feet (150 m) deep. At first, Bretz did not understand where all this water could have originated. At the same time, J.T.
Pardee postulated that a large lake existed in western Montana that was dammed by a lobe of the Cordilleran ice sheet in northern Idaho. Bretz finally made the connection and dubbed it the Lake Missoula or Spokane flood. Figure 10.5 shows glacial Lake Missoula in western Montana and the path of the Lake Missoula flood through the Pacific Northwest. Geologists of that era were not prepared to hear of such a catastrophe. It seemed too much like the biblical flood against which they had a strong bias, so Bretz’s idea was severely challenged. For 40 years the geological establishment criticized his idea and made up other theories that, today, seem farfetched. Finally, in the 1960s, with the advent of aerial photography and better geological work, Bretz’s “outrageous hypothesis” was verified. At the peak of the Ice Age, thick ice filled the Lake Pend Oreille River Valley in northern Idaho blocking the Clark Fork River. Meltwater from the ice flooded the valleys of western Montana, gradually filling them until they could hold no more. It had risen to about 4,200 feet (1,280 m) above sea level, based on abundant shorelines observed in the valleys of western Montana, most notably the hills east and northeast of Missoula (figure 10.6). The water depth was 2,000 feet (600 m) at the ice dam. The lake contained 540 cubic miles (2,200 cubic km) of water, half the volume of present day Lake Michigan. Glacial Lake Missoula burst through its ice dam, probably in a matter of hours, and roared over 60 mph (30 m/sec) in places through eastern Washington into the Columbia Gorge and emptied into the Pacific Ocean. It was 450 feet (135 m) deep when it rushed over Spokane, Washington. It eroded 50 cubic miles (200 cubic km) of hard lava and silt from eastern Washington. Scoured-out lava over eastern Washington resembles a large braided stream from satellite pictures, although the stream had to have been 100 miles (160 km) wide! Much of the basalt rock has been rolled into huge gravel bars that are commonplace over the very dry scablands of eastern Washington. They look like normal gravel bars found in rivers, but on a stupendous scale. One near the Columbia River south of Vantage, Washington, is 20 miles (32 km) long and about 100 feet (30 m) high. Another bar is 300 feet (90 m) high and fills up portions of the Snake River Valley (figure 10.7). The rushing water scoured the lava so badly that it formed the lava badlands near Moses Lake, Washington. As the Flood water came to the narrow constriction through the Horse Heaven Hills, called Wallula Gap, it backed up and formed a lake 800 feet (245 m) deep. From there, the waters rushed up the surrounding valleys, including the Walla Walla and Yakima River Valleys. The rushing water formed a series of repeating beds of sand and silt called rhythmites. Bretz noticed these unusual deposits lying on top of lava flows and included them in his evidence for the Lake Missoula flood. The best outcrop is found in Burlingame Canyon in the Walla Walla Valley (figure 10.8). The canyon was cut in about one week by water diverted from an irrigation canal, exposing the series of rhythmites. Thirty-nine of these sand and silt couplets have been counted and have inspired several theories on how they formed during the Lake Missoula flood. As the muddy water churned down the Columbia River Gorge, the flood enlarged the gorge between The Dalles and Portland, Oregon. 
Leaving the gorge, it spread out in the wide lowlands of the Willamette Valley, depositing a layer of silt rhythmites about 50 feet (15 m) thick and laying a huge gravel bar in the Portland area that is 400 feet (120 m) deep and covers 200 square miles (500 square km). The water continued racing toward the Pacific Ocean where it carved a small canyon in the continental slope. It took about a week for Lake Missoula to empty. Strewn all along the flood path are large erratic boulders that could have only been floated in by icebergs. Most of the boulders are granite from outcrops in northern Idaho and northern Washington. One found in the central Willamette Valley attests to the power of icebergs to transport boulders during the Lake Missoula flood. Originally it weighed 160 tons (145,000 kg), before tourists broke off pieces for souvenirs. Today the rock is only 90 tons (82,000 kg). A rock of this size and composition could not have been rolled into place by water. It is composed of argillite, a slightly metamorphosed shale that is too delicate to take the rigors of water transport. Its nearest possible source is in extreme northeastern Washington. Argillite is also abundant in northern Idaho and western Montana. The boulder had to have been transported at least 500 miles (800 km). Ice rafting during the Lake Missoula flood is the only reasonable explanation. Geologists today overwhelmingly accept the Lake Missoula flood. Before, they had trouble believing there was a flood of these proportions; later many debated how many such floods took place during the Ice Age. In the 1980s, opinion swayed from one or a few floods to anywhere between 40 and 100. The rhythmite layers found in Burlingame Canyon have played a key role in this controversy. A recent analysis of most of the data has revealed that there probably was just one Lake Missoula flood, similar to what Bretz originally believed.10
The expression "basic ethical principles" refers to those general judgments that serve as a justification for particular ethical prescriptions and evaluations of human actions. Three basic principles, among those generally accepted in our cultural tradition, are particularly relevant to the ethics of research involving human subjects: the principles of respect of persons, beneficence and justice. These are based on the Belmont Report. 1. Respect for Persons. -- Respect for persons incorporates at least two ethical convictions: first, that individuals should be treated as autonomous agents, and second, that persons with diminished autonomy are entitled to protection. The principle of respect for persons thus divides into two separate moral requirements: the requirement to acknowledge autonomy and the requirement to protect those with diminished autonomy. In most cases of research involving human subjects, respect for persons demands that subjects enter into the research voluntarily and with adequate information. To respect autonomy is to give weight to autonomous persons' considered opinions and choices while refraining from obstructing their actions unless they are clearly detrimental to others. Respect for the immature and the incapacitated may require protecting them as they mature or while they are incapacitated. Some persons are in need of extensive protection. The extent of protection afforded should depend upon the risk of harm and the likelihood of benefit. The judgment that any individual lacks autonomy should be periodically reevaluated and will vary in different situations. 2. Beneficence. -- Persons are treated in an ethical manner not only by respecting their decisions and protecting them from harm, but also by making efforts to secure their well-being. Such treatment falls under the principle of beneficence. Two general rules have been formulated as complementary expressions of beneficent actions in this sense: (1) do not harm and (2) maximize possible benefits and minimize possible harms. As with all hard cases, the different claims covered by the principle of beneficence may come into conflict and force difficult choices. 3. Justice. -- Who ought to receive the benefits of research and bear its burdens? This is a question of justice, in the sense of “fairness in distribution” or “what is deserved.” An injustice occurs when some benefit to which a person is entitled is denied without good reason or when some burden is imposed unduly. Another way of conceiving the principle of justice is that equals ought to be treated equally.
What is E. coli? E. coli (Escherichia coli) is a group of bacterial strains known for many decades to cause illness in humans. In 1885, German bacteriologist Theodor Escherich isolated the first strain. Today, hundreds of E. coli strains are known. The E. coli bacteria, also known as fecal coliform bacteria because they live in intestinal contents, including feces, are a genetic relative of the dangerous salmonella pathogen. The genera Escherichia and Salmonella diverged around 102 million years ago, at the time when mammals and birds diverged genetically; mammals and birds are the host species of Escherichia and Salmonella, respectively. Salmonella was also discovered in 1885, the same year Escherich isolated E. coli. E. coli O157:H7, E. coli O104:H4, and other deadly strains belong to a family of bacteria that has evolved since the 1960s, when scientists believe E. coli and another bacterium, shigella, met and swapped genes. This created a form of E. coli that secretes the dangerous shiga toxin. Some strains produce enzymes that can destroy many antibiotics, including all penicillins. None of these strains affects the taste, smell, or appearance of food. The E. coli strain most commonly linked with foodborne outbreaks of illness is known as Shiga toxin-producing E. coli O157:H7. The federal Centers for Disease Control and Prevention estimates that Shiga toxin-producing E. coli causes hundreds of thousands of poisonings each year in the United States, with thousands of hospitalizations and scores of deaths. Hereafter, we will use the term "E. coli" to refer solely to the most commonly harmful strains of Escherichia coli bacteria, most specifically Shiga toxin-producing E. coli O157:H7 and Shiga toxin-producing E. coli O104:H4. In the mid-1990s, the federal government declared E. coli "an adulterant" in restaurant-served beef, putting E. coli on the same illegal footing as hazardous chemicals or foreign objects in food. After that, it was 7 years until E. coli presence in U.S. ground beef peaked. Since then, it is down some 80 percent. However, during the same time, E. coli rates were rising in other food commodities, such as spinach and lettuce, according to the USDA Food Safety and Inspection Service.
This experiment uses ESA’s Biolab facility in the Columbus laboratory on the International Space Station. Flying to space is harmful to living beings: on a cellular level, radiation takes its toll, while weightlessness seems to impair immune systems. Triplelux is a biology experiment that aims to help explain this impairment of crewmembers’ immune systems at a cellular level, by clearly separating the effects of microgravity from other factors in spaceflight. Cells from the immune system called leukocytes absorb bacteria as a first line of defence against infections. In this process, called phagocytosis, the leukocytes envelop a foreign body and then zap it with oxygen to finish it off. In this experiment, leukocytes will be observed as they ingest a safer substitute for bacteria called zymosan. Triplelux looks at cells from different sources, rat leukocytes and haemocyte immune cells from invertebrates, specifically blue mussels, and exposes them to three differing scenarios: normal spaceflight, spaceflight at normal gravity levels, and a ground control on Earth. ESA’s Biolab facility is used to spin the samples while in space so that they feel the equivalent force of gravity. After 90 minutes the samples are frozen at –80°C in one of the Space Station’s scientific freezers while they await their trip back to Earth for analysis. Triplelux-B is the first part of this experiment series and will analyse haemocyte cells from the common blue mussel. Running this experiment should show once and for all whether it is microgravity, radiation, or a combination of these factors that influences this aspect of immune systems during spaceflight. The findings will help to safeguard astronauts’ health, but also people on Earth with similarly weakened immune systems, such as sufferers of chronic granulomatous disease. Seedling Growth 2. Most plants need light to grow, but plants on the International Space Station must adapt to living under artificial light. Finding alternatives to sunlight is important because more and more countries rely on greenhouses for fresh produce. This experiment analyses how Arabidopsis thaliana reacts to a combination of red light and microgravity. Arabidopsis thaliana is a common plant found all over Europe, Asia, and Africa. It was the first plant to have its entire DNA sequenced, so biologists know this species well. Previous ESA research on the Space Station showed that plants can sense gravity at very low levels. This experiment introduces an extra element, red light, and aims to discover how it will influence plant growth. Better understanding of this plant has increased our knowledge of plants in general. This experiment in space will further help us understand how the plant reacts to light and gravity, which will improve our crops in space and on Earth.
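The 1 g in-flight control comes from spinning the samples on Biolab's centrifuge so that centripetal acceleration stands in for gravity. A back-of-envelope sketch, assuming a hypothetical rotor radius (Biolab's real geometry may differ):

```python
import math

g = 9.81   # m/s^2, target acceleration (1 g)
r = 0.30   # m, assumed rotor radius; illustrative only

# Centripetal acceleration a = omega^2 * r, so omega = sqrt(a / r).
omega = math.sqrt(g / r)
rpm = omega * 60.0 / (2.0 * math.pi)
print(f"omega = {omega:.2f} rad/s, i.e. about {rpm:.0f} rpm for 1 g")
```

A smaller radius requires a proportionally faster spin, which is why sample-level centrifuges run at tens of rpm while a large rotating habitat would need only a few.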
Measures of dispersion are the quantities that characterize the ‘spread’ of the data, such as range, inter–quartile range, or standard deviation. The range R is the difference between the maximum and minimum value of Xi. This measure is unstable since it depends upon the two extreme values of the data and not the entire set. The extreme values can result from exceptional observations, but the range is useful for showing the extent (or limits) of the data. In order to reduce the influence of the extreme values, the inter–quartile range (IQR) or inter–decile range is often used as an indicator of the dispersion of the data. Inter–quartile range = q3 – q1. Inter–decile range = d9 – d1. Standard deviation (denoted by s or s.d.) is the root mean square of the deviations from the arithmetic mean; it indicates the average distance of the observations from the mean of the data set. To get a better estimate of the standard deviation of the population (denoted by σ), the standard deviation is often computed with n – 1 instead of n in the denominator. However, for large values of n (n ≥ 30) there is practically no difference between the two definitions. Variance is the square of the standard deviation. The standard deviation is an absolute measure of deviation that expresses variation in the same units as the original data. The coefficient of variation (CV) is a relative measure that indicates the magnitude of variation relative to the magnitude of the mean. It is computed as CV = (s / x̄) × 100%, i.e., the standard deviation expressed as a percentage of the mean.
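A small sketch, using only the Python standard library, computing the measures defined above; the sample data are invented for illustration:

```python
import statistics

x = [12, 15, 9, 20, 14, 11, 18, 16, 13, 22]

r = max(x) - min(x)                   # range R
q = statistics.quantiles(x, n=4)      # quartiles q1, q2, q3
iqr = q[2] - q[0]                     # inter-quartile range q3 - q1
d = statistics.quantiles(x, n=10)     # deciles d1 .. d9
idr = d[8] - d[0]                     # inter-decile range d9 - d1
s = statistics.stdev(x)               # sample s.d. (n - 1 denominator)
mean = statistics.mean(x)
cv = s / mean * 100                   # coefficient of variation, in %

print(f"R={r}, IQR={iqr:.1f}, IDR={idr:.1f}, s={s:.2f}, CV={cv:.1f}%")
```

Note that `statistics.stdev` already uses the n – 1 denominator described above; `statistics.pstdev` gives the population version with n in the denominator.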
Degree of Comparison. In English, degree of comparison is used when we compare one person or one thing with another. There are three degrees of comparison: 1. Positive degree 2. Comparative degree 3. Superlative degree. 1. Positive degree: The positive degree is the simplest form of the adjective. It is used when we talk about only one person or one thing. For example: (a) This house is big. (b) You are intelligent. (c) This book is small. 2. Comparative degree: The comparative degree is used when we compare two persons or two things. For example: (a) This house is bigger than that. (b) You are more intelligent than Ramesh. (c) The moon is smaller than the earth. 3. Superlative degree: The superlative degree is used when more than two persons or things are compared. For example: (a) This is the biggest house in my colony. (b) You are the tallest boy in the class. (c) This book is the smallest of all. Things to remember: 1. Adjectives that express shape or material or time or the highest or lowest degree of quality cannot be compared. For example: round, square, earthen, golden, daily, annual, perfect, extreme, eternal, chief, complete, supreme, unique, universal, dead, empty, etc. 2. The comparatives inferior, prior, superior, senior, junior are followed by “to”, not by “than”. For example: · Rahil is senior to you. · She is superior to you. · He is junior to me. 3. The comparatives elder, former, upper, outer, utter, latter, inner, etc. are neither followed by “than” nor by “to”. They are followed by “of” when selection is implied. For example: · Surjeet is a member of the upper of the two chambers. · Aashutosh is a former member of this school. · She got the upper hand. 4. Superlatives are generally preceded by “the” and followed by “of”, except when they are qualified by possessive pronouns or when they qualify the vocative case. For example: · Kiran is the cleverest of all the sisters. · Rakhi is my closest friend. · Dearest friend, please help me. 5. Don’t use the comparative degree after the word “comparatively”, which in itself expresses the idea of comparison. For example: · I am comparatively well (not better). · She is comparatively rich. 6. Adjectives of different degrees cannot be joined by “and”. For example: Rajan is very tall and the strongest boy in the class. (Incorrect) Rajan is the tallest and strongest boy in the class. (Correct) 7. Double comparatives and superlatives should not be used. For example: · She is more happier than me. (Incorrect) · She is happier than me. (Correct)
Learning Accounting and Bookkeeping Basics. Accounting System Basics: a Quick Primer. In this article, we give a brief history of accounting and explain what it is. We then explain exactly what an accounting or bookkeeping system is and what one is made up of. The process of accounting is 500 years old. It was begun by Fra Luca Pacioli, aka "The Father of Modern Day Accounting." In fact, the system has changed so little that Pacioli could teach accounting today. The change has been in the tools we use: in the 1400s they used quill pens and parchment paper; today we use computers. However, computers are simply efficient calculators that emulate the accounting process we did by hand until late in the 20th century. To understand the accounting process you will need to understand the books of accounting. A complete set of books for any business will likely consist of three parts: journals, a general ledger, and subsidiary ledgers. Each part provides information that will be valuable to a business owner. Journals (aka, The Original Books of Entry). The journals are the data entry tool of accounting and the front door to the accounting system. Each time we enter a transaction into an accounting system, it goes into one of seven different journals. These seven journals are set up to efficiently record one or more types of transactions. For instance, the Sales Journal records sales transactions; the Cash Disbursements Journal records checks; and so on. All entries to the accounting system must be entered into a journal before being processed further. The owner will use these books to locate individual entries. The General Ledger (aka, The Final Book of Entry). Every company should have one general ledger. This is the book where the financial data is organized into accounts. These accounts are listed in three primary categories: Assets, Liabilities, and Capital. Once a transaction is entered into a journal, the entries are organized by account and transferred into the General Ledger. After all journals have been posted to the General Ledger, each account balance is determined (i.e., bank account, accounts receivable, etc.) so that the financial statements can be prepared for the owner. Subsidiary Ledgers (aka, The Books of Analysis). Some accounts in the General Ledger require additional information that the General Ledger does not supply. For instance, the General Ledger provides only a total of all customers that owe money to the company in the Accounts Receivable account. Obviously, in order to account for each customer's balance, the company needs to know who each customer is and how much each customer owes. This detail comes from the Subsidiary Ledger. So, in the General Ledger we might show a total balance of $6,000 in Accounts Receivable, while in the Subsidiary Ledger we might have a page for each customer showing their name, address, and the balance that individual customer owes the company. When we add up all the individual customer accounts, we find they total $6,000. Certainly, the subsidiary ledger is required when collecting what our customers owe us. Computers, today, often mask these books or change their names. But an accounting system, by any other name, is still an accounting system.
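As a toy sketch of the flow just described (the account names, customers, and amounts are invented, and real journal entries carry dates, debits, and credits), entries are recorded in a journal, posted to the general ledger, and detailed per customer in a subsidiary ledger whose total ties back to Accounts Receivable:

```python
from collections import defaultdict

journal = [
    # (account, customer, amount) -- hypothetical credit sales
    ("Accounts Receivable", "Smith", 3500),
    ("Accounts Receivable", "Jones", 2500),
]

general_ledger = defaultdict(int)  # account  -> balance
subsidiary = defaultdict(int)      # customer -> balance owed

# Posting: journal entries are organized by account (general ledger)
# and by customer (subsidiary ledger).
for account, customer, amount in journal:
    general_ledger[account] += amount
    subsidiary[customer] += amount

# The subsidiary detail must total the general-ledger control account.
assert sum(subsidiary.values()) == general_ledger["Accounts Receivable"]
print(general_ledger["Accounts Receivable"], dict(subsidiary))
# -> 6000 {'Smith': 3500, 'Jones': 2500}
```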
Beginning in the nineteenth century with Great Britain and ending with the Bolshevik Revolution in Russia and the collapse of the Ottoman Empire in the Balkans, the European nations established in constitutions the principle of equality under the law and dropped all restrictions on residence or occupational activities for Jews and other national and religious minorities. At the same time, the societies of Europe underwent rapid economic change and social dislocation. The emancipation of the Jews allowed them to live and work among non-Jews, but exposed them to a new form of political antisemitism. It was secular, social, and influenced by economic considerations, though it often reinforced and was reinforced by traditional religious stereotypes. The emancipation of the Jews enabled them to own land, enter the civil service, and serve as officers in the national armed forces. It created the impression for some others—particularly those who felt left behind, traumatized by change, or unable to achieve occupational satisfaction and economic security in accordance with their expectations—that Jews were displacing non-Jews in professions traditionally reserved for Christians. It also created for some the impression that at the same time, Jews were being overrepresented in future-oriented professions of the late nineteenth century: finance, banking, trade, industry, medicine, law, journalism, art, music, literature, and theater. The collapse of restraints on political activism and the broadening of the electoral franchise on the basis of citizenship, not religion, encouraged Jews to be more politically engaged. Though active all along the political spectrum, Jews were most visible—due to increased opportunities—among liberal, radical, and Marxist (Social Democratic) political parties. The introduction of compulsory education and the broadening of the franchise toward universal suffrage spawned the development of antisemitic political parties and permitted existing parties to use antisemitic rhetoric to obtain votes. Publications such as the Protocols of the Elders of Zion, which first appeared in 1905 in Russia, generated or provided support for theories of an international Jewish conspiracy. As religious confession became subsumed in European political culture by national identity and nationalist sentiment, a new series of stereotypes that reinforced and was reinforced by older prejudices fueled antisemitic politics: 1) enjoying the benefits of citizenship, Jews were nevertheless secretly disloyal—their "conversion" was only for material gain; 2) Jews displaced non-Jews in traditionally "noble" professions and activities (land ownership, the officer corps, the civil service, the teaching profession, the universities), while they "clannishly" blocked the entry of non-Jews into professions that they controlled and that represented the future prosperity of the nation (for example, industry, trade, finance, and the entertainment industry); 3) Jews used their disproportionate control of the media to mislead the "nation" about its true interests and welfare; and 4) Jews had assumed the leadership of the Social Democratic, and later, Communist movements in order to destroy middle class values of nation, religion and private property. That these prejudices bore little relationship to political, social, and economic realities in any European country did not matter to those who became attracted to their political expression.
May 30, 2012. Images of Mercury reveal an unusual blend of mineral compounds in its surface structure, as well as a thin atmosphere. The planet Mercury is 4,878 kilometers in diameter. The moons Ganymede and Titan are both larger, while Earth’s moon is slightly smaller. Mercury orbits the Sun at a mean distance of 57,910,000 kilometers; a year on Mercury lasts a mere 88 days. Since Mercury rotates every 58.6 days, the planet completes three rotations for every two orbits about the Sun. Close proximity to its primary means that temperatures on Mercury can reach 427° Celsius when the Sun is at its zenith. Being two-thirds closer to the Sun, Mercury receives an average of nine times more radiation at its surface than Earth. The searing heat, as well as intense bombardment by charged particles from the Sun, poses a dilemma for planetary scientists: Mercury has a thin but detectable atmosphere. How a planet with such a weak gravity field (only 38% as strong as Earth’s) and with so much “erosion” by solar radiation can retain the smallest remnant of an atmosphere is a mystery. As mentioned in a previous Picture of the Day about Saturn’s moon Titan, weak gravity fields are not supposed to be able to keep atmospheric gases from leaking away into space. Ancient moons are thought to be airless deserts because whatever gas they once held should have long since been dissipated by the solar wind. Ions tend to drag gases and dust away, like a stream of water dissolving a riverbank. Gradually, the atmospheric density should fall to zero, with nothing remaining to protect the surface from meteoric bombardment or coronal mass ejections. According to consensus opinions, that is why so many moons look alike and why they have no atmospheres: they have all undergone similar evolution over billions of years. Titan, and now Mercury, have called such presumptions into question. Cassini mission engineers have speculated that there is some form of gas generator on Titan, replenishing its frigid methane atmosphere. On Mercury, where cold is not the issue, the solar wind is thought to be powerful enough to knock particles off the surface rocks, leaving the ions to recombine suspended in near orbit, weakly held by the gravity field. Since the molecules are not able to persist, satellite probes like MESSENGER can detect them during planetary flybys as they leak away. The MESSENGER mission has also constructed images using eleven different color filters on its Wide Angle Camera (WAC). By combining the information from infrared, visible red, and violet filters, then running it through arbitrary red, green, and blue channels, a false-color impression of Mercury’s surface composition can be displayed. While the colors are not truly what would be visible to the unaided eye, they allow geologists to visualize the variations in chemical distribution as well as how various features correlate with mineral concentration. For example, Caloris Basin appears to be composed of geologically different material than its surroundings. The supposed impact craters within the basin demonstrate that their rims and floors are made of something else entirely. Perhaps the dark blue substance came from volcanic events after the impacts, or perhaps it is the remains of the impactors themselves that we see. Presently, no one is sure which minerals correspond to which colors, so it is difficult to be certain of which past events caused which features.
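A rough sketch of the filter-to-channel mapping described above; the tiny arrays stand in for single-band filter images, and the per-band normalization is an editorial assumption rather than MESSENGER's actual processing pipeline:

```python
import numpy as np

def false_color(ir, red, violet):
    """Stack three single-band images (2-D arrays) into an RGB image."""
    def norm(band):
        # Stretch each band to the 0..1 range expected of display RGB.
        band = band.astype(float)
        rng = band.max() - band.min()
        return (band - band.min()) / (rng if rng else 1.0)
    # Arbitrary assignment: infrared -> R, visible red -> G, violet -> B.
    return np.dstack([norm(ir), norm(red), norm(violet)])

# Toy 2x2 "bands" standing in for WAC filter images:
ir     = np.array([[10, 50], [90, 130]])
red    = np.array([[20, 40], [60, 80]])
violet = np.array([[ 5, 15], [25, 35]])

rgb = false_color(ir, red, violet)
print(rgb.shape)  # (2, 2, 3) -- ready to display with plt.imshow(rgb)
```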
In April of 2009, NASA’s THEMIS satellites found “electrical tornadoes” about 60,000 kilometers above the Earth at the interface between Earth’s magnetosphere and the solar wind. During the most recent MESSENGER flyby of Mercury, similar flux tubes were found, connecting its magnetic field directly with the Sun through twisting Birkeland current filaments. Birkeland currents are well-known to plasma physicists and Electric Universe proponents. They act as cosmic transmission lines through space, confining plasma within their vortices and allowing electric currents to flow over great distances. As was suggested in the recent past, rather than reckoning celestial bodies like Titan or Mercury to be geriatric denizens of a wizened Solar System, it is more reasonable, given the anomalies detailed for many years in the Thunderbolts Picture of the Day, to think of them as youthful members of a dynamic ensemble. Mercury is probably a relatively young planet and may have come to its present orbit and circumstances within the last 10,000 years. If that is the case, then the presence of an atmosphere of whatever density would not be surprising. The presence of electric currents flowing like giant tornadoes into Mercury hint at a time when those currents might have been far more powerful. There might have been a period in Mercury’s history when those helical currents were energized to the glow mode or the arc mode stage. If that happened, then the surface of Mercury would have been the scene of gigantic electric discharges blasting out craters, cutting vast chasms, and rearranging the atomic structure of the planet’s crust over large areas. Caloris Basin and the altered materials in the craters could be part of what has been left behind after the increased electrical energy through Mercury’s structure dissipated.
Climate and energy are complex topics, with rapidly developing science and technology and the potential for controversy. Yet these topics are among the most important issues for students to understand, as we face societal challenges to confront climate change and seek a transition to clean and sustainable energy. How can educators effectively bring these important subjects into their classrooms? There are many ways to approach climate and energy depending on the grade level, course topics, and instructional method. Yet no matter the pedagogic setting, using a literacy-based approach can provide a sound foundation on which to build learners' understanding of these topics. Original literacy documents:
- Climate Literacy, from the US Climate Change Science Program
- Energy Literacy, from the US Department of Energy
Each set of principles is accompanied by summaries of each principle, possible challenges for educators, suggested pedagogic approaches for each grade level, and relevant teaching materials from the CLEAN reviewed collection.
Incineration-based waste disposal and recycling are, perhaps, the most versatile, reliable, and efficient means of waste treatment available to urban agglomerations. In many cases, using these technologies is the only possible way of disposing of industrial and domestic waste. The incineration method is suitable for waste in any physical state: liquid, solid, gaseous, or paste. Along with combustible waste, non-combustible waste can also be processed and recycled; in this case, the waste is subjected to high-temperature (1,000°C) combustion products. Controlled incineration is the oxidation of solid, liquid, or gaseous combustible waste. During combustion, carbon dioxide, water, and ash are generated. Sulfur and nitrogen contained in the waste are turned into various oxides, and chlorine is converted to HCl. In addition to the gaseous products, the combustion of waste produces solids (metal, glass, slag, etc.) requiring further recycling or disposal. This method is characterised by high sanitary and hygienic effectiveness. The range of applications of combustion methods and the nomenclature of waste that can be processed are constantly expanding. These include wastes of the chlororganic industries, basic organic synthesis, production of plastics, rubber and synthetic fibers, the oil refining industry, wood chemistry, the chemical-pharmaceutical and microbiological industries, mechanical engineering, radio engineering, the instrument-making industry, paper, and many other industries. The combustion method allows the processing of mixtures of organic and inorganic products, as well as halogenated waste, which is especially complex in terms of disposal. A mixture of organic and inorganic salts is among the most difficult materials for combustion, as it typically contains some water. When such material is burned, the molecules of organic compounds are destroyed and inorganic compounds are converted into oxides and carbonates, which are carried out of the combustion zone along with the slag and ash. Fine particles of oxides and carbonates contained in the flue gas are captured in a wet scrubber. One of the most hazardous wastes, for which the primary method of processing is burning, is halogenated products. Fluorine and bromine wastes are less common, but they are treated in the same manner as chlorine-containing materials. Chlorinated organic materials may comprise an aqueous phase or contain a certain amount of water. Wastes with a high content of chlorine have a low calorific value, as chlorine, like bromine and fluorine, inhibits the combustion process. According to Simdean, a company focusing on innovative plasma waste disposal and industrial waste disposal methods, optimal performance of the combustion process depends on compliance with the process parameters: temperature in the combustion reactor, the specific load, the working volume of the reactor, the dispersion of spraying, the aerodynamic structure and degree of turbulence of the gas flow in the reactor, and a number of other factors. Burning is carried out in furnaces of various designs, the main element of which is a grate, on which the process actually runs. The space inside the furnace is divided into several zones, where a series of processes takes place. The combustion process consists of five stages (drying, gasification, ignition, combustion, and post-combustion), which usually proceed sequentially but may also take place simultaneously.
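As one hedged illustration of this chemistry (an example reaction chosen here, not one given in the source), complete combustion of a simple chlorinated organic such as vinyl chloride yields carbon dioxide, water, and hydrogen chloride:

```latex
% Illustrative balanced equation; vinyl chloride stands in for
% chlorinated organic waste in general.
\[
  2\,\mathrm{C_2H_3Cl} + 5\,\mathrm{O_2} \longrightarrow
  4\,\mathrm{CO_2} + 2\,\mathrm{H_2O} + 2\,\mathrm{HCl}
\]
```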
Purpose: What kind of thinking does this routine encourage? This routine helps students to explore different perspectives and viewpoints as they try to imagine things, events, problems, or issues differently. In some cases this can lead to a more creative understanding of what is being studied, for instance, imagining oneself as the numerator in a fraction. In other settings, exploring different viewpoints can open up possibilities for further creative exploration. For example, following this activity a student might write a poem from the perspective of a soldier’s sword left on the battlefield. Application: When and where can it be used? This routine asks students to step inside the role of a character or object, from a picture they are looking at, a story they have read, an element in a work of art, an historical event being discussed, and so on, and to imagine themselves inside that point of view. Students are then asked to speak or write from that chosen point of view. This routine works well when you want students to open up their thinking and look at things differently. It can be used as an initial kind of problem-solving brainstorm that opens up a topic, issue, or item. It can also be used to help make abstract concepts, pictures, or events come to life for students. Launch: What are some tips for starting and using this routine? In getting started with the routine, the teacher might invite students to look at an image and ask them to generate a list of the various perspectives or points of view embodied in that picture. Students then choose a particular point of view to embody or talk from, saying what they perceive, know about, and care about. Sometimes students might state their perspective before talking. Other times, they may not, and the class could then guess which perspective they are speaking from. In their speaking and writing, students may well go beyond these starter questions. Encourage them to take on the character of the thing they have chosen and talk about what they are experiencing. Students can improvise a brief spoken or written monologue, taking on this point of view, or students can work in pairs with each student asking questions that help their partner stay in character and draw out his or her point of view. This routine is adapted from Debra Wise, Art Works for Schools: A Curriculum for Teaching Thinking In and Through the Arts (2002), DeCordova Museum and Sculpture Park, the President and Fellows of Harvard College and the Underground
1911 Encyclopædia Britannica/Appraiser APPRAISER (from Lat. appretiare, to value), one who sets a value upon property, real or personal. In England the business of an appraiser is usually combined with that of an auctioneer, while the word itself has given place, to a great extent, to that of “valuer.” (See the articles Auctions and Auctioneers, and Valuation and Valuers.) In the United States appraiser is a term often used to describe a person specially appointed by a judicial or quasi-judicial authority to put a valuation on property, e.g. on the items of an inventory of the estate of a deceased person or on land taken for public purposes by the right of eminent domain. Appraisers of imported goods and boards of general appraisers have extensive functions in administering the customs laws of the United States. Merchant appraisers are sometimes appointed temporarily under the revenue laws to value where there is no resident appraiser without holding the office of appraiser (U.S. Rev. Stats. § 2609).
Usher syndrome is a rare, inherited disorder that involves loss of hearing and sight. Hearing loss is due to the inability of the auditory nerves to send sensory input to the brain. This is called sensorineural hearing loss. The vision loss, called retinitis pigmentosa (RP), usually happens after age ten. RP is a deterioration of the retina. The retina is a layer of light-sensitive tissue that lines the back of the eye. It changes visual images into nerve impulses in the brain that allow us to see. RP slowly gets worse over time. [Figure: nerve and retina of the eye] Three types of Usher syndrome have been identified: types I, II, and III. The age of onset and severity of symptoms separate the different types.
“Over two-thirds of Americans favor the creation of a national initiative process” (Broder 2000, 229) - Q. HOW DO WE “SAVE” AMERICA? A. Amend the Constitution with the US Direct Democracy Amendment. - Q. What is an initiative? A. The initiative is the power of the voters to propose statutes and amendments to the constitution. - Q. What is a referendum? A. The referendum is the power of the voters to approve or reject laws or parts of laws through elections. - Q. What is recall? A. Recall is the power of the voters to remove an elected or appointed public servant. DEFINITION, JOINT RESOLUTION: A legislative measure, designated and numbered consecutively upon introduction, which requires the approval of both chambers and, with one exception, is submitted (just as a bill) to the president for possible signature into law. The one exception is that joint resolutions, not bills, are used to propose constitutional amendments. These resolutions require a 2/3 affirmative vote in each house but are not submitted to the president; they become effective when ratified by three-quarters of the states, through state conventions or state legislatures. DEFINITION, PROPOSED AMENDMENT: An amendment may be proposed by a 2/3 vote of the House of Representatives and the Senate, or by a national convention called by Congress at the request of 2/3 of the state legislatures. The latter procedure has never been used. The amendment may then be ratified by 3/4 of the state legislatures (38 states) or by conventions in 3/4 of the states (conventions of states). The 21st Amendment was the only one to be adopted by convention. However, it is the “POWER OF THE CONGRESS” to decide which method of ratification will be used. WE ARE AGGRESSIVE USDD ACTIVISTS: IF YOUR POLITICIANS DO NOT SUPPORT THE USDD AMENDMENT, VOTE THEM OUT OF OFFICE; THEY DON’T DESERVE TO BE REELECTED. The Constitution, Article V, does provide for one other way to ratify: through state conventions. A state convention (convention of states) differs from the state legislature in that it is usually an entirely separate body from the legislature. Why specify state conventions over state legislatures, the route by which every other amendment had been ratified up to then? The thought was that average citizens at the state conventions would be freer than elected officials from political party pressure to reject the proposed amendment. Note: “It is the power of Congress to decide which method of ratification will be used, state legislatures or state conventions.” This is why we must insist that our legislators support a state convention (convention of states) for the ratification of *S.J. Res. 2525 (US Direct Democracy Amendment), because of party political pressure and lobbyist money influence. - Join US Direct Democracy Amendment. - Log in to your state and copy & paste the text to your legislators. If any of your legislators refuse to support the US Direct Democracy Amendment, they will be replaced at their next election with a candidate who does support the USDD Amendment. Then politically organize in the Forum: simply press the “Forum” button and discuss the responses from your legislators. My response from US Senator Pat Roberts (R) was quite a disappointment; in effect, he said that the US Congress has all the answers, and don’t bother us. We are going to keep bothering them until we have 2/3 majority support in the US House of Representatives and the US Senate for the US Direct Democracy Amendment.
- The following is mandatory: - That your current legislators understand our insistence on a convention of the states for ratification. - That your current legislators understand our insistence on their support for *S.J. Res. 2525, the US Direct Democracy Amendment. When 2/3 of the US House of Representatives and US Senate pass the proposed amendment and 3/4 of the states (38) ratify it, it becomes Amendment XXVIII, THE US DIRECT DEMOCRACY AMENDMENT. When the people have desired an amendment to the Constitution, and the amendment has passed with a 2/3 majority in the House and Senate, it is then sent to the states for ratification, requiring a three-fourths majority. REQUIRED VOTES FOR PASSAGE OF JOINT RESOLUTION: US House <2/3 majority = 290/435 affirmative votes> US Senate <2/3 majority = 67/100 affirmative votes> Join the United States Direct Democracy Amendment campaign. $1 a year Subscription. The Honorable U.S. Representative/Senator, Washington, DC zip. Good Day Representative/Senator (Last Name), As your constituent, can I count on your support for *S.J. Res. 2525, which proposes an amendment to the Constitution of the United States relating to direct democracy (the US Direct Democracy Amendment), with ratification by conventions of the states? IN THE SENATE OF THE UNITED STATES. PROPOSED AMENDMENT TO THE U.S. CONSTITUTION. Proposing an amendment to the Constitution of the United States relating to direct democracy. (Reported in Senate) *S.J. Res. 2525 IS, Calendar No. 999, 114th Congress, 2d Session. *S.J. Res. 2525: Proposing an amendment to the Constitution of the United States relating to the direct democracy process. November 5, 2019. Mr./Ms. Legislator’s Last Name (for him/herself) introduced the following joint resolution, which was read twice and referred to the Committee on the Judiciary. Proposing an amendment to the Constitution of the United States relating to direct democracy. Resolved by the Senate and House of Representatives of the United States of America in Congress assembled (two-thirds of each House concurring therein), that the Direct Democracy Article is necessary, and further, that the Congress has decided that the method of ratification shall be “conventions of the several States” (state conventions). The following article is proposed as an amendment to the Constitution of the United States, which shall be valid to all intents and purposes as part of the Constitution when ratified by conventions of the several States (state conventions), as provided in the Constitution, within seven years from the date of the submission hereof to the States by the Congress. `SECTION 1. This proposed amendment may be cited as the `United States Direct Democracy Amendment’. `SECTION 2. The United States Direct Democracy Amendment consists of the following: the initiative, the power of the people to propose laws and amendments to the Constitution and to adopt or reject them; and the referendum, the power of the people to approve or reject laws in whole or in part. `SECTION 3. This article shall be inoperative unless it has been ratified as an amendment to the Constitution, specifically through conventions of the several states as provided in the Constitution, within seven years from the date of the submission hereof to the states from the Congress. Reported by (Mr./Ms.)(his/her last name), without amendment.
I acknowledge that it is the “power of the Congress” to decide which method of ratification will be used, state legislatures or state conventions (convention of the states). Because of the influence of lobbyists on a number of legislators, I respectfully ask that you support “conventions of the several states” (state conventions) as the method of ratification. City, State zip. State of California, Department of Justice, Ballot Initiatives. California’s Statewide Initiative Process.
A diagnosis of intestinal metaplasia of the gastric mucosa or gastric cardia is significant because it is a precancerous lesion that increases the risk of gastric cancer, states PubMed Central. Intestinal metaplasia occurs when goblet cells, which normally line the intestines, are found in another area of the body, such as the esophagus.

Intestinal metaplasia of the esophagus is referred to as Barrett's esophagus, according to the American Cancer Society. Barrett's esophagus is caused by chronic reflux, when the contents of the stomach back up into the esophagus. This is typical of gastroesophageal reflux disease, or GERD. The reflux damages the lining of the esophagus, a process that normally takes years. While most people who develop Barrett's esophagus do not develop cancer, the condition does increase an individual's risk of esophageal cancer.

A person who has Barrett's esophagus sometimes develops cells that are more abnormal, according to the American Cancer Society. This is referred to as dysplasia, a precancerous but treatable condition. Cells that show dysplasia are not able to metastasize, or spread to other areas of the body. Typically, a person who has Barrett's esophagus with cells that show dysplasia experiences a great deal of acid reflux. Most often, this requires additional testing and follow-up biopsies in six months to one year.
Are monkeys, like humans, able to ascertain where objects are located with little more than a sideways glance? Quite likely, says Lau Andersen of Aarhus University in Denmark, lead author of a study conducted at the Yerkes National Primate Research Center of Emory University and published in Springer's journal Animal Cognition. The study finds that monkeys are able to localize stimuli they do not perceive.

Humans are able to locate, and even side-step, objects in their peripheral vision, sometimes before they consciously perceive that the object is present. Andersen and colleagues therefore wanted to find out whether visually guided action and visual perception also occur independently in other primates. The researchers trained five adult male rhesus monkeys (Macaca mulatta) to perform a short-latency, highly stereotyped localization task. Using a touchscreen computer, the animals learned to touch one of four locations where an object was briefly presented. The monkeys also learned to perform a detection task using identical stimuli, in which they had to report the presence or absence of an object by pressing one of two buttons. These techniques are similar to those used to test humans, and therefore make an especially direct comparison between humans and monkeys possible.

A method called "visual masking" was used to systematically reduce how easily a visual target could be processed. Andersen and his colleagues found that the monkeys were still able to locate targets that they could not detect. The animals performed the tasks very accurately when the stimuli were unmasked, and their performance dropped when visual masking was employed. But the monkeys could still locate targets at masking levels for which they reported that no target had been presented. While these results cannot establish the existence of phenomenal vision in monkeys, the discrepancy between visually guided action and detection parallels the dissociation of conscious and unconscious vision seen in humans. "Knowing whether similar independent brain systems are present in humans and nonverbal species is critical to our understanding of comparative psychology and the evolution of brains," explains Andersen.

Andersen, L.M. et al. (2013). Dissociation of visual localization and visual detection in rhesus monkeys (Macaca mulatta). Animal Cognition. DOI 10.1007/s10071-013-0699-7.
These lessons have been developed to bring garden-based nutrition education and California standards-based social studies curriculum together. We believe that food history is an integral part of understanding culture, and that experiential lessons in the garden and cooking classroom can encourage students to eat more healthily and deepen their understanding of past civilizations. The lessons below are works in progress and are not yet approved by the Network for a Healthy California. These are a sample; garden and cooking activities have been modified and tested for most seventh grade social studies units (Fall of Rome, Rise of Islam, Civilizations of West Africa, Ancient China, Feudal Japan, Feudal Europe, Renaissance, and Mesoamerica). For eighth grade we have lessons for the Constitutional Convention, Native American, and Westward Expansion units. If you would like more information, please contact [email protected]

China in the Garden (Grade 7): Students are introduced to fruits and vegetables that originated in China, as well as wellness practices and the cultural significance of citrus, through a multi-station activity. A follow-up cooking lesson is available.

Europe in the Garden (Grade 7): The grains, fruits, and vegetables of Feudal Europe are presented to students in a multi-station activity. Students will understand that social status under feudalism affected food options through a poster matching game, get to grind wheatberries into flour (to be used to make bread later), and taste peas while learning about the nutritional value of various European fruits and vegetables.

Westward Expansion in the Garden (Grade 8): Students are introduced to fruits and vegetables available during the overland journey from the East Coast of the United States to the West during the 1800s. Students act as settlers in this garden-based card game and move along one of three trails, each presenting challenges involving geography, climate, and food security. After gathering produce from the different regions of the U.S., students put it together in a cooking activity, making bean chili and writing ghost stories.
Traditional scripts (aksara) are a priceless part of our cultural heritage. As an effort to care for this heritage, a learning medium for traditional literacy can be presented using computer technology in the form of interactive multimedia. One piece of software well suited to this application is Macromedia Flash MX. The problem addressed in this study is how to plan, produce, and test interactive multimedia software for learning traditional scripts with Macromedia Flash MX. The intended users of this learning medium are the general public, anyone in need of it. The purpose of the learning medium is to support study of the scripts' historical characters, numerals, and punctuation, together with an evaluation module, all developed with Macromedia Flash MX. The expected benefits are: for the technology field, exploiting and developing Macromedia Flash MX for interactive learning media; for education, providing input and a reference for interactive multimedia instructional media; and for the cultural field, contributing to the preservation of traditional literacy in multimedia form. The working procedure was to select the material, define the learning scenario, draw the program flowchart, write the program script, build a prototype, and finally test and evaluate it. Performance testing showed that the program runs on any computer without an installation process. In tests performed with a multimedia expert, culture and language experts, and a general correspondent, the program was judged a worthy medium for teaching traditional literacy. Remaining deficiencies can be corrected in subsequent development. Keywords: learning aksara Java, Sunda, Bugis, Macromedia Flash
Last year the Cornell Creative Machine Lab created a system where two chatbots, machines designed to emulate the conversational abilities of humans, engaged in a "conversation" with each other. The result, visualized as an animation with avatars, was a fantastic, if vaguely absurd, example of two computers interacting with each other. But what does it even mean for a computer to interact with another computer? Sure, there are basic ways in which systems build on others. But when the nature of the interaction is a bit more subjective, the dialog becomes more about AI. So it was cool to recently discover Pareidoloop. Instead of being about language (like the chatbot experiment), this is about visuals: what can a computer "draw" that's recognizable?

Pareidoloop starts by generating random collections of polygons and then feeds them into a face detector application. Over time it learns which "drawings" are more face-like and how to create collections that are increasingly face-like.

As I ventured down the black hole of related links, I found some interesting examples of using image- and facial-recognition software, and machine learning, in other, unusual ways. The first is Roger Alsing's Genetic Programming: Evolution of Mona Lisa. Here he got a system to learn how to create a replica of the Mona Lisa using only 50 semi-transparent polygons. The second is Greg Borenstein's use of facial-recognition software to find faces in everyday objects. It's something we humans do all the time, called pareidolia. And it's fascinating to see when computers see faces where we see them, and when they see them somewhere completely unexpected.
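The underlying algorithm is a simple evolutionary hill-climb, and it's easy to sketch. Below is a minimal Python version of the idea, not Pareidoloop's actual code (which, as I understand it, runs in the browser in JavaScript): mutate a set of random gray polygons and keep any mutation that a stock face detector scores as at least as face-like. Using OpenCV's Haar cascade, and taking the area of the largest detection as the fitness score, are my own stand-ins for the detector and confidence measure Pareidoloop uses.

```python
# Hill-climbing random polygons toward "face-likeness", a la Pareidoloop.
import random
import numpy as np
import cv2

SIZE = 128
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def render(polygons):
    """Rasterize grayscale triangles onto a blank square canvas."""
    img = np.zeros((SIZE, SIZE), dtype=np.uint8)
    for pts, shade in polygons:
        cv2.fillPoly(img, [np.array(pts, dtype=np.int32)], shade)
    return img

def fitness(img):
    """Score an image by the area of the largest face detection (0 if none)."""
    faces = detector.detectMultiScale(img)
    return max((w * h for (_x, _y, w, h) in faces), default=0)

def random_polygon():
    pts = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(3)]
    return (pts, random.randrange(256))  # three vertices plus a gray shade

best = [random_polygon() for _ in range(10)]
best_score = fitness(render(best))
for _ in range(5000):
    candidate = list(best)
    candidate[random.randrange(len(candidate))] = random_polygon()  # mutate one
    score = fitness(render(candidate))
    if score >= best_score:  # keep anything at least as face-like
        best, best_score = candidate, score
```

Run long enough, the surviving polygon stack starts to trip the detector the way pareidolia trips us.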
Our Diversity Bulletin Board Ideas page will provide you with great resources for a number of diversity-related topics. Our bulletin boards will help you integrate various activities into your daily curriculum. Ideas within the diversity bulletin board pages include resources to help promote the unique qualities of all students. We are always looking for new ideas! Your creativity can help other teachers. Submit your bulletin board idea, and don't forget to include a photo if you have one!

Lessons from Crayons (Grades K-6)
This was a fun board but a little time consuming. You get the melted effect on the crayons by using a blow dryer to melt them. Be careful, because this can be a little messy. The saying: "We could all learn a lesson from crayons; some are sharp, some are pretty, some are dull, and all are different colors, but they all have to learn to live in the same box." Submitted by: Leanne Wilson - Plainview Middle School

Friendship Flags (Grades K-6)
Materials needed: Coffee filters, food coloring, water, small bowls, fishing line, scissors, hole punch, paper hole reinforcements, crayons, newspaper, paper towels.
Objectives: Understand diversity; the child will experience mixing colors and develop fine motor skills through folding filters and stringing them.
Procedure: The student may draw or write on a coffee filter, then fold the filter into quarters (see picture) and dip the corners and edges into colored water. The student opens the filter and lays it aside to dry, then repeats the procedure with three or four other coffee filters. When the filters are dry, the student punches two holes at the top of each one and puts a paper hole reinforcement around each hole. The student then strings the filters together by threading fishing line through the holes to make a set of Friendship Flags. This can be hung on the wall with a saying like "When different colors are brought together, it can make something even more beautiful". Submitted by: Jennifer

Crayons Living Together
I made a bulletin board for the guidance office of my middle school regarding diversity. I used a large sheet of yellow construction paper. I folded it and cut it out in the shape of a box of crayons. I then made crayons to go in the box and extra crayons to go on the box. I made up odd names for the crayons outside the box. At the top of the board I had a poem that was something like the following: Crayons come in all colors. Some are sharp, some are dull, but they all have to learn to live together in the same box. Submitted by: Linda lwillard@email-removed

I Am Special
This is a great activity and display for the beginning of the year, or any other time that you would like to focus on the unique qualities of individual students (photo below). It's a fun activity that allows you to introduce and/or practice quotation marks with your students. Make a class set of a basic person shape. Students then decorate and cut out their shape. Students write their own sentence explaining why they are special. Here is an example: Sue said, "I am special because I can ride my bike around the block with no training wheels." Having a parent volunteer available to type the statements would be great. After printing, students cut out their statement and glue it to a piece of construction paper that resembles a speech bubble. Submitted by: Jennifer

Email us your diversity bulletin board ideas and pictures using the link at the top of the page!
There's a term called the Heat Index that is tossed around to describe how hot the weather feels. For those not familiar with it, the heat index is a bit like the Wind Chill index used to describe how cold the weather feels under certain conditions. With the heat index (HI), the air temperature and the relative humidity are combined in an attempt to indicate the "human perceived" equivalent temperature: how hot it feels, not just how hot the thermometer says it is.

A high heat index becomes a survival issue for humans because it thwarts our natural ability to cool ourselves. Normally, humans cool themselves through the evaporation of perspiration; according to the laws of physics, evaporation carries heat away from the body. But a high heat index indicates high relative humidity in addition to high air temperature, and humidity in the air drastically reduces the evaporation rate, leaving humans unable to cool themselves by that method. You'll work up a sweat, but that moisture on your skin will not evaporate and carry away your excess body heat. You'll just be hot and wet. If you don't take measures to counteract the high heat index, you can eventually fall victim to heat cramps, heat exhaustion, and perhaps even deadly heat stroke.

When the heat index is high:
- Stay inside an air conditioned room
- Reduce work load
- Increase water intake
- Take cooling showers or baths
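If you want to compute the heat index yourself, the regression most US forecasters use is the NOAA/Rothfusz formula. Here is a sketch in Python; note that the fit is only intended for roughly 80 °F and above with relative humidity above about 40%, and NOAA applies small corrections outside that core range which are omitted here for brevity.

```python
# NOAA/Rothfusz heat index regression.
# t: air temperature in degrees Fahrenheit, rh: relative humidity in percent.
def heat_index_f(t, rh):
    return (-42.379 + 2.04901523 * t + 10.14333127 * rh
            - 0.22475541 * t * rh - 6.83783e-3 * t * t
            - 5.481717e-2 * rh * rh + 1.22874e-3 * t * t * rh
            + 8.5282e-4 * t * rh * rh - 1.99e-6 * t * t * rh * rh)

print(round(heat_index_f(90, 70)))  # ~106: 90 F in humid air feels like 106 F
```

That example lands in the National Weather Service's "danger" band, which is exactly why the precautions listed above matter on humid summer days.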
Place Value: Lemons and Limes Each sour fruit in this worksheet has a number written inside. Kids will match each number to the written equivalent of the number. For example, 67 is the same as saying 6 tens and 7 ones, so that's a match! Kids completing this worksheet practice reading whole numbers and identifying the place value for each digit up to the thousands place. They also look at a three-digit number, determine the place value for each digit, and write the number using words.
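For parents or teachers who want to check worksheet answers, or generate more practice numbers, the decomposition the worksheet teaches takes only a few lines of Python (the function name here is my own):

```python
# Break a whole number into its place values, smallest place first.
def place_values(n):
    places = ["ones", "tens", "hundreds", "thousands"]
    digits = [int(d) for d in str(n)][::-1]  # reverse so ones digit comes first
    return [f"{d} {p}" for d, p in zip(digits, places) if d]

print(place_values(67))    # ['7 ones', '6 tens']  -> "6 tens and 7 ones"
print(place_values(4305))  # ['5 ones', '3 hundreds', '4 thousands']
```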
Correctly watering your vegetable garden requires more effort than turning on the sprinkler once a week. The amount of water required by vegetable plants varies depending on the soil composition, the type of plant, the amount of rainfall and the temperature between waterings. For best results, group plants with similar water needs in the same area of the garden. Lay out a drip irrigation system or soaker hoses that can be turned on separately for each garden section. Using a water source close to the ground rather than a sprinkler minimizes the amount of residual water on the plants' leaves, reducing the chance of fungal infections. Apply water to new plants more frequently than established ones; the roots of new plants are closer to the soil surface and dry out more quickly. Water your garden in the early morning to allow plant leaves time to dry during the heat of the day. Apply water when the soil around the base of the plants is dry to 1/2 inch below the surface. Water your plants deeply rather than often. The general rule of thumb is 1 inch of water per week. However, plants' needs vary, and sandy soils lose moisture more quickly than loamy or clay soils. In very hot summers, the need for water will increase for all vegetables. Water plants individually with a watering can if only one plant in an area needs nourishment.
The world's first clock to run on Sha'ot Zmaniot. Synagogues around the world publish weekly times for davening, Shabbat, etc., which change from day to day and from place to place. If you are running ZmanimClock, however, you only have to remember one set of times, since the clock changes as needed.

Sha'ot Zmaniot, sometimes translated as Seasonal Time, refers to time that changes as a function of the number of hours of daylight. The origins of this method of timekeeping are in agricultural societies, in which people rose with the sun and ended their day when there was no more light. In these cultures, the "real" time "6:45AM" doesn't mean anything, whereas "within an hour of sunrise" is roughly equivalent to "when you wake up". Thus, certain commandments (mitzvot) that are tied to times of the day are frequently expressed in terms of seasonal time. For example, one must recite shma "when you wake up", and the discussions in the Talmud center on what time that would be. Whatever the final answer is, the important idea is that the time be close to when you wake up, in other words close to sunrise, and thus the time ends up being seasonal. Similarly, Shabbat starts Friday evening at sundown, regardless of what the real time is.

While certain times are tied to sunrise and sunset, others are tied to the amount of light or dark before sunrise or after sunset. This method is sometimes called "shitat hamaalot", the "degrees" technique, by which times are measured by the number of degrees the sun goes beneath the horizon, which is proportional to the amount of incidental light visible at that time. Since the earth rotates at a constant rate in real time (not in seasonal time), the number of minutes corresponding to a degree of solar motion is not a fixed fraction of the hours of night. To illustrate, imagine you lived high in the northern hemisphere, at a time of year when there was only 1 hour between sunrise and sunset. If "dark" were defined as 1 seasonal hour after sunset, it would fall 1/12 of one hour (or 5 minutes) after sunset, which is clearly not the case. Instead, dark is defined as some number of degrees below the horizon, which at those latitudes might never happen during certain times of the year.

Since measuring latitudes, longitudes, sunrise times, etc. are not trivial calculations, and, more problematically, these vary from time to time and place to place, the rabbis devised a proxy for determining the number of degrees. Assuming the number of real minutes per degree of sun-motion below the horizon is constant (which is not totally correct, but nevertheless is not too far off), and assuming that the "average" person walks at a certain speed, one can express time in terms of distance and speak of "the time it takes to walk x units of distance", which at the time of the Talmud was a "mil" (not the same as our mile today). Science does the inverse today in talking of distance in terms of time, as in the concept of a light-year.

On ZmanimClock, the prayer times that are "zmaniot" end up being fixed on the clock: 9sAM for finishing shma, 12:30sPM for earliest mincha, etc. The times that are tied to degrees of darkness, such as first light, "misheyakir", and Shabbat ending, vary and do not have fixed times on ZmanimClock. They also vary slightly at different latitudes and different times of year, as the number of minutes it takes the sun to move one degree is usually about 4 but can change slightly.
On ZmanimClock, these are calculated dynamically based on the number of degrees that the sun is below the horizon, which directly reflects the amount of darkness. One interesting case is candle-lighting Friday afternoon. The notion of 18 minutes seems more related to "tosefet Shabbat", adding some time before Shabbat really starts so we don't just rush into it, and thus you could fix candle-lighting on ZmanimClock at 5:42 sPM, even if this would not always line up with candle-lighting on the real clock. Havdalah and Shabbat ending, unfortunately, will fall at different times at different times of the year. For example, one opinion holds that absolute darkness arrives "the time it takes to walk 4 mil" after sunset. Depending on how long you think it takes to walk 1 mil, this can vary from 72 to 90 minutes. In fact, the idea that "alot hashachar", or dawn, is 72 minutes before sunrise comes from this formula.

Special thanks to Rabbi Aharon Brisk, author of Otzar Hazmanim, for his help putting these ideas together and for his encouragement throughout the ZmanimClock project. Copies of Otzar Hazmanim can be ordered.
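The seasonal-hour arithmetic itself is simple once you know local sunrise and sunset. Here is a minimal Python sketch (the names and sample times are my own; a real implementation would pull sunrise and sunset from an astronomy library for your latitude and longitude):

```python
# Convert seasonal-clock times to real times, given sunrise and sunset.
from datetime import datetime

def seasonal_hour(sunrise, sunset):
    """One daytime sha'ah zmanit = 1/12 of the sunrise-to-sunset span."""
    return (sunset - sunrise) / 12

def seasonal_to_real(sunrise, sunset, hours_after_sunrise):
    """A seasonal-clock reading, measured in seasonal hours past sunrise."""
    return sunrise + seasonal_hour(sunrise, sunset) * hours_after_sunrise

sunrise = datetime(2024, 6, 21, 5, 35)
sunset = datetime(2024, 6, 21, 19, 45)
# "9sAM" (end of shma) is 3 seasonal hours after a 6sAM sunrise:
print(seasonal_to_real(sunrise, sunset, 3))  # 2024-06-21 09:07:30
```

The same conversion gives any fixed seasonal time; "12:30sPM" for earliest mincha, for instance, is 6.5 seasonal hours after sunrise.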
Assassin bugs are any of the species of predatory insects that belong to the taxonomic family Reduviidae. As their name implies, they are perhaps best known for the manner in which they capture their food: they sneak up behind their prey, stab it with their proboscis, and inject toxins and digestive enzymes that begin to liquefy the prey's insides. Members of the family Reduviidae are also called ambush bugs because of their surprising stealth.

Assassin bugs have six jointed legs like all insects, two antennae, and an exoskeleton made of chitin. They range in size from four to forty-four millimeters. The body has three sections: the head, the thorax, and the abdomen. The head contains the eyes, antennae, and proboscis. The thorax has attachment points for the legs and wings; the abdomen holds the reproductive and most of the digestive organs.

The head of the assassin bug bears the insect's proboscis, or rostrum, the mouthpart the bug uses to feed on its prey. When the proboscis is not in use, it is folded down into a groove called the prosternum. The assassin bug can also make a rasping sound by rubbing the proboscis across a ridge called the stridulitrum. The legs of the assassin bug are covered with very fine spines used for grasping downed prey; only the front legs lack these spines. The legs are also long compared to the body, giving the bug a bigger range of attack.

Assassin bugs reproduce sexually and asexually, and they undergo incomplete metamorphosis. The female does not need to have her eggs fertilized, though they can be. Once the female lays her eggs, she dies soon after. The brown, tube-shaped eggs hatch after about one week. The new assassin bugs molt after two weeks and reach adulthood in six to nine weeks. The male rides on top of the female until oviposition, a behavior called mate guarding. Unlike most insects, males also guard the eggs; assassin bugs are one of only eleven types of insects known to do so. Males guard eggs because egg-guarding males are more attractive to females than males without eggs: it shows the females that the male will care for the eggs.

Assassin bugs eat many insects, including most garden pests, and are beneficial in farming areas. Some species take a wide range of prey; others, like the bee assassin, specialize in a single type of insect; and some, like the conenose, feed on blood. The conenose is also called the kissing bug for its habit of biting the lips, eyes, or face of sleeping humans. The kissing bug is thinner than most assassin bugs, so it can fit into narrow cracks. These bugs are found in Europe, the Americas, Africa, and Australia.

Since assassin bugs are slow and their predators are very fast, they have developed interesting ways to hide. The backpack bug keeps the dead bodies of its past meals on its back. This backpack can be much bigger than the bug itself and serves both as movable cover to hunt from and as a distraction during an escape from a predator. Another is the masked hunter, which covers itself with lint and other debris until it looks like a walking dust bunny. This is perfect camouflage for where it hunts: beds, where it preys on bed bugs and similar insects. Masked hunters will bite humans too. Some species use these collections to lure prey out of hiding.
The termite-feeding assassin dangles a dead termite in front of the opening of a termite mound. When hungry termites come out to investigate, the assassin attacks and eats them. Other assassins use chemical lures. Ant-luring assassins have glands that secrete a sugary substance; when an ant smells it, the ant follows the scent to the assassin. Once the ant arrives, the assassin rears up to expose the glands. The ant eats the sugary substance, which tranquilizes it, and the assassin has an easy meal. The bee assassin uses a similar technique: it puts plant resin on its front legs so that bees are attracted to the smell and get eaten. A final hunting strategy is stealth. Thread-legged bugs use their thin legs to walk around very softly, so softly that they can steal catches from spiders on the spider's own web. The ambush bug goes to a flower and waits motionless; once an insect such as a butterfly or bee lands on the flower, unaware of the ambush, the bug attacks and eats the hapless insect.

Because assassin bugs eat the insect pests of gardens and farms, they can be used for pest control, an approach with fewer health concerns than insecticides. This may sound like a very new ecological idea, but the Chinese used it long ago. They would use immature insects to kill farm pests, because young assassin bugs eat their prey faster than older ones. There are some problems with rearing assassin bugs for commercial use, chiefly that they eat each other before reaching full growth. People have found a way to rear them without cannibalism, along with a food suitable for all stages of the insect's development: the mealworm. Unfortunately, one mealworm feeds one insect for one feeding. If an artificial diet were developed, it would lower the labor and food required, making assassin bugs more cost-effective.
A creative resource to use in the classroom to teach number. Each puzzle starts at a different beginning number (1-11), making children count forwards to 20 and think about the next number, rather than counting from 1 each time. Each puzzle has 10 pieces and reveals an animal at the end. This activity would work well as:
- A maths rotation
- An activity for a parent helper to do with a small group of children
- A small group activity
- Part of your fine motor program.
Simply laminate and cut out the cards. I place the pieces for each puzzle in a zip-lock bag, but you could make it more challenging by putting all the pieces in the middle and having students assemble the puzzles by looking at the pictures and the starting numbers. A really rich task for numbers before and after.
Hepatocellular carcinoma, or liver cancer, is a form of cancer with a high mortality rate. Liver cancers can be classified into two types: primary, when the cancer starts in the liver itself, and metastatic, when the cancer has spread to the liver from some other part of the body.

Primary liver cancer
Primary liver cancer is a relatively rare disease in the United States, representing about 2% of all malignancies. It is, however, much more common in other parts of the world, representing from 10–50% of malignancies in Africa and parts of Asia. The American Cancer Society estimated that, in the United States in 2001, at least 16,200 new cases of liver cancer were diagnosed (10,700 in men and 5,500 in women), causing roughly 14,100 deaths. In adults, most primary liver cancers belong to one of two types: hepatomas, or hepatocellular carcinomas, which start in the liver tissue itself; and cholangiomas, or cholangiocarcinomas, which are cancers that develop in the bile ducts inside the liver. About 75% of primary liver cancers are hepatomas. In the United States, about five persons in every 200,000 will develop a hepatoma; in Africa and Asia, over 40 persons in 200,000 will develop this form of cancer. Two rare types of primary liver cancer are mixed-cell tumors and undifferentiated tumors. One type of primary liver cancer usually occurs in children younger than four years of age and between the ages of 12–15. This type of childhood liver cancer is called a hepatoblastoma. Unlike liver cancers in adults, hepatoblastomas have a good chance of being treated successfully: approximately 70% of children with hepatoblastomas experience complete cures, and if the tumor is detected early, the survival rate is over 90%.

Metastatic liver cancer
The second major category of liver cancer, metastatic liver cancer, is about 20 times as common in the United States as primary liver cancer. Because blood from all parts of the body must pass through the liver for filtration, cancer cells from other organs and tissues easily reach the liver, where they can lodge and grow into secondary tumors. Primary cancers in the colon, stomach, pancreas, rectum, esophagus, breast, lung, or skin are the most likely to spread (metastasize) to the liver. It is not unusual for the metastatic cancer in the liver to be the first noticeable sign of a cancer that started in another organ. After cirrhosis, metastatic liver cancer is the most common cause of fatal liver disease.

Genetic profile
Hepatocellular carcinoma has occasionally been reported to occur in familial clusters. It appears that first-degree relatives (siblings, children, or parents) of people with primary liver cancer are 2.4 times more likely to develop liver cancer themselves. This finding indicates a small overall genetic component; however, specific disease genes have not yet been identified. Certain genetic diseases are associated with a higher risk for liver cancers. These include hemochromatosis, alpha-1 antitrypsin deficiency, glycogen storage disease, tyrosinemia, Fanconi anemia, and Wilson disease.

Hepatocellular carcinoma is the sixth most common cancer of men and the eleventh most common cancer of women worldwide, affecting 250,000 to one million individuals annually. Liver cancer is becoming more common in the United States. It is 10 times more common in Africa and Asia, where liver cancer is the most common type of cancer. Liver cancer affects men more often than women and, like most cancers, it is more common in older individuals.
Risk factors for primary liver cancer
The exact cause of primary liver cancer is still unknown. In adults, however, certain factors are known to place some individuals at higher risk of developing liver cancer. These factors include:
- Exposure to hepatitis B (HBV) or hepatitis C (HCV) viruses. In Africa and most of Asia, exposure to hepatitis B is an important factor; in Japan and some Western countries, exposure to hepatitis C is connected with a higher risk of developing liver cancer. In the United States, nearly 25% of patients with liver cancer show evidence of HBV infection. Hepatitis is commonly found among intravenous drug abusers.
- Exposure to substances in the environment that tend to cause cancer (carcinogens). These include a substance produced by a mold that grows on rice and peanuts (aflatoxin); thorium dioxide, which was used at one time as a contrast dye for x rays of the liver; and vinyl chloride, a now strictly regulated chemical used in manufacturing plastics.
- Cirrhosis. Hepatomas appear to be a frequent complication of cirrhosis of the liver. Between 30 and 70% of hepatoma patients also have cirrhosis. It is estimated that a patient with cirrhosis has 40 times the chance of developing a hepatoma than a person with a healthy liver.
- Use of oral estrogens for birth control. This association is based on studies of older, stronger birth control pills that are no longer prescribed. It is not clear whether newer, lower-dose birth control pills increase the risk for liver cancer.
- Use of anabolic steroids (male hormones) for medical reasons or strength enhancement. Cortisone-like steroids do not appear to increase the risk for liver cancer.
- Hereditary hemochromatosis, a disorder characterized by abnormally high levels of iron storage in the body. It often develops into cirrhosis.
- Geographic location. Liver cancer is 10 times more common in Asia and Africa than in the United States.
- Male sex. The male/female ratio for hepatoma is 4:1.
- Age over 60 years.

Signs and symptoms
The early symptoms of primary, as well as metastatic, liver cancer are often vague and not unique to liver disorders. The long lag time between the beginning of the tumor's growth and the first signs of illness is the major reason why the disease has such a high mortality rate. At the time of diagnosis, patients are often tired, with fever, abdominal pain, and loss of appetite. They may look emaciated and generally ill. As the tumor grows bigger, it stretches the membrane surrounding the liver (the capsule), causing pain in the upper abdomen on the right side. The pain may extend into the back and shoulder. Some patients develop a collection of fluid, known as ascites, in the abdominal cavity. Others may show signs of bleeding into the digestive tract. In addition, the tumor may block the ducts of the liver or the gall bladder, leading to jaundice. In patients with jaundice, the whites of the eyes and the skin may turn yellow, and the urine becomes dark-colored.

Physical examination
If the doctor suspects a diagnosis of liver cancer, he or she will check the patient's history for risk factors and pay close attention to the condition of the patient's abdomen during the physical examination. Masses or lumps in the liver and ascites can often be felt while the patient is lying flat on the examination table. The liver is usually swollen and hard in patients with liver cancer; it may be sore when the doctor presses on it. In some cases, the patient's spleen is also enlarged.
The doctor may be able to hear an abnormal sound (bruit) or rubbing noise (friction rub) if he or she uses a stethoscope to listen to the blood vessels that lie near the liver. The noises are caused by the pressure of the tumor on the blood vessels.

Laboratory tests
Blood tests may be used to test liver function or to evaluate risk factors in the patient's history. Between 50% and 75% of primary liver cancer patients have abnormally high blood serum levels of a particular protein (alpha-fetoprotein or AFP). The AFP test, however, cannot be used by itself to confirm a diagnosis of liver cancer, because cirrhosis or chronic hepatitis can also produce high alpha-fetoprotein levels. Abnormal results on tests for alkaline phosphatase, bilirubin, lactic dehydrogenase, and other chemicals indicate that the liver is not functioning normally. About 75% of patients with liver cancer show evidence of hepatitis infection. Again, however, abnormal liver function test results are not specific for liver cancer.

Imaging studies
Imaging studies are useful in locating specific areas of abnormal tissue in the liver. Liver tumors as small as an inch across can now be detected by ultrasound or computed tomography scan (CT scan). Imaging studies, however, cannot tell the difference between a hepatoma and other abnormal masses or lumps of tissue (nodules) in the liver, so a sample of liver tissue for biopsy is needed to make the definitive diagnosis of a primary liver cancer. CT or ultrasound can be used to guide the doctor in selecting the best location for obtaining the biopsy sample. Chest x rays may be used to see whether the liver tumor is primary or has metastasized from a primary tumor in the lungs.

Liver biopsy
Liver biopsy is considered to provide the definitive diagnosis of liver cancer. In about 70% of cases, the biopsy is positive for cancer. In most cases, there is little risk to the patient from the biopsy procedure. In about 0.4% of cases, however, the patient develops a fatal hemorrhage from the biopsy, because some tumors are supplied with a large number of blood vessels and bleed very easily. The doctor may also perform a laparoscopy to help in the diagnosis of liver cancer. A laparoscope is a small tube-shaped instrument with a light at one end. The doctor makes a small cut in the patient's abdomen and inserts the laparoscope. A small piece of liver tissue is removed and examined under a microscope for the presence of cancer cells.

Treatment and management
Treatment of liver cancer is based on several factors, including the type of cancer (primary or metastatic); stage (early or advanced); the location of other primary cancers or metastases in the patient's body; the patient's age; and other coexisting diseases, including cirrhosis. Treatment options include surgery, radiation, and chemotherapy; at times, two or all three of these may be used together. For many patients, treatment of liver cancer is primarily intended to relieve the pain caused by the cancer but cannot cure it.

The goal of surgery is to remove the entire tumor, curing the liver cancer. However, few liver cancers in adults can be cured by surgery because they are usually too advanced by the time they are discovered. If the cancer is contained within one lobe of the liver, and if the patient does not have cirrhosis, jaundice, or ascites, surgery is the best treatment option. Patients who can have their entire tumor removed have the best chance for survival. If the entire visible tumor can be removed, about 25% of patients will be cured.
The operation that is performed is called a partial hepatectomy, or partial removal of the liver. The surgeon will remove either an entire lobe of the liver (a lobectomy) or cut out the area around the tumor (a wedge resection).

Doctors may also offer tumor embolization or ablation. Embolization involves killing a tumor by blocking its blood supply. Ablation is a method of destroying a tumor without removing it. One method of ablation, cryosurgery, involves freezing the tumor, thereby destroying it. In another method, ethanol ablation, doctors kill the tumor by injecting alcohol into it. A newer method of ablation using high-energy radio waves is under development.

Chemotherapy involves using very strong drugs, taken by mouth or intravenously, to suppress or kill tumor cells. Chemotherapy also damages normal cells, leading to side effects such as hair loss, vomiting, mouth sores, loss of appetite, and fatigue. Some patients with incurable metastatic cancer of the liver can have their lives prolonged for a few months by chemotherapy. If the tumor cannot be removed by surgery, a tube (catheter) can be placed in the main artery of the liver and an implantable infusion pump can be installed (hepatic artery infusion). The pump allows much higher concentrations of cancer drugs to be carried directly to the tumor. Hepatocellular carcinoma is resistant to most drugs, although specific drugs such as doxorubicin and cisplatin have proven effective against this type of cancer. Systemic chemotherapy can also be used to treat liver cancer, but it does not significantly lengthen the patient's survival time.

Radiation therapy
Radiation therapy is the use of high-energy rays or x rays to kill cancer cells or to shrink tumors. In liver cancer, however, radiation is only able to give brief relief from some of the symptoms, including pain. Liver cancers are not sensitive to levels of radiation considered safe for surrounding tissues, and radiation therapy has not been shown to prolong the life of a patient with liver cancer.

Liver transplantation
Removal of the entire liver (total hepatectomy) and liver transplantation are used very rarely in treating liver cancer. This is because very few patients are eligible for the procedure, either because the cancer has spread beyond the liver or because there are no suitable donors. Further research in the field of transplant immunology may make liver transplantation a possible treatment method for more patients in the future.

Future treatments
Gene therapy may be a future treatment for liver cancer. Scientists are still investigating its possible use as a cancer treatment, and there is controversy surrounding experimentation with gene therapy on humans. As such, it may be years before science is able to create a clinically available gene therapy treatment.

Liver cancer has a very poor prognosis because it is often not diagnosed until it has metastasized. Fewer than 10% of patients survive three years after the initial diagnosis; the overall five-year survival rate for patients with hepatomas is around 4%. Most patients with primary liver cancer die within several months of diagnosis. Patients with liver cancers that metastasized from cancers in the colon live slightly longer than those whose cancers spread from cancers in the stomach or pancreas. There are no useful strategies at present for preventing metastatic cancers of the liver. Primary liver cancers, however, are 75–80% preventable.
Current strategies focus on widespread vaccination for hepatitis B, early treatment of hereditary hemochromatosis, and screening of high-risk patients with alpha-fetoprotein testing and ultrasound examinations. Lifestyle factors that can be modified in order to prevent liver cancer include avoiding exposure to toxic chemicals and to foods harboring molds that produce aflatoxin. In the United States, laws protect workers from exposure to toxic chemicals, and changing grain storage methods in other countries may reduce aflatoxin exposure. Avoidance of alcohol and drug abuse is also very important: alcohol abuse is responsible for 60–75% of cases of cirrhosis, which is a major risk factor for eventual development of primary liver cancer. A vaccination for hepatitis B is now available, and widespread immunization prevents infection, reducing a person's risk for liver cancer. Other protective measures against hepatitis include using protection during sex and not sharing needles. Scientists have found that interferon injections may lower the risk that someone with hepatitis C or cirrhosis will develop liver cancer.

References:
Blumberg, Baruch S. Hepatitis B and the Prevention of Cancer of the Liver. River Edge, NJ: World Scientific Publishing Company, Inc., 2000.
Elmore, Lynne W., and Curtis C. Harris. "Hepatocellular Carcinoma." In The Genetic Basis of Human Cancer, edited by Bert Vogelstein and Kenneth Kinzler, 681–89. New York: McGraw-Hill, 1998.
Shannon, Joyce Brennfleck. Liver Disorders Source Book: Basic Consumer Health Information about the Liver, and How It Works. Detroit: Omnigraphics Inc., 2000.
Greenlee, Robert T., et al. "Cancer Statistics, 2001." CA: A Cancer Journal for Clinicians 51 (January/February 2001): 15–36.
Hussain, S. A., et al. "Hepatocellular Carcinoma." Annals of Oncology 12 (February 2001): 161–72.
Ogunbiyi, J. "Hepatocellular Carcinoma in the Developing World." Seminars in Oncology 28 (April 2001): 179–87.
American Cancer Society. 1599 Clifton Rd. NE, Atlanta, GA 30329. (800) 227-2345. <http://www.cancer.org>.
American Liver Foundation. 75 Maiden Lane, Suite 603, New York, NY 10038. (800) 465-4837 or (888) 443-7222. <http://www.liverfoundation.org>.
National Cancer Institute. Office of Communications, 31 Center Dr. MSC 2580, Bldg. 1 Room 10A16, Bethesda, MD 20892-2580. (800) 422-6237. <http://www.nci.nih.gov>.

Rebecca J. Frey, PhD
Judy C. Hawkins, MS
One Enemy the Whole World Is Fighting
The Coronavirus (COVID-19) was first reported in Wuhan, Hubei, China in December 2019; the outbreak was later recognised as a pandemic by the WHO.

About the disease
Coronaviruses are a type of virus. There are many different kinds, and some cause disease. A newly identified type has caused a recent outbreak of respiratory illness now called COVID-19.

How does the coronavirus spread?
Human contact: COVID-19 is thought to spread mainly through close person-to-person contact, via respiratory droplets from someone who is infected. People who are infected with coronavirus often have symptoms of illness.
Contaminated objects: It may be possible for a person to get COVID-19 by touching a surface or object that has the virus on it and then touching their own mouth, nose, or possibly their eyes. This is not thought to be the main way the virus spreads.
Social gatherings: If an infected person coughs or sneezes, their droplets can infect people nearby. That's why it's important to avoid close contact with others. Understand that people may be infected and have mild to no symptoms at all. It's crucial to practice good hygiene, respiratory etiquette, and social and physical distancing.
"Enemies of the State" Enemies of the State Although Jews were the main target of Nazi hatred, they were not the only group persecuted. Other individuals and groups were considered "undesirable" and "enemies of the state." Once the voices of political opponents were silenced, the Nazis stepped up their terror against other "outsiders." Like Jews, Roma (Gypsies) were targeted by the Nazis as "non-Aryans" and racial "inferiors." Roma had been in Germany since the 1400s and had faced prejudice there for centuries. They had also been victims of official discrimination long before 1933. Under the Nazis, Romani (Gypsy) families in major cities were rounded up, fingerprinted and photographed, and forced to live in special camps under police guard. Jehovah's Witnesses, members of a small Christian group, were victimized not for reasons of race but because of their beliefs. Witnesses' beliefs prohibited them from entering the army or showing obedience to any government by saluting the flag or, in Nazi Germany, raising their arms in the "Heil Hitler" salute. Soon after Hitler took power, Witnesses were sent to concentration camps. Those who remained at large lost their jobs, unemployment and social welfare benefits, and all civil rights. The Witnesses, nevertheless, continued to meet, to preach, and to distribute religious pamphlets. Homosexuals were victimized by the Nazis for reasons of behavior. The Nazis viewed homosexual relations as "abnormal" and "unmanly" behavior which, by not producing offspring, threatened Nazi policies encouraging the reproduction of "Aryans." Soon after Hitler took office, the Storm Troopers (SA) began raids against homosexual clubs. Many homosexuals were arrested and imprisoned in concentration camps. Dozens of teenagers were in this group. June 24, 1933 Jehovah's Witnesses banned in Prussia The Nazi government of Prussia, the largest state government in Germany, bans Jehovah's Witnesses. Jehovah's Witnesses refuse to make the "Heil Hitler" greeting and, beginning in 1935, to serve in the German army. The Nazis begin mass arrests of Jehovah's Witnesses in 1936. Many Witnesses are imprisoned in concentration camps, and they are represented in nearly every major camp. Generally, Jehovah's Witnesses refuse to renounce their convictions, even though they could obtain release from the camps by signing a declaration renouncing their beliefs. June 28, 1935 Nazis toughen law against homosexuality The Nazis persecuted German male homosexuals, whose sexual orientation was considered a hindrance to the preservation of the German nation. On June 28, 1935, the Nazi state toughens Paragraph 175 of the German penal code, making even friendships between male homosexuals a criminal offense. "Chronic" homosexuals are deported to jails and prisons; some are later remanded to the camps. Between 5,000 and 15,000 homosexuals, mostly German or Austrian, were imprisoned in concentration camps, where they had to wear a pink triangular patch marking them as homosexuals. August 18, 1944 Communist Party leader executed in Buchenwald Ernst Thaelmann, leader of the German Communist party since 1925 and one-time candidate for the German presidency, is executed in the Buchenwald camp. He is killed by his SS guards during an air raid on a nearby factory. Thaelmann had been arrested after the fire that destroyed the Reichstag (German parliament) building in 1933. He spent more almost 12 years in the camps. 
Communists, Social Democrats, and trade unionists were among the first groups persecuted by the Nazis.

Critical Thinking Questions
- How and why do regimes target individual groups?
- Consider a more recent example of a specific group targeted for persecution and/or destruction. How are members of the group identified, separated, and brutalized?
- What options do other nations or coalitions have when a civilian group is targeted for discrimination and/or destruction within one country?
- Investigate the concept of "race." How has scientific study changed our understanding of the human race since the Holocaust?
Diverse Coral Reefs
Coral reef news and research have been popping up recently. Here are a few tidbits to catch you up.

Diversity in Marine Protected Areas
How diverse are coral reefs in global marine protected areas (MPAs)? According to a study published this week in Nature Communications, it depends on how you measure diversity. While many MPAs hold large numbers of species, the evolutionary diversity among those species is actually quite small: the MPA networks encompass only 1.7% of the total known evolutionary history of corals and 17.6% of the evolutionary history of fish. Although the total number of different species in an ecosystem is usually taken as a good measure of its health, a measure of relatedness is also important, because closely related species are more likely to perform similar roles in the ecosystem, whereas more distinct lineages may perform unusual or complementary roles vital to the reef's function.

Seeing Corals from the Sky
Most research on corals and on reef ecosystems takes place on diving expeditions, but last week NASA announced a new three-year expedition using advanced instruments on airplanes to survey the world's coral reefs in detail. The COral Reef Airborne Laboratory (CORAL) will measure the condition of threatened coral ecosystems and create a unique database of uniform scale and quality. CORAL's airborne instrument, the Portable Remote Imaging Spectrometer (PRISM), will work along with in-water measurements to analyze reef conditions in the context of the prevailing environment, including physical, chemical, and human factors. The results will reveal how the environment shapes reef ecosystems, NASA says. "We know reefs are in trouble," says Eric Hochberg. "We've seen the reefs of Jamaica and Florida deteriorate and we think we know what is happening there. However, reefs respond in complex ways to environmental stresses such as sea level change, rising ocean temperatures and pollution. The available data were not collected at the appropriate spatial scale and density to allow us to develop an overarching, quantitative model that describes why and how reefs change in response to environmental changes. We need accurate data across many whole reef ecosystems to do that."

Deep Coral Reef Ecosystems
Academy scientists Luiz Rocha and Bart Shepherd and their colleagues published a study a few weeks ago on mesophotic coral reef communities, ecosystems at 30-150 meters (100-500 feet) deep, a region we like to call the Twilight Zone (cue dramatic music). The team focused on two Caribbean locations, Bermuda and Curaçao, and their findings were quite different despite the locations' proximity. They recorded 38 species in Bermuda and 66 in Curaçao, but Bermuda had many more fish overall, with abundance increasing the deeper the scientists dove. In Curaçao, however, abundance decreased with depth. They write, "High fishing pressure is evident in both localities...", and the deeper areas may serve as "refugia" for the fish.

Image: Butterflyfish and staghorn coral, by Terry Hughes
It's Wednesday, so time for some yoga. Today's story is Marv the Metal Detective. For today's learning we have...

Remember to practise your sounds daily. RWI will be holding daily virtual lessons for children to practise their sounds:
Set 1 sounds at 9:30am
Set 2 sounds at 10:00am
Set 3 sounds at 10:30am
They have updated these lessons to include word time for set 1 speed sounds and spelling for sets 2 and 3. There is also "Storytime with Nick" three times a week (Mon, Wed & Fri at 2pm) and some poems that your child can learn and perform.

I have attached a Grammar Hammer. The children are used to seeing these in the classroom. It covers a variety of areas that we have been learning about on our SPaG mats.

For the rest of the week we are going to be taking one final look at Traction Man. You are going to write your very own Traction Man adventure. At the end of Traction Man is Here, Traction Man is relaxing with Scrubbing Brush after saving the day again by rescuing the spoons. But close by there is danger: Scissor Shark! You are going to write about what happens when Traction Man comes across Scissor Shark!

Traction Man is written a little bit like a comic. A comic tells the story through lots of pictures. Sometimes the pictures will have speech bubbles to tell the reader what the characters are saying. Sometimes the pictures will have action words or sound words to give the reader a better picture of what is happening in the story. Sometimes there will be captions for each picture which tell the story in sentences.

Today you are going to plan your Traction Man adventure. Use the planning chart to write down your ideas so you will remember them for tomorrow. You do not need to write in sentences on your plan. They are your notes: just important words or phrases (groups of words). Your story must have a beginning, middle and end. What is he doing? Remember, you are going to be drawing pictures to tell your story, so think carefully about what happens! Click on the document to plan your story.

This morning we have a maths warm-up to get our brains working; try to complete this independently. Now that we have learnt about halving (½), today we are going to be learning about finding a quarter (¼) of a shape. Click on the powerpoint below and then complete the work for your group. Remember that ¼ is 1 part of 4 equal parts. Remember to practise your times tables with Times Tables Rock Stars.

Today we are going to learn about the adding sign +. I want you to physically add amounts; you could use crayons, chocolate buttons, pennies, anything you have at home. For a maths question like 4 + 3 =, we need to read the number sentence by pointing to each part as we read it. Go through the number sentence and complete the action:
4: get 4 objects
+: get ready to get some more
3: get 3 more
=: how many are there? Count carefully and write your answer.

Last week we learnt about using primary colours to create pieces of art; today we are going to learn about a famous artist who liked to use lots of different colours. His name is Andy Warhol. Click on the powerpoint to learn more about him. He liked to choose a picture of an object or person and repeat this picture using different colours each time. Today I would like you to create a piece of art in the style of Andy Warhol. Click on the worksheet below to use some of the pre-made repeated pictures, or you can create your own. I look forward to seeing these.
Tommaso di Giovanni di Simone, known as Masaccio, was not only the greatest Florentine painter of the early fifteenth century, but remains one of the most important figures in the history of Western art. In only a few brief years, he created the early Renaissance style of painting. Masaccio was born in the Val d'Arno in 1401, and the sixteenth-century biographer Vasari tells us that he received his affectionately applied nickname, which means "Slovenly Tom" in Italian, because he was indifferent to his personal appearance, careless with his possessions, and uninterested in worldly gains. He was a pupil of Masolino da Panicale, with whom he worked on the celebrated Brancacci Chapel frescoes in the Church of Santa Maria del Carmine in Florence. He was registered in the two Florentine painters' guilds in 1422 and 1424, went to Pisa in 1426, and seems to have gone to Rome twice. On his second trip to the Holy City in 1428 he died mysteriously, possibly from poisoning, at the age of about twenty-seven.

The sculptor Donatello and the architect Brunelleschi, friends of the painter, had already established the Early Renaissance style in their respective fields when Masaccio began to paint, and helped to influence his thinking. Masaccio added striking new elements to Giotto's concepts of space and form: he used scientific perspective with a fixed vanishing point and a fixed point of view for the spectator, controlled light coming from one source to cast shadows and create atmosphere, emphasized movement of the human body, and eliminated useless detail. Masaccio's view of the world was almost classically impersonal but possessed a deep underlying feeling; it therefore seems probable that he had studied the sculpture of classical antiquity. Masaccio's influence extended in his time to Paolo Uccello, Andrea del Castagno, Filippo Lippi, and Benozzo. High Renaissance masters such as Leonardo da Vinci, Michelangelo Buonarroti, and Raphael paid him tribute, and his influence continues into the present.
What is Astigmatism and can it be fixed? Astigmatism is a refractive error closely related to nearsightedness and farsightedness. It is an abnormal curvature of the cornea that causes light to form two points of focus in different places, making objects both up close and at a distance look blurry. The condition may be present from birth, or may develop after an eye injury, surgery, or a related disease. It changes how light passes, or refracts, onto the retina. Vision with astigmatism is even worse at night or in other low-light conditions, because the pupil dilates to let in more light.

Symptoms of Astigmatism

Some of the signs and symptoms associated with this type of refractive error include, but are not limited to, the following.

- Blurred vision. This is the dominant sign in people suffering from this eye condition. You will find that you cannot see clearly, and images are blurry and at times distorted.
- Straining of the eyes. Just as with myopia and hyperopia, astigmatism causes an individual to strain his or her eyes when seeing. This happens because you want to see what is happening and who is ahead of you, but you cannot. This straining of the eyes leads to considerable discomfort.
- Headaches. These mostly occur as a result of straining the eyes, and they can cause severe pain.

When you no longer enjoy the things you used to, and you realize that your life has completely changed over a short period because of deteriorating eyesight, you should see a doctor. An eye doctor is able to determine whether or not you have astigmatism and, if you do, to what degree. He or she can then offer the appropriate solution to the problem. However, children and adolescents might not be able to tell whether their vision is deteriorating. It is important that they have their eyes tested by a pediatrician, ophthalmologist, or other trained personnel when they are born and then, once they reach school age, at least once or twice a year.

Common causes of Astigmatism

- Corneal astigmatism. This refractive error occurs when the curvature of the cornea develops an oblong rather than spherical shape. Light is then prevented from coming to a single focus on the retina, which leads to blurry and double vision.
- Lenticular astigmatism. This is caused by an imperfect curvature of the lens, which in turn focuses light behind or in front of the retina. The signs and symptoms of lenticular astigmatism are the same as those of corneal astigmatism.

Eye Meridians and how they relate to Astigmatism

Eye meridians are imaginary lines that mark the eye in degrees from 1 to 180. As we know, astigmatism occurs when light is not focused to one point on the retina. In this refractive error, the surface of the cornea is toric, which causes each of the eye meridians to refract light in a unique way. The meridians that refract the most and the least light are known as the principal meridians, and they focus light on two different points.

Astigmatism exists in three forms: regular, irregular, and oblique. Regular astigmatism occurs when the principal meridians are separated by 90 degrees, lying on the 90- and 180-degree lines; an example of this situation is an axis of 180/90 degrees. Irregular astigmatism is mainly caused by physical injury that scars the cornea.
Irregular astigmatism also occurs when the principal meridians are not perpendicular to each other. Oblique astigmatism, the third form, is similar to regular astigmatism in that the principal meridians are perpendicular to one another; the difference is that they do not lie at 90 and 180 degrees, but are a tilted version of regular astigmatism, at axes such as 40/130 degrees.

The three most common types of Astigmatism

There are three common types of astigmatism: myopic, hyperopic, and mixed. Myopic astigmatism occurs when one or both principal eye meridians focus light in front of the retina. Hyperopic astigmatism occurs when one or both principal eye meridians focus light behind the retina, whereas mixed astigmatism occurs when one principal meridian focuses light behind the retina and the other focuses light in front of it.

Are LASIK and PRK long-term solutions for Astigmatism?

LASIK and PRK are both refractive eye surgeries that can be used to correct astigmatism. These refractive surgeries are among the safest procedures offering a lasting solution to such eye problems. Both are used to correct nearsightedness and farsightedness in patients, making them well suited to solving astigmatism problems. They provide long-term solutions to astigmatism and can change an individual’s life for the better: you can finally go back to doing the things you loved without fear of blurry or distorted vision.

Many people tend to worry about the risks associated with laser eye surgery. Any surgery has risks, but when the procedure is done properly, with the necessary precautions, there is little to worry about. LASIK and PRK are used to correct a variety of refractive errors, and LASIK is the more popular procedure. However, if there is only a small amount of astigmatism in the eye, laser surgery will not be necessary; it is when astigmatism impairs vision that you should consider refractive surgery. The main difference between the two procedures lies in the first step of each; in addition, a patient takes longer to heal after PRK than after LASIK.
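The myopic/hyperopic/mixed classification above reduces to a simple rule on where each principal meridian focuses light relative to the retina. The toy sketch below makes that rule explicit; it is a minimal illustration of the definitions in this article (the function name and labels are my own), not a clinical tool:

```python
VALID = {"front", "behind", "on"}

def classify_astigmatism(focus_1: str, focus_2: str) -> str:
    """Classify astigmatism from where each principal meridian focuses
    light relative to the retina: "front", "behind", or "on".
    Illustrative toy model only, not a diagnostic tool."""
    foci = {focus_1, focus_2}
    if not foci <= VALID:
        raise ValueError(f"unknown focus location(s): {foci - VALID}")
    if foci == {"front", "behind"}:
        return "mixed astigmatism"        # one meridian in front, one behind
    if "front" in foci:
        return "myopic astigmatism"       # at least one meridian focuses in front
    if "behind" in foci:
        return "hyperopic astigmatism"    # at least one meridian focuses behind
    return "no astigmatism"               # both meridians focus on the retina

print(classify_astigmatism("front", "front"))    # myopic astigmatism
print(classify_astigmatism("behind", "on"))      # hyperopic astigmatism
print(classify_astigmatism("front", "behind"))   # mixed astigmatism
```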
Published: 2023-04-23 12:24:06 • Hyab

The fields of optical communication and fiber optics have changed the way we send and receive data. They have made it possible to build the high-speed, high-capacity networks that are the backbone of the digital world we live in today. Magnets have become essential parts of many optical communication systems, making them work better and do more.

Magneto-optic effect and Faraday rotators

The magneto-optical effect is what happens when a magnetic field changes the way light is polarized as it moves through a substance. The Faraday effect is one of the best-known magneto-optical effects: it causes the plane of polarization of light to rotate as the light moves through a magnetically active material. Faraday rotators, optical devices based on this effect, are widely used in optical communication systems for several purposes.

Optical isolators: These devices let light pass in one direction but prevent it from traveling in the other. This one-way transmission keeps sensitive parts, such as lasers, safe from back reflections, which can make them unstable and degrade their performance.

Polarization control: Faraday rotators can be used to adjust how light is polarized in optical fibers, ensuring that the signal is transmitted with as little loss and distortion as possible.

Optical switches: Faraday rotators can be built into optical switches, which control how light signals move through fiber optic networks. By controlling the magnetic field, the switches can quickly change the direction of light transmission, making it possible to reconfigure networks quickly and flexibly.

Fiber alignment controlled by magnets

For signal transmission and coupling to work well in fiber optic communication systems, optical fibers must be aligned very precisely. This alignment can be achieved with magnetically actuated systems, which have several advantages.

High accuracy: Magnetic alignment systems can be precise to the sub-micron level, ensuring that fibers connect and transmit signals in the best possible way.

Flexibility: Magnets make it possible to build flexible alignment systems that work with different types and sizes of fibers and that can adapt to changes in the environment, such as temperature and vibration.

Non-contact operation: Magnetic alignment systems do not require physical contact between the fibers, so damage and contamination are less likely to occur.

Magnets in optical amplifiers

Magnets can also be useful in optical amplifiers, devices that increase signal strength in optical communication networks. One example is the erbium-doped fiber amplifier (EDFA), where a magnetic field can improve the amplifier's performance by controlling how the light interacts with the erbium ions and how it is polarized.

In summary: magnets have been a big part of the development of fiber optics and optical communication. They have made it possible to manufacture important parts and systems such as Faraday rotators, optical isolators and magnetically controlled fiber alignment systems. As the need for faster, more reliable and larger optical communication networks grows, magnetic technology will continue to play a key role in shaping the future of this field. Magnets could also be used in optical amplifiers and other new optical technologies to improve their performance and capacity.
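To make the Faraday effect concrete: the rotation of the polarization plane grows linearly with field strength and path length, β = V·B·d, where V is the material's Verdet constant. The sketch below sizes a 45-degree rotator of the kind used in isolators; the constants are assumed round numbers for illustration, not data for any specific crystal:

```python
import math

# Faraday rotation: the polarization plane rotates by beta = V * B * d,
# where V is the Verdet constant (rad/(T*m)), B the magnetic flux
# density along the light path (T), and d the path length (m).
V = 40.0    # rad/(T*m), assumed Verdet constant for the rotator material
B = 1.0     # T, assumed axial magnetic field

# An optical isolator needs a 45-degree (pi/4 rad) rotation stage.
beta_target = math.pi / 4
d = beta_target / (V * B)
print(f"Crystal length for a 45-degree rotator: {d * 100:.1f} cm")  # ~2.0 cm
```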
Free energy is the fundamental energy that is released into the environment when matter is converted to electrons and protons in the process of the atom’s creation and decay. It is released as electrons, protons, and neutrons in the form of light, sound, fossil fuels, and most other forms of energy. Free electrons are the electrons and protons produced from the nucleus of the atom without undergoing any energy flux and without any chemical reactions, in the form of hydrogen, helium, or other elements. Free electrons are also known as polar electrons. They are created in the process of the atom’s creation and decay and are the fundamental energy that makes atoms themselves possible; without them, atomic structure and function would never be possible.

The process of the atom’s creation and decay happens in three steps. First, a nucleus is created. This nucleus is made up of electrons and protons and is the nucleus from which the atom forms. Second, a double terminal “hole” is created in the nucleus of an atom. A double terminal hole is a hole containing two electrons or two protons, in between the electrons in the nucleus. This hole is filled with a “conductor”, usually protons, or at least some of them. The hole has an attractive or repulsive force that attracts or repels the two electrons of the second terminal and the pair of electrons within the hole; this process creates the electron current in the hole. Third, a nucleon is created in the same way: the “hole” is filled with two protons or one neutron, in between the neutron of the second terminal of the atom and the electrons of the nucleus.

The atoms have a positive charge and are filled with electrons arranged in pairs, which creates the positive charge in the nucleons. Next, a proton and an electron are created in this way. The proton/electron pair is called a proton-to-neutron pair, the proton pair is called a proton-to-neutrino pair, and the proton/neutrino pair is called a quark pair. After the proton pair and the proton/electron pair have created their respective nuclei, one of them is converted to a proton while the other is converted to a neutron. Since the proton has the opposite charge and the same mass, this conversion makes no free energy.
Jacqueline Williamson graduated with a BBA in Personnel Administration, an MPA in HR Management, and an MS in Education.

Try Thinking Outside the Box

This article is concerned with the modification of instruction for the varying abilities of students within the so-called “standard range”: students who have been enrolled in regular vocational classes. These students have been found to have special learning patterns, whether they do things exceptionally well or need assistance adjusting to a conventional learning environment. This article is also designed to make the instructor aware of the general learning characteristics of gifted and slower learners, and to give the instructor skills in planning his or her instruction so that the “special” student’s specific needs are adequately met. This should be accomplished without detracting from the more typical student.

In many cases, instructors tend to prepare lessons for the majority of students, who fall in the “average learner” category. However, by preparing a standard lesson plan, the needs of the gifted and slower learners, as well as those of the “average” learners, are actually not accommodated. An instructor needs to plan to use teaching techniques that will help all students reach their highest learning potential.

According to the cognitive processes of learning, there are three essential conditions for meaningful learning (R. E. Mayer, 1987): reception, availability, and activation. The reception and availability conditions are met when teachers focus their learners’ attention on a problem and provide them with an anticipatory set or advance organizer (Glover & Corkill, 1990). Teachers fulfill the activation condition by modeling the inquiry process with skilled questioning techniques (Gary D. Borich, Effective Teaching Methods, 4th Edition).

In order to successfully plan lessons for students with a range of learning characteristics, an instructor needs to be aware of the cognitive processes of learning as well as the learning behavior of students. As the instructor observes students working in the classroom and laboratory, he or she can become sensitive to the particular needs and limitations of each individual. To help the instructor recognize and respond to those individual needs and limitations, he or she will need an understanding of the general characteristics of gifted and slower learners. The following are lists of these characteristics.

Types of Learners

Gifted learners are generally characterized as follows:

- They tend to have good reading ability and to enjoy reading.
- They tend to be verbal and communicative.
- They tend to be generally aggressive and competitive in the scholastic situation.
- They tend to be independent, initiating more activities on their own and more frequently attempting to overcome obstacles by themselves.
- They tend to be able to deal with abstract concepts and theoretical ideas.
- They tend to be able to generalize, to see relationships, and to visualize.

Slower learners are generally characterized as follows:

- They tend to have low reading abilities.
- They tend not to be aggressive or highly competitive.
- They tend to learn physically (to understand a concept best if they can learn it through tactile means).
- They tend to be able to deal with the real and concrete far better than the abstract and theoretical.
- They tend to have difficulty in handling relationships, such as size, time, and space.
- They tend to be limited in self-direction, personal initiative, and ability to overcome obstacles.
Techniques for the Gifted Learner

Once the teacher identifies the special characteristics of the gifted learner, these are some of the methods the teacher should incorporate:

1. Keep the more capable learner challenged with new material. It is important that you have new activities prepared and are ready to present them as soon as students have finished the last task. They should have advanced work designed to extend their abilities.

2. Maintain high expectations. More capable learners respond well to reasonable scholastic pressure. You should accept only high-quality work from these students and not allow them to become satisfied with mediocre performance.

3. Evaluate students’ work with care and thoughtfulness. Those who are more capable need praise and reward for exceptional results. However, they also respond positively to expert criticism of their efforts and probing questions about their knowledge.

4. Use discovery techniques. In laboratory and class work, purposely omit some instruction, insert some difficulties into the job, or leave some problems unresolved for students to overcome by themselves.

Care must be taken in dealing with students who have differing learning rates and capacities. Slower learners are students who simply require more time to reach their educational goals; the more capable learners appear to learn rapidly without undue effort.

Techniques for the Slower Learner

The same systematic learning procedures should be incorporated with the slower learner. These are some considerations.

1. Provide opportunities for plenty of practice and drill. Practice can strengthen the bonds of learning and lead to greater and longer retention.

2. Provide the time necessary to learn. If a slower learner needs more time to master a new subject or skill, arrange for the student to have the time.

3. Teach visually. Slower students can often profit more from seeing a skill demonstrated well than from a verbal discussion. A well-presented demonstration can help to clear up what might otherwise be confusing or meaningless.

4. Use real experiences related to the classroom instruction. Field trips specially planned to show certain operations being performed can help slower learners.

5. Use a physical approach to learning. Use a hands-on approach, and provide models or real objects for the student to manipulate.

6. Teach in small steps. Slower learners may need to know each step of the job from beginning to completion. They may need to be led carefully through the whole process before they can do it themselves.

7. Use a reward system for good work. Slower learners, who may be unaccustomed to success, tend to respond to reward in any form.

8. Use individualized learning material whenever possible. With well-selected materials, a slower learner can progress at his or her own rate and use learning techniques compatible with his or her own learning style.

After you adjust to this more effective method of teaching, you are ready to enter the next stage of development. As you become more confident in your ability to manage your diverse teaching environment, you should also be successful in dealing with a variety of behavior problems (if they should arise). Typically, this new stage of development will raise the following questions:

- Where can I find good instructional materials?
- Will I have enough time to cover the content?
- Where can I get ideas for encouraging class participation?
- How do I introduce new concepts to my class?
The most outstanding instructors are the ones who teach on a level that addresses all basic student needs. Understanding your students is only the beginning of a successful teaching experience.

Practice What You Preach ...

Here is a scenario to gauge your understanding of these teaching concepts. It is a self-assessment for your information. Dorothy is a new student in your Unified Geometry Class for Gifted Students. She has come to you from a neighboring state, and her parents have been through a trying divorce. You are giving a test on equilateral, isosceles, and scalene triangles. It seems as if everyone is finishing up their examination but Dorothy. She is looking out the window or watching other students turn in their papers. Finally, when the last student has left, Dorothy begins in earnest, completes her exam, and brings it to you. Upon grading it, you find that Dorothy has made a perfect score.

1. What type of learner is Dorothy?
2. How would you address her unique situation?

© 2013 Jacqueline Williamson BBA MPA MS

Jacqueline Williamson BBA MPA MS (author) from Memphis on October 15, 2018: All your students are special, but some require more attention than others. It’s great when you can allow students to assist each other under your guidance and supervision. Remember, you are still the teacher!

Jacqueline Williamson BBA MPA MS (author) from Memphis on March 16, 2015: Thank you ... glad it was of some use!

peachy from Home Sweet Home on March 16, 2015: Your teaching tips are useful; more to learn from you.

Jacqueline Williamson BBA MPA MS (author) from Memphis on May 24, 2014: Exceptional learning doesn’t stop once the student reaches adulthood. Most teachers recognize these learners when they are juveniles but forget that adults can also continue to possess learning characteristics that can be identified as exceptional. Many employers fail to identify these workers, and this can be a terrible mistake!

Jacqueline Williamson BBA MPA MS (author) from Memphis on April 09, 2014: Teaching special needs students, whether children, teens, or even adults, requires patience, perseverance, and dedication. It’s not just about a “paycheck.” It is about giving someone the greatest gift available: satisfying the thirst for knowledge.

Patricia Scott from North Central Florida on November 10, 2013: Well said. I was a teacher for forty years and had children in class who ran the gamut of readiness to learn, ability to access knowledge, if you will. My first task was to meet them where they were and to enable them to find their strengths and move forth. There were never any who would not try when they realized they were in a safe environment where they could make mistakes and it was ok. The gifted children were given the opportunity to excel. For a classroom teacher who has a wide range of needs in her children, including SED children, it is challenging on a daily basis. “Understanding” is a pivotal starting point ... again, well said. Thanks for sharing. Angels are on the way. ps

Jacqueline Williamson BBA MPA MS (author) from Memphis on November 09, 2013: Thank you for your wonderful comment. I worked in the Special Education Dept.
at the University of Memphis. Learning student behavior is paramount to successful teaching. I applaud your dedication.

wabash annie from Colorado Front Range on November 09, 2013: I found this hub interesting and insightful, perhaps more so as I am a retired special education teacher. It is sometimes difficult and time-consuming, but it is important to closely observe students in order to better understand their learning styles and needs. Excellent hub!

Jacqueline Williamson BBA MPA MS (author) from Memphis on November 08, 2013: I have observed people for many years, and I know it’s a rare gift, but I have this sixth sense about my students. I would determine from her ability to finish so soon after everyone left that she possesses the skills but seems to be sidetracked. I would have a conference with her to discuss her problem. I can see your point and appreciate your feedback. Thanks!

Marie Alana from Ohio on November 08, 2013: It seems like Dorothy is an adult student who may be going through a hard time, but she is coping. Although she is going through a hard time, she was still able to get a perfect score. One cannot determine whether a student is a slow learner or a gifted learner from one circumstance. There is more that has to be done in order to determine this. Great hub!
A user is a person who utilizes a computer or network service. Users of computer systems and software products generally lack the technical expertise required to fully understand how they work. Power users use advanced features of programs, though they are not necessarily capable of computer programming and system administration.

A user often has a user account and is identified to the system by a username (or user name). Other terms for username include login name, screenname (or screen name), account name, nickname (or nick) and handle, which is derived from the identical citizens band radio term.

Some software products provide services to other systems and have no direct end users. End users are the ultimate human users (also referred to as operators) of a software product. The end user stands in contrast to users who support or maintain the product, such as sysops, database administrators and computer technicians. The term is used to abstract and distinguish those who only use the software from the developers of the system, who enhance the software for end users. In user-centered design, it also distinguishes the software operator from the client who pays for its development and from other stakeholders who may not directly use the software but help establish its requirements. This abstraction is primarily useful in designing the user interface, and refers to a relevant subset of characteristics that most expected users would have in common.

In user-centered design, personas are created to represent the types of users. It is sometimes specified for each persona which types of user interfaces it is comfortable with (due to previous experience or the interface's inherent simplicity), and what technical expertise and degree of knowledge it has in specific fields or disciplines. When few constraints are imposed on the end-user category, especially when designing programs for use by the general public, it is common practice to expect minimal technical expertise or previous training in end users. The end-user development discipline blurs the typical distinction between users and developers. It designates activities or techniques in which people who are not professional developers create automated behavior and complex data objects without significant knowledge of a programming language.

A user's account allows a user to authenticate to a system and potentially to receive authorization to access resources provided by or connected to that system; however, authentication does not imply authorization. To log into an account, a user is typically required to authenticate with a password or other credentials for the purposes of accounting, security, logging, and resource management. Once the user has logged on, the operating system will often use an identifier such as an integer to refer to them, rather than their username, through a process known as identity correlation. In Unix systems, the username is correlated with a user identifier or user id.

Computer systems operate in one of two types based on what kind of users they have:

- Single-user systems do not have a concept of several user accounts.
- Multi-user systems have such a concept, and require users to identify themselves before using the system.
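As a concrete illustration of the username-to-id correlation mentioned above, the minimal sketch below uses Python's standard library on a Unix-like system to look up the current account's numeric id, name, and home directory (the output naturally depends on the machine it runs on):

```python
import os
import pwd  # Unix-only standard-library module for the password database

# The operating system tracks the logged-in user by numeric id,
# not by name; pwd maps between the two representations.
uid = os.getuid()
entry = pwd.getpwuid(uid)

print(f"user id:        {entry.pw_uid}")
print(f"username:       {entry.pw_name}")
print(f"home directory: {entry.pw_dir}")
```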
Each user account on a multi-user system typically has a home directory, in which to store files pertaining exclusively to that user's activities, which is protected from access by other users (though a system administrator may have access). User accounts often contain a public user profile, which contains basic information provided by the account's owner. The files stored in the home directory (and all other directories in the system) have file system permissions which are inspected by the operating system to determine which users are granted access to read or execute a file, or to store a new file in that directory.

While systems expect most user accounts to be used by only a single person, many systems have a special account intended to allow anyone to use the system, such as the username "anonymous" for anonymous FTP and the username "guest" for a guest account.

Various computer operating systems and applications expect or enforce different rules for the format of usernames:

- User Principal Name (UPN) format – for example: UserName@Example.com
- Down-Level Logon Name format – for example: DOMAIN\UserName

Some usability professionals have expressed their dislike of the term "user" and have proposed changing it. Don Norman stated that "One of the horrible words we use is 'users'. I am on a crusade to get rid of the word 'users'. I would prefer to call them 'people'."

Related topics:

- 1% rule (Internet culture)
- Anonymous post
- End-user computing, systems in which non-programmers can create working applications.
- End-user database, a collection of data developed by individual end-users.
- End-user development, a technique that allows people who are not professional developers to perform programming tasks, i.e. to create or modify software.
- End-user license agreement (EULA), a contract between a supplier of software and its purchaser, granting the right to use it.
- Registered user
- User error
- User agent
- User experience
- User space
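The two username formats above differ only in their separator and field order, so telling them apart is mechanical. Here is a minimal sketch; the function name and example names are made up for illustration:

```python
def parse_logon_name(name: str) -> tuple[str, str]:
    """Split a Windows-style logon name into (domain, user).

    Handles the two formats described above:
      - User Principal Name (UPN):  "UserName@Example.com"
      - Down-level logon name:      "DOMAIN\\UserName"
    Raises ValueError for anything else. Illustrative only.
    """
    if "@" in name:                    # UPN format: user comes first
        user, domain = name.split("@", 1)
        return domain, user
    if "\\" in name:                   # down-level format: domain comes first
        domain, user = name.split("\\", 1)
        return domain, user
    raise ValueError(f"unrecognized logon name format: {name!r}")

print(parse_logon_name("someone@example.com"))  # ('example.com', 'someone')
print(parse_logon_name("EXAMPLE\\someone"))     # ('EXAMPLE', 'someone')
```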
Teachers are empowered to have a strong influence on children, who in turn play a role in shaping society. But with this empowerment comes responsibility. In a study recently published in the journal Ethics and Education, Assistant Professor of Education Christopher Martin argues that teacher-education programs can and should aim to empower teachers to communicate the reasons behind their decisions. He thinks that educating teachers in this way could serve as a more beneficial form of teacher accountability than what's currently being done.

Q: Describe the influence teachers have on students.

Christopher Martin: Teachers can play a tremendously positive role in the lives of students. They can help to shape the way students think, feel and relate to others. However, this involves a much greater degree of influence than we grant to others. In order for this arrangement to work, the public must be able to put its trust in teachers to carry out this role ethically and responsibly.

Q: How can teachers work to promote public trust?

CM: One obvious way is to respect codes of professional conduct and ethical norms. I think that teachers have a proactive, perhaps even educational, role to play in promoting public trust, and one way is through their ability to communicate the value of what they do and why they do it. This requires time and opportunity for teacher candidates to engage in serious thinking and reflection on the nature and value of education. This is where teacher education plays an important role. When someone gets an education in teaching, they should acquire not just an understanding of what they will be teaching but also an informed sense of why what they are teaching is worthwhile.

Q: In your article, you argue that teachers should be able to give reasons for some of the decisions they make. Why is this important?

CM: There is a growing sense that certain approaches to teacher accountability that we see internationally, such as standardized testing, classroom inspections or pay for performance, do not serve teachers or students especially well. Nor do they do much to inspire public trust. When teachers are able to account for their decisions in the public sphere, the trustworthiness of the profession increases. This should support the case for the professional autonomy and independence teachers need in order to excel at their work.

Q: How can teachers learn to communicate their practice to the public?

CM: If teacher-education programs see part of their responsibility as enabling students to cultivate a deep understanding of the values of education, part of which would require discussion and debate about those values, students will acquire the ability to communicate along the way. Being able to communicate the reasons behind your decisions is not necessarily about persuading others to agree. It is about promoting understanding. What people are often looking for is a sense that there is a certain degree of fair-mindedness and care that goes into educational decision-making. Teachers may know this to be the case, but it makes a real difference if the profession is equipped to communicate this fact to the same people that are asked to place trust in them.
In space (within the Solar System), you will encounter mostly two types of "radiation" that have health consequences:

- Photons of various energies, from long-wave radio to gamma rays.
- High-energy charged particles, mostly electrons and protons ejected from the Sun's upper atmosphere (this is known as the solar wind).

The main source of both is, of course, the Sun. Photons, being electrically neutral, totally laugh at magnetic fields; a "magnetic barrier" will work only for charged particles. We know what UV can do to human skin despite an atmosphere, so one can imagine that some extra shielding will be needed in space.

Assuming that you have superconductors, you can maintain a powerful magnetic field indefinitely, with energy being consumed only when a particle is actually deflected. The shape and position of this field require some care, though. For instance, the Earth's magnetic field is not very good at protecting the Earth from the solar wind; instead, it just moves the impact point around: high-energy particles concentrate on the polar regions, producing beautiful auroras. An awful lot of research on the subject of optimal magnetic shielding for spaceships is referenced from this page.

An aggravating circumstance of radiation in space is that it does not arrive as a continuous flow; instead, it comes in bursts of considerable intensity when solar flares occur. Good space shielding will be utter overkill most of the time, but will occasionally become an absolute necessity to avoid the crew being, well, killed. A mitigating characteristic, though, is that the source position is well known (the Sun tends to be highly visible) and flares can be observed "visually" some hours before the onslaught of high-energy particles, giving time to raise extra shields.

Outside of the Solar System, things change quite a bit. The solar wind actually creates a kind of "bubble" around the Sun, called the heliosphere, which acts a bit like a magnetic shield against the rest of the Universe. At the border of the heliosphere is a rather confused situation about which much is theorized but little is known; the Voyager 1 probe is currently moving through it. Beyond, there is not much to fear from the solar wind, but a lot more from other high-energy particles of many types, collectively known as cosmic rays. We don't really know where cosmic rays come from, but the sources appear to be multiple. For our present discussion, this means that cosmic rays don't come from a unique, predictable direction, and they happen at seemingly random times, so shields of any kind must be up at all times. Moreover, not all of these particles are charged, so magnetic shields won't be enough. Note that cosmic rays are also a problem within the Solar System, even close to the Earth, but leaving the heliosphere increases the issue dramatically.

An extra hazard is beautifully exposed in Arthur C. Clarke's "The Songs of Distant Earth". If you are outside of the heliosphere, then you are traveling to the stars -- so you must be traveling fast, because stars are far, far away. This implies that low-energy particles or bigger fragments (e.g. stray atoms or molecules from nebulae) will have a high relative speed, and the repeated impacts will be damaging to the ship and its inhabitants. In the book, they add a big layer of ice in front of the ship and must renew it regularly.

As for materials for more tangible shields (which will protect from neutral particles as well), a good candidate is not lead, but water.
Water has a very good ratio of absorbing power per weight; also, water has other uses that lead does not offer, such as bathing, watering plants, raising fish (tilapia offer a lot of protein while requiring only limited amounts of swimming space), and, come what may, even drinking, should the onboard stocks of decent beverages become depleted. A popular design is a spaceship built as a big spinning cylinder, creating "artificial gravity". The "ground" (the cylinder surface, seen from the inside) can be a big pool, and habitats would then be floating, like fish farms. The water maintains the inner ecosystem and provides excellent radiation shielding at the same time. Astronauts double as sailors.

Other possible materials include various polymers, gold (used for the lunar modules on the Apollo missions -- when you go to the Moon, you do it with style), and even "biological waste" from the crew. This whole radiation issue is still one of the unsolved problems for a trip to Mars, so it's an active research area.
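For a feel for the numbers behind both shielding strategies discussed above, here is a back-of-the-envelope sketch. Every constant in it is a rough, assumed figure (a typical solar-wind speed, a guessed shield field strength, a textbook-order attenuation coefficient for water), not a design value:

```python
import math

# 1) Magnetic deflection: a charged particle in a field B turns on a
#    circle of gyroradius r = m*v / (q*B); smaller r = easier to deflect.
m_p = 1.67e-27   # proton mass, kg
q_p = 1.60e-19   # proton charge, C
v   = 4.0e5      # assumed typical solar-wind speed, m/s
B   = 0.01       # assumed artificial shield field, tesla

r = m_p * v / (q_p * B)
print(f"gyroradius of a solar-wind proton: {r:.2f} m")  # ~0.4 m

# 2) Water as a photon shield: intensity falls off as I = I0 * exp(-mu * x),
#    with mu = (mu/rho) * rho; mu/rho is the mass attenuation coefficient.
mu_over_rho = 7.1e-3   # m^2/kg, roughly right for ~1 MeV gamma rays in water
rho_water   = 1000.0   # kg/m^3

mu = mu_over_rho * rho_water
tenth_value = math.log(10) / mu   # thickness that cuts intensity to 10%
print(f"water thickness cutting ~1 MeV gammas to 10%: {tenth_value:.2f} m")  # ~0.33 m
```

The first number suggests why a modest superconducting field can turn away the bulk of the solar wind, while the second shows that a fraction of a meter of water already takes a serious bite out of energetic photons, which no magnetic field can touch.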
The Fundations Program

What Is Fundations? Fundations is a systematic phonics, spelling, and handwriting program designed for students in kindergarten through third grade. Fundations is taught in the whole-class setting, but can also be taught in a small group or 1:1 setting for students who may require additional time for learning in these areas. Key features of this program include the following, presented from unit to unit and year to year:

- Instruction in letter formation, using the scripted language of “sky line, plane line, grass line, worm line”
- Instruction in phonological and phonemic awareness
- Instruction in sound-symbol correspondence, teaching students a letter's name, its key word, and its sound(s)
- Instruction in phonics, word study, and advanced word study
- Irregular (trick) word instruction (for both reading and spelling)
- Comprehension strategies
- Written composition (spelling and handwriting)

For more detailed information about this program visit: https://www.wilsonlanguage.com/programs/fundations/
Who Is Albert Einstein?

Albert Einstein (March 14, 1879 – April 18, 1955) was a German-born theoretical physicist who developed the theory of relativity, one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for its influence on the philosophy of science. He is best known to the general public for his mass-energy equivalence formula E = mc², which has been called “the most famous equation in the world.” He was awarded the 1921 Nobel Prize in Physics for his services to theoretical physics, and especially for his discovery of the law of the photoelectric effect, a major step in the development of quantum theory.

Near the beginning of his career, Einstein concluded that Newtonian mechanics was not enough to reconcile the laws of classical mechanics with the laws of the electromagnetic field. This led him to develop his special theory of relativity during his time at the Swiss Patent Office in Bern (1902-1909). He then realized that the principle of relativity could be extended to gravitational fields, and in 1916 he published a paper on general relativity presenting his theory of gravitation. He continued to deal with problems of statistical mechanics and quantum theory, which led to his explanations of particle theory and the motion of molecules. He also investigated the thermal properties of light, which laid the foundation of the photon theory of light. In 1917, he applied the general theory of relativity to model the structure of the universe.

Except for one year in Prague, Einstein lived in Switzerland between 1895 and 1914. He renounced his German citizenship in 1896 and received his academic diploma from the Swiss federal polytechnic school in Zurich (later the Eidgenössische Technische Hochschule, ETH) in 1900. He was granted Swiss citizenship in 1901, which he retained for the rest of his life. In 1905, he received a Ph.D. from the University of Zurich. In the same year, he published four groundbreaking research papers during his annus mirabilis (“miracle year”), which brought him to the attention of the academic world at the age of 26. Einstein taught theoretical physics in Zurich between 1912 and 1914 before leaving for Berlin, where he was elected to the Prussian Academy of Sciences.

In 1933, while Einstein was visiting the United States, Adolf Hitler came to power in Germany. Because of his Jewish background, Einstein did not return to Germany. He settled in the United States and became an American citizen in 1940. On the eve of the Second World War, he endorsed a letter to President Franklin D. Roosevelt alerting him to the potential development of a “new type” of extremely powerful bomb and recommending that the United States begin similar research; this eventually led to the Manhattan Project. Einstein supported the Allies, but he protested against the idea of using nuclear fission as a weapon. He signed the Russell-Einstein Manifesto with British philosopher Bertrand Russell, which stressed the dangers of nuclear weapons. He was affiliated with the Institute for Advanced Study in Princeton, New Jersey, until his death in 1955.

Einstein published more than 300 scientific papers and over 150 non-scientific works. His intellectual achievements and originality have made the word “Einstein” synonymous with “genius”.

Albert Einstein was born in Ulm, in the Kingdom of Württemberg in the German Empire, on March 14, 1879. His parents were Hermann Einstein, a salesman and engineer, and Pauline Koch. In 1880, the family moved to Munich, where Einstein's father and his uncle Jakob founded Elektrotechnische Fabrik J. Einstein & Cie, a company that manufactured electrical equipment based on direct current. Albert attended a Catholic elementary school in Munich for three years.
At the age of eight, he was transferred to the Luitpold Gymnasium (now known as the Albert Einstein Gymnasium), where he received advanced primary and secondary education until he left the German Empire seven years later. In 1894, Hermann and Jakob's company lost a bid to supply the city of Munich with electrical lighting, because they lacked the capital to convert their equipment from the direct current (DC) standard to the more efficient alternating current (AC) standard. The loss forced the sale of the Munich factory. In search of business, the Einstein family moved to Italy, first to Milan and a few months later to Pavia. When his family moved to Pavia, Einstein, then 15, stayed in Munich to finish his studies at the Luitpold Gymnasium. His father intended for him to pursue electrical engineering, but Einstein clashed with the school's authorities and resented its educational system and teaching methods. He later wrote that the spirit of learning and creative thought was lost in strict rote learning. At the end of December 1894, he traveled to Italy to join his family in Pavia, convincing the school to let him go with a doctor's note. During his time in Italy, he wrote a short essay entitled “On the Investigation of the State of the Ether in a Magnetic Field.”

Einstein excelled at mathematics and physics from a young age, reaching a mathematical level years ahead of his peers. The twelve-year-old Einstein taught himself algebra and Euclidean geometry over a single summer, and he also independently discovered his own original proof of the Pythagorean theorem at that age. A family tutor, Max Talmud, recalled that after he had given the twelve-year-old Einstein a geometry textbook, the boy had worked through the whole book in a short time and soon reached heights of mathematical genius that Talmud could no longer follow. Einstein started teaching himself calculus at 12, and as a fourteen-year-old he said he had “mastered integral and differential calculus.”
Reading Time: 11 minutes

Influenza is an infectious disease that affects people seasonally. It is caused by a virus and triggers a range of symptoms in the body, which can be mild to severe. You can start experiencing the symptoms about two days after being exposed to the virus. The signs may remain for at least a week, while the coughing can persist for two weeks. In most cases, resting and drinking fluids alone can help you bounce back, but in some cases you need medical treatment. Read ahead to learn about cold and flu, which is common in all parts of the world.

What Is Influenza?

Influenza is a viral infection that attacks your respiratory system, so you can expect the virus to attack your lungs, throat, and nose. The condition is commonly known as the flu, but you must never confuse it with stomach flu, which causes vomiting and diarrhea. The condition can resolve on its own without any medications; normally, healthy people can get back to their lives within a week. But influenza can cause deadly complications in some people. The categories of people who can develop deadly complications due to cold and flu include:

- Older adults (more than 65 years of age)
- Young children (under 2 years of age)
- Residents of long-term care facilities or nursing homes
- Pregnant women
- People with serious health problems (like kidney disease, asthma, heart disease, liver disease, and diabetes)
- Women who have recently given birth (two weeks postpartum)
- People with weak immune systems
- Obese people (with a BMI of 40 or higher)

Symptoms Of Influenza

You can experience symptoms similar to the common cold at first, so you may not realize the seriousness of the condition. The initial symptoms can include:

- Sore throat
- Runny nose

A cold develops slowly, while the flu strikes suddenly, and the symptoms associated with the flu can make you feel much worse. Influenza can trigger symptoms like:

- Achy muscles
- High fever (temperature over 38 degrees Celsius)
- Persistent, dry cough
- Feeling weak
- Nasal congestion
- Sore throat

You can treat the condition effectively at home with just bed rest and drinking more fluids, so healthy people have no need to see a doctor. But people with underlying health conditions need to see the doctor right away to avoid developing complications. When you get medical assistance within the first forty-eight hours of noticing the symptoms, you can reduce the severity and length of the illness and prevent more serious health complications.

Causes Of Influenza

A virus is the cause of cold and flu. Flu viruses travel in the air and enter the body: people with the infection transmit the virus through air droplets when they talk, cough, or sneeze. You can also contract the problem when you touch a surface containing germs, like a computer keyboard or telephone, and the virus transfers to your mouth, nose, or eyes. People infected by the virus can spread the problem to others. Affected people are contagious from about a day before the first symptoms appear and remain contagious for about five days after the symptoms begin. People with a weakened immune system, as well as children, are contagious for longer than others.

Viruses causing cold and flu constantly change over time, so new strains appear regularly. People who have had influenza in the past have antibodies in the body that will fight the particular strain of virus, or similar viruses, that they have contracted before.
Vaccinating yourself against the condition can also lessen the severity of an infection with a similar virus in the future. But you face risk from new strains of the virus: the antibodies cannot protect you from new sub-types, because a new virus is immunologically different from the ones you have encountered before.

Types Of Influenza Viruses

The virus is classified into four categories, namely A, B, C, and D. Human influenza viruses A and B affect people during the winter season. Type C infections cause mild respiratory complications and do not cause epidemics, while type A has produced new strains that have resulted in pandemics. Type D does not cause illness or infection in people; it only affects cattle.

Risk Factors Associated With Influenza

Seasonal flu and cold is a common condition that can affect anyone, but some factors may increase the risk of developing influenza or its associated complications. The factors increasing the risk of the problem are:

Age

Seasonal cold and flu targets young children more than adults. It can also affect older adults with a weak immune system, and people in nursing homes or living in military barracks have more chances of developing influenza.

Weak Immune System

People undergoing treatment for cancer can have a weak immune system. Similarly, individuals who take corticosteroids or anti-rejection drugs, or who suffer from HIV/AIDS, have a weak immune system. They can catch the flu and develop complications, because their immune system cannot fight the infection, which leads to serious issues.

Chronic Illnesses

If you have serious health issues like heart problems, diabetes, or asthma, your risk of developing cold and flu increases.

Pregnancy

Pregnant women can develop cold and flu; it is a particular problem in the second and third trimesters of pregnancy. Women who have given birth recently (two weeks postpartum) are also more likely to develop cold and flu, and the condition can result in severe complications.

Obesity

People with excess fat accumulation in the body face a high risk of cold and flu. Obese people, with a BMI (Body Mass Index) of 40 and above, can experience the problem.

Complications Due To Influenza

You need to get treatment for influenza if you are old. Seasonal flu may not cause a serious issue for young and healthy people; it only causes aching and severe discomfort, the symptoms tend to go away after a week or two, and it may not cause lasting problems or complications. But young children and older people, who have weaker immune systems, can suffer from complications. Without treatment, flu can lead to other serious issues, and among these complications, pneumonia is the one that can prove fatal: when people suffering from serious health issues contract pneumonia, it can cause death. Therefore, young children, people suffering from health issues, and older people need to see a doctor when they suffer from the flu. With medical assistance, you can prevent serious complications.

People suffering from symptoms like body aches, headaches, cough, sore throat, stuffy nose, fever, or chills need to see a doctor. It is important to know the underlying reasons for the symptoms; in some cases, you may think some other respiratory problem is causing the issue. So, seeing a doctor is necessary to diagnose cold and flu. You can also get infected by the virus without a fever, or suffer from the symptoms outside flu season.
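Since the obesity threshold above is expressed as a BMI, here is how that figure is computed: BMI is weight in kilograms divided by the square of height in meters. A minimal sketch, where the cutoff of 40 comes from this article and the example measurements are made up:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

# The high-risk threshold for flu complications cited in this article.
HIGH_RISK_BMI = 40.0

# Example figures, invented for illustration.
value = bmi(weight_kg=120.0, height_m=1.70)
print(f"BMI = {value:.1f}")                                  # BMI = 41.5
print("in the high-risk BMI category:", value >= HIGH_RISK_BMI)  # True
```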
Therefore, instead of self-diagnosing the problem, you need to see a doctor to identify the problem conclusively. The diagnostic tests that can reveal the problem are:

Physical Exam

Your doctor conducts a physical exam to check for the signs and symptoms. Based on the signs you exhibit, your doctor can order lab tests that detect the influenza virus conclusively.

Rapid Influenza Diagnostic Tests (RIDTs)

This is the most common test used to detect the cold and flu virus. The test looks for antigens (parts of the virus) in a sample taken from the back of your throat or nose. In most cases, you can get the test results within fifteen minutes, but it is not as accurate as other flu tests. Due to this inaccuracy, your doctor may still diagnose you with cold and flu even when the test results are negative.

Rapid Molecular Assays (RMA)

This test detects the genetic material of the virus affecting your body. It is more accurate than RIDTs and can produce results within fifteen to twenty minutes.

More Sensitive Flu Tests

More sensitive flu tests are available in specialized laboratories and hospitals, and they offer more accurate results. For the diagnostic tests mentioned above, your doctor takes a swab sample from the back of your throat or the inside of your nose and sends it to a lab for testing. Based on the test results and the symptoms you experience, your doctor can suggest treatment options.

Treatments For Influenza

Not everyone affected by cold and flu requires the same treatment. Each person has different symptoms, so your doctor decides on treatment based on your symptoms and their severity. One medicine is not suitable for all affected people; the treatment depends on the severity of the condition and the complications you could develop. The common treatment options are:

Cold And Flu Preparations

Several preparations aim to alleviate the symptoms caused by cold and flu. These preparations contain ingredients such as:

- Analgesics: to provide relief from pain due to flu and cold.
- Antihistamines: effective in treating a runny nose and stopping sneezing.
- Decongestants: offer relief from a blocked or stuffy nose.
- Cough suppressants: keep a dry cough under control.

Your doctor can suggest preparations that you can buy from pharmacies.

Antiviral Medication

In some cases, the doctor recommends taking antiviral medication. You need to take the medication as soon as you notice the symptoms (within one or two days). It is usually recommended for people in the high-risk category, who suffer from health complications like heart problems, diabetes, asthma, or immunity issues and can therefore develop serious complications due to cold and flu. The common medications prescribed for the condition are:

- Oseltamivir, an oral medication. In rare cases, the medication has been associated with delirium or self-harm behaviors.
- Zanamivir, inhaled through a device that looks similar to an asthma inhaler. It is not suggested for people suffering from respiratory problems like lung disease or asthma.

You need to take the medication early to shorten the duration of your illness; shortening it by a day or two can prevent the development of serious complications. Remember to take the medication with food to prevent side effects like nausea or vomiting. Older antiviral drugs like rimantadine and amantadine have been discontinued because certain strains of influenza have become resistant to them.
You also need to take good rest (preferably bed rest) and drink plenty of fluids to treat the flu.

Home Remedies To Manage Influenza

You need to take some home care measures when you come down with cold and flu. They will ease the symptoms and offer you relief.

Drink Plenty Of Fluids

You need to keep yourself hydrated to fight the symptoms associated with the flu. Drink fluids throughout the day to prevent dehydration and other complications.

Rest

You need to rest while your body is fighting the flu. Try to sleep, as it will help your immune system fight the infection.

Take Pain Relievers

If you feel pain and discomfort due to influenza, consider taking over-the-counter pain medication. Aching in the body is a common sign associated with flu, so you can take acetaminophen or ibuprofen to combat the pain associated with the problem. Small children or teens recovering from flu must never take aspirin: it can result in Reye's syndrome, a condition that is rare but can cause fatal complications.

Is it possible to prevent cold and flu? You need to make an effort to ensure you avoid catching the virus. You may think that getting cold and flu just means taking a couple of weeks out of school or work, but not all people can get back to their normal lives after the illness. People with a weakened immune system, or with serious health conditions like diabetes, heart problems, or asthma, can suffer serious complications. So the trick is to prevent the virus from attacking in the first place. Here are some tips to prevent the virus from attacking your body.

Get Vaccinated

Vaccination is available to protect you from the condition, and getting the annual flu vaccination can prevent its occurrence. Anyone who is six months old or older can get the flu vaccination. The flu vaccination protects you from three to four types of influenza viruses, so it offers protection against the viruses that most commonly affect your health. The cold and flu virus is the only respiratory virus that you can prevent through vaccination, so talk to your doctor about getting it.

Keep Hands Clean

It is easy to pick up the virus by touching any surface, like a table, that has come in contact with a sick person. Flu germs can linger on hard surfaces like desks, counters, doorknobs, and faucets, and the virus can stay alive for up to eight hours. When you touch such a surface, the virus can transfer to your hands. You need to keep your hands clean, so wash your hands with soap and water to clear the germs. You can also use hand sanitizer to avoid spreading germs; try to carry disinfectant wipes or hand sanitizer to clean your hands.

Cover Your Mouth And Nose

When you come in contact with a sick person, the germs can attack you. When an infected person coughs or sneezes, they send out a spray laden with the virus, and the droplets can enter your body through the nose or an open mouth. Therefore, cover your mouth and nose when visiting a sick person.

Avoid Touching Your Face

When you touch a surface laden with germs and then put your hands on your face, the result is a viral attack. Touching your face is the easiest way for germs to get inside your body, so try not to touch your mouth, nose, or eyes without first cleaning your hands.

Avoid Sharing

Sharing is caring, and it is a wonderful concept, but it is not advisable to follow the motto during flu season. You need to avoid sharing glasses, plates, utensils, or anything else that comes in contact with your mouth, and always wash utensils and dishes with hot water and soap.
Strengthen Your Immune System

To ward off the virus, you need to strengthen your immune system. Follow the pointers suggested below:

- Eat a healthy diet with all the nutrients to ensure your immune system stays strong. A well-balanced diet can prevent diseases from affecting you.
- Add exercise to your daily routine. Daily physical activity keeps your immune system strong, so you can expect a speedy recovery from illness.
- Sleep well for at least eight hours a day. Resting gives your body a chance to heal and increases immunity.
- Quit smoking, as it increases your chances of getting respiratory problems and makes you more vulnerable to viral attacks. Talk to your doctor about quitting the habit.

You can follow these steps to keep yourself protected from the virus causing cold and flu and avoid the discomfort associated with the problem.

Control The Spread Of Infection

Although you may put effort into preventing cold and flu, it may not give you 100% results; the steps cannot completely keep the problem away. Once you have contracted the virus, you need to take measures to reduce its spread. As a responsible person, you need to ensure others are not infected by the virus through you, so follow the steps suggested below to reduce the spread of infection:

Wash Your Hands

You need to keep your hands clean to avoid transferring the virus to other surfaces. Wash your hands with soap and water, or carry an alcohol-based sanitizer, to avoid transferring germs.

Contain Your Sneezes And Coughs

Cover your mouth and nose when you sneeze or cough. This prevents the spread of the virus through droplets to others standing near you. Cough or sneeze into the inner crook of your elbow or use a tissue, and remember to throw the tissue away after use.

Avoid Crowds

Flu spreads easily when people gather together, so you need to avoid places like public transportation, schools, offices, childcare centers, and auditoriums. Avoiding such places not only reduces the chance of spreading the virus but also reduces the chance of infection. If you are feeling sick, stay at home for at least a day (24 hours); once the fever has reduced, the possibility of infecting others is lower.

Influenza is a severe condition that lasts longer than the common cold. In most cases, healthy people can recover from cold and flu in about one or two weeks. But people with a weak immune system can suffer life-threatening problems due to cold and flu, and without medical treatment, people suffering from an underlying health condition, young or old, can suffer fatal complications. Therefore, you need to get medical assistance to manage the problem. If you have a cold and flu, take rest and drink plenty of fluids to get well. With proper care, you can get back to your normal life within a few weeks.