Understand the importance of Enzymes for Digestion! - Small group learning kit - Student copymasters and teacher guide included All the food in the world is of no use if the human body does not have the ability to extract necessary nutrients from it. With this activity, students will be able to expose three nutrients (carbohydrates, proteins, and lipids) to different digestive enzymes. These samples will be compared to nutrients to which no enzymes are added and chemical tests will be used to determine if the enzymes were effective in digesting the compounds. Upon completion, students will not only understand the importance of the digestive system but also the vital role enzymes play in releasing nutrients from food and converting them to a form usable by the body.
The right of citizens of the United States to vote shall not be denied or abridged by the United States or by any State on account of race, color, or previous condition of servitude. The Congress shall have power to enforce this article by appropriate legislation.

The Fifteenth Amendment to the United States Constitution was passed by Congress on February 26th, 1869, and ratified by the States on February 3rd, 1870. Although many history books say that it “conferred” or “granted” voting rights to former slaves and anyone else who had been denied voting rights “on account of race, color, or previous condition of servitude,” a close reading of the text of the amendment reveals that its actual force was more idealistic. It basically affirmed that no citizen could rightfully be deprived of the right to vote on the basis of that citizen’s race, color or previous condition of servitude – in other words, that such citizens naturally had the right to vote. That is how “rights” should work, after all; if something is a right, it does not need to be conferred or granted and cannot be infringed or denied.

It is worth noting that the Fifteenth Amendment only clarified the voting rights of all male citizens. States have the power to define who is entitled to vote, and at the time of the signing of the Constitution, that generally meant white male property owners. The States gradually eliminated the property ownership requirement, and by 1850, almost all white males were able to vote regardless of whether or not they owned property. A literacy test for voting was first imposed by Connecticut in 1855, and the practice gradually spread to several other States throughout the rest of the 19th Century, but in 1915, the Supreme Court ruled that literacy tests were in conflict with the Fifteenth Amendment.

Section 2 of the Fifteenth Amendment sets forth the means of enforcing the article: by “appropriate legislation.” It was not until nearly one hundred years later, with the passage of the Voting Rights Act of 1965, that the enforcement of the Fifteenth Amendment was sufficiently clarified that no State could erect a barrier such as a literacy test or poll tax that would deny any citizen the right to vote, as a substitute for overtly denying voting rights on the basis of race or ethnicity. The Civil Rights Act of 1957 had taken a step in that direction, but practices inconsistent with the Fifteenth Amendment remained widespread.

The Nineteenth Amendment, ratified in 1920, had granted women the right to vote. The only remaining legal barrier for citizens is age, and that barrier was lowered to 18 by the Twenty-Sixth Amendment, ratified in 1971. Many people do not realize that a State could permit its citizens to vote at an age lower than 18, though none has.

The moral inconsistency between a Declaration of Independence that proclaimed that all men (and, by widely accepted implication, all women) were created equal, and a Constitution that tolerated inequality based on race and gender, required more than 150 years to be resolved. The ratification of the Fifteenth Amendment in 1870 was one of the major milestones along that long path.

Colin Hanna is the President of Let Freedom Ring, a public policy organization promoting Constitutional government, economic freedom, and traditional values. Let Freedom Ring can be found on the web at www.LetFreedomRingUSA.com.
It is sometimes difficult to understand the behavior of a function given only its definition; a visual representation or graph can be very helpful. A graph is a set of points in the Cartesian plane, where each point (x, y) indicates that y = f(x). In other words, a graph uses the position of a point in one direction (the vertical axis or y-axis) to indicate the value of f(x) for a position of the point in the other direction (the horizontal axis or x-axis).

Functions may be graphed by finding the value of f(x) for various x and plotting the points (x, f(x)) in a Cartesian plane. For the functions that you will deal with, the parts of the function between the points can generally be approximated by drawing a line or curve between the points. Extending the function beyond the set of points is also possible, but becomes increasingly inaccurate.

Graphing linear functions is easy to understand and do. Because two points determine a line, only two points on the graph are needed to graph a linear function. Conversely, we can write down the equation of a linear function if we know only two points that lie on its graph. The following section mainly discusses the different forms of linear function notation so that you can easily identify or graph the function.

Plotting points like this is laborious. Fortunately, many functions' graphs fall into general patterns. For a simple case, consider functions of the form

$$f(x) = 3x + b$$

The graph of f is a single line, passing through the point (0, b) with slope 3. Thus, after plotting that point, a straightedge may be used to draw the graph. This type of function is called linear, and there are a few different ways to present a function of this type.

The slope is the backbone of linear functions because it shows how much the output of a function changes when the input changes. For example, if the slope of a function is 2, then when the input of the function increases by 1 unit, the output of the function increases by 2 units. Now, let's look at a more mathematical example. Consider a function of the form y = -5x + b. What does the number -5 mean? It means that when x increases by 1, y decreases by 5. Using mathematical terms:

$$\frac{\Delta y}{\Delta x} = \frac{-5}{1} = -5$$

It is easy to calculate the slope because the slope is like the speed of a vehicle. If we divide the change in distance by the corresponding change in time, we get the speed. Similarly, if we divide the change in y by the corresponding change in x, we get the slope. If given two points, (x1, y1) and (x2, y2), we may then compute the slope of the line that passes through these two points. Remember, the slope is determined as "rise over run." That is, the slope is the change in y-values divided by the change in x-values. In symbols:

$$m = \frac{y_2 - y_1}{x_2 - x_1}$$

Interestingly, there is a subtle relationship between the slope m and the angle θ between the graph of the function and the positive x-axis. The relationship is:

$$m = \tan\theta$$

It is a simple relationship, but it is easy to overlook.

When we see a function presented as

$$y = mx + b$$

we call this presentation the slope-intercept form. This is because, not surprisingly, this way of writing a linear function involves the slope, m, and the y-intercept, b.
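As a quick check on the "rise over run" formula, here is a small Python sketch (the helper names are ours, purely for illustration) that computes the slope of the line through two points and the matching slope-intercept form:

```python
def slope(p1, p2):
    """Rise over run: (y2 - y1) / (x2 - x1)."""
    (x1, y1), (x2, y2) = p1, p2
    if x2 == x1:
        raise ValueError("vertical line: slope is undefined")
    return (y2 - y1) / (x2 - x1)


def slope_intercept(p1, p2):
    """Return (m, b) such that the line through p1 and p2 is y = m*x + b."""
    m = slope(p1, p2)
    x1, y1 = p1
    b = y1 - m * x1  # from y1 = m*x1 + b
    return m, b


# The line through (1, 4) and (3, 10) has slope 3 and y-intercept 1.
print(slope_intercept((1, 4), (3, 10)))  # (3.0, 1.0)
```

Swapping the two points gives the same slope, since both the rise and the run change sign.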
Example 1: Graph the function y = 3x + b for a given intercept b. The slope of the function is 3, and it intercepts the y-axis at the point (0, b). In order to graph the function, we need another point. Since the slope of the function is 3, the function also passes through (1, b + 3). Knowing that the function goes through the points (0, b) and (1, b + 3), the function can be easily graphed.

Example 2: Now, consider another linear function that goes through two known points, one of which lies on the y-axis, say (0, b) and (x1, y1). What is the equation for this function? The slope can be calculated with the formula mentioned above:

$$m = \frac{y_1 - b}{x_1 - 0}$$

And since the y-axis interception is (0, b), we know the intercept directly. Thus, the equation of this linear function should be

$$y = mx + b$$

If someone walks up to you and gives you one point and a slope, you can draw one line and only one line that goes through that point and has that slope. Said differently, a point and a slope uniquely determine a line. So, if given a point (x1, y1) and a slope m, we present the graph as

$$y - y_1 = m(x - x_1)$$

We call this presentation the point-slope form. The point-slope and slope-intercept forms are essentially the same. In the point-slope form we can use any point the graph passes through, whereas in the slope-intercept form we use the y-intercept, that is, the point (0, b). The point-slope form is very important. Although it is not used as frequently as its counterpart the slope-intercept form, the concept of knowing a point and drawing the line in the direction of the slope will be encountered when we go into vector equations for lines and planes in future chapters.

Example 1: If a linear function goes through two known points (x1, y1) and (x2, y2), what is the equation for this function? The slope is:

$$m = \frac{y_2 - y_1}{x_2 - x_1}$$

Since we know two points, the following answers are all correct:

$$y - y_1 = m(x - x_1) \qquad \text{and} \qquad y - y_2 = m(x - x_2)$$

The two-point form is another form to write the equation for a linear function. It is similar to the point-slope form. Given points (x1, y1) and (x2, y2), we have the equation

$$y - y_1 = \frac{y_2 - y_1}{x_2 - x_1}(x - x_1)$$

This presentation is the two-point form. It is essentially the same as the point-slope form except that we substitute the expression (y2 - y1)/(x2 - x1) for the slope m. However, this expression is not widely used in mathematics because in most situations x1, y1, x2 and y2 are known coordinates, and it would be redundant to write down the bulky quotient instead of a simple symbol for the slope.

The intercept form looks like this:

$$\frac{x}{a} + \frac{y}{b} = 1$$

By writing the function in the intercept form, we can quickly determine the x- and y-axis intercepts, (a, 0) and (0, b). When we discuss planes in 3D space, this form will be quite useful for determining the axis intercepts.

To graph a quadratic function, there is the simple but work-heavy way, and there is the complicated but clever way. The simple way is to substitute the independent variable x with various numbers and calculate the output y. After some substitutions, plot those points and connect them with a curve. The complicated way is to find special points, such as intercepts and the vertex, and plot them. The following section is a guide to finding those special points, which will be useful in later chapters. Actually, there is a third way, which we will discuss in Chapter 1.5.

Quadratic functions are functions that look like this:

$$y = ax^2 + bx + c$$

where a, b and c are constants. The constant a determines the concavity of the function: if a > 0, the graph is concave up; if a < 0, it is concave down. The constant c is the y-coordinate of the y-axis interception. In other words, this function goes through the point (0, c).

The vertex form has its advantages over the standard form. While the standard form can determine the concavity and the y-axis interception, the vertex form can, as the name suggests, determine the vertex of the function. The vertex of a quadratic function is the highest or lowest point on the graph of the function, depending on the concavity. If a > 0, the vertex is the lowest point on the graph; if a < 0, the vertex is the highest point on the graph. The vertex form looks like this:

$$y = a(x - h)^2 + k$$

where a, h and k are constants. The vertex of this function is (h, k) because when x = h, y = k. If a > 0, k is the absolute minimum value that the function can achieve. If a < 0, k is the absolute maximum value that the function can achieve. Any standard form can be converted into the vertex form.
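For reference, the conversion from standard form to vertex form is just completing the square, written out once in general:

$$y = ax^2 + bx + c = a\left(x^2 + \frac{b}{a}x\right) + c = a\left(x + \frac{b}{2a}\right)^2 + c - \frac{b^2}{4a}$$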
That is, in terms of the standard-form constants, the vertex form looks like this:

$$y = a\left(x + \frac{b}{2a}\right)^2 + c - \frac{b^2}{4a}$$

where a, b and c are the constants in the standard form.

The factored form can determine the x-axis intercepts, because the factored form looks like this:

$$y = a(x - r_1)(x - r_2)$$

where a is a constant and r1, r2 are the solutions of the equation ax^2 + bx + c = 0. Thus, it can be determined that the function passes through the points (r1, 0) and (r2, 0). However, only certain functions can be written in this form. If the quadratic function has no x-axis intercept, it is impossible to write it in the factored form.

Example 1: What is the vertex of a quadratic function given in standard form? The equation can be transformed into the vertex form very easily by completing the square, as shown above. Thus, the vertex is

$$\left(-\frac{b}{2a},\; c - \frac{b^2}{4a}\right)$$

Example 2: The image on the right shows the graph of a quadratic function. Describe the meaning of the colored labels, which mark important properties of a quadratic function.

The first label is the equation of the quadratic function in standard form, which identifies the constants a, b and c. Since there are two x-axis intercepts, we can tell that the discriminant b^2 - 4ac > 0.

The next labels are the coordinates of the two x-axis intercepts. Knowing these coordinates, the function can be written in its factored form. If you have difficulties deriving the quadratic formula or understanding the expression (-b ± √(b^2 - 4ac))/(2a), see Quadratic function.

Another label marks the vertex of the quadratic function. Because a > 0, the vertex is the lowest point on the graph. Since the vertex is known, we can also write the function in the vertex form; although that form does not look like the equation we've just discussed, note that h = -b/(2a) and k = c - b^2/(4a).

The vertical line x = -b/(2a) is the axis of symmetry: the graph of the function is symmetric about this line. In other words, points equally far to the left and to the right of this line have the same y-value.

The remaining labeled point and line will be discussed in the next chapter (1.5). They are the focus and the directrix, respectively.

If you can skillfully and quickly determine those special points, graphing quadratic functions will be much less of a torture.

Exponential and Logarithmic Functions

Exponential and logarithmic functions are inverse functions of each other. Take the exponential function y = a^x for example. The inverse function of y = a^x is y = log_a(x), which is a logarithmic function. Since, geometrically, the graph of the inverse function is obtained by flipping the graph of the original function over the line y = x, we only need to know how to graph one of these functions.
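Pulling the quadratic "special points" together, here is a small Python sketch (an illustrative helper of our own, not part of the text) that reports the y-intercept, vertex, and any real x-intercepts of y = ax^2 + bx + c:

```python
import math


def quadratic_special_points(a, b, c):
    """Return the y-intercept, vertex, and real roots of y = a*x**2 + b*x + c."""
    y_intercept = (0, c)                  # the graph always passes through (0, c)
    h = -b / (2 * a)                      # x-coordinate of the vertex
    k = c - b ** 2 / (4 * a)              # y-coordinate of the vertex
    disc = b ** 2 - 4 * a * c             # discriminant decides the x-intercepts
    if disc > 0:
        r = math.sqrt(disc)
        roots = ((-b - r) / (2 * a), (-b + r) / (2 * a))
    elif disc == 0:
        roots = (h,)                      # the vertex touches the x-axis
    else:
        roots = ()                        # no real x-intercepts, so no factored form
    return y_intercept, (h, k), roots


# y = x^2 - 2x - 3 = (x + 1)(x - 3): y-intercept (0, -3), vertex (1, -4), roots -1 and 3.
print(quadratic_special_points(1, -2, -3))
```

An empty tuple of roots corresponds to the case described above in which the parabola has no x-axis intercepts and therefore no factored form.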
The Vocabulary Cartoons series uses mnemonic memory techniques to learn vocabulary. Simply put, by associating a word with something familiar, the words are locked in place quicker. The books use pictures and rhyming words to facilitate this process. There is an old Chinese proverb which says, “What you see once is worth what you hear a hundred times.” Translated into English, that proverb became, “A picture paints a thousand words.” If you combine that with silly rhymes and crazy pictures, you get Vocabulary Cartoons! Here are some quotes from the book and website: “In recent years neuroscientists have uncovered astonishing facts about how the brain learns, stores, and retrieves information! The use of mnemonic applications is high on the list of the way the brain learns most naturally and efficiently.” “Vocabulary Cartoons works on the principle of mnemonics. A mnemonic is a device that helps you remember something by associating what you are trying to remember with something you already know. A mnemonic device could be in many different forms like; rhymes, songs, pictures to name a few. For example, “Columbus sailed the ocean blue in fourteen hundred ninety-two” is a classic mnemonic rhyme which helps you remember when Columbus discovered America.” “Following the mnemonic principle of association, Vocabulary Cartoons link together an auditory (rhyming) word association and a visual association in the form of a humorous cartoon. These powerful mnemonics help students retain the meanings of words longer and with less effort than trying to memorize definitions straight out of a dictionary.” Here are some examples from the books: Students learn hundreds of SAT level words faster and easier with powerful rhyming and visual mnemonics. In independent school tests, students with Vocabulary Cartoons learned 72% more words than students with traditional rote memory study materials and had 90% retention. Contains 290 SAT words with 29 review quizzes consisting of matching and fill-in-the-blank problems. I have been a huge fan of Vocabulary Cartoons for a couple of years now, so when TOS announced we would be reviewing the SAT Word Power edition, I couldn’t be more excited. I had already purchased a copy to use with my 10th Grader this year, along with the next book in the series, Vocabulary Cartoons II, SAT Word Power. I tend to be a very visual learner, so this is right up my alley! I still remember Guerilla Gorillas from a couple of years ago! The thought of gorillas with guns running through the woods is just hard to forget! That’s the idea of Vocabulary Cartoons. The kids have gotten more than a couple good laughs from the cartoon format, and I never have to remind them to work on their vocabulary. Each of the 29 sections contain 10 vocabulary words followed by a one page review/quiz. The review contains 10 matching questions and 10 fill-in-the-blank questions. In addition to reviewing the words/cartoons daily, my daughter writes sentences for each word in the section. The last day of the week, she completes the review. Here’s a link to the word list for this book: Vocabulary Cartoons: SAT Word Power Without any doubts, I would give this book all five rocks! These books are fun and entertaining and can be used by the whole family. 
If you’re looking for a new approach to learning vocabulary, I would highly recommend Vocabulary Cartoons: SAT Word Power. Other products are available, including but not limited to: For more reviews of this product, head over to The Old Schoolhouse Homeschool Crew Blog at: Disclaimer: Members of the TOS Crew were given this product for free to use with their family. In return, they were asked to post an honest, informational review of this product. No monetary compensation was received.
Climate change is a difficult subject for adults as well as children to wrap their brains around. Of course, there are political, cultural, and even neurological reasons for that. But there may be a simpler explanation, too. The results of climate change are incredibly vast and varied. It can be a challenge to think of climate change as a singular concept when its manifestations range from rising sea levels to species decline to intense storms. Consider focusing on a small part of climate change's impact as an entryway to understanding.

Check out this video that shows the difference in outcomes between melting land ice and sea ice. Using two medium-sized plastic containers, water, blue food coloring, clay, pushpins, and ice cubes, the experiment demonstrates how melting land ice raises sea levels and erodes coasts, while melting sea ice has a much lesser effect. Just have a few moments? Show the video to your students to get them thinking. Have a bit longer? Encourage your class to take on the experiment themselves.

Before the video or experiment: Ask the class to predict what they think happens when large amounts of natural ice melt. Do they imagine there would be a difference between melting land and sea ice? Why or why not? How do they think melting ice may someday affect the area where they live (or not)?

After the video or experiment: Challenge your students to use their reasoning skills to explain why the two outcomes were so different. Then, let them explore this interactive Nat Geo map that shows how the Earth would change if all its land ice melted into the sea.

What other experiments and activities have you used in your class to help students visualize climate change? Let us know in the comments or by emailing [email protected]!
What is the Pythagorean Theorem?

The Pythagorean theorem states that the area of the square built on the hypotenuse of a right triangle is equal to the sum of the areas of the squares built on the other two sides. In other words, the Pythagorean theorem explains how the three sides of a right triangle are related in Euclidean geometry.

Pythagorean Theorem Formula

If the two shorter sides of a right triangle are "a" and "b" and c is the hypotenuse, the Pythagorean theorem formula will be: $$a^2 + b^2 = c^2$$

How do you find the Pythagorean Theorem?

To apply the Pythagorean theorem manually you need to:
- Put the two lengths into the Pythagorean theorem equation. For example, the value of a is 6, b is 10, and we want to determine the length of the hypotenuse c.
- After you put the values into the formula, you have 6² + 10² = c²
- Square each of these terms: 36 + 100 = 136 = c²
- Now, take the square root of both sides of the formula to get c = √136 ≈ 11.66

Use the Pythagorean theorem calculator to avoid manual calculations. Learn how to find the area of the shaded region using our online calculator.

What is a Hypotenuse?

A hypotenuse is the longest side of a right-angled triangle. The hypotenuse formula is the same as the Pythagorean theorem formula, which is $$a^2 + b^2 = c^2$$ The hypotenuse equation is the rearrangement of the Pythagorean theorem to solve for the hypotenuse c. Take the square root of both sides of the formula a² + b² = c² and determine c. When we do so, we obtain c = √(a² + b²). It follows directly from the Pythagorean theorem and can be calculated using the hypotenuse calculator.

What is the Pythagorean Theorem Calculator?

The Pythagorean theorem calculator provides a convenient alternative to manual calculation. It saves a lot of time and provides accurate results. The Pythagorean calculator calculates the length of any omitted side of a right triangle if we have the lengths of the remaining two sides. It solves Pythagorean theorem problems while calculating them accurately.

How to use the Pythagorean Theorem Calculator?

Use our Pythagorean theorem calculator if you are not familiar with the manual calculation. Just feed the lengths into the 2 fields and click the "CALCULATE" button. The Pythagorean theorem calculator will instantly give you the value of the hypotenuse. You can use our other online tools like the integral calculator and derivative calculator to learn integration and differentiation.
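As a cross-check on the manual steps above, here is a small Python sketch (hypothetical helper functions, not the site's calculator) that solves for the hypotenuse or for a missing shorter side:

```python
import math


def hypotenuse(a, b):
    """c = sqrt(a^2 + b^2): the Pythagorean theorem solved for the hypotenuse."""
    return math.sqrt(a ** 2 + b ** 2)


def missing_leg(c, a):
    """b = sqrt(c^2 - a^2): the rearrangement for a missing shorter side."""
    if c <= a:
        raise ValueError("the hypotenuse must be the longest side")
    return math.sqrt(c ** 2 - a ** 2)


print(round(hypotenuse(6, 10), 2))   # 11.66, matching the worked example above
print(round(missing_leg(13, 5), 2))  # 12.0
```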
With global warming as the topic of discussion, we'll look at the description of the term global warming, the causes and characteristics of global warming, the effects, the advantages and disadvantages, the adaptations and mitigation, and global warming in relation to the Bible.

The term 'global warming' refers to the increase in the average atmospheric temperature of the Earth. The process, as it is known today, refers to the earth's heating that has consistently occurred since the mid-20th century and its highly likely continuation, hence the use of the word 'warming' in its present-continuous form. The earth's atmospheric temperature increased by 0.74 ± 0.18 °C (1.33 ± 0.32 °F) during the 100 years before 2005 (Michaels, J. P. 2007). The global body that was set up to investigate and formulate recommendations on global warming, the Intergovernmental Panel on Climate Change (IPCC), concludes that most of the temperature increase since the mid-twentieth century is a result of the increase in greenhouse gas emissions. Natural occurrences such as solar variation and volcanoes probably had an almost negligible warming effect from pre-industrial times to 1950 and a subsequent cooling effect from 1950 onward. The majority of scientists working on climate change agree with the IPCC's findings and conclusions on the subject (Weart, R. S. 2003).

Current scientific examinations and observations show that global atmospheric temperature will likely rise by a significant 1.1 to 6.4 °C (2.0 to 11.5 °F) over the course of the twenty-first century (Lafreniere, F. G. 2008). There are two conspicuous uncertainties in these projections. One is the uncertainty in the estimate itself, which results from using assumed figures (estimates) of future greenhouse gas emissions and from using models with differing climate sensitivity. The second is how warming and related changes will vary from region to region around the globe. Warming is expected to continue for hundreds of years even if man controls his production of greenhouse gases; this results from the large heat-absorption capability of the oceans. Increasing global temperature will cause sea levels to rise and will change the amount and pattern of precipitation in the form of rain, snow and fog, likely including an expansion of the subtropical desert boundaries, among other notable effects, as will be assessed herein (Johansen, B. E. 2006).

Most national governments have signed and ratified the Kyoto Protocol, which aims at reducing greenhouse gas emissions to a controllable extent. The Kyoto Protocol has brought with it unending debates over whether a single body should attempt to control greenhouse emissions by dictating the terms of industrial functions of the signatory countries. As the debate rages on, the earth is continuously and rapidly deteriorating (Gore, A. 2007).

CAUSES OF GLOBAL WARMING

The causes of global warming present another raging debate, over when global warming began. The scientist William Ruddiman has argued that human influence began about 8,000 years ago with the onset of forest clearing to provide agricultural land and was enhanced 3,000 years later with the start of Asian rice irrigation. Ruddiman's findings and conclusions have been a point of argument and division in the scientific world. Below are some of the causes of global warming that have caused less divergence than convergence among the various parties involved (Gore, A. 2007).
Greenhouse Gas Emissions

Commonly known as the greenhouse effect, this is defined as the process by which absorption and emission of infrared radiation by atmospheric gases warm a planet's atmosphere (Langholz, A. J. and Turner, K. 2003). The scientific consensus is that the increase in atmospheric greenhouse gases due to human activity caused most of the warming observed since the start of the industrial era, and the warming cannot be satisfactorily explained by natural causes alone. This attribution is clearest for the most recent 50 years, this being the period during which most of the increase in greenhouse gas concentrations took place and for which the most complete measurements exist (Houghton, J. T. 2004).

The greenhouse effect was discovered by Joseph Fourier in 1824 and first investigated quantitatively by Svante Arrhenius in 1896. The existence of the greenhouse effect as such is not disputed. The question is instead how the strength of the greenhouse effect changes when human activity increases the atmospheric concentrations of particular greenhouse gases such as carbon dioxide (CO2). Naturally occurring greenhouse gases have a mean warming effect of about 33 °C (59 °F), without which Earth would be uninhabitable. On Earth the major greenhouse gases are water vapor, which causes about 36–70 percent of the greenhouse effect; carbon dioxide (CO2), which causes 9–26 percent; methane (CH4), which causes 4–9 percent; and ozone, which causes 3–7 percent.

Human activity since the industrial revolution has increased the atmospheric concentration of various greenhouse gases, leading to increased radiative forcing from CO2, methane, ozone, CFCs and nitrous oxide. The atmospheric concentrations of CO2 and methane have increased by 36% and 148% respectively since the beginning of the industrial revolution in the mid-1700s. These levels are considerably higher than at any time during the last 650,000 years, the period for which reliable data has been extracted from ice cores. From less direct geological evidence it is believed that CO2 values this high were last seen approximately 20 million years ago. Fossil fuel burning has produced approximately three-quarters of the increase in CO2 from human activity over the past 20 years. Most of the rest is due to land-use change, in particular deforestation. CO2 concentrations are expected to continue to rise due to ongoing burning of fossil fuels and land-use change. The rate of rise will depend on uncertain economic, sociological, technological, and natural developments. Fossil fuel reserves are sufficient to reach such levels and continue emissions past 2100 if coal, tar sands or methane clathrates are extensively exploited.
1. WASH YOUR HANDS: This is still the best way to prevent colds and flu. Wash your hands frequently with soap and warm water for at least 15 seconds. Use hand sanitizer when washing facilities are not available.
2. USE A TISSUE INSTEAD OF A HANDKERCHIEF: Wipe or blow your nose and immediately throw the tissue away. Handkerchiefs continually spread germs to your hands and face.
3. DON’T TOUCH YOUR FACE: Touching your eyes, nose or mouth is a fast way for germs to get into your body.
4. COUGH AND SNEEZE AWAY FROM OTHERS: Instead of coughing or sneezing into your hands, turn away from others, cough or sneeze into your sleeve, or use a tissue.
5. WATCH THAT MOUTH: Avoid placing objects such as pens or pencils into your mouth. Also avoid licking your fingers.
6. TAKE CARE AT WORK/IN THE CLASSROOM: Clean your area and phone often. Wash your hands after using the bathroom, lunchroom, copy/fax machine, and any other space that is used by others. Some germs can survive on objects for hours or even a few days.
7. BE AWARE OF COMMUNITY SPACE: Doorknobs, light switches, refrigerator doors, bathroom and kitchen counters, telephones, computers, and remote controls are all places germs can reside.
8. USE HAND SANITIZERS: Keep liquid or gel hand sanitizers or anti-bacterial wipes handy.
9. TEACH YOUR CHILDREN: Children are very susceptible to colds. Teach them to wash their hands often with soap and warm water. Saying the ABCs while washing their hands ensures they wash long enough (at least 15 seconds).
10. DON’T SHARE CUPS: Use paper cups in the bathrooms and kitchen.
11. DON’T SHARE FOOD UTENSILS: This may be difficult for most to do at home, but it is important so that you do not pass germs back and forth.
12. USE DISPOSABLE PRODUCTS: Germs can live on towels and sponges for hours, so use paper towels in the kitchen and bathrooms or wash bathroom hand towels often. Disinfect sponges by running them through the dishwasher and replace them frequently.
Write a random number generator

True Random Numbers

You may be wondering how a computer can actually generate a random number. Surprising as it may seem, it is difficult to get a computer to do something by chance: a computer follows its instructions blindly and is therefore completely predictable. That said, I enjoy clean examples even for easy ideas, so if you do too, then read on!

Computers can generate truly random numbers by observing some outside data, like mouse movements or fan noise, which is not predictable, and creating data from it. This is known as entropy. You might also try to get a good seed from details of the way the user uses your program: even two users trying to do exactly the same thing will most likely not be exact, clicking a few pixels off or typing ever so slightly slower. However, most studies find that human subjects have some degree of non-randomness when attempting to produce a random sequence. Some modern processors include a hardware generator (Intel's RdRand, for example): the chip uses an entropy source on the processor and provides random numbers to software when the software requests them. Another example would be a TRNG hardware random number generator, which uses an entropy measurement as a hardware test and then post-processes the random sequence with a shift-register stream cipher. The behavior of these generators often changes with temperature, power supply voltage, the age of the device, or other outside interference. Trust is also an issue: if RdRand contained an NSA backdoor, the government would be able to break encryption keys that were generated with only data supplied by that random number generator.

Random Numbers

Write a random number generator that generates random numbers between 1 and 6.

Random numbers on a computer are not really random. To get the next number, we have to remember something: in our case, the last answer from the previous time. To generate a different sequence of random numbers we use a "seeding" function. If the seed comes from the clock, then someone who knows the code of the program and the time it is run at can predict the number generated, but it won't be easy; they could then predict the output for every second, and the range of numbers generated. Imagine the problems you already have finding errors in your code: a flaw in a random number routine can be just as hard to notice. And the generators that ship with your language were created by people who probably have more time to think about random numbers than you do!

Other considerations

Generators are often designed to provide a random byte or word, or a floating-point number uniformly distributed between 0 and 1. Some 0-to-1 RNGs include 0 but exclude 1, while others include or exclude both. Random numbers uniformly distributed between 0 and 1 can be used to generate random numbers of any desired distribution by passing them through the inverse cumulative distribution function (CDF) of the desired distribution (see inverse transform sampling); inverse CDFs are also called quantile functions. In a simple integer generator, an upperRange parameter defines the maximum value, and the result will never be below 0; if you want negatives you could just negate the output manually. When choosing many numbers between 0 and some upper bound, we are likely to get roughly the same count of each value.
Writing a Pseudo-Random Number Generator

A lot of smart people actually spend a lot of time on good ways to pick pseudo-random numbers. It is not possible to generate truly random numbers from a deterministic thing like a computer, so a PRNG is a technique developed to generate random-looking numbers using a computer; hence, the numbers are deterministic and efficient to produce. The default random number generator in many languages, including Python, Ruby, R, IDL and PHP, is based on the Mersenne Twister algorithm and is not sufficient for cryptography purposes, as is explicitly stated in the language documentation. Its recurrence relation can be extended to matrices to obtain much longer periods and better statistical properties. Other properties, such as no predictable relation between the current random number and the next, are also desirable. PRNGs are not suitable for applications where it is important that the numbers are really unpredictable, such as data encryption and gambling. The value of a PRNG lies in knowing what to expect when you get a sequence of random numbers. Matlab's random number generation function is called rand, and some of the same ideas come up there. If you know any kind of basic programming, the approach should be understandable and easy to convert to any other language desired.

A simple generator needs a seed to start from. Entirely by coincidence, computers often use the number of seconds since January 1 of a fixed reference year. A toy generator then multiplies that input by 7 and finds the remainder when dividing by some modulus; the result will never be below 0, so there are no negatives unless you negate the output yourself. The upperRange defines the maximum value. Traditionally you would produce a decimal value between 0 and 1 and multiply it up to the range you want; returning an integer in the range directly cuts out that step. Note that when choosing numbers between 0 and the upper range with such a simple generator, we will not necessarily hit every number.

Where does real unpredictability come from? Some possible techniques to create a random sequence: Time (use the computer's clock); Radiation (install a radiation source in the computer and compute how often the atoms decay). The group at the Taiyuan University of Technology generates random numbers sourced from a chaotic laser. You can download a sample of random numbers by visiting their quantum random number generator research page. Another example of entropy: if you play the game Dragon Warrior for the Nintendo, but use an emulator instead of a real Nintendo, then you can save a snapshot of your game before you fight a monster, memorize what the monsters are going to do, and figure out exactly the right way to respond; the emulator removes the entropy a real console would have. What would happen if the "path" or program flow were randomly different every time? Picking numbers out of a book is no better: reusing numbers will bias the result, no matter which page you pick them from, and the choice would be strongly biased towards whatever numbers happen to be in the middle of the book at the top of the page. Human subjects are not a good source either; most studies find that people have some degree of non-randomness when attempting to produce a random sequence. Finally, a software bug in a pseudo-random number routine, or a hardware bug in the hardware it runs on, may be similarly difficult to detect.
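The "multiply by a constant and take a remainder" recipe described above is essentially a linear congruential generator (LCG). Here is a minimal Python sketch of the idea; the class name is ours and the constants are common illustrative choices (real libraries use carefully vetted parameters, and nothing like this should be used for cryptography):

```python
class SimpleLCG:
    """Toy linear congruential generator: x_{n+1} = (a*x_n + c) % m.

    Illustration only: not statistically strong and never suitable for cryptography.
    """

    def __init__(self, seed, a=1103515245, c=12345, m=2 ** 31):
        self.state = seed          # the seed fully determines the whole sequence
        self.a, self.c, self.m = a, c, m

    def next_int(self):
        self.state = (self.a * self.state + self.c) % self.m
        return self.state          # an integer in [0, m), never below 0

    def next_float(self):
        return self.next_int() / self.m   # value in [0, 1)


rng = SimpleLCG(seed=42)
print([rng.next_int() % 6 + 1 for _ in range(5)])  # five reproducible "dice rolls"
```

Because the seed fully determines the sequence, constructing SimpleLCG(seed=42) again reproduces exactly the same five dice rolls, which is the predictability the text warns about for encryption and gambling.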
Researchers at the Okinawa Institute of Science and Technology in Japan have developed a nanoplasmonic sensor that can measure cell division over extended periods and detect biomolecules with high sensitivity. The device has potential as a diagnostic test for disease biomarkers, or as a research tool to screen the effects of therapeutic molecules on cell growth. Miniaturized devices for cell culture experiments and diagnostic tests have significant clinical and research potential. The Japanese researchers have developed a solution that employs a nanoplasmonic material. Essentially, the device consists of a glass chip that is covered with millions of gold nanostructures with silicon dioxide stalks. The researchers have dubbed the structures “nanomushrooms”. When white light shines through the slide, the nanomushrooms absorb and scatter it, but this is affected by any materials near the nanomushrooms, such as cells or biomolecules. By analyzing how the light has changed once it has passed through this surface, the researchers can measure a wide variety of phenomena, from cell division to the presence of biomolecules, with high sensitivity. One of the major advantages of the approach is that cells can grow on the nanoplasmonic surface for extended periods, meaning that long term cell culture analysis is possible. Many nanomaterials are toxic to cells, making them unsuitable for cell analysis devices. “Usually, when you put live cells on a nanomaterial, that material is toxic and it kills the cells,” said Nikhil Bhalla, first author of the paper in journal Advanced Biosystems. “However, using our material, cells survived for over seven days.” “Normally, you have to add labels, such as dyes or molecules, to cells, to be able to count dividing cells,” said Bhalla. “However, with our method, the nanomushrooms can sense them directly.” The device can detect an increase in cell numbers of just 16 cells in a sample of 1000 cells, and the technique has potential for cell growth assays, such as drug screens. However, this high sensitivity also extends to other types of bioassays. The approach also has potential in detecting biological molecules, such as antibodies, raising the possibility of a nanoplasmonic diagnostic device to detect disease biomarkers. “Using our method, it is possible to create a highly sensitive biosensor that can detect even single molecules,” said Bhalla. Study in Advanced Biosystems: Large-Scale Nanophotonic Structures for Long-Term Monitoring of Cell Proliferation…
Here is a picture of plant morphology:

Update: For the YouTube video tutorial of this trick on learning plant morphology click here: https://youtu.be/5oMzbJHZKFQ

Let's Start With Remembering the Root

- Root - Primary root, Secondary root

It's fairly easy to remember that 'Root' has only two subtypes from the 2 'o's in the word. Primary and Secondary are as easy as remembering 1 and 2.

- Shoot - Stem, Branches, Leaves, Flowers and Fruits.

The possible difficulty in memorizing 'Shoot' is that you might not realize whether you've written all the parts. To verify that you have written all the parts in the shoot, you can simply count the number of letters in the word 'Shoot' - Number of letters in the word 'Shoot': 5. So you know how many parts are present in the shoot - the shoot has 5 parts. And since in morphology the 'Shoot' starts with the letter 'S', you can quickly remember that the first part is "Stem". (Click here for the trick to memorize the Morphology of Plants - Stem)

- 'Shoot' starts with the letter 'S'. So you know you have to begin with 'Stem'
- The letter 'h' looks like 'b'. So you can quickly think about 'branches'.

The fruit, flowers and leaves are obviously very easy to recollect since these are the first things that come to your mind if you want to draw a plant. Additionally, you can always know how many items are remaining in your list by counting the number of letters in 'Shoot' (that's 5). There you go, now you've easily learned to remember flowering plant morphology!
Sound Waves Give Tropical Trees a Checkup

Measuring how fast sound travels through a tree trunk can reveal whether it's rotting from the inside.

Like a book and its cover, you can't always judge a tree by how it looks. Fungus can rot a living tree from the inside, leaving behind a healthy-looking but hollow trunk. Typically the rot is only seen when the tree is cut down. When a tree decays, it releases carbon dioxide back into the atmosphere, contributing to climate change. The tropics are home to 96 percent of the world's tree diversity, according to researchers, and those trees store a quarter of the world's terrestrial carbon.

To get a better read on the health of tropical forests, researchers are now using sound to measure decay in trees. The research, published in the journal Applications in Plant Sciences, was conducted collectively by a group of college professors, grad students, and high school teachers and students, who used a new way to examine 1,800 live trees in the Republic of Panama. The group used a method for sending a sound wave through a tree and measuring how fast the sound wave travels, a process called sonic tomography. Sound waves travel faster through solid trees, a measure of their health. In a decayed tree or one with cavities, where molecules are packed less tightly, sound waves travel more slowly.

The process provides insights into how rotting affects tree mortality overall in a forest, said Greg Gilbert, lead author of the article and chair of the Department of Environmental Studies at the University of California, Santa Cruz, in a statement. "Most of the decay is hidden," Gilbert said. "The tomography now allows us to see how many apparently healthy trees are actually decayed inside."

And while the method won't work with some trees, including palms or any species that uses internal tissue to store water, it has benefits outside of the forest. The team used sonic tomography to perform a checkup on trees in Panama City, in part to highlight which ones might snap off in storms or heavy winds and cause injury or damage to property.
Mercury is often cited as the most difficult of the naked-eye planets to see. Because it's the closest planet to the sun, it is usually obscured by the light from our star. "Mercury has been known since very early times, but it is never very conspicuous, and there are many people who have never seen it at all," legendary British astronomer Sir Patrick Moore wrote in "The Boy's Book of Astronomy," (Roy Publishers, 1958). "The reason for this is that it always seems to keep close to the sun in the sky, and can never be observed against a dark background." Although that's mostly true, there are times during the year when Mercury can be surprisingly easy to spot. And we are in just such a period right now. Mercury is called an "inferior planet" because its orbit is nearer to the sun than Earth's is. Therefore, Mercury always appears, from our vantage point (as Moore wrote), to be in the same general direction as the sun. That's why relatively few people have set eyes on it. There is even a rumor that Nicolaus Copernicus — who, in the early 1500s, formulated a model of the universe that placed the sun, rather than Earth, at the center of the solar system — never saw it. Yet Mercury is not really hard to see. You simply must know when and where to look, and find a clear horizon. For those living in the Northern Hemisphere, a great "window of opportunity" for viewing Mercury in the evening sky opened up in late January. That window will remain open through Feb. 17, giving you a number of chances to see this so-called elusive planet with your own eyes. When and where to look Currently, Mercury is visible about 35 to 40 minutes after sunset, very near to the horizon, about 25 degrees south of due west. Your clenched fist held at arm's length measures roughly 10 degrees, so approximately 2.5 "fists" to the left of due west, along the horizon, will bring you to Mercury. You can also use brilliant Venus as a benchmark. Just look the same distance — 25 degrees — to Venus' lower right, and you'll come to Mercury. If your sky is clear and there are no tall obstructions (like trees or buildings), you should have no trouble seeing Mercury as a very bright "star" shining with a trace of a yellowish-orange tinge. Tonight (Jan. 31), Mercury will be shining at magnitude -1.0, which means that only three other objects in the sky will appear brighter: the moon, Venus and Sirius (the brightest star in Earth's night sky). In the evenings that follow, Mercury will slowly diminish in brightness, but it will also slowly gain altitude as it gradually moves away from the sun's vicinity. It will be at greatest elongation, 18.2 degrees to the east of the sun, on Feb. 10. Look for it about 45 minutes to an hour after sundown, still about 25 degrees to the lower right of Venus. Shining at magnitude -0.5 (just a smidge dimmer than the second-brightest star in the sky, Canopus, in the constellation Carina), it sets more than 90 minutes after the sun, making this Mercury's best evening apparition of 2020. While viewing circumstances for Mercury are quite favorable north of the equator, that is not so for those in the Southern Hemisphere, where this rocky little world will hang very low to the horizon while deeply immersed in bright twilight, making the planet very difficult to see. Southern Hemisphere observers will get their chance to spot Mercury in late March and early April, when the elusive planet will appear to soar high into the eastern sky at dawn. Mercury, like Venus and the moon, appears to go through phases. 
Soon after it emerged into the evening sky in January, Mercury was a nearly full disk, which is why it currently appears so bright. By the time it arrives at its greatest elongation, or its greatest separation from the sun, on Feb. 10, it will appear nearly half-illuminated. The amount of the planet's surface illuminated by the sun will continue to decrease in the days to come. When Mercury begins to turn back toward the sun's vicinity after Feb. 10, it will fade at a rather rapid pace. By Feb. 14, it will dim to magnitude +0.2, nearly as bright as the star Rigel, in the constellation Orion. By the evening of Feb. 17, Mercury's brightness will drop to magnitude +1.6 — about as bright as the star Castor, in the constellation Gemini, but only about 9% as bright as it appears now. In telescopes, Mercury will appear as a narrowing crescent. This, in all likelihood, will be your last view of the elusive planet this month, for the combination of its lowering altitude and its descent into the brighter sunset glow will finally render Mercury invisible in the evenings that follow. It will arrive at inferior conjunction, meaning it will pass between Earth and the sun, on Feb. 25. It will reappear in the morning sky in late March and early April. Swift, with a dual identity In ancient Roman mythology, Mercury was the swift-footed messenger of the gods. The planet is well named, for it is the closest planet to the sun and the swiftest of the solar system. Averaging about 30 miles per second (48 kilometers per second), Mercury makes a journey around the sun in only 88 Earth days. Interestingly, it takes Mercury 59 Earth days to rotate once on its axis, so all parts of its surface experience long periods of intense heat and extreme cold. Although its mean distance from the sun is only 36 million miles (58 million km), Mercury experiences by far the greatest range of temperatures: 800 degrees Fahrenheit (426 degrees Celsius) on its day side, and minus 280 degrees Fahrenheit (minus 173 degrees Celsius) on its night side. In the pre-Christian era, this speedy planet actually had two names, as astronomers did not realize that it could alternately appear on one side of the sun and then the other. The planet was called Mercury when it was in the evening sky, but it was known as Apollo when it appeared in the morning. It is said that Pythagoras, in about the fifth century B.C., pointed out that they were one and the same. - Rare Mercury transit, the last until 2032, thrills skywatchers around the world - The most enduring mysteries of Mercury - Surprise! Dust ring discovered in Mercury's orbit Joe Rao serves as an instructor and guest lecturer at New York's Hayden Planetarium. He writes about astronomy for Natural History magazine, the Farmers' Almanac and other publications. Follow us on Twitter @Spacedotcom and on Facebook.
- To review colours and practise describing the colours of things. - To practise describing a person’s appearance. Here are some of the frequent words used in this lesson that have appeared in previous lessons. Using the flashcards, check that you remember their meanings. In Lesson 27 you learned the main words for colours. You are about to fine tune your command of these terms, so review Lesson 27 first to make sure you can automatically produce them. Here are two ways you can talk in a more nuanced, more precise way about colours. 1. In Indonesian, dark colours are “old” (tua) and light colours are “young” (muda). |Blusnya kuning tua. Her blouse is dark yellow. |or||Blusnya berwarna kuning tua. Her blouse is dark yellow in colour. |Atap rumahnya biru muda. His house has a blue roof. Atap rumahnya berwarna biru muda. 2. In English you can use the suffix "-ish" to describe something that has a certain hue or tinge, for example "reddish", "greenish", "yellowish" etc. In Indonesian you produce a similar effect by 1 reduplicating the adjective and 2 nesting it between the affixes ke- and –an. He has reddish hair. Orang yang mempunyai penyakit hati kulitnya menjadi kekuning-kuningan. A yellowish skin tone develops in people suffering from liver disease. Here are some of the very common adjectives that can be used to talk about someone’s appearance. (You have met and practised some of them already.) Check that you know the meanings of all of them. |Someone’s general proportions||tinggi, pendek, besar, kecil, biasa| |Someone’s general appearance||cantik, ganteng, jelek, biasa, kurus, gemuk| |The appearance of someone’s hair||botak, panjang, pendek, lurus, keriting| |The colour of someone’s hair||hitam, pirang, putih, kemerah-merahan, coklat tua| |The colour of someone’s skin||putih, kuning langsat, sawo matang, hitam manis| |The colour of someone’s eyes||hitam, coklat, biru, kehijau-hijauan| Translate these sentences into good Indonesian. Remember two things: 1. with a tiny number of special exceptions, the adjective follows the noun in Indonesian, and 2. “to have” is often expressed with the possessive suffix –nya. First check the model below. Cue sentence: She has very long hair. You write: Dia itu rambutnya panjang sekali. or (more commonly) Rambutnya panjang sekali. Imagine that you are a police officer (polisi). You have to describe a fugitive criminal (penjahat) to journalists, to the public or to fellow police officers. Your description should cover sex, age, place of origin, height, whether the person is fat or thin, hair colour, eye colour, complexion and what the person is wearing. Your description might begin like this (as you repeat the role play, you should radically vary this opening): Selamat pagi (siang/sore/malam) Bapak-Bapak dan Ibu-Ibu. Nama saya Ridwan. Saya kepala kantor polisi di sini. Kami sedang mencari seorang penjahat dari Wonosobo bernama Joko yang dicari karena mencuri mobil. Dia berumur kira-kira 25 (dua puluh lima) tahun. Tinggi badannya 170 (seratus tujuh puluh) sentimeter dan agak gemuk. Dia berkulit sawo matang dengan rambut lurus dan pendek. Ia memakai kemeja lengan pendek, (etc. etc. etc.) …. The police officer should conclude by saying something like this: Saya minta tolong kepada Bapak-Bapak dan Ibu-Ibu kalau melihat orang ini, segera hubungi kami di nomor 2457278 (dua empat lima tujuh dua tujuh delapan). Terima kasih. Ada pertanyaan? The “audience” now quizzes the speaker, asking for clarification and more details. 
There are many kinds of questions you can ask, but two important ones that relate to recent lessons are: 1. use memakai to ask about what the fugitive is/was wearing. For example 2. Use –nya constructions to ask about what kind of personal features and clothes the fugitive has. For example |Joko itu||rambutnya keriting?| |Apakah||penjahat itu||berbaju biru?| 'Light Red' is, as we have learned above, merah muda. More common, however, is it to refer to a pinkish colour with the term merah jambu. Jambu air is the name of the fruit very closely related to the Malay apple, which in Hawai'i is known as the mountain apple. The fruit with the Latin name Syzygium aqueum is juicy and slightly sweet-sour and is often part of the spicy fruit salad called rujak. Of course you should also try to re-cycle all the kinds of questions you have practised so far, for example questions beginning with: Di mana …. Siapa yang …. Hari apa / Tanggal berapa / Jam berapa / Kapan …. Apakah ada …. and many more. The prefix ber- forms intransitive verbs (that is verbs that cannot take an object) such as berangkat (to leave), bekerja (to work), berhenti (to stop), belajar (to study), berenang (to swim), bertemu (to meet), berpikir (to think), berdiri (to stand), berbelanja (to shop), bersiap-siap (to get ready), beristirahat (to rest), bersisir (to comb). A large group of ber- verbs is based on nouns and has the general meaning "to have/own [base]": bersuami to have a husband berumur be aged bernama be named beranak have children berguna to be useful berhasil to have success, to succeed A subgroup of this group means "to use, wear, travel by [base]": bertopi to wear a hat berdasi to wear a tie bersepatu to wear shoes berkacamata to wear glasses berbaju to wear a shirt berkuda to ride a horse bersepeda to ride a bicycle Adjectives or possessive nouns can be added: berkacamata hitam to wear sunglasses berbaju biru to wear a blue shirt bertopi koboi to wear a cowboy hat bersepeda balap to ride a racing bike Instead of using ber- one can of course also use memakai or naik: Anak itu berkacamata minus. / Anak itu memakai kacamata minus. That child wears minus (myopic) glasses. Toni setiap hari bersepeda ke kantor. / Toni setiap hari naik sepeda ke kantor. Every day Toni goes to his office by bike.
Selected Plants of Navajo Rangelands Horsetail has aerial, jointed stems, which occur in two different forms: A single, simple, cone-bearing stem grows in early spring, and a vegetative, non fertile stem grows after the first. This second stem has many whorls of slender, green-jointed branches. Roots are tuber-bearing and rhizomatous.Horsetail lacks flowers, but has a single cone, ¾ to 1½ inches long. It reproduces by spores, which look like a light yellow powder. Leaves are small and scale-like, often not green, whorled, and united at the base to form a sheath around the stem. Horsetail occurs in woods, fields, meadows and swamps, and moist soils alongside streams, rivers, and lakes, and in disturbed areas. It usually occurs on moist sites but can also be found on dry and barren sites such as roadsides, borrow pits, and railway embankments. Horsetail is sensitive to moisture stress; drought conditions result in a reduction in the production of new shoots. Associated species include rushes and sedges. Horsetail is not an important range forage for livestock, and excessive amounts (more than 20%) in hay can cause scours, paralysis, and death in horses. Usually animals avoid the plant. Horsetail has been used as a cough medicine for horses and as a diuretic tea. It has been made into dyes for clothing. The stems of the plant can be used for scouring and polishing. The young shoots can be eaten either cooked or raw. Copyright 2018 New Mexico State University. Individual photographers retain all rights to their images. Partially funded by the Western Sustainable Agriculture Research and Education Program (westernsare.org; 435.797.2257), project EW15-023. Programs and projects supported by Western SARE are equally open to all people. NMSU is an equal opportunity/affirmative action educator and employer.. NMSU does not discriminate on the basis of age, ancestry, color, disability, gender identity, genetic information, national origin, race, religion, retaliation, serious medical condition, sex (including pregnancy), sexual orientation, spousal affiliation or protected veteran status in its programs and activities as required by equal opportunity/affirmative action regulations and laws and university policy and rules. For more information please read the NMSU Notice of Non-discrimination (opens in new window).
Describing a distribution of test scores.

Measures of central tendency are used to describe the center of the distribution. There are three measures commonly used:

Mean - the arithmetic average of the scores: Xbar = (sum of X) divided by N. Find the mean for the following data set of 5 scores: 32, 25, 28, 30, 20.

Median - the 50th percentile.

Mode - the score occurring most frequently in the distribution. Find the mode for this data set of scores: 14, 23, 21, 11, 14, 32, 25, 23, 35, 22, 21, 33, 23.

Measures of variability determine the spread of scores in a data set.

Range (see chapter 2 notes).

Interpercentile range - an indication of how scores vary around the median. The most widely used is the interquartile range, where X.75 - X.25 is used to look at the middle half of the scores; other interpercentile ranges can also be used. The interquartile deviation is the interquartile range divided by 2.

Standard deviation - uses the mean as its reference point and tells how much scores vary around it. Let's learn how to determine s. First, take the following numbers and find the mean (N = 5; ΣX = 93): 93/5 = 18.6. Next, list each score X together with its square X²; here ΣX = 93 and ΣX² = 2001. The average of the squared deviations from the mean is termed the variance, and it is represented by a lowercase sigma squared (σ²), or s². Using the computational formula,

$$s^2 = \frac{N\sum X^2 - \left(\sum X\right)^2}{N(N-1)}$$

Once the variance is known you can determine the standard deviation, which is computed by taking the square root of the variance:

$$s = \sqrt{\frac{N\sum X^2 - \left(\sum X\right)^2}{N(N-1)}}$$

For these data, s = 8.23, the square root of the variance.
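The computational formula above is easy to verify in a few lines of Python. This sketch (the helper names are ours) reproduces the worked value s = 8.23 from the summary sums in the notes, and also handles raw scores such as the five-score data set given for the mean exercise:

```python
import math


def sample_sd_from_sums(n, sum_x, sum_x2):
    """Computational formula from the notes:
    s = sqrt((N*sum(X^2) - (sum X)^2) / (N*(N-1)))."""
    return math.sqrt((n * sum_x2 - sum_x ** 2) / (n * (n - 1)))


def sample_sd(scores):
    """Same formula, starting from the raw scores."""
    n = len(scores)
    return sample_sd_from_sums(n, sum(scores), sum(x * x for x in scores))


# The worked example in the notes: N = 5, sum of X = 93, sum of X^2 = 2001.
print(round(sample_sd_from_sums(5, 93, 2001), 2))   # 8.23

# The five-score data set given for the mean exercise.
scores = [32, 25, 28, 30, 20]
print(sum(scores) / len(scores))                     # mean = 27.0
print(round(sample_sd(scores), 2))                   # 4.69
```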
We can graph test scores in the form of a curve. The form of the curve depends on the way the scores are distributed. The most common types of curves represented in data sets at this level are the normal curve and the skewed curve. If we were to administer a test to a large population and graph the scores, we would most likely see a curve that is similar to a normal curve (bell shaped with known properties).

Characteristics of a normal curve
1. the curve is symmetrical and bell shaped
2. mean, median and mode are identical
3. the area under the curve is equal to 100%
4. on the baseline the mean is placed in the center and 3 marks are placed an equal distance apart to represent three s.d.'s above and below the mean; this divides the curve into 6 parts
5. the curve never touches the baseline, so scores four s.d.'s or more above and below the mean are possible

What we can learn from a bell-shaped curve
1. 84.1% of the scores fall at or below one standard deviation above the mean
2. 97.7% fall at or below 2 s.d.'s above the mean
3. 99.9% fall at or below 3 s.d.'s above the mean

Let's say we have a mean of 35 on a test and a s.d. of 8. We can say that at a score of 43, or one s.d. above the mean, 84% of the students scored at that level or below. Or if we subtract and get a score of 27, or one s.d. below the mean, we would report that 16% of the students scored at or below this level.

A bell-shaped curve is associated with a normal curve. It represents most scores being in the middle of the distribution. The two ends are called the tails. In a normal curve most students obtain scores surrounding the average; the fewest scores are at the ends, representing the high and low scores. In most classroom settings you normally won't find a normal curve. Usually we will find that students do well or poorly. In this case the curve will form a skewed distribution. In such a curve the mean, median and mode will not be equal.

When the scores are mostly low, a curve is said to be positively skewed. This is because the majority of the scores fall in the lower part of the distribution, with few high scores causing little or no tail on the left and a longer tail on the right. It is not symmetrical. If the test is negatively skewed, the majority of the scores fall in the upper part of the distribution, or to the right, with many high scores causing little or no tail on the right and a longer tail on the left. Again it is not symmetrical. This type of skew is wanted if a mastery test is given; it shows the majority of the students mastered the material.

What is the best way to describe these distributions? Should you use a mean and standard deviation, or a median and interpercentile range? It depends; we should use the following information to make that decision:
1. place the scores in a frequency distribution
2. graph them as a frequency polygon
3. examine the graph and decide if it approaches a normal curve
Why? If the scores are skewed, the mean will be pulled toward the skewed tail of the curve, because it is more affected by the extreme scores in the tail. In this case the median better represents the center, and we use the interpercentile range as the measure of variability because it is the corresponding measure.

A standard score is one that is standardized by taking the deviation of the score from the mean and dividing it by the standard deviation. When using the median and interpercentile range to interpret scores, use percentile ranks to convert scores. When using the mean and s.d., use a standard score transformation. There are two types of score transformations: the z-score transformation and the t-score transformation. To use the z-score transformation, or standard deviation unit, use the following formula (see figure 3-6):
z = (X – X-bar) / s
What it does: it forms a distribution with fixed parameters. In this distribution the mean is 0 and the standard deviation is 1 (see figure 3-6). Scores above the mean are positive; scores below the mean are negative. It doesn't matter what unit of measure the test uses, so z-scores allow you to compare scores that normally could not be compared, e.g. fitness tests where a time can be compared to a distance. Certain scores are easily determined even without the use of the formula. For example, if the mean was 30 and the s.d. was 5, a score of 35 would be z = 1 and a score of 25 would be z = –1.
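The transformation above is simple enough to sketch in a couple of lines of Python (illustration only, not from the original notes), using the example values from the paragraph above:

def z_score(x, mean, sd):
    # Standard score: deviation from the mean in standard-deviation units.
    return (x - mean) / sd

# Example from the notes: mean = 30, s.d. = 5
print(z_score(35, 30, 5))   # 1.0, one s.d. above the mean
print(z_score(25, 30, 5))   # -1.0, one s.d. below the mean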
Like all vascular plants, trees use two tissues for transportation of water and nutrients: the xylem (also known as the wood) and the phloem (the innermost layer of the bark). Girdling results in the removal of the phloem, and death occurs from the inability of the leaves to transport sugars (primarily sucrose) to the roots. In this process, the xylem is left untouched, and the tree can usually still temporarily transport water and minerals from the roots to the leaves. Trees normally sprout shoots below the wound; if not, the roots die. Death occurs when the roots can no longer produce and transport nutrients upwards through the xylem. (The formation of new shoots below the wound can be prevented by painting the wound with herbicide.) Ring barking techniques have been developed to disrupt or impede sugar transport in the phloem, stimulating early flower production and increasing fruiting, and for controlling plant size, reducing the need for pruning. Girdling is a slow process compared to felling and is often used only when necessary, such as in the removal of an individual tree from an ecologically protected area without damaging surrounding growth. Accidental girdling is also possible, and some activities must be performed with care. Saplings which are tied to a supporting stake may be girdled as they grow, due to friction caused by contact with the tie. If ropes are tied frequently to a tree (e.g. to tether an animal or moor a boat), the friction of the rope can also lead to the removal of bark. The practice of girdling has been known in Europe for some time. Another example is the girdling of selected trees in some northern forests in order to prevent fir from massive invasion of the mixed oak woodland. Girdling can be used to create standing dead wood, or snags. This can provide a valuable habitat for a variety of wildlife, including insects and nesting birds.

The tree's roots gather water and nutrients from the soil. The water and nutrients are carried up the tree through a series of tiny, tubelike structures called xylem. The xylem transports the water and nutrients to the tree's canopy, where they are used by leaves during photosynthesis to manufacture sugars that are food for the tree. The sugars are in the form of sap. The phloem, a layer of cells next to the outer bark, transports the nutrient-rich sap downward to all parts of the tree. The tree uses the sap to grow new limbs, produce flowers and set fruit or seeds. In a pine tree, the sap feeds the tree as it develops pine cones and grows taller and stronger.
Like so much in American life, the standard clothing sizes we use today can be traced back to the Civil War. If that answer sounds glib, it isn't meant to be. The Civil War was the pivotal event in American history, marking a transition to the modern era, and heralding changes that stood until the 1940s. It even changed the way we buy our clothes.

Antebellum Clothing Sizing
Prior to the Civil War, the overwhelming majority of clothing, for men and women, was tailor-made or home-made. There was a limited variety of mass produced, standardized clothing items, mainly jackets, coats, and undergarments, but even these were only produced in limited quantities. For the most part, clothing for men was made on an individual basis. The Civil War changed that.

Mass Producing Uniforms
During the war, the Northern and Southern armies both needed large quantities of uniforms in a hurry. The South, without a large industrial base, relied primarily on home manufacture for uniforms, and through the war Southern armies typically suffered from a shortage of clothing. The North changed garment making history forever. It quickly became apparent that the Northern armies could not be supplied with uniforms using traditional modes of clothing production. Fortunately, the North had a well developed textile industry that could meet the challenge. When the government began to contract with factories for mass produced uniforms, the textile manufacturers quickly realized that they could not make every uniform for a particular soldier. The only option was to standardize the soldiers' uniforms. They sent tailors to the armies to measure the men, and saw that certain measurements, of arm length, chest size, shoulder width, waist size, and inseam length, would appear together with reliable regularity. Using this mass of measurement information, they put together the first size charts for men's clothing.

After the War
So why didn't the textile companies go back to the older production methods after the Civil War? The answer lies in profits, as with many things in business. Clothing manufacturers saw that the standardized sizes they had introduced significantly reduced the manufacturing cost of men's clothing; rather than make one item for one man, they could make one size of an item, men's jackets for example, for a group of men. Suddenly, clothing was easier to produce, mass production became the staple of discount men's clothing, and the clothing industry would never be the same again.

Well, in this modern era fashion has evolved a long way. We are not at war anymore. We have the ability to make your clothing bespoke and make you look DAPPER. Take a look at http://www.mohanstailors.com/process/. This is how at Mohan's Singapore we make a perfect suit. As you can see the process is simple. In fact the whole process, from measurement to wearing your new suit, can be done in 24 hrs, if you so choose. Come in and let's have a chat about your requirements. For Inquiries and appointments call (+65) 67324936 ask for Max Mohan
The Age of the Aeronaut Aeronauts were the first voyagers and navigators of flight. Ballooning made celebrities of aeronauts, whose adventures filled newspapers, sold books, and inspired works of fiction. Flight offered a sense of freedom and a radical new frontier for exploration. Flights of fancy did not stop with ballooning, as inventors, engineers, and scientists devised navigable airships and early planes. The search for new modes of flight continues to propel science and the imagination today.
Magnetic Relaxation

The relaxation or approach of a magnetic system to an equilibrium or steady-state condition as the magnetic field is changed. This relaxation is not instantaneous but requires time. The characteristic times involved in magnetic relaxation are known as relaxation times. Relaxation has been studied for nuclear magnetism, electron paramagnetism, and ferromagnetism.

Magnetism is associated with angular momentum called spin, because it usually arises from spin of nuclei or electrons. The spins may interact with applied magnetic fields, the so-called Zeeman energy; with electric fields, usually atomic in origin; and with one another through magnetic dipole or exchange coupling, the so-called spin-spin energy. Relaxation which changes the total energy of these interactions is called spin-lattice relaxation; that which does not is called spin-spin relaxation. (As used here, the term lattice does not refer to an ordered crystal but rather signifies degrees of freedom other than spin orientation, for example, translational motion of molecules in a liquid.) Spin-lattice relaxation is associated with the approach of the spin system to thermal equilibrium with the host material; spin-spin relaxation is associated with an internal equilibrium of the spins among themselves. See Magnetism, Spin (quantum mechanics).

Magnetic relaxation can also be described as a stage of relaxation in which the system of spin magnetic moments of the atoms and molecules of a medium takes part in the process by which the medium achieves thermodynamic equilibrium. In many cases, the interaction between the spins (between the magnetic moments of the spins) is much stronger than the other interactions in which the spins participate, such as the interaction of the spins with the phonons of a crystal. Equilibrium is thus often reached more rapidly in the spin system than in the medium as a whole—that is, than for the other internal degrees of freedom. Magnetic relaxation therefore proceeds in stages. The last and longest stage generally corresponds to the achievement of equilibrium between the spins and other degrees of freedom—for example, between the spin system and phonons, which are the quanta of vibrations of the crystal lattice. Each stage of magnetic relaxation is described by its own relaxation time; for example, spin-spin and spin-lattice relaxation times are used with crystals.

In media that have a magnetic structure—ferromagnetic and antiferromagnetic materials—magnetic relaxation occurs through the collision of spin waves (magnons) with each other and also with phonons, dislocations, impurity atoms, and other crystal defects. In solids, magnetic relaxation depends essentially on the structure of the solid. Determining factors here include the character of the crystal lattice (single crystal or polycrystal), the presence of impurities and dislocations, and the domain structure. A decrease in the number of crystal defects and in the crystal temperature generally results in an increase in the magnetic relaxation time. The magnetic relaxation of nuclear spins (nuclear magnetic moments) has its own specific characteristics, which are due to the especially weak interaction of the nuclear spins with the other degrees of freedom of the medium.
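A phenomenological picture of these relaxation times can be sketched in a few lines of Python (illustration only; the exponential form and the symbols T1 for spin-lattice and T2 for spin-spin relaxation are standard NMR conventions assumed here, not taken from the entry above):

import math

def longitudinal(t, m_eq=1.0, m0=0.0, t1=1.0):
    # Spin-lattice relaxation: the magnetization approaches its equilibrium value.
    return m_eq + (m0 - m_eq) * math.exp(-t / t1)

def transverse(t, m0=1.0, t2=0.1):
    # Spin-spin relaxation: transverse order decays away with its own, often shorter, time.
    return m0 * math.exp(-t / t2)

for t in [0.0, 0.5, 1.0, 2.0, 5.0]:
    print(f"t={t:4.1f}  Mz={longitudinal(t):.3f}  Mxy={transverse(t):.3f}")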
Magnetic relaxation plays a role in the processes of magnetization and alternating magnetization (see MAGNETIC VISCOSITY) and determines the width of nuclear magnetic resonance lines, electron paramagnetic resonance lines, ferromagnetic resonance lines, and antiferromagnetic resonance lines. The properties of ferromagnetic and antiferromagnetic materials in high-frequency electromagnetic fields depend substantially on magnetic relaxation. In many cases, magnetic relaxation sets limits to the use of materials. For example, it imposes restrictions on the conditions governing the use of magnetic thin films in technology and on the speed of magnetic elements in electronic computer storage devices. Magnetic relaxation times are among the parameters of a solid that are altered comparatively readily by industrial processes, such as alloying and hardening. M. I. KAGANOV
Thoracic Spine Anatomy

The thoracic spine is the central part of the spine, also called the dorsal spine, which runs from the base of the neck to the bottom of your rib cage. The thoracic spine provides the flexibility that holds the body upright and protects the organs of the chest. The spine is made up of 24 spinal bones, called vertebrae, of which 12 make up the thoracic region (T1-T12). The vertebrae are aligned on top of one another to form the spinal column, which gives your body its posture. The different parts of the thoracic spine include bone and joints, nerves, connective tissues, muscles, and the spinal segment.

Each vertebra is made up of a round bony structure called the vertebral body. A protective bony ring attaches to each vertebral body and surrounds the spinal cord to form the spinal canal. The bony ring is formed when two pedicle bones join two lamina bones that connect directly to the back of the vertebral body. These lamina bones form the outer rim of the bony ring. When the vertebrae are arranged one on top of the other, the bony rings form a hollow tube that surrounds the spinal cord and nerves and provides protection to the nervous tissue. A bony knob-like structure projects out at the point where the two laminae join at the back of the spine. These projections are called spinous processes, and the projections at the sides of the bony ring are called transverse processes.

Joints of the Vertebrae
Between each vertebra, there are small bony knobs at the back of the spine that connect the two vertebrae together, called facet joints. Between each pair of vertebrae, two facet joints are present, one on either side of the spine. The alignment of the two facet joints allows the back and forth movement of the spine. The facet joints are covered by a soft tissue called articular cartilage, which allows smooth movement of the bones.

Nerves of the Thoracic Spine
On each side of the vertebra, left and right, is a small tunnel called the neural foramen. The two nerves that leave each vertebra pass through these neural foramina. These spinal nerves group together to form a main nerve that passes to the organs and limbs. These nerves control the muscles and organs of the chest and abdomen. An intervertebral disc, made up of connective tissue, is present in front of this opening. The discs of the thoracic region are smaller compared to those of the cervical and lumbar spine.

Soft Tissue of the Thoracic Spine
Connective tissue holds the cells of the body together, and ligaments attach one bone to another. The anterior longitudinal ligament runs down the front of the vertebral bodies, and the posterior longitudinal ligament attaches to the back of the vertebral bodies. A long elastic band called the ligamentum flavum connects the lamina bones. The muscles in the thoracic spine are arranged in layers. Strap-shaped spinal muscles, called the erector spinae, make up the middle layer of muscle. The deepest layer of muscles attaches along the back of the spine bones and connects to the vertebrae; these muscles also connect one rib to another. The spinal segment includes two vertebrae separated by an intervertebral disc, the nerves that leave the spinal column at each vertebra, and the small facet joints of the spinal column.
The words flowed easily, as if the guide had said them many times before. And indeed he had, and would continue to do so throughout the day. Despite the manner in which they were said, and how many times they were said, the emotion behind them never left. For it was here that hundreds of thousands came to die.

At first it was Polish political prisoners, those who disagreed with and resisted Nazi ideals, or simply were found to be listening to foreign radio or reading illegal pamphlets. Then came those unlucky enough to have been born into the wrong racial or ethnic groups - the Jewish, the Slavic, the Romani and others.

Auschwitz is considered to have opened on June 14, 1940, when the first shipment of Polish prisoners was sent to the camp. At that time, it wasn't a death camp. Its sole purpose was not yet to carry out the so-called ''Final Solution'' to the ''Jewish Problem.'' These prisoners, most of whom only survived a few months, were there because they opposed the German occupation of Poland. The Jewish, along with the Romani, Slavic, and other ethnic groups, arrived later. There are two camps that can be visited here - Auschwitz and Auschwitz II Birkenau (most commonly called Birkenau). Within the first year of being open, Auschwitz was expanded and more barracks were built to house even more prisoners, who were often sent to nearby factories as slave labor during the day and returned to the camp at night. Their daily lives at the camp were miserable. Food rations were poor and consisted of 1,500 calories, or more likely less, per person per day; the labor was hard and demanding; and the living conditions were terrible. Clothing was often inadequate in the colder months.

Many people arrived at the camp in overcrowded train cars. They were told that they were being resettled and many times brought the things they would need to start life anew - combs, clothing, shoes, food, sometimes even small household appliances. When they arrived, they were told to leave their belongings, that they would be returned later, and to form two lines - men in one, women and children in another. As they marched past an SS officer, they were further segregated. Those who looked young, healthy, and able to work were sent in one direction to be registered; the rest - the elderly, the feeble, the weak, children, pregnant women - were sent in another direction. They were told they were being sent for decontamination; after all, communicable diseases such as typhus run rampant in such close quarters. Other prisoners, chosen to help with the newest influx of prisoners and knowing what their fate would be, would sometimes tell young, healthy mothers to leave their small children with their grandparents. For many, this was the last time they saw each other. Families were split up, husbands from wives, mothers from their children, siblings from each other.

An unlucky few were chosen neither for labor, nor for immediate gassing, but rather for terrible medical experiments. Genetic tests were run to support the Nazi ideal that the Aryan race was superior. Some camp doctors performed experiments on behalf of drug companies, or to test the limits of survival of the human body. Twins were often used in a number of experiments. Various sterilization procedures were often tested in an effort to find the easiest, most efficient way in which to render an entire population infertile. Many times, the prisoners did not survive these gruesome experiments.
Those that did were often left maimed, unable to care for themselves, and unable to work. And in the camps, an inability to work meant only one thing - the gas chambers.

Throughout the camp, one can see the reminders of the horrors that happened here - the stacks of suitcases, oftentimes with names, addresses and professions painted on for easy identification, the mounds of combs that were brought, the piles of eyeglasses, over 2 tons of human hair that had been cut from prisoners before they were killed, to later be made into textiles, the piles of shoes they wore on their final journey, some of which so incredibly small they could only have belonged to a child. And the stacks of empty Zyklon B canisters - the fastest, most efficient method the Nazis found for killing the prisoners. Our guide then took us to the most infamous part of the camp - the gas chambers and crematorium. It was here that hundreds of thousands lost their lives, their bodies burned afterwards, for ashes were better disposed of than bodies.

From there, we traveled to Birkenau, 3 kilometers away. While Auschwitz was built as a labor camp, and only later became a death camp, Birkenau was built for extermination. With multiple gas chambers and crematoriums, one's life expectancy here was short. Today, nothing but rubble remains of the gas chambers. The Nazis destroyed them before they retreated from the camp. Sometimes, in an effort to escape the cruel life they lived within the camps, the prisoners would commit suicide by walking directly towards the barbed wire fences. It wasn't the fences that killed them, it was the guards stationed in the watch towers, armed with weapons. And those guards were rewarded for their duty.

Birkenau was bigger than I expected. The large, flat expanse of fields, dotted by brick chimneys, the only evidence that remains of the hundreds of barracks that once stood, bisected by railroad tracks, and surrounded by trees whose leaves were exploding with autumn color, was almost peaceful. It did not look like a death camp. Its calmness gave no hint of the horrors, the killing, that happened here. The rain and clouds that, fittingly, had been present all morning, were clearing out and the sun was shining. Had we been anywhere else, it would have turned into a beautiful fall day. As it was, the warmth from the sun and the colors on the trees were a stark juxtaposition to the history of this place.

Many ask why anyone would want to visit a place with such a terrible history. Perhaps there is a bit of morbid curiosity behind their motives. Perhaps some lost their family members here and are coming to remember them. Still others, an ever dwindling number, actually remember being prisoners here, and have the tattooed number on their arm to prove it. Indeed, our visit was one of the most sobering experiences we have ever had. Despite the fact that the Holocaust seems to have happened in a different era, we must remember it was only 70 years ago that the camp was liberated, less than a lifetime ago. We must never forget what happened here, the 1.1 million people who lost their lives, the families that were destroyed. It is in their memory that we visited, so that this may never happen again.
Probing the Proton's Weak Side

The weak force acts on subatomic particles, such as the protons, neutrons and electrons that make up atoms. These particles carry a weak charge, a measure of the influence that the weak force can exert on them. While the weak charge has not been measured precisely for the proton, its properties have been predicted by the Standard Model, the theory that describes protons, neutrons and other particles. By measuring the proton's weak charge in an experiment called Q-weak, scientists hope to gain insight into the Standard Model and perhaps lead to predictions of heavy particles, such as those that may be produced by the Large Hadron Collider at CERN in Europe.

"The weak charge for the proton is exquisitely, accurately predicted by the Standard Model. And so, what we are doing is taking a prediction that is very, very precise and testing it with a precise measurement," says Roger Carlini, a co-spokesperson for the Q-weak collaboration and a Hall C staff scientist.

In the experiment, a beam of electrons is directed into a container of liquid hydrogen. The electrons are polarized - mostly spinning in one direction, while the protons in the nuclei of the hydrogen atoms are unpolarized. Some of the polarized electrons interact with the protons by striking glancing blows. These electrons are collected by a series of detectors that measure their direction and energy.

To get a measurement of the weak force out of the behavior of these electrons, scientists are exploiting differences between two of the fundamental forces: the electromagnetic force and the weak force. When the electrons encounter the protons, they may interact through either of these forces. One factor that affects this choice is the spin direction of the incoming electrons.

"We have the ability to rapidly change the polarization of the electrons, pointing in the direction of the beam and pointing opposite," Carlini says. The experimenters flip the electrons' spin from one direction to the other a thousand times each second throughout the experiment, while recording how many interactions they get for each direction. The electromagnetic force is unfazed by the electrons' spin direction. It treats electrons spinning in one direction the same as those spinning in the other. The weak force, however, is picky. It reveals itself more often with electrons spinning in one direction than in the other. While physicists will not be able to pick out which electrons they measure interacted with the protons via the electromagnetic force versus the weak force, they will be able to calculate the proton's weak charge by determining the difference in the number of electrons they measure for one spin direction versus the other.

"What we compute is what was different about that rate between the two. In other words, everything else cancels. And if there were no weak force, you'd get zero," explains Carlini. "What you actually get is something which is proportional to the weak charge of the proton. It's just a way of picking out the effective weak force."
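The quantity Carlini describes - the fractional difference between the scattering rates for the two spin directions - can be illustrated with a toy calculation (a sketch only, with made-up counts; this is not the collaboration's analysis code):

def asymmetry(rate_plus, rate_minus):
    # Parity-violating asymmetry: zero if the weak force played no role.
    return (rate_plus - rate_minus) / (rate_plus + rate_minus)

# Hypothetical counts for the two helicity states, flipped ~1000 times per second.
# The tiny surplus for one direction is what is proportional to the proton's weak charge.
print(asymmetry(1_000_000_280, 999_999_720))   # about 2.8e-7, a few hundred parts per billion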
The Right Time for Weak

Carlini says this isn't the first measurement of the strength of the weak force. The European Organization for Nuclear Research, CERN, and Fermilab scientists, for instance, have made high-precision measurements of the weak force. However, those measurements were made of different particles (at higher energies). "This measurement is building on thirty or forty years of developing experimental techniques. If it wasn't for all those other experiments, we couldn't do Q-weak, because we couldn't interpret it," Carlini says.

Some of those experimental techniques were developed at Jefferson Lab. For instance, the collaboration of scientists conducting the Hall A Proton Parity Experiment (HAPPEx, HAPPEx-II and HAPPEx-III) worked with CEBAF accelerator staff to fine-tune parameters of the accelerator's electron beam. The G-Zero experiment also made use of new techniques to separate the weak force from the electromagnetic. Lessons learned from these experiments and others were incorporated into the plans for the unique Q-weak measurement. "It's a whole bunch of stuff that this place has that comes together that no one else in the world even comes close to at any time in the past or the present."

A Long Haul

From beginning to end, the Q-weak experiment will require more than a decade of effort. It was originally approved by Jefferson Lab in January of 2002, and the experimental run phase will wrap up in 2012. The measurement was preceded by a nearly year-long installation period for the special equipment required to carry it out. "It's a standalone experiment. We have our own detectors, our own magnet, a new beam line, a Compton polarimeter and a very high-power cryotarget," Carlini says.

Q-weak will run for approximately two years to get enough data to spot the effect of the weak force. Carlini and his colleagues are looking forward to settling into a routine. "It's going to be a long haul: two years is a long time to run. And hopefully, we'll get into a quiet phase where we just take a lot of data and things don't change much."

Q-weak is the largest-installation experiment left on the books to run before the 12 GeV upgrade takes Jefferson Lab's CEBAF accelerator offline in 2012. It is also the last experiment scheduled to run in Hall C before the upgrade. Once the run is complete, scientists will begin the arduous process of interpreting the massive amounts of data they will have collected. The Q-weak collaboration consists of more than 100 scientists and 50 institutions. It is funded by Jefferson Lab, the Department of Energy, the National Science Foundation and the Natural Sciences and Engineering Research Council of Canada.

Jefferson Science Associates, LLC, a joint venture of the Southeastern Universities Research Association, Inc. and PAE, manages and operates the Thomas Jefferson National Accelerator Facility, or Jefferson Lab, for the U.S. Department of Energy's Office of Science. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.
The decimal system of counting is part of our language, math, and the measurement units used by all right-thinking nations. It's so deeply engrained in how we operate that it's often difficult to imagine using anything else. However, it's mostly a historic accident, based on the number of fingers we happen to have. Although the vast majority of societies used decimal numbers, some developed systems based on five or 20 digits instead. But there were also some rare exceptions. A new paper in PNAS performs an analysis of a Polynesian culture's language, and it concludes that its speakers developed a mixed decimal/binary system. The researchers then go on to argue that the inclusion of binary made certain math operations much easier. There are some instances of binary systems being used. For example, the paper cites a language from Papua New Guinea that only includes words for one and two. But it's not a full binary system in that there's no concept of larger digits; these are simply represented by additive combinations of the two digits (so five is expressed as "2 + 2 + 1"). In contrast, the authors argue that the indigenous people of Mangareva performed full binary calculations but layered the results on top of a decimal counting system. The trick is that there is nobody left who actually uses the Mangarevan numerical system; instead, it has to be inferred from the language and cultural background of the people. Mangareva was settled during the Polynesian expansion. It's a small group of islands at the eastern edge of French Polynesia where things start to thin out before remote outposts like Pitcairn Island and Easter Island. Nevertheless, it was fully incorporated into the Polynesian trade network, which meant that its residents had to be able to keep track of trade goods. In addition, many of these goods (like sea turtles and fish) were used by political figures as gestures of munificence during formal feasts with their subjects. Keeping track of just how generous these gifts were was an important part of the political culture. So counting large numbers was a critical part of the Mangarevan culture. Polynesians as a whole used a decimal system of counting, but different island groups often had distinct terms for different groupings of 10. The researchers describe the Mangarevan language's specific terms for groups of 10 and then show how these could be used as a form of binary, allowing calculations to rapidly manipulate groups of 10 to conveniently perform addition, subtraction, multiplication, and division. The system, they argue, would allow large groups of trade goods to be rapidly inventoried with a relatively small cognitive load, essential for a culture without any writing. (To complicate matters a bit further, different types of goods were handled in batches of different numbers. For example, you always counted individual sea turtles, but fish were handled in groups of two. If you said there were 40 fish, you actually had 80. But the math was all done by counting groups of two.) The authors were able to make all these inferences by examining the Mangarevan language, which is still spoken by roughly 1,000 people in the islands. However, the actual math system has been lost, replaced by a full decimal system introduced by French missionaries. The best we can currently do is infer that it would have been easier to handle some things in binary; we can't confirm that this was the mental process used by the Mangarevans before the arrival of Europeans. 
If right, however, the Polynesians were using a binary system for a few hundred years before Leibniz introduced it to European thought around 1700. And it was still a few centuries before electronics made binary a central part of most calculations.
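As a rough illustration of how such a mixed system could work in practice, here is a short Python sketch. The bundle sizes used - 10, 20, 40 and 80, that is, a group of ten doubled binary-fashion - are an assumption based on the description above; the actual Mangarevan number words are not reproduced here.

def mixed_count(n):
    # Split a tally into power-of-two bundles of ten, plus leftover units.
    bundles = {}
    for size in (80, 40, 20, 10):
        bundles[size], n = divmod(n, size)
    bundles["units"] = n
    return bundles

print(mixed_count(173))   # {80: 2, 40: 0, 20: 0, 10: 1, 'units': 3}

# Goods counted in batches, as described above: fish were handled in pairs,
# so a stated tally of 40 fish corresponds to 80 actual fish.
stated_tally, batch_size = 40, 2
print(stated_tally * batch_size)   # 80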
Using motors with Robots in Quorum

In this tutorial, we will become more familiar with how to use the motors of the robots. The topics discussed will include:
- How motors work in general.
- The different ways that you can use motors in a program.

How Motors Work

A motor is only able to perform a single specific task: rotation. A motor can either rotate backwards (with the RotateBackwards() action) or forward (with the RotateForward() action). All robot movement is controlled through motors. It is important to understand the orientation of the motor movement in order to give the robot instructions. The RotateForward() action rotates the motor forward, away from the connection port. It may help to think of the "front" of the motor as the part that rotates and the "back" of the motor as the end that has the connection port. The image below illustrates the forward direction of a motor.

The amount that a motor rotates is measured in degrees, where a rotation of 360 degrees means the motor has completed one full revolution. Rotation can also be specified as a negative number to indicate a reverse rotation. A rotation of -360 degrees will cause the motor to complete one revolution backwards. With Quorum, the speed at which a motor rotates is measured in degrees per second. So, at a speed of 360 degrees per second (the default setting) a motor will complete one revolution every second. Similarly, a motor speed of 720 will cause the motor to complete two full revolutions every second. The maximum speed for a motor varies, since it will depend on battery voltage and the other things that the robot is doing. Generally, the range of speed is between 600-900 degrees per second.

There are two types of motors, large and medium. The large motors are generally used to move a robot along a surface, while the medium motors usually control other parts of the robot. The large motors allow parts to be connected on either side, while the medium motors only allow a component to be connected to its front. An illustration of the medium motor is shown below. Despite their differences, however, any motor action can be used with either type of motor. When connecting motors to the robot, it is important to note which motor is plugged into which port. The motor ports on the brick are on top and labeled A, B, C, and D. We will give instructions to the motors using these letters in our code. Now that we have a better grasp on the basics of motors, we can explore different ways they can be utilized in a program.

Programming Motors

The Quorum Robot Library provides actions for controlling motor movement, speed, and other information. With this set of actions, we can instruct the robot to move a certain amount or to move until we tell it to stop. If we want a motor to finish its rotation before the program moves on to the next line of code, we tell the program to wait for that motor to finish.
The following program demonstrates these actions:

use Libraries.Robots.Lego.Motor

Motor motor

//rotates one revolution
motor:RotateByDegrees("C", 360)
motor:Wait("C")

//does nothing, motor is already at 360
motor:RotateToDegree("C", 360)
motor:Wait("C")

//rotates one more revolution
motor:RotateByDegrees("C", 360)
motor:Wait("C")

//rotates backwards two revolutions
motor:RotateToDegree("C", 0)
motor:Wait("C")

The next program demonstrates setting the speed of a motor:

use Libraries.Robots.Lego.Motor

Motor motor

//motor B will rotate 2 revolutions per second
motor:SetSpeed("B", 720)

//motor B should reach 1440 degrees in 2 seconds
motor:RotateByDegrees("B", 1440)
motor:Wait("B")

When a program is running, information is stored for each connected motor, including what degree a motor is moving to, how far it has already rotated, and the speed at which it is moving. We can also detect stalls by asking if the motor is currently moving. Lastly, we can reset the motor's rotation information back to 0.

use Libraries.Robots.Lego.Motor
use Libraries.Robots.Lego.Screen

Motor motor
Screen screen

//returns the default of 360, since we haven't set the speed
motor:GetSpeed("A")

motor:RotateByDegrees("A", 1080)
screen:Output("Rotating to: " + motor:GetRotationTarget("A"), 0)

repeat while motor:IsMoving("A")
    //displays real-time speed on the screen
    screen:Output("Speed: " + motor:GetSpeed("A"), 1)
end

//GetRotation will return 1080, since that's how far it has moved
screen:Output("Rotation: " + motor:GetRotation("A"), 2)

motor:ResetRotation("A")

//now returns 0, since we reset the rotation information
screen:Output("After reset: " + motor:GetRotation("A"), 3)

//do one more rotation so we can look at the screen
motor:RotateByDegrees("A", 1080)
motor:Wait("A")

Notes:
- The default rotation speed of a motor is 360 degrees per second.
- Class constants can be used to refer to motors when passing them to an action. For example, if we have a Motor object called motor, then the code motor:Stop("A") is the same thing as calling Stop with the corresponding class constant for motor A.

For documentation on the Motor class, see here.

In the next tutorial, we will discuss Sensors, which describes how to use Lego sensors.
It is certainly possible that dragons could have breathed fire. Today fireflies produce light, eels produce electricity and the bombardier beetle can produce fire. The bombardier beetle has a cannon near his rear end where he can blast his enemies with chemicals that are 212 degrees Fahrenheit: the temperature of boiling water. How does the beetle do that? This beetle has compartments where he stores fiery chemicals. When these chemicals unite they explode. If a tiny beetle can possess such a sophisticated mechanism in its rear, then a dinosaur or dragon could also have had a similar mechanism that would allow it to breathe fire to defend itself. In fact, some of the dinosaurs had compartments in their heads that are connected to their nasal passages. The Tyrannosaurus rex had a head the size of a car, but his brain was the size of a baseball. The rest of his head was full of these compartments connected to his sinuses. If he had these special chemicals stored in these hollow compartments, it is possible that he could have been a fire-breathing dragon. In addition, legends of fire-breathing dragons have circulated around the world for centuries and throughout history. They are depicted in artwork and literature. Almost all cultures have dragon legends despite having had no contact with each other. Check out our Bible Answers page for more information on a variety of topics. In His service,
- Having a large task to achieve can leave us overwhelmed, demotivated and anxious. A large task can usually be broken down into a number of smaller and more specific goals.
- Goal-setting can improve motivation and help to handle anxiety. For example, a hurdler might wish to improve their time in order to qualify for a team. Using goal-setting, the athlete should start by identifying 1 or 2 specific aspects of their performance to work on.
- They can set themselves small, manageable goals to aim for. Once these goals have been achieved, the athlete should be well on the way to achieving their overall goal.
- Research has generally supported the idea of goal-setting. However, there is some disagreement as to whether goals should be specific or general, easy or difficult.
- Weinberg et al (1987) tested the effect of goal-setting on sit-up performance and found no difference in the performance of participants given moderate or difficult goals and those told to 'do their best'. This seems to contradict the principle of specific goal-setting.
- Athletes and coaches report that goal-setting is seen as an important strategy.
- Weinberg et al (2000) surveyed 328 Olympic athletes…
DIFFERENTIAL AND INTEGRAL CALCULUS
BY AUGUSTUS DE MORGAN

CONTENTS:
On the Ratio or Proportion of Two Magnitudes
On the Ratio of Magnitudes that Vanish Together
On the Ratios of Continuously Increasing or Decreasing Quantities
The Notion of Infinitely Small Quantities
On Functions
Infinite Series
Convergent and Divergent Series
Taylor's Theorem. Derived Functions. Differential Coefficients
The Notation of the Differential Calculus
Algebraical Geometry
On the Connexion of the Signs of Algebraical and the Directions of Geometrical Magnitudes
The Drawing of a Tangent to a Curve
Rational Explanation of the Language of Leibnitz
Orders of Infinity
A Geometrical Illustration: Limit of the Intersections of Two Coinciding Straight Lines
The Same Problem Solved by the Principles of Leibnitz
An Illustration from Dynamics: Velocity, Acceleration, etc.
Simple Harmonic Motion
The Method of Fluxions
Accelerated Motion
Limiting Ratios of Magnitudes that Increase Without Limit
Recapitulation of Results Reached in the Theory of Functions
Approximations by the Differential Calculus
Solution of Equations by the Differential Calculus
Partial and Total Differentials
Application of the Theorem for Total Differentials to the Determination of Total Resultant Errors
Rules for Differentiation
Illustration of the Rules for Differentiation
Differential Coefficients of Differential Coefficients
Calculus of Finite Differences. Successive Differentiation
Total and Partial Differential Coefficients. Implicit Differentiation
Applications of the Theorem for Implicit Differentiation
Inverse Functions
Implicit Functions
Fluxions, and the Idea of Time
The Differential Coefficient Considered with Respect to Its Magnitude
The Integral Calculus
Connexion of the Integral with the Differential Calculus
Nature of Integration
Determination of Curvilinear Areas. The Parabola
Method of Indivisibles
Concluding Remarks on the Study of the Calculus
Bibliography of Standard Textbooks and Works of Reference on the Calculus
Context: There is a claim for a sixth dwarf planet, Hygiea.
- As of today, there are officially five dwarf planets in our Solar System.
- The most famous is Pluto, downgraded from the status of a planet in 2006.
- The other four, in order of size, are Eris, Makemake, Haumea and Ceres.
- The sixth candidate, called Hygiea, has so far been taken to be an asteroid. It lies in the asteroid belt between Mars and Jupiter.
- Using observations made through the European Southern Observatory's SPHERE instrument at the Very Large Telescope (VLT), astronomers have now found Hygiea may possibly be a dwarf planet. This is the first time astronomers have observed Hygiea in high resolution to study its surface and determine its shape and size.
- VLT observations indicate that Hygiea satisfies all the conditions of a dwarf planet.
The International Astronomical Union sets four criteria for a dwarf planet:
- it orbits around the Sun
- it is not a moon
- it has not cleared the neighbourhood around its orbit
- it has enough mass that its own gravity pulls it into a roughly spherical shape
- SPHERE (an instrument of the European Southern Observatory) is a powerful planet finder and its objective is to detect and study new giant exoplanets orbiting nearby stars using a method known as direct imaging — in other words, SPHERE is trying to capture images of the exoplanets directly, as though it were taking their photograph.
What should your child be learning at home?

Are you concerned about the language skills your child should be gaining from you at home? Children learn a great deal of language skills during their first years at home; however, the rate at which children develop speech and language skills before the age of three varies from one child to the next. Parent-infant interaction is crucial in developing communication skills, setting a child up for later learning and development. In several studies a strong relationship has been drawn between development of language skills and many social and economic factors experienced by the family and parents. So, what are some reasons language development may differ from one child to the next? What habits or behaviours can parents rely on to ensure their child is learning effective communication skills in those crucial years at home?

Studies have explored four factors that may be at the root of differences in rate of early language development:
1. Income (socio-economic status)
2. Job (hours spent at work vs. at home)
3. Educational background
4. Parent input (amount of time spent communicating with child)

How do these factors translate to language development? And which one has the biggest impact on a child? According to Topping et al. (2013), socio-economic status accounts for large variances in vocabulary. Commonly, children of parents with higher educational background as well as higher income command a significantly larger vocabulary by the age of three than children whose parents have completed less education or work in lower paying jobs (Topping et al., 412). Some reasons for this difference are found in the types of language used by parents and families in each category. While vocabulary is important, it is the way parents communicate with their child that seems to make a greater impact. The use of language to inspire conversation and encourage questions significantly broadens a child's vocabulary and use of expressive language, as opposed to language that is used more often to direct a child's behaviour (412). Additionally, parents in all demographics are found to consistently overestimate the amount they speak directly to their children in a day (412).

So, what does this mean? There are some things parents can try in order to engage their children and encourage language and communication development. One is telling or discussing stories, encouraging open-ended questions and responses. Ensuring conversations are two-way rather than issuing directives allows children to become more expressive. Another thing parents can do is encourage symbolic play, for example playing kitchen. Activities that encourage creativity, discussion, and symbolic associations are valuable steps to developing effective communication skills (Topping et al., 416).

Written by: Kimberly Thomson, Head of Research at Simone Friedman Speech-Language Services

Sources:
Rowe, Meredith L., Stephen Raudenbush, Susan Goldin-Meadow (2012). The Pace of Vocabulary Growth Helps Predict Later Vocabulary Skill. Child Development 83(2), 508-522.
Topping, Keith, Rayenne Dekhinet, and Suzanne Zeedyk (2013). Parent-infant Interaction and Children's Language Development. Educational Psychology 33(4), 391-426.
Ammonia is a compound found within the human body. It is a major byproduct of protein catabolism and is required for the anabolism of certain essential cellular compounds. Extra amounts of ammonia are processed by the liver and kidneys in order to keep blood levels within a tightly controlled range. If levels build up and exceed this range, then severe neurological damage and even death can occur.

Ammonia is a waste product produced by the body as a byproduct of metabolism. It is mainly produced within the digestive tract; however, it can also be produced wherever amino acid breakdown occurs within the body. The major areas in the body that produce ammonia are the intestines, skeletal muscle, liver, and kidneys. The body gains energy from proteins via the Krebs cycle, which occurs within the mitochondria of all cells. In this process some of the intermediary products produced within the cycle can be transaminated to glutamate for other metabolic processes. It is the conversion of glutamate to α-ketoglutarate via the enzyme glutamate dehydrogenase that creates ammonia. Glutamate can also be produced from glutamine via a hydrolysis reaction; the addition of water and glutaminase produces the compound glutamate. Both of the aforementioned reactions are constantly in flux within the cells as needed, producing ammonia as a waste product.

In order to rid the body of excess ammonia, the hepatocytes in the liver metabolize the ammonia into urea. This is accomplished via the Krebs-Henseleit urea cycle, which is a combination of the Krebs and urea cycles that specifically processes ammonia. The urea is then excreted via the kidneys without harming the body. Some ammonia is also produced and excreted by the kidneys. This occurs within the renal tubular cells as they generate ammonia via the glutamate reaction. Because the tubular cells are selectively permeable, the ammonia cannot get back through into the blood stream and is excreted directly into the urine. In the event that the liver or kidneys are not working properly, urea and ammonia can back up into the system and these cycles will shut down, causing toxic amounts of ammonia to build up in the blood stream.

It is normal to have low levels of ammonia in the blood stream as it is constantly being produced by the body; however, high levels can hold much clinical significance for a physician. Normal ranges for ammonia can differ between labs. As a general rule, patients less than 30 days old have an ammonia level around 64-107 umol/L, 1 month to 12 years old is around 29-57 umol/L, and greater than 12 years old is around 11-32 umol/L.
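Purely as an illustration of how such age-banded reference ranges might be encoded (using only the approximate values quoted above, which vary between laboratories; this is not clinical guidance):

def ammonia_reference_range(age_days):
    # Approximate (low, high) reference range in umol/L for a given age in days.
    if age_days < 30:
        return (64, 107)
    elif age_days < 12 * 365:
        return (29, 57)
    else:
        return (11, 32)

def is_elevated(ammonia_umol_per_l, age_days):
    low, high = ammonia_reference_range(age_days)
    return ammonia_umol_per_l > high

print(ammonia_reference_range(10))    # (64, 107) for a newborn
print(is_elevated(90, 40 * 365))      # True: 90 umol/L is well above the adult range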
While hyperammonemia has many different causes, its overall effect on the body is expressed solely as neurological damage. Ammonia can cross the blood-brain barrier via passive diffusion. It needs to be able to do this because neurons need to use energy from the Krebs cycle just like all other cells in the body. The problem occurs when ammonia builds up to toxic levels in the cerebrospinal fluid. At this point all sorts of important brain functions begin to become inhibited, causing brain swelling and alteration in cognition. If the situation is left unchecked, eventually the body will fall into a coma and die.

Hyperammonemia can be caused by either an inherited disorder or an acquired illness. Inherited causes include deficiencies of urea cycle enzymes and of related transporters that handle amino acids such as lysine and ornithine. These disorders usually manifest in infancy; however, there are rare cases that manifest in adulthood. There are dozens of inherited disorders that affect enzymes related to ammonia metabolism; the key is that the outcome is still the same if ammonia levels become toxic.

The list of acquired causes of hyperammonemia is a bit more varied. The two big causes are liver and kidney failure. Severe liver failure directly impacts ammonia metabolism, whereas kidney failure affects the body's ability to excrete urea, causing a backup of ammonia. A third cause is Reye's syndrome, which occurs in children. This occurs when a child has a viral infection and the liver becomes overloaded due to aspirin toxicity. Once the liver is overworked, the ammonia levels begin to back up to toxic levels and cause damage. Other drugs such as heparin and valproic acid will actually increase ammonia. Tetracycline and diphenhydramine will decrease ammonia. The key to finding the cause of hyperammonemia is determining at what point in the body the metabolic cycle is getting jammed up.

Treatment for hyperammonemia is relatively simple in concept: fix the metabolic cycle. For inherited enzyme deficiencies a change in diet is usually required. For patients with acquired causes of hyperammonemia, treatment can be as simple as adhering to a low protein diet or something as drastic as undergoing a liver transplant. In general, to treat acute hyperammonemia physicians can give a patient an intravenous solution of sodium benzoate and phenylacetate to remove excess nitrogen from the blood. Dialysis can also be performed if needed to remove excess toxins. These are quick fixes though; the main goal is to treat the underlying condition that created the environment of hyperammonemia.

* Crisan, E., Chawla, J. (2009). Hyperammonemia. Emedicine from WebMD. Retrieved December 20, 2009 from http://emedicine.medscape.com/article/1174503-overview
* Essig, M. (2009). Ammonia. Yahoo Health. Retrieved December 20, 2009 from http://health.yahoo.com/blood-diagnosis/ammonia/healthwise—hw1768.html
* Burtis, C. A., Ashwood, E. R., Bruns, D. E., Tietz Textbook of Clinical Chemistry & Molecular Diagnostics. St. Louis, Missouri: Elsevier (2006). 1765-1766, 1789-1791.
* Glutamic Acid. (2009). Retrieved December 20, 2009 from http://en.wikipedia.org/wiki/Glutamic_acid
Parents hoping to help their children get a better night's sleep may want to have the children leave their cell phones on the breakfast table overnight. The problems associated with poor sleep are diverse and well documented, and they can set the stage for negative health consequences that may last a lifetime. More immediately, too little sleep contributes to poor academic performance, behavior issues, as well as childhood and adolescent obesity, and high blood pressure. Researchers collected data from over two thousand fourth and seventh graders in an effort to better understand the connection between screens in the bedroom at night and the quality and duration of the children's sleep. They asked the children in the study about their weekday bedtimes and wake times, and about the presence of TVs and small screens (cell phones, smart phones, iPod touch devices, etc.) in their bedrooms. Fifty-four percent of the students reported that they slept near a small screen, while 75% had a TV in the bedroom. Not surprisingly, there were more seventh-graders (65%) with small screens than fourth graders (46%). Seventh graders slept for an average of 8.8 hours per night while fourth graders slept 9.8 hours. The data revealed that both TVs and small screens caused sleep loss, but small screens were the culprits when it came to excessive daytime sleepiness. The children who slept near small screens reported 20.6 fewer minutes of sleep per weeknight than those who did not sleep near small screens. Those with a TV in their bedrooms reported 18.0 fewer minutes of weekday sleep. There are several reasons why small screens may cause both less sleep time and increase the likelihood of daytime fatigue, the researchers say. Receiving phone calls and text messages around sleep time can be inappropriately stimulating at bedtime. The use of screens at night may displace sleep time by delaying bedtime and leading to shorter sleep duration. Watching exciting or disturbing content on screens may affect sleep onset and quality. One reason why phones seemed to produce more daytime fatigue than television, according to the researchers, is that TV watching is a passive activity, while small screens, which include Internet access and gaming, are more interactive and may therefore be more arousing. Cell phones can also be the source of alerting sounds for calls and texts that awaken the children's sleep rhythms and disturb their sleep cycles. The child may then have difficulty falling back to sleep. Most parents already know that watching TV or using small screen devices, instead of going to sleep, interferes with a child's restful sleep, but this study makes it even clearer the sorts of problems that bedtime screen time causes. What parents may not appreciate is the extent to which disturbed sleep impacts their children's immediate and long-term physical, social, and emotional health, as well as daily productivity. Screens are pervasive in our lives and setting limits is not an easy task. However, the goal of preventing both short- and long-term health problems in children makes it necessary to rise to the challenge. Help your children become aware of the effects of screen time at bedtime and have them leave their phones out of their rooms at some agreed-upon hour.
Parents may want to set an example, especially since cell phones are bad for parents' productivity too. The study is published in Pediatrics.
(PhysOrg.com) -- Amino acids are markers for potential life since they are the building blocks of proteins. Now scientists in California have for the first time found the shock wave created when a comet has a glancing blow with a planet can deform molecules inside the comet, break bonds and create new ones, forming new molecules, including an amino acid complex. Researchers at the Lawrence Livermore National Laboratory in Livermore, California used about one million computer hours on the laboratory’s Atlas computer cluster to simulate what chemical events might occur in a single ice grain inside a comet striking a planet with a glancing blow. They were looking in particular to see if amino acids might be formed. There are a number of theories on how amino acids were first formed on Earth, including the interactions of lightning or UV radiation with the primordial “soup” of simple molecules, and the presence of amino acids on interstellar dust, but Nils Goldman and his team thought amino acids might also be produced by the shock compression wave formed when a comet hits a planet. The sudden jolt produces a compression wave that passes through the comet faster than the speed of sound, and this could deform and break up molecules inside it, which would then form other molecules. The computer simulation began with a grain of ice containing a mix of 210 molecules commonly used by researchers as representative of ice inside comets. The molecules include ammonia, carbon monoxide, carbon dioxide, methanol, and water. They then simulated what would happen when the comet containing the ice grain and moving at 29 km per second hit the Earth side-on (since a head-on collision would most likely destroy the comet). Their computer model used density functional theory simulations and a quantum mechanical treatment of the electrons in the molecules, such that if electrons in the model came close enough to electrons in other atoms a bond would be created. They first modeled a weak shock wave with a pressure of 10 gigapascals, which produced a temperature of 700 kelvin. In this model the ice grain was compressed by 40% and formed new C-N bonds, producing molecules such as carbamide (urea - CH4N2O), which is a natural molecule formed in the liver from ammonia produced by the de-amination of amino acids. Its formation suggested that processes creating amino acids were also possible. Goldman said that under these reactive conditions if one sort of molecule with a C-N bond is formed, it is easy to imagine more carbons adding to it and forming more complex molecules such as amino acids. Goldman and colleagues then simulated higher pressure and temperature collisions, and found that when the pressure was 47 gigapascals and the temperature was 3,141 kelvin for the first 20 picoseconds (20 trillionths of a second) after impact, large and complex molecules containing C-N bonds were formed. Further molecules were formed in the relaxation period after the shock compression wave, during which the compressed comet cools and expands. After 50 picoseconds of relaxation there were five C-N molecule types, including carbamide, hydrogen cyanide (HCN), and what appeared to be the amino acid glycine (C2H4NO2), but with a carbon dioxide molecule attached. Hydronium ions (H3O+) were also formed. 
Goldman said he was certain glycine would be formed within the first microsecond through a spontaneous reaction of the glycine/CO2 complex with a hydronium ion to form glycine, water and carbon dioxide, but the simulation is too complex to run long enough to see this. Goldman presented his findings last week at the Spring 2010 meeting of the American Chemical Society in San Francisco, California.
Language and Literature in the MYP Language is what makes us human. It is a recourse against the meaningless noise and silence of nature and history. Literature is the art of discovering something extraordinary about ordinary people, and saying with ordinary words something extraordinary. Language is fundamental to learning, thinking and communicating; therefore it permeates the whole curriculum. Indeed, all teachers are language teachers, continually expanding the boundaries of what students are thinking about. Mastery of one or more languages enables each student to achieve their full linguistic potential. Students need to develop an appreciation of the nature of language and literature, of the many influences on language and literature, and of its power and beauty. They will be encouraged to recognize that proficiency in language is a powerful tool for communication in all societies. Furthermore, language and literature incorporates creative processes and encourages the development of imagination and creativity through self-expression. All IB programmes value language as central to developing critical thinking, which is essential for the cultivation of intercultural understanding, as well as for becoming internationally minded and responsible members of local, national and global communities. Language is integral to exploring and sustaining personal development and cultural identity, and provides an intellectual framework to support conceptual development. The six skill areas in the MYP language and literature subject group—listening, speaking, reading, writing, viewing and presenting—develop as both independent and interdependent skills. They are centred within an inquiry-based learning environment. Inquiry is at the heart of MYP language learning, and aims to support students’ understanding by providing them with opportunities to independently and collaboratively investigate, take action and reflect. As well as being academically rigorous, MYP language and literature equips students with linguistic, analytical and communicative skills that can also be used to develop interdisciplinary understanding across all other subject groups. Students’ interaction with chosen texts can generate insight into moral, social, economic, political, cultural and environmental factors and so contributes to the development of opinion-forming, decision-making and ethical-reasoning skills, and further develops the attributes of an IB learner. To assist in achieving these broader goals, this guide provides both teachers and students with clear aims and objectives for MYP language and literature, as well as details of internal assessment requirements.
The asteroid impact would have produced the energy equivalent of 100 teratons of TNT (4.2 x 10^23 J), a billion times the energy produced by both atomic bombs dropped during World War II. Upon impact, the asteroid would have burrowed underground in less than a second, spewing a fireball of super-heated dust, ash and steam into the atmosphere (termed "ejecta"), part of which would have fallen back to earth igniting wildfires; colossal shock waves would have been emitted, triggering earthquakes and creating tsunamis measuring over a mile in height; volcanic eruptions would have been added to the mix. Over time, the particles suspended in the atmosphere blocked the sunlight, which dramatically cooled the earth for weeks or months and interrupted photosynthesis by plants, which affected the food chain. These particles then rained back onto earth, covering the entire surface for many years and contributing to an extremely harsh environment. The shock production of carbon dioxide led to a sudden greenhouse effect, and high concentrations of carbon dioxide created acid rain. Basically, it was one huge mess. In addition to the iridium layer, other factors indicate a major climate change occurring at 65 mya, one being the "fern spike". Using an electron microscope to analyze pollen and spores (a study called palynology), paleobotanists have discovered that below and above the K-T boundary layer, fern spores routinely measure 15-30 percent of the total spore/pollen count. However, immediately above the iridium layer the fern spores measure 99 percent of the total count, indicating that basically only ferns had managed to survive some devastating event. This fits, as we know ferns can tolerate soils low in nutrients. Fossil leaf sizes and shapes also indicate dramatic climate change. All of this precisely coincides with the 65 myo layer, above which no dinosaurs have ever been found. But of course not all questions have been answered. For example, why did dinosaurs die but frogs survive the impact? In 2010, a panel of 41 experts studied the asteroid theory and concluded that it was an asteroid strike that wiped out the dinosaurs, and further that the extraterrestrial body struck in a sulfate-rich area, releasing deadly sulfur and making the dust even more toxic. Yet other theories still survive. Perhaps several asteroids struck the earth in a short period, or a series of volcanic eruptions in India may have been the culprit, or climate change, or a sea level change. We will have the opportunity to hear about these theories Monday. One of the best exposures of the K-T boundary in the world is nearly in our back yard, found in Long Canyon in Trinidad Lake State Park, a few miles off I-25 about 125 miles south of Canon City. The 1" clay layer is clearly visible up a steep slope. Interpretive signs explain the theory. In Mexico, the K-T layer varies from 3 to 300 feet thick. To experience a meteor crater, you can travel a bit further from home to Meteor Crater near Winslow, Ariz., where a meteor 54 yards across slammed into earth about 50,000 years ago at an estimated speed of 28,600 mph, leaving a crater 1 mile in diameter. Please join the Canon City Geology Club at 6:30 p.m. Monday at the First United Methodist Church Fellowship Hall on the northwest corner of Ninth and Main streets as we welcome Dr. Pete Modreski as our speaker. 
He is a geochemist at the US Geological Survey in Denver and will speak on "The Day the Mesozoic Died", focusing on current theories for the demise of the dinosaurs at the end of the Cretaceous period. He will show a video featuring Kirk Johnson, former Chief Curator at the Denver Museum of Nature and Science, now Director of the Smithsonian's National Museum of Natural History in Washington, D.C.
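As a quick sanity check on the impact-energy figure quoted above, the short sketch below converts 100 teratons of TNT to joules and compares it with the combined yield of the two World War II bombs. The conversion factor (1 ton of TNT is about 4.184 x 10^9 J) is the standard convention, and the bomb yields (roughly 15 and 21 kilotons) are commonly cited estimates rather than figures from the article.

```python
# Rough sanity check on the impact energy figure quoted above.
# Assumptions (not from the article): 1 ton TNT ~ 4.184e9 J;
# Hiroshima ~15 kt and Nagasaki ~21 kt are commonly cited yield estimates.

TON_TNT_J = 4.184e9           # joules per ton of TNT (standard convention)
impact_tons = 100e12          # 100 teratons = 1e14 tons of TNT

impact_joules = impact_tons * TON_TNT_J
print(f"Impact energy: {impact_joules:.2e} J")   # ~4.2e23 J, matching the article

ww2_bombs_tons = 15e3 + 21e3                     # combined yield, tons of TNT
print(f"Ratio to both WWII bombs: {impact_tons / ww2_bombs_tons:.1e}")  # ~3e9, on the order of a billion
```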
In this experiment, you will use a Styrofoam-cup calorimeter to measure the heat released by three reactions. One of the reactions is the same as the combination of the other two reactions. Therefore, according to Hess's law, the heat of reaction of the one reaction should be equal to the sum of the heats of reaction for the other two. This concept is sometimes referred to as the additivity of heats of reaction. The primary objective of this experiment is to confirm this law. The reactions we will use in this experiment are:
(1) Solid sodium hydroxide dissolves in water to form an aqueous solution of ions.
(2) Solid sodium hydroxide reacts with aqueous hydrochloric acid to form water and an aqueous solution of sodium chloride.
(3) Solutions of aqueous sodium hydroxide and hydrochloric acid react to form water and aqueous sodium chloride.
In this experiment, you will:
- Combine equations for two reactions to obtain the equation for a third reaction.
- Use a calorimeter to measure the temperature change in each of three reactions.
- Calculate the heat of reaction, ΔH, for the three reactions.
- Use the results to confirm Hess's law.
Sensors and Equipment
This experiment features the following Vernier sensors and equipment.
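For a rough sense of how the additivity check works numerically, here is a minimal sketch (not part of the lab write-up) that computes ΔH from hypothetical calorimeter readings. It assumes the solutions have roughly the heat capacity of water (about 4.18 J/g·°C), and the masses, temperature rises, and mole amounts below are made-up placeholder values.

```python
# Minimal sketch of the additivity (Hess's law) check, using made-up data.
# Assumptions: the solution has roughly the heat capacity of water
# (4.18 J/(g*degC)); all masses, temperature rises, and mole amounts below
# are hypothetical placeholders, not real measurements.

def delta_H_kJ_per_mol(solution_mass_g, temp_rise_C, moles_reacted, c=4.18):
    """Heat of reaction in kJ/mol. The sign is negative because heat
    released by the reaction is absorbed by the solution."""
    q_joules = solution_mass_g * c * temp_rise_C
    return -(q_joules / 1000.0) / moles_reacted

# Reaction 1: NaOH(s) dissolving in water (hypothetical: 0.05 mol in ~100 g of solution).
dH1 = delta_H_kJ_per_mol(100.0, 5.3, 0.05)

# Reaction 3: NaOH(aq) + HCl(aq) neutralization (hypothetical: 0.05 mol in ~100 g of solution).
dH3 = delta_H_kJ_per_mol(100.0, 6.8, 0.05)

# Hess's law: reaction 2 (solid NaOH reacting with HCl(aq)) is reaction 1 + reaction 3,
# so its measured heat of reaction should be close to the sum of the other two.
print(f"dH1 = {dH1:.1f} kJ/mol, dH3 = {dH3:.1f} kJ/mol")
print(f"Predicted dH2 (dH1 + dH3) = {dH1 + dH3:.1f} kJ/mol")
```

In the actual lab, the ΔH measured directly for reaction 2 would then be compared against this predicted sum to confirm the law.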
Story Time for Preschoolers Becoming a Reader “Rattle, shake, screech, roar — who’s knockin’ at my door?” Matthew tears through the house, a sheet over his head. “Boom, boom, in my room!” he yells. “A witch is flyin’ on her broom!” For the past month, Matthew has immersed himself in a world of Halloween books. Although he does not yet know how to read text, he spends time every day looking at books with spooky ghosts, goblins, and skeletons. He recites lines he has memorized from the many times his parents have read them aloud. And he makes up his own, like the ones above. All this adds up to one thing: Matthew is becoming a reader. Moving Toward School — and Reading Preschoolers know a lot of things they didn’t know as babies. They don’t read independently, but if they’ve been read to a lot, they know a thing or two about reading: - They know books are read from front to back. - Pictures should be right-side up. - Reading is done from left to right. - The language of books is different from spoken language. - Words have different sounds in them. - There are familiar and unfamiliar words. - Stories have a beginning, a middle, and ending. All of these are emergent literacy skills — important building blocks toward the day when they’ll read independently. How can you encourage further development of these skills? Just keep reading aloud. Choosing lots of different books to read aloud will build your preschooler’s vocabulary, and help your child learn about different topics and understand how stories are structured and what characters do in them. Your child also will learn that: - Text is words written down. - Letters in a specific order form a word. - There are spaces between words. Understanding these basic concepts will help when kids start formal reading instruction in school. When and How to Read Many kids this age have moved beyond the small world of home to childcare or preschool. They may even be enrolled in lessons or classes. Read-aloud time can be a chance to slow down and spend time together. Try to have set times to read together. Before bed works well, as do other “down” times in the day, like first thing in the morning or after meals. Your child will enjoy cuddling with you, hearing your voice, and feeling loved. Kids between 3 and 5 years old are eager to show off what they know and love to be praised. Continue to choose some books with simple plots and repetitive text that your child can learn and retell. Encourage your child to “read” to you and praise the attempts. Here are some additional tips: - Yes, you should read that book for the millionth time — and try not to sound bored. Your child is mastering many skills with each re-reading. - When you are looking at a new book, introduce it. Look at the cover and talk about what it might be about. Mention the author by name and talk about what an author does. - Ask your child why a character may have taken a specific action. - Ask what part of the story your child liked best and why. - Talk about the parts of the story — how did it begin? What happened in the middle? What did your child think of the ending? - Move your fingers under the words as you read to demonstrate the connection between what you are saying and the text. - When you come to familiar or repetitive lines, pause and let your child finish them. (“I do not like green eggs and….I do not like them, Sam….”) - Ask your child to point out letters or words he or she might recognize. 
You might also occasionally point to words and sound them out slowly while your child watches. While it’s important to sometimes ask your child more complicated questions, your top goal should be to enjoy reading and have fun. Don’t make reading a book like a test your child needs to pass. Look at the pictures, make up alternative words together, and be playful and relaxed. Also, remember that reading comes to different kids at different times. Some kids fall in love with books earlier than others. So if your child is one who doesn’t seem as interested right away, just keep reading and showing how wonderful it can be. Remember these three key phrases: Read with me! Talk with me! Have fun with me! These three things can help your child on the road to reading success. The Best Books for Busy Minds Preschoolers like books that tell stories; they’re also increasingly able to turn paper pages and sit still, so longer picture books are a good choice for this age group. Continue to read your child books with predictable texts and familiar words, but also include those with a richer vocabulary and more complicated plots. Consider reading chapter books that take more than one session to finish. Kids are curious and like reading books about other kids who look and act like them, but also want stories with kids who live in different places and do different things. Expose your child to many characters and talk about how they act and what decisions they make. Include talking animals, monsters, and fairies to stimulate your little one’s vast imagination. Reinforce positive feelings about something your child has learned to do (kick a soccer ball, paint a picture) by reading books about kids who have done the same thing. Help your child talk about fears and worries by reading books about going to the doctor or dentist, starting a new school, or dealing with a bully. And pick books that will challenge your child and help advance developing skills. Alphabet books, counting books, or books with lots of new vocabulary are all good choices. Books about going to school — especially when kids are about to start preschool or kindergarten — are a great choice, as are books about making friends. Pick nonfiction books that talk about a single subject of interest to your child — owls, the ocean, puppies, the moon — especially if they have great illustrations. And don’t forget poetry — preschoolers still love rhymes. This age group is starting to enjoy jokes, so silly poems or songs will be a huge hit. Wordless picture books that convey meaning through the illustrations are also a must. Once the two of you have been through a wordless book a couple of times, your child will most likely begin telling you the story — and may even be found “reading” the story to favorite stuffed animals or dolls. Try homemade books too — photo albums with captions and scrapbooks captivate preschoolers. When your child makes drawings, ask him or her to tell you what they are, label them, and then assemble them into a “book” that you can read together. You can even laminate the pages and have fun creating book covers so that they will last for years to come. Books aren’t the only things your preschooler will love to read — magazines with lots of pictures and catalogues also are appealing. And ask people your child loves to send letters, postcards, or e-mails. Read these together and keep them in a special box where your child can look at them. 
Other Ways to Encourage Book Time Read-aloud time isn’t the only opportunity your child should have to spend time with books — preschoolers love to choose and look at books on their own. Keep books in a basket on the floor or on a low shelf where your child can reach them easily and look at them independently. Keep some books in the car and always have a few handy in your bag for long visits to the doctor or lines at the post office. At this age consider fostering independent reading by putting a reading lamp bedside so your child can look at books for little while before going to sleep. And kids who have just given up naps can be encouraged to spend “quiet time” looking at books on their own. Most important of all: Remember to let your child catch you reading for enjoyment. Turn off the TV, pull out a book, and curl up on the couch where your child can see you — and join you with his or her own favorite book. Reviewed by: Carol A. Quick, EdD Date reviewed: May 2013
Little is known about how sibling relationships impact child and family functioning, but Penn State researchers are beginning to shed light on intervention strategies that can cultivate healthy and supportive sibling relationships. Parents frequently rank their children’s sibling rivalry and conflict as the number one problem they face in family life. “In some other cultures, the roles of older and younger, male and female siblings are better defined, and in those more-structured family relationships, there is not much room for bullying and disrespect,” said Mark Feinberg, research professor in the Prevention Research Center for the Promotion of Human Development. “In the United States, and Western culture more generally, there are few guidelines for parents about how to reduce sibling conflict and enhance bonding and solidarity among siblings. “This is an important issue not only because siblings share a lifetime-long relationship, but also because sibling relations appear to be as important as parenting and peer relations for many aspects of a child’s development and well-being.” The SIBlings are Special (SIBS) Program, started by Feinberg and Susan McHale, professor of human development and family studies, addresses relationships between brothers and sisters, which are critical for learning the life skills that can strengthen a child’s development. Results from a randomized trial across 16 elementary schools in Pennsylvania demonstrated that the program shows promise in promoting healthy sibling relationships, improving family life and enhancing children’s social, emotional and academic development. The researchers published their findings in the current issue of the Journal of Adolescent Health. “What we have learned from testing the SIBS program is laying the groundwork for evidence-based programs designed to prevent sibling problems, as well as to foster mutually beneficial relationships,” Feinberg said. SIBS consists of 12 after-school sessions for elementary-aged sibling pairs, as well as monthly family nights. The program focuses on ways siblings can share responsibilities and practice making decisions together. Session topics include negotiating win-win solutions to conflict, setting goals together, finding mutually enjoyable activities and understanding each other’s feelings. During the program’s three family nights, children show parents what they have learned, and parents learn productive strategies for handling sibling relations — which typically have been ignored by most parenting programs. “Sibling relationships are the only life-long relationships in most people’s lives,” Feinberg said. “This makes it especially important that sisters and brothers learn at a young age how to work as a team and support each other.” Researchers observed the sessions and administered questionnaires to both the parents and children. Siblings who entered the study were randomly assigned to receive the afterschool SIBS program or to a control condition. Parents of siblings in both the intervention and the control conditions received a popular book about sibling relationships. Siblings exposed to the intervention demonstrated more positive interactions, increased self-control and demonstrated greater social competence and academic performance. They also experienced decreases in the impact of internalizing problems, such as depression, shyness and worry. Researchers found that SIBS also enhanced child-mother relationships. 
Mothers involved in the SIBS program demonstrated increased use of appropriate sibling parenting strategies, such as helping resolve conflicts peacefully and encouraging siblings to work problems out by themselves. These mothers also reported lower levels of depression symptoms after the program was completed compared to mothers in the control condition. “Overall, the results of the SIBS intervention are promising,” said McHale. “Brothers and sisters got along better, learned from each other and liked being around each other more. As individuals, siblings in the study were better off emotionally and academically. Mothers also accrued benefits, with many reporting being happier about their personal and family life.” The National Institute on Drug Abuse, as a part of the National Institutes of Health’s American Recovery and Reinvestment Act, funded SIBS, which is a part of the Prevention Research Center at Penn State. “Everyone has personal stories about their siblings,” said Feinberg. “Some are good and some are not so good. So it’s obviously an important area to study. This program is playing a large role in identifying how to derive the best and longest-lasting benefits from healthy and enjoyable brother and sister relationships.”
In this language arts activity, students locate 54 words beginning with the letter "W." They may self-correct by selecting the link at the end of the page. See similar resources: Translate transition words for your class with this handout and brief exercise. Fairly straightforward and informative, it includes sample sentences and a working link to a more complete list of transition words. There are two different... 9th - 12th English Language Arts CCSS: Adaptable Don't Say - A Word Choice Exercise Tired of to be verbs and other overused words? Use this pair of worksheets to encourage your writers to use more interesting word choices. The first page provides a model of words to use other than said. The second template can be filled... 3rd - 12th English Language Arts Tone Worksheet 3 The interpretation of a poem often lies in the mind of its reader, especially when reading the tone. Focusing on the author's word choice, middle schoolers read four different poems and briefly state a perceived tone for each, along with the... 6th - 9th English Language Arts CCSS: Designed Put your common writing errors to rest with this resource, which prompts high schoolers to create eulogies and tombstones for overused and incorrect words. They work on correcting common errors in spelling and usage mistakes in their own... 10th - Higher Ed English Language Arts CCSS: Adaptable "Snapshot" Exercises & Sensory Detail Word Bank Read a sample of creative descriptive writing to your science class. Discuss how writing can be used to record and communicate observations that scientists make. Reading selections and thought-provoking questions are suggested. Also... 2nd - 12th English Language Arts Word Structure- Prefix and Suffix Identify common prefixes and suffixes used in the English language and categorize the different kinds of information provided in a dictionary entry. Learners will write at least five pieces of information that they learn about a word... 6th - 11th English Language Arts If you need an introduction to skill W.9-10.1 for writing, then you're off to a good start here. Included is a funny script that introduces what the students need to know to conquer the skill. It also provides an introduction activity... 9th - 10th English Language Arts CCSS: Adaptable Tell Us All: Tools for Integrating Math and Engineering What a scam! Middle and high schoolers pose as journalists exposing consumer fraud. In this lesson, they write an article for a magazine using data collected during previous investigations (prior lessons) to defend their findings that a... 7th - 12th Math
Bonding Between Atoms With the review material from the previous pages about classifications of atoms and tendencies to gain and lose electrons in mind, let's consider bonding between atoms. The very heart of bonding is the attraction between positive and negative charges, specifically the positive charge of the nucleus and the negative charge of the electrons. The varying tendencies of atoms to gain or lose electrons allow them to attract one another in various ways and form different kinds of bonds. Determining Bond Types Because the inert gases are not particularly good at either gaining or losing electrons, they are not particularly good at forming bonds. They do form some bonds, but not many, and we won't be concerned with them here. Keep in mind throughout this lesson that you can (and should) use this simple idea to determine the type of bonding by looking at the types of atoms that are involved: in general, two metals form metallic bonds, a metal and a nonmetal form ionic bonds, and two nonmetals form covalent bonds. Like many generalities this is an oversimplification (particularly with transition metals and metalloids), but it can be a very useful one. Bond Type Characteristics Metallic, ionic, and covalent are the three primary types of chemical bonding. We can call them atomic bonds because they bond atoms together. Here are some things to keep in mind as you study each of these types of bonding. Ionic and covalent are the most important in chemistry because ionic and covalent bonding can result in the formation of compounds. You will see that ionic and covalent bonding between different elements results in the formation of compounds because the atoms bond to one another in fixed ratios. Remember, fixed ratios of one element to another is a crucial characteristic of compounds. Metallic bonding results in the formation of alloys rather than compounds because it does not require that the atoms combine in fixed ratios. The nature of each of these three kinds of bonding is addressed in the next three sections of this lesson. Try your hand at using this generality by doing exercise 4 in your workbook. You can check your answers below. Sodium bonds to sodium using metallic bonding. Sodium bonds to iron with metallic bonding. Sodium bonds to fluorine with ionic bonding. Iron bonds to fluorine with ionic bonding. Phosphorus bonds to fluorine with covalent bonding. Fluorine bonds to fluorine with covalent bonding. If you got all of these correct, continue. If not, check with the instructor, explain your answers and find out why these pairs of atoms are considered to have the kinds of bonds listed.
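A compact way to see the metal/nonmetal rule above in action is the short sketch below. The element classifications are only a tiny illustrative subset, and the function implements just the simplified rule from this lesson, not a full treatment of transition metals or metalloids.

```python
# Minimal sketch of the "look at the kinds of atoms" rule described above.
# The element classifications are a small illustrative subset, and the rule
# itself is the usual simplification (transition metals and metalloids can
# behave differently).

ELEMENT_TYPE = {
    "Na": "metal", "Fe": "metal",
    "F": "nonmetal", "P": "nonmetal", "O": "nonmetal",
}

def bond_type(elem1, elem2):
    kinds = {ELEMENT_TYPE[elem1], ELEMENT_TYPE[elem2]}
    if kinds == {"metal"}:
        return "metallic"
    if kinds == {"nonmetal"}:
        return "covalent"
    return "ionic"   # one metal plus one nonmetal

# Reproduces the workbook answers listed above:
for pair in [("Na", "Na"), ("Na", "Fe"), ("Na", "F"),
             ("Fe", "F"), ("P", "F"), ("F", "F")]:
    print(pair, "->", bond_type(*pair))
```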
The concept of shifting or sliding baselines refers to the way that significant changes to an ecological system are mistakenly measured against a previous baseline, which may be significantly different from the original state of the system. The concept assumes an 'original state' exists and that a return to that state might be possible, if it could be determined and there was sufficient control of human interference. The concept has been important in both marine conservation planning and fisheries management. However, long periods of exploitation, observed and projected climate change, and the disappearance of some environments, suggest that a return to an original state is unlikely to be achievable in many systems. In addition, protection based on static marine protected areas is unlikely to meet common conservation objectives, as species and habitats are moving and species assemblages shuffling with the changing climate. An alternative to modeling single species distribution changes is to examine change in environmental proxies, such as sea surface temperature (SST). Here, projected changes in SST for the period 2063-2065 from a downscaled ocean model are used to illustrate the similarity to, and movements of, present pelagic environments within conservation planning areas off Eastern Australia. The future environment of small planning areas differs from their present environment and static protected areas might not protect range-changing species. Climate-aware conservation planning should consider the use of mobile protected areas to afford protection to species' changing their distribution, and develop conservation objectives that are not underpinned by a return to historical baselines. marine protected area, range change, regional marine planning
What is Ebola? It displays itself with flu-like symptoms, vomiting and bloody diarrhoea, usually 2 to 21 days after contact with the virus. In later stages, bleeding can occur from the mouth, nose and eyes. Some victims see blood seeping through the skin, which can result in painful blisters. The virus is transmitted through direct contact with the blood or other bodily fluids of infected people or animals, or with objects such as needles contaminated by them; fruit bats are thought to be its natural reservoir. Who is at risk of Ebola fever? Any traveller or person working in a medical situation in areas where Ebola fever has been reported. Outbreaks have occurred mainly in Central, East and West Africa, including the Democratic Republic of the Congo (formerly Zaire), Gabon and Uganda. Travellers to these areas should be aware of Government Travel advisories to such areas. How can I prevent Ebola fever? Travellers usually do not venture to areas where Ebola is a risk. However, any traveller to an area where the disease has been reported should avoid contact with sick people and their bodily fluids, avoid handling bushmeat or sick and dead wild animals, and maintain a suitable level of hygiene, including careful hand washing. Travellers should also contact their GP immediately at the first sign of fever on returning from a trip overseas. Note: This information is designed to complement and not replace the relationship that exists with your existing family doctor or travel health professional. Please discuss your travel health requirements with your regular family doctor or practice nurse.
This month we focus on how uncertainty is represented in scientific publications and reveal several ways in which it is frequently misinterpreted. The uncertainty in estimates is customarily represented using error bars, most commonly standard error (SE) bars and confidence intervals (CIs). The standard error is the standard deviation divided by the square root of the sample size, and since it is usually the means that are plotted, the standard error is the appropriate measurement to use to calculate the error bars; you will want to use the standard error to represent both the + and the - values for the bars. The trouble is that in real life we don't know μ, and we never know whether a particular error bar interval is one of the 95% majority that includes μ or, by bad luck, one that misses it. 95% CIs capture μ on 95% of occasions, so you can be 95% confident your interval includes the true mean. As always with statistical inference, you may be wrong.

Overlap rules are a frequent source of confusion. Two observations might have standard errors which do not overlap, and yet the difference between the two is not statistically significant; conversely, overlapping SE bars do not rule out a significant difference, because the t test also takes into account sample size. For example, take Sample 1 (mean = 0, SD = 1, n = 100, SEM = 0.1) and Sample 2 (mean = 3, SD = 10, n = 10, SEM = 3.33): the SEM error bars overlap, but the P value is tiny (0.005). Although the means differ, and this can be detected with a sufficiently large sample size, there can still be considerable overlap in the data from the two populations. As rough guides for comparing two independent means when n ≥ 10: for 95% CI bars, overlap of half of one arm indicates P ≈ 0.05, and just touching means P ≈ 0.01; for SE bars, P ≈ 0.05 if double the SE bars just touch, meaning a gap of 2 SE. However, if n is very small (for example n = 3), rather than showing error bars and statistics, it is better to simply plot the individual data points. There may be a real effect, but it is small, or you may not have repeated your experiment often enough to reveal it; and with many comparisons, it takes a much larger difference to be declared "statistically significant". Simple communication is often effective communication.

Researchers themselves often misread error bars. In one study of researchers' understanding of confidence intervals and standard error bars, Belia's team randomly assigned one third of the group to look at a graph reporting standard error instead of a 95% confidence interval. Just 35 percent were even in the ballpark -- within 25 percent of the correct gap between the means.
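To make the Sample 1 / Sample 2 example above concrete, here is a small sketch using SciPy's summary-statistics t test. The SEM is computed with the usual SD/√n definition, and the quoted P value appears to assume an ordinary equal-variance unpaired t test (that assumption is not stated in the text, so it is flagged in the comments).

```python
# Sketch of the overlapping-SEM-bars example quoted above.
# Summary statistics from the text: mean 0, SD 1, n = 100 vs. mean 3, SD 10, n = 10.
# Assumes an ordinary (equal-variance) unpaired t test.

from math import sqrt
from scipy.stats import ttest_ind_from_stats

m1, s1, n1 = 0.0, 1.0, 100
m2, s2, n2 = 3.0, 10.0, 10

sem1, sem2 = s1 / sqrt(n1), s2 / sqrt(n2)
print(f"SEM 1 = {sem1:.2f}, SEM 2 = {sem2:.2f}")          # 0.10 and ~3.16

bars_overlap = (m1 + sem1) >= (m2 - sem2)
print("SEM bars overlap:", bars_overlap)                  # True

t_stat, p_value = ttest_ind_from_stats(m1, s1, n1, m2, s2, n2, equal_var=True)
print(f"t = {t_stat:.2f}, two-tailed p = {p_value:.4f}")  # p well below 0.05
```

Run as written, this gives a P value around 0.004, in line with the "tiny" value quoted; a Welch (unequal-variance) test on the same numbers gives a much larger P value, so the quoted figure depends on that equal-variance assumption.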
Causes and Symptoms of an Infectious Disease An infectious disease is caused by any of the pathogenic microorganisms like viruses, bacteria, fungi or parasites. Infectious diseases are capable of being spread directly or indirectly from one person to the other. Infectious diseases of animals are termed 'zoonotic diseases'; these can cause diseases when transmitted to human beings. It is always advisable to have diagnostic tests done to identify an infectious disease. Microorganisms like bacteria, fungi and viruses coexist in our bodies; we are exposed to them at all times. Though they are largely harmless, in certain conditions, these can cause diseases. Certain infectious diseases like measles can be prevented by vaccines. Following regular hygienic practices which include proper hand-washing can prevent most infectious diseases. Symptoms of an infectious disease The symptoms of an infectious disease include – - Aching muscles Causes of an infectious disease The causes of an infectious disease can be broken down into microorganisms, direct contact, indirect contact, insect bites and food/water contamination. These microorganisms are known to give rise to infectious disease, and can be detected with appropriate diagnostic tests – - Bacteria – These single-celled microorganisms can cause illnesses that include urinary tract infections and tuberculosis. - Viruses – Viruses cause a range of diseases that can be as simple as common cold and as complicated as AIDS. - Fungi – Most skin diseases are a result of fungal infections. Some of the examples include athlete's foot, Tinea Versicolor, and ringworm. - Parasites – One of the common examples is malaria, caused by a parasite transmitted through a mosquito bite. Person to person contact in the form of touch, kiss, cough, sneeze or exchange of body fluids during sexual contact can lead to the direct transfer of microorganisms like bacteria or viruses from one person to another. Being scratched by an infected animal can also lead to the spread of an infectious disease to a person. A pregnant mother can pass germs that cause infectious disease to the unborn child. Germs on the mobile phone, door knob, toilet seat etc. are an example of indirect contact and subsequent transmission of an infectious disease. If you touch a mobile phone that is handled by someone who has a cold or flu, and then touch your eyes, mouth or nose without washing your hands, you can be susceptible to the infectious disease. Food and Water Contamination Unwashed food utensils or undercooked food can lead to contamination of food and water. For instance, the E. coli bacterium present in undercooked meat can lead to food poisoning, with symptoms like diarrhea and fever. While most infectious diseases cause mild to moderate complications, some of them can be serious like pneumonia, meningitis, and AIDS. Some infections are known to increase cancer risk, for instance, Human papillomavirus for cervical cancer and Hepatitis C or Hepatitis B for liver cancer. Certain infectious diseases have a way of being dormant, only to resurface later. For instance, someone with chicken pox can develop shingles later in life. 
When to visit a doctor Go to the doctor if any of these things happen to you – - Diarrhea for more than 3 days in case of an adult, and more than 24 hours in case of a child - A nagging headache that goes away and keeps coming back - A cough for more than a week - Unexplained fever - Rash or swelling on the skin - Bitten by an animal - Breathing problems - Vision problems A regular health check goes a long way in helping your doctor arrive at an appropriate course of treatment, based on the results from pathology services. Do not hesitate to go for diagnostic tests; these can be vastly helpful to the doctor in prescribing the right course of treatment in case you are affected by an infectious disease. Since there is a multitude of infectious diseases, an accurate diagnosis and health check goes a long way in expediting the treatment.
The rocks in this scene were formed from a past lava flow. Over time, the rocks have eroded into smaller rocks due to mechanical weathering, which has started to change this igneous rock into sedimentary rock. Would this change be considered a physical or chemical change? Justify. Location: Big Island in Hawaii. Author: Karl Spencer. In this scene, sunlight is gleaming near trees during a cold winter afternoon. An unbalanced chemical equation is also shown that represents the process of photosynthesis. How would you balance this equation? Justify by describing the process of photosynthesis as it impacts plants. Location: Bryce Canyon National Park. Author: Karl Spencer. In this scene, various types of stems are included in a collage. Use the scene numbers to select and describe the following: bulb, tuber, corm and rhizome. Justify your selections by describing the function of how each undergoes vegetative propagation. Location: Classroom in Southeast Texas. Author: Karl Spencer.
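The unbalanced equation shown in the scene is not reproduced in this text, but the prompt refers to the standard summary equation for photosynthesis, shown below in its textbook form; balancing it is a matter of matching carbon, hydrogen, and oxygen counts on both sides. This worked form is offered only as a reference answer, not as part of the original scene.

```latex
% Standard summary equation for photosynthesis (textbook form; the scene's own
% equation is not reproduced in this document).
% Unbalanced:
\mathrm{CO_2 + H_2O \longrightarrow C_6H_{12}O_6 + O_2}
% Balanced -- 6 C, 12 H, and 18 O on each side:
\mathrm{6\,CO_2 + 6\,H_2O \longrightarrow C_6H_{12}O_6 + 6\,O_2}
```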
This Vowel and Consonant Mail worksheet also includes: Clarify the difference between a consonant and a vowel with an activity inspired by mail. Learners help out a confused mail carrier by categorizing and alphabetizing words by writing them on the corresponding mailboxes. One mailbox is labeled double consonants and the other double vowels. An answer key is included.
There are more than 100 types of brain tumors. The main categories are gliomas, metastatic tumors, pituitary tumors, primitive neuroectodermal tumors (PNET), and benign tumors such as meningiomas. These types of brain tumors are usually classified according to the types of cell in which they originated and the severity of the tumor. Gliomas make up the largest category of brain tumors. Gliomas originate in the brain’s supportive tissue, which is made up of glial cells. The most prevalent type of glioma is the astrocytoma, which can be highly malignant and can affect both children and adults. Astrocytomas exert cranial pressure that can lead to headaches, balance issues, nausea and double vision. A brain stem glioma is one of the types of brain tumors that occur in the stem of the brain. Such tumors are most common in children, can affect movement in one or both sides of the body, and can cause double vision and coordination problems. Their location means most brain stem gliomas are not treated with surgery. The most common types of brain tumors are metastatic tumors, which are tumors that have invaded the brain from other affected parts of the body. These types of brain tumors are very malignant and are usually treated aggressively with chemotherapy, radiation, surgery or a combination of two or more of the three. They often return after remission and normally originate as skin, lung or breast tumors. Pituitary tumors occur on or near the pituitary gland at the base of the skull. These types of tumors can have an adverse effect on the function of the pituitary gland — largely responsible for controlling the body’s glandular system, which produces and regulates vital hormones. Pituitary tumors are more likely to be benign than many other types of brain tumors, but their proximity to the pituitary gland and the brain still makes them very dangerous. They are normally removed surgically, and recurrence is not as prevalent as with malignant tumors. Other malignant brain tumors include PNET tumors, which are very invasive and can occur anywhere in the brain. They are associated with severe headaches caused by increased cranial pressure. They often occur near the cerebellum, where they are referred to as medulloblastomas. Aggressive treatment is called for to halt PNET tumors from metastasizing and moving throughout the central nervous system. The most common benign types of brain tumors are meningiomas. Meningiomas occur in the tissue that protects the brain and is found next to the skull. Meningiomas are slow-growing, and patients are often unaware of them for many years before they are detected. They are generally treated by surgery with a good success rate, though they occasionally recur or become malignant.
* Definition of Depression Nearly everyone experiences occasional feelings of sadness; depressed feelings are a natural reaction to disappointment, loss, difficulties in life, or low self-esteem. But when periods of intense sadness last for weeks at a time and hamper your ability to function normally, you may be suffering from clinical, or major, depression. Major depression is diagnosed when you have at least five of the following nine symptoms for at least two weeks: - Depressed feelings throughout most of the day, especially in the morning; - Feelings of worthlessness or guilt; - Constant fatigue or lack of energy; - Insomnia or excessive sleeping; - Indecisiveness, inability to concentrate; - Restlessness, inability to remain still and calm; - Lack of interest in activities you once enjoyed; - Recurrent thoughts of death or suicide; - Significant weight loss or gain within a short period of time. Depression is a complex disorder which can be caused by many different agents; mental health experts believe that major depression is actually a symptom of one or more underlying health issues, rather than an isolated disease. Understanding why you are experiencing depression can help your mental health care provider direct your treatment appropriately, enabling you to enjoy renewed quality of life. * Causes of Depression The health conditions and genetic/environmental factors discussed here are all known to be associated with depression. Determining precisely why you are feeling depressed and addressing the particular issue are critical to solving your depression problem and allowing you to live to your fullest potential. As with numerous other health disorders, it is clear that heredity plays a role in depression. Not everyone who has depressive symptoms has a family history of emotional issues; nor does having depression in your family guarantee that you will experience depression. However, research has shown that individuals with mental illness in their background have a greater chance of experiencing symptoms of depression themselves. - Trauma & Stress Traumatic and stressful life events, such as loss of a loved one, abuse, chronic illness or pain, or a move to an unfamiliar location can trigger depression in certain individuals. These events result in changes in neurotransmitter levels (discussed later in more detail), leading to brain chemistry imbalances that cause depression symptoms. - Medications & Recreational Drugs There are a large number of substances which many of us use regularly that can cause depression in some people. Prescription medications, birth control pills, anti-inflammatory drugs (including steroids), antihistamines, cholesterol pills, high blood pressure medications, antidepressants and tranquilizers are all linked to depressive symptoms. Nicotine, caffeine, alcohol, and street drugs are all known to lead to depression in certain individuals, as well. - Neurotransmitter Imbalances & Abnormalities in Brain Physiology Neurotransmitters are chemical "messengers" in the brain that regulate mood, thought, and memory. When neurotransmitters are not available at sufficient levels, depression can be the result. Researchers have noted that individuals with depression often have an abnormally small hippocampus, a small structure in the brain that is closely associated with memory. A smaller hippocampus has fewer serotonin receptors; serotonin is a neurotransmitter that is vital in regulating emotions. 
- Brain Inflammation Inflammation, often present with autoimmune disorders such as diabetes, triggers the body's immune system response. Regulatory proteins called cytokines are marshaled into action to fight off possible infection; these peptides create a stress response, altering the levels of certain neurotransmitters, which results in depressive symptoms. Environmental toxins, such as heavy metals and molds, can trigger an immune reaction which sets off a cytokine response. Many foods such as those in the gluten and dairy family are known to cause symptoms of depression, ADHD, and other mood disorders. When we eat a food that we are allergic or sensitive to it causes a rise in histamine and an inflammatory response in the brain and gut. - Digestive Disorders Digestive dysfunction, including bowel disorders, yeast overgrowth, gluten and other food allergies, and impaired digestion of proteins, can also set off an immune system response which can lead to depression. - Nutritional Imbalances Many important nutrients, especially the B vitamins, minerals such as zinc and magnesium, amino acids, and the Omega 3 fatty acids, are building blocks for important neurotransmitters. Insufficient dietary intake of these nutrients can result in neurotransmitter imbalances, a significant cause of depression symptoms. - Impaired Methylation Methylation, a metabolic process which takes place in every cell in the body, is important for the manufacture of hormones, the regulation of neurotransmitters, and the synchronization of the neural networks that affect mood and cognition. When this process is impaired, it can disrupt the entire system. - Hormone Imbalances When hormones such as insulin, thyroid or adrenal hormone, and sex hormones are not available at proper levels, they can negatively affect the way we think and feel. Depression is a serious illness which can have a significant negative impact on your life. Fortunately, lab testing is available that can help you to pinpoint the exact cause of your depression, allowing your health care provider to assist you in choosing the best treatment for your depressive symptoms. With proper care, your symptoms should disappear, leaving you to enjoy life to its fullest once again.
Definition of Newton's laws of motion in English: Three fundamental laws of classical physics. The first states that a body continues in a state of rest or uniform motion in a straight line unless it is acted on by an external force. The second states that the rate of change of momentum of a moving body is proportional to the force acting to produce the change. The third states that if one body exerts a force on another, there is an equal and opposite force (or reaction) exerted by the second body on the first. - The stunt draws upon a variety of physics theories including the conservation of angular momentum and Newton's laws of motion. - Therefore, when an atom emits or absorbs a photon, its momentum changes in accordance with Newton's laws of motion. - For example, Isaac Newton's laws of motion state that a body moving through empty space with no forces acting on it will go on moving in the same way.
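For reference, the three laws are often written symbolically as below; this is the standard textbook shorthand, not part of the dictionary entry itself (the familiar F = ma form of the second law assumes constant mass).

```latex
% Standard symbolic statements of Newton's three laws (textbook shorthand).
% First law: with no net external force, velocity is constant.
\sum \mathbf{F} = \mathbf{0} \;\Rightarrow\; \mathbf{v} = \text{constant}
% Second law: force equals the rate of change of momentum (= ma for constant mass).
\mathbf{F} = \frac{d\mathbf{p}}{dt} = m\,\mathbf{a}
% Third law: forces come in equal and opposite action-reaction pairs.
\mathbf{F}_{AB} = -\,\mathbf{F}_{BA}
```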
You will determine the specialized stem, leaf, and root adaptations of dry environment plants by designing a garden using a simulator. After completing this tutorial, you will be able to complete the following: There are hundreds of thousands of different types of plants on the planet. The reason for this immense diversity is their adaptations to the particular environments in which they live. As with any organism, there are many different types of adaptations that plants have to make them well suited to a particular environment. The main plant parts (roots, stems, and leaves) are rich in adaptations. Dry environments can be particularly difficult for plants to survive in. Due to the lack of water, plants in these types of environments must be able to get whatever water is available easily and then store that water for the long periods of time when water is unavailable. For this reason, many plants in dry environments have succulent stems, fibrous roots, and spiny leaves. Succulent stems are good for plants in dry environments because they are excellent at storing water. Fibrous root systems can also be advantageous because they are able to get water quickly and easily when there is rain. In addition, fibrous roots can help hold the soil in place. This is important for plants that live in dry environments where flash flooding frequently occurs when it rains. One major problem for plants in dry environments is transpiration (evaporation of water from plants). Wide-bladed leaves have many stomata (openings for gas exchange), and thus transpiration is enhanced. This feature would be extremely detrimental to plants living in dry environments. Plants in dry areas have adapted small spiny leaves. The spiny leaves are also a deterrent for animals looking for food and moisture. All of these adaptations make the plants well suited to their particular environment, in this case a dry environment.
| Approximate Time | 20 Minutes |
| Pre-requisite Concepts | Students should be familiar with environments and plant parts. |
| Type of Tutorial | Concept Development |
| Key Vocabulary | adaptations, arid, cactus |
The Unalienable Right of Property: Its Foundation, Erosion and Restoration
by Richard A. Huenefeld

I. The Declaration of Independence Affirms Unalienable Property Rights
- A. Introduction
- B. The Laws of Nature and of Nature’s God
- C. The Meaning of the “Pursuit of Happiness”
- D. The Break from Feudal Notions of Property

The purpose of this examination of the foundation of property law in America is similar to the purpose of the Declaration of Independence. “Not to find out new principles, or new arguments, never before thought of, not merely to say things which had never been said before; but to place before mankind the common sense of the subject . . . .”1 The “common sense of the subject” expressed in the Declaration of Independence was that a national civil government must be based upon the “Laws of Nature and of Nature’s God.” The laws of nature and of nature’s God dictate that all men are equally endowed by their Creator with unalienable rights to “Life, Liberty and the pursuit of Happiness.” In Jefferson’s day, the common sense of the subject was that the pursuit of happiness included the unalienable right of the individual to acquire, possess, protect and dispose of property. Because the purpose of civil governments was to secure unalienable rights, violations of one’s unalienable right of property were subject to civil sanction. Today, however, the common sense of the subject is quite the opposite. The modern idea is that civil government properly possesses all power over all subjects of property. Any rights that may exist are derived from the civil government. Any rights to property that a person has may be regulated, limited or revoked by the civil government in order to satisfy the “public interest.” Some have advocated that there are no such things as rights, but merely social duties. There is a clear distinction between the common sense in Jefferson’s day and current thinking about property rights. This has resulted from a failure to remain faithful to the laws of nature and of nature’s God. Scholars have adopted alternative theories of property premised upon power and expediency. They have supported such theories by inaccurate interpretations of feudal history. They have failed to recognize the fact that colonial America rejected European feudalism. When such ideas infiltrated the political arena, the progressive movement was able to extend the power of American civil government and its control over private property. The rejection of the laws of nature and of nature’s God has led to expanded civil power and the violation of unalienable rights. Once the unalienable right of property is violated, life and liberty are no longer secure. The challenge facing America is to end the violation of not only property rights but also unalienable rights in general. The only way such a challenge can be won is to return to the Declaration of Independence, the laws of nature and of nature’s God, and unalienable rights. In 1776 representatives of the thirteen United Colonies gathered to declare their independence from a tyrannical English government. They fashioned an unimpeachable legal basis for their claim to liberty. The Declaration of Independence stated the fundamental principles upon which civil governments should be established. Understanding the Declaration is crucial to a proper assessment of the principles of law in the United States. If the declaration of independence is not obligatory, our intire political fabrick has lost its magna charta, and is without any solid foundation. 
But if it is the basis of our form of government, it is the true expositor of the principles and terms we have adopted.2 Grasping these principles is also essential to a correct comprehension of the foundation of property law in the United States. The whole Declaration is premised upon the “Laws of Nature and of Nature’s God.” An examination of the foundation of property rights is meaningless apart from this language. Reliance upon the laws of nature and of nature’s God was not a new position created as an expedient measure to justify independence from Great Britain. Thomas Jefferson verified this by his statement that the intent was “[n]ot to find out new principles, or new arguments, never before thought of.”3 Rather, the Framers were relying upon a centuries’ old premise of law. Sir Edward Coke addressed the subject of the law of nature as early as the seventeenth century. The law of nature is that which God at the time of creation of the nature of man infused into his heart, for his preservation and direction; and this is lex aeterna, the moral law, called also the law of nature. . . . This law of nature, which indeed is the eternal law of the Creator, infused into the heart of the creature at the time of his creation, was two thousand years before any laws written, and before any judicial or municipal laws.4 Blackstone in the eighteenth century provided an exposition of the laws of nature and of nature’s God as that phrase was historically understood. Blackstone began with the common definition of law as “a rule of action . . . which is prescribed by some superior, and which the inferior is bound to obey.”5 From this he made it clear that “[m]an, considered as a creature, must necessarily be subject to the laws of his creator, for he is entirely a dependent being.”6 Blackstone concluded that because “man depends absolutely upon his maker for every thing, it is necessary that he should in all points conform to his maker’s will.”7 This argument was crucial because the “will of his maker is called the law of nature.”8 This law of nature, being coeval with mankind and dictated by God himself, is of course superior in obligation to any other. It is binding over all the globe in all countries, and at all times: no human laws are of any validity, if contrary to this; and such of them as are valid derive all their force, and all their authority, mediately or immediately, from this original.9 Blackstone affirmed that the Creator endowed man with the faculty of reason so that he could recognize the purpose of these laws. Regrettably, man’s reason is no longer, “as in our first ancestor before his transgression, clear and perfect, unruffled by passions, unclouded by prejudice, unimpaired by disease or intemperance.”10 Rather, every person now finds that his ability to reason and understand is full of error. Man’s faulty intellect has been aided by direct revelation. ‘The doctrines thus delivered we call the revealed or divine law, and they are to be found only in the holy scriptures.”11 Blackstone explained that the revealed precepts were recognizable as a part of the original law of nature. As a result, he concluded that “[u]pon these two foundations, the law of nature and the law of revelation, depend all human laws; that is to say, no human laws should be suffered to contradict these.”12 Blackstone referred to the law of revelation as the “law of God.” Thus, we have the law of nature and the law of God. 
Blackstone used the two separate phrases, the “law of nature” and the “law of God,” because they referred to different expressions of God’s revelation. The “law of nature” referred to God’s eternal law revealed in Creation. The “law of God” referred to the law revealed in Scripture. Jefferson’s construction of the phrase “Laws of Nature and of Nature’s God” evidenced reliance upon the same symmetry used by Blackstone. Jefferson was careful to use the distributive plural “laws.” This word choice served to distinguish the law of nature and the law of God who is over nature. As well, the distributive plural served to link the two phrases as signifying the same thing.13 Jefferson obviously was not formulating some new philosophy. His phraseology carried a commonly understood meaning. “It was not Jefferson’s task to create a new system of politics or government but rather to apply accepted principles to the situation at hand.”14 Jefferson phrased the Declaration to indicate that all men were created with certain inherent rights premised upon the laws of nature and of nature’s God. The colonists had to appeal to the laws of nature and of nature’s God because the British Parliament declared the colonists to be outside the British constitution and denied the colonists the protection of the laws. British execution of law was no longer consistent with the laws of nature and of nature’s God and therefore did not warrant obedience. Once inconsistent with the laws of nature and of nature’s God, British execution of law also proved violative of human rights. Therefore, in order to protect their rights, the colonists had to appeal to unalienable rights. The rights of Englishmen were apparently contingent upon the good graces of the Crown and were too country specific. The laws of nature and of nature’s God are the foundation for unalienable rights. The word “unalienable” may be defined as that which is not alienable nor transferable.15 Therefore, that which is unalienable may not be sold or transferred to another. By this, it is apparent that unalienable rights speak of rights which no man may sell, trade or transfer. As well, if a man may not freely transfer such a right by his own choice, certainly he may not be compelled to do so by some other person or power. To better understand the text of the Declaration, we must also comprehend the nature of “rights.” A simplistic definition would be that a right is a just claim.16 However, a more complete definition indicates that a right is a just claim based upon conformity to the perfect standard of truth and justice; that perfect standard is found only in the infinite God and His will or law.17 This definition evidences the relation of rights to the laws of nature and of nature’s God. The two above definitions are consistent with a contextual examination of the Declaration of Independence. The statement that all men are “endowed by their Creator with certain unalienable Rights” affirmed rights as just claims based in God’s will. Men have certain rights which they may not be denied because the Creator fashioned man’s nature in such a way that denial of those rights denies man’s humanity. The nature of man and the nature of rights inherent within man was not left to subjective manipulation. The Declaration of Independence relied upon the laws of nature and of nature’s God as the only objective standard. Unalienable rights may be properly alienated only as a result of forfeiture for the commission of a wrongful act. 
Forfeiture occurs when subsequent to an individual’s commission of a civil wrong for which he is found liable by due process, or by commission of a criminal act for which he is procedurally determined guilty, he is deprived of life, liberty or property in accordance with justice. Such deprivations do not involve the alienation of an unalienable right because any just claim to life, liberty or property was forfeited when the wrongful act was committed.18 The Declaration of Independence was an effort by the colonists to declare their ultimate reliance upon the laws of nature and of nature’s God. Their appeal to an absolute and objective standard was an effort to secure the unalienable rights of the colonists. They recognized that they had a just claim to the unalienable right to life. The same held true for liberty. They also declared a just claim to the unalienable right to the “pursuit of Happiness.” This phrase must be examined in order to understand the foundation of property law in the United States. The Declaration of Independence affirms that people are endowed with unalienable rights, including “Life, Liberty and the pursuit of Happiness.” The language is distinguishable from the “life, liberty and property” wording usually attributed to John Locke. An examination of appropriate documents reveals a deliberate purpose for the specific wording of the Declaration. The intent was to select language which would not be considered redundant because the Lockean use of the word property included liberty. Also, the intent was to avoid standardizing eighteenth-century practices or concepts of property law, such as slavery. The intent was to select language which referred to a person’s general rights which include property, contract and other economic liberties consistent with the eternal laws of justice found in the laws of nature and of nature’s God. The Magna Carta of 1215 served to influence American constitutional liberties.19 This document was the result of protests against the use of governmental power for tyrannical purposes. It affirmed that the rule of law limits the authority of men exercising governmental power. From this premise, the Magna Carta affirmed the principle that life, liberty and property must be protected. Sir Edward Coke supported this interpretation in his treatise concerning that document.20 The First Virginia Charter of 1606 stated that rights enjoyed by Englishmen, principally those of the Magna Carta, would be enjoyed by settlers of the new colonies in North America.21 This same general guarantee is found in the Charter of New England of 1620,22 the Charter of Massachusetts Bay of 1629,23 the Charter of Maryland of 1632,24 the Charter of Maine of 1639,25 the Charter of Connecticut of 1662,26 the Charter of Rhode Island of 1663,27 and the Charter of Carolina of 1663.28 So the colonies were initiated upon the principle that the rule of law protected inherent rights of life, liberty and property. The Bill of Rights of 1689 was another major British document affirming fundamental rights and liberties of Englishmen.29 This document fostered further protections of life, liberty and property. The document obliged the British government to secure these rights of the colonists because they too were Englishmen. However, as noted previously, the rights of Englishmen were eventually violated by Parliament and the Crown. John Locke published his Two Treatises of Government within a few years of the enactment of the Bill of Rights. 
The Second Treatise is the primary source for Locke’s arguments concerning life, liberty and property. He used several variations of the phrase: “Life, Health, Liberty, or Possessions”;30 “Life, Liberty, Health, Limb or Goods”;31 “Estate, Liberty, Limbs and Life”;32 “Lives, Liberties and Estates”;33 “Lives, Liberties, and Possessions”;34 and “Lives, Liberties, or Fortunes.”35 Locke did not use the phrase “life, liberty and property” in his second treatise. To do so would have been redundant. Locke repeatedly pointed out that by using the word property, he meant “that Property which Men have in their Persons as well as Goods.”36 He used “the general Name, Property” to refer to “Lives, Liberties and Estates.”37 Using the phrase “pursuit of Happiness,” the Declaration of Independence avoided the redundancy which occurred if the Lockean use of the word property was related to liberty. Further insight into the “pursuit of Happiness” is available from Blackstone. Blackstone discussed the fundamental rights of Englishmen in light of the Magna Carta. The declaration of rights and liberties in the Magna Carta conformed to the natural liberties of all individuals.38 The natural liberties inherent within the individual were endowed by God at the person’s creation.39 Blackstone indicated that these rights were reducible to three primary articles: the right of personal security, the right of personal liberty and the right of private property.40 Blackstone indicated that the inherent right of personal security included the person’s “enjoyment of his life, his limbs, his body, his health, and his reputation.”41 This list may be summarized in the word “life.” Life is a gift from God and is therefore an inherent right.42 Blackstone addressed the subject of personal liberty as a God-given, inherent right. Blackstone argued that it consisted of the liberty to move about, at will, from place to place without fear of restraint or imprisonment without due process.43 According to Blackstone a third inherent right was the God-given gift of private property. The right “consists in the free use, enjoyment, and disposal of all [personal] acquisitions.”44 He also spoke of the “sacred and inviolable rights of private property.”45 Blackstone’s three-part expression was translated into the American legal tradition by the Declaration and Resolves of the First Continental Congress. The document affirmed the position that “the inhabitants of the English colonies in North-America, by the immutable laws of nature, the principles of the English constitution, and the several charters or compacts” were “entitled to life, liberty and property.”46 This language indicated reliance upon both the Magna Carta and the immutable laws of nature because the Magna Carta was a voidable act of man, while the laws of nature were permanent. The language indicated that the draftsmen did not rely upon the Lockean view of property as simply another way of saying life, liberty and estate. Rather, they relied upon Blackstone’s three-part division of God-given, inherent rights. The meaning of the “pursuit of Happiness” is further revealed by the Bill of Rights to the Constitution of Virginia. 
Adopted June 12, 1776, roughly one month prior to the Declaration of Independence, a key provision states: That all men are by nature equally free and independent, and have certain inherent rights, of which, when they enter into a state of society, they cannot, by any compact, deprive or divest their posterity; namely, the enjoyment of life and liberty, with the means of acquiring and possessing property, and pursuing and obtaining happiness and safety.47 This language reflected Blackstone’s three-part expression of inherent rights. Life, liberty and property were recognized as inherent rights of the individual and not originating with civil society. The property aspect was expanded to reflect just what sort of property rights were inherent. Apparently, the means of acquiring and possessing property generally were inherent rights. This does not imply that people have an inherent right to any specific item or amount of property. The language indicated that the means of pursuing and obtaining happiness were equally inherent rights. This may include such economic rights as contract and profession. The language of the Virginia Bill of Rights was similar to that of the Declaration. The Declaration, however, relied upon the “pursuit of Happiness” rather than property. This word choice served the purpose of avoiding the Lockean redundancy and of encompassing in few words, more than rights in property. To understand the context of the phrase, recourse must again be made to Blackstone’s Commentaries. Blackstone indicated that the Creator “has been pleased so to contrive the constitution and frame of humanity, that we should want no other prompter to inquire after and pursue the rule of right, but only our own self-love, that universal principle of action.”48 Blackstone clarified this by pointing out that the Creator has so intimately connected, so inseparably interwoven the laws of eternal justice with the happiness of each individual, that the latter cannot be attained but by observing the former; and, if the former be punctually obeyed, it cannot but induce the latter. In consequence of which mutual connection of justice and human felicity, he has not perplexed the law of nature with a multitude of abstracted rules and precepts, referring merely to the fitness or unfitness of things . . .; but has graciously reduced the rule of obedience to this one paternal precept, “that man should pursue his own true and substantial happiness.”49 An unalienable right to the “pursuit of Happiness” meant simply that every individual was created with the inherent right to live in accordance with the laws of eternal justice. The phrase also avoided redundancy by the use of the word “property.” It allowed recognition of more rights than that of property or the legal procedures for dealing with property. It also allowed for use of Blackstone’s own pretext test to determine whether property right “tends to man’s real happiness, and therefore justly concluding that . . . it is a part of the law of nature.”50 It may be concluded that the denial of property rights is “destructive of man’s real happiness, and therefore that the law of nature forbids it.”51 The above interpretation of the “pursuit of Happiness” phrase was adopted by individual states. State constitutions drafted after the Declaration of Independence indicated a common understanding. 
The Constitution of Pennsylvania of August 16, 1776, affirmed: That all men are born equally free and independent, and have certain natural, inherent and inalienable rights, amongst which are, the enjoying and defending life and liberty, acquiring, possessing and protecting property, and pursuing and obtaining happiness and safety.52 The Delaware Declaration of Rights of September 11, 1776, indicated “[t]hat every member of society hath a right to be protected in the enjoyment of life, liberty and property.”53 Delaware chose to use Blackstone’s brief three-part expression of inherent rights. The Constitution of Vermont of July 8, 1777, affirmed: That all men are born equally free and independent, and have certain natural, inherent and unalienable rights, amongst which are the enjoying and defending life and liberty: acquiring, possessing and protecting property, and pursuing and obtaining happiness and safety.54 The Constitution of Massachusetts of October 25, 1780, recognized: All men are born free and equal, and have certain natural, essential, and unalienable rights; among which may be reckoned the right of enjoying and defending their lives and liberties; that of acquiring, possessing, and protecting property; in fine, that of seeking and obtaining their safety and happiness.55 A Justice of the United States Supreme Court, D. J. Brewer, referred to the Constitution of Massachusetts for his argument to protect private property.56 He informed Yale graduates: Its last clauses simply define what is embraced in the phrase, – “the pursuit of happiness.” They equally affirm that sacredness of life, of liberty, and of property, are rights, – unalienable rights; anteceding human government, and its only sure foundation; given not by man to man, but granted by the Almighty to every one: something which he has by virtue of his manhood, which he may not surrender, and of which he cannot be deprived.57 The Constitution of New Hampshire of June 2, 1784, affirmed: All men have certain natural, essential, and inherent rights; among which are – the enjoying and defending life and liberty – acquiring, possessing and protecting property – and in a word, of seeking and obtaining happiness.58 As the constitutions of Massachusetts and New Hampshire indicated, seeking and obtaining happiness was used as a shorthand reference to a host of unalienable rights, including property. The Framers paralleled the inherent right of property with the unalienable right to the pursuit of happiness. The “pursuit of Happiness” phrase of the Declaration referred to the right to use just means of acquiring, possessing, and protecting property, and seeking, pursuing and obtaining happiness, but by using more abbreviated language. Judge Brewer affirmed this definition of the phrase: When among the affirmations of the Declaration of Independence, it is asserted that the pursuit of happiness is one of the unalienable rights, it is meant that the acquisition, possession, and enjoyment of property are matters which human government cannot forbid, and which it cannot destroy . . . .59 Clearly, the “pursuit of Happiness” phrase carried a very specific meaning. Part of the problem involved when addressing the unalienability of property rights is that, historically, the specific meaning has not been carefully maintained or clearly articulated. In a certain, carefully defined context, property rights are alienable. In a more general sense, property rights are unalienable. 
Obviously, property of various descriptions is bought and sold daily. When a person sells a piece of property, be it a house, a piece of land or a car, he is transferring his right to that item of property. He is alienating his property right to that item. He is alienating his right to that particular subject of property. He is not, however, alienating his right to own property. Because of the ability to alienate one’s right to a particular subject of property, some writers have concluded that one’s general right to property is necessarily an alienable right.60 The misunderstanding is due to a failure to distinguish between the right to freely transfer or alienate particular items by use of the procedural means provided in law, and the inability to transfer the general right to acquire, possess or dispose of property. The unalienable right of property refers to that general right to acquire, possess or transfer property. That right cannot be denied without denying an inherent right that is indicative of one’s humanity. The general right to acquire, possess or transfer property is an unalienable right, derivative of the laws of nature and of nature’s God, and encompassed in the phrase the “pursuit of Happiness.” Unalienable rights are general rights understood only in light of the laws of nature and of nature’s God. A person has an unalienable right to life in general. He does not have an unalienable right to a specific quality of life, or quantity of life. Likewise, a person has an unalienable right to liberty in general. He does not have an unalienable right to unlimited liberty without responsibility. Similarly, the unalienable right to the pursuit of happiness is a general statement. A person does not have an unalienable right to a particular degree of happiness, or particular kind of happiness. An unalienable right to property also must be understood in a general sense. A person does not have an unalienable right to a particular piece of property, or amount of property. The unalienable right of property refers to the general right to use means consistent with the laws of nature and of nature’s God in order to acquire, possess or transfer property. That right cannot be denied without denying an inherent aspect of a person’s humanity. The same is true of all unalienable rights. According to the Declaration of Independence, the United States is established upon principles derived from the laws of nature and of nature’s God. Therefore, the civil government of the United States is obligated to secure the unalienable rights of the individual. The unalienable right to the pursuit of happiness includes the general right to property. This foundation must be embraced in order to secure property rights. To do this, it is necessary to demonstrate that although modern theorists have assumed that American property law is premised upon ancient feudalism, Americans consciously rejected feudalistic practices. The foundation for law in the United States is the law of nature and of nature’s God. This objective standard differs from a system based upon feudalistic concepts. This standard facilitated rejection of the feudalistic principle that all property was subject to ultimate title in the civil ruler. Such feudal subordination threatened unalienable rights. Historically, feudal institutions were implemented in Europe to meet the need for an effective military force to stabilize power in the state. The system provided strict military and political cooperation. 
Individuals were commonly supported by a grant of land in return for obedient service, thereby becoming vassals dependent upon the lord.61 Feudalism originated with the military practice of the nations that migrated into the regions of Europe at the decline of the Roman Empire.62 The ultimate proprietor of property held the source of political power.63 A proper military subjection was introduced. “Military ideas predominated; military subordination was established; and the possession of land was the pay which the soldiers received for their personal services.”64 But because the chief represented the society, the ultimate property of the soil and the source of power vested in him.65 The institution of the Domesday Book formalized the subordination of feudal estates.68 By consenting to the introduction of feudal concepts, the English meant no more than to establish a defensive military system.69 The system was firmly rooted in English common law by the thirteenth century. A fundamental maxim of English tenures, though Blackstone called it a mere fiction, was “that the king is the universal lord and original proprietor of all the lands in his kingdom; and that no man doth or can possess any part of it, but what has mediately or immediately been derived as a gift from him, to be held upon feodal services.”70 Blackstone considered such a doctrine of subordination contrary to English understanding and intent. The Norman lawyers, skilled in the feudal constitutions, and understanding the import and extent of the feudal terms, introduced rigorous doctrines and services as if the English owed everything they had to the bounty of their sovereign king.71 The principle of military organization was coupled with the notion that all lands were originally granted out by the sovereign. The grantor forcefully retained the dominion or ultimate property of the land while the grantee had only the use and possession. In this manner, the feudal system came to be considered a civil establishment, rather than only a military plan. As a result of subordination to the king, oppressive consequences were instituted hindering the civilization and improvement of the people.72 Kent affirmed that the feudal system degenerated and, except in England, “annihilated the popular liberties of every nation in which it prevailed.”73 Kent indicated that “the great effort of modern times” should be “to check or subdue its claims, and recover the free enjoyment and independence of allodial estates.”74 Under the feudal system, the absolute power of the king included the only true ownership right to property of any sort. The concept of holding the right to use by grant of right from the king was a part of the English common law. At best one might argue that American property law was premised upon the common law and the inherent feudal traditions. However, America rejected the feudal concept of subordination because it threatened unalienable rights. By custom and by statute Americans sought to establish a legal tradition distinguishable from the English common law. The legal terminology of feudal law continued for convenience, but the oppressive practice of feudal subordination was rejected. Powell has noted the American independence from the common law of England. The early colonial charters included a power to legislate so long as the laws were not repugnant to the laws of England. The mere absence of repugnance allowed freedom for substantial change. 
Being free from the impositions of the laws of England, the colonists began to regulate their affairs by a generally popular sense of right derived from the Scriptures. Some aspects of the English common law were found helpful and were eventually legislated; others were rejected. The theory was one of selective incorporation of certain common law principles.75 Jesse Root also affirmed liberation from the English common law. While the colonists were knowledgeable of that common law, they were free from total subjection to it. This freedom allowed the Americans to reject many of the feudal practices inherent in the English system. The Americans fostered increased property rights, because they recognized that all rights were derived from the law of nature and of revelation. As a result, their title to land was not subject to the king.76 Current property law asserts that if feudal property law was a part of English common law, and if the colonies in America were subject to English common law, then America would also be subject to feudal property law. The arguments of Root and Powell, however, affirm the American independence from the English version of the common law and the rejection of feudal subordination. Other writers give further evidence for the American independence from feudal subordination. Washburn said that “Great Britain relinquished all claim . . . to the proprietary and territorial rights of the United States: and these rights vested in the several States . . . .”77 The states consisted of landowners acting as a corporate body to secure individual rights. New York New Jersey, South Carolina and Michigan expressly denied the existence of feudal subordination. In 1793 Connecticut “declared every proprietor in fee-simple of land to have an absolute and direct dominion and propriety in it.”78 In 1779 Virginia statutorily abolished feudal subordination practices. Pennsylvania, Maryland and Wisconsin declared their land allodial. Joseph Story indicated that [i]n all the colonies the lands within their limits were by the very terms of their original grants and charters to be holden of the crown in free and common socage . . . . All the slavish and military part of the ancient feudal tenures was thus effectually prevented from taking root in the American soil; and the colonists escaped from the oppressive burdens [of subordination] . . . . In short, for most purposes, [American] lands may be deemed to be perfectly allodial, or held of no superior at all . . . .79 Kent affirmed that the states were never marked by subordination. In all the states, the “ownership of land [was] essentially free and independent.”80 The New York legislature abolished any notion of the existence of subordination and declared all lands within the state to be allodial. The entire and absolute property vested in the owner. The title to land was essentially allodial, and every tenant in fee simple had an absolute and perfect title, even though the technical language called his estate fee simple and the tenure free and common socage. This technical language was “interwoven with the municipal jurisprudence” of the states, while all vestiges of feudal subordination were rejected.81 Writing in 1896, Earl Hopkins gave a brief history of feudalism in Europe. He pointed out that “[t]he feudal system never took root in the United States.”82 An estate would have been “by free and common socage, and not subject to the burdensome incidents of subordination of tenure . . . . 
Lands [were] allodial; that is, held in absolute ownership . . . .”83 Minor and Wurtz indicated that [t]he only feudal tenure ever recognized in this country was that of free and common socage, . . . the tenure upon which all the grants of colonial land by the crown were based . . . . [A]t the time these grants were made, the socage tenure had already been stripped of all its burdensome incidents, so that they never existed here.84 After the War for Independence, even the “subservience to sovereignty evidenced by the socage tenure” was abolished.85 In essence, feudal subordination was abolished entirely. An American’s authority over property must be free from any obligation to superior title in the state. Hopkins did point out that the documentary title evidencing ownership of the land was originally derived from the state. This was merely a legal convention evidencing title so that the state could help secure the person’s property right. Until the patent was issued, the legal title remained in the United States as trustee. The equitable title was in the holder of the certificate of entry which was issued by the register of the land office and entitled the claimant to the patent.86 Washburn also addressed the patent process. [W]hile in equity a purchaser acquires a good title to land which he may have entered and actually paid for, and for which he holds the certificate from the proper officer, in order to prevail in a court of law he must have had a title by patent . . . . [A]fter the purchase from the United States, the purchaser acquires all the property which the United States had in the land; that the equitable and legal title passes from the United States, which only retains the formal technical legal title in trust for the purchaser until the patent issues . . . . “Lands which have been sold by the United States can, in no sense, be called the property of the United States . . . . When sold, the government, until the patent shall issue, holds the mere legal title for the land in trust for the purchaser . . . .” . . . “The patent [was] conclusive evidence . . . of the relinquishment to the patentee of all interest the United States held, as trustee, in the land.”87 Feudalism never existed in the United States. Some of the feudal language was used because it was familiar, but the burdens of feudalism were rejected. References to socage tenure denoted land held by a fixed service, which is not military, nor in the power of the lord to vary. Socage tenures do not exist any longer . . . . An estate in fee-simple means an estate of inheritance, and . . . it has lost entirely its original meaning as a beneficiary . . . estate . . . . Whether a person holds his land in pure allodium, or has an absolute estate of inheritance in fee-simple, . . . his title is the same . . . .88 Escheat was another of those incidents which has lost its feudal character. [E]scheat of lands was regarded as merely falling back into the common ownership of the State . . . because the tenant did not see fit to dispose of them in his lifetime, and left no one who . . . has any claim to inherit them . . . . Land being allodial in the United States, escheat properly speaking did not apply to it, but in case of failure of heirs . . . .89 The principle was that if land escheats to the state, it was held in trust for the citizens until a bona fide purchaser seeks a patent. 
Washburn indicates that some aspects of our law of real estate, including the forms of conveyance, as well as the terms in use in applying them, were borrowed originally from the feudal system . . . . [However, ] the adoption of expressions or forms of process borrowed from a once existing system of laws, does not necessarily imply that that system has not become obsolete.90 States frequently passed statutes to emphasize the fact that the feudal practice of subordinating all property right to the civil authority was rejected. Thus, the colonists made a conscious and absolute break from the systems which they knew to be a threat to the unalienable right of property. While it is true that they used the technical legal language of the feudal and common law tradition, this was done for convenience. To have fostered feudal subordination would have undermined the effort to affirm the unalienable rights of the individual as dictated by the laws of nature and of nature’s God. The continued use of feudal terminology in deeds and titles would be used eventually to argue that America was established upon feudal principles and to deny any just claim to unalienable property rights. The Declaration of Independence affirmed unalienable property rights. The history of the phrase “Laws of Nature and of Nature’s God,” the foundation of the Declaration of Independence, evidences the validity of unalienable rights and the necessity to secure them. The history of the phrase “pursuit of Happiness” evidences that it included unalienable property rights. American legal writers recognized that the laws of nature and of nature’s God and the pursuit of happiness supported the rejection of the European feudalistic practice of subordination. The next section evidences that the challenge to secure unalienable property rights has been brought about by a failure to adhere to the Declaration of Independence in favor of European theories of property. * Copyright © 1989, 2006 Richard A. Huenefeld. Used with permission. 1. Letter from Thomas Jefferson to Henry Lee (May 8, 1825), reprinted in 16 The Writings of Thomas Jefferson 117, 118 (A. Lipscomb ed. 1905) [hereinafter Writings]. 2. J. Taylor, New Views of the Constitution of the United States 2 (Washington 1823 & photo. reprint 1971). 3. Writings, supra note 1. 4. Calvin’s Case, 77 Eng. Rep. 377, 392 (1609). 5. 1 W. Blackstone, Commentaries *38. 6. Id. at 39. 9. Id. at 41. 11. Id. at 42. 13. Gary T. Amos, Biblical Principles of Government 270 (1987) (unpublished manuscript). 14. Sources of Our Liberties 318 (R. Perry ed. 1978) [hereinafter Sources]. 15. Black’s Law Dictionary 1366 (5th ed. 1979). 16. 2 S. Johnson, A Dictionary of the English Language, s.v. “right” (London 1755). 17. 2 N. Webster, An American Dictionary of the English Language, s.v. “right” (New York 1828 & photo. reprint 1980) (reprinted in one volume). 18. Christians for Justice International, A Declaration of Universal Rights 2 (1988). 19. The Magna Carta (1215), reprinted in Sources, supra note 14, at 11. 20. See generally E. Coke, The Second Part of the Institutes of the Laws of England (London 1797 & photo. reprint 1986). 21. 7 The Federal and State Constitutions, Colonial Charters, and Other Organic Laws 3788 (F. Thorpe ed. 1909 & photo. reprint 1977) [hereinafter Thorpe]. 22. 3 id. at 1839. 23. Id. at 1857. 24. Id. at 1681. 25. Id. at 1635. 26. 1 id. at 533. 27. 6 id. at 3220. 28. 5 id. at 2747. 29. Bill of Rights (1689), reprinted in Sources, supra note 14, at 245. 30. J. 
Locke, Two Treatises of Government 311 (P. Laslett rev. ed. 1963). 32. Id. at 356. 33. Id. at 395. 34. Id. at 429. 35. Id. at 460. 36. Id. at 430. 37. Id. at 395. 38. 1 W. Blackstone, Commentaries *127-29. 39. Id. at 125. 40. Id. at 129. 43. Id. at 134. 44. Id. at 138. 45. Id. at 140. 46. Declaration and Resolves of the First Continental Congress (1774), reprinted in Sources, supra note 14, at 287. 47. 7 Thorpe, supra note 21, at 3813. 48. 1 W. Blackstone, Commentaries *40. 49. Id. at 40-41. 50. Id. at 41. 52. 5 Thorpe, supra note 21, at 3082. 53. Delaware Declaration of Rights (1776), reprinted in Sources, supra note 14, at 338. 54. 6 Thorpe, supra note 21, at 3739. 55. 3 id. at 1889. 56. D. Brewer, Protection to Private Property from Public Attack, An address delivered before the graduating class at the sixty-seventh anniversary of Yale Law School on June 23, 1891 (The Microbook Library of American Civilization 40071). 57. Id. at 4. 58. 4 Thorpe, supra note 21, at 2453-54. 59. D. Brewer, supra note 56, at 5. 60. Story, Natural Law, in 9 Encyclopedia Americana 150,151 (F. Lieber new ed. 1836), reprinted in 7 J. Christian Jurisprudence 31 (1988). Story wrote many articles on private law for Lieber’s Encyclopedia. These articles, though written on the behest of Lieber, are unsigned, as Story requested. See 1 F. Lieber, Civil Liberty and Self-Government 232, at nn. 3, 14 (1883). See also Letter from Joseph Story to Edward Everett (Nov. 1, 1832) (Story Papers, Massachusetts Historical Society). 61. S. Painter, Feudalism and Liberty 3-7 (1961). 62. 2 W. Blackstone, Commentaries *45. 63. Watkins, Introduction to G. Gilbert, The Law of Tenures vi (London 5th ed. 1824). 64. Id. at viii (quoting Robertson, I Hist. of Scotl. 16 c. 1). 65. Id. at ix. 66. 3 J. Kent, Commentaries *492. 67. C. Moynihan, Introduction to the Law of Real Property 3-4 (2d ed. 1988). 68. 2 W. Blackstone, Commentaries *49. In 1086 William the Conqueror instituted a comprehensive and detailed survey of English lands. This resulted in a statistical record of the feudal tenures embodied in two volumes commonly known as the Domesday Book C. Moynihan, supra note 67, at 6-7. 69. Id. at 51. 72. Id. at 53-58. 73. 3 J. Kent, Commentaries *501. 74. Id. The term “allodial” simply meant land held in absolute ownership, not in dependence upon any other body or person in whom the proprietary rights were supposed to reside, or to whom the possessor of the land was bound to render service. 75. 1 R. Powell, The Law of Real Property 97-106 (1988). 76. J. Root, The Origin of Government and Laws in Connecticut, 1798, reprinted in The Legal Mind in America 31-40 (P. Miller ed. 1962). Perry Miller also indicates that James Kent and David Hoffman defended an American legal tradition divested of the peculiarities of the English common law and premised instead upon the laws of nature. Id. at 93-94. See also D. Hoffman, A Course of Legal Study (Philadelphia 2d ed 1846 & photo. reprint 1968). 77. 1 E. Washburn, A Treatise on the American Law of Real Property 68-69 (5th ed. 1887). 78. Id. at 69. 79. 1 J. Story, Commentaries on the Constitution of the United States 125 (5th ed. 1905). 80. 3 J. Kent, Commentaries *488. 82. E. Hopkins, Handbook on the Law of Real Property 31 (1896). 84. R. Minor & J. Wurtz, The Law of Real Property 12 (1910). 86. E. Hopkins, supra note 82, at 403. 87. 3 E. Washburn, supra note 77, at 208-10. 88. 3 J. Kent, Commentaries *514. 89. 3 E. Washburn, supra note 77, at 52-53. 90. 1 id. at 71.
Land reservoirs helped offset sea level rise, study says

Recent increases in the storage of excess groundwater may be helping to offset sea level rise by as much as 15%, a new study finds. While the capacity of land to store water is known to be an important factor affecting sea level rise, the magnitude of its storage contribution is not fully understood. Land masses store water in numerous ways, and human-induced changes (including groundwater extraction, irrigation, impoundment in reservoirs, wetland drainage, and deforestation) are affecting this process, as are climate-driven changes in rainfall, evaporation, and runoff. To gain more insight into how land storage capacity may have changed over recent years, John Reager and colleagues analyzed satellite data from 2002 to 2014 that measure changes in gravity, and thus underlying changes in water storage. They combined this satellite data with estimates of mass loss of glaciers to determine what impact land water storage might have had on sea level change. Their analysis suggests that during this timeframe, climate variability resulted in an increase of approximately 3,200 gigatons of water being stored in land. This gain partially offset water losses from ice sheets, glaciers, and groundwater pumping, slowing the rate of sea level rise by 0.71 ± 0.20 millimeters per year, the authors say. While a small portion of the increase in land water storage can be directly attributed to human activities (primarily the filling of reservoirs), the authors note that climate is the key driver. The greatest changes in land water storage were associated with regional climate-driven variations in precipitation.
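As a rough plausibility check on these figures, the sketch below converts the reported mass of stored water into an equivalent change in global mean sea level. The ocean surface area and the 12-year averaging period are round-number assumptions used only for illustration; they are not taken from the study's methods.

```python
# Rough back-of-the-envelope check of the reported numbers.
# All inputs below are approximations assumed for illustration.

OCEAN_AREA_M2 = 3.61e14      # approximate global ocean surface area (m^2)
WATER_DENSITY = 1000.0       # kg per cubic metre
GT_TO_KG = 1e12              # 1 gigaton = 10^12 kg

def gigatons_to_mm_sea_level(gigatons: float) -> float:
    """Convert a mass of water (Gt) to the equivalent change in global mean sea level (mm)."""
    volume_m3 = gigatons * GT_TO_KG / WATER_DENSITY
    height_m = volume_m3 / OCEAN_AREA_M2
    return height_m * 1000.0  # metres -> millimetres

stored_gt = 3200.0           # extra water stored on land, 2002-2014 (from the article)
years = 12.0                 # approximate length of the study period

rate_mm_per_year = gigatons_to_mm_sea_level(stored_gt) / years
print(f"Equivalent sea level offset: {rate_mm_per_year:.2f} mm/yr")
# Prints roughly 0.74 mm/yr, consistent with the reported 0.71 +/- 0.20 mm/yr.
```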
Context

Using the radioactive carbon-14 isotope as a tracer, Calvin, Andrew Benson and their team mapped the complete route that carbon travels through a plant during photosynthesis. They traced the carbon-14 from its absorption as atmospheric carbon dioxide to its conversion into carbohydrates and other organic compounds. The single-celled alga Chlorella was used to trace the carbon-14.

Steps

3. Leave: One set of three-carbon molecules leaves the cycle and becomes sugar. The other set moves on to the next step.
4. Switch: Using ATP and NADPH, the three-carbon molecules are changed into a five-carbon molecule.
5. The cycle starts over again.

The carbohydrate products of the Calvin cycle are three-carbon sugar phosphate molecules, or 'triose phosphates' (G3P). Each step of the cycle has its own enzyme which speeds up the reaction.

References

- "The Nobel Prize in Chemistry 1961: Melvin Calvin". nobelprize.org. http://nobelprize.org/nobel_prizes/chemistry/laureates/1961/. Retrieved January 14, 2011.
- Bassham J, Benson A, Calvin M (1950). "The path of carbon in photosynthesis". J Biol Chem 185 (2): 781–7. http://www.jbc.org/cgi/reprint/185/2/781.pdf.
- Calvin, M (1956). "The photosynthetic cycle". Bull. Soc. Chim. Biol. 38 (11): 1233–44.
- Barker, S A; Bassham, J A; Calvin, M; Quarck, U C (1956). "Intermediates in the photosynthetic cycle". Biochim. Biophys. Acta 21 (2): 376–7.
- Melvin Calvin (December 11, 1961). "The path of carbon in photosynthesis" (PDF). p. 4. http://nobelprize.org/nobel_prizes/chemistry/laureates/1961/calvin-lecture.pdf. Retrieved July 11, 2011.
- Sadava, David; H. Craig Heller, David M. Hillis, May Berenbaum (2009). Life: The Science of Biology. Macmillan Publishers. pp. 199–202.
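The carbon bookkeeping behind the "Leave" step can be made concrete with a small sketch. The counts below follow the standard textbook picture of the cycle (one CO2 fixed per turn, three carbons in each G3P), which is assumed here rather than stated explicitly in the article:

```python
# Carbon bookkeeping for the Calvin cycle (a sketch under textbook assumptions:
# one CO2 fixed per turn of the cycle, three carbons carried by each G3P).

CO2_PER_TURN = 1
CARBONS_PER_G3P = 3

def exportable_g3p(turns: int) -> int:
    """How many G3P molecules can leave the cycle as sugar after `turns` turns."""
    carbons_fixed = turns * CO2_PER_TURN
    # Only carbon that has been newly fixed can leave; the rest of the G3P pool
    # must stay behind to regenerate the five-carbon acceptor molecule.
    return carbons_fixed // CARBONS_PER_G3P

print(exportable_g3p(3))  # 1 -> one three-carbon sugar can leave after three turns
print(exportable_g3p(6))  # 2 -> six fixed carbons, enough for one six-carbon sugar
```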
Although there is little that we can be certain of regarding the life of this man, his immense influence on the Hellenic people by means of his epic poems is the essence of what ought to be considered. It is generally thought that he was a blind poet who flourished in the 9th or 8th century BCE. Like others of his profession, he composed poems orally and travelled to recite them for pay, especially at festivals or in the houses of nobles. Sometime after his death, his epic poems, the famous Iliad and Odyssey, were written and preserved for posterity. They concerned a remarkable time in Greek history, when, 400 years before, during the Bronze Age, a great war broke out between several Greek states and the city of Troy, which afterwards spread further; the first poem relates the events of the war, and the second the return of the king Odysseus home, after the siege of Troy. After they were written, the poems rose to such fame and admiration over many centuries that they actually inspired and educated the Greek peoples more than any author before, and hence were almost considered as a sacred authority to learn from. The value of Homer is derived not only from the glorious style and imagination of his poetry, but also the genuine Greek spirit, character and life that he embellishes and preserves in it, from which many lessons can be taken. He tells a great story of a war in which both Gods and men took part, some acting cunningly and others heroically to gain more power and achieve some sort of influence. We see causes and consequences of actions, we see the descriptions of actions, we see the performers of the actions, and then we become aware of many important truths in life that were pertinent to ancient Greeks, such as honor, fame, comfort, glory, revenge, power, wealth, influence, home, security, purpose, etc. Homer paints life and mythology together in suitable and significant colors, in such a way that would move his Greek listeners and readers to enjoyment, reflection, and learning about themselves, their lives and their time. His songs are interwoven with his own ethnic culture; he celebrates his ethnic people by his epic poems, as he does his ethnic Gods by his hymns. Because he performed in both so well, he is justifiably remembered and his works are gladly preserved. The fame that Homer’s poems achieved inspired many imitations, some of which were honest and sincere, but others were deceitful and selfish. The practices of Orpheus are of the latter kind. Some believe him to have never existed (for example, Aristotle), as a few do also of Homer, but it is possible to gather pieces from history to construct the story of his life, as well as understand his motives, from what is known of the inventions he left behind. Since he is not mentioned in Homer or Hesiod, we know that he lived after their time, and here a suspicion arises as to how he appears in several places in Greek mythology. One story puts him as the first man who was taught the lyre by Apollo or even as the son of Apollo, another as a companion of the Argonauts, and a third as someone who descends into the underworld to recover his wife (a nymph) after death by playing music to Hades and Persephone that softened their hearts. This all looks suspicious when an important point is considered: Orpheus is known as the founder of mystery religions modelled on the very ancient Eleusinian mysteries: one is known as the Orphic mysteries and the other as the Dionysian mysteries.
Since these mystery religions can be dated, because they appear at a certain point in history (6-5th century BCE), the character of Orpheus is certainly real and historical, but it was his followers who inserted him into mythology at a position so unjustly near the Gods themselves. The truth is, Orpheus belonged to a profession, just like Homer; he was a sort of prophet or magician who travelled to purify, teach or bless places or people for pay. Of this number, several are known to have existed in the 6th century or before, such as Musaeus and Epimenides. He differs from them, however, because he established his own religion, which corrupted and challenged the cosmogony and principles that the Greeks had accepted till that time. In the Orphic mysteries, the universe begins with Eros rather than Chaos, and Zeus fathers all the Gods after him, sometimes by raping, and thus Zeus is both a single God and many Gods at the same time. The Derveni Papyrus, a text of Orphic theology, presents these as allegories, but this must be an excuse to avoid impiety, because these corruptions are extremely bold and contradictory of the Hellenic religion. Another falsehood in this invented religion is that Zeus granted Dionysus (his son in the story) the succession to his throne as king of the universe, and then Dionysus is murdered by Hera out of jealousy that he was the son of Persephone and not hers, and from his torn flesh, sinful mankind is born and forced to suffer in cycles of rebirth. These tales are blasphemies of the grossest kind. In the mystery religion of Dionysus, the invention is carried on further: Dionysus is reborn by the ritual of eating bread and drinking wine (representing his flesh and blood) and he returns to govern the world after Zeus in a new cycle of history. Perhaps it can be seen already how much these corruptions resemble Christianity, which, few know, was actually influenced more by these two mystery religions than by Judaism. Eros was reinterpreted as Love, Dionysus as Jesus, Zeus as the Father, and what is stranger than all, the bread and the wine, as well as original sin, are exactly the same. To some degree, one could say that Orpheus himself was reinterpreted as Paul of Tarsus: Just as Paul travelled preaching and was executed in Rome for causing disturbances with his new religion, so was Orpheus, after his own travels, torn to pieces by the priestesses of Dionysus in one account, or according to another, struck by a thunderbolt of Zeus, as a punishment for his impiety.
In 1985 the American Ornithologists’ Union split the species formerly known as Arctic Loon (Gavia arctica) into two separate species, with the Old World form retaining the former name and the form breeding in North America renamed Pacific Loon (Gavia pacifica). The Arctic Loon as newly redefined breeds across northern Eurasia, with a small population in western Alaska, and winters coastally as far south as China and the Mediterranean. While Pacific Loon is a common winter visitor in the Pacific Northwest, the Arctic Loon was not known to occur in the region prior to the split. The two species are very similar; nonetheless birders soon learned how to tell them apart, and Arctic Loons began appearing in Northwest waters, albeit in tiny numbers. The most prominent field mark is the large, arching, white panel on the rear flanks, which is actually the white feathering of the thigh. This feather tract is dark in Pacific Loon, with the result that this species in normal resting posture does not have the flank patch. However, all species of loons may occasionally show extensive white on the flanks (for example, when birds roll over in the water to preen, exposing the white underparts), so this feature needs to be carefully assessed. Other corroborating characteristics (compared to Pacific Loon) are the Arctic’s somewhat larger size, thicker neck, larger bill with a slightly uptilted appearance, and more squared-off, flat-topped head. The first Washington record was a bird on the Columbia River behind Priest Rapids Dam (Grant and Yakima counties), January–April 2000. Winter 2000–2001 brought several sightings on inland marine waters (Kitsap, Snohomish, and Clallam counties), representing at least two and possibly four or more birds. Oregon has two accepted records from the coast, and there is at least one published record from British Columbia. Revised November 2007
I can explain the role of a narrator and how he or she differs from the author.

In this lesson, students learn how to determine who the narrator is in a story. They will be able to identify when the narrator is also a character in the story and when he is not. They will also discuss the role of the author. Students will use this knowledge to be able to write their own narrative pieces in which they establish their narrator and characters. Students will be able to identify the narrator, author, and characters, and determine if the narrator is also a character in a story. As a class, read a short excerpt from “A Dog’s Tale” and ask students if they can tell who is telling the story. How do they know? Have them underline the clues in the text that show that the narrator is a dog. Explain the role of the narrator, the author, and the characters. The author writes the story, the narrator tells the story, and the characters act it out. Next, students will read the passages and identify who the narrator is. After this, tell students that the narrator is sometimes also a character in the story, participating in the action and interacting with the other characters. Use the spinner to land on an action, then have students orally share a quick story in which the narrator does that activity with another character. Then read an excerpt from “The Box-Car Children” by Gertrude Warner and identify the narrator’s role in the story. Students are given 10 questions to review key concepts taught during the lesson. Students will write 2 story introductions. In one of them, the narrator will not be a character in the story. In the other, the narrator should also be a character.
Momentum depends on mass and velocity

Teaching Guidance for 14-16

Both mass and velocity

Wrong Track: This ball has the larger mass, so it'll have the larger momentum.

Right Lines: Momentum, a quantity that's a good measure of the motion of a body, depends on mass and on velocity. To work out which has the larger momentum you have to pay attention to both the mass and the velocity: p = m × v (where p and v are both vectors).

A focus on compensation

Thinking about the learning

Students often fix on only one factor, when they should consider both factors that contribute to a physical quantity. They select either mass or velocity, and they fail to combine the two quantities into the compound quantity that is momentum. So their comparisons are often in error.

Thinking about the teaching

The key relationship is p = m × v. That this kind of pattern (A = B × C) turns up so often, and so often causes trouble, is one reason for focusing on the issue of compensation throughout the SPT materials. Because it's an often-used pattern, and because students are known to have difficulty with it (through selecting just one of the factors), we'd suggest foregrounding it as a pattern, so that students give it due prominence.
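A short numerical illustration of the compensation idea follows; the masses and speeds below are invented for the example, not taken from the guidance.

```python
# Compare the momentum of two balls. The point is that p = m * v depends on
# both factors, so the heavier ball does not automatically win.

def momentum(mass_kg: float, speed_m_per_s: float) -> float:
    """Magnitude of momentum for straight-line motion: p = m * v."""
    return mass_kg * speed_m_per_s

heavy_slow_ball = momentum(mass_kg=2.0, speed_m_per_s=1.5)   # 3.0 kg m/s
light_fast_ball = momentum(mass_kg=0.5, speed_m_per_s=8.0)   # 4.0 kg m/s

# The heavier ball does NOT have the larger momentum here: its low speed
# more than compensates for its extra mass.
print(heavy_slow_ball, light_fast_ball)
```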
Reflect on the key ideas Developing shared activities and ways to cooperate Today it is widely understood that literacy instruction is a responsibility that should be shared by teachers in all disciplines. However, it is inevitable that language specialists have a more detailed insight into language issues and better access to linguistic resources. Therefore, language experts have a special role to play. Watch a short video of an interview with Professor Bernd Rüschoff from the University of Duisburg-Essen, Germany. The questions asked in the interview are: - How would you diversify language education to encourage the use and awareness of minority languages and cultures? - What do you think might be the role of the "mother tongue" teacher in supporting plurilingualism? - How do you think cooperation between teachers of different subjects could be enhanced in schools? - How would you comment on Professor Rüschoff's answers? - What does he mean by sensitivy towards languages and awareness of languages? How could that be achieved? Consider the school culture in your country and discuss the following issues: - What elements encourage teachers to collaborate and which ones prevent them from doing so? - What is needed to develop the pedagogical culture so that more collaborative practices are taken up? If possible, discuss these questions with students of other subjects to see if you share similar viewpoints. Academic language is a challenge to all learners Traditionally, it has been taken for granted that students simply learn to use academic language successfully, but that assumption has proved to be questionable. Academic language is also challenging for native speakers of the language of schooling. Therefore, to talk about language and the support we plan and arrange for second language learners is highly beneficial for native speakers too. The key benefit of collaboration across school subjects is that language focus benefits all learners. Contribution from all teachers is needed: subject teachers are experts in their fields and they are masters of the language used within their disciplines. However, language experts are needed to provide an outline of the overall picture: - How does the language use differ in different subjects? - What makes academic language challenging? - What kind of identity work is involved in developing academic skills in different subjects? - Could language knowledge taught in the language of schooling classroom be related to academic literacies and enable students to compare, analyse and identify different characteristics of language use? - How can language skills be transferred from one subject to another? Two ECML projects focus on academic language skills and the integration of content and language learning. Read more about them: - Language descriptors for migrant and minority learners' success in compulsory education - Literacies through Content and Language Integrated Learning: effective learning across subjects and languages 5 principles for teaching content to language learners Effective academic language instruction has been defined and researched by many scholars. Dr. Jim Cummins is one of the leading authorities in the field. He has identified three key pillars of effective academic language instruction for English language learners: activate prior knowledge, access content and extend language. Based on these pillars, Pearson has created five principles for teaching content to language learners. Learn more about them here. 
Explore the principles and discuss them in groups - What do the principles mean with regard to the language of schooling classroom when teaching various content included in the curriculum? - What do the principles mean with regard to collaboration across subjects? How could the framework be used as a basis for structuring cooperation between teachers and cross-disciplinary projects? - How would the students in our example (see learner profiles) benefit from these principles? If possible, develop, put into use and report on an exercise involving teachers/ student teachers of another subject.
Published by: Digital Schools

As a parent, it is natural to want to support your child's education and help them succeed in school. In this article, we will explore eight ways that parents can positively impact their children's learning.

1. Encourage reading at home: Reading is an essential skill that helps children with their learning and overall development. Parents can encourage their children to read at home and provide them with age-appropriate books.

2. Set up a dedicated study space: Having a designated study area can help children focus and concentrate on their home learning. A comfortable and quiet space will support them immensely.

3. Establish a regular routine: Consistency and structure can be helpful for children's learning. Parents can establish a routine that includes dedicated times for homework and studying.

4. Communicate with teachers: Staying in regular communication with teachers can give parents valuable insights and information about their children's progress.

5. Get involved in school activities: Parents can show their support for their children's education by volunteering in the classroom or attending school events.

6. Encourage extracurricular activities: Participation in extracurricular activities can help children develop important skills and interests.

7. Provide positive reinforcement: Parents can support their children's learning by providing positive reinforcement for their efforts and achievements.

8. Encourage independence: Helping children develop independence and problem-solving skills can be beneficial for their learning.

By following these tips, parents can play an active role in supporting their children's education and helping them succeed in school. Investing in their children's education and helping them develop their skills and interests can set them on the path to a bright and successful future.
Protection of Precious Migratory Birds

Every year migratory birds depart from their natural breeding grounds in America and Western Europe at the end of the summer season to travel thousands of miles over various mountain ranges and oceans, in different climatic conditions, to their winter habitat in the tropics. For such birds, this expedition is both long and challenging, as they have to stop in strange and unusual surroundings to search for food and water and to rest before continuing their journey to the tropical lands. It is imperative for human beings to appreciate and understand the complexity of how these birds use their natural environment in order to ensure their protection and survival.

As the appropriate habitat is critical to help these creatures survive, thrive and reproduce, any scheme that bird conservationists formulate and implement for protecting migratory birds needs to focus on the right habitat management. The quality of the resources available at such places, and their protection, is important in understanding and analyzing the migration timetable of these birds and the paths they take to reach their destination. Conservationists also need to take into account a number of factors to maintain an adequate population of these species. These include the abundance and types of plants that these birds use for nesting in the places they migrate to, the general distribution of such birds in the area, and their predators in addition to their parasites. The other factors that can go a long way in preventing the extinction of these migratory birds include:

- Keep cats indoors
Most people are aware that cats feed on these birds, and hanging bells around their necks is an ineffective way to warn migratory birds of the presence of such predators. This is the reason why it is imperative for cat owners to keep their cats indoors, to ensure that these feathered creatures can live three to seven times longer.

- Eliminate pesticides
In order to conserve migratory birds, it is essential for people to eliminate pesticides of all kinds, as these chemical substances have the potential to pollute the waterways that these birds rely on. In addition to this, such chemicals also kill the insects that are a part of the staple diet of such birds.

- Reduce carbon footprints
Reducing carbon footprints is essential for protecting migratory birds from becoming extinct. Simple acts that people can adopt, like opting to use hand-held or electric lawn mowers, carpooling and choosing low-energy electrical appliances, can be a catalyst in protecting the natural habitat of these birds.

- Purchase organic food and drink shade-grown coffee
By opting to buy and consume homegrown organic foods that do not contain any of the harmful chemical substances normally found in pesticides, consumers of such food products can indirectly help in conserving the natural habitat of migratory birds and ensure their survival.

Protecting migratory birds is not only the appropriate thing to do to ensure their survival for future generations; it also helps to enhance the economy of the places these birds travel to and the environment human beings live in.
Researchers at the University of Michigan in Ann Arbor have created a new kind of semiconductor that is layered atop a mirror-like structure that can mimic the way leaves move energy from the sun over relatively long distances before using it to fuel chemical reactions, an approach that could one day improve the efficiency of solar cells. “Energy transport is one of the crucial steps for solar energy harvesting and conversion in solar cells,” says Bin Liu, a postdoctoral researcher in electrical and computer engineering and first author of the study in the journal Optica. “We created a structure that can support hybrid light-matter mixture states, enabling efficient and exceptionally long-range energy transport.” One of the ways that solar cells lose energy is in leakage currents generated in the absence of light. This occurs in the part of the solar cell that takes the negatively charged electrons and the positively charged “holes,” generated by the absorption of light, and separates them at a junction between different semiconductors to create an electrical current. In a conventional solar cell, the junction area is as large as the area that collects light, so that the electrons and holes don’t have to go far to reach it. But the drawback is the energy loss from those leakage currents. Nature minimizes these losses during photosynthesis with large light-gathering “antenna complexes” in chloroplasts and the much smaller “reaction centers” where the electrons and holes are separated for use in sugar production. However, these electron-hole pairs, known as excitons, are very difficult to transport over long distances in semiconductors. Liu explained that photosynthetic complexes can manage it thanks to their highly ordered structures, but human-made materials are typically too imperfect. The new device gets around this problem by not converting photons fully to excitons — instead, they maintain their light-like qualities. The photon-electron-hole mixture is known as a polariton. In polariton form, its light-like properties allow the energy to quickly cross relatively large distances of 0.1 millimeters, which is even further than the distances that excitons travel inside leaves. The team created in the Lurie Nanofabrication Facility the polaritons by layering the thin, light-absorbing semiconductor atop a photonic structure that resembles a mirror, and then illuminating it. That part of the device acts like the antenna complex in chloroplasts, gathering light energy over a large area. With the help of the mirror-like structure, the semiconductor funneled the polaritons to a detector, which converted them to electric current. “The advantage of this arrangement is that it has the potential to greatly enhance the power generation efficiency of conventional solar cells where the light gathering and charge separating regions coexist over the same area,” says Stephen Forrest, the Peter A. Franken Distinguished University Professor of Engineering, who led the research. While the team knows that the transport of energy is happening in their system, they aren’t totally sure that the energy is continuously moving in the form of a polariton. It could be that the photon sort of surfs over a series of excitons on the way to the detector. They leave this fundamental detail to future work, as well as the question of how to build efficient light-gathering devices that harness the photosynthesis-like energy transfer. 
The study was funded by the Army Research Office and Universal Display Corporation, and Universal Display Corporation has licensed the technology and filed a patent application. Forrest and the University of Michigan have a financial interest in Universal Display Corp.
The Stratigraphy of the Great Unconformity

The Great Unconformity refers to a significant stratigraphic discontinuity found in the geologic record. It represents an interruption in the deposition of sedimentary rock. This feature is of great importance for the stratigraphic interpretation of sedimentary basins, and it can be found in many geological locations around the globe. Numerous studies have been done on the Great Unconformity, which is thought to represent millions of years of lost geologic time. James Hall, an American geologist, first described the Great Unconformity in the late 19th century, after noticing an unconformity in the Appalachian Mountains' Cambrian-Precambrian strata. Hall initially used the term 'Great Unconformity' to describe the unconformity in the Appalachians (Hall, 2019). The Great Unconformity is now recognized as the largest unconformity in the geologic record and is used to separate the Paleozoic from the Precambrian (Geology.com, 2020).
Dr. Ross, an I.B.M. researcher, is growing a crop of mushroom-shaped silicon nanowires that may one day become a basic building block for a new kind of electronics. Nanowires are just one example, although one of the most promising, of a transformation now taking place in the materials sciences as researchers push to create a next generation of switching devices that are smaller, faster and more powerful than today's transistors. The reason that many computer scientists are pursuing this goal is that the shrinking of the transistor has approached fundamental physical limits. Increasingly, transistor manufacturers grapple with subatomic effects, like the tendency for electrons to "leak" across material boundaries. The leaking electrons make it more difficult to know when a transistor is in an on or off state, the information that makes electronic computing possible. They have also led to excess heat, the bane of the fastest computer chips. The transistor is not just another element of the electronic world. It is the invention that made the computer revolution possible. In essence it is an on-off switch controlled by the flow of electricity. For the purposes of computing, when the switch is on it represents a one. When it is off it represents a zero. These zeros and ones are the most basic language of computers. From The New York Times
Cation Exchange Capacity Is An Important Part of Plant Nutrition

Fertilizing the soil is similar in many ways to feeding ourselves. All life needs carbohydrates, minerals, and other nutrients in a healthy balance for optimal health. And just like eating too much fast food and not enough vegetables can lead to a variety of health problems in our own bodies, over-fertilizing or feeding an imbalanced diet can harm plants as well.

While plants need carbohydrates, proteins and fats just as we do, plants are quite different in how they ingest their food. When a fertilizer is applied, your plants will not use it in its whole form the way we would eat it. Plants absorb almost all their nutrients through specialized cells on their roots, and sometimes also through their leaves. These cells can only absorb nutrients that are in the form of ions dissolved in the soil's water. When you apply fish meal, for example, the meal must be broken down and dissolved into the water in the soil into an almost elemental state of NO3-, K+, H2PO4-, H+ and other ions that the plants can absorb. Fast-acting fertilizers such as Liquid All Purpose will reach this ionic state quickest, whereas slow-release fertilizers such as Nutri-Rich will break down and release these ions slowly over time. The plants will then use these ions to build proteins, starches and fats, as well as enzymes, hormones, and other compounds they need to grow and thrive. Good soil has the ability to hold these nutritive ions with minimal leaching, giving the plants more time to absorb them and thus requiring less fertilizer to be used.

What is Cation Exchange Capacity?

The measurement for this ability is called Cation Exchange Capacity, or CEC. The CEC measures the soil's negative charge; a greater negative charge will hold more positively charged ions (these are called cations). The CEC is measured on a scale of 0 (low) to 50 (high), with values under 10 indicating poor cation-holding ability. CEC is higher in clay soil, because clay particles are negatively charged. Sand, on the other hand, has little to no charge and thus has a low CEC. CEC is also higher in soils with good levels of organic matter. Beneficial microorganisms such as mycorrhizal fungi can help to improve the CEC, as well as working to break down fertilizers into their ionic components.

Soil with a higher CEC is buffered against changes in soil nutrients; while this is a good thing if your soil has a good level of nutrients and a good pH, it also means that it will be more difficult to change your soil nutrient levels if there is some aspect that needs to be corrected. For example, if you have overly acidic soil, the amount of free H+ ions is too high. If the CEC is high, the soil will strongly hold onto those H+ ions and be resistant to efforts to counter them. More lime will be needed in such soils, compared to soils with a low CEC that readily release the H+ ions and change the pH. Knowing your soil's CEC level can help you make better fertilization decisions, and will give you a deeper understanding of your soil.
How do you spell 1598? Answer: one thousand five hundred ninety eight

Fun fact: There is only one number that is spelled with the same number of letters as the number itself. Can you figure out which one it is? It's 4! Four has four letters in it, and it means the number 4.

Spelling numbers is challenging and there aren't many good tricks to learning it other than to practice and memorize patterns and letters. We are hopeful that our online tool will help you out with any number spellings that you need. For an introduction to this topic, please check out our page: writing numbers as words.
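For readers who want to see how such a spelling tool could work in principle, here is a short Python sketch (an illustration only, not the implementation behind the site's tool) that spells out whole numbers up to 9,999 in the same style as the answer above.

```python
# Illustrative sketch: spell out a whole number (0-9999) in English words.
# This is NOT the site's actual tool, just a minimal example of the idea.

ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven", "eight",
        "nine", "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
        "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def spell(n):
    """Return the English spelling of an integer n, where 0 <= n <= 9999."""
    if n < 20:
        return ONES[n]
    if n < 100:
        tens, ones = divmod(n, 10)
        return TENS[tens] + ("" if ones == 0 else " " + ONES[ones])
    if n < 1000:
        hundreds, rest = divmod(n, 100)
        return ONES[hundreds] + " hundred" + ("" if rest == 0 else " " + spell(rest))
    thousands, rest = divmod(n, 1000)
    return ONES[thousands] + " thousand" + ("" if rest == 0 else " " + spell(rest))

print(spell(1598))  # one thousand five hundred ninety eight
print(spell(4))     # four -- the only number spelled with as many letters as its value
```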
Exemplars publishes hands-on, standards-based assessment and instruction material that focuses on authentic learning in the areas of Math, Science and Writing. Exemplars material is based on sound scientific research that underscores the following about student achievement: - Students who do demanding work in school perform better than students who are given less demanding work. - Student achievement is strongly related to effective assessment practices in the classroom, including student self- and peer-assessment. - The style of classroom instruction influences student performance. Demanding Work Drives Performance 1. Students who do demanding work in school perform better than students who are given less demanding work. Perhaps the most important study in this area was designed to answer the following question: "What happens to students' scores in standardized tests of basic skills when urban teachers in disadvantaged schools assign work that demands complex thinking and elaborated communication about issues important to student lives?" Bryk, Anthony S., Jenny K. Nagoaka, and Fred M. Newmann.Authentic Intellectual Work and Standardized Tests: Conflict of Coexistence?. Consortium on Chicago School Research, January 2001. Research done by the Chicago School Research Project, supported by the Annenberg Foundation, reports the results of a three-year study of more than 400 classrooms from 19 different Chicago elementary schools. The intellectual demands of more than 2,000 classroom assignments given to 5,000 third, sixth, and eighth-grade students in writing and math were analyzed for their level of difficulty and linked to the learning gains on standardized tests in reading, writing, and mathematics. Three standards were used to determine the level of intellectual challenge for each assignment. They were the extent that the assignment: (1) requires the construction of knowledge through disciplined inquiry including the use of prior knowledge and in-depth understanding; (2) requires elaborated communication; and (3) has value beyond success at school. After controlling for, "...differences in racial composition, gender, and socioeconomic status." the authors conclude: ... the evidence indicates that assignments calling for more authentic intellectual work actually improve student scores on conventional tests. The results suggest that if teachers, administrators, policy makers and the public at large place more emphasis on authentic intellectual work in classrooms, yearly gains on standardized tests in Chicago could surpass national norms. Bryk, Anthony S., Jenny K. Nagoaka, and Fred M. Newmann. Authentic Intellectual Work and Standardized Tests: Conflict of Coexistence?. Consortium on Chicago School Research, January 2001. On the other hand, in classrooms where assignments were less demanding, students gained 22 percent less in mathematics than the national average. Furthermore, "it is the intellectual demands embedded in classroom tasks, not the mere occurrence of a particular teaching strategy or technique, that influence the degree of student engagement and learning." Bryk, Anthony S., Jenny K. Nagoaka, and Fred M. Newmann. Authentic Intellectual Work and Standardized Tests: Conflict of Coexistence?. Consortium on Chicago School Research, January 2001. Effective Assessment Practices Lead to Achievement 2. Student achievement is strongly related to effective assessment practices in the classroom, including student self- and peer-assessment. 
A study conducted by Paul Black and Dylan Wiliam on the effect of classroom assessment practices on student achievement examined 250 articles and chapters on the subject. The study concluded that effective classroom assessment has a major impact on student achievement.

For public policy towards schools, the case to be made here is firstly that significant learning gains lie within our grasp. The research reported here shows conclusively that formative assessment does improve learning. The gains in achievement appear to be quite considerable, and as noted earlier, amongst the largest ever reported for educational interventions. As an illustration of just how big these gains are, an effect size of 0.7, if it could be achieved on a nationwide scale, would be equivalent to raising the mathematics attainment score of an 'average' country like England, New Zealand or the United States into the 'top five' after the Pacific rim countries of Singapore, Korea, Japan and Hong Kong. Black, Paul and Dylan Wiliam, Assessment and Classroom Learning. Assessment in Education, March 1998.

A second study by Black and Wiliam summarizes the findings of over 40 articles that share the following characteristic: quantitative evidence of increased learning was collected for both an experimental group and a control group. All of these studies demonstrate that innovations which include strengthening the practice of formative assessment produce significant and often substantial learning gains. Black, Paul and Dylan Wiliam, Inside the Black Box: Raising Standards Through Classroom Assessment. Phi Delta Kappan, October 1998.

Instruction Style Matters

3. The style of classroom instruction influences student performance.

In 2001 the RAND Corporation published "Hands-on Science and Student Achievement," a study written by Allen Ruby that examines the relationship between hands-on science and student achievement on both standardized and performance-based tests. The study used two sources of data: a RAND survey of 1,400 eighth graders and their teachers, and the National Educational Longitudinal Survey of 1988 (NELS:88), a national survey of approximately 25,000 students in eighth, 10th and 12th grades and their teachers. Students in the RAND survey took both standardized and performance tests. NELS:88 students took only multiple-choice science tests. In both studies, teachers and students reported on the amount of hands-on science they engaged in during science classes. In both studies, the data show hands-on science is positively related to test scores on both types of tests. The RAND survey showed a strong relationship between doing hands-on science and achievement on both performance tests and multiple-choice tests. NELS:88 results indicate that students in classrooms with hands-on science showed higher levels of achievement. The evidence for the relationship between hands-on science and multiple-choice tests is particularly strong because it is supported by two different surveys using different multiple-choice tests.
Two species of the fungus-like (Oomycete) organism Phytophthora (P. cactorum and P. plurivora) have long been known to cause cankers (bark infections) in horse chestnut, though cases were relatively uncommon and confined mainly to southern England. But in recent years there has been a dramatic upsurge in cases of bleeding canker, in many parts of the UK, from which Phytophthora could not be detected. Work in the UK and the Netherlands has established that the bacterium Pseudomonas syringae pv. aesculi is the cause of these new cases. The bleeding fluid is produced by the tree in response to the infection, which kills the inner bark, cambium and outer layers of wood, causing disruption to water and nutrient transport. If the canker girdles the stem, the stem dies. Research on the bacterium is still in progress. It may require wounds to infect (which may include naturally occurring lenticels, or pores, in the bark) or might exist on plant surfaces and be spread by wind-blown rain. Phytophthora spreads in a similar way and also forms resting spores which can remain for long periods in the soil.
Passive: A letter is being written by Rita.

Past Progressive
Active: Rita was writing a letter.
Passive: A letter was being written by Rita.

Past Perfect
Active: Rita had written a letter.
Passive: A letter had been written by Rita.

Future II
Active: Rita will have written a letter.
Passive: A letter will have been written by Rita.

Conditional I
Active: Rita would write a letter.
Passive: A letter would be written by Rita.

Conditional II
Active: Rita would have written a letter.
Passive: A letter would have been written by Rita.

Passive Sentences with Two Objects
Rewriting an active sentence with two objects in passive voice means that one of the two objects becomes the subject; the other one remains an object. Which object to transform into a subject depends on what you want to put the focus on.

Subject | Verb | Object 1 | Object 2
Active: Rita wrote a letter to me.
Passive: A letter was written to me by Rita.
Passive: I was written a letter by Rita.

Examples
He believes you. (active)
You are believed by him. (passive)
They did the test. (active)
The test was done by them. (passive)
My mother is making a cake. (active)
A cake is being made by my mother. (passive)
She will hold a party. (active)
A party will be held by her. (passive)
The team has won the match. (active)
The match has been won by the team. (passive)
He would cancel the meeting. (active)
The meeting would be cancelled by him. (passive)
They made this plane in Germany. (active)
This plane was made by them in Germany. (passive)
The students sang a song. (active)
A song was sung by the students. (passive)
Gina read the book yesterday. (active)
The book was read by Gina yesterday. (passive)
In math class, students do a lot of simple calculations. However, some students can finish their math problems quicker than others. Why does this happen? It is because the slower students' calculation speed is low. Being able to solve questions faster has two advantages: it saves time and it improves accuracy. When you are writing an exam, you can solve questions faster if you have good calculation skills. You will not only have enough time to solve the questions, but also have time to check your answers. Because you will have more time to check, your answers will be more accurate.

You need to remember the following pairs of numbers: (1,9), (2,8), (3,7), (4,6) and (5,5). These are the pairs that add up to 10. If you see one of these pairs being added together, you immediately know that the sum is 10. However, other numbers will also be added together, and in that case you need to be able to split the calculation.

For example, 4 + 7. We know that the pair for 4 is 6, so try to turn the addition into 4 + 6. Since 7 = 6 + 1, you can easily break 7 into two parts. Then, since we know 4 + 6 = 10, the answer is 10 + 1 = 11. When you are adding a single-digit number to 10, you can just put that single digit in place of the 0. This way the calculation can be much faster.

8 + 7 = 8 + 2 + 5 = 10 + 5 = 15
3 + 9 = 2 + 1 + 9 = 2 + 10 = 12
7 + 6 = 7 + 3 + 3 = 10 + 3 = 13

When you are subtracting, you also need to know the 5 pairs of numbers. First, express the answer in the form 10 – x.

12 – 5 = 10 + 2 – 2 – 3 = 10 – 3
We get 10 – 3. According to the pairs above, (3,7), 10 – 3 = 7.

Another example would be 22 – 15:
22 – 15 = 20 + 2 – 2 – 13 = 20 – 13 = 10 – 3 = 7

Calculation gets less important as students go to high school. However, being able to calculate faster is definitely an asset, as one can solve more questions in a limited time. The best way to enhance the speed of calculation is to solve a lot of calculation problems. Although you might not notice it at first, your calculation will get faster and more accurate!

This article was written for you by Edmond, one of the tutors with Test Prep Academy.
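To show the same "make a 10" strategy as a step-by-step procedure, here is a small Python sketch (an illustration added here, not part of the original lesson) that prints the same decomposition used in the worked examples above.

```python
# Minimal sketch of the "make a 10" addition strategy for single-digit numbers.
# It prints the same decomposition shown in the worked examples in the text.

def make_ten_addition(a, b):
    """Add a + b (both between 1 and 9, with a + b >= 10) by completing a ten."""
    need = 10 - a          # a's partner from the pairs (1,9), (2,8), (3,7), (4,6), (5,5)
    leftover = b - need    # what remains of b after a has been topped up to 10
    print(f"{a} + {b} = {a} + {need} + {leftover} = 10 + {leftover} = {10 + leftover}")
    return 10 + leftover

make_ten_addition(4, 7)   # 4 + 7 = 4 + 6 + 1 = 10 + 1 = 11
make_ten_addition(8, 7)   # 8 + 7 = 8 + 2 + 5 = 10 + 5 = 15
make_ten_addition(7, 6)   # 7 + 6 = 7 + 3 + 3 = 10 + 3 = 13
```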
In September 1939 Germany and the Soviet Union invaded and divided Poland. Approximately two million Polish citizens were deported by the Soviets to labor camps or imprisoned. After Germany attacked the Soviet Union on June 22, 1941, with the subsequent Sikorski-Mayski Agreement of July 30, 1941, and the Polish-Soviet Military Agreement of August 14, 1941, the Soviets released thousands of Poles to fight with the Allies. Under the command of General Wladyslaw Anders, the Poles left the Soviet Union and made their way to the Middle East. Once there, the Poles formed the Polish II Corps and fought under British command. A brown bear first became part of Polish WWII history in 1942. When the Poles reached Persia (Iran), they met a young boy who sold them a orphaned bear cub. The bear became a mascot for the Polish II Corps. The Polish soldiers named him Wojtek (Voytek in English). As the bear grew he became more than a mascot and fit very well into army life. He learned how to smoke, enjoy a beer, wrestle and relax with his fellow soldiers, eat army food, go on guard duty, salute, nod his head when addressed, and liked riding in trucks. Wojtek and his fellow soldiers developed a camaraderie that would last a lifetime. Wojtek moved with the soldiers from Persia, to Palestine, to Iraq, and then to Egypt. When the Poles were preparing to sail from Egypt to Italy, a problem arose. The ship would only transport soldiers and supplies. It is said by some that General Anders officially “enlisted” Wojtek into the Polish Army at that time. Corporal Wojtek was listed as a soldier and left for Italy. In Italy the Poles fought with other Allied countries in the famous Battle of Monte Cassino. In the fourth battle to capture the Benedictine monastery, the Poles reached the top of the mountain and raised the Polish flag on May 18, 1944. Among the Polish units at Monte Cassino was the 22nd Transport Company. It was their responsibility to transport and distribute munitions, food, and fuel to the heavy artillery regiments. During the battle, one of the soldiers carrying munition boxes was Corporal Wojtek. Wojtek carrying a shell became the emblem of the company. After WWII ended, the Polish II Corps sailed from Italy to Scotland and was demobilized. WWII had ended, but Poland was not an independent, free country again. Many Poles felt they were left homeless and chose not to return to Poland after the war. But what would become of Corporal Wojtek? It was decided to send Wojtek to the Edinburgh Zoo in Scotland. He had a new home, but like the Poles he was not free. There are stories of Poles who visited Wojtek at the zoo, threw him cigarettes which he ate, and proclaimed he still understood Polish. A touching story is told of a man who brought a violin to the zoo and played a Polish mazurka for Wojtek. It is said Wojtek “danced” with the music. Wojtek had the look of a bear but, indeed, had the heart of a Pole. Wojtek was a popular resident at the Edinburgh Zoo but never again had his freedom or the camaraderie of his Polish friends. Wojtek died at the zoo on December 2, 1963. He was about 21 years old. In a newspaper Letters to the Editor section after Wojtek died, a Londoner, Michael George Olizar wrote, “He left his bones, like many other Polish veterans, on British soil.” Wojtek, the soldier bear, is still remembered and celebrated today. His story has been told in books, a BBC documentary, and there are statues and plaques dedicated in his memory around the world.
An herbicide is used to kill unwanted plants. Selective herbicides kill specific targets while leaving the desired crop relatively unharmed. Some of these act by interfering with the growth of the weed and are often based on plant hormones. Herbicides used to clear waste ground are nonselective and kill all plant material with which they come into contact; such herbicides are applied in total vegetation control (TVC) programs for maintenance of highways and railroads. Some plants produce natural herbicides, such as the genus Juglans (walnuts). Smaller quantities are used in forestry, pasture systems, and management of areas set aside as wildlife habitat. Herbicides are widely used in agriculture and in landscape turf management. In the U.S., they account for about 70% of all agricultural pesticide use.

Prior to the widespread use of chemical herbicides, cultural controls, such as altering soil pH, salinity, or fertility levels, were used to control weeds. Mechanical control (including tillage) was also (and still is) used to control weeds.

The first widely used herbicide was 2,4-dichlorophenoxyacetic acid, often abbreviated 2,4-D. It was first commercialized by the Sherwin-Williams Paint Company and saw use in the late 1940s. It is easy and inexpensive to manufacture, and kills many broadleaf plants while leaving grasses largely unaffected (although high doses of 2,4-D at crucial growth periods can harm grass crops such as maize or cereals). The low cost of 2,4-D has led to continued usage today, and it remains one of the most commonly used herbicides in the world. Like other acid herbicides, current formulations utilize either an amine salt (usually trimethylamine) or one of many esters of the parent compound. These are easier to handle than the acid. 2,4-D exhibits relatively good selectivity, meaning, in this case, that it controls a wide number of broadleaf weeds while causing little to no injury to grass crops at normal use rates. A herbicide is termed selective if it affects only certain types of plants, and nonselective if it inhibits a very broad range of plant types. Other herbicides have been more recently developed that achieve higher levels of selectivity than 2,4-D.

The 1950s saw the introduction of the triazine family of herbicides, which includes atrazine; the triazines currently have the distinction of being the herbicide family of greatest concern regarding groundwater contamination. Atrazine does not break down readily (within a few weeks) after being applied to soils of above neutral pH. Under alkaline soil conditions atrazine may be carried into the soil profile as far as the water table by soil water following rainfall, causing the aforementioned contamination. Atrazine is said to have carryover, a generally undesirable property for herbicides.

Glyphosate, frequently sold under the brand name Roundup, was introduced in 1974 for non-selective weed control. It is now a major herbicide in selective weed control in growing crop plants due to the development of crop plants that are resistant to it. The pairing of the herbicide with the resistant seed contributed to the consolidation of the seed and chemistry industry in the late 1990s. Many modern chemical herbicides for agriculture are specifically formulated to decompose within a short period after application.
This is desirable as it allows crops which may be affected by the herbicide to be grown on the land in future seasons. However, herbicides with low residual activity (i.e. they decompose quickly) often do not provide season-long weed control.

Certain herbicides affect metabolic pathways and systems unique to plants and not found in animals, making many modern herbicides among the safest crop protection products, having essentially no effect on mammals, birds, amphibians or reptiles. Some herbicides can cause a variety of health effects ranging from skin rashes to death. The pathway of attack can arise from intentional or unintentional direct consumption of the herbicide, improper application resulting in the herbicide coming into direct contact with people or wildlife, inhalation of aerial sprays, or food consumption prior to the labeled pre-harvest interval. Under extreme conditions herbicides can also be transported via surface runoff to contaminate distant water sources. Most herbicides decompose rapidly in soils via soil microbial decomposition, hydrolysis or photolysis, while some herbicides are more persistent, with longer soil half-lives. Other alleged health effects can include chest pain, headaches, nausea and fatigue. All organic and non-organic herbicides must be extensively tested prior to approval for commercial sale and labeling by the Environmental Protection Agency. However, because of the large number of herbicides in use, there is significant concern regarding health effects. Some of the herbicides in use are known to be mutagenic, carcinogenic or teratogenic. However, some herbicides may also have a therapeutic use. Current research aims to use herbicides as an anti-malarial drug that targets the plant-like apicoplast plastid in the malaria-causing parasite Plasmodium falciparum.

Classification of herbicides

Herbicides can be grouped by activity, use, chemical family, mode of action, or type of vegetation controlled. Contact herbicides destroy only the plant tissue in contact with the chemical. Generally, these are the fastest-acting herbicides. They are less effective on perennial plants, which are able to regrow from rhizomes, roots or tubers. Systemic herbicides are translocated through the plant, either from foliar application down to the roots, or from soil application up to the leaves. They are capable of controlling perennial plants and may be slower acting but ultimately more effective than contact herbicides. Soil-applied herbicides are applied to the soil and are taken up by the roots of the target plant. Pre-plant incorporated herbicides are soil applied prior to planting and mechanically incorporated into the soil. Preemergent herbicides are applied to the soil before the crop emerges and prevent germination or early growth of weed seeds. Post-emergent herbicides are applied after the crop has emerged.

Their classification by mechanism of action (MOA) indicates the first enzyme, protein, or biochemical step affected in the plant following application. The main mechanisms of action are:

ACCase inhibitors are compounds that kill grasses. Acetyl coenzyme A carboxylase (ACCase) is part of the first step of lipid synthesis. Thus, ACCase inhibitors affect cell membrane production in the meristems of the grass plant. The ACCases of grasses are sensitive to these herbicides, whereas the ACCases of dicot plants are not.
ALS inhibitors: the acetolactate synthase (ALS) enzyme (also known as acetohydroxyacid synthase, or AHAS) catalyzes the first step in the synthesis of the branched-chain amino acids (valine, leucine, and isoleucine). These herbicides slowly starve affected plants of these amino acids, which eventually leads to inhibition of DNA synthesis. They affect grasses and dicots alike. The ALS inhibitor family includes sulfonylureas (SUs), imidazolinones (IMIs), triazolopyrimidines (TPs), pyrimidinyl oxybenzoates (POBs), and sulfonylamino carbonyl triazolinones (SCTs). ALS is a biological pathway that exists only in plants and not in animals, thus making the ALS inhibitors among the safest herbicides.

EPSPS inhibitors: the enolpyruvylshikimate 3-phosphate synthase enzyme (EPSPS) is used in the synthesis of the amino acids tryptophan, phenylalanine and tyrosine. They affect grasses and dicots alike. Glyphosate (Roundup) is a systemic EPSPS inhibitor but is inactivated by soil contact.

Synthetic auxins inaugurated the era of organic herbicides. They were discovered in the 1940s after a long study of the plant growth regulator auxin. Synthetic auxins mimic this plant hormone. They have several points of action on the cell membrane, and are effective in the control of dicot plants. 2,4-D is a synthetic auxin herbicide.

Photosystem II inhibitors reduce electron flow from water to NADPH2+ at the photochemical step in photosynthesis. They bind to the Qb site on the D1 protein, and prevent quinone from binding to this site. Therefore, this group of compounds causes electrons to accumulate on chlorophyll molecules. As a consequence, oxidation reactions in excess of those normally tolerated by the cell occur, and the plant dies. The triazine herbicides (including atrazine) and urea derivatives (diuron) are photosystem II inhibitors.

Almost all herbicides in use today are considered "organic" herbicides in that they contain carbon as a primary molecular component. A notable exception would be the arsenical class of herbicides. Sometimes they are referred to as synthetic organic herbicides. Recently the term "organic" has come to imply products used in organic farming. Under this definition an organic herbicide is one that can be used in a farming enterprise that has been classified as organic. Organic herbicides are expensive and may not be affordable for commercial production. They are much less effective than synthetic herbicides and are generally used along with cultural and mechanical weed control practices.

Organic herbicides include:
- Spices, which are now effectively used in patented herbicides.
- Vinegar, which is effective as a 5-20% solution of acetic acid, with higher concentrations most effective; however, it mainly destroys surface growth, so respraying to treat regrowth is needed. Resistant plants generally succumb when weakened by respraying.
- Steam, which has been applied commercially but is now considered uneconomic and inadequate. It kills surface growth but not underground growth, so respraying to treat regrowth of perennials is needed.
- Flame, which is considered more effective than steam but suffers from the same difficulties.

Most herbicides are applied as water-based sprays using ground equipment. Ground equipment varies in design, but large areas can be sprayed using self-propelled sprayers equipped with a long boom, of 60 to 80 feet (20 to 25 m), with flat fan nozzles spaced about every 20 in (500 mm). Towed, handheld, and even horse-drawn sprayers are also used.
Synthetic organic herbicides can generally be applied aerially using helicopters or airplanes, and can be applied through irrigation systems (chemigation).

Control is the destruction of unwanted weeds, or the damage of them to the point where they are no longer competitive with the crop. Suppression is incomplete control still providing some economic benefit, such as reduced competition with the crop. Crop safety, for selective herbicides, is the relative absence of damage or stress to the crop. Most selective herbicides cause some visible stress to crop plants.

Major herbicides in use today
- 2,4-D, a broadleaf herbicide in the phenoxy group used in turf and in no-till field crop production. Now mainly used in a blend with other herbicides that allow lower rates of herbicides to be used, it is the most widely used herbicide in the world and the third most commonly used in the United States. It is an example of a synthetic auxin (plant hormone).
- Atrazine, a triazine herbicide used in corn and sorghum for control of broadleaf weeds and grasses. Still used because of its low cost and because it works extraordinarily well on a broad spectrum of weeds common in the U.S. corn belt, atrazine is commonly used with other herbicides to reduce the overall rate of atrazine and to lower the potential for groundwater contamination. It is a photosystem II inhibitor.
- Clopyralid, a broadleaf herbicide in the pyridine group, used mainly in turf, rangeland, and for control of noxious thistles. It is notorious for its ability to persist in compost. It is another example of a synthetic auxin.
- Dicamba, a persistent broadleaf herbicide active in the soil, used on turf and field corn. It is another example of a synthetic auxin.
- Glufosinate ammonium, a broad-spectrum contact herbicide used to control weeds after the crop emerges or for total vegetation control on land not used for cultivation.
- Glyphosate, a systemic nonselective herbicide (it kills any type of plant) used in no-till burndown and for weed control in crops that are genetically modified to resist its effects. It is an example of an EPSPS inhibitor.
- Imazapyr, a non-selective herbicide used for the control of a broad range of weeds including terrestrial annual and perennial grasses and broadleaved herbs, woody species, and riparian and emergent aquatic species.
- Imazapic, a selective herbicide for both the pre- and post-emergent control of some annual and perennial grasses and some broadleaf weeds. Imazapic kills plants by inhibiting the production of branched-chain amino acids (valine, leucine, and isoleucine), which are necessary for protein synthesis and cell growth.
- Linuron, a non-selective herbicide used in the control of grasses and broadleaf weeds. It works by inhibiting photosynthesis.
- Metolachlor, a pre-emergent herbicide widely used for control of annual grasses in corn and sorghum; it has largely replaced atrazine for these uses.
- Paraquat, a nonselective contact herbicide used for no-till burndown and in aerial destruction of marijuana and coca plantings. It is more acutely toxic to people than any other herbicide in widespread commercial use.
- Picloram, a pyridine herbicide mainly used to control unwanted trees in pastures and edges of fields. It is another synthetic auxin.

2,4,5-Trichlorophenoxyacetic acid (2,4,5-T) was a widely used broadleaf herbicide until being phased out starting in the late 1970s.
While 2,4,5-T itself is of only moderate toxicity, the manufacturing process for 2,4,5-T contaminates this chemical with trace amounts of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD). TCDD is extremely toxic to humans. With proper temperature control during production of 2,4,5-T, TCDD levels can be held to about 0.005 ppm. Before the TCDD risk was well understood, early production facilities lacked proper temperature controls. Individual batches tested later were found to have as much as 60 ppm of TCDD.

2,4,5-T was withdrawn from use in the USA in 1983, at a time of heightened public sensitivity about chemical hazards in the environment. Public concern about dioxins was high, and production and use of other (non-herbicide) chemicals potentially containing TCDD contamination were also withdrawn. These included pentachlorophenol (a wood preservative) and PCBs (mainly used as stabilizing agents in transformer oil). Some feel that the 2,4,5-T withdrawal was not based on sound science. 2,4,5-T has since largely been replaced by dicamba and triclopyr.

Agent Orange was a herbicide blend used by the U.S. military in Vietnam between January 1965 and April 1970 as a defoliant. It was a 50/50 mixture of the n-butyl esters of 2,4,5-T and 2,4-D. Because of TCDD contamination in the 2,4,5-T component, it has been blamed for serious illnesses in many veterans and Vietnamese people who were exposed to it. However, research on populations exposed to its dioxin contaminant has been inconsistent and inconclusive. Agent Orange often had much higher levels of TCDD than 2,4,5-T used in the US. The name Agent Orange is derived from the orange color-coded stripe used by the Army on barrels containing the product. It is worth noting that there were other blends of synthetic auxins at the time of the Vietnam War whose containers were recognized by their colors, such as Agent Purple and Agent Pink.
Try these friendship-boosting activities in the physically-distanced classroom or with a friend over video chat to help children connect with their peers and rekindle old friendships (or even strike up new ones!). As the global community continues to cope with the COVID 19 pandemic, some families are preparing for their children to return to the classroom, while others have long since returned yet are continuing to physically distance, and will conclude the school year this way. Across these diverse circumstances, it is likely we all feel some level of curiosity (or, more realistically, concern) about how the “new normal” has played out on our well-being and relationships. This is especially important for our children, whose friendships are likely to have been significantly affected by school closures and social distancing protocols. The research is clear: childhood friendships are integral to children’s well-being, necessary to cope with challenges during life transitions, and may have lifelong impacts on their mental and physical health. So how can parents and educators help children reconnect to their peers and feel confident in their friendships as we carry on into the summer ahead? While there are no one-size-fits-all solutions, incorporating activities that build friendships skills (such as emotional awareness), and reduce barriers to successful friendships (such as anxiety) can pave the way for children to engage with their peers in positive and meaningful ways. Try these 7 friendship-boosting activities to help children Get Along with Others and build better friendships inside or outside the classroom: 1. Break the ice with these hilarious, kid-friendly “would you rather?” conversation starters or a game of desert island. Ice breakers can help children feel more at ease, provide an opportunity for them to share their thoughts, and create an environment where listening and sharing are valued. 2. Build emotional understanding with feelings charades. Children with better emotional understanding - which includes identifying facial expressions - may be more likely to experience closer and more mutual frienships, as well as greater popularity, higher quality play, and less negative behaviours. 3. Make a friendship soup to highlight the qualities that help friendships thrive. Talking about positive friendship qualities can help build children's social skills, which are linked to positive peer relationships. 4. Read and discuss books about friendship for primary and intermediate readers. Reading fiction, in particular, can help children to learn empathy and be able to put themselves in another's shoes. 5. Use a calming activity, such as dot rock painting, to decrease anxiety. 6. Create opportunities for play and connection, such as through active social distancing games like mirror-mirror, detective, or zip zap zoom. Make sure to include time to debrief and unwind afterwards. Children need time and space for play in order to build, maintain, and enjoy their friendships! 7. For those connecting virtually, make the most of online with a video-chat scavenger hunt. Research shows that online interactions can enhance the closeness of friendships in older children and youth. Don’t forget about the power of the teacher-student relationship to create connection and belonging! 
Anxiety can get in the way of friendships by causing children to interpret

In her research on focus groups with children, Gibson (2007) reports that ice-breakers are a useful way to help children relax in a group, spark group cohesiveness, and potentially enhance the quality of discussion to follow.

According to Pons, Harris, & Rosnay (2004) there are nine dimensions of emotional understanding in children, which develop over time:
- the recognition of emotions, based on facial expressions
- the comprehension of external emotional causes
- impact of desire on emotions
- emotions based on beliefs
- memory influence on emotions
- possibility of emotional regulation
- possibility of hiding an emotional state
- having mixed emotions
- contribution of morality to emotional experiences

A longitudinal study conducted by Sakyi et al. (2015), which followed over one thousand French children and adolescents for 18 years, found that those with at least one good childhood friend were only half as likely as their friendless peers to experience internalizing symptoms in adulthood, such as anxiety and depression.

Hartup & Stevens (1999) discuss the developmental significance of friendships, such as in easing transitions between schools, in their review of the literature "Friendships and Adaptation Across the Life Span".

A longitudinal study conducted by Kundiff & Matthews (2018), which followed 267 American boys into adulthood, found that those who spent more time with friends as children tended to have lower blood pressure and lower BMI as men in their early 30s.

In their cross-sectional study of 8- to 14-year-old children with and without an anxiety disorder, Crawford & Manassis (2011) also found that the more anxious a child is, the more likely he or she will be bullied.

In their "Pedagogy of Friendship" model, developed from phenomenological inquiry with 5- and 6-year-old children, Carter & Nutbrown (2016) conclude that children both desire, and developmentally require, time and space to nurture their friendships.

In their cross-sectional study of 8- to 14-year-old children with and without anxiety disorders, Crawford & Manassis (2011) found that having poor social skills predisposed children to experience lower relationship quality and increased their risk of being bullied.

In their study of 794 preadolescents and adolescents, Valkenburg & Peter (2007) found that online communication was positively related to the closeness of friendships, but only when youth communicated online with existing friends (rather than with strangers).
Scientists need your help to find out what ants in your neighborhood like to eat. Would you ask an ant to join you for lunch? A team of researchers at North Carolina State University in Raleigh calls on citizen scientists around the world to flip the picnic concept – they want *us* to feed the ants. By counting ants, recording their meal preferences, and sending in data, you can help Dr. Magdalena Sorger and her colleagues better understand what foods ants have access to around the world. This citizen science project, called Ant Picnic, could spark new studies into ant behavior, natural resources, and the impact of global factors like climate change. “Not only does [Ant Picnic] engage the public with scientific exploration, it will collect very valuable data on what resources are limiting — at least for ants — in different habitats and geographic regions,” Dr. Andrew V. Suarez says of the project. He leads ant research at the Suarez Lab of the University of Illinois. Ant Picnic data are incorporated into the largest research project of its kind – the biggest study of global patterns in preferred resources and activity within a single group of organisms. How Does an Ant Picnic Work? First, participants prepare six specific types of food — including a cookie and cotton balls soaked in either sugar or salt water — for their ant guests. After being left outside for at least an hour, participants record and photograph the number of ants that chose each type of food. Bagging and/or freezing the ants are options for making the counting process easier, but letting the ants die is not required. Sorger recognizes that some young scientists may worry about the prospect of ant deaths.She says because ants are so small, it is difficult to see identifying traits, like the number of hairs on their bodies, when they’re alive. So, the loss of ant life is sometimes part of how scientists study them, and this helps them to protect ants and their habitats in the long run. For those who want to take their participation further, extensions to the project include comparing ant responses on green versus paved sites or testing additional food types in a separate location. The project also includes a robust curriculum that gives students and others a chance to analyze ant data themselves and address their own research questions. Sorger, who is a postdoctoral researcher at both the North Carolina Museum of Natural Sciences and the lab headed up by Rob Dunn at N.C. State, says she loves ants for many reasons, including their wide range of adaptations. They have “so many different shapes, forms and behaviors and each has its specific purpose, some we already know about, [while] others remain to be studied,” she explains. She also appreciates the ways in which their culture is similar to ours. Did you know, she says, that ants divide labor, farm, and live in “complex homes with temperature and airflow control?” The Ant Picnic project follows up on another citizen science initiative the lab has run since 2011, which involved collecting ants to figure out where different species live. A decade ago there were approximately 12,500 described species of ants, there are now close to 15,000, and the current estimate is that there are probably about 20,000 total. According to Sorger, there is much left to learn about what ants do and why. What Might We Learn? 
Sorger said she and her team “expect ants to choose the food type that is least common [or] abundant in their environment.” This expectation comes from the premise that animals seek the foods they need to survive; preference for one food over another may indicate a greater need for that type of food. By gathering data on hundreds of thousands of ants around the world, they aim to discover what is absent from the ants’ diverse habitats. This information can identify areas for further research into the ants’ food and survival in specific regions. “What ants eat at different times of the year and in different places around the world tells us what might be missing in their environment and how climate change could impact ant populations,” the project SciStarter page explains. Some studies have already shown that climate change can affect ants’ well-being, activities, and interactions; it has been shown to impact both foraging behavior and community stability. Suarez points out that citizen scientists are essential to the Rob Dunn Lab’s success; “ecological work at regional [or] continental scales is nearly impossible without extensive public engagement,” he says. Also, he suspects, “people of all ages will be excited to see what lives in their backyards.” If you are ready to learn more about ants and host some for lunch, Sorger would like to say thank you for having an Ant Picnic of your own:
This post is part of our online forum, “Black October,” on the Russian Revolution and the African Diaspora The political and social changes brought on by the October Revolution of 1917 permeated nearly every aspect of Soviet life. The Communist state assumed the role of guardian, protector and nurturer, and the moral upbringing and education of children took on particular significance. For Soviets, the task of shaping moral sensibilities required instilling a sense of solidarity with, and sympathy for those oppressed by Western political and economic systems. In particular, Western racism and colonization in Africa provided a rich opportunity to critique the exploitation of the continent. Children’s literature became an important tool in nurturing values and political sensibilities in the state’s youngest citizens. In particular, early Soviet works depicting Africans and people of African origin highlighted racial and economic injustices perpetrated on the continent but also reinforced pejorative attitudes about Africa. Works such as Kornei Chukovsky’s Barmaley (1925) posited a dark Africa, wild and foreboding, a place to be avoided. These attitudes continued to resonate and form the basis of a complicated relationship between Russians and Africans during the Soviet period. In children’s literature published just prior to and in the decade following the October 1917 Revolution, Africa captured the creative imagination of several Russian children’s writers. Despite the Soviet Government’s critique of Western racial oppression and colonialism, the cultural establishment drew on primitivist tropes and discourses of Black racial inferiority in their imaginings of Africa. The 1920s are particularly interesting given that it was a decade which saw a level of artistic freedom, innovation and creativity that essentially disappeared in the 1930s with the formation of the Union of Soviet Writers and the adoption of Socialist Realism as the party’s driving artistic principle. Writers who did not belong to the official party found difficulty in getting their work published. Many fell victim to Stalin’s purges and were either executed or perished in Soviet labor camps. By the October 1917 Revolution, nearly ninety percent of the African continent had been colonized. Some Soviet children’s writers confronted human and material exploitation in Africa while others pointed to the absence of civilized development and the ongoing “white man’s burden” that placed intervention in the context of humanitarianism. These narratives underscored the flaws of Western ideologies and strengthened the Soviet Union’s moral position in its political and ideological conflicts with the West. Soviet writers contested the exploitation of Africans yet simultaneously displayed a distinct ambivalence regarding their intelligence and humanity. In contrast to nineteenth-century colonizers and slaveholders, benevolent, color-blind Soviets became alternative civilizing agents. What resulted were children’s texts that often reflected negative racial stereotypes, and with varying degrees of subtlety, reinforced centuries-old pejorative notions of what constitutes “Africa” and “Africanness.” Arguably the most iconic image of Africa in Soviet children’s literature appears in the poem Barmaley (1925) by Kornei Chukovsky, the popular Soviet children’s poet and highly regarded translator of foreign children’s books into Russian. In Barmaley, the Russian pirate Barmaley seeks conquest in Africa. 
His viciousness is such that the entire continent should be avoided for fear of an encounter with him. The poem begins: “Little children!/ For nothing in the world / Do not go to Africa/ Do not go to Africa for a walk!// In Africa, there are sharks,/ In Africa, there are gorillas, / In Africa, there are large/ Evil crocodiles/ They will bite you, / Beat and offend you// Don’t you go, children, / to Africa for a walk/ In Africa, there is a robber,/ In Africa, there is a villain,/ In Africa, there is terrible/ Bahr-mah-ley!// He runs about Africa/ And eats children-/ Nasty, vicious, greedy Barmaley!” Yet Barmaley, for all of his viciousness, is not the narrative’s main threat; the main threat is Africa itself. The Africa of Chukovsky’s poem evokes notions of the dark, dangerous “other.” This threat is brought closer to the young reader by the suggestion that Africa as a physical space is not abstract and faraway, but is rather a place a child could encounter simply by walking (“Do not go (walk) to Africa”). Its wildness is not controlled or confined to zoos and habitats; it literally embodies the entire continent. In spite of the narrator’s admonitions, and the parents’ warning (“Africa is terrible, yes, yes, yes. Africa is dangerous, yes, yes, yes. Never go to Africa.”), Tanya and Vanya leave Leningrad to discover Africa for themselves. What they encounter draws upon stereotypical images that construct Africa as a continent of exotic vegetation and menacing animals that roam freely. The children do not encounter African natives yet the reader retains an impression of Africa as foreboding and inhospitable, primarily on the basis of its physical topography. Barmaley, written after the historic events of 1917, lacks a direct political message. It is a fantastic tale whose aim is to entertain, encourage obedience and perhaps frighten a little. Yet the negative image of Africa has been a lasting one. It falls along a continuum of intellectual thought that can be traced back to the 19th century and pejorative Russian attitudes towards Africa and Africans. Most of the writers who produced early Soviet children’s texts that included constructions of Africa did not achieve the same level of popular success as Chukovsky, yet their work is no less important for understanding the various literary contexts in which the depiction of Africa was seen in early Soviet children’s literature. Semen Poltavsky’s Multi-colored Children (Detki Raznotsvetki, 1927) is noteworthy given its particularly grotesque visual images that reinforce the notion of Africa as the “other.” The young Soviet hero Vanya builds a plane and embarks on a trip to visit representatives of the world’s four races. Poltavsky’s narrative does not address the issues of European colonialism or western racism, but it does pose a broader question of cultural superiority. At first glance, the reader sees a young, physically appealing Soviet child who simply desires adventure. Yet increasingly offensive racialized caricatures are introduced that reflect the perceived cultural superiority of Vanya and suggest a racial hierarchy that places Africans squarely at the bottom. In their depictions of Africa and people of African origin, Soviet children’s writers frequently reinforced stereotypes that may ultimately have had, however unintentionally, a negative impact on how Soviets perceived people of African origin. 
One example from Soviet popular culture would be the song "Chunga Changa." Made popular by the animated film Katerok (1970), "Chunga Changa" presents Africa as a land of exotic animals and carefree, banana-eating natives whose primitivism is idealized. But this presumably positive view of Africa has a clear, patronizing subtext. Ongoing political and ideological conflicts and the drive to expand spheres of influence in Africa provided an ideal background for Soviet writers' negative literary portrayal of exploitative economic systems. Within this context, Soviet children's writers highlighted the historical discrimination against Africans and contrasted it with the Soviet ideal of international brotherhood and equality. Yet the use of pejorative images of Africans arguably contributed to the racism many Africans experienced in the Soviet Union throughout the Soviet era.1 Soviet Communists touted solidarity with their oppressed brothers across the globe, and equality was held up as an ideal, but children's literature of the 1920s reveals how some brothers were clearly more equal than others. The ramifications of how this played out in practical terms were seen in the lived experiences of many Blacks in the Soviet Union for years to come. The 100th anniversary of the Russian Revolution provides an opportune moment to reconsider the crucial role of children's literature, not only in the socialization process but also in the formation of racial attitudes. In a broader context, it also provides a moment to reflect on how the Russian Revolution fell short of its promise of racial equality in the Soviet Union and on the continued pervasive effects of internalized racial attitudes.
- Although many Africans and African-Americans had successful and contented lives in the Soviet Union, their experiences were not universal. Several historians of the Soviet Union have written on African experiences with racism in the Soviet Union. For an excellent examination of this history see Maxim Matusevich's Africa in Russia, Russia in Africa: Three Centuries of Encounters. Trenton, N.J.: Africa World Press, 2007
Organic matter constitutes 35%–40% of the municipal solid waste generated in India. This waste can be recycled by the method of composting, one of the oldest forms of disposal. It is the natural process of decomposition of organic waste that yields manure or compost, which is very rich in nutrients. Composting is a biological process in which micro-organisms, mainly fungi and bacteria, convert degradable organic waste into a humus-like substance. This finished product, which looks like soil, is high in carbon and nitrogen and is an excellent medium for growing plants. The process of composting ensures that the waste produced in kitchens is not carelessly thrown away and left to rot. It recycles the nutrients and returns them to the soil. Apart from being clean, cheap, and safe, composting can significantly reduce the amount of disposable garbage. The organic fertilizer can be used instead of chemical fertilizers and is better, especially when used for vegetables. It increases the soil's ability to hold water and makes the soil easier to cultivate. It also helps the soil retain more of the plant nutrients. Vermi-composting has become very popular in the last few years. In this method, worms are added to the compost. These help to break down the waste, and the added excreta of the worms makes the compost very rich in nutrients. In the activity section of this web site you can learn how to make a compost pit or a vermi-compost pit in your school or in the garden at home. To make a compost pit, you have to select a cool, shaded corner of the garden or the school compound and dig a pit, which ideally should be 3 feet deep. This depth is convenient for aerobic composting, as the compost has to be turned at regular intervals in this process. Preferably, the pit should be lined with granite or brick to prevent nitrite pollution of the subsoil water; nitrite is known to be highly toxic. Each time organic matter is added to the pit, it should be covered with a layer of dried leaves or a thin layer of soil, which allows air to enter the pit and thereby prevents bad odour. At the end of 45 days, the rich, pure organic matter is ready to be used. For more information on composting link to
High functioning autism is not a single disorder; rather, it is a spectrum of closely related disorders that share certain symptoms. It is also better conceptualized as a scale of varying degrees and severities instead of just a categorical diagnosis. There is no blood test for diagnosing high functioning autism, nor is there any one test for the evaluation of the disorder. Instead, the doctor will have to assess and take into account a variety of factors. To determine the level of autism in a child, the doctor will evaluate two main factors: the child's abilities or skills to communicate with others and the presence of restricted, repetitive behavior.

High Functioning Autism: Behavioral Patterns

A child may display a pattern of behavior unique to them; indeed, the manifestations of symptoms may vary from case to case. Depending on where on the spectrum they are, the child may possess normal or below average intelligence. They may have difficulty learning and at the same time pick up some subjects more easily. They may even exhibit normal to above-average intelligence. Though not an official medical diagnosis, the term high functioning autism (HFA) is commonly used to refer to individuals with autism spectrum disorder who have no intellectual disability. They are able to read, write, speak, and perform regular life tasks. The symptoms of HFA can be similar to those of Asperger's syndrome, with the key difference being that children with HFA experience a significant delay in their early speech and language skills development while those with Asperger's syndrome do not. Generally, individuals with high functioning autism have a lower verbal reasoning ability, in part due to the delays in their speech and language skills development. On the other hand, they possess better visual and spatial skills and thus a higher performance IQ. Individuals with high functioning autism also have more control over their motions; those with Asperger's syndrome tend to be clumsier. Additionally, people with HFA may exhibit curiosity and interest for many different things, as opposed to people with Asperger's syndrome, who may fixate on a single, often niche subject. Currently, autism spectrum disorder (ASD) is divided into three levels that reflect its severity.
- Level 1. This is the mildest level of ASD. People at this level on the spectrum generally exhibit mild symptoms that don't significantly interfere with their personal or professional life and relationships. People at this level tend to be those referred to as having high functioning autism or Asperger's syndrome.
- Level 2. People at this level exhibit more severe symptoms. As such, they require more support, such as speech therapy or social skills training, in order to function effectively.
- Level 3. This is the most severe level of ASD. People at this level require the most support, including full-time caregivers or intensive therapy in some cases.
Just like with the diagnosis of the disorder, there is no single test for determining the level of ASD. The specialist working with you would have to spend a lot of time talking to someone and observing their behaviors to get a better idea of their:
- verbal and emotional development
- social and emotional capabilities
- nonverbal communication abilities
They will also try to assess how well someone is able to create or maintain meaningful relationships with others. Early intervention is recommended for effective management of autism.
Diagnosing the disorder as early as possible increases the chance of helping the individual grow up to be well-adjusted and to lead as full a life as possible.

How are the different levels treated?

There aren't any standardized treatment recommendations for different levels of ASD; instead, the treatment strategy would be devised depending on each person's unique symptoms. Those with level 1 ASD may need less intensive treatment than those with level 2 or level 3 ASD, who will likely need a long-term treatment plan. ASD treatment plans may include the following therapies. Since ASD can cause a variety of speech issues, speech therapy may be called for to help the individual learn to express themselves and engage in conversation. A speech therapist can help to address a range of speech problems. Some people with ASD have trouble with motor skills. This can make coordinated movements difficult for them. They may have trouble walking, running, or jumping. With physical therapy, their muscles will be strengthened and their motor skills improved. Occupational therapy may be implemented in tandem with physical therapy; it is designed to help the person use their hands, legs, or other body parts more effectively, in order to perform daily tasks more efficiently. Sensitivity to sounds, lights, and touch is common in individuals with ASD. Through sensory training, they can learn how to deal with sensory input, such as in loud, crowded spaces. While there aren't any medications designed to treat high functioning autism by itself, certain types can help to manage specific symptoms, such as depression or high energy. Imbalances in hormone levels can also be addressed with medication to help the individual feel calmer and more comfortable.
One of the stereotypes regarding teenagers is that they are poor decision makers and engage in risky behavior. This stereotype is usually explained in terms of the teenage brain (or mind) being immature and lacking the reasoning abilities of adults. Of course, adults often engage in poor decision-making and risky behavior. Interestingly enough, there is research that shows teenagers use basically the same sort of reasoning as adults and that they even overestimate risks (that is, regard something as more risky than it is). So, if kids use the same processes as adults and also overestimate risk, then what needs to be determined is how teenagers differ, in general, from adults. Currently, one plausible hypothesis is that teenagers differ from adults in terms of how they evaluate the value of a reward. The main difference, or so the theory goes, is that teenagers place higher value on rewards (at least certain rewards) than adults. If this is correct, it certainly makes sense that teenagers are more willing than adults to engage in risk taking. After all, the rationality of taking a risk is typically a matter of weighing the (perceived) risk against the (perceived) value of the reward. So, a teenager who places higher value on a reward than an adult would be acting rationally (to a degree) if she were willing to take more risk to achieve that reward. Obviously enough, adults also vary in their willingness to take risks and some of this difference is, presumably, a matter of the value the adults place on the rewards relative to the risks. So, for example, if Sam values the enjoyment of sex more than Sally, then Sam will (somewhat) rationally accept more risks in regard to sex than Sally. Assuming that teenagers generally value rewards more than adults do, then the greater risk-taking behavior of teens relative to adults makes considerable sense. It might be wondered why teenagers place more value on rewards relative to adults. One current theory is based in the workings of the brain. On this view, the sensitivity of the human brain to dopamine and oxytocin peaks during the teenage years. Dopamine is a neurotransmitter that is supposed to trigger the "reward" mechanisms of the brain. Oxytocin is another neurotransmitter, one that is also linked with the "reward" mechanisms as well as social activity. Assuming that the teenage brain is more sensitive to the reward-triggering chemicals, then it makes sense that teenagers would place more value on rewards. This is because they do, in fact, get a greater reward than adults. Or, more accurately, they feel more rewarded. This, of course, might be one and the same thing—perhaps the value of a reward is a matter of how rewarded a person feels. This does raise an interesting subject, namely whether the value of a reward is a subjective or objective matter. Adults are often critical of what they regard as irrationally risky behavior by teens. While my teen years are well behind me, I have looked back on some of my decisions that seemed like good ideas at the time. They really did seem like good ideas, yet my adult assessment is that they were not good decisions. However, I am weighing these decisions in terms of my adult perspective and in terms of the later consequences of these actions. I also must consider that the rewards that I felt in the past are now naught but faded memories. To use the obvious analogy, it is rather like eating an entire good cake.
At the time, that sugar rush and taste are quite rewarding and it seems like a good idea while one is eating that cake. But once the sugar rush gives way to the sugar crash and the cake, as my mother would say, “went right to the hips”, then the assessment might be rather different. The food analogy is especially apt: as you might well recall from your own youth, candy and other junk food tasted so good then. Now it is mostly just…junk. This also raises an interesting subject worthy of additional exploration, namely the assessment of value over time. Going back to the cake, eating the whole thing was enjoyable and seemed like a great idea at the time. Yes, I have eaten an entire cake. With ice cream. But, in my defense, I used to run 95-100 miles per week. Looking back from the perspective of my older self, that seems to have been a bad idea and I certainly would not do that (or really enjoy doing so) today. But, does this change of perspective show that it was a poor choice at the time? I am tempted to think that, at the time, it was a good choice for the kid I was. But, my adult self now judges my kid self rather harshly and perhaps unfairly. After all, there does seem to be considerable relativity to value and it seems to be mere prejudice to say that my current evaluation should be automatically taken as being better than the evaluations of the past.
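To make the risk-reward weighing described above concrete, here is a minimal, purely illustrative sketch in Python. It models a hypothetical "teen" and "adult" who perceive the same risk but assign different subjective values to the reward; every number in it is invented for illustration and none comes from the research mentioned in the essay.

```python
# Toy expected-value model of the risk/reward weighing discussed above.
# All probabilities, costs, and reward values are invented for illustration.

def expected_value(p_bad: float, cost_if_bad: float, reward_value: float) -> float:
    """Expected value of taking the risk: reward minus probability-weighted cost."""
    return reward_value - p_bad * cost_if_bad

p_bad = 0.2           # perceived chance the risky act goes badly
cost_if_bad = 50.0    # perceived cost if it does

teen_reward = 20.0    # a brain that feels the reward more strongly
adult_reward = 8.0    # the same reward as valued by an adult

print("Teen expected value: ", expected_value(p_bad, cost_if_bad, teen_reward))   # 10.0
print("Adult expected value:", expected_value(p_bad, cost_if_bad, adult_reward))  # -2.0
print("Same risk estimate, different reward valuation, different (rational) choice.")
```

On these made-up numbers, the very same risk estimate leads to opposite decisions, which is the essay's point: the difference need not lie in faulty reasoning, only in how strongly the reward is felt.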
The moon’s surface is riddled with craters ranging in size and structural complexity, and billions of years ago before life emerged, the Earth looked the same way. “The bottom line is, everything that happened on the moon happened on the Earth,” said David Kring, crater expert and team leader for the Center for Lunar Science and Exploration. “The Earth used to look just like that.” But Earth has several things the moon doesn’t — an atmosphere and liquid water that cause erosion. And the trump card, plate tectonics, which recycles much of the planet’s crust over millions of years and smooths away blemishes left by cosmic impacts. As a result, there are only around 160 known impact craters in existence today (though there are surely more that haven’t been discovered). Craters come in two flavors: impact craters, caused by asteroids or comets, and volcanic craters, formed by powerful volcanic explosions. Such outbursts can be violent enough that once the eruption is over, the volcano collapses in on its empty magma chamber and forms a caldera, or volcanic crater. Lake Toba in Sumatra, the largest volcanic structure on Earth, is an example of an enormous caldera that has filled with water over time. Whereas volcanic craters arise from deep inside the planet, impact craters originate in outer space. When a meteor makes it through Earth’s atmosphere without burning up, it strikes the ground faster than the speed of sound. “Something we don’t understand very well on the geological side (of crater formation) is, we still find it difficult to determine the trajectory of impacting objects for most impact craters,” Kring said. “We’re still searching for a clue to deduce that.” But no matter at what angle it makes contact, the enormous amount of kinetic energy the projectile carries immediately transfers to the target rock it hits, triggering powerful shock waves. Although craters look like imprints of a giant fist smashing the ground inward, impact shock waves have the opposite effect; planetary scientists divide the crater-forming process into three phases. The compression stage of crater formation involves that initial exchange of energy between the projectile and the impact area. During the excavation phase, the massive shock wave causes the projectile to simultaneously melt and vaporize, spewing plumes of searing hot rock vapor miles high into the atmosphere. The force can catapult chunks of molten and solid rock hundreds of miles from the impact site — this material is known as ejecta flow. And so far, the crater formation process has only lasted a few seconds. During the final modification phase, the remainder of ejecta partially refills and rings the crater site, and debris forms a rich mineral composite called breccia. Larger, more forceful impact events will form complex craters in which the rock at the center of the crater rebounds from the downward pressure of the shock wave and uplifts into a mound-like formation. But the environmental effects of impact crater formation go far beyond forming benign basins. For instance, the famous Chicxulub crater in Yucatan, Mexico, is thought to be the site of the meteor impact that instigated the K-T event, which wiped out the dinosaurs in a mass extinction that affected much of life on Earth. On Mars, meteor storms 100 million years ago may have literally shaken the Red Planet to the core and destroyed its magnetic field. Even the crater-covered moon might be a chip off old Earth’s block, an enormous shard shot into orbit following a giant impact event.
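To get a feel for the scale of the energy transfer described above, a rough back-of-the-envelope estimate helps. The short Python sketch below computes the kinetic energy of a hypothetical stony impactor from assumed values for its diameter, density, and entry speed; the numbers are illustrative placeholders, not figures from this article.

```python
import math

# Illustrative, assumed values -- not taken from the article.
diameter_m = 50.0          # impactor diameter in meters
density_kg_m3 = 3000.0     # typical stony-asteroid density
speed_m_s = 17000.0        # a plausible entry speed (~17 km/s)

# Mass of a sphere: (4/3) * pi * r^3 * density
radius_m = diameter_m / 2.0
mass_kg = (4.0 / 3.0) * math.pi * radius_m**3 * density_kg_m3

# Kinetic energy: 0.5 * m * v^2
energy_j = 0.5 * mass_kg * speed_m_s**2

# One ton of TNT is defined as 4.184e9 joules.
energy_tons_tnt = energy_j / 4.184e9

print(f"Mass: {mass_kg:.3e} kg")
print(f"Kinetic energy: {energy_j:.3e} J (~{energy_tons_tnt:.2e} tons of TNT)")
```

Even this modest, made-up impactor carries energy comparable to a large nuclear weapon, which is why the shock waves described above can melt and vaporize rock in seconds.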
Given such drastic, far-reaching outcomes of space rock impacts, Kring said that studying crater formation holds the answer to understanding not only how life on Earth began but also how it could be wiped away again in a future, perhaps inevitable, K-T event. “There will be another Chicxulub-size impact event,” he said. Since tectonic plate movements have erased much of Earth’s crater record, the answers to the lingering questions about crater formation and timelines lie in the “exquisitely preserved” craters on the moon. But until NASA returns to the lunar landscape, researchers must rely on shockwave simulators, mathematical models and the well-worn geological formations on Earth to estimate how and when another impact event might occur. “Where we’re really going to get the answers – the gold standards of answers – is when we go back to the moon,” Kring said. Posted by: Soderman/NLSI Staff
Having examined what religion is not, sociologists consider what characteristics do constitute religion. Sociologists generally define religion as a codified set of moral beliefs concerning sacred things and rules governing the behavior of believers who form a spiritual community. All religions share at least some characteristics. Religions use symbols, invoke feelings of awe and reverence, and prescribe rituals for their adherents to practice. Religion differs from magic, which involves superstitious beliefs and behaviors designed to bring about a desired end. Religion has numerous rituals and ceremonies, which may include lighting candles, holding processions, kneeling, praying, singing hymns and psalms, chanting, listening to sacred readings, eating certain foods, fasting from other foods on special days, and so forth. These rituals, because of their religious nature, may differ quite a bit from the procedures of ordinary daily life. Religious individuals may practice their rituals and ceremonies alone, at home, or within special spaces: shrines, temples, churches, synagogues, or ceremonial grounds. In most traditional societies, religion plays a central role in cultural life. People often synthesize religious symbols and rituals into the material and artistic culture of the society: literature, storytelling, painting, music, and dance. The individual culture also determines the understanding of priesthood. A priest offers sacrifices to a deity or deities on behalf of the people. In smaller hunting‐and‐gathering societies no priesthood exists, although certain individuals specialize in religious (or magical) knowledge. One such specialist is the shaman, who the people believe controls supernatural forces. People may consult the shaman when traditional religion fails.
Strength of materials

In materials science, the strength of a material is its ability to withstand an applied stress without failure. The applied stress may be tensile, compressive, or shear. Strength of materials is a subject which deals with loads, deformations and the forces acting on a material. A load applied to a mechanical member will induce internal forces within the member called stresses. The intensity of these internal forces is called stress, while the deformation they cause in the material is called strain. The analysis of a member's strength relies on three different types of analysis: strength, stiffness and stability, where strength refers to the load-carrying capacity, stiffness refers to resistance to deformation or elongation, and stability refers to the ability to maintain the initial configuration. Material yield strength refers to the point on the engineering stress-strain curve (as opposed to the true stress-strain curve) beyond which the material experiences deformations that will not be completely reversed upon removal of the loading. The ultimate strength refers to the point on the engineering stress-strain curve corresponding to the stress that produces fracture. A material's strength is dependent on its microstructure. The engineering processes to which a material is subjected can alter this microstructure. The variety of strengthening mechanisms that alter the strength of a material includes work hardening, solid solution strengthening, precipitation hardening and grain boundary strengthening, and can be quantitatively and qualitatively explained. Strengthening mechanisms are accompanied by the caveat that some mechanical properties of the material may degenerate in an attempt to make the material stronger. For example, in grain boundary strengthening, although yield strength is maximized with decreasing grain size, ultimately, very small grain sizes make the material brittle. In general, the yield strength of a material is an adequate indicator of the material's mechanical strength. Considered in tandem with the fact that the yield strength is the parameter that predicts plastic deformation in the material, one can make informed decisions on how to increase the strength of a material depending on its microstructural properties and the desired end effect. Strength is expressed in terms of compressive strength, tensile strength, and shear strength, namely the limit states of compressive stress, tensile stress and shear stress, respectively. The effects of dynamic loading are probably the most important practical consideration of the strength of materials, especially the problem of fatigue. Repeated loading often initiates brittle cracks, which grow until failure occurs. The cracks always start at stress concentrations, especially at changes in the cross-section of the product, near holes and corners. The study of the subject of strength of materials often refers to various methods of calculating stresses in structural members, such as beams, columns and shafts. The methods employed to predict the response of a structure under loading and its susceptibility to various failure modes may take into account various properties of the materials other than material (yield or ultimate) strength. For example, failure in buckling is dependent on material stiffness (Young's modulus).

Types of loadings
- Transverse loading - Forces applied perpendicular to the longitudinal axis of a member.
Transverse loading causes the member to bend and deflect from its original position, with internal tensile and compressive strains accompanying the change in curvature of the member. Transverse loading also induces shear forces that cause shear deformation of the material and increase the transverse deflection of the member.
- Axial loading - The applied forces are collinear with the longitudinal axis of the member. The forces cause the member to either stretch or shorten.
- Torsional loading - Twisting action caused by a pair of externally applied equal and oppositely directed force couples acting on parallel planes or by a single external couple applied to a member that has one end fixed against rotation.
Uniaxial stress is expressed by σ = F/A, where F is the force [N] acting on an area A [m²]. The area can be the undeformed area or the deformed area, depending on whether engineering stress or true stress is of interest. (A short numerical sketch of these definitions appears after the strain terms below.)
- Compressive stress (or compression) is the stress state caused by an applied load that acts to reduce the length of the material (compression member) in the axis of the applied load, in other words the stress state caused by squeezing the material. A simple case of compression is the uniaxial compression induced by the action of opposite, pushing forces. Compressive strength for materials is generally higher than their tensile strength. However, structures loaded in compression are subject to additional failure modes dependent on geometry, such as buckling.
- Tensile stress is the stress state caused by an applied load that tends to elongate the material in the axis of the applied load, in other words the stress caused by pulling the material. The strength of structures of equal cross-sectional area loaded in tension is independent of the shape of the cross section. Materials loaded in tension are susceptible to stress concentrations such as material defects or abrupt changes in geometry. However, materials exhibiting ductile behavior (most metals, for example) can tolerate some defects while brittle materials (such as ceramics) can fail well below their ultimate material strength.
- Shear stress is the stress state caused by a pair of opposing forces acting along parallel lines of action through the material, in other words the stress caused by faces of the material sliding relative to one another. An example is cutting paper with scissors or stresses due to torsional loading.
- Yield strength is the lowest stress that produces a permanent deformation in a material. In some materials, like aluminium alloys, the point of yielding is difficult to identify, thus it is usually defined as the stress required to cause 0.2% plastic strain (a strain of 0.002). This is called a 0.2% proof stress.
- Compressive strength is a limit state of compressive stress that leads to failure in the manner of ductile failure (infinite theoretical yield) or brittle failure (rupture as the result of crack propagation, or sliding along a weak plane - see shear strength).
- Tensile strength or ultimate tensile strength is a limit state of tensile stress that leads to tensile failure in the manner of ductile failure (yield as the first stage of that failure, some hardening in the second stage and breakage after a possible "neck" formation) or brittle failure (sudden breaking in two or more pieces at a low stress state). Tensile strength can be quoted as either true stress or engineering stress.
- Fatigue strength is a measure of the strength of a material or a component under cyclic loading, and is usually more difficult to assess than the static strength measures. Fatigue strength is quoted as stress amplitude or stress range (Δσ = σmax − σmin), usually at zero mean stress, along with the number of cycles to failure under that condition of stress.
- Impact strength is the capability of the material to withstand a suddenly applied load and is expressed in terms of energy. It is often measured with the Izod impact strength test or Charpy impact test, both of which measure the impact energy required to fracture a sample. Volume, modulus of elasticity, distribution of forces, and yield strength affect the impact strength of a material. In order for a material or object to have a higher impact strength, the stresses must be distributed evenly throughout the object. It also must have a large volume with a low modulus of elasticity and a high material yield strength.

Strain (deformation) terms
- Deformation of the material is the change in geometry created when stress is applied (in the form of force loading, gravitational field, acceleration, thermal expansion, etc.). Deformation is expressed by the displacement field of the material.
- Strain or reduced deformation is a mathematical term that expresses the trend of the deformation change within the material field. Strain is the deformation per unit length. In the case of uniaxial loading (for example, the displacement of a bar element), strain is expressed as the quotient of the displacement and the length of the specimen. For 3D displacement fields it is expressed as derivatives of displacement functions in terms of a second-order tensor (with 6 independent elements).
- Deflection is a term to describe the magnitude to which a structural element bends under a load.
- Elasticity is the ability of a material to return to its previous shape after stress is released. In many materials, the applied stress is directly proportional to the resulting strain (up to a certain limit), and a graph representing those two quantities is a straight line. The slope of this line is known as Young's modulus, or the "modulus of elasticity." The modulus of elasticity can be used to determine the stress-strain relationship in the linear-elastic portion of the stress-strain curve. The linear-elastic region is either below the yield point, or, if a yield point is not easily identified on the stress-strain plot, it is defined to be between 0 and 0.2% strain, and is defined as the region of strain in which no yielding (permanent deformation) occurs.
- Plasticity or plastic deformation is the opposite of elastic deformation and is defined as unrecoverable strain. Plastic deformation is retained after the release of the applied stress. Most materials in the linear-elastic category are usually capable of plastic deformation. Brittle materials, like ceramics, do not experience any plastic deformation and will fracture under relatively low stress. Materials such as metals usually experience a small amount of plastic deformation before failure, while ductile metals such as copper and lead, or polymers, will plastically deform much more. Consider the difference between a carrot and chewed bubble gum. The carrot will stretch very little before breaking. The chewed bubble gum, on the other hand, will plastically deform enormously before finally breaking.
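As a minimal illustration of the stress and strain definitions above, the Python sketch below computes the uniaxial stress σ = F/A for an assumed bar, uses Hooke's law to estimate the strain and elongation in the linear-elastic region, and checks the result against an assumed yield strength. The load, geometry, and material properties are placeholder values, not data from the article.

```python
# Minimal sketch of the uniaxial stress/strain definitions above:
#   stress      sigma = F / A        (engineering stress, undeformed area)
#   strain      epsilon = sigma / E  (Hooke's law, linear-elastic region)
#   elongation  delta = epsilon * L
# All numerical values below are assumed placeholders for illustration.

force_n = 50_000.0             # axial load F in newtons
area_m2 = 0.0004               # cross-sectional area A (20 mm x 20 mm)
length_m = 2.0                 # original member length L
youngs_modulus_pa = 200e9      # assumed E for a typical structural steel
yield_strength_pa = 250e6      # assumed yield strength

stress_pa = force_n / area_m2
strain = stress_pa / youngs_modulus_pa
elongation_m = strain * length_m

print(f"Stress:     {stress_pa / 1e6:.1f} MPa")
print(f"Strain:     {strain:.6e}")
print(f"Elongation: {elongation_m * 1000:.3f} mm")

# Below the yield strength the deformation should remain elastic and recoverable.
if stress_pa < yield_strength_pa:
    print("Below yield strength: deformation remains elastic.")
else:
    print("At or above yield strength: permanent (plastic) deformation expected.")
```

With these assumed numbers the stress is 125 MPa, comfortably inside the linear-elastic region described above, so the 1.25 mm elongation would disappear once the load is removed.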
Ultimate strength is an attribute related to a material, rather than just a specific specimen made of the material, and as such it is quoted as the force per unit of cross-section area (N/m²). The ultimate strength is the maximum stress that a material can withstand before it breaks or weakens. For example, the ultimate tensile strength (UTS) of AISI 1018 steel is 440 MN/m². In general, the SI unit of stress is the pascal, where 1 Pa = 1 N/m². In Imperial units, the unit of stress is given as lbf/in² or pounds-force per square inch. This unit is often abbreviated as psi. One thousand psi is abbreviated ksi. A factor of safety is a design criterion that an engineered component or structure must achieve. FS = UTS / R, where FS is the factor of safety, R is the applied stress, and UTS is the ultimate stress (psi or N/m²). Margin of safety is also sometimes used as a design criterion. It is defined as MS = Failure Load / (Factor of Safety × Predicted Load) − 1. For example, to achieve a factor of safety of 4, the allowable stress in an AISI 1018 steel component can be calculated to be R = UTS / FS = 440/4 = 110 MPa, or R = 110×10⁶ N/m². Such allowable stresses are also known as "design stresses" or "working stresses." Design stresses that have been determined from the ultimate or yield point values of the materials give safe and reliable results only for the case of static loading. Many machine parts fail when subjected to non-steady and continuously varying loads even though the developed stresses are below the yield point. Such failures are called fatigue failures. The failure is by a fracture that appears to be brittle with little or no visible evidence of yielding. However, when the stress is kept below the "fatigue stress" or "endurance limit stress", the part will endure indefinitely. A purely reversing or cyclic stress is one that alternates between equal positive and negative peak stresses during each cycle of operation. In a purely cyclic stress, the average stress is zero. When a part is subjected to a cyclic stress, also known as stress range (Sr), it has been observed that the failure of the part occurs after a number of stress reversals (N) even if the magnitude of the stress range is below the material's yield strength. Generally, the higher the stress range, the fewer the number of reversals needed for failure. There are four important failure theories, namely (1) maximum shear stress theory, (2) maximum normal stress theory, (3) maximum strain energy theory, and (4) maximum distortion energy theory. Out of these four theories of failure, the maximum normal stress theory is only applicable for brittle materials, and the remaining three theories are applicable for ductile materials.
- Maximum shear stress theory - This theory postulates that failure will occur in a machine part if the magnitude of the maximum shear stress (τmax) in the part exceeds the shear strength (τyp) of the material determined from uniaxial testing. That is, failure will occur when τmax = τyp, or max of [|S1-S2|/2, |S2-S3|/2, |S3-S1|/2] = Syp/2. Multiplying both sides by 2, max of [|S1-S2|, |S2-S3|, |S3-S1|] = Syp. Using a design factor of safety Nfs, the theory formulates the design equation as: max of [|S1-S2|, |S2-S3|, |S3-S1|] should be less than or equal to Syp/Nfs.
- Maximum normal stress theory - This theory postulates that failure will occur in a machine part if the maximum normal stress in the part exceeds the ultimate tensile stress of the material as determined from uniaxial testing.
This theory deals with brittle materials only. The maximum tensile stress should be less than or equal to the ultimate tensile stress divided by the factor of safety, and the magnitude of the maximum compressive stress should be less than the ultimate compressive stress divided by the factor of safety. As the three principal stresses at a point in the part (S1, S2, and S3) may include both tensile and compressive stresses, when this theory is applied we need to check for failure both in tension and in compression. The method of application of this theory is to find the maximum tensile stress and the maximum compressive stress from the given values of S1, S2, and S3. The largest positive value among S1, S2, and S3 is the maximum tensile stress and the smallest negative value is the maximum compressive stress. For example, if S1 = 80 MPa, S2 = -100 MPa, and S3 = -150 MPa, then the maximum tensile stress = 80 MPa, and the maximum compressive stress = -150 MPa (the smallest negative value). Thus, according to this theory, the safe design condition for a brittle material can be given by: the maximum tensile stress should be less than or equal to Sut/Nfs, and the magnitude of the maximum compressive stress should be less than Suc/Nfs.
- Maximum strain energy theory - This theory postulates that failure will occur when the strain energy per unit volume due to the applied stresses in a part equals the strain energy per unit volume at the yield point in uniaxial testing. Strain energy is the energy stored in a material due to elastic deformation, that is, the work done during elastic deformation. Work done per unit volume = strain × average stress. During a tensile test, the stress increases from zero to Syp, so the average stress = Syp/2. The elastic strain at the yield point = Syp/E, where E is the modulus of elasticity. The strain energy per unit volume during uniaxial tension = average stress × strain = Syp²/(2E).
- Maximum distortion energy theory - This theory is also known as the shear energy theory or the von Mises-Hencky theory. It postulates that failure will occur when the distortion energy per unit volume due to the applied stresses in a part equals the distortion energy per unit volume at the yield point in uniaxial testing. The total elastic energy due to strain can be divided into two parts: one part causes change in volume, and the other part causes change in shape. Distortion energy is the amount of energy that is needed to change the shape.

Application of failure theory
Of the four theories, only the maximum normal stress theory predicts failure for brittle materials; the remaining three theories are applicable for ductile materials. Of these three, the distortion energy theory provides the most accurate results in the majority of stress conditions. The strain energy theory needs the value of Poisson's ratio of the part material, which is often not readily available. The maximum shear stress theory is conservative. For simple unidirectional normal stresses all theories are equivalent, which means all theories will give the same result. (A short worked sketch of these failure checks appears after the See also list below.)

See also
- Creep of materials
- Deformation-mechanism maps
- Diffusion in materials
- Elasticity of materials
- Fatigue of materials
- Forensic engineering
- Fracture mechanics
- Fracture toughness
- Heat transfer
- Materials science
- Material selection
- Microstructures of materials
- Plastic deformation in solids
- Plasticity of materials
- Schmidt hammer
- Specific strength
- Stress concentration
- Strengthening mechanisms of materials
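Returning to the failure theories discussed above, the short Python sketch below applies the maximum shear stress (Tresca) check for a ductile material and the maximum normal stress check for a brittle material to the example principal stresses quoted in the text (80, -100, and -150 MPa). The yield strength, ultimate strengths, and design factor of safety are assumed placeholder values, not figures from the article.

```python
# Applying two of the failure theories above to a principal stress state
# (S1, S2, S3). All material properties and the factor of safety are assumed.

s1, s2, s3 = 80.0, -100.0, -150.0   # principal stresses in MPa (example from the text)
n_fs = 2.0                          # assumed design factor of safety

# Maximum shear stress (Tresca) theory, for a ductile material:
# max(|S1-S2|, |S2-S3|, |S3-S1|) must not exceed Syp / Nfs.
syp = 350.0                         # assumed yield strength, MPa
tresca_equivalent = max(abs(s1 - s2), abs(s2 - s3), abs(s3 - s1))
print("Tresca check passes:", tresca_equivalent <= syp / n_fs)

# Maximum normal stress theory, for a brittle material:
# largest tensile stress vs Sut/Nfs, largest compressive magnitude vs Suc/Nfs.
sut = 200.0                         # assumed ultimate tensile strength, MPa
suc = 600.0                         # assumed ultimate compressive strength (magnitude), MPa
max_tensile = max(s1, s2, s3)              # 80 MPa here
max_compressive = abs(min(s1, s2, s3))     # 150 MPa here
normal_safe = (max_tensile <= sut / n_fs) and (max_compressive <= suc / n_fs)
print("Max normal stress check passes:", normal_safe)
```

With these assumed properties the Tresca check fails (the 230 MPa stress difference exceeds 350/2 MPa) while the brittle-material check passes, illustrating why the text calls the maximum shear stress theory the conservative one.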
Introduction to Millimeter Wave Wireless Communications

1.1 The Frontier: Millimeter Wave Wireless

Emerging millimeter wave (mmWave) wireless communication systems represent more than a century of evolution in modern communications. Since the early 1900s, when Guglielmo Marconi developed and commercialized the first wireless telegraph communication systems, the wireless industry has expanded from point-to-point technologies, to radio broadcast systems, and finally to wireless networks. As the technology has advanced, wireless communication has become pervasive in our world. Modern society finds itself immersed in wireless networking, as most of us routinely use cellular networks, wireless local area networks, and personal area networks, all of which have been developed extensively over the past twenty years. The remarkable popularity of these technologies causes device makers, infrastructure developers, and manufacturers to continually seek greater radio spectrum for more advanced product offerings. Wireless communication is a transformative medium that allows our work, education, and entertainment to be transported without any physical connection. The capabilities of wireless communications continue to drive human productivity and innovation in many areas. Communication at mmWave operating frequencies represents the most recent game-changing development for wireless systems. Interest in mmWave is in its infancy and will be driven by consumers who continue to desire higher data rates for the consumption of media while demanding lower delays and constant connectivity on wireless devices. At mmWave frequencies, the available spectrum is unparalleled compared to cellular and wireless local area network (WLAN) microwave systems that operate at frequencies below 10 GHz. In particular, the unlicensed spectrum at 60 GHz offers 10× to 100× more spectrum than is available for conventional unlicensed wireless local area networks in the Industrial, Scientific, and Medical (ISM) bands (e.g., at 900 MHz, 2.4 GHz, 5 GHz) or for users of WiFi and 4G (or older) cellular systems that operate at carrier frequencies below 6 GHz. To reinforce this perspective, Fig. 1.1 shows the magnitude of spectrum resources at 28 GHz (Local Multipoint Distribution Service [LMDS]) and 60 GHz in comparison to other modern wireless systems. Over 20 GHz of spectrum is waiting to be used for cellular or WLAN traffic in the 28, 38, and 72 GHz bands alone, and hundreds of gigahertz more spectrum could be used at frequencies above 100 GHz. This is a staggering amount of available new spectrum, especially when one considers that all of the world’s cellphones currently operate in less than 1 GHz of allocated spectrum. More spectrum makes it possible to achieve higher data rates for comparable modulation techniques while also providing more resources to be shared among multiple users.

Figure 1.1 Areas of the squares illustrate the available licensed and unlicensed spectrum bandwidths in popular UHF, microwave, 28 GHz LMDS, and 60 GHz mmWave bands in the USA. Other countries around the world have similar spectrum allocations [from [Rap02]].

Research in mmWave has a rich and exciting history. According to [Mil]:
- In 1895, Jagadish Chandra Bose first demonstrated in Presidency College, Calcutta, India, transmission and reception of electromagnetic waves at 60 GHz, over 23 meters distance, through two intervening walls by remotely ringing a bell and detonating some gunpowder.
For his communication system, Bose pioneered the development of an entire set of millimeter wave components, including the spark transmitter, coherer, dielectric lens, polarizer, horn antenna, and cylindrical diffraction grating. This was the first millimeter wave communication system in the world, developed more than 100 years ago. A pioneering Russian physicist, Pyotr N. Lebedew, also studied transmission and propagation of 4 to 6 mm wavelength radio waves in 1895 [Leb95]. Today’s radio spectrum has become congested due to the widespread use of smart-phones and tablets. Fig. 1.1 shows the relative bandwidth allocations of different spectrum bands in the USA, and Fig. 1.2 shows the spectrum allocations from 30 kHz to 300 GHz according to the Federal Communications Commission (FCC). Note that although Figs. 1.1 and 1.2 represent a particular country (i.e., the USA), other countries around the world have remarkably similar spectrum allocations stemming from the global allocation of spectrum by the World Radiocommunication Conference (WRC) under the auspices of the International Telecommunication Union (ITU). Today’s cellular and personal communication systems (PCS) mostly operate in the UHF ranges from 300 MHz to 3 GHz, and today’s global unlicensed WLAN and wireless personal area network (WPAN) products use the unlicensed Industrial, Scientific, and Medical (ISM) and Unlicensed National Information Infrastructure (U-NII) bands at 900 MHz, 2.4 GHz, and 5.8 GHz in the low microwave bands. The wireless spectrum is already allocated for many different uses and is very congested at frequencies below 3 GHz (e.g., UHF and below). AM Radio broadcasting, international shortwave broadcasting, military and ship-to-shore communications, and amateur (ham) radio are just some of the services that use the lower end of the spectrum, from the hundreds of kilohertz to the tens of megahertz (e.g., medium-wave and shortwave bands). Television broadcasting is done from the tens of megahertz to the hundreds of megahertz (e.g., VHF and UHF bands). Current cellphones and wireless devices such as tablets and laptops work at carrier frequencies between 700 MHz and 6 GHz, with channel bandwidths of 5 to 100 MHz. The mmWave spectrum, ranging between 30 and 300 GHz, is occupied by military, radar, and backhaul, but has much lower utilization. In fact, most countries have not even begun to regulate or allocate the spectrum above 100 GHz, as wireless technology at these frequencies has not been commercially viable at reasonable cost points. This is all about to change. Given the large amount of spectrum available, mmWave presents a new opportunity for future mobile communications to use channel bandwidths of 1 GHz or more. Spectrum at 28 GHz, 38 GHz, and 70-80 GHz looks especially promising for next-generation cellular systems. It is amazing to note from Fig. 1.2 that the unlicensed band at 60 GHz contains more spectrum than has been used by every satellite, cellular, WiFi, AM Radio, FM Radio, and television station in the world! This illustrates the massive bandwidths available at mmWave frequencies.

Figure 1.2 Wireless spectrum used by commercial systems in the USA. Each row represents a decade in frequency. For example, today’s 3G and 4G cellular and WiFi carrier frequencies are mostly between 300 MHz and 3000 MHz, located on the fifth row. Other countries around the world have similar spectrum allocations. Note how the bandwidth of all modern wireless systems (through the first 6 rows) easily fits into the unlicensed 60 GHz band on the bottom row [from [Rap12b] U.S. Dept.
of Commerce, NTIA Office of Spectrum Management]. See page C1 (immediately following page 8) for a color version of this figure. MmWave wireless communication is an enabling technology that has myriad applications to existing and emerging wireless networking deployments. As of the writing of this book, mmWave based on the 60 GHz unlicensed band is seeing active commercial deployment in consumer devices through IEEE 802.11ad [IEE12]. The cellular industry is just beginning to realize the potential of much greater bandwidths for mobile users in the mmWave bands [Gro13][RSM+13]. Many of the design examples in this book draw from the experience in 60 GHz systems and the authors’ early works on mmWave cellular and peer-to-peer studies for the 28 GHz, 38 GHz, 60 GHz, and 72 GHz bands. But 60 GHz WLAN, WPAN, backhaul, and mmWave cellular are only the beginning — these are early versions of the next generation of mmWave and terahertz systems that will support even higher bandwidths and further advances in connectivity. New 60 GHz wireless products are exciting, not only because of their ability to satisfy consumer demand for high-speed wireless access, but also because 60 GHz products may be deployed worldwide, thanks to harmonious global spectrum regulations. Harmonized global spectrum allocations allow manufacturers to develop worldwide markets, as demonstrated through the widespread adoption and commercial success of IEEE 802.11b WLANs in 1999, and more recent innovations such as IEEE 802.11a, IEEE 802.11g, IEEE 802.11n, and IEEE 802.11ac WLANs that all operate in the same globally allocated spectrum. WLAN succeeded because there was universal international agreement for the use of the 2.4 GHz ISM and 5 GHz Unlicensed National Information Infrastructure bands, which allowed major manufacturers to devote significant resources to create products that could be sold and used globally. Without international spectral agreements, new wireless technologies will founder for lack of a global market. This was demonstrated by early incarnations of Ultra-wide band (UWB) at the turn of the century, whose initial hype dramatically waned in the face of nonuniform worldwide spectral interference regulations. Fortunately, the governments of the USA, Europe, Korea, Japan, and Australia have largely followed the recommendations of the ITU, which designate frequencies between 57 and 66 GHz for unlicensed communications applications [ITU]. In the USA, the FCC has designated bands from 57 to 64 GHz for unlicensed use [Fed06]. In Europe, the European CEPT has allocated bands from 59 to 66 GHz for some form of mobile application [Tan06]. Korea and Japan have designated bands from 57 to 66 GHz and 59 to 66 GHz, respectively [DMRH10]. Australia has dedicated a smaller band from 59.3 to 62.9 GHz. Consequently, there is roughly 7 GHz of spectrum available worldwide for 60 GHz devices. At the time of this writing, the cellular industry is just beginning to explore similar spectrum harmonization for the use of mobile cellular networks in frequency bands that are in the mmWave spectrum.1 Dubbed “Beyond 4G” or “5G” by the industry, new cellular network concepts that use orders of magnitude more channel bandwidth, for simultaneous mobility coverage as well as wireless backhaul, are just now being introduced to governments and the ITU to create new global spectrum bands at carrier frequencies that are at least an order of magnitude greater than today’s fourth-generation (4G) Long Term Evolution (LTE) and WiMax mobile networks. 
Thus, just as the WLAN unlicensed products have moved from the carrier frequencies of 1 to 5 GHz in their early generations, now to 60 GHz, the 1 trillion USD cellular industry is about to follow this trend: moving to mmWave frequency bands where massive data rates and new capabilities will be supported by an immense increase in spectrum. Unlicensed spectrum at 60 GHz is readily available throughout the world, although this was not always the case. The FCC initiated the first major regulation of 60 GHz spectrum for commercial consumers through an unlicensed use proposal in 1995 [Mar10a], yet the same idea was considered a decade earlier by England’s Office of Communications (OfCom) [RMGJ11]. At that time, the FCC considered the mmWave band to be “desert property” due to its perceived unfavorable propagation characteristics and lack of low-cost commercial circuitry. However, the allocation of new spectrum has ignited and will continue to ignite the inventiveness and creativity of engineers to create new consumer products at higher frequencies and greater data rates. This perception of poor propagation due to low distance coverage is heavily influenced by the O2 absorption effect where a 60 GHz carrier wave interacts strongly with atmospheric oxygen during propagation, as illustrated in Fig. 1.3 [RMGJ11][Wel09]. This effect is compounded by other perceived unfavorable qualities of mmWave communication links: increased free space path loss, decreased signal penetration through obstacles, directional communication due to high-gain antenna requirements, and substantial intersymbol interference (ISI, i.e., frequency selectivity) due to many reflective paths over massive operating bandwidths. Furthermore, 60 GHz circuitry and devices have traditionally been very expensive to build, and only in the past few years have circuit solutions become viable in low-cost silicon. Figure 1.3 Expected atmospheric path loss as a function of frequency under normal atmospheric conditions (101 kPa total air pressure, 22° Celsius air temperature, 10% relative humidity, and 0 g/m3 suspended water droplet concentration) [Lie89]. Note that atmospheric oxygen interacts strongly with electromagnetic waves at 60 GHz. Other carrier frequencies, in dark shading, exhibit strong attenuation peaks due to atmospheric interactions, making them suitable for future short-range applications or “whisper radio” applications where transmissions die out quickly with distance. These bands may service applications similar to 60 GHz with even higher bandwidth, illustrating the future of short-range wireless technologies. It is worth noting, however, that other frequency bands, such as the 20-50 GHz, 70-90 GHz, and 120-160 GHz bands, have very little attenuation, well below 1 dB/km, making them suitable for longer-distance mobile or backhaul communications. In the early days of 60 GHz wireless communication, many viewed fixed wireless broadband (e.g., fiber backhaul replacement) as the most suitable 60 GHz application, due to requirements for highly directional antennas to achieve acceptable link budgets. Today, however, the propagation characteristics that were once seen as limitations are now either surmountable or seen as advantages. For example, 60 GHz oxygen absorption loss of up to 20 dB/km is almost negligible for networks that operate within 100 meters. 
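To make these propagation numbers concrete, a rough comparison can be worked out from the Friis free-space path loss formula, FSPL(dB) = 20 log10(4πdf/c), together with the roughly 20 dB/km oxygen absorption at 60 GHz cited above. The short Python sketch below is illustrative only (the 100 m link distance is an assumed value, not one taken from the text); it shows that moving from 2.4 GHz to 60 GHz adds about 28 dB of free-space loss, while oxygen absorption over 100 m adds only about 2 dB, consistent with the point that directional antenna gain, rather than atmospheric loss, dominates the short-range link budget.

```python
import math

C = 3.0e8  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Friis free-space path loss in dB."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

d = 100.0                  # assumed link distance in meters (illustrative)
o2_loss_db_per_km = 20.0   # approximate 60 GHz oxygen absorption cited in the text

loss_24 = fspl_db(d, 2.4e9)
loss_60 = fspl_db(d, 60e9) + o2_loss_db_per_km * d / 1000.0

print(f"2.4 GHz free-space loss over {d:.0f} m: {loss_24:.1f} dB")
print(f"60 GHz loss over {d:.0f} m incl. O2 absorption: {loss_60:.1f} dB")
print(f"Extra loss at 60 GHz: {loss_60 - loss_24:.1f} dB")
# The ~28 dB frequency-dependent gap (20*log10(60/2.4)) can be recovered with
# directional antenna gain at one or both ends of the link.
```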
The shift away from long-range communications actually benefits close-range communications because it permits aggressive frequency reuse with simultaneously operating networks that do not interfere with each other. Further, the highly directional antennas required for path loss mitigation can actually work to promote security as long as network protocols enable antenna directions to be flexibly steered. Thus, many networks are now finding a home at 60 GHz for communication at distances less than 100 m. Also, the 20 dB/km oxygen attenuation at 60 GHz disappears at other mmWave bands, such as 28, 38, or 72 GHz, making them nearly as good as today’s cellular bands for longer-range outdoor mobile communications. Recent work has found that urban environments provide rich multipath, especially reflected and scattered energy at or above 28 GHz — when smart antennas, beamforming, and spatial processing are used, this rich multipath can be exploited to increase received signal power in non-line of sight (NLOS) propagation environments. Recent results by Samsung show that over 1 Gbps can be carried over mmWave cellular at ranges exceeding 2 km, demonstrating that mmWave bands are useful for cellular networks [Gro13]. Although consumer demand and transformative applications fuel the need for more bandwidth in wireless networks, rapid advancements and price reductions in integrated mmWave (>10 GHz) analog circuits, baseband digital memory, and processors have enabled this progress. Recent developments of integrated mmWave transmitters and receivers with advanced analog and radio frequency (RF) circuitry (see Fig. 1.4) and new phased array and beamforming techniques are also paving the way for the mmWave future (such as the product in Fig. 1.5). Operation at 60 GHz and other mmWave frequencies at reasonable costs is largely the result of a continuation of advancements in complementary metal oxide semiconductor (CMOS) and silicon germanium (SiGe technologies). Signal generation into terahertz frequencies (1 to 430 THz) has been possible since at least the 1960s through photodiodes and other discrete components not amenable to small-scale integration and/or mass production [BS66]. Packaging the analog components needed to generate mmWave RF signals along with the digital hardware necessary to process massive bandwidths, however, has only been possible in the last decade. Moore’s Law, which has accurately predicted that integrated circuit (IC) transistor populations and computations per unit energy will double at regular intervals every two years [NH08, Chapter 1], explains the dramatic advancements that now allow 60 GHz and other mmWave devices to be made inexpensively. Today, transistors made with CMOS and SiGe are fast enough to operate into the range of hundreds of gigahertz [YCP+09], as shown in Fig. 1.6. Further, due to the immense number of transistors required for modern digital circuits (on the order of billions) each transistor is extremely cheap. Inexpensive circuit production processes will make system-on-chip (SoC) mmWave radios — a complete integration of all analog and digital radio components onto a single chip — possible. For mmWave communication, the semiconductor industry is finally ready to produce cost-effective, mass-market products. Figure 1.4 Block diagram (top) and die photo (bottom) of an integrated circuit with four transmit and receive channels, including the voltage-controlled oscillator, phase-locked loop, and local oscillator distribution network. 
Beamforming is performed in analog at baseband. Each receiver channel contains a low noise amplifier, inphase/quadrature mixer, and baseband phase rotator. The transmit channel also contains a baseband phase rotator, up-conversion mixers, and power amplifiers. Figure from [TCM+11], courtesy of Prof. Niknejad and Prof. Alon of the Berkeley Wireless Research Center [© IEEE]. Figure 1.5 Third-generation 60 GHz WirelessHD chipset by Silicon Image, including the SiI6320 HRTX Network Processor, SiI6321 HRRX Network Processor, and SiI6310 HRTR RF Transceiver. These chipsets are used in real-time, low-latency applications such as gaming and video, and provide 3.8 Gbps data rates using a steerable 32 element phased array antenna system (courtesy of Silicon Image) [EWA+11] [© IEEE]. See page C2 (immediately preceding page 9) for a color version of this figure. Figure 1.6 Achievable transit frequency (fT ) of transistors over time for several semiconductor technologies, including silicon CMOS transistors, silicon germanium heterojunction bipolar transistor (SiGe HBT), and certain other III-V high electron mobility transistors (HEMT) and III-V HBTs. Over the last decade CMOS (the current technology of choice for cutting edge digital and analog circuits) has become competitive with III-V technologies for RF and mmWave applications [figure reproduced from data in [RK09]© IEEE]. Wireless personal area networks (WPANs) provided the first mass-market commercial applications of short-range mmWave using the 60 GHz band. The three dominant 60 GHz WPAN specifications are WirelessHD, IEEE 802.11ad (WiGig), and IEEE 802.15.3c. WPANs support connectivity for mobile and peripheral devices; a typical WPAN realization is demonstrated in Fig. 1.7, where products such as those shown in Fig. 1.5 may be used. Currently, the most popular application of WPAN is to provide high-bandwidth connections for cable replacement using the high-definition multimedia interface (HDMI), now proliferating in consumer households. The increasing integration of 60 GHz silicon devices allows implementation on small physical platforms while the massive spectrum allocations at 60 GHz allow media streaming to avoid data compression limitations, which are common at lower frequencies with reduced bandwidth resources. Easing compression requirements is attractive because it reduces signal processing and coding circuity requirements, thereby reducing the digital complexity of a device. This may lead to lower cost and longer battery life in a smaller form factor. Due to major technical and marketing efforts by the Wireless Gigabit Alliance (WiGig), the IEEE 802.11ad standard has been designed to incorporate both WPAN and WLAN capabilities, and WiGig-compliant devices are just starting to ship in laptops, tablets, and smartphones around the world, whereas WirelessHD-compliant devices have been shipping since 2008. The success of today’s USB standard in consumer electronics has demonstrated how harmonious interfaces lead to a proliferation of compatible devices. 60 GHz is poised to fill this role for high-definition multimedia systems, as illustrated in Fig. 1.8. Figure 1.7 Wireless personal area networking. WPANs often connect mobile devices such as mobile phones and multimedia players to each other as well as desktop computers. Increasing the data-rate beyond current WPANs such as Bluetooth and early UWB was the first driving force for 60 GHz solutions. 
The IEEE 802.15.3c international standard, the WiGig standard (IEEE 802.11ad), and the earlier WirelessHD standard, released in the 2008–2009 time frame, provide a design for short-range data networks (≈ 10 m). All standards, in their first release, guaranteed to provide (under favorable propagation scenarios) multi-Gbps wireless data transfers to support cable replacement of USB, IEEE 1394, and gigabit Ethernet. Figure 1.8 Multimedia high-definition (HD) streaming. 60 GHz provides enough spectrum resources to remove HDMI cables without sophisticated joint channel/source coding strategies (e.g., compression), such as in the wireless home digital interface (WHDI) standard that operates at 5 GHz frequencies. Currently, 60 GHz is the only spectrum with sufficient bandwidth to provide a wireless HDMI solution that scales with future HD television technology advancement. WLANs, which extend the communication range beyond WPAN, also employ mmWave technology in the 60 GHz band. WLANs are used to network computers through a wireless access point, as illustrated in Fig. 1.9, and may connect with other wired networks or to the Internet. WLANs are a popular application of unlicensed spectrum that is being incorporated more broadly into smartphones, tablets, consumer devices, and cars. Currently, most WLAN devices operate under the IEEE 802.11n standard and have the ability to communicate at hundreds of megabits per second. IEEE 802.11n leverages multiple transmit and receive antennas using multiple input multiple output (MIMO) communication methods. These devices carry up to four antennas and operate in the 2.4 GHz or 5.2 GHz unlicensed bands. Until IEEE 802.11n, standard advancements (in terms of data rate capabilities) have been largely linear, that is, a single new standard improves on the previous standard for the next generation of devices. The next generation of WLAN, however, has two standards for gigabit communication: IEEE 802.11ac and IEEE 802.11ad. IEEE 802.11ac is a direct upgrade to IEEE 802.11n through higher-order constellations, more available antennas (up to 8) per device, and up to 4 times more bandwidth at microwave frequencies (5 GHz carrier). IEEE 802.11ad takes a revolutionary approach by exploiting 50 times more bandwidth at mmWave frequencies (60 GHz). It is supported by device manufacturers that recognize the role of mmWave spectrum in the continued bandwidth scaling for next-generation applications. IEEE 802.11ad and mmWave technology will be critical for supporting wireless traffic with speeds competitive not only with gigabit Ethernet, but also 10 gigabit Ethernet and beyond. The largest challenges presented to 60 GHz and mmWave WLAN are the development of power-efficient RF and phased-array antennas and circuitry, and the high attenuation experienced by mmWaves when propagating through certain materials. Many strategies will be employed to overcome these obstacles, including 60 GHz repeaters/relays, adaptive beam steering, and hybrid wired/microwave/mmWave WLAN devices that use copper or fiber cabling or low microwave frequencies for normal operation, and mmWave frequencies when the 60 GHz path loss is favorable. Although the WPAN and WLAN network architectures provide different communication capabilities, several wireless device companies, including Panasonic, Silicon Image, Wilocity, MediaTek, Intel, and Samsung, are aggressively investing in both technologies. Figure 1.9 Wireless local area networking. 
WLANs, which typically carry Internet traffic, are a popular application of unlicensed spectrum. WLANs that employ 60 GHz and other mmWave technology provide data rates that are commensurate with gigabit Ethernet. The IEEE 802.11ad and WiGig standards also offer hybrid microwave/mmWave WLAN solutions that use microwave frequencies for normal operation and mmWave frequencies when the 60 GHz path is favorable. Repeaters/relays will be used to provide range and connectivity to additional devices. MmWave technology also finds applications in cellular systems. One of the earliest applications of mmWave wireless communication was backhaul of gigabit data along a line-of-sight (LOS) path, as illustrated in Fig. 1.10. Transmission ranges on the order of 1 km are possible if very high-gain antennas are deployed. Until recently, however, 60 GHz and mmWave backhaul has largely been viewed as a niche market and has not drawn significant interest. 60 GHz backhaul physical layer (PHY) design traditionally assumed expensive components to provide high reliability and to maximize range, resulting in bulky equipment and reducing the cost advantage over wired backhaul; however, a new application for wireless backhaul is emerging. Cellular systems are increasing in density (resulting in 1 km or less distances between base stations). Concurrently, cellular base stations require higher-capacity backhaul connections to provide mobile high-speed video and to implement advanced multicell cooperation strategies. If wireless backhaul devices are able to leverage recent mmWave hardware cost reductions, they may be able to service this growing need at a lower cost with more infrastructure flexibility. Further, backhaul systems are investigating LOS MIMO strategies to scale throughput into fiber capabilities [SST+09]. As operators continue to move to smaller cell sizes to exploit spatial reuse, the cost per base station will drop as they become more plentiful and more densely distributed in urban areas. Thus, wireless backhaul will be essential for network flexibility, quick deployment, and reduced ongoing operating costs. Consequently, wireless backhaul is likely to reemerge as an important application of 60 GHz and mmWave wireless communications. In fact, we envisage future cellular and WLAN infrastructure to be able to simultaneously handle backhaul, fronthaul, and position location connections using mmWave spectrum. Figure 1.10 Wireless backhaul and relays may be used to connect multiple cell sites and subscribers together, replacing or augmenting copper or fiber backhaul solutions. We foresee that mmWave will play a leading role in fifth-generation (5G) cellular networks. In the past generations of cellular technology, various PHY technologies have been successful in achieving ultra-high levels of spectral efficiency (bits/s/Hz), including orthogonal frequency division multiplexing, multiple antennas, and efficient channel coding [GRM+10][STB09][LLL+10][SKM+10][CAG08][GMR+12]. Heterogeneous networks, coordinated multipoint transmission, relays, and the massive deployment of small cells or distributed antennas promise to further increase area spectral efficiency (bits/s/Hz/km²) [DMW+11][YHXM09][PPTH09][HPWZ13][CAG08][GMR+12]. The focus on area spectral efficiency is a result of extremely limited bandwidths available in the UHF and microwave frequency bands where cellular systems are deployed, as illustrated in Fig. 1.11.
MmWave cellular will change the current operating paradigm using the untapped mmWave spectrum. Figure 1.11 United States spectrum and bandwidth allocations for 2G, 3G, and 4G LTE-A (long-term evolution advanced). The global spectrum bandwidth allocation for all cellular technologies does not exceed 780 MHz. Currently, allotted spectrum for operators is dissected into disjoint frequency bands, each of which possesses different radio networks with different propagation characteristics and building penetration losses. Each major wireless provider in each country has, at most, approximately 200 MHz of spectrum across all of the different cellular bands available to them [from [RSM+13]© IEEE]. Cellular systems may use mmWave frequencies to augment the currently saturated 700 MHz to 2.6 GHz radio spectrum bands for wireless communications [KP11a]. The combination of cost-effective CMOS technology that can now operate well into the mmWave frequency bands, and high-gain, steerable antennas at the mobile and base station, strengthens the viability of mmWave wireless communications [RSM+13]. MmWave spectrum would allow service providers to offer higher channel bandwidths well beyond the 20 MHz typically available to 4G LTE users. By increasing the RF channel bandwidth for mobile radio channels, the data capacity is greatly increased, while the latency for digital traffic is greatly decreased, thus supporting much better Internet-based access and applications that require minimal latency. Given this significant jump in bandwidth and new capabilities offered by mmWave, the base station-to-device links, as well as backhaul links between base stations, will be able to handle much greater capacity than today’s cellular networks in highly populated areas. Cellular systems that use mmWave frequencies are likely to be deployed in licensed spectrum at frequencies such as 28 GHz or 38 GHz or at 72 GHz, because licensed spectrum better guarantees the quality of service. The 28 GHz and 38-39 GHz bands are currently available with spectrum allocations of over 1 GHz of bandwidths, and the E-Band above 70 GHz has over 14 GHz available [Gho14]. Originally intended for LMDS use in the late 1990s, the 28 GHz and 38 GHz licenses could be used for mobile cellular as well as backhaul [SA95][RSM+13]. MmWave cellular is a growing topic of research interest [RSM+13]. The use of mmWave for broadband access has been pioneered by Samsung [KP11a][KP11b][PK11] [PKZ10][PLK12], where data rates were reported in the range of 400 Mbps to 2.77 Gbps for a 1 GHz bandwidth at 1 km distance. Nokia has recently demonstrated that 73 GHz could be used to provide peak data rates of over 15 Gbps [Gho14]. Propagation characteristics of promising mmWave bands have been evaluated in [RQT+12], [MBDQ+12], [RSM+13], and [MSR14], and show path loss is slightly larger in NLOS conditions compared with today’s UHF and microwave bands due to the higher carrier frequency. The scattering effects also become important at mmWave frequencies, causing weak signals to become an important source of diversity, and NLOS paths are weaker, making blockage and coverage holes more pronounced. To allow high-quality links, directional beamforming will be needed at both the base station and at the handset where propagation can be improved [GAPR09][RRE14]. Hybrid architectures for beam-forming appear especially attractive as they allow both directional beamforming and more complex forms of precoding while using limited hardware [EAHAS+12a][AELH13]. 
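The bandwidth argument above can be illustrated with the Shannon capacity bound, C = B·log2(1 + SNR). The Python sketch below compares a 20 MHz LTE-like channel with a 1 GHz mmWave channel at an assumed 10 dB SNR (the SNR value is an illustrative assumption, not a figure from the text); for the same spectral efficiency, a fifty-fold increase in channel bandwidth gives a fifty-fold increase in the capacity bound.

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Shannon capacity bound C = B * log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10.0)
    return bandwidth_hz * math.log2(1 + snr_linear)

snr_db = 10.0  # assumed link SNR (illustrative)
for label, bw in [("20 MHz LTE-like channel", 20e6), ("1 GHz mmWave channel", 1e9)]:
    c = shannon_capacity_bps(bw, snr_db)
    print(f"{label}: {c / 1e9:.2f} Gbps upper bound at {snr_db:.0f} dB SNR")
```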
Applications to picocellular networks are also promising [ALRE13], indicating 15-fold improvements in data rates compared with current 3GPP LTE 4G cellular deployments. Work in [RRE14] shows over 20-fold improvement in end-user data rates over the most advanced 4G LTE networks in New York City. Results in [BAH14] show 12-fold improvements compared with other competing microwave technologies, and results in [ALS+14], [RRE14], and [Gho14] predict 20 times or more capacity improvements using mmWave technologies. As 5G is developed and implemented, we believe the main differences compared to 4G will be the use of much greater spectrum allocations at untapped mmWave frequency bands, highly directional beamforming antennas at both the mobile device and base station, longer battery life, lower outage probability, much higher bit rates in larger portions of the coverage area, cheaper infrastructure costs, and higher aggregate capacity for many simultaneous users in both licensed and unlicensed spectrum, in effect creating a user experience in which massive data-rate cellular and WiFi services are merged. The architecture of mmWave cellular networks is likely to be much different than in microwave systems, as illustrated in Fig. 1.12. Directional beamforming will result in high-gain links between base station and handset, which has the added benefit of reducing out-of-cell interference. This means that aggressive spatial reuse may be possible. Backhaul links, for example, may share the same mmWave spectrum, allowing rapid deployment and mesh-like connectivity with cooperation between base stations. MmWave cellular may also make use of microwave frequencies using, for example, the phantom cell concept [KBNI13] where control information is sent on microwave frequencies and data is sent (when possible) on mmWave frequencies. Figure 1.12 Illustration of a mmWave cellular network. Base stations communicate to users (and interfere with other cell users) via LOS and NLOS communication, either directly or via heterogeneous infrastructure such as mmWave UWB relays. A number of universities have research programs in mmWave wireless communication. The University of Surrey, England, has set up a research hub for 5G mobile technology with a goal to expand UK telecommunication research and innovation [Surrey]. New York University (NYU) recently established the NYU WIRELESS research center to create new technologies and fundamental knowledge for future mmWave wireless devices and networks [NYU12]. Aalborg University has an active mmWave research effort. The Wireless Networking and Communications Group (WNCG) at The University of Texas at Austin has a vibrant research program on 5G cellular technologies including mmWave [Wi14]. Aalto University has an active mmWave research effort. The University of Southern California, the University of California at Santa Barbara, the University of California at Berkeley, the California Institute of Technology, the University of Bristol, and the Korea Advanced Institute of Science and Technology (KAIST) are just some of the many universities that have substantial research efforts on mmWave for future wireless networks. WPANs, WLANs, and cellular communication mark the beginning of mass consumer applications of mmWave technologies, where we evolve to a world where data is transported to and from the cloud and to each other in quantities we cannot fathom today.
We believe that mmWave is the “tip of the iceberg” for dramatic new products and changes in our way of life, and will usher in a new generation of engineers and technologists with new capabilities and expertise. This exciting future will bring about revolutionary changes in the way content is distributed, and will completely change the form factor of many electronic devices, motivating the use of larger bandwidths found in the mmWave spectrum for many other types of networks, far beyond 60 GHz [RMGJ11][Rap12a]. For this to happen, however, many challenges must be overcome. Although we predict that future inexpensive UWB wireless cellular and personal area networks will be enabled through a move to mmWave frequencies and continued advancements in highly integrated digital and analog circuitry, we do not predict that all future advancements will be carried on the shoulders of solid-state process engineers, alone. Future wireless engineers will need to understand not only communications engineering and wireless system design principles, but also circuit design, antenna and propagation models, and mmWave electromagnetic theory to successfully codevelop their designs of future wireless solutions.
SUPPOSE you put $100 in a savings account that earns 10% interest each year. After five years, how much will you have? That was a question posed in a multiple-choice quiz (completed by 150,000 people in 144 countries) by Standard & Poor’s, a rating agency. The answers proffered were "less than $150", "exactly $150" and "more than $150". The intention was to test whether respondents understood compound interest, in addition to basic mathematics. Alas, not that many did: just one-third of them answered three out of five similar multiple-choice questions correctly. Scandinavians are the most financially literate: 70% were able to answer three questions correctly; the corresponding figure for Angolans and Albanians was 15%. While education plays a large role in determining financial literacy, the link with GDP per person is remarkably strong, too (see chart). Previous research has shown that it can be difficult to drum in financial know-how at a young age. Instead, it is gained through experience. In developed countries, knowledge follows a hump-shaped curve, with middle-aged adults performing better in financial-literacy surveys than both the young and the old (who, through a combination of cognitive impairment and less education, do worse). In developing countries, financial literacy is better among the young, who have typically received more schooling. The survey, the largest of its kind, demonstrates a striking gender divide in financial literacy. In 93 countries, the gap in correct answers between men and women was more than five percentage points. In Canada, 77% of men answered three questions correctly; the corresponding figure for women was just 60%. Women's lack of knowledge might well be explained by women deferring financial decision-making to their husbands. But worryingly, the gender gap persists among well-educated single women too. When it comes to financial decision-making, many countries appear to be stuck in a 1960s time warp.
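For readers who want to check the arithmetic behind the quiz, the compounding can be worked out directly: $100 growing at 10% a year reaches 100 × 1.1^5 ≈ $161.05 after five years, which is why "more than $150" is the correct answer. A minimal sketch in Python:

```python
principal = 100.0
rate = 0.10
years = 5

balance = principal * (1 + rate) ** years
print(f"Balance after {years} years: ${balance:.2f}")   # $161.05

# Simple (non-compound) interest would give exactly $150,
# which is the trap the answer options are built around.
simple = principal * (1 + rate * years)
print(f"With simple interest only: ${simple:.2f}")       # $150.00
```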
Word Roots A2 Software, Grades 4-12 Empowers children to acquire an unlimited vocabulary, preparing them for testing and higher education. Grades 4-12+ (Latin plus some Greek) Word Roots teaches children the meanings of Latin and Greek prefixes, roots, and suffixes commonly used in English. Learning word elements dramatically improves spelling and the ability to decode unfamiliar words. Word Roots will add hundreds of words to the students' vocabulary and greater depth to their thinking and writing. In each lesson students first learn the meanings of prefixes, roots, and suffixes. Then they: divide known and unknown words into their elements or assemble elements to form whole words; match word parts or whole words to their definitions by analyzing their meanings; apply their new vocabulary in sentences.
Some principles of halftones and the myth of grey level capability. This information is basically true for all vendors' offerings. A halftone dot is formed inside a halftone "cell". The cell is a grid of pixels which are turned on to form the dot. The cell begins with no pixels turned on (0% tone) and, as pixels are turned on, the dot grows until all the pixels within the cell are turned on and the cell is filled (i.e. 100% tone). For example, if the cell size is 2 pixels wide by 2 pixels deep, the halftone cell will contain a total of 4 pixels. As a result the following halftone dot tone values can be created: 0% = all pixels off; 25% = 1 pixel turned on; 50% = 2 pixels turned on; 75% = 3 pixels turned on; 100% = 4 pixels turned on. So, with a 2x2 pixel halftone cell it is only possible to have 5 tone levels (grey levels). I.e. the total number of tones possible equals the total number of pixels available plus one. In this case 2x2=4, and 4+1=5. If the number of pixels within the cell is increased by making them smaller - i.e. the cell size remains the same but the pixels are smaller - then the number of possible grey levels goes up. For a 3x3 cell the number of possible grey levels is 10 (3x3=9, 9+1=10). For a 10x10 cell the number of possible grey levels is 101 (10x10=100, 100+1=101). For a 16x16 cell the number of possible grey levels is 257 (16x16=256, 256+1=257). In a basic AM screen the dot is formed by turning on pixels starting from the center of the cell. For a basic FM screen the pixels within the cell are turned on pseudo-randomly. So, as resolution (the "dpi" of the recording device) increases, grey levels increase. As resolution decreases, grey levels decrease. If the resolution (dpi) is fixed but the screen frequency is increased (lpi, i.e. going from 100 lpi to 175 lpi) then the number of pixels available for each dot decreases and therefore the number of grey levels decreases. This principle is captured by the classic formula: (dpi/lpi) squared + 1 = number of grey levels. So for a 2400 dpi output device: At 100 lpi: 2400 dpi/100 lpi = 24, squared = 576, plus one = 577 tones possible. No problem - more than enough grey levels. But at 175 lpi: 2400 dpi/175 lpi = 13.7, squared = 188, plus one = only 189 tones possible. A big problem, because when the ratio of dpi to lpi drops below 16, the number of available grey levels drops below 256. This can result in tonal reproduction that is inaccurate and uneven, causing visible shadestepping (a.k.a. banding or contouring) in gradients. Color steps abruptly from one tone to the next without a smooth transition. In 1984 the screening technology described in part one was the state of the art for halftone screening with PostScript devices. The only way to recoup the lack of tones as one went to higher lpis was to increase the device dpi, i.e. go from 2400 dpi to 3200 dpi or higher. The penalty was slower imaging times and increased process control required in the film workflows of the day. However, the formula is only true for the tone represented by a single, isolated halftone dot based on an individual halftone dot cell - something that never occurs in real production environments. So, around 1989 a new approach began to be adopted. The approach is based on the fact that we don't care about individual halftone dots. What is important is the tone represented in an area. For example, let's say that we want to see a 17% tone patch value in the presswork.
However, if we cannot represent that area with individual 17% dots – because of that classic formula limitation – we can still create the 17% value by alternating 16% dots and 18% dots (this is called "dithering"). The eye (and instruments) integrate the alternating 16% and 18% dots and the result is the average value – in this example 17% – our desired tone value. Another way to look at it is: if we constrain our halftone cell to a pixel matrix of 16 x 16 pixels then we will always have 257 levels of grey in an area, irrespective of how the dots within the cell are organized. However, if we build a tone area based on multiple halftone cells – a "supercell" – we can get around the grey level limitations the formula would suggest. As one example, the highest lpi on a 2400 dpi device that I'm aware of was 1697 lpi on a poster printed with plates imaged on a Creo CtP device in 2000 by Metropolitan Fine Printers in Vancouver, Canada. It won a "They said it can't be done" award at GraphExpo in Chicago. Supercell screening gets around the grey level limitations of the classic formula by looking at a tone area (the important criterion) rather than an individual dot. As a result, since about 1995 all AM screens from all vendors have adopted variants of supercell screening technology: Agfa – ABS (Agfa Balanced Screening); Heidelberg – HDS (High Definition Screening), and later IS screening; Harlequin – HPS (Harlequin Precision Screening); Creo/Kodak – Creosettes/Maxtone; Fuji – CoRes screening (only since 2004). As a result, 2400 dpi has become the de facto standard for imaging resolution in the commercial print industry. Higher resolutions, as far as halftone screening and grey levels are concerned, provide no additional value while imposing a penalty on imaging time. Where the various vendors distinguish themselves with their individual implementations of supercell screening is how they deal with issues such as rosette drift - the gradual shift from clear-centered rosette to dot-centered rosette over the width of the plate - single-channel moiré, meniscus effects as dots first touch (e.g. at the 50% point), and other nuances of halftone screening. Once you've passed the 200 lpi frequency, the human eye can no longer resolve the halftone structure at normal viewing distances. Beyond 200 lpi the argument can be made that there is no need to be constrained to the AM halftone structure. You might as well use an FM type screen. The lithographic issues will be the same since the imaging and press issues result from the size of halftone dots - not how they are organized.
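The classic formula and the supercell workaround are easy to check numerically. The short Python sketch below computes (dpi/lpi)² + 1 for a few common combinations and then shows how alternating 16% and 18% dots across a patch integrates to the 17% tone from the example above; the extra dpi/lpi pairs beyond those in the text are illustrative choices.

```python
def grey_levels(dpi, lpi):
    """Classic single-cell tone count: (dpi/lpi)^2 + 1."""
    return int((dpi / lpi) ** 2) + 1

for dpi, lpi in [(2400, 100), (2400, 150), (2400, 175), (3200, 175)]:
    print(f"{dpi} dpi @ {lpi} lpi -> {grey_levels(dpi, lpi)} grey levels")

# Supercell/dithering idea: alternate 16% and 18% dots over an area
# and the eye integrates them to the average tone.
patch = [0.16, 0.18] * 50          # 100 alternating dots
average_tone = sum(patch) / len(patch)
print(f"Average tone of alternating 16%/18% dots: {average_tone:.0%}")
```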
A personality disorder is defined as a deeply ingrained and maladaptive pattern of behavior of a specified kind. Typically, these disorders manifest by the time one reaches adolescence and can cause long-term difficulties in personal relationships or in functioning in society. Today we tend to use this term very flippantly in our everyday language, but this post will attempt to explore the different types of diagnosable personality disorders, what they look like, and how they are treated. The concept of a personality disorder has evolved greatly over the centuries. In ancient Greece, the study of personalities was rooted in what they called “characters,” most of which could be considered what we call personality disorders today. Tyrtamus (371-287 BC) divided the people of Athens into 30 different personality types, including ‘arrogance,’ ‘irony’ and ‘boastfulness.’ The teachings of Tyrtamus would go on to inform many subsequent studies, such as those of Thomas Overbury (1581-1613) in England and Jean de la Bruyère (1645-1696) in France. Personality disorders as we know them today were first introduced by Philippe Pinel in 1801. He observed patients with symptoms such as outbursts of rage and violence, a condition he termed manie, or mania, and patients with symptoms of psychosis (such as delusions and hallucinations), a condition he termed délires, or deliriums. Over a century later, the psychiatrist Kurt Schneider (1887-1967) wrote Die Psychopathischen Persönlichkeiten (Psychopathic Personalities), a volume that still defines how we classify personality disorders today. According to Schneider: A personality disorder can be diagnosed if there are significant impairments in self and interpersonal functioning together with one or more pathological personality traits. In addition, these features must be (1) relatively stable across time and consistent across situations, (2) not better understood as normative for the individual’s developmental stage or socio-cultural environment, and (3) not solely due to the direct effects of a substance or general medical condition. The Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5), is used as the standard for classifying personality disorders in the modern era. DSM-5 consists of 10 official personality disorders (PD) split up into three different clusters, named A, B and C. Cluster A is characterized by behavior that is odd, bizarre or eccentric, and includes Paranoid PD, Schizoid PD, and Schizotypal PD. Cluster B is characterized by behavior that is erratic or dramatic and includes Antisocial PD, Borderline PD, Histrionic PD and Narcissistic PD. Finally, Cluster C is characterized by anxious and fearful behavior, as seen in Avoidant PD, Dependent PD and Obsessive-compulsive PD. Though these clusters act as an effective way to categorize certain broad behaviors common to these disorders, psychologists are careful not to apply them in any sort of concrete way. The clusters are based more on historic observations of tendencies than on concrete and consistent characterizations. It’s very common for people with personality disorders to never seek mental healthcare. If they do, it’s often in times of crisis, such as when they begin inflicting harm on themselves or others. It is estimated that about 10% of the population has one or more of these ten personality disorders. Diagnosing personality disorders in patients is very important to mental health professionals, however, because these disorders can often pre-indicate other mental health disorders.
Also, because they can lead to significant stress and impairment within a patient, they often need to be treated in their own right. However, it can often be difficult to diagnose a personality disorder because judging the severity of a certain personality trait, and determining if it actually qualifies as a “disorder,” can be very subjective. Once a personality disorder is diagnosed, treatment is determined by the severity of the case, the type of personality disorder, and the patient's life situation. If it is a mild disorder, you may only need occasional monitoring from your primary care doctor. If the disorder is more severe, a combination of psychotherapy and medication will most likely be prescribed. However, personality disorders cannot really be cured, only managed and monitored, since they are ingrained in an individual. Another dimension of personality disorders, and one that can often add to the pressure on those suffering from them, is the stigma perpetuated by the media. Even though mental health advocates actively fight stigma associated with mental illness, Borderline Personality Disorder (BPD) in particular remains one of the field’s most misunderstood, misdiagnosed and stigmatized conditions. In an effort to bring awareness to this stigma, the National Education Alliance for Borderline Personality Disorder (NEA-BPD) will be celebrating persons in recovery from Borderline Personality Disorder throughout the month of October through their #BeyondBPDStigma social media campaign. Burton, Neel. “The 10 Personality Disorders.” Psychology Today, Sussex Publishers, 29 May 2012, www.psychologytoday.com/blog/hide-and-seek/201205/the-10-personality-disorders. “Personality Disorders.” Mayo Clinic, Mayo Foundation for Medical Education and Research, 23 Sept. 2016, www.mayoclinic.org/diseases-conditions/personality-disorders/diagnosis-treatment/treatment/txc-20247667. “Stigma Campaign! Month of October.” Borderline Personality Disorder, www.borderlinepersonalitydisorder.com/stigma-campaign-month-of-october-spread-the-word/.
A touchpad or trackpad is a pointing device featuring a tactile sensor, a specialized surface that can translate the motion and position of a user's fingers to a relative position that the operating system outputs to the screen. Touchpads are a common feature of laptop computers, and are also used as a substitute for a mouse where desk space is scarce. Because they vary in size, they can also be found on personal digital assistants (PDAs) and some portable media players. Wireless touchpads are also available as detached accessories. Operation and function Touchpads operate in one of several ways, including capacitive sensing and resistive sensing. The most common technology used as of 2010 entails sensing the capacitive virtual ground effect of a finger, or the capacitance between sensors. Capacitance-based touchpads will not sense the tip of a pencil or other similar implement. Gloved fingers may also be problematic. While touchpads, like touchscreens, are able to sense absolute position, resolution is limited by their size. For common use as a pointing device, the dragging motion of a finger is translated into a finer, relative motion of the cursor on the display, analogous to the handling of a mouse that is lifted and put back on a surface. Hardware buttons equivalent to a standard mouse's left and right buttons are positioned below, above, or beside the touchpad. Some touchpads and associated device driver software may interpret tapping the pad as a click, and a tap followed by a continuous pointing motion (a "click-and-a-half") can indicate dragging. Tactile touchpads allow for clicking and dragging by incorporating button functionality into the surface of the touchpad itself. To select, one presses down on the touchpad instead of a physical button. To drag, instead of performing the "click-and-a-half" technique, one presses down while on the object, drags without releasing pressure and lets go when done. Touchpad drivers can also allow the use of multiple fingers to facilitate the other mouse buttons (commonly two-finger tapping for the center button). Some touchpads have "hotspots", locations on the touchpad used for functionality beyond a mouse. For example, on certain touchpads, moving the finger along an edge of the touchpad will act as a scroll wheel, controlling the scrollbar and scrolling the window that has the focus vertically or horizontally. Many touchpads use two-finger dragging for scrolling. Also, some touchpad drivers support tap zones, regions where a tap will execute a function, for example, pausing a media player or launching an application. All of these functions are implemented in the touchpad device driver software, and can be disabled. A touchpad was first developed for Psion's MC 200/400/600/WORD Series in 1989. Olivetti and Triumph-Adler introduced the first laptops with a touchpad in 1992. Cirque introduced the first widely available touchpad, branded as GlidePoint, in 1994. Apple Inc. introduced touchpads to the modern laptop in the PowerBook series in 1994, using Cirque’s GlidePoint technology; later PowerBooks and MacBooks would use Apple-developed trackpads. Another early adopter of the GlidePoint pointing device was Sharp. Later, Synaptics introduced their touchpad into the marketplace, branded the TouchPad. Epson was an early adopter of this product.
As touchpads began to be introduced in laptops in the 1990s, there was often confusion as to what the product should be called. No consistent term was used, and references varied, such as: glidepoint, touch sensitive input device, touchpad, trackpad, and pointing device. Use in devices Touchpads are primarily used in self-contained portable laptop computers and do not require a flat surface near the machine. The touchpad is close to the keyboard, and only very short finger movements are required to move the cursor across the display screen; while advantageous, this also makes it possible for a user's palm or wrist to move the mouse cursor accidentally while typing. Touchpad functionality is available for desktop computers in keyboards with built-in touchpads. One-dimensional touchpads are the primary control interface for menu navigation on second-generation and later iPod Classic portable music players, where they are referred to as "click wheels", since they only sense motion along one axis, which is wrapped around like a wheel. Creative Labs also uses a touchpad for their Zen line of MP3 players, beginning with the Zen Touch. The second-generation Microsoft Zune product line (the Zune 80/120 and Zune 4/8) uses touch for the Zune Pad. Apple's PowerBook 500 series was its first laptop to carry such a device, which Apple refers to as a "trackpad". When introduced in May 1994 it replaced the trackball of previous PowerBook models. In late 2008 Apple's revisions of the MacBook and MacBook Pro incorporated a "Tactile Touchpad" design with button functionality incorporated into the tracking surface. Beginning in the second generation of MacBook Pro, the entire touchpad surface acts as a clickable button. Laptops today feature multitouch touchpads that can sense in some cases up to five fingers simultaneously, providing more options for input, such as the ability to bring up the context menu by tapping two fingers, dragging two fingers for scrolling, or gestures for zoom in/out or rotate. Psion's MC 200/400/600/WORD Series, introduced in 1989, came with a new mouse-replacing input device similar to a touchpad, although more closely resembling a graphics tablet, as the cursor was positioned by clicking on a specific point on the pad, instead of moving it in the direction of a stroke. Theory of operation There are two principal means by which touchpads work. In the matrix approach, a series of conductors are arranged in an array of parallel lines in two layers, separated by an insulator and crossing each other at right angles to form a grid. A high frequency signal is applied sequentially between pairs in this two-dimensional grid array. The current that passes between the nodes is proportional to the capacitance. When a virtual ground, such as a finger, is placed over one of the intersections between the conductive layer some of the electrical field is shunted to this ground point, resulting in a change in the apparent capacitance at that location. This method received U.S. Patent 5,305,017 awarded to George Gerpheide in April 1994. The capacitive shunt method, described in an application note by Analog Devices, senses the change in capacitance between a transmitter and receiver that are on opposite sides of the sensor. The transmitter creates an electric field which oscillates at 200–300 kHz. If a ground point, such as the finger, is placed between the transmitter and receiver, some of the field lines are shunted away, decreasing the apparent capacitance. 
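As a rough illustration of the matrix approach described above, the sketch below scans a toy grid of capacitance readings, compares each node against an untouched baseline, and reports the intersection with the largest drop as the touch location. All of the numbers (baseline, threshold, readings) are invented for the example, and real controllers interpolate between nodes rather than snapping to a single intersection.

```python
# Toy model of a capacitive touchpad matrix scan.
# Each entry is the measured capacitance at a row/column intersection;
# a finger shunts field lines to ground and lowers the reading there.
BASELINE = 100.0      # untouched reading (arbitrary units, assumed)
THRESHOLD = 10.0      # minimum drop that counts as a touch (assumed)

readings = [
    [100, 99, 100, 100],
    [100, 88, 70, 95],    # finger centered near row 1, column 2
    [100, 96, 90, 100],
    [100, 100, 100, 100],
]

def find_touch(grid):
    best = None
    for r, row in enumerate(grid):
        for c, value in enumerate(row):
            drop = BASELINE - value
            if drop >= THRESHOLD and (best is None or drop > best[0]):
                best = (drop, r, c)
    return None if best is None else (best[1], best[2])

print(find_touch(readings))   # -> (1, 2)
```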
The First Conditional The first conditional is one of four types of hypothetical sentence in English, and it is probably the most commonly used. In which situations can we use it and why? When to use the first conditional The first conditional is used to express the future consequence of a realistic possibility now or in the future. For example, If I miss the train, I’ll take the next one. There is a 50% chance that the first part of this sentence (the action following ‘if’) will happen. And if it happens, the second part is 100% certain. Creating the First Conditional To make a sentence in the first conditional, we use: If + present simple, will/won’t + verb. If I pass this exam, I’ll celebrate. If I pass this exam, I won’t have to do it again. Like all conditionals, we can also invert this structure: Will + verb if + present simple. I’ll celebrate if I pass this exam. I won’t have to do this exam again if I pass it. As an alternative to will, it’s possible to complete the second part of a first conditional sentence with a modal verb or an imperative. For example, If it rains, we can’t play tennis. If it rains, we must postpone our game. If it rains, wear your waterproof clothing. The important thing to remember with the first conditional is that we can never use will in the if-clause; will can only come in the other part of the sentence. For example, we say We’ll be pleased if the client accepts our offer, not We’ll be pleased if the client will accept our offer. Here are some other examples of the first conditional: If you practice frequently, you’ll learn quickly. If we don’t win today, we’ll be out of the competition. Your teacher can help if you don’t understand something. Call me if you’re late. If she does well in this interview, she’ll get the job. If you’re hungry, help yourself to whatever you want. We won’t miss the plane if we hurry. Our boss will be really pleased if we get this contract. As you can see, the first conditional is used in many different situations, both in and out of the workplace. So it’s undoubtedly a structure that’s worth learning and practicing.
Getting Started with Motion Use the accelerometers to investigate movement. 1. Open the Accelerometer X by finding the X icon in the sensor drawer and press to open. 2. Place the phone flat on a table or level surface. Put the phone in graph mode. Try the following experiments, repeating each motion a few times while you watch the graph. Slide the phone to the left and right. Slide the phone toward and away from you. Lift the phone straight up off the table and then place it back down. For which motion do you get the highest value? 3. Touch the Y icon to open the Accelerometer Y card. Repeat the motions in Step 2. Which motion gave you the highest value? 4. Touch the Z icon to open the Accelerometer Z card. Again, repeat the motions in Step 2. Which motion gave you the highest value? What do you notice? 5. Experiment with using multiple sensors at once, repeating the motions from Step 2. What do you notice when you view Accelerometer X versus Accelerometer Y? Accelerometer X versus Y and Z? 6. Continue to explore by experimenting with moving your phone in different ways. Place the phone flat on a table and drum on or shake the table. Drop the phone, from a safe distance, into your hand or into something soft. Place the phone inside a long sock and swing it around your head. Which accelerometer worked best for recording your motions? What’s Going On? Objects have a tendency to stay put, or to keep moving if they’re moving—we call this tendency inertia. Newton’s First Law expresses this idea formally: An object continues in its state of motion or rest unless acted on by an unbalanced force. When an unbalanced force does cause an object to budge, we say the object accelerates: its velocity, or speed, changes—either by speeding up, slowing down, or changing direction. Acceleration is measured as a change of velocity (meters per second) in time, or meters per second squared (m/s2). Your phone has a device to measure these changes in motion—an accelerometer. Inside an accelerometer, small suspended masses are free to move. Changes in motion cause these masses to shift, much as your own head tends to flop forward when you’re in a car that stops suddenly. Measuring these subtle inertial shifts, an accelerometer in a phone can detect changes in motion and orientation, useful for switching the screen from landscape to portrait mode, for playing games on your phone, and more. You probably noticed a persistent acceleration in the Z axis, even with the phone sitting still on a table. This is the acceleration we experience here at the Earth’s surface due to the pull of gravity, approximately 9.8 m/s2. Use the accelerometers to study your motions throughout the day. How do the sensor cards or graphs change as you walk up and down stairs, skip, swing on a swing set, play soccer, or ride the bus? Compare with a friend. Experiment by attaching your phone to various other moving things—a skateboard, a bicycle, your dog. What patterns of acceleration can you detect? Take your accelerometer tool to an amusement park. Some roller coasters put your body through accelerations of up to 3 or 4 gs, that is, three or four times the normal acceleration due to gravity. How many gs do you (and your phone) experience on your favorite ride?
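One way to interpret the three accelerometer channels together is to combine them into a single magnitude and express it in g, the acceleration due to gravity. The Python sketch below does this for a few made-up sample readings (not measured values); a phone resting on a table should come out near 1 g, and a strong roller-coaster reading of three to four times that matches the 3 or 4 g mentioned above.

```python
import math

G = 9.8  # standard gravitational acceleration, m/s^2

def g_force(ax, ay, az):
    """Magnitude of the acceleration vector, expressed in g."""
    return math.sqrt(ax**2 + ay**2 + az**2) / G

samples = [
    ("phone flat on a table", 0.0, 0.0, 9.8),
    ("shaking the table",     2.5, 1.0, 10.5),
    ("roller-coaster dip",    5.0, 3.0, 33.0),
]

for label, ax, ay, az in samples:
    print(f"{label}: {g_force(ax, ay, az):.1f} g")
```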
Optical fibres transmit data in the form of light signals over long distances. While electrical signals are carried by electrons from one end of a copper cable to the other, in optical fibres photons (light particles) take over this task. Optical signals can bridge large distances without amplifiers, and a high bandwidth remains possible even over long distances. The bandwidth of a single optical fibre is around 60 THz.

In order to achieve high speeds in telecommunications networks, one usually uses optical connections between the nodes. In the switching centres the transmitted light signals are usually converted into electrical signals, evaluated and further processed. For further transmission, they are then converted back into light signals. At this point, the main disadvantage of optical transmission systems becomes visible: for processing, optical signals must first be converted into electrical signals.

Optical fibres made of plastic have a diameter of approximately 0.1 mm. They are extremely flexible but also sensitive. The fibre core (core glass) is the central area of an optical fibre, which serves to guide the light wave. The core consists of a material with a higher refractive index than the surrounding cladding. Reflection takes place at the walls in the interior of the fibre, so that the light beam is guided almost without loss around bends. The cladding glass is the optically transparent material of the fibre at which this reflection takes place. The cladding glass is also a dielectric material with a lower refractive index than the core. A dielectric material is non-metallic and non-conductive; it therefore contains no metallic components. The coating is the plastic layer applied as mechanical protection to the surface of the cladding glass. Buffering is the protective material that is extruded onto the coating. It protects the optical fibre against environmental influences. Buffering is also available as a tube that isolates the fibre from stress in the cable when the cable is moved.

Advantages of optical fibres over copper cables
Optical fibres are more expensive than copper cables: the cost of materials and the effort involved in assembly are higher. However, optical fibres have considerably lower attenuation and are thus more suitable for long distances. Both fibre types share the same basic cable structure. Around the fibre lie two layers of fabric and/or plastic for the insulation and protection of the fibre. This is coated with a PVC or LSZH (Low Smoke Zero Halogen) layer. The difference between the fibre types lies in the fibre itself, inside the cable: a core of pure glass, surrounded by a cladding layer of reflective glass, which focuses the light travelling inside the core into a single coherent beam.

Singlemode fibres have a very small glass fibre core diameter of 9 μm, which carries only one mode of light. As a result, the number of reflections of the light transmitted through the core is drastically reduced compared with multimode fibres. This in turn lowers the transmission attenuation and allows the signal to travel faster and farther.

What are singlemode connections used for?
Single-mode fibres are often used over long distances to transmit a large bandwidth reliably from point A to point B with an absolute minimum of interference or data errors, which is possible over many kilometres.
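As a rough illustration of how the core/cladding refractive-index contrast guides the light, here is a small Python sketch computing the critical angle for total internal reflection and the numerical aperture. The index values and the helper names are assumptions for illustration; they are not taken from the article.

```python
import math

n_core = 1.48      # assumed refractive index of the core glass
n_cladding = 1.46  # assumed refractive index of the cladding glass (must be lower)

# Total internal reflection occurs for rays hitting the core/cladding boundary
# at angles (measured from the normal) greater than the critical angle.
critical_angle_deg = math.degrees(math.asin(n_cladding / n_core))

# The numerical aperture describes the acceptance cone of the fibre.
numerical_aperture = math.sqrt(n_core**2 - n_cladding**2)

print(f"critical angle: about {critical_angle_deg:.1f} degrees")
print(f"numerical aperture: about {numerical_aperture:.2f}")
```

With these illustrative values the critical angle is roughly 81 degrees, which is why only shallow, near-axial rays stay trapped in the core.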
Singlemode fibre patch cables are usually characterized by a yellow sheath and are currently manufactured to the OS2 ISO/IEC 24702 standard (with an attenuation of 0.4 dB/km) in a 9/125 μm ratio (core diameter/cladding diameter). It is important to pay attention to the use of G.652.D low-water-peak fibres, as they offer improved bend radii and reduce the attenuation in the wavelength range between the 2nd and 3rd optical windows. G.652.D singlemode fibres should be used especially when implementing CWDM/DWDM installations. A trend towards singlemode fibre for shorter data links is already apparent: although the thin fibre is more demanding to handle, it is now cheaper than a multimode fibre. Users also have the advantage that increasing the bandwidth does not shorten the link length. In this case, however, the overall attenuation of the connection has to be taken into account (a simple loss-budget sketch follows below).

Multimode fibre patch cables have a larger core diameter of 50 μm, which carries several modes of light. Because of the larger diameter, more data can be transmitted. However, many more reflections take place within the core, and greater dispersion and attenuation result. As a result, multimode fibres are more likely to be used in backbones and local area networks (LANs), at far shorter distances than single-mode fibres, since the greater the bandwidth, the shorter the possible connection length.

What are multimode connections used for?
Multimode fibre patch cables are available in different versions due to their historical development. Current designs in glass-fibre data transmission technology are the variants OM2 and OM3 according to the ISO 11801 standard and OM4 according to the TIA-492-AAAD standard. The core/cladding diameter ratio is 62.5/125 μm for OM1 fibre types. It is 50/125 μm for OM2 (500 MHz·km, jacket colour: orange) and for the laser-optimized multimode fibres (LOMMF) OM3 (1500 MHz·km, jacket colour: aqua/turquoise) and OM4 (3500 MHz·km, jacket colour: magenta/violet). OM2 fibres are designed for bandwidths of up to 10G. OM3 and OM4 fibres can also be used for higher bandwidths (currently up to 100G).

Source: Elektronik Kompendium / CBO. With its optical power meters and light sources, Kurth Electronic offers a cost-effective basis for daily tasks in optical fibre networks.
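The overall attenuation mentioned above can be estimated with a simple loss budget. The Python sketch below is a minimal example that uses the 0.4 dB/km OS2 figure quoted above; the function name and the connector and splice losses are assumptions chosen for illustration, not standard values.

```python
def link_loss_db(length_km, fibre_loss_db_per_km=0.4,
                 connectors=2, connector_loss_db=0.5,
                 splices=0, splice_loss_db=0.1):
    """Rough attenuation budget for a point-to-point fibre link.

    0.4 dB/km matches the OS2 figure quoted above; the per-connector and
    per-splice losses are illustrative assumptions.
    """
    return (length_km * fibre_loss_db_per_km
            + connectors * connector_loss_db
            + splices * splice_loss_db)

# Example: a 20 km singlemode link with two patch-panel connectors and four splices.
total = link_loss_db(20, connectors=2, splices=4)
print(f"estimated link loss: {total:.1f} dB")  # 20*0.4 + 2*0.5 + 4*0.1 = 9.4 dB
```

The estimated loss is then compared against the power budget of the transceivers to decide whether the link will work reliably.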
Phriendly Physics: An Invitation to Learn
The most basic aspects of physics contain a richness and depth that can be appreciated and explored without mathematics or with very minimal math. This activity, modified from one used in Phriendly Physics, encourages exploration and careful observation of an event that appears very simple but is actually quite complex.

Grade Levels: 3-5

Students will:
- use the techniques of inquiry to investigate a physical phenomenon.
- explore a research topic of their choice.
- make decisions as part of a group.
- observe events closely.
- learn to record their observations and thoughts in an organized way.

MATERIALS (for each pair or group of students)
- 2 (or more) one-meter ramps. It helps to have a slight depression or "track" down the center of the ramp. A three-dollar molding strip will serve this purpose.
- Books or blocks of wood on which to set one end of the ramps.
- A wide variety of balls. Select different sizes and materials: large and small, heavy and light, hollow and solid, smooth and rough.
- Lab notebooks or journals and writing utensils.

Have each group of students find a place on a table or the floor to set up their ramps. Using the materials they have on hand, they can investigate different areas of study such as mass, momentum and acceleration. It is very important that they write down everything they do. If you feel it is appropriate, you may want to assign specific tasks to members of the group, one of which would be the group recorder, who would write down everything the group does.

Then you circulate among the groups with the following objectives in mind: encouraging the students, assisting them as needed, and helping them record the results of their experiments. If students are having trouble focusing their investigation, you could prompt them by suggesting questions to investigate. Possible questions include:
- Is it easier to observe the behavior of the balls if the pitch of the ramps is steep or shallow? Why? Can you think of a situation that would make shallow preferable to steep? What about the other way around?
- Does a stopwatch help you observe? Is there information you can obtain with a stopwatch that you can't obtain any other way? What is it? Is there information that you can get without using the stopwatch? What is it?
- What information would you be able to get if you made pencil marks on the ramp? Where would you place the marks? Why? Would you get different information if you placed the marks differently? (A short worked example using marks and a stopwatch follows at the end of this activity.)
- Examine (look at, touch, squeeze, smell, bounce, etc.) a tennis ball and a steel ball. List all of the similarities and differences you notice between the two balls. Roll the two balls down the ramp at the same time. Look back at your list of similarities and differences. Which of those do you think were responsible for determining how the balls rolled down the ramps? Decide how you would test to see if you're right. Carry out your tests.

We strongly recommend that you make up your own questions to fit the level of the students and the direction in which they are steering themselves. As a concluding activity, the groups could present their information to the rest of the class or propose another way of sharing what they have learned.
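As a sketch of what the stopwatch-and-pencil-mark questions can lead to, the following Python example computes the average speed of a ball over each marked segment of the ramp. The mark spacing and stopwatch readings are hypothetical, invented for illustration.

```python
# Hypothetical stopwatch readings (seconds) as a ball passes pencil marks
# spaced every 0.25 m along a one-metre ramp.
mark_positions_m = [0.00, 0.25, 0.50, 0.75, 1.00]
times_s = [0.00, 0.90, 1.40, 1.80, 2.10]

for i in range(1, len(times_s)):
    distance = mark_positions_m[i] - mark_positions_m[i - 1]
    interval = times_s[i] - times_s[i - 1]
    speed = distance / interval  # average speed over this segment
    print(f"segment {i}: average speed = {speed:.2f} m/s")
```

Because each segment takes less time than the one before, the average speeds increase down the ramp, which is one simple way for students to see that the ball is accelerating.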
Rat fleas may not be as common as cat or dog fleas, but they are important vectors of deadly diseases such as the bubonic plague. In fact, rat fleas are believed to have killed nearly a quarter of the European population in the 14th century by spreading the bacterium Yersinia pestis, which is responsible for the infamous 'black plague'.

Rat fleas are of two types. The first is the Northern rat flea (Nosopsyllus fasciatus), which predominantly infests rodents, rats, mice and other small animals (wild squirrels, chipmunks, prairie dogs, etc.) in the US and Europe. The second variety is the Oriental rat flea (Xenopsylla cheopis), which is the more common human plague vector.

Plague from Nosopsyllus fasciatus / Xenopsylla cheopis in the United States
In the US, plague is commonly seen in areas near the western grasslands and the scrub woodland regions in states like Arizona, Colorado, New Mexico and Utah. The plague pathogen is transmitted to humans in the following ways:
- Infected rat fleas search for other sources of food when host rats are few in number. Humans and pets in the vicinity of infected rats or dead mice are more likely to get bitten by infected rat fleas.
- Pets that romp freely outdoors are also likely to bring infected Nosopsyllus fasciatus or Xenopsylla cheopis fleas into the home. Exposure to infected rat fleas can give rise to primary bubonic plague or septicemic plague.
- Coming into direct contact with the fluids of infected animals (especially in the case of hunters or butchers) without following proper meat-handling precautions is another way of catching bubonic plague.
- Cats which eat infected or dead rodents can also come into contact with infected rat fleas and get sick with the disease, which they can then transmit through droplets to their owners or vets.

Rat fleas: life cycle
Both Xenopsylla cheopis and Nosopsyllus fasciatus begin life as small white eggs typically found around animal bedding and rat dwellings. The larvae that hatch from the eggs are approximately 2.5 to 3 mm long. They do not drink blood like the adult fleas; rather, they survive on flea droppings, animal hair, etc. This is followed by the pupa stage. The pupa emerges from its cocoon as an adult rat flea capable of drinking blood from animals and humans.

Symptoms and treatment of rat flea bites from Xenopsylla cheopis and Nosopsyllus fasciatus
The first sign of rat flea bites is swollen, painful lymph glands. Bubonic and septicemic plague are both very serious diseases and medical help must be sought immediately. Sometimes, owing to a lack of symptoms, the only way of diagnosing plague is through a blood/lab test. Once plague has been diagnosed, appropriate treatment and quarantine procedures need to be followed.

Preventing rat fleas
Prevention is the best way of controlling the spread of plague from rat fleas.
- It is essential to reduce rodent and rat dwellings. For this, one must remove garbage, clutter, wood piles and other possible rat food sources from around homes. Residential areas, office buildings, schools and day care centers should be made rodent-proof.
- It is important to wear gloves when treating or skinning small animals, squirrels, etc. All meat-handling precautions must be followed by hunters and butchers in order to prevent disease transmission through infected fluids.
- Rat fleas are common at camping sites and in places used for hiking, trekking, etc. Flea repellents must be used when engaging in such activities.
Many approved repellent and insecticide products, such as DEET and permethrin, can be used safely on humans and on pets for which they are indicated. When using these products, all safety precautions mentioned on the labels must be followed.
- It is important to keep rat fleas off pets. Flea-infested pets must be promptly treated with vet-approved products.
- Pets that are free to roam outdoors are more likely to come into contact with infected rat fleas. At the first sign of sickness in such animals, it is important to seek veterinary help immediately. It is also essential to prevent pets from sleeping on your bed, especially when living in endemic areas.
The Bruce Murray Space Image Library
How to tell the size of an asteroid using combined optical and infrared observations

This chart illustrates how infrared is used to more accurately determine an asteroid's size. As the top of the chart shows, three asteroids of different sizes can look similar when viewed in visible light. This is because visible light from the sun reflects off the surface of the rocks. The more reflective, or shiny, the object is (a property called albedo), the more light it will reflect. Darker objects reflect little sunlight, so to a telescope millions of miles away, a large dark asteroid can appear the same as a small, light one. In other words, the brightness of an asteroid viewed in visible light is the result of both its albedo and its size.

The bottom half of the chart illustrates what an infrared telescope would see when viewing the same three asteroids. Because infrared detectors sense the heat of an object, which is more directly related to its size, the larger rock appears brighter. In this case, the brightness of the object is not strongly affected by its albedo, or how bright or dark its surface is. When visible and infrared measurements are combined, the albedos and sizes of asteroids can be more accurately calculated.

NASA / JPL. Most NASA images are in the public domain. Reuse of this image is governed by NASA's image use policy.
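To put rough numbers on the albedo/size trade-off described above, the Python sketch below uses the standard relation between an asteroid's absolute magnitude H, its geometric albedo and its diameter, D ≈ 1329 km / sqrt(albedo) × 10^(−H/5). The H value and the two albedos are illustrative only, and the function name is made up for this sketch.

```python
import math

def diameter_km(absolute_magnitude_h, albedo):
    """Standard relation between absolute magnitude (H), geometric albedo
    and diameter: D = 1329 km / sqrt(albedo) * 10**(-H/5)."""
    return 1329.0 / math.sqrt(albedo) * 10 ** (-absolute_magnitude_h / 5.0)

# Two asteroids with the same visible brightness (same H) but different surfaces:
h = 18.0
for albedo in (0.05, 0.25):   # dark, carbon-rich surface vs. bright, rocky surface
    print(f"albedo {albedo:.2f}: diameter = {diameter_km(h, albedo):.2f} km")
```

For the same visible brightness, the dark asteroid comes out more than twice as large as the bright one, which is exactly the ambiguity that the added infrared measurement resolves.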
The United States Holocaust Memorial Museum is America's national institution for the documentation, study, and interpretation of Holocaust history, and serves as this country's memorial to the millions of people murdered during the Holocaust. The Museum's primary mission is to advance and disseminate knowledge about [the Holocaust's] unprecedented tragedy; to preserve the memory of those who suffered; and to encourage its visitors to reflect upon the moral and spiritual questions raised by the events of the Holocaust as well as their own responsibilities as citizens of a democracy.

The Belfer National Conference presents Holocaust education for English Language Arts and Social Studies/History teachers from grade 6 through CEGEP. The goal of the three-day Conference is to give educators the tools for teaching about the Holocaust in their classrooms. Teachers will be introduced to information and teaching strategies using
- Museum resources
- Videos and film
- A self-guided tour of the permanent exhibition
- Survivor testimony

At the conference, Museum educators and scholars share rationales, strategies, and approaches for teaching about the Holocaust. Participants have the opportunity to tour the Museum's Permanent Exhibition, as well as the special exhibitions Remember the Children: Daniel's Story and Some Were Neighbours: Collaboration & Complicity in the Holocaust, and to explore the Museum's full range of resources. Those who complete the conference receive a set of educational materials from the Museum.

The Riva and Thomas O. Hecht Scholarship Program, Teaching of the Holocaust for Educators, funds:
- Return airfare to Washington
- Hotel accommodation for the duration of the Conference.
NOISE-INDUCED HEARING LOSS
This is deafness caused by too much noise. Loud sound destroys the tiny hairlike cells in the inner ear that do the actual hearing. Loud noise is one of the most common causes of deafness. As many as 5% of adults have been diagnosed with this kind of hearing problem; countless others suffer without seeking medical attention. This is not surprising, as noise-induced deafness is permanent and incurable. Hearing aids can help, but they can't fully correct it. This kind of hearing loss can be prevented by staying away from loud and sustained noises.

HOW LOUD ARE THOSE SOUNDS THAT HURT?
You may be exposed, at work or through your lifestyle, to noise that hurts your hearing. If you have to shout when you talk to a co-worker or friend who is standing next to you, the noise is at a dangerous level. Both the loudness and the length of time you hear the noise are important. Sound is measured in decibels. Eight hours of exposure to noise at 85 decibels could hurt your hearing. At higher sound levels, you could lose hearing in even less time. Some sounds take only minutes to damage your hearing for life.

Common noises that might hurt your hearing:
- 140 to 170 decibels: jet aircraft taking off (heard from the runway)
- 120 to 140 decibels: chain saw, rock concert
- 110 to 120 decibels: personal stereo players, home and commercial music systems

By UK law, employers must start to take measures to save the hearing of their workers in workplaces where noise reaches 85 decibels or more. This means that the noise has to be monitored, and hearing protection provided for those who request it. Above 90 decibels, the noise exposure has to be reduced, in addition to providing free hearing protection devices to workers. See HSE leaflets for more information.

HOW TO RECOGNISE NOISE-INDUCED DEAFNESS
This usually happens slowly. There is no pain. After being in noise, you may notice a "ringing" sound in your ears. You might have trouble hearing people talk. After several hours or even a few days, these symptoms usually go away. However, damage has already been done, and when you are exposed to this kind of noise again, you could get a hearing loss that lasts forever. Each noise assault adds to the previous one to produce permanent deafness. Early signs of noise-induced deafness include the following:
- Having trouble understanding what people say, especially in crowded rooms
- Needing to turn the TV sound higher (others tell you of this!)
- Having to ask people to repeat what they just said to you (sometimes three times, and then you give up because it's embarrassing)
- Not being able to hear high-pitched sounds, like a baby crying or a telephone ringing in another room

Along with the hearing loss, you may also have ringing in the ears, called tinnitus. The only way to find out if you have a hearing loss is to have your hearing tested by a trained professional.

HOW TO PREVENT NOISE-INDUCED DEAFNESS
- Make the health of your ears a part of your lifestyle. Stay away from loud or prolonged noise. Turn down the music volume. Buy power tools that have sound controls.
- When you must be around noise, either at work or at play, use something to protect your hearing.
- Hearing protection devices, like earplugs, earmuffs and canal caps, are sold in chemist shops and hardware stores. Different brands offer different amounts of protection. If you are not sure which kind is best for you, or how to use it correctly, ask your doctor. Often the best kind is the one that you feel comfortable in, so you will wear it when you need it.
- Keep your hearing protectors handy and in good condition.
- Inform your family and friends how important it is to stay away from too much noise and to use hearing protection.
- If you think you have a hearing loss (or if someone suggests that you have one), it is important to have your hearing tested.

INFORMATION ON DECIBELS
The unit of power ratio, the Bel, was named after Alexander Graham Bell, inventor of the telephone; it was originally devised to measure the loss of signal over long phone wires. Technically, it is a logarithmic scale used to compare sound intensity or electrical signal levels. A Bel is a ratio of one number to another. In measuring sound, the pressure waves can be recorded by a sensitive detector and expressed as an intensity (power per unit area). If the intensity of a sound is ten times the intensity at the threshold of normal hearing, then that second sound is one Bel above the first.

The human ear can detect sounds with an intensity as low as a million-millionth of a watt per square metre, and as high as 1 watt per square metre. This is a huge range of intensity, and it is why we don't perceive loudness as proportional to intensity. To produce a sound that seems to humans about twice as loud requires a sound wave that has about ten times the intensity.

The Bel is an unwieldy, large unit, and the decibel (dB), a tenth of a Bel, is more convenient. The decibel is used to express power, but it doesn't measure power; it is in fact a ratio of two power levels. A ratio of 3 decibels means a doubling of sound intensity. 6 decibels is a doubling of the doubling and is therefore an increase of 4 times. 9 decibels means an increase of 8 times, and so on. By the time you reach 80 decibels, the sound intensity has increased by 100 million times. Another 3 decibels, to 83, doubles this again to 200 million times the original threshold intensity. (The short calculation sketch at the end of this section works through these figures.)

Noise is measured in decibels, and anything 80 decibels or higher is potentially damaging, particularly with sustained exposure. The louder the sound, the less exposure is needed to cause damage. A lawn mower producing a 95 decibel sound level can cause hearing loss in four hours, while a rock concert or amplified sound producing 110 decibels can cause damage in half an hour. At 160 decibels, instant hearing loss occurs.

Sources: Dr. Ronald Hoffman, Medical Director of the Hoffman Centre, New York; American Academy of Family Physicians, May 2000; Health & Safety Executive; Royal National Institute for the Deaf.
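The decibel arithmetic above follows directly from the definition dB = 10 log10(I/I0). Here is a minimal Python sketch using the hearing-threshold intensity quoted above; the helper function names are made up for this example.

```python
import math

I0 = 1e-12  # threshold of hearing, W/m^2 (a million-millionth of a watt per square metre)

def level_db(intensity_w_per_m2):
    """Sound intensity level in decibels relative to the hearing threshold."""
    return 10 * math.log10(intensity_w_per_m2 / I0)

def intensity_ratio(db):
    """How many times more intense a sound is than the threshold."""
    return 10 ** (db / 10)

print(f"{level_db(2e-12):.1f} dB")            # doubling the intensity gives about +3 dB
print(f"{intensity_ratio(80):.0e} times")     # 80 dB corresponds to 10^8, i.e. 100 million times
print(f"{intensity_ratio(83) / intensity_ratio(80):.1f}x from 80 to 83 dB")  # about 2.0
```

The output reproduces the figures in the text: 3 dB per doubling of intensity, and a hundred-million-fold increase at 80 dB.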
Adolescent Mood Disorders and Mental Health
Teens sometimes need extra help getting through rough patches of adolescence. As they transition from childhood to adulthood, teens undergo many changes and tremendous growth. Their bodies and brains change, and the circuits in the brain that carry important emotional information are still developing in adolescence. In addition, all teens develop at slightly different ages and levels. There is a great deal of social pressure on teens, and some have trouble dealing with all of the changes in their bodies, emotions and everyday lives.

Teens who have to deal with chronic illnesses, such as asthma, juvenile diabetes or arthritis, have an added layer of pressure and responsibility. They are more likely to have depression, anxiety and other mood disorders. After all, if you're a teen with a chronic illness, it's natural to notice that your disease sometimes affects your ability to do the same activities as your friends. Another difference is the extra attention you might need at home. Mood disorders, most often anxiety disorders and depression, can set a teen back in school or in developing important social skills. Specialists in adolescent medicine can help you deal with these issues so that you can cope with and manage your illness better and have better well-being in general.

Anxiety and depression are the most common mood disorders in all adolescents, regardless of their health. Other mood and mental health disorders can also affect teens. With bipolar disorder, the brain causes major shifts in mood and activity levels. A teen might be unusually energetic and happy and then show signs of depression. Bipolar disorder tends to run in families. Sometimes anxiety or depression is temporary, and may be brought on by trauma, a family event or a setback in your health. If signs of depression or anxiety continue, teens should seek treatment. As a teen, you know that your emotions are often chalked up to typical teen moodiness. That means many adolescents with anxiety or depression don't receive the treatment they need.

Symptoms of anxiety disorders include:
- Worry about social situations or other areas of your life, such as tests or being on time
- Signs of panic, such as a racing heart, trouble breathing and feelings of impending doom
- Other physical signs, such as dizziness, nausea, stomach pain or headaches

Teen depression can be chronic or it can come and go. Signs of teen depression include:
- Fatigue or sleepiness for no explainable reason
- Bursts of anger or sudden irritability
- Oversensitivity to criticism
- Headaches, abdominal pain and other physical complaints with no illness
- Trouble sleeping
- Withdrawal from family, friends or activities
- A drop in grades or school attendance

If you're wondering whether your symptoms are normal, don't be afraid to ask. Our adolescent and young adult team of health professionals can evaluate and treat mood disorders or mental health issues in teens who have chronic illnesses. Teen health is our specialty. Our counselors and doctors keep your information confidential.

Usually, teens who have mood disorders or mental health issues receive counseling or therapy first. Also called psychotherapy, this treatment approach helps teens talk about their disorder and learn ways to deal with anxiety, depression and the causes or triggers of mood disorders. Cognitive behavioral therapy focuses on your moods and behaviors, and helps you find new ways to approach problems.
Each teen or young adult with a mood or mental health disorder requires a personalized approach to therapy. Medications cannot cure mental health issues, but they can help manage symptoms. Sometimes, the doctor recommends taking medication for a short period of time until you feel better, or until the combination of therapy and medication has helped you learn to manage your mood disorder. Some teens require medication for longer periods of time. Antidepressants and other medications used to treat mood and mental health disorders can cause side effects. Parents and teens should work closely with their doctors to monitor side effects and how well medications are working.
Poetry and Anne Bradstreet
For Lesson Notes (1-7) and for Lesson Completion (8-13)
Part One (to complete upon viewing the ppt. presentation)

1) Based on what we know about the Puritans and how they viewed worldly objects and creative expression, why would it seem ironic that there are several among them who remain influential poets today?
It would seem ironic because the Puritans left few personal belongings behind them and distrusted worldly, creative expression, yet even confined within their culture they formed deep personal attachments that found their way into poetry.

2) A. Identify two similarities in the lives of Edward Taylor and Anne Bradstreet. B. Identify one to two key differences in their lives.
Two similarities are that both came from a Puritan upbringing and both had hard lives.

12) Do the vocabulary words we have noted – recompense, manifold, persevere – in your opinion, reference her strong feelings about heaven or her strong feelings about the natural world? Be sure to explain your answer and reference the poem when explaining this.
I think they do, because these lines show how deeply she feels and how personal her connection is; they show how much love and life mean to her.
"Nor ought but love from thee, give recompense. / Thy love is such I can no way repay, / The heavens reward thee manifold, I pray. / Then while we live, in love let's so persevere / That when we live no more, we may live ever."

18) Knowing they are both Christians, what similarities can you pinpoint in the Christian beliefs of Edward Taylor and Anne Bradstreet? Use their poems as your example.
Some points I can pick out are that in both poems the speakers call on God for help, feel as though something weighs on them, and are trying to reassure themselves.
KNOTS, BENDS, AND HITCHES
The term knot is usually applied to any tie or fastening formed with a cord, rope, or line. In a general sense, it includes the words bends and hitches. A BEND is used to fasten two lines together or to fasten a line to a ring or loop. A HITCH is used to fasten a line around a timber or spar, so it will hold temporarily but can be readily untied. Many ties, which are strictly bends, have come to be known as knots; hence, we will refer to them as knots in this discussion.

Knots, bends, and hitches are made from three fundamental elements: a bight, a loop, and a round turn. Observe figure 4-8 closely and you should experience no difficulty in making these three elements. Note that the free or working end of a line is known as the RUNNING END. The remainder of the line is called the STANDING PART.

NOTE: A good knot is one that is tied rapidly, holds fast when pulled tight, and is untied easily.

In addition to the knots, bends, and hitches described in the following paragraphs, you may have need of others in steelworking. When you understand how to make those covered in this chapter, you should find it fairly easy to learn the procedure for other types.

The OVERHAND KNOT is considered the simplest of all knots to make. To tie this knot, pass the running end of a line over the standing part and through the loop that has been formed. Figure 4-9 shows you what it looks like. The overhand knot is often used as a part of another knot. At times, it may also be used to keep the end of a line from untwisting or to form a knob at the end of a line.

Figure 4-8.-Elements of knots, bends, and hitches.

The FIGURE-EIGHT KNOT is used to form a larger knot than would be formed by an overhand knot in the end of a line (fig. 4-10). A figure-eight knot is used in the end of a line to prevent the end from slipping through a fastening or loop in another line. To make the figure-eight knot, make a loop in the standing part, pass the running end around the standing part, back over one side of the loop and down through the loop, and pull tight.

Figure 4-10.-Figure-eight knot.

The SQUARE KNOT, also called the REEF KNOT, is an ideal selection for tying two lines of the same size together so they will not slip. To tie a square knot, first bring the two ends of the line together and make an overhand knot. Then form another overhand knot in the opposite direction, as shown in figure 4-11.

NOTE: A good rule to follow for a square knot is left over right and right over left.

When tying a square knot, make sure the two overhand knots are parallel. This means that each running end must come out parallel to the standing part of its own line. If your knot fails to meet this test, you have tied what is known as a "granny." A granny knot should NEVER be used; it is unsafe because it will slip under strain. A true square knot, instead of slipping under strain, will only draw tighter.

Figure 4-11.-Square knot.

The SHEEPSHANK is generally thought of as merely a means to shorten a line, but, in an emergency, it can also be used to take the load off a weak spot in the line. To make a sheepshank, form two bights (fig. 4-12, view 1). Then take a half hitch around each bight (views 2 and 3). In case you are using the sheepshank to take the load off a weak spot, make sure the spot is in the part of the line indicated by the arrow in view 2.

The BOWLINE is especially useful when you need a temporary eye in the end of a line. It will neither slip nor jam and can be untied easily.
To tie a bowline, follow the procedure shown in figure 4-13.

The FRENCH BOWLINE is sometimes used to lift or hoist injured personnel. When the french bowline is used for this purpose, it has two loops which are adjustable, so even an unconscious person can be lifted safely. One loop serves as a seat for the person, while the other loop goes around the body under the person's arms. The weight of the person keeps both loops tight and prevents the person from falling. The procedure to follow in making the french bowline is shown in figure 4-14.

Figure 4-14.-French bowline.

The SPANISH BOWLINE is useful in rescue work, especially as a substitute for the boatswain's chair. It may also be used to give a twofold grip for lifting a pipe or other round object in a sling. Many people prefer the spanish bowline to the french bowline because the bights are set and will not slip back and forth (as in the french bowline) when the weight is shifted. To tie a spanish bowline, take a bight and bend it back away from you (fig. 4-15, view 1), forming two bights. Then lap one bight over the other (view 2). Next, grasp the two bights where they cross at (a) in view 2. Fold this part down toward you, forming four bights (view 3). Next, pass bight (c) through bight (e) and bight (d) through bight (f) (view 4). The complete knot is shown in view 5.

The RUNNING BOWLINE is a good knot to use in situations that call for a lasso. To form this knot, start by making a bight with an overhand loop in the running end (fig. 4-16, view 1). Now, pass the running end of the line under and around the standing part and then under one side of the loop (view 2). Next, pass the running end through the loop, under and over the side of the bight, and back through the loop (view 3).

Figure 4-15.-Spanish bowline. Figure 4-16.-Running bowline. Figure 4-17.-Becket bend.

An especially good knot for bending together two lines that are unequal in size is the type known as the BECKET BEND. The simple procedure and necessary instructions for tying a becket, single and double, are given in figure 4-17.

When it comes to bending to a timber or spar or anything that is round or nearly round, the familiar CLOVE HITCH is an ideal selection. Figure 4-18 shows how this knot is made. A clove hitch will not jam or pull out; however, if a clove hitch is slack, it might work itself out, and for that reason, it is a good idea to make a HALF HITCH in the end, as shown in figure 4-19, view 1. A half hitch never becomes a whole hitch. Add a second one and all you have is two half hitches, as shown in figure 4-19, view 2.

Figure 4-18.-Clove hitch. Figure 4-19.-Half hitch.

The SCAFFOLD HITCH is used to support the end of a scaffold plank with a single line. To make the scaffold hitch, lay the running end across the top and around the plank, then up and over the standing part (fig. 4-20, view 1). Bring a doubled portion of the running end back under the plank (view 2) to form a bight at the opposite side of the plank. The running end is taken back across the top of the plank (view 3) until it can be passed through the bight. Make a loop in the standing part (view 4) above the plank. Pass the running end through the loop and around the standing part and back through the loop (view 5).

A BARREL HITCH can be used to lift a barrel or other rounded object that is either in a horizontal or a vertical position. To sling a barrel horizontally (fig. 4-21), start by making a bowline with a long bight.
Then bring the line at the bottom of the bight up over the sides of the bight. To complete the hitch, place the two "ears" thus formed over the end of the barrel.

Figure 4-21.-Barrel hitch.

To sling a barrel vertically, pass the line under the bottom of the barrel, bring it up to the top, and then form an overhand knot (fig. 4-22, view 1). While maintaining a slight tension on the line, grasp the two parts of the overhand knot (fig. 4-22, view 2) and pull them down over the sides of the barrel. Finally, pull the line snug and make a bowline over the top of the barrel (fig. 4-22, view 3).

Figure 4-22.-A vertical barrel hitch.
Lecturer: Samantha Oates
Weighting: 7.5 CATS

Questions about the origin of the Universe, where it is going and how it may get there are the domain of cosmology. One of the questions addressed in the module is whether the Universe will continue to expand or ultimately contract. Relevant experimental data include those on the Cosmic Microwave Background radiation, the distribution of galaxies and the distribution of mass in the Universe. The module discusses the implications of these in some detail.

Starting from fundamental observations, such as the fact that the night sky is dark, and by appealing to principles from Einstein's General Theory of Relativity, the module develops a description of the Universe. This leads to the Friedmann equation, Hubble's law, the cosmological redshift and eventually to the Big Bang Model, with singular behaviour at the origin of the Universe. The module also discusses the evolution of the primeval fireball, the synthesis of helium and the origin of structure.

Aims: To present the credentials of the Universe as we know it (via experiment) and introduce the simplest models which can describe it. The module should stress the role of experimental data and emphasize the need to distinguish between cosmology as a physical science, which makes testable predictions, and pseudo-cosmologies, which may claim to give appealing and all-encompassing accounts of the universe but are untestable.

By the end of the module, you should:
- have a good qualitative appreciation of the current status of cosmology
- recognise the importance of observations in constraining possible cosmological theories
- understand the idea of metrics used to describe local and global physics
- be critically aware of some of the aspects of cosmology where more work is needed to reconcile theory and observations

Syllabus:
- The history and foundations of modern cosmology: Olbers' Paradox, Hubble's Law and the Cosmological Principle.
- Describing the evolution of the Universe: basics of space-time and relativity, curvature, the Friedmann equation, and the fluid and acceleration equations.
- Model universes: describing the evolution when dominated by a single component and by multiple components - the standard cosmological (benchmark) model.
- Key properties of our Universe: tests of the standard cosmological model, evidence for dark matter, models for dark matter, origin of structure.
- The early Universe: the Big Bang, connection to elementary particle physics and grand unified field theories (GUTs), inflation, Big Bang nucleosynthesis, formation of the cosmic background radiation.

Commitment: 15 Lectures
Assessment: 1.5 hour examination.
This module has a home page.
Recommended Text: M Roos, Introduction to Cosmology, 1st edition, 1994. Also useful is MV Berry, Principles of Cosmology and Gravitation, CUP.
Leads to: PX436 General Relativity
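For reference, these are the standard textbook forms of two relations named in the syllabus, Hubble's law and the Friedmann equation; the module's own notation and conventions may differ.

```latex
% Hubble's law: recession velocity proportional to distance
v = H_0 \, d

% Friedmann equation for the scale factor a(t), written with curvature k and
% cosmological constant \Lambda
\left(\frac{\dot{a}}{a}\right)^{2}
  = \frac{8\pi G}{3}\,\rho \;-\; \frac{k c^{2}}{a^{2}} \;+\; \frac{\Lambda c^{2}}{3}
```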
Australia: Environmental awareness and lagoon-based wastewater treatment plants.
Australia is the driest inhabited continent, and water scarcity has worsened over recent decades with a growing population and the effects of climate change. As a result, most inhabitants of the big cities have realized the need to protect their limited water resources. Between 2000 and 2009, the country managed to reduce water consumption per capita by 40%, and it joined other water-scarce countries in recognizing wastewater as a valuable resource.

Each state and territory has the responsibility to regulate and manage natural resources and public health within its jurisdiction. However, among the most influential policies related to wastewater management, the 2004 National Water Initiative was signed by all of Australia's state and territorial governments to ensure a homogeneous approach across the country. In 2010, the Australian Water Recycling Centre of Excellence was established to undertake research and broaden the use of recycled water. Water conservation and non-potable recycling initiatives are now embedded across the country and have become mainstream in most new urban developments. Although historically agriculture represented the biggest demand for recycled water, future projections show a dominant demand for municipal, industrial and commercial use, as well as for environmental protection.

The City of Melbourne (4 million inhabitants), one of Australia's largest and driest cities, provides a good illustration of this evolution: it treats half of its wastewater in 11,000 ha of lagoons (the Western Treatment Plant), which constitute one of the world's largest lagoon-based wastewater treatment plants. The area around the plant hosts thousands of birds and has been listed as a wetland of international significance under the Ramsar Convention since 1982. It also receives thousands of visitors each year for educational purposes.
Echinoderms are invertebrates that have pentaradial symmetry, a spiny skin, a water vascular system, and a simple nervous system. Describe the characteristics of echinodermata - Echinoderms live exclusively in marine systems; they are widely divergent, with over 7,000 known species in the phylum. - Echinoderms have pentaradial symmetry and a calcareous endoskeleton that may possess pigment cells that give them a wide range of colors, as well as cells that possess toxins. - Echinoderms have a water vascular system composed of a central ring of canals that extend along each arm, through which water circulates for gaseous exchange and nutrition. - Echinoderms have a very simple nervous system, comprised of a nerve ring at the center and five radial nerves extending outward along the arms; there is no structure resembling a brain. - There are two sexes in echinoderms, which each release their eggs and sperm into the water; here, the sperm will fertilize the eggs. - Echinoderms can reproduce asexually by regeneration. - madreporite: a lightcolored calcerous opening used to filter water into the water vascular system of echinoderms - podocyte: cells that filter the bodily fluids in echinoderms - pentaradial symmetry: a variant of radial symmetry that arranges roughly equal parts around a central axis at orientations of 72° apart - water vascular system: a hydraulic system used by echinoderms, such as sea stars and sea urchins, for locomotion, food and waste transportation, and respiration - ampulla: the dilated end of a duct Echinodermata are so named owing to their spiny skin (from the Greek “echinos” meaning “spiny” and “dermos” meaning “skin”). This phylum is a collection of about 7,000 described living species. Echinodermata are exclusively marine organisms. Sea stars, sea cucumbers, sea urchins, sand dollars, and brittle stars are all examples of echinoderms. To date, no freshwater or terrestrial echinoderms are known. Morphology and Anatomy Adult echinoderms exhibit pentaradial symmetry and have a calcareous endoskeleton made of ossicles, although the early larval stages of all echinoderms have bilateral symmetry. The endoskeleton is developed by epidermal cells and may possess pigment cells that give vivid colors to these animals, as well as cells laden with toxins. Echinoderms possess a simple digestive system which varies according to the animal’s diet. Starfish are mostly carnivorous and have a mouth, oesophagus, two-part pyloric stomach with a pyloric duct leading to the intestine and rectum, with the anus located in the center of the aboral body surface. In many species, the large cardiac stomach can be everted and digest food outside the body. Gonads are present in each arm. In echinoderms such as sea stars, every arm bears two rows of tube feet on the oral side which help in attachment to the substratum. These animals possess a true coelom that is modified into a unique circulatory system called a water vascular system. The more notably distinct trait, which most echinoderms have, is their remarkable powers of regeneration of tissue, organs, limbs, and, in some cases, complete regeneration from a single limb. Water Vascular System Echinoderms possess a unique ambulacral or water vascular system, consisting of a central ring canal and radial canals that extend along each arm. Water circulates through these structures and facilitates gaseous exchange as well as nutrition, predation, and locomotion. 
The water vascular system also projects from holes in the skeleton in the form of tube feet. These tube feet can expand or contract based on the volume of water (hydrostatic pressure) present in the system of that arm. The madreporite is a light-colored, calcerous opening used to filter water into the water vascular system of echinoderms. Acting as a pressure-equalizing valve, it is visible as a small red or yellow button-like structure (similar to a small wart) on the aboral surface of the central disk of a sea star. Close up, it is visibly structured, resembling a “madrepore” colony. From this, it derives its name. Water enters the madreporite on the aboral side of the echinoderm. From there, it passes into the stone canal, which moves water into the ring canal. The ring canal connects the radial canals (there are five in a pentaradial animal), and the radial canals move water into the ampullae, which have tube feet through which the water moves. By moving water through the unique water vascular system, the echinoderm can move and force open mollusk shells during feeding. Other Body Systems The nervous system in these animals is a relatively simple structure with a nerve ring at the center and five radial nerves extending outward along the arms. Structures analogous to a brain or derived from fusion of ganglia are not present in these animals. Podocytes, cells specialized for ultrafiltration of bodily fluids, are present near the center of echinoderms. These podocytes are connected by an internal system of canals to the madreporite. Echinoderms are sexually dimorphic and release their eggs and sperm cells into water; fertilization is external. In some species, the larvae divide asexually and multiply before they reach sexual maturity. Echinoderms may also reproduce asexually, as well as regenerate body parts lost in trauma. Classes of Echinoderms Echinoderms consist of five distinct classes: sea stars, sea cucumbers, sea urchins and sand dollars, brittle stars, and sea lillies. Differentiate among the classes of echinoderms - Sea stars have thick arms called ambulacra that are used for gripping surfaces and grabbing hold of prey. - Brittle stars have thin arms that wrap around prey or objects to pull themselves forward. - Sea urchins and sand dollars embody flattened discs that do not have arms, but do have rows of tube feet they use for movement. - Sea cucumbers demonstrate “functional” bilateral symmetry as adults because they actually lie horizontally rather than stand vertically. - Sea lilies and feather stars are suspension feeders. - ossicle: a small bone (or bony structure), especially one of the three of the middle ear - fissiparous: of cells that reproduce through fission, splitting into two - ambulacrum: a row of pores for the protrusion of appendages such as tube feet. Classes of Echinoderms The phylum echinoderms is divided into five extant classes: Asteroidea (sea stars), Ophiuroidea (brittle stars), Echinoidea (sea urchins and sand dollars), Crinoidea (sea lilies or feather stars), and Holothuroidea (sea cucumbers). The most well-known echinoderms are members of class Asteroidea, or sea stars. They come in a large variety of shapes, colors, and sizes, with more than 1,800 species known so far. The key characteristic of sea stars that distinguishes them from other echinoderm classes includes thick arms (ambulacra; singular: ambulacrum) that extend from a central disk where organs penetrate into the arms. 
Sea stars use their tube feet not only for gripping surfaces, but also for grasping prey. Sea stars have two stomachs, one of which can protrude through their mouths and secrete digestive juices into or onto prey, even before ingestion. This process can essentially liquefy the prey, making digestion easier. Brittle stars belong to the class Ophiuroidea. Unlike sea stars, which have plump arms, brittle stars have long, thin arms that are sharply demarcated from the central disk. Brittle stars move by lashing out their arms or wrapping them around objects and pulling themselves forward. Of all echinoderms, the Ophiuroidea may have the strongest tendency toward 5-segment radial (pentaradial) symmetry. Ophiuroids are generally scavengers or detritivores. Small organic particles are moved into the mouth by the tube feet. Ophiuroids may also prey on small crustaceans or worms. Some brittle stars, such as the six-armed members of the family Ophiactidae, are fissiparous (divide though fission), with the disk splitting in half. Regrowth of both the lost part of the disk and the arms occur, yielding an animal with three large arms and three small arms during the period of growth. Sea urchins and sand dollars are examples of Echinoidea. These echinoderms do not have arms, but are hemispherical or flattened with five rows of tube feet that help them in slow movement; tube feet are extruded through pores of a continuous internal shell called a test. Like other echinoderms, sea urchins are bilaterans. Their early larvae have bilateral symmetry, but they develop fivefold symmetry as they mature. This is most apparent in the “regular” sea urchins, which have roughly spherical bodies, with five equally-sized parts radiating out from their central axes. Several sea urchins, however, including the sand dollars, are oval in shape, with distinct front and rear ends, giving them a degree of bilateral symmetry. In these urchins, the upper surface of the body is slightly domed, but the underside is flat, while the sides are devoid of tube feet. This “irregular” body form has evolved to allow the animals to burrow through sand or other soft materials. Sea lilies and feather stars are examples of Crinoidea. Both of these species are suspension feeders. They live both in shallow water and in depths as great as 6,000 meters. Sea lilies refer to the crinoids which, in their adult form, are attached to the sea bottom by a stalk. Feather stars or comatulids refer to the unstalked forms. Crinoids are characterized by a mouth on the top surface that is surrounded by feeding arms. They have a U-shaped gut; their anus is located next to the mouth. Although the basic echinoderm pattern of fivefold symmetry can be recognized, most crinoids have many more than five arms. Crinoids usually have a stem used to attach themselves to a substrate, but many live attached only as juveniles and become free-swimming as adults. Sea cucumbers of class Holothuroidea are extended in the oral-aboral axis and have five rows of tube feet. These are the only echinoderms that demonstrate “functional” bilateral symmetry as adults because the uniquely-extended oral-aboral axis compels the animal to lie horizontally rather than stand vertically. Like all echinoderms, sea cucumbers have an endoskeleton just below the skin: calcified structures that are usually reduced to isolated microscopic ossicles joined by connective tissue. In some species these can sometimes be enlarged to flattened plates, forming armor. 
In pelagic species, such as Pelagothuria natatrix, the skeleton and a calcareous ring are absent. The phylum Chordata contains all animals that have a dorsal notochord at some stage of development; in most cases, this is the backbone. Name the features that distinguish the members of the phylum chordata - The phylum chordata is named for the notochord, a longitudinal, flexible rod between the digestive tube and the nerve cord; in vertebrates, this is the spinal column. - The chordates are also characterized by a dorsal nerve cord, which splits into the brain and spinal cord. - Chordata contains two clades of invertebrates: Urochordata (tunicates) and Cephalochordata (lancelets), both of which are suspension feeders. - The phylum chordata includes all animals that share four characteristics, although they might each possess some of them at different stages of their development: a notochord, a dorsal nerve cord, pharyngeal slits, and a postanal tail. - Chordata contains five classes of animals: fish, amphibians, reptiles, birds, and mammals; these classes are separated by whether or not they can regulate their body temperature, the manner by which they consume oxygen, and their method of reproduction. - dorsal nerve cord: a hollow cord dorsal to the notochord, formed from a part of the ectoderm that rolls, forming a hollow tube. - notochord: a flexible rodlike structure that forms the main support of the body in the lowest chordates; a primitive spine - pharyngeal slit: filter-feeding organs found in non-vertebrate chordates (lancelets and tunicates) and hemichordates living in aquatic environments Animals in the phylum Chordata share four key features that appear at some stage of their development: - A notochord, or a longitudinal, flexible rod between the digestive tube and the nerve cord. In most vertebrates, it is replaced developmentally by the vertebral column. This is the structure for which the phylum is named. - A dorsal nerve cord which develops from a plate of ectoderm that rolls into a tube located dorsal to the notochord. Other animal phyla have solid nerve cords ventrally located. A chordate nerve cord splits into the central nervous system: the brain and spinal cord. - Pharyngeal slits, which allow water that enters through the mouth to exit without continuing through the entire digestive tract. In many of the invertebrate chordates, these function as suspension feeding devices; in vertebrates, they have been modified for gas exchange, jaw support, hearing, and other functions. - A muscular, postanal tail which extends posterior to the anus. The digestive tract of most nonchordates extends the length of the body. In chordates, the tail has skeletal elements and musculature, and can provide most of the propulsion in aquatic species. In some groups, some of these traits are present only during embryonic development. In addition to containing vertebrate classes, the phylum Chordata contains two clades of invertebrates: Urochordata (tunicates) and Cephalochordata (lancelets). However, even though they are invertebrates, they share characteristics with other chordates that places them in this phylum. For example, tunicate larvae have both a notochord and a nerve cord which are lost in adulthood. Most tunicates live on the ocean floor and are suspension feeders. Cephalochordates, or lancelets, have a notochord and a nerve cord (but no brain or specialist sensory organs) and a very simple circulatory system. 
Lancelets are suspension feeders that feed on phytoplankton and other microorganisms. The phylum Chordata contains all of the animals that have a rod-like structure used to give them support. In most cases this is the spine or backbone. Within Chordata there are five classes of animals: fish, amphibians, reptiles, birds, and mammals. Three dividing factors separate these classes: - Regulation of body temperature: animals are either homeothermic (can regulate their internal temperature so that it is kept at an optimum level) or poikilothermic (cannot regulate their internal temperature, the environment affects how hot or cold they are) - Oxygen Absorption: the way in which oxygen is taken in from the air, which can be through gills, the skin (amphibians), or lungs - Reproduction: this factor is particularly varied. Animals can be oviparous (lay eggs) or viviparous (birth live young). Fertilization can occur externally or internally. In mammals, the mother produces milk for the young.