As the number of sides of a regular polygon inscribed in a circle increases, the polygon gets closer and closer to the shape of a circle. This fact is widely used in video games because the human eye will accept a shape as a circle even when the number of sides in the regular polygon is as small as 20. The wheels of a car in a video game can be rendered as 20-sided polygons.

The sum of the lengths of the sides in any regular polygon is called its perimeter. A regular polygon of n sides with side length p has a perimeter P = np units. As you know, the perimeter of a circle is called its circumference. The ratio of the circumference to the diameter of any circle is that famous, irrational number, Pi.

π = C/D

One could gain an approximation for Pi by measuring the circumference and diameter of a circle to as high a degree of accuracy as current measurement technology allows and then evaluating the ratio of circumference to diameter. But mathematics and access to Scratch give us an easier and cheaper (no need to buy expensive measuring tools) way to evaluate Pi.

We need to determine the relationship between the number of sides in a regular polygon, its radius, and the central angle subtended by one of its sides. For any regular polygon, we would like to know how to compute the length of one of the equal sides given the radius of the circumscribed circle. Consider the seven-sided regular polygon (heptagon or septagon) shown in the figure below. The central angle in any regular polygon of n sides can be computed by dividing 360º by n.

central angle = 360º/n

Construct the line from the center of a regular polygon at right angles to any of its sides. This line is called the apothem. The hypotenuse-leg theorem states that any two right triangles that have a congruent hypotenuse and a corresponding, congruent leg are congruent triangles. Therefore, the apothem bisects the central angle. Let s equal the length of a side in a regular polygon and ø the angle between the apothem and the radius as shown in the following diagram. Then ø equals half the central angle. This gives ø in terms of n, the number of sides.

ø = 360º/2n
ø = 180º/n [Equation 1]

In the right triangle, sin(ø) = (s/2)/r and, solving for s,

s = 2r·sin(ø) [Equation 2]

Substitute Equation 1 for ø in Equation 2.

s = 2r·sin(180º/n) [Equation 3]

The perimeter is P = ns, or

P = 2nr·sin(180º/n) [Equation 4]

The ratio of the perimeter of the polygon to 2r gives an approximation to Pi. Given a radius of 100 units, Equation 3 and the approximation to Pi are coded in the following, short, Scratch script. The Scratch program was run to generate the data in the following table. A polygon with just 12 sides and a radius of 100 approximates the first two digits of Pi, 3.1.

Equation 3 with r = 100 units can be used to write a Scratch script that draws the polygon for any given number of sides. I leave it to you to key in the above code and verify that it does indeed draw the regular polygon specified by the number of sides variable.
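The Scratch script itself is not reproduced here; as a stand-in, here is a minimal Python sketch of the same computation (Equations 3 and 4; the function name and the list of test values are my own choices):

```python
import math

def pi_approximation(n, r=100):
    """Approximate Pi from a regular n-gon inscribed in a circle of radius r."""
    s = 2 * r * math.sin(math.radians(180 / n))   # Equation 3: side length
    perimeter = n * s                             # Equation 4: P = ns
    return perimeter / (2 * r)                    # P/2r approaches Pi as n grows

for n in (6, 12, 24, 90, 360):
    print(n, pi_approximation(n))   # n = 12 already gives 3.10..., matching the claim above
```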
Gastroenteritis ('Gastro') is a bowel infection which causes diarrhea (runny, watery poo) and sometimes vomiting. The vomiting may settle quickly, but the diarrhea can last up to 10 days. Gastro can be caused by many different germs, although the most common cause is a viral or bacterial infection. Most children do not need to take any medicine. Gastro is more common and severe in babies and young children. Babies under six months can become ill very quickly because of the loss of fluid from their body.

Signs and symptoms
Gastro causes your child to feel unwell, and not want to eat or drink. Vomiting may happen in the first 24 to 48 hours. Then your child may have diarrhea lasting up to one week. Your child may have some stomach pains. Your child may also have a fever. Young babies and children can become dehydrated very easily and need to be checked by a doctor. Signs that your child may be dehydrated include drowsiness (being very sleepy and not waking for feeds), dry lips and mouth, not passing wee, and cold hands and feet. Babies under six months may need to be checked again by your doctor after six to 12 hours. If your child is very dehydrated and cannot keep any fluids down, they may need to be admitted to hospital to have fluids by a tube through the nose into the stomach (called a nasogastric or NG tube) or directly into a vein by intravenous therapy (a drip).

Care at home
Children with mild Gastro can be looked after at home. The main treatment is to keep your child drinking fluids often, to replace the fluid lost due to the vomiting and diarrhea. It is important for the fluids to be given even if the diarrhea seems to get worse. Do not withhold drinks from your child if they are thirsty. Do not give medicines to reduce the vomiting and diarrhea; they do not work and may be harmful. Your baby or child is infectious, so wash your hands well with soap and warm water, particularly before feeding and after nappy changes. Keep your child away from other children as much as possible until the diarrhea has stopped.

If you are breastfeeding, continue to do this but feed more often. You can give an oral re-hydration solution (e.g. Gastrolyte™, HYDRAlyte™, Pedialyte™ or Repalyte™) as well. If bottle feeding, give oral re-hydration solution or clear fluids for the first 12 hours, then give normal formula in small but more frequent amounts. Offer babies a drink every time they vomit. Give small amounts of clear fluid often, i.e. a few mouthfuls every 15 minutes, for all children with diarrhea or vomiting, and especially if your child is vomiting a lot. Give older children one cup (around 150-200 ml) of fluid for every big vomit or episode of diarrhea.

Gastrolyte, Hydralyte, Pedialyte and Repalyte are different types of oral re-hydration fluid that can be used to replace fluids and body salts. These are the best option if your child is dehydrated. For mild Gastro without dehydration you can also give water or diluted cordial, but do not give sports drinks, Lucozade, or undiluted lemonade, cordials, or fruit juices.

Your child may refuse food at first. This is not a problem as long as clear fluids are taken. Doctors now suggest there is no need to restrict food. Generally, if your child is hungry at any time, give them the food they feel like eating. Do not stop food for more than 24 hours.

When to see a doctor
- If your child is not drinking but still has vomiting and diarrhea.
- If your child has a lot of diarrhea (eight to 10 watery motions, or two or three large motions per day), or if the diarrhea continues after 10 days.
- If your child is vomiting frequently and seems unable to keep any fluids down.
- If you think your child is dehydrated, e.g. not passing urine, is pale and has lost weight, has sunken eyes, cold hands and feet, or is hard to wake up.
- If your child has bad stomach pain.
- If there is any blood in their poo.
- If there is any green vomit.
- OR if you are worried for any other reason.

Key points to remember
- Young babies and children with Gastro can become dehydrated very easily – they need small amounts of fluid often.
- Babies under six months with Gastro need to be checked by a doctor and may need to be checked again by your doctor after six to 12 hours.
- Offer babies a drink every time they vomit. Keep breastfeeding. If bottle feeding, do not stop formula for more than 12 to 24 hours.
- Give older children one cup (around 150-200 ml) of fluid for every big vomit or episode of diarrhea.
- Continue to give food if your child wants to eat. Do not stop food for more than 24 hours.
- Your baby or child is infectious, so wash your hands well with soap and warm water, particularly before feeding and after nappy changes.
- Keep your child away from other children as much as possible until the diarrhea has stopped.

This information is intended to support, not replace, discussion with your doctor or healthcare professionals. The authors of these consumer health information handouts have made a considerable effort to ensure the information is accurate, up to date and easy to understand. The Royal Children's Hospital, Melbourne accepts no responsibility for any inaccuracies, information perceived as misleading, or the success of any treatment regimen detailed in these handouts. Information contained in the handouts is updated regularly and therefore you should always check you are referring to the most recent version of the handout. The onus is on you, the user, to ensure that you have downloaded the most up-to-date version of a consumer health information handout.
HENRI ROUSSEAU - FANTASY JUNGLE
more photos at end of lesson
Grades: 3-5 | Age: 8-11 | Written by: [Rebecca is an art educator from Cathedral School in Bismarck, ND.]
Children will learn about artist Henri Rousseau. They will then create oil pastel versions of Rousseau's jungle-style paintings.
Objectives:
- Students will discuss the life and artistic style of Henri Rousseau.
- Students will recognize and identify foreground, middle ground and background.
- Students will create a stylized drawing using simple shapes.
- Students will combine oil pastels to create value and depth.
What You Need:
- 12 x 15 scrap paper
- 12 x 15 white sulphite drawing paper
- permanent fine tip black marker
- oil pastels
- watercolors
What You Do:
- Introduce your students to the life and art of Henri Rousseau.
- Use available resources to give the students some background information on the artist.
- Show the class prints of Rousseau's jungle-style paintings.
- Discuss foreground, middle ground and background, warm and cool colors, simple, stylized drawing, and repeating shapes.
- Show the students where and how Rousseau used these concepts.
- Discuss how Rousseau used resources around him (books, museums, gardens, etc.) and his imagination to create his paintings.
Step 3. (Next Class)
- Give the students scrap paper and black markers to practice drawing their jungle scene.
- Demonstrate the use of large, simple repeating shapes to create grass (foreground), flowers (middle ground), and trees (background).
- Use horizon lines to create depth.
- Point out the benefits of repeating a certain flower and/or color to "tie" their picture together.
- Let the students add simple animals to their picture. Do they need a moon or a sun? Will their picture be warm or cool? Remind them to keep their shapes large and to avoid small details.
- When they are satisfied with their "practice" paper, let the students redraw their picture on good white drawing paper with a black permanent marker. No pencils!
Step 4. (Next Class)
- Pass out oil pastels and demonstrate for the students how to use the oil crayons like paint.
- Layer and blend the colors to create shading and depth.
- Remind the students to repeat colors throughout their composition and to apply the pastel in a heavy manner.
- They may color the grass, flowers, trees, animals, sun and/or moon.
- Do not color the background space.
- Pass out watercolor paints, large watercolor brushes, and water containers.
- Have the students moisten the blue, violet, green and yellow paint.
- Students will begin painting by wetting the sky with clean water.
- They can then use blue and/or violet to wash the sky.
- Next, they may wet the grass or ground area with water and wash it using green and/or yellow and/or blue.
About Henri Rousseau:
Henri Rousseau was a French artist, born in 1844, died in 1910. He was a self-taught artist who often painted images of jungle scenes and animals. His work was almost always bright and colorful, and he is best known for his Sleeping Gypsy painting of 1897.
Resources:
- The Imaginary World of Henri Rousseau: information for educators from the National Gallery of Art in D.C.
- Henri Rousseau Biography: a nice brief biography of the artist.
- A children's book by Susanne Pfleger: this dreamy book celebrates Henri Rousseau, the French customs inspector and self-taught painter whose Sleeping Gypsy is one of the most popular paintings in New York's Museum of Modern Art.
Comparison of three common aspect ratios: the outer box (blue) and middle box (red) are common formats for cinematography; the inner box (green) is the format used in standard television.

Within the motion picture industry, the convention is to assign a value of 1 to the image height, so that, for example, a Cinemascope frame is described as 2.35:1 or just "2.35". This way of speaking comes about because the width of a film image is restricted by the presence of sprocket holes and, usually, an optical sound track on the projection print. Development of various camera systems therefore centers on the placement of the frame in relation to these lateral constraints; the height of the image can be adjusted freely, so the ingenuity goes into getting different widths. One clever widescreen process, VistaVision, used standard 35mm film running sideways through the camera gate, so that the sprocket holes were above and below the frame and the width was not restricted. The most common projection ratios in American theaters are 1.85 and 2.35.

The term is also used in the context of computer graphics to describe the shape of an individual pixel in a digitized image. Most digital imaging systems use square pixels, that is, they sample an image at the same resolution horizontally and vertically. But there are some devices that do not, so a digital image scanned at twice the horizontal resolution to its vertical resolution might be described as being sampled at a 2:1 aspect ratio, regardless of the size or shape of the image as a whole.

Widescreen
A widescreen image is a film image with a greater aspect ratio than the ordinary 35 millimeter frame. The aspect ratio of a standard 35 millimeter frame is around 1.37:1, although cameramen may use only the part of the frame which will be visible on a television screen (which is 1.33:1 for standard television). Viewfinders are typically inscribed with a number of frame guides, for various ratios. Note that aspect ratio refers here to the projected image. There are various ways of producing a widescreen image of any given proportion.
- Anamorphic: used by Cinemascope, Panavision and others. Anamorphic camera lenses compress the image horizontally so that it fits a standard frame, and anamorphic projection lenses restore the image and spread it over the wide screen. The picture quality is reduced because the image is stretched to twice the original area, but improvements in film and lenses have made this less noticeable.
- Masked: the film is shot in standard ratio, but the top and bottom of the picture are masked off by mattes in the projector. Alternatively, a hard matte in the camera may be used to mask off those areas while filming. Once again the picture quality is reduced because only part of the image is being expanded to full height. Sometimes films are designed to be shown in cinemas in masked widescreen format but the full unmasked frame is used for television. A low-budget movie called Secret File: Hollywood, often ridiculed as a collection of bloopers, is actually an example of a film that is always projected wrong. All the lights and microphone booms visible above the actors should be concealed by a projection matte, creating an image that would fill a wide screen for little money.
- Multiple camera/projector: the Cinerama system originally involved shooting with three synchronized cameras locked together side by side, and projecting the three resulting films on a curved screen with three synchronized projectors. Later Cinerama movies were shot in super anamorphic (see below), and the resultant widescreen image was divided into three by optical printer lenses to produce the final threefold prints. The technical drawbacks of Cinerama are discussed in its own article.
- Big film format: a 70mm film frame is not only twice as wide as a standard frame but also has greater height. Shooting and projecting a film in 70mm therefore gives more than twice the image area of non-anamorphic 35mm film with no loss of quality.
- Super anamorphic: 70mm with anamorphic lenses creates an even wider high-quality picture.

Letterbox
Letterboxing is the practice of copying widescreen film to video formats while preserving the original aspect ratio. Since the video display is most often a more square aspect ratio than the original film, the resulting master must include masked-off areas above and below the picture area (these are often referred to as "black bars", resembling a letterbox slot). The term takes its name from the similarity of the resulting image to a horizontal opening in a postal letter box. Although the resulting video master utilizes only a portion of the display screen, the technique offers an alternative to the older pan and scan method of copying, which cropped the image to suit the 4:3 (or 12:9) ratio of the television screen, and preserves the original composition of the film as seen in the theater.

Some filmmakers state a preference for letterboxed videos of their work. Woody Allen's insistence on a letterboxed release of Manhattan probably inspired this treatment of other films. One exception to the preference is Milos Forman, who finds the bands distracting. However, most video releases are made without consultation with either the director or the director of cinematography of the film. The letterboxing is often careless, and the common 16:9 ratio does not precisely correspond to the aspect ratios of the most common widescreen systems.

HDTV, a newer digital video system, uses video displays with a wider aspect ratio than standard television and is becoming the broadcast standard in the United States. The wider screen will make it easier to make an accurate letterbox transfer. Some contemporary television programming is being produced in letterbox format. This is done both to give a "classier" look to the image (particularly in the case of advertising), and to facilitate the production of widescreen programming for later syndication in HDTV. 16:9 widescreen television is also becoming common on European digital television systems. Although this is not true HDTV, it uses the same aspect ratio, and the majority of programming in countries like Britain and France is now made in letterbox format. Of course, on a true widescreen television set the "letterboxed" 16:9 picture is no longer letterboxed, since it fills the entire screen. However, movies made in even wider aspect ratios are letterboxed to some extent even on 16:9 sets. Sometimes, by accident or design, a standard-ratio image is presented in the central portion of a letterboxed picture, resulting in a black border all around. This is referred to as "matchboxing" and is generally disliked because it wastes a lot of screen space and reduces the resolution of the original image.
This can for instance be seen on some of the DVD editions of the Star Trek movies whenever the widescreen documentaries included as extras use footage from the original TV series. The alternative would be to crop the original 4:3 TV images horizontally to fit the 16:9 ratio.

Pan and Scan
Pan and scan is a method of adjusting widescreen film images so that they can be shown within the proportions of an ordinary video screen. Until High Definition Television came onto the scene, television images had approximately the shape of a frame of 35mm film: a width 1.33 times the height (in the industry, referred to as "4:3 aspect ratio"). By contrast, a film image typically has a more rectangular final projected image with an aspect ratio greater than 16:9, often as wide as 2.35 times the height of the image. To broadcast a widescreen film on television, or to create a videotape or DVD master, it is necessary to make a new version from the original filmed elements. One way to do so is to make a "letterbox" print, which preserves the original theatrical aspect ratio, but produces an image with black bars at the top and bottom of the screen. Another way to turn the widescreen film into a 4:3 aspect ratio television image is to "pan and scan" the negative.

During the "pan and scan" process, an operator selects the parts of the original filmed composition that seem to be significant and makes sure they are copied, "scanning." When the important action shifts to a new position in the frame, the operator moves the scanner to follow it, creating the effect of a pan shot. This method allows the maximum resolution of the image, since it uses all the available video scan lines. It also gives a full-screen image on analog television. But it can also severely alter compositions and therefore dramatic effects; for instance, in the film Jaws, the shark can be seen approaching for several seconds more in the widescreen version than in the pan and scan version. In some cases, the results can also be a bit jarring, especially in shots with significant detail on both sides of the frame: the operator must either go to a two-shot format (alternating between closeups in what was previously a single image), lose some of the image, or make several abrupt pans. In cases where a film director has carefully designed his composition for optimal viewing on a wide theatrical screen, these changes may be seen as changing that director's vision to an unacceptable extent.

Once television revenues became important to the success of theatrical films, cameramen began to work for compositions that would keep the vital information within the "TV safe area" of the frame. For example, the BBC suggests program producers frame their shots in a 14:9 aspect ratio to minimize the effects of converting film to television. In other cases film directors reverse this process, creating a negative with information that extends above and below the widescreen theatrical image (this is sometimes referred to as a "full frame" composition). Often pan-and-scan compositors make use of this full-screen negative as a starting point, so that in some scenes the TV version may contain more image content than the widescreen version, while in other scenes where such an "opened" composition is not appropriate a subset of the widescreen image may be selected.
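The arithmetic behind letterboxing and cropping is simple; the following Python sketch (my own illustration, not from the article) computes the letterboxed picture height for a given display, and the fraction of the frame width a pan-and-scan crop discards:

```python
def letterbox(film_ratio, screen_w, screen_h):
    """Fit the full film width on the screen; return (picture_height, bar_height)."""
    picture_h = round(screen_w / film_ratio)
    return picture_h, (screen_h - picture_h) // 2   # black band above and below

def pan_and_scan_loss(film_ratio, screen_ratio=4/3):
    """Fraction of the original frame width lost when cropping to the screen."""
    return 1 - screen_ratio / film_ratio

# A 2.35:1 film on a 640x480 (4:3) display: 272 visible lines, two 104-line bars.
print(letterbox(2.35, 640, 480))
# The same film is still slightly letterboxed on a 1280x720 (16:9) display.
print(letterbox(2.35, 1280, 720))
# Cropping a 2.35:1 film to 4:3 throws away about 43% of the frame width.
print(round(pan_and_scan_loss(2.35), 2))
```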
In some cases (notably many of the films of Stanley Kubrick) the original 1.33:1 aspect ratio of the negative is transferred directly to the video master. Although these versions also represent a new aspect ratio compared to the original theatrical release, they are not properly "pan and scan" transfers at all, but are often called "full-frame" or "open matte" transfers. Yet some directors still balk at the use of "pan and scan" versions of their movies; for instance, Steven Spielberg initially refused to release a pan and scan version of Raiders of the Lost Ark, but eventually gave in; Woody Allen refused altogether to release one of Manhattan, and the letterboxed version is in fact the only version available on VHS and DVD.

CinemaScope
Cinemascope, or more strictly CinemaScope, was a widescreen movie format used in the US from 1953 to 1967. Using anamorphic lenses and 35 mm film, it could project film at a 2.66:1 ratio, twice as wide as conventional lenses could achieve. It was developed by 20th Century Fox to supplant the complex, multi-projector Cinerama process, first shown in 1952. The actual anamorphic process, initially called Anamorphoscope, was developed by Henri Chrétien around 1927 using lenses he called hypergonar. Chrétien had been attempting to sell his process to Hollywood since the 1930s but with little interest, until the advent of Cinerama. Another factor was the rise of television, which meant that the studios saw the need for a spectacle to compete. The hypergonar lens patents were acquired by 20th Century Fox in 1952 and the system was renamed Fox CinemaScope. The advantage over Cinerama was that all the system needed was an additional lens unit fitted to the front of ordinary cameras and projectors, although stereo sound could be carried on separate 35mm tracks. It was first demonstrated in 1953 and the first film shot was The Robe (September 1953). The technology was licensed by Fox to MGM and Disney and shortly afterwards to Columbia, Universal and Warner. However, initial uncertainty meant that a number of films were shot simultaneously with anamorphic and regular lenses. Also, only the 'biggest' films were made in Cinemascope, around a third of the total produced. Although Cinemascope was capable of producing a 2.66:1 image, the addition of stereo information could reduce this to 2.55:1. A change in the base 35 mm film aperture eventually reduced Cinemascope to 2.35:1. Often cinemas with smaller screens would further crop the format to make it fit. A general problem with expanding the visible image meant that there could be visible graininess and brightness problems, so to combat this larger formats were developed: initially an unsuccessful 55 mm, and later 65 and 70 mm. Since the actual anamorphic process was not patentable (it had been known for centuries and had been used in paintings such as "The Ambassadors" by Hans Holbein), some studios sought to develop their own systems rather than pay Fox: RKO used Superscope, Republic used Naturama, Warner developed Warnerscope. Other systems developed included Panatar, Vistarama, Technovision and Euroscope. Cinemascope itself was called Regalscope when used by the Fox adjunct Regal Films for black-and-white features. Many US studios adopted the cheaper, non-Fox, but still anamorphic Panavision system, and by the mid-1960s even Fox had abandoned Cinemascope for Panavision. The initial problems with grain and contrast were eventually solved thanks to improvements in film stock and lenses.
Binomial theorem, statement that, for any positive integer n, the nth power of the sum of two numbers a and b may be expressed as the sum of n + 1 terms of the form

\binom{n}{r} a^{n-r} b^{r}

in the sequence of terms, the index r takes on the successive values 0, 1, 2, . . . , n. The coefficients, called the binomial coefficients, are defined by the formula

\binom{n}{r} = \frac{n!}{r!\,(n-r)!}

in which n! (called n factorial) is the product of the first n natural numbers 1, 2, 3, . . . , n (and where 0! is defined as equal to 1). The coefficients may also be found in the array often called Pascal's triangle by finding the rth entry of the nth row (counting begins with a zero in both directions). Each entry in the interior of Pascal's triangle is the sum of the two entries above it. The theorem is useful in algebra as well as for determining permutations, combinations, and probabilities. For positive integer exponents, n, the theorem was known to Islamic and Chinese mathematicians of the late medieval period. Isaac Newton stated in 1676, without proof, the general form of the theorem (for any real number n), and a proof by Jakob Bernoulli was published in 1713, after Bernoulli's death. The theorem can be generalized to include complex exponents, n, and this was first proved by Niels Henrik Abel in the early 19th century.
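For concreteness, here is a small Python sketch (not part of the original entry) that computes binomial coefficients from the factorial formula and checks the expansion numerically:

```python
from math import factorial

def binom(n, r):
    """Binomial coefficient n! / (r! * (n - r)!)."""
    return factorial(n) // (factorial(r) * factorial(n - r))

def expand(a, b, n):
    """Evaluate (a + b)**n as the sum of the n + 1 binomial terms."""
    return sum(binom(n, r) * a**(n - r) * b**r for r in range(n + 1))

print([binom(4, r) for r in range(5)])   # row 4 of Pascal's triangle: [1, 4, 6, 4, 1]
print(expand(2, 3, 4), (2 + 3)**4)       # both print 625
```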
The medieval knight rose early in the morning, with the sunrise or close to dawn. He would usually hear mass in the chapel during this time or consult with his officials about business. Most of the medieval knight's duties were completed early in the morning, with all entertainment occurring after dinner, which was usually served at midday. This midday dinner was the largest meal of the day. The rest of the knight's day involved entertainment or hunting. Entertainment consisted of jugglers, troubadours, acrobats, gambling or games. Hunting was a way to exercise the body and work with weapons while honing weaponry skills for possible warfare in the future. Most hunting expeditions would take place with other knights in groups and would take place on horses. The targets were usually deer and wild boar because of their delicious taste; however, wild animals such as wolves and wild dogs were hunted because they were a threat to both people and livestock. The knights would end their day at sundown unless there was a midnight feast happening in the land. Candles were used to mimic sunlight to make it possible to see and celebrate.
8H. the essentials of the E typesystem

This section tries to explain how the E typesystem works from the ground up. Most problems people have while programming in E stem from their incorrect view of how the E type-system works. Also, many people have an idea how types work from their previous programming language, and try to apply this to E, which is often fatal, because E is quite different when it comes to types.

The Type System.
E may look like a typed language, but E is in essence a TYPELESS language. Indeed, variables may have a type, but this is only used as a specification of how to dereference a variable when it is used as a pointer. In almost ALL other language constructions, variables are treated as all being of the same type, namely the 32bit value. In practice this means that, for example in expressions, with the exception of the ".", "[]" and "++" operators etc., all operators and functions work on 32bit values, regardless of whether they represent booleans, integers, reals or pointers to something.

In the E type-system only 4 types exist: PTR TO CHAR, PTR TO INT, PTR TO LONG and PTR TO <object>, where <object> is the name of a previously defined OBJECT. When a variable (or an object member, as we'll see later) is declared as being of this type, it means that if the variable contains a value that is a legal pointer, this is how it should be dereferenced.

LONG, ARRAY etc.
All other types one may see in a DEF declaration are not really types, as they really are only other ways of writing one of the above four. As an example, ARRAY OF <type> is just another way of writing PTR TO <type>, with the only difference that the former is automatically assigned the address of an area of stackspace which is big enough to hold data for the #of elements specified in square brackets. Here's a table that shows all E 'types' in terms of the basic four:

ARRAY OF CHAR, ARRAY, STRING, LONG (are equal to) PTR TO CHAR
ARRAY OF INT (is equal to) PTR TO INT
ARRAY OF LONG, LIST (are equal to) PTR TO LONG
ARRAY OF <object>, <object> (are equal to) PTR TO <object>

- LONG is for variables that are not intended to be used as a pointer, i.e. integers. Its equivalence with PTR TO CHAR is quite logical, as conceptually both talk about things that are measured in units of 1 (for example, "++" has the same effect on both).
- LIST and STRING are the same as their ARRAY equivalents, in respect to the fact that they're initialised to a piece of stack-space, but their stack representation is a little more complex to facilitate runtime bounds-checking (when used with the correct functions).
- an <object> is equivalent to ARRAY OF <object>: both represent an initialised PTR TO <object>.

In an OBJECT one can have the same declarations, with the addition of CHAR and INT (similar to LONG), and the omission of LIST and STRING, as these are complex objects in their own right, and cannot be part of an object.

Given a pointer p of some type, "[]" may index other elements that are sequentially ordered next to the element it is currently pointing to. Note that this allows for both positive and negative indices, and also no assumptions are made about where and how many elements are actually allocated. "++" sets the pointer to the next element in memory, "--" to the previous one. Note that these operators always operate on the pointer and never on the element the pointer is pointing to. "." works similarly to "[]", only now indexes the pointer by name, i.e. the pointer must be a PTR TO <object>. "[]" and "."
may be concatenated to a pointer p in any sequence, given the fact that the previous resulting value again is known to be of a "PTR TO" type. One does not need to write out a dereference in total, as in other languages; e.g. if p is an ARRAY OF obj, instead of having to write p[index].member you can write just p[index], which logically results in the address of that object. This also explains why p[0].member is equivalent to p.member, since p[0] is the same as p when it points to an object.

Another type-related issue that makes E somewhat different from other languages, and thus harder to grasp, is its accent on Reference Semantics rather than Value Semantics. I'll try to argue why that's good here. Informally, Reference Semantics means that objects in a language (mostly other than the simple ones like LONGs) are represented by pointers, while Value Semantics treats these objects as just being themselves. An example of a language that has only Value Semantics is BASIC; examples of languages that have them both are the C/C++ and Pascal type of languages; and examples of Reference-only are newer Object Oriented languages, functional languages like LISP, and of course E. Using Reference Semantics doesn't mean being occupied with pointers all the time; rather, you're worrying about them a lot less than in the mixed case or the Value-only case, especially since in real-life programs most non-trivial data structures get allocated dynamically, which implies pointers. The best example of this is LISP, where one programs heavily with pointers without noticing. In E, one could easily forget STRING is a pointer, given the ease with which one can pass it around to other functions; in C often lots of "&" are needed where in the equivalent E case none are, and the Oberon equivalent of bla('hallo') looks like bla(sys.ADR('hallo')) because the string doesn't represent a pointer, but a value as a whole...
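As a small illustration of the above (my own sketch, not from the original tutorial; it assumes a standard Amiga E compiler), the following fragment declares an OBJECT, an initialised ARRAY OF <object>, and exercises the "[]", "." and "++" dereferencing described above:

```e
OBJECT point
  x:INT
  y:INT
ENDOBJECT

PROC main()
  DEF a[10]:ARRAY OF point, p:PTR TO point
  p:=a        -> legal: both variables are just 32bit values (PTR TO point)
  p.x:=3      -> same as p[0].x, since p[0] is the same as p
  p++         -> p now points to a[1]
  p.y:=4      -> i.e. a[1].y
ENDPROC
```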
Lamarck, Jean Baptiste (Pierre Antoine) (1744–1829)

Lamarck is remembered as the first person to put forward a complete theory of organic evolution. But this really does little justice to him. Most of his long life was devoted to studies in natural history, and he emerged as one of the greatest biologists of his age. Lamarck was born in a Picardy village. He obtained some early education at a college in Amiens, but by 17 he was distinguishing himself for valor in the French army. The return to peace in 1762 left Lamarck restless. His temperament wasn't suited to tedious barrack room duties. He resigned his commission and took up the study of medicine. Supporting himself by working in a bank, he qualified after 4 years. But now, instead of practicing his acquired knowledge, Lamarck threw himself into the study of botany. Plants he had always loved and, equipped with a scientific training, he became keeper of the Herbarium of the Royal Gardens. Ten years of hard work ended in the publication of his Flore Française – a description of the wild plants of France. The book incorporated a key for plant identification which Lamarck had devised himself. A further 15 years of botanical work saw Lamarck, at nearly 50, a leading French botanist. But the really illustrious part of his career was yet to come.

In 1793 he was appointed Professor of Zoology at the Paris Museum, with special interests in insects, worms, and microscopic animals. Lamarck's small quota of zoological knowledge did not prevent him from pursuing his new course with all the enthusiasm he had devoted to botany. His findings were to revolutionize the systematics – the classification – of the animal kingdom. First Lamarck divided the kingdom into what he called the vertebrates and the invertebrates – now very familiar terms. Then, after long hours of dissecting in his laboratory, he suggested new invertebrate groups – based on anatomical likenesses and dissimilarities, not merely appearances. The group Vermes, recognized by Linnaeus, was demolished, with Lamarck showing how completely different animals had been wrongly classified together. New groups were made, Lamarck naming the Annelida, the Arachnida, the Tunicata, and the Crustacea. Incidentally, it was Lamarck himself who coined the word biology.

From the study of modern invertebrates, Lamarck became interested in comparing them with the remains of past invertebrates. From the Tertiary muds and sands of the Paris basin, he collected a variety of fossils, chiefly mollusc shells. In his writings he encouraged other workers to similarly compare past with present forms. He drew biology closer together with paleontology and, in fact, has been called the founder of invertebrate paleontology. In other branches of geology, Lamarck showed a complete mastery of fundamental principles – as is shown in yet another study, his World Hydrology.

A man of Lamarck's versatility could not remain unaware of a possible relationship between different forms of life and methods by which one type of animal could change or evolve into another. Stimulated by earlier attempts of Buffon, he presented his own theory of evolution – the Lamarckian Theory. Seeing how closely the structures of different organisms were related to their modes of life, he stressed the importance of the surroundings – the environment. Two laws were made. First, that organs continuously used in response to the environment were strengthened, and those not used disappeared. Second, that once these modifications had been acquired, they were passed on by reproduction.
This second law – the inheritance of acquired characteristics – met with violent opposition. Certainly there are few if any proven cases of such inheritance. Yet whether right or wrong, this does not invalidate Lamarck's emphasis on the influence of environment. The same emphasis appeared in 1859 when Darwin proposed his Theory of Evolution by Natural Selection. Lamarck's monumental career ended in poverty, and, perhaps due to overwork with lens and microscope, in blindness.
Excretion is defined as "the removal of waste molecules that have been produced in metabolism inside cells". So for example carbon dioxide is a waste product of respiration and is excreted in the lungs. The liver too produces a waste molecule, urea, from the breakdown of amino acids. Amino acids and proteins cannot be stored in the body: if you eat more than you use, the excess is broken down to urea. Urea would certainly become toxic if it was allowed to accumulate in the body (patients with no kidney function will die within 3-4 days without treatment) and the organ that is adapted to excrete urea from the blood is the kidney. Kidneys excrete urea by dissolving it in water, together with a few salts, to form a liquid called urine. Don't confuse urine, the liquid produced in the kidney that is removed from the body, with urea, the nitrogen-containing chemical made in the liver that ends up as one component of urine.

Urine is produced in the kidneys continuously day and night. It travels away from the kidney in a tube called the ureter. Each kidney has a ureter coming out of it, and the two ureters carry the urine to the bladder. The bladder is a muscular storage organ for urine. Urine drains from the bladder through a second tube called the urethra. Make sure you check your spelling: ureter and urethra are easy to muddle and correct spelling is essential to ensure the meaning is not lost....

How is urine made in the kidney?
Well that's the big question for this post. How does the kidney start with blood and produce a very different liquid called urine from it? Urine is basically made of water, dissolved urea and a few salts. Before I can explain how urine is made, I need to briefly look at the structure of a kidney. You can see the structure of the kidney on this simple diagram. There are three regions visible in a kidney: an outer cortex, an inner medulla which is often a dark red colour due to the many capillaries it contains, and a space in the centre called the renal pelvis that collects the urine to transfer it into the ureter. Blood enters the kidney through the large renal artery, and deoxygenated blood containing less urea leaves the kidney in the renal vein. But there is no way from looking at the gross structure of the kidney that you could ever work out how the Dickens it produces urine. This requires careful microscopic examination of the kidney.

Each kidney contains about a million tiny microscopic tubules called nephrons. The nephron has an unusual blood supply, and an understanding of what happens in different regions of the nephron allows an understanding of how urine is made to be built up. The nephron is the yellow tubule in the diagram above. It starts in the cortex with a cup-shaped structure called the Bowman's capsule. This cup contains a tiny knot of capillaries called the glomerulus. The Bowman's capsule empties into the second region of the nephron, which is called the proximal convoluted tubule. The tubule then descends into the medulla and out again in a region called the Loop of Henle. There is then a second convoluted region called the distal convoluted tubule before the nephron empties into a tube called a collecting duct. The collecting ducts carry urine down into the renal pelvis and into the ureter.

Stages in the Production of Urine
1) Ultrafiltration
Blood is filtered in the kidney under high pressure, a process called ultrafiltration.
Filtration is a way of separating a mixture of chemicals based on the size of the particles, and this is exactly what happens to the blood in the kidney. Red blood cells, white blood cells and platelets are all too large to cross the filtration barrier. Blood plasma (with the exception of large plasma proteins) is filtered from the blood, forming a liquid called glomerular filtrate. The kidneys produce about 180 litres of glomerular filtrate per day. Ultrafiltration happens in the glomerulus, and the glomerular filtrate (GF) passes into the Bowman's capsule. The high pressure is generated by the blood vessel that takes blood into the glomerulus (afferent arteriole) being much wider than the blood vessel that takes blood out of the glomerulus (efferent arteriole). The plasma of blood (minus the large plasma proteins) is squeezed out of the very leaky capillaries in the glomerulus and into the first part of the nephron.

What's in Glomerular Filtrate?
- water
- urea
- salts
- glucose
- amino acids

As well as containing urea, water and salts, glomerular filtrate also contains many useful molecules for the body (glucose and amino acids for example), so these have to be collected back into the blood in the second stage.

2) Selective Reabsorption
The useful substances in the glomerular filtrate are reabsorbed back into the blood. This can be by osmosis (for water) or by active transport (glucose and amino acids). All of the glucose and all of the amino acids in the GF are reabsorbed in the proximal convoluted tubule by active transport. Remember active transport can pump substances against the concentration gradient using energy from respiration. Almost all the water in GF is reabsorbed by osmosis in the proximal tubule too.

So that leaves the question: what is the rest of the nephron doing? Well, this is where it gets much more complicated. Extra urea and salts can be secreted into the nephron at certain points along the tubule. The Loop of Henle allows the body to produce a urine that is much more concentrated than the blood plasma. And much of the distal tubules and collecting ducts are used for the second function of the kidney: homeostasis. But you will have to wait until my next post to find out how the kidney fulfils this crucial second function...

Please add comments or questions to this post – I really value your feedback. Tell me what is unclear and do ask questions.
Commonly called 'heartburn', acid reflux disease is a condition in which the liquid content of the stomach regurgitates (backs up, or refluxes) into the esophagus. It's annoying and painful. But in truth, reflux of the stomach's liquid contents into the esophagus occurs in most normal individuals. However, when heartburn becomes acid reflux disease, or gastroesophageal reflux disease (commonly referred to as GERD), it is a real problem. That is because with GERD, the acid is stronger and stays in the esophagus longer, causing more discomfort.

Most often, you will experience this during the daytime when you are upright, sitting straight, or standing. Your body handles this reflux by letting the fluid flow back down into your stomach. You swallow more during the daytime, therefore draining the acid back to where it belongs. Your salivary glands produce saliva that also contains bicarbonate, which acts to neutralize the acid your stomach has kicked up. At night, though, you may have a greater problem when acid reflux disease occurs: while you are sleeping, gravity does not work as well lying down, your constant swallowing stops, and the production of saliva is reduced.

Certain conditions make a person more prone to acid reflux disease, this GERD. For example, while you are pregnant, this can be a serious problem. Elevated hormone levels of pregnancy probably cause reflux by lowering the pressure in that part of your body known as the lower esophageal sphincter. Also, the growing baby puts more pressure on the abdomen. Both of these effects of pregnancy tend to increase the risk of GERD.

If your acid reflux disease is a minor condition, then you should only experience minor symptoms. These would include primarily heartburn, regurgitation, and nausea. However, if the condition is complicated, then watch out for the following symptoms. The liquid that comes back into the esophagus damages the lining of the esophagus. The body tries to protect itself from the acid reflux disease by 'inflaming' the esophagus. Trying to speed the healing process through the inflammation, the wall of the esophagus may form an ulcer. The ulcer is a break in the lining of the esophagus wall. Then what happens is that there may be bleeding. If the bleeding is very severe, patients might need a blood transfusion or even surgical treatment. If your heartburn is severe or acute, happening very frequently, you need to see a doctor.

What can you do for yourself to help the condition? Try sleeping with a pillow at night that raises your chest up slightly, so that gravity can bring the acid back down more easily. Since this condition usually occurs on a full stomach, eat earlier and eat less to keep the stomach from being too full. Ease off on the chocolate, peppermint, alcohol, and caffeinated drinks. Reduce fatty foods and, of course, cut down or quit smoking. Other foods may aggravate the condition: avoid spicy or acid-containing foods, like citrus juices, carbonated beverages, and tomato juice.
Volume 16, Number 1, March 2000
By Joseph King

The Sun spins at approximately 360 deg per 27 days, but sunspots rotate faster near the Sun's equator than at higher latitudes. The Sun's effective magnetic dipole flips over once per 11-year solar activity cycle. The solar wind emanates from a solar atmosphere seething with activity. For all these reasons it might be expected that no long-term patterns dependent on solar longitude would be observed in the solar wind. However, quite the opposite has just been found by Dr. Marcia Neugebauer of NASA JPL and collaborators.

They used mainly data from NSSDC's 1963-1999 OMNI data set of near-Earth solar wind field and plasma observations and data from various other solar wind measuring spacecraft away from the Earth in NSSDC's COHOWeb system. By using the times of interplanetary observations and the observed solar wind speeds (which vary between 300 km/s and 700+ km/s for varying plasma elements but are assumed constant versus solar distance for any given element for this analysis), they were able to time stamp each observation with the time the observed plasma left the Sun. They then assumed a zero point for solar longitude at a time in 1962, prior to any data availability. For each of many assumed solar rotation rates, they were able to assign solar surface longitudes to each plasma element whose solar surface departure time had previously been determined. Then for each assumed rotation rate, they took averages of the flow speeds and magnetic field radial components, observed from many spacecraft over nearly 40 years, in each of many longitude bins.

The expectation was that any shorter-lived longitudinal variations would be washed out by averaging such a long run of data. However, for an assumed solar rotation period of 27.03 (+/-.02) days, significant longitudinal variations were obtained in both flow speed (amplitude ~ 30 km/s) and magnetic field radial component (amplitude ~ 0.2 nT). In the analysis published in the February 2000 issue of the Journal of Geophysical Research, the authors conclude that the solar magnetic dipole re-establishes the same longitude after each 11-year flip, which is an unexpected and significant result on solar processes.

The authors used some 1960's data from Mariner 2 (which initially confirmed the continued existence of the solar wind) and from Pioneers 6 and 7. It was, however, the longevity of the OMNI data set as well as the uniformity of the OMNI and COHOWeb data that enabled and greatly facilitated, respectively, this significant study.
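The described procedure is straightforward to sketch. The following Python fragment is entirely my own schematic of the back-tracking and longitude-binning step (array names and bin count are invented; this is not the authors' code), assuming NumPy arrays of observation times in days, speeds in km/s, and spacecraft distances in AU:

```python
import numpy as np

AU_KM = 1.496e8  # kilometres per astronomical unit

def longitude_averages(t_obs, speed, r_au, period, t0=0.0, nbins=36):
    """Back-track each plasma element to the Sun at its observed (constant)
    speed, assign a longitude for the trial rotation period, and average
    the flow speeds per longitude bin."""
    t_depart = t_obs - (r_au * AU_KM / speed) / 86400.0   # solar departure time, days
    lon = (360.0 * (t_depart - t0) / period) % 360.0      # assigned longitude, degrees
    idx = (lon * nbins / 360.0).astype(int)
    return np.array([speed[idx == b].mean() for b in range(nbins)])

# Scanning trial periods near 27 days and looking for the largest peak-to-peak
# amplitude in the binned averages mimics the published search, e.g.:
# amps = {p: longitude_averages(t, v, r, p).ptp() for p in np.arange(26.9, 27.2, 0.01)}
```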
Blue Whales and Man-Made Noise

Blue whale vocal behavior is affected by man-made noise, even when that noise does not overlap the frequencies the whales use for communication, according to new research published Feb. 29 in the open access journal PLoS ONE. The whales were less likely to emit calls when mid-frequency active (MFA) sonar from ships was present, but were more likely to do so when ship sounds were nearby, the researchers report.

The data show an acoustical response from blue whales to MFA sonar and ship noise. In particular, there is a disruption of the D call production of these animals with MFA sonar. The implications of such a response are unknown to date, but owing to the low received level, a single source of MFA sonar may be capable of affecting the animals' vocal behavior over a substantial area. Additionally, nearby ships elicit more intense D calling by blue whales.

The use of sound for whale communication and acquisition of information about the environment has evolved across the years and constitutes an important aspect of baleen whale behavior. Given the increasing level of anthropogenic (human) noise in the ocean, there has been concern that high-intensity anthropogenic noise may impact communication and other behaviors involving whale sound production. Estimates made by Cummings and Thompson (1971) suggest the source level of sounds made by blue whales is between 155 and 188 decibels when measured relative to a reference pressure of one micropascal at one meter. All blue whale groups make calls at a fundamental frequency between 10 and 40 Hz; the lowest frequency sound a human can typically perceive is 20 Hz. Blue whale calls last between ten and thirty seconds.

The reason for blue whale vocalization is unknown. Richardson in 1995 discussed six possible reasons:
- Maintenance of inter-individual distance
- Species and individual recognition
- Contextual information transmission (for example feeding, alarm, courtship)
- Maintenance of social organization
- Location of topographic features
- Location of prey resources

For further information: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0032681#s3
When a baseball is thrown or hit, the resulting motion of the ball is determined by Newton's laws of motion. From Newton's first law, we know that the moving ball will stay in motion in a straight line unless acted on by external forces. A force may be thought of as a push or pull in a specific direction; a force is a vector quantity. If the initial velocity and direction are known, and we can determine the magnitude and direction of all the forces on the ball, then we can predict the flight path using Newton's laws.

This slide shows the three forces that act on a baseball in flight. The forces are the weight, drag, and lift. Lift and drag are actually two components of a single aerodynamic force acting on the ball. Drag acts in a direction opposite to the motion, and lift acts perpendicular to the motion. Let's consider each of these forces separately.

Weight is a force that is always directed toward the center of the earth. In general, the magnitude of the weight depends on the mass of an object as determined by Newton's law of gravitation. By rule, the weight of a major league baseball is 5 ounces. A baseball is made with a solid core, a string wrapping around the core, and a stitched covering, so the weight is distributed throughout the ball. But we can often think of the weight as collected and acting through a single point called the center of gravity. The center of gravity is the average location of the weight of an object. To first order, the center of gravity for a baseball is located at the exact center of the ball. In flight, the ball rotates about the center of gravity. Newton's laws of motion describe the translation of the center of gravity.

The physics describing the rotation, translation, weight and center of gravity of a baseball is the same for any ball. A softball is larger and slightly heavier (6.25 ounces) than a baseball. So the trajectory of a batted softball will be similar to, but not the same as, a batted baseball. In the software described below, the student can vary the type of ball to see the difference that weight produces in the flight of a ball.

Strictly speaking, the weight of the ball should not be specified in ounces. The ounce (oz.) is a measure of mass and not of weight. Weight is a force, mass times acceleration, and is not equal to the mass of an object. The pound is a measure of force. Unfortunately, people often use the units for weight and mass interchangeably, the assumption being that we are talking about the weight at the surface of the Earth where the acceleration is a constant (32.2 ft/sec^2 or 9.8 m/sec^2). So when the rule states that the ball weighs 5 oz, it should more correctly specify that the weight is 5/16 lb. At NASA, we have to be very careful of the distinction between mass and weight. On Mars, the mass of a baseball is the same as on Earth. But since the gravitational acceleration on Mars is 1/3 that of the Earth, the weight of a baseball on Mars is only 5/48 lb.

As the ball moves through the air, the air resists the motion of the ball, and the resistance force is called drag. Drag is directed along and opposed to the flight direction. In general, there are many factors that affect the magnitude of the drag force, including the size and shape of the object, the square of the velocity of the object, and conditions of the air, particularly the viscosity of the air. Determining the magnitude of the drag force is difficult because it depends on the details of how the flow interacts with the surface of the object. For a baseball, this is particularly difficult because the stitches used to hold the ball together are not uniformly or symmetrically distributed around the ball.
Depending on the orientation of the ball in flight, the drag changes as the flow is disturbed by the stitches. To determine the magnitude of the drag, aerodynamicists normally use a wind tunnel to measure the drag on a model. For a baseball, the drag can be determined experimentally by throwing the ball and accurately measuring the change in velocity as the ball passes between two points of known distance. A softball is slightly larger than a baseball, so the magnitude of the drag force will be different for a softball. Students can use the software mentioned below to study these differences.

Lift is the component of the aerodynamic force that is perpendicular to the flight direction. Airplane wings generate lift to overcome the weight of the airplane and allow the airplane to fly. A spinning ball also generates aerodynamic lift. Like the drag, the magnitude of the lift depends on several factors related to the conditions of the air and the object, and the relative velocity between the object and the air. For a spinning ball, the speed of rotation affects the magnitude of the aerodynamic force, and the direction of the force is perpendicular to the axis of rotation. The orientation of the axis of rotation can be varied by the pitcher when the ball is thrown. If the axis is vertical, the lift force is horizontal and the ball can be made to curve to one side. If the axis is horizontal, the lift force is vertical and the ball can be made to dive or loft depending on the direction of rotation.

The stitches on a baseball introduce some additional complexity in the generation of lift and drag. For any object, the aerodynamic force acts through the center of pressure. The center of pressure is the average location of the aerodynamic forces on an object. For an ideal, smooth ball, symmetry considerations place the center of pressure at the center of the ball, along with the center of gravity. But a baseball in flight is neither smooth nor symmetric because of the stitches. So the center of pressure for a baseball moves slightly about the center of the ball with time, depending on the orientation of the stitches. The time-varying aerodynamic force causes the ball to move erratically. This motion is the source of the "dancing" knuckleball that confuses both batters and catchers alike. To account for the complexities when making predictions of the lift, aerodynamicists make an initial prediction using theory, and then correct the prediction using experimental data. The lift coefficient, Cl, was determined by high speed photography of the flight of a pitched ball.

The motion of the ball through the air depends on the relative strength and direction of the forces shown above. We have built two simulation packages that look at the physical problem of pitching a curve ball and of the flight of a baseball that is hit from home plate. The curve ball problem involves all three forces, with the lift force causing the ball to curve. The simulation calculates the magnitude of the lift force, and it can be shown that even big league pitchers cannot generate enough lift force to overcome the weight of the ball. There are no rising fastballs. The hit baseball problem considers only the forces of drag and weight. The simulator demonstrates the important role that atmospheric conditions play on the flight of a baseball. The actual flight is very different from the idealized flight that occurs when drag is neglected.

The figure on this web page was created by Elizabeth Morton, of Magnificat High School, during a "shadowing" experience at NASA Glenn during May of 2007.
Australia worksheets for 2nd and 3rd grades. 13 printable activities. This 13-page Worksheet Packet is perfect for your Unit Study on Australia! In this Australia Worksheet Packet, you’ll find: - Australia’s Flag – Color the Australian flag, matching the template provided - All About Australia Word Search - Australia: Continent & Country – Find and label each place on the map - Australia: Fill-in-the-Blank - The Animals of Australia – Choose an Australian animal from the list and complete the mini-report - Australia and The World – Use a globe to answer the questions about Australia’s location in the world - What Do You Know About Australia? – Discover the national symbols of Australia and study the Australian Coat of Arms - Reading Comprehension – Read the short narrative about Australia’s history and answer the questions - Correct the Sentence! – Correct the capitalization and punctuation errors - Name That Coin! – Match each Australian coin to the correct description - Australian Notes – Practice skip counting while answering the questions regarding Australia’s currency - Australian Definitions: Cut & Paste – Learn vocabulary words in this unique cut & paste activity. Worksheets are geared towards 1st-3rd Graders. However, they can easily be used with younger learners (with mom’s help) or possibly even 4th-5th graders.
During this topic, we found out about the Shang Dynasty through creative and inspiring learning. We: - Learned what life was like under Shang rule. - Described the function and purpose of Shang artefacts. - Explored and described a specific aspect of Shang religion. - Located the Shang Dynasty on a map of China. The highlight of this topic, and something the children especially enjoyed, was creating their own Chinese calendars!
Persuasion is an art, and the strength of an argument is largely based on the persuasive words and phrases that are used. The most important words and phrases to use are those that convey knowledge and confidence and avoid indecisiveness and uncertainty. For example, one should never use the phrases "I think" or "I guess." This demonstrates uncertainty; instead, use phrases such as "I know," "from experience I can tell you," or "it's a fact that..." The message should be delivered as forcefully and confidently as possible. An argument should also be directed at the opponent or audience using the word "you." When the word "you" is used, it involves the opponent or audience in a personal way. An example would be the phrase "My view is the better view, and it will make your life better." If the opponent or audience is made to feel that adopting a particular view will benefit them in some way, they are more likely to accept the conclusions. Another strategy is to use words and phrases that address the counter-argument. If a position is being argued for, the counter-argument should be addressed and revealed to be inferior to the position advocated by the arguer. Some great phrases to use include: "I used to think the same way as my opponent, but then I learned...," or "position X will lead to undesirable consequences, but my position avoids all of them."
Hamilton's principle states that the differential equations of motion for any physical system can be re-formulated as an equivalent integral equation. Thus, there are two distinct approaches for formulating dynamical models. It applies not only to the classical mechanics of a single particle, but also to classical fields such as the electromagnetic and gravitational fields. Hamilton's principle has also been extended to quantum mechanics and quantum field theory—in particular the path integral formulation of quantum mechanics makes use of the concept—where a physical system randomly follows one of the possible paths, with the phase of the probability amplitude for each path being determined by the action for the path. Solution of differential equation Empirical laws are frequently expressed as differential equations, which describe how physical quantities such as position and momentum change continuously with time, space or a generalization thereof. Given the initial and boundary conditions for the situation, the "solution" to these empirical equations is one or more functions that describe the behavior of the system and are called equations of motion. Minimization of action integral Action is a part of an alternative approach to finding such equations of motion. Classical mechanics postulates that the path actually followed by a physical system is that for which the action is minimized, or more generally, is stationary. In other words, the action satisfies a variational principle: the principle of stationary action (see also below). The action is defined by an integral, and the classical equations of motion of a system can be derived by minimizing the value of that integral. This simple principle provides deep insights into physics, and is an important concept in modern theoretical physics.
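For reference, the integral statement can be written compactly. The following LaTeX lines give the standard action functional and the Euler–Lagrange equations that result from requiring the action to be stationary; this is textbook material rather than anything specific to this page.

```latex
% Action functional for a path q(t) with Lagrangian L(q, \dot{q}, t):
S[q] = \int_{t_1}^{t_2} L\bigl(q(t), \dot{q}(t), t\bigr)\, dt

% Hamilton's principle: the physical path makes the action stationary,
\delta S = 0,

% which is equivalent to the Euler--Lagrange equations of motion:
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0
```

Solving the Euler–Lagrange equations for a given Lagrangian recovers exactly the differential equations of motion described in the previous paragraph, which is why the two formulations are equivalent.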
Endangered species can be defined as those populations of organisms facing the risk of extinction, mainly because of their reduction in numbers coupled with changes in environmental conditions. Human beings have a direct influence in endangering these species. The World Wildlife Fund is one of the most famous conservation organizations in the world and has had wide influence on the need to conserve nature. It has provided a list of the top five animals which face extinction if nothing is done to conserve them. If you’re an avid adventure traveler and animal lover, you may want to try to visit these animals in their natural habitats before it’s too late. 1. Javan Rhino At the moment, the Javan rhino population stands at 45 rhinos in the world, all located in Indonesia. Their habitat in Ujung Kulon Park has been fenced all round. It is argued that volcanic eruptions and other natural disasters have driven them toward extinction. Moreover, poaching has greatly reduced their numbers. Some cultures still believe that rhino horns have medicinal qualities, which is why poaching is so rampant. To monitor and protect these creatures effectively, conservation groups have relocated them to other areas so as to diversify and strengthen the existing species. 2. Giant pandas Giant pandas were mainly located in the Chinese mountain ranges, inhabiting the woodland areas. Due to extensive human degradation of these habitats, the species was pushed increasingly further into the mountain ranges. Their population now stands at 1600, three hundred of which are in captivity. Their estimated life span is approximately thirty years. Females give birth to one cub every two years. To increase their numbers, conservation groups have devised programs to help them recover from poaching and habitat loss. 3. Wild tigers Statistics from the World Wildlife Fund show that 97% of tigers have perished and only three thousand now remain. Poaching has greatly contributed to the reduction in their numbers; many Chinese still believe that their bones have some medicinal value. Some Chinese cultures also use their skin and bones for decoration purposes. As the human population continues to grow, the habitats of the wild tigers get encroached upon by humans. Deforestation has also been widespread in many rural areas of China, with forest cleared in favor of livestock keeping. It is estimated that by the year 2022, the Chinese Year of the Tiger, tigers will have become extinct. 4. Elephants Elephants were common in Africa and Asia in the past. The 19th-century ivory trade brought adverse effects, leading to poaching in many African countries. In the Asian countries, elephant tusks remain a lucrative market. Elephants have also come into conflict with many people, because they end up destroying human farms while grazing, as they require a lot of food to support their heavy weight. 5. Addax The addax, also known as the screwhorn antelope, is almost extinct. These animals are mostly found on the fringes of the Sahara in Africa, where they are hunted mainly for their meat. Many people in the region also kill the addax to prevent them from grazing on their farmland. Their population has been drastically reduced, by 80% within a period of thirty years. This has led many conservation groups to include them on the extinction list. Though it may take many years to restore their numbers, conservationists have initiated schemes to increase their breeding.
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills. This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms. Click here for a complete list of Reading Like a Historian lessons, and click here for a complete list of materials available in Spanish.
You may have seen this image making the rounds on social media, but is it really true? Sure, Earth is the only place with liquid water and we know it rains here, but do other worlds also have rain? Perhaps rains of methane, iron, or even diamonds? Let’s find out. Does it rain sulfuric acid on Venus? Venus is the second planet from the Sun, and in many ways, it is just like Earth. It’s similar in size, mass, composition and even proximity to the Sun — but that’s where the similarities end. The atmosphere of Venus is composed of 96.5% carbon dioxide, while most of the remaining 3.5% is nitrogen. Its atmosphere is extremely dense, and it’s estimated that the atmospheric mass is 93 times that of Earth’s atmosphere, whereas the pressure at the planet’s surface is about 92 times that at Earth’s surface. Early evidence pointed to the sulfuric acid content in the atmosphere, but we now know that it is a rather minor (though still significant) constituent of the atmosphere. Because CO2 is a greenhouse gas and Venus has so much of it, temperatures on the planet reach a scorching 462 °C — much higher than that of Mercury, which is much closer to the Sun. The Venusian atmosphere supports opaque clouds made of sulfuric acid, extending from about 50 to 70 km. Beneath the clouds, there is a layer of haze down to about 30 km and below that it is clear. Above the dense CO2 layer there are thick clouds consisting mainly of sulfur dioxide and sulfuric acid droplets. The thing is, there is no rainfall on the surface of Venus — while sulfuric acid rain falls in the upper atmosphere, it evaporates around 25 km above the surface. Also, sulfur dioxide concentrations in the atmosphere dropped by a factor of 10 between 1978 and 1986, which suggests that the sulfur in the atmosphere actually comes from volcanic eruptions. The clouds are extremely acidic, and there is also lightning on Venus. The sulfuric acid droplets can be highly electrically charged, and so they offer the potential for lightning. The surface of Venus can be accurately described as a hellish and unforgiving place. Verdict: It does rain sulfuric acid on Venus, but not on the surface, rather at 25 km high in the atmosphere. The sulfur may come from volcanic eruptions. Does it rain Glass on HD 189733b? HD 189733b is an extrasolar planet approximately 63 light-years away from the Solar System. The planet was discovered in 2005. With a mass 13% higher than that of Jupiter, HD 189733b orbits its host star once every 2.2 days, making it a so-called hot Jupiter. Hot Jupiters are a class of extrasolar planets whose characteristics are similar to Jupiter, but that have high surface temperatures because they orbit very close to their star. The planet was discovered using Doppler spectroscopy — an indirect method for detecting extrasolar planets. Basically, you don’t observe the planet itself; you study its star and notice any tiny wobbles in it via Doppler shifts. In 2008, a team of astrophysicists managed to detect and monitor the planet’s visible light, the first such success in history. This result was further improved by the same team in 2011. They found that the planetary albedo is significantly larger in blue light than in the red. But the blue doesn’t come from an ocean or some watery surface – it comes from a hazy, turbulent atmosphere believed to be laced with silicate particles – the stuff of which natural glass is made.
The planet has incredibly fast winds and an estimated temperature of over 1000 degrees Celsius, so the rain is likely more horizontal than vertical. “It rains glass, sideways, in howling 7,000 kilometre-per-hour winds,” said Frederic Pont of the University of Exeter. NASA astronomers now believe that the planet has a scorching temperature around two times higher than that of Venus, and almost certainly features a dry atmosphere — there’s almost certainly no water on its surface, though there is a possible condensate in its atmosphere, magnesium silicate (MgSiO3), raining down as solid fragments. Verdict: We don’t know for sure, but it likely rains silicate particles (you can consider them glass) on the planet we call HD 189733b. Does it rain Diamonds on Neptune? Neptune is the eighth and farthest planet from the Sun in the Solar System (sorry, Pluto). Neptune’s composition is similar to that of Uranus and different from that of gas giants like Saturn and Jupiter. Neptune’s atmosphere is composed primarily of hydrogen and helium, along with traces of hydrocarbons and possibly nitrogen; however, it contains a higher proportion of “ices” such as water, ammonia, and methane. Neptune’s weather is characterized by extremely dynamic storm systems, with winds reaching speeds of almost 600 m/s (2160 km/h). The abundance of methane, ethane, and ethyne at Neptune’s equator is 10–100 times greater than at the poles. It has been theorized that Uranus and Neptune actually crush methane into diamonds, and lab experiments seemed to confirm that this is possible. However, you need significant pressures to do that, and you need to go some 7000 km inside the planet – but keep in mind, the planet is made out of gas (roughly 80% hydrogen, 19% helium and 1% methane). “Once these diamonds form, they fall like raindrops or hailstones toward the center of the planet,” said Laura Robin Benedetti, a graduate student in physics at UC Berkeley. Diamonds may be very rare on Earth, but astronomers believe that they are very common in the universe. Molecular-sized diamonds have been found in meteorites, and recent experiments suggest that large amounts of diamonds are formed from methane on the ice giant planets such as Uranus and Neptune. Some planets in other solar systems may consist of almost pure diamond. As for the rain, it’s estimated that at a depth of 7000 km, the conditions may be such that methane decomposes into diamond crystals that rain downwards like hailstones. Neptune and Uranus aren’t unique in this regard. There is a very good chance that many other gas giants in our galaxy have similar atmospheres. In fact, a recent study found that one particular planet called 55 Cancri E has a mantle that may be mostly diamond. That’s because the planet’s composition contains high levels of carbon, which, at expected temperatures and pressures, would be compressed into diamonds. Verdict: It likely rains diamonds on Neptune, but not on the surface — 7000 km deep into the depths of the gas planet. Does it rain Iron on OGLE-TR-56b? Out of all the planets here, we know the least about OGLE-TR-56b. Astronomers from the Harvard-Smithsonian Center for Astrophysics (CfA) in Cambridge detected it back in 2003. At the time, it was the farthest planet ever discovered, and although that record has long been beaten, we haven’t really learned all that much about it. Even its Wikipedia entry is a mere paragraph with links to a few studies presenting its discovery.
OGLE-TR-56b is also a hot Jupiter, with an estimated surface temperature of 2000 degrees Celsius, which is hot enough to form clouds made of iron atoms. We have no direct information to confirm this, although astronomers reported evidence for iron rain on brown dwarfs — so-called “failed stars”, objects too big to be a planet but too small to be a star. Verdict: We don’t know if it rains iron on OGLE-TR-56b… but it’s certainly possible. It almost certainly rains iron on some brown dwarfs. Does it rain Methane on Titan? Titan is the largest moon of Saturn. It is the only natural satellite known to have a dense atmosphere, and the only object other than Earth where clear evidence of stable bodies of surface liquid has been found. Titan has liquid seas made of hydrocarbon, lakes, mountains, fog, underground water oceans and yes, it does rain methane on Titan. In fact, Earth and Titan are the only worlds in the Solar System where liquid rains on a solid surface — though again, the rain is methane and not water. Interestingly enough, in many ways, the weather on Titan is similar to that on Earth. The climate — including wind and rain — creates surface features similar to those of Earth, such as dunes, rivers, lakes, seas (probably of liquid methane and ethane), and even deltas. The same types of weather patterns present on Earth are also found on Titan, and Titan’s methane cycle is a good analogue to Earth’s water cycle, although at a much lower temperature. Titan gets 100 times less solar radiation than Earth, so the average surface temperature is about −179 °C. At this temperature water ice has an extremely low vapor pressure, so the atmosphere is nearly free of water vapor. However, the methane in the atmosphere causes a substantial greenhouse effect which keeps the surface of Titan at a much higher temperature than it would otherwise be. The terrain on Titan is likely not made up of small grains of silicates like the sand on Earth, but rather might have formed when liquid methane rained and eroded the ice bedrock, possibly in the form of flash floods. The satellite even has dunes, much like Earth’s own deserts. Clouds typically cover 1% of Titan’s disk, though outburst events have been observed in which the cloud cover rapidly expands to as much as 8%. The weather on Titan is dominated by Saturn’s seasons; it was summer in Titan’s southern hemisphere until 2010, when Saturn’s orbit, which governs Titan’s motion, moved Titan’s northern hemisphere into the sunlight. Verdict: It does rain methane on Titan. Rubies and sapphires on HAT-P-7b Are diamonds just not enough for you? Signs of powerful changing winds have been detected on a planet 16 times larger than Earth called HAT-P-7b, but that’s hardly the most impressive thing about this planet. Although it’s hard to confirm this, astronomers believe that the clouds on this planet would be made of corundum — a crystalline form of aluminium oxide which forms rubies and sapphires. While such a sight would no doubt be visually stunning, it’s also a hellish place to be. Aside from these unusual clouds, HAT-P-7b remains very important as the first detection of weather on a gas giant planet outside the solar system. Verdict: We’re not sure, but it might rain rubies and sapphires on HAT-P-7b. The universe is a big and wild place, and we’re only barely starting to scratch its surface. While it may rain water on Earth, that’s not the rule by any means — on many different planets, it can rain many different things. Who knows what we’ll discover in the future?
Conservation of Energy Bernoulli’s equation can be considered to be a statement of the conservation of energy principle appropriate for flowing fluids. It is one of the most important and useful equations in fluid mechanics. It relates pressure and velocity in an inviscid, incompressible flow. Bernoulli’s equation has some restrictions on its applicability, summarized in the following points: - steady flow system, - density is constant (which also means the fluid is incompressible), - no work is done on or by the fluid, - no heat is transferred to or from the fluid, - no change occurs in the internal energy, - the equation relates the states at two points along a single streamline (not conditions on two different streamlines). Under these conditions, the general energy equation is simplified to: p + ρv^2/2 + ρgh = constant along a streamline. This equation is the most famous equation in fluid dynamics. Bernoulli’s equation describes the qualitative behavior of flowing fluid that is usually labeled with the term Bernoulli’s effect. This effect causes the lowering of fluid pressure in regions where the flow velocity is increased. This lowering of pressure in a constriction of a flow path may seem counterintuitive, but seems less so when you consider pressure to be energy density. In the high-velocity flow through the constriction, kinetic energy must increase at the expense of pressure energy. The dimensions of the terms in the equation are kinetic energy per unit volume. Extended Bernoulli’s Equation There are two main assumptions that were applied in the derivation of the simplified Bernoulli’s equation. - The first restriction on Bernoulli’s equation is that no work is allowed to be done on or by the fluid. This is a significant limitation, because most hydraulic systems (especially in nuclear engineering) include pumps. This restriction prevents two points in a fluid stream from being analyzed if a pump exists between the two points. - The second restriction on the simplified Bernoulli’s equation is that no fluid friction is allowed in solving hydraulic problems. In reality, friction plays a crucial role. The total head possessed by the fluid cannot be transferred completely and losslessly from one point to another. In reality, one purpose of pumps incorporated in a hydraulic system is to overcome the losses in pressure due to friction. Due to these restrictions, most practical applications of the simplified Bernoulli’s equation to real hydraulic systems are very limited. In order to deal with both head losses and pump work, the simplified Bernoulli’s equation must be modified. The Bernoulli equation can be modified to take into account gains and losses of head. The resulting equation, referred to as the extended Bernoulli’s equation, is very useful in solving most fluid flow problems. The following equation is one form of the extended Bernoulli’s equation: h1 + v1^2/(2g) + p1/(ρg) + Hpump = h2 + v2^2/(2g) + p2/(ρg) + Hfriction, where: h = height above reference level (m) v = average velocity of fluid (m/s) p = pressure of fluid (Pa) Hpump = head added by pump (m) Hfriction = head loss due to fluid friction (m) g = acceleration due to gravity (m/s^2) The head loss (or the pressure loss) due to fluid friction (Hfriction) represents the energy used in overcoming friction caused by the walls of the pipe. The head loss that occurs in pipes is dependent on the flow velocity, pipe diameter and length, and a friction factor based on the roughness of the pipe and the Reynolds number of the flow.
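The dependence just described is commonly captured by the Darcy–Weisbach relation, hf = f · (L/D) · v^2/(2g). The snippet below is a minimal sketch of that relation; the pipe length, diameter, velocity, and friction factor are chosen purely for illustration and are not values from this article.

```python
def head_loss(f, length, diameter, velocity, g=9.81):
    """Darcy-Weisbach head loss due to pipe friction, in meters."""
    return f * (length / diameter) * velocity**2 / (2 * g)

# Illustrative values: 100 m of 0.7 m pipe, 17 m/s flow, and a
# friction factor of 0.02 (which would normally come from a Moody
# chart, given the pipe roughness and the Reynolds number).
print(round(head_loss(0.02, 100.0, 0.7, 17.0), 1), "m of head")  # ~42.1 m
```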
A piping system containing many pipe fittings and joints, tube convergence, divergence, turns, surface roughness and other physical properties will also increase the head loss of a hydraulic system. Although the head loss represents a loss of energy, it does not represent a loss of the total energy of the fluid. The total energy of the fluid is conserved, as a consequence of the law of conservation of energy. In reality, the head loss due to friction results in an equivalent increase in the internal energy (increase in temperature) of the fluid. Most methods for evaluating head loss due to friction are based almost exclusively on experimental evidence. This will be discussed in the following sections. Examples – Bernoulli’s Principle Bernoulli’s Effect – Relation between Pressure and Velocity This is an illustrative example; the following data do not correspond to any real reactor design. When Bernoulli’s equation is combined with the continuity equation, the two can be used to find velocities and pressures at points in the flow connected by a streamline. The continuity equation is simply a mathematical expression of the principle of conservation of mass. For a control volume that has a single inlet and a single outlet, the principle of conservation of mass states that, for steady-state flow, the mass flow rate into the volume must equal the mass flow rate out. Determine the pressure and velocity within a cold leg of the primary piping and determine the pressure and velocity at the bottom of the reactor core, which is about 5 meters below the cold leg of the primary piping. - Fluid of constant density ρ ~ 720 kg/m3 (at 290°C) is flowing steadily through the cold leg and through the core bottom. - Primary piping flow cross-section (single loop) is equal to 0.385 m2 (piping diameter ~ 700mm) - Flow velocity in the cold leg is equal to 17 m/s. - Reactor core flow cross-section is equal to 5m2. - The gauge pressure inside the cold leg is equal to 16 MPa. As a result of the continuity principle, the velocity at the bottom of the core is: vinlet = vcold · Apiping / Acore = 17 × 1.52 / 5 ≈ 5.17 m/s (here Apiping ≈ 1.52 m2 is the combined flow cross-section of all the primary loops, not the single-loop value quoted above). As a result of Bernoulli’s principle, the pressure at the bottom of the core (core inlet) is: pinlet = pcold + ρ(vcold^2 − vinlet^2)/2 + ρgΔh = 16 MPa + 0.094 MPa + 0.035 MPa ≈ 16.13 MPa. Bernoulli’s Principle – Lift Force In general, the lift is an upward-acting force on an aircraft wing or airfoil. There are several ways to explain how an airfoil generates lift. Some theories are more complicated or more mathematically rigorous than others. Some theories have been shown to be incorrect. There are theories based on Bernoulli’s principle, and there are theories based directly on Newton’s third law. The explanation based on Newton’s third law states that the lift is caused by a flow deflection of the airstream behind the airfoil. The airfoil generates lift by exerting a downward force on the air as it flows past. According to Newton’s third law, the air must exert an upward force on the airfoil. This is a very simple explanation. Bernoulli’s principle combined with the continuity equation can also be used to determine the lift force on an airfoil, if the behaviour of the fluid flow in the vicinity of the foil is known. In this explanation, the shape of the airfoil is crucial. The shape of an airfoil causes air to flow faster on top than on bottom. According to Bernoulli’s principle, faster moving air exerts less pressure, and therefore the air must exert an upward force on the airfoil (as a result of a pressure difference). The use of Bernoulli’s principle here may not be entirely correct.
The Bernoulli’s principle assumes incompressibility of the air, but in reality the air is easily compressible. But there are more limitations of explanations based on Bernoulli’s principle. There are two main popular explanations of lift: - Explanation based on downward deflection of the flow – Newton’s third law - Explanation based on changes in flow speed and pressure – Continuity principle and Bernoulli’s principle Both explanations correctly identifies some aspects of the lift forces but leaves other important aspects of the phenomenon unexplained. A more comprehensive explanation involves both changes in flow speed and downward deflection and requires looking at the flow in more detail. See more: Doug McLean, Understanding Aerodynamics: Arguing from the Real Physics. John Wiley & Sons Ltd. 2013. ISBN: 978-1119967514 Bernoulli’s Effect – Spinning ball in an airflow The Bernoulli’s effect has another interesting interesting consequence. Suppose a ball is spinning as it travels through the air. As the ball spins, the surface friction of the ball with the surrounding air drags a thin layer (referred to as the boundary layer) of air with it. It can be seen from the picture the boundary layer is on one side traveling in the same direction as the air stream that is flowing around the ball (the upper arrow) and on the other side, the boundary layer is traveling in the opposite direction (the bottom arrow). On the side of the ball where the air stream and boundary layer are moving in the opposite direction (the bottom arrow) to each other friction between the two slows the air stream. On the opposite side these layers are moving in the same direction and the stream moves faster. According to Bernoulli’s principle, faster moving air exerts less pressure, and therefore the air must exert an upward force on the ball. In fact, in this case the use of Bernoulli’s principle may not be correct. The Bernoulli’s principle assumes incompressibility of the air, but in reality the air is easily compressible. But there are more limitations of explanations based on Bernoulli’s principle. The work of Robert G. Watts and Ricardo Ferrer (The lateral forces on a spinning sphere: Aerodynamics of a curveball) this effect can be explained by another model which gives important attention to the spinning boundary layer of air around the ball. On the side of the ball where the air stream and boundary layer are moving in the opposite direction (the bottom arrow), the boundary layer tends to separate prematurely. On the side of the ball where the air stream and boundary layer are moving in the same direction , the boundary layer carries further around the ball before it separates into turbulent flow. This gives a flow deflection of the airstream in one direction behind the ball. The rotating ball generates lift by exerting a downward force on the air as it flows past. According to Newton’s third law, the air must exert an upward force on the ball. Torricelli’s law, also known as Torricelli’s principle, or Torricelli’s theorem, statement in fluid dynamics that the speed, v, of fluid flowing out of an orifice under the force of gravity in a tank is proportional to the square root of the vertical distance, h, between the liquid surface and the centre of the orifice and to the square root of twice the acceleration caused by gravity (g = 9.81 N/kg near the surface of the earth). In other words, the efflux velocity of the fluid from the orifice is the same as that it would have acquired by falling a height h under gravity. 
The law was discovered by, and named after, the Italian scientist Evangelista Torricelli in 1643. It was later shown to be a particular case of Bernoulli’s principle. Torricelli’s equation is derived for a specific condition: the orifice must be small, and viscosity and other losses must be ignored. If a fluid is flowing through a very small orifice (for example at the bottom of a large tank), then the velocity of the fluid at the large end can be neglected in Bernoulli’s equation. Moreover, the speed of efflux is independent of the direction of flow. In that case, the efflux speed of fluid flowing through the orifice is given by the following formula: v = √(2gh)
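A compact numerical check of the reactor example above, and of Torricelli's formula, can be written in a few lines of Python. The reactor numbers are the illustrative values given earlier on this page; the 2 m depth in the Torricelli line is an arbitrary assumption.

```python
import math

g = 9.81            # m/s^2
rho = 720.0         # kg/m^3, water density at ~290 °C (from the example)

# Continuity: v_core = v_cold * A_piping / A_core
v_cold, a_piping, a_core = 17.0, 1.52, 5.0    # m/s, m^2, m^2 (example data)
v_core = v_cold * a_piping / a_core
print(f"Core inlet velocity: {v_core:.2f} m/s")        # ~5.17 m/s

# Bernoulli between the cold leg and the core inlet, 5 m below:
p_cold, dh = 16.0e6, 5.0                                # Pa, m
p_core = p_cold + 0.5 * rho * (v_cold**2 - v_core**2) + rho * g * dh
print(f"Core inlet pressure: {p_core / 1e6:.2f} MPa")   # ~16.13 MPa

# Torricelli's law: efflux speed from an orifice a depth h below the surface
h = 2.0                                                 # m, illustrative
print(f"Efflux speed: {math.sqrt(2 * g * h):.2f} m/s")  # ~6.26 m/s
```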
WHEN BIOLOGISTS SYNTHESIZE DNA, they take pains not to create or spread a dangerous stretch of genetic code that could be used to create a toxin or, worse, an infectious disease. But one group of biohackers has demonstrated how DNA can carry a less expected threat—one designed to infect not humans nor animals but computers. In new research they plan to present at the USENIX Security conference on Thursday, a group of researchers from the University of Washington has shown for the first time that it’s possible to encode malicious software into physical strands of DNA, so that when a gene sequencer analyzes it the resulting data becomes a program that corrupts gene-sequencing software and takes control of the underlying computer. While that attack is far from practical for any real spy or criminal, it’s one the researchers argue could become more likely over time, as DNA sequencing becomes more commonplace, powerful, and performed by third-party services on sensitive computer systems. And, perhaps more to the point for the cybersecurity community, it also represents an impressive, sci-fi feat of sheer hacker ingenuity. “We know that if an adversary has control over the data a computer is processing, it can potentially take over that computer,” says Tadayoshi Kohno, the University of Washington computer science professor who led the project, comparing the technique to traditional hacker attacks that package malicious code in web pages or an email attachment. “That means when you’re looking at the security of computational biology systems, you’re not only thinking about the network connectivity and the USB drive and the user at the keyboard but also the information stored in the DNA they’re sequencing. It’s about considering a different class of threat.” A Sci-Fi Hack For now, that threat remains more of a plot point in a Michael Crichton novel than one that should concern computational biologists. But as genetic sequencing is increasingly handled by centralized services—often run by university labs that own the expensive gene sequencing equipment—that DNA-borne malware trick becomes ever so slightly more realistic. Especially given that the DNA samples come from outside sources, which may be difficult to properly vet. If hackers did pull off the trick, the researchers say they could potentially gain access to valuable intellectual property, or possibly taint genetic analysis like criminal DNA testing. Companies could even potentially place malicious code in the DNA of genetically modified products, as a way to protect trade secrets, the researchers suggest. “There are a lot of interesting—or threatening may be a better word—applications of this coming in the future,” says Peter Ney, a researcher on the project. Regardless of any practical reason for the research, however, the notion of building a computer attack—known as an “exploit”—with nothing but the information stored in a strand of DNA represented an epic hacker challenge for the University of Washington team. The researchers started by writing a well-known exploit called a “buffer overflow,” designed to fill the space in a computer’s memory meant for a certain piece of data and then spill out into another part of the memory to plant its own malicious commands. But encoding that attack in actual DNA proved harder than they first imagined. 
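To see why a strand of DNA can hold a computer program at all, it helps to remember that each of the four bases can stand for two bits of data. The toy Python sketch below shows one such round-trip encoding; it is purely illustrative and is not the encoding, the FASTQ pipeline, or the exploit the University of Washington team actually used.

```python
# Toy mapping: two bits per base. NOT the researchers' scheme, just an
# illustration that ordinary bytes round-trip through a base sequence.
BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def to_dna(data: bytes) -> str:
    """Encode bytes as a base sequence, four bases per byte."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):          # four 2-bit chunks per byte
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

def from_dna(strand: str) -> bytes:
    """Decode a base sequence back into bytes."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

payload = b"exploit"
strand = to_dna(payload)
print(strand)                    # b"e" = 0x65 encodes as "CGCC", and so on
assert from_dna(strand) == payload
```

The physical constraints described next, such as the G-C ratio and self-folding, are exactly the kind of thing a naive mapping like this one ignores.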
DNA sequencers work by mixing DNA with chemicals that bind differently to DNA’s basic units of code—the chemical bases A, T, G, and C—and each emit a different color of light, captured in a photo of the DNA molecules. To speed up the processing, the images of millions of bases are split up into thousands of chunks and analyzed in parallel. So all the data that comprised their attack had to fit into just a few hundred of those bases, to increase the likelihood it would remain intact throughout the sequencer’s parallel processing. When the researchers sent their carefully crafted attack to the DNA synthesis service Integrated DNA Technologies in the form of As, Ts, Gs, and Cs, they found that DNA has other physical restrictions too. For their DNA sample to remain stable, they had to maintain a certain ratio of Gs and Cs to As and Ts, because the natural stability of DNA depends on a regular proportion of A-T and G-C pairs. And while a buffer overflow often involves using the same strings of data repeatedly, doing so in this case caused the DNA strand to fold in on itself. All of that meant the group had to repeatedly rewrite their exploit code to find a form that could also survive as actual DNA, which the synthesis service would ultimately send them in a finger-sized plastic vial in the mail. The result, finally, was a piece of attack software that could survive the translation from physical DNA to the digital format, known as FASTQ, that’s used to store the DNA sequence. And when that FASTQ file is compressed with a common compression program known as fqzcomp—FASTQ files are often compressed because they can stretch to gigabytes of text—it hacks that compression software with its buffer overflow exploit, breaking out of the program and into the memory of the computer running the software to run its own arbitrary commands. A Far-Off Threat Even then, the attack was fully translated only about 37 percent of the time, since the sequencer’s parallel processing often cut it short or—another hazard of writing code in a physical object—the program decoded it backward. (A strand of DNA can be sequenced in either direction, but code is meant to be read in only one. The researchers suggest in their paper that future, improved versions of the attack might be crafted as a palindrome.) Despite that tortuous, unreliable process, the researchers admit, they also had to take some serious shortcuts in their proof-of-concept that verge on cheating. Rather than exploit an existing vulnerability in the fqzcomp program, as real-world hackers do, they modified the program’s open-source code to insert their own flaw allowing the buffer overflow. But aside from writing that DNA attack code to exploit their artificially vulnerable version of fqzcomp, the researchers also performed a survey of common DNA sequencing software and found three actual buffer overflow vulnerabilities in common programs. “A lot of this software wasn’t written with security in mind,” Ney says. That shows, the researchers say, that a future hacker might be able to pull off the attack in a more realistic setting, particularly as more powerful gene sequencers start analyzing larger chunks of data that could better preserve an exploit’s code. Needless to say, any possible DNA-based hacking is years away. Illumina, the leading maker of gene-sequencing equipment, said as much in a statement responding to the University of Washington paper. “This is interesting research about potential long-term risks. 
We agree with the premise of the study that this does not pose an imminent threat and is not a typical cyber security capability,” writes Jason Callahan, the company’s chief information security officer. “We are vigilant and routinely evaluate the safeguards in place for our software and instruments. We welcome any studies that create a dialogue around a broad future framework and guidelines to ensure security and privacy in DNA synthesis, sequencing, and processing.” But hacking aside, the use of DNA for handling computer information is slowly becoming a reality, says Seth Shipman, one member of a Harvard team that recently encoded a video in a DNA sample. (Shipman is married to WIRED senior writer Emily Dreyfuss.) That storage method, while mostly theoretical for now, could someday allow data to be kept for hundreds of years, thanks to DNA’s ability to maintain its structure far longer than magnetic encoding in flash memory or on a hard drive. And if DNA-based computer storage is coming, DNA-based computer attacks may not be so farfetched, he says. “I read this paper with a smile on my face, because I think it’s clever,” Shipman says. “Is it something we should start screening for now? I doubt it.” But he adds that, with an age of DNA-based data possibly on the horizon, the ability to plant malicious code in DNA is more than a hacker parlor trick. “Somewhere down the line, when more information is stored in DNA and it’s being input and sequenced constantly,” Shipman says, “we’ll be glad we started thinking about these things.”
Using the Excel ROW Function Summary: The Excel ROW function is used to return the row number of a reference cell. For example, ROW(B4) would return 4, since cell B4 is in the fourth row of the worksheet. If a reference isn't provided, the function will return the row in which the function was entered; for example, =ROW() entered in cell C5 returns 5. The COLUMN function can be used if you need to find the column number of a reference cell. reference – Optional. The reference can be blank, in which case the function returns the row in which it was entered, or you can specify a cell or range of cells. The reference argument cannot include multiple references. Usage Notes: ROW returns the row number of a reference cell. Entering a Range of Cells: If the reference argument is a range of cells and ROW is entered as a vertical array, the function will return the row numbers of the reference as a vertical array; for example, =ROW(A1:A3) returns {1;3;3} with the middle value 2, i.e. the array {1;2;3}.
For the purposes of this Assignment, you are going to develop a presentation that could educate the community on a social issue that impacts children and their families. You can choose any of the following social issues that impact children and their families: Child abuse and neglect, teen pregnancy, drug abuse among youth, bullying, teen dating violence, and suicide among youth. The presentation should be developed for students in a school setting, or for individuals in a community setting. The presentation needs to include the components listed below. - Introduction slide that includes a description of the audience of your presentation. Be specific and include details such as the age of the children the presentation is intended for if in a school setting and the types of adults in the community setting (teachers, community members), etc. Please include the specific details in the Notes section of the PowerPoint slide. This slide should also include a justification about why community education is important with regard to the social issue you chose, and the role of community education with the topic. - Background on the social issue that your presentation is covering including some statistics about how many people are impacted by the issue, risk and protective factors related to this issue, and so on. Make sure this information is appropriate to the audience you have chosen. - Information about what your audience can do about the social issue you are discussing (for example, behaviors that people may be able to address). - Discussion of the different ways in which the topic that you chose is a public health issue. - At least two resources (websites, articles, etc.) that your audience can refer to on the topic you are discussing if they want to get further information. Your Assignment should reflect professional writing standards, using proper tone and language. The writing should be correct, accurate, and reflect knowledge of the subject matter. You should include a minimum of four reputable references in your Microsoft® PowerPoint® presentation. Your PowerPoint presentation should consist of at least 15 slides, not including the title page and references slide(s). For help with citations, refer to the APA Quick Reference. For additional writing help, visit the Writing Center and review the guidelines for research, citation, and plagiarism.
All kids deserve to be safe when walking to school, crossing the street, or heading down to a neighbor’s house. Pedestrian safety depends on many factors, including the behavior of those walking and driving, street design, visibility, and other environmental factors. Below are top tips for kids as well as ways to help make streets safer for all. Top Safety Tips - Teach kids to always look left, right, left before crossing the street and to make eye contact with drivers. - Remember to always walk on sidewalks or paths when available. If there are no sidewalks, walk facing traffic as far to the left as possible. - Be seen by drivers by wearing reflectors or carrying lights. - As a driver, remember to yield to pedestrians, watch for small children, drive slowly when children may be nearby, and follow posted speed limits. - When walking or driving, remember to put phones and other devices down to avoid distractions. - Safe Kids Worldwide – Pedestrian Safety - American Academy of Pediatrics – Walking and Biking to School - City of Austin – Vision Zero
This is a quick, simple activity to review food vocabulary with children learning Spanish. Tengo hambre y quiero comer is based on the memory game kids play in Spanish called Me voy de viaje, but uses pictures to provide a visual reminder of the meaning of the words. I play it with groups of 4-8 students and up to 20 food words, but it can also be done with two or three children. Of course, it can easily be adapted to other sets of vocabulary. To play, you need picture cards of the words, and a plate to hold the cards that have been chosen. Printable picture cards like the ones from Do2Learn work well. I print the two-inch cards with no words on card stock. I use a plastic plate with a rim so that the cards do not move too much as the kids pass it around. Place the picture cards face up where everyone can reach them. To begin the activity, the first person says Tengo hambre y quiero comer and chooses a picture from the pile to complete the sentence. For example, she might choose the apple and say una manzana. She passes the plate to the next player, who says Tengo hambre y quiero comer una manzana y… and then chooses another picture and adds that word to the sentence. She puts the picture next to the first one and passes the plate. I encourage the children to point to each card as they say the words. The activity continues this way until the picture cards are used up. This is an effective Spanish language activity because the sentence establishes a context for what you are saying. I teach actions to go with tengo hambre and quiero comer, so the kids do those actions as they say the opening sentence. Most important, the pictures remind the children of the meaning of the Spanish words. There is lots of repetition, and the kids like choosing a food that they really like to eat. In fact, they are often disappointed when someone chooses helado o galletas before they do. You can play this game with any set of related vocabulary. Choose a simple opening sentence in Spanish to establish the situation. For example, with furniture you can say Tengo una casa y en la casa tengo …, with clothes Voy de viaje y voy a llevar…, or with farm animals Voy a la granja y voy a ver…. A shallow box with a simple background can represent a house, a suitcase or a farm that children add the cards to. With larger groups, where not everyone will be able to see the pictures if they are passed around, display the picture cards in a central location. You can also encourage choral repetition of the list of words each time a new one is added.
Recreation consists of activities or experiences carried on within leisure, usually chosen voluntarily by the participant – either because of the satisfaction, pleasure or creative enrichment derived, or because he perceives certain personal or social values to be gained from them. It may also be perceived as the process of participation, or as the emotional state derived from involvement. Recreation refers to all those activities that people choose to do to refresh their bodies and minds and make their leisure time more interesting and enjoyable. Examples of recreation activities are walking, swimming, meditation, reading, playing games and dancing. Leisure refers to the free time that people can spend away from their everyday responsibilities (e.g. work and domestic tasks) to rest, relax and enjoy life. It is during leisure time that people participate in recreation and sporting activities. Sport refers to any type of organized physical activity, e.g. soccer, rugby, football, basketball and athletics. Types of Recreational Activities Breaking recreation down into various areas, classifications, or types might be done in numerous ways. The listing below represents one of the ways that recreation could be categorized for individuals, groups, or leaders planning programs. The listing is shown in random order and does not indicate any order of importance. - Physical activities (sports, games, fitness, etc.) - Social activities (parties, banquets, picnics, etc.) - Camping and outdoor activities (day camps, resident camps, backpacking, float trips, etc.) - Arts and crafts activities (painting, scrapbooking, ceramics, woodworking, etc.) - Dramatic activities (plays, puppetry, skits, etc.) - Musical activities (singing, bands, etc.) - Cultural activities (art appreciation, music appreciation, panels, discussion groups, etc.) - Service activities (fun in doing things for others) Recreation also, of course, includes activities for all age groups (children, senior adults, etc.), as well as various special populations (people with physical or intellectual disabilities, etc.). The Benefits of Participation in Recreational Activities Participation in recreation and sports activities can have many benefits for both the individual and the community. These include: - Health promotion and disease prevention – recreation and sports activities are an enjoyable and effective way to improve health and well-being; they can relieve stress, increase fitness, improve physical and mental health, and prevent the development of chronic diseases, such as heart disease; - Skills development – physical and social skills are some of the many skills that can be developed through participation in recreation and sports activities; - Awareness raising, reduction of stigma and social inclusion – recreation and sports activities are a powerful, low-cost means to foster greater inclusion of people with disabilities; they bring people of all ages and abilities together for enjoyment, and provide people with disabilities the opportunity to demonstrate their strengths and abilities, and promote a positive image of disability; - International peace and development – sport is a universal language that can be used as a powerful tool to promote peace, tolerance and understanding by bringing people together across boundaries, cultures and religions. - Empowerment – recreation and sports activities can empower people with disabilities by positively influencing their self-confidence and self-esteem.
Recreational Institutions defined A recreational institution can be defined as an organized system of social relationships for satisfying the human desire for entertainment, amusement, play, etc. Recreational institution means an area of land containing sleeping accommodations and facilities used for both passive and active forms of recreation, which, without limiting the generality of the foregoing, shall include, but shall not be limited to, the following: children’s camp, religious camp, institutional camp, or other like or similar camp or establishment, but shall not include a tourist establishment. Functions of Recreational Institutions A. Physical Health: Recreational activities, especially outdoor ones, improve one’s health, for example by maintaining lower body fat percentages, lowering blood pressure and cholesterol levels, and increasing muscular strength, flexibility, muscular endurance, body composition and cardiovascular endurance. Overall, they increase one’s stamina and energy level, resulting in more focus for academic activities, and also have an impact on class attendance and attention, thus leading to more learning. And as we all know, “health is wealth”. B. Mental Health: Mental health is essential for overall physical health. Recreational activities help manage stress. They provide a chance to nurture oneself and provide a sense of balance and self-esteem, which can directly reduce anxiety and depression. There is also an increased motivation to learn, as recreation can serve as a laboratory for applying content learned in classroom teaching. It provides a channel for releasing tension and anxiety, thus facilitating emotional stability and resilience. Such activities help students to become more self-reliant, empathic and self-disciplined. C. Improved Quality of Life: People who make recreation a priority are more likely to feel satisfied with their lives overall, according to an American Recreation Coalition study, 2000. Recreational activities help create a balance between academic pressures and physical and mental well-being. The effects of recreation are multifold. It enriches self-expression, the ability for self-fulfillment, interpersonal skills, techniques and methods of using leisure, physical strength, creative expression, and aesthetic sense. Such attributes have a favorable effect on human beings who face limits in everyday life. Therefore, recreation can be used as a tool of therapy (Lee, 2000). Physical activity-based recreation helps participants recover physical strength lost through lack of exercise, and develops the latent ability to achieve self-realization. It also helps people to deal with common day-to-day problems more effectively, as it makes people more optimistic, with a positive outlook on life.
Human organs are in short supply; that is, more people are waiting to receive organs than there are organs available. There are too few donors, not least because of better road safety rules and better healthcare. So scientists and doctors are looking for alternatives. One line of inquiry is xenotransplantation, i.e. the transfer of animal organs to humans. Most studies involve pigs, as their organs are of a similar size to human ones. However, humans are not pigs, and simply putting a pig kidney in a human will not work, as the human immune system will immediately reject the pig’s organ. Organ rejection is already a problem in human-to-human transplantation. An approach to address this issue is to genetically engineer pigs in such a way that their organs are more “human-like”. As yet, no such genetically engineered organ has been successfully implanted in humans. Instead of genetic modification of pigs, other scientists take a different approach. A Californian team of researchers led by professor Pablo Ross has injected human stem cells into a pig embryo. The idea is that those human stem cells will form human organs, which could be used for transplantation. Both lines of research raise ethical questions. Transgenic pigs are problematic from an animal welfare perspective, while the creation of human-animal chimeras also adds to the ethical debates around stem cell research. It is, however, doubtful whether either method will be necessary in the (near) future. For instance, Dutch scientists are working on a treatment to cure sick organs by injecting healthy stem cells – which could be obtained from living donors. There is also a lot of research into growing organs ex vivo from (induced pluripotent) stem cells – including the idea of 3D printing of organs. If those methods prove to be successful, then there will be no need to use animals to solve the shortage of donor organs.
Polarized Light vs Unpolarized Light Polarization is a very important effect discussed in the wave theory of light. The effect of polarization is rarely observed in real life situations, but it is very useful in studying the characteristics of light. It is vital to have a proper understanding of the effect of polarization, polarized light and un-polarized light in order to excel in fields such as modern and classical optics, waves and vibrations, acoustics and various other fields. In this article, we are going to discuss what polarization is, what polarized light and un-polarized light are, their definitions, the variations of polarized light, the applications of polarization, and finally the difference between polarized light and un-polarized light. For one to understand polarized light, he/she must first understand polarization. Polarization is simply defined as a type of orientation of oscillations in a wave. Polarization of a wave describes the direction of oscillation of the wave with respect to the direction of propagation; therefore, only transverse waves display polarization. The oscillation of particles in a longitudinal wave is always in the direction of propagation; therefore, longitudinal waves do not display polarization. There are three types of polarization, namely linear polarization, circular polarization, and elliptical polarization. Imagine a wave travelling through space. If the wave is a mechanical wave, particles get affected by the wave and oscillate. If the particles oscillate on a line perpendicular to the direction of propagation, the wave is said to be linearly polarized. If the particles trace out an ellipse on a plane perpendicular to the direction of propagation, the wave is an elliptically polarized wave. If the particles trace a circle on a plane perpendicular to the direction of propagation, then the wave is said to be circularly polarized. The process of polarizing is done using a polarizer. A polarizer is a device that only allows some fraction of the wave to pass through it. Un-polarized light is the light we generally see daily. Light from ordinary sources is generated as photons with random directions of oscillation with respect to the direction of propagation. Un-polarized light has intensity components in every direction, at all times. If un-polarized light is sent through a polarizer, polarized light can be obtained. Reflection also causes a partial linear polarization in the direction parallel to the reflecting surface. Polaroid glasses are used to polarize light in daily life. Since reflected light is dominated by the horizontal electric component, the Polaroid glass cuts the horizontal intensity. What is the difference between Polarized and Un-polarized Light? • Un-polarized light has an electrical component in every direction, at any given time, but polarized light has the electric component only in one direction at a given time. • When un-polarized light is polarized, it is always reduced in intensity. • Light sources give out un-polarized light, but it is impossible to create polarized light sources without the use of a polarizer.
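One quantitative way to see the intensity reduction mentioned above is Malus's law, I = I0·cos²θ, for linearly polarized light passing through a polarizer whose axis is at angle θ to the light's polarization; an ideal polarizer also cuts un-polarized light to half its intensity. The short sketch below illustrates both facts; the starting intensity is an arbitrary illustrative number.

```python
import math

def malus(i0, theta_deg):
    """Transmitted intensity of linearly polarized light (Malus's law)."""
    return i0 * math.cos(math.radians(theta_deg)) ** 2

i_unpolarized = 100.0               # arbitrary units
i_after_first = i_unpolarized / 2   # ideal polarizer halves un-polarized light

for angle in (0, 30, 45, 60, 90):
    print(angle, "deg ->", round(malus(i_after_first, angle), 1))
# 90 degrees gives zero: crossed polarizers block the light completely.
```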
Rather than being released into the atmosphere and exacerbating the problem of climate change, CO2 can also be used as a raw material for substances required in industrial processes, such as formic acid or methanol. The conversion of CO2 has already been investigated in detail in laboratory studies, with nanodiamonds serving as an environmentally friendly photocatalyst. Researchers from the Fraunhofer Institute for Microengineering and Microsystems IMM are now working with partners to turn this reaction into a continuous process - bringing it much closer to real-world application. Given the damage that CO2 does to the climate, governments and companies are working hard to limit their emissions as much as possible. In cases where it cannot be avoided, however, CO2 could soon be used as a raw material in the production of industrially relevant C1 building blocks such as formic acid or methanol, which contain only one carbon atom. One possible method involves nanodiamonds: CO2 is converted into formic acid by using nanodiamonds as a catalyst and irradiating them with short-wave UV-C light in an aqueous environment. This method is currently being studied in the laboratories of Prof. Anke Krüger at the University of Würzburg (although Prof. Krüger is now working at the University of Stuttgart). Using diamond as a catalyst might sound expensive, but the diamond used in this process is not a costly jewelry-grade diamond; it is a detonation diamond, which is produced on an industrial scale and is therefore relatively inexpensive as a catalyst. Furthermore, it largely consists of carbon and is therefore an environmentally friendly, "green" catalyst. Researchers from Fraunhofer IMM - together with Prof. Krüger and Sahlmann Photochemical Solutions GmbH - are now taking these reactions one step closer to real-world application within the framework of the CarbonCat project. "Up to now, the experiments have been carried out in a batch reactor; i.e., a stirred flask. There are certain disadvantages to this method," says Dr. Thomas Rehm, one of the scientists at Fraunhofer IMM. "Firstly, the contacting between the gas and liquid phase and the catalyst is less than ideal; secondly, the catalyst - i.e., the nanoparticles that are floating around - needs to be separated from the solution after the reaction."
Large-area diamond catalyst
The research team has therefore come up with a way to apply the catalyst to large areas - specifically, reaction plates measuring around 5 by 9 centimeters. "The batch process we have used up to now involves placing all of the components in a flask and waiting until the reaction comes to an end, but we want to achieve continuous operation," explains Rehm. To this end, the researchers have developed a microreactor with an upright reaction plate which features microchannels coated with the diamond catalyst. At the top of the plate is a slit into which water is constantly being pumped. The liquid then runs down the plate. Capillary forces result in the formation of a liquid film with a thickness of 10 to 50 micrometers, which constantly coats the microchannels. The CO2 is directed over the reaction plate from below in a counterflow configuration. "In this way, we can apply much higher quantities of carbon dioxide directly to the catalyst film and in a smaller volume of solution. This improves the gas-liquid-solid contacting, which can result in higher CO2 conversion and hence a larger quantity of formic acid," says Rehm.
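As a rough sense of scale for the figures quoted above, the sketch below (Python) estimates the liquid volume held on one reaction plate and the feed rate implied by a given contact time. The assumptions - full, uniform wetting of the plate, and contact time equal to the film residence time - are mine, for illustration only.

    # Back-of-envelope for the microreactor plate described above.
    plate_area_cm2 = 5 * 9      # reaction plate, about 45 cm^2
    film_um = 30                # film thickness, mid-range of 10-50 micrometers
    contact_s = 12              # mid-range of the 10-15 s quoted below

    film_cm = film_um * 1e-4                   # micrometers -> centimeters
    film_volume_ml = plate_area_cm2 * film_cm  # 1 cm^3 equals 1 mL
    flow_ml_per_min = film_volume_ml / contact_s * 60

    print(f"film volume ~ {film_volume_ml:.3f} mL")        # ~0.135 mL
    print(f"implied feed ~ {flow_ml_per_min:.2f} mL/min")  # ~0.68 mL/min

Even a sub-millilitre film refreshed every few seconds adds up to continuous throughput, which is the point of moving from the stirred flask to the falling-film plate.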
Visible light instead of UV light
Furthermore, the researchers are no longer using energy-intensive UV-C light - as in the case of the nanoscale catalyst - and are instead using visible light, which is less expensive and easier to handle. This requires a modification to the diamond surface, as it needs to capture visible light but still trigger the same reaction as the nanoscale diamond. To this end, the researchers chemically bind metal complexes - organic compounds with a metal center which are able to capture visible light - to the diamond surface. However, these complexes do not cover the entire surface, which means that the liquid and carbon dioxide still come into contact with the diamond layer. When visible light shines on the modified coating, some electrons are lifted out of the diamond crystal lattice and onto the surface of the diamond layer. They are then transferred to the CO2 so that, in combination with the water, formic acid can be formed. "What we have here is a light-powered electron pump," confirms Rehm. In order to supply more electrons, the team can apply a low electrical voltage to the diamond surface. Some milestones - the large-area catalyst and the use of visible light - have already been achieved. One aspect that the research team is still working on is the short contact time: the CO2, water and diamond layer currently have only 10 to 15 seconds for the reaction - not enough time to produce the amount of formic acid required for real-world applications. The researchers are looking at two solutions: more efficient metal complexes to increase the reaction speed, and adapting the reactor to enable longer contact times.
Combination of photochemistry and biocatalysis
In a separate project, a team comprising members from four different Fraunhofer institutes is making further strides with regard to the use of light in chemistry. The project combines photochemical catalysis with biocatalysis - i.e., with reactions in which biological enzymes serve as the catalyst - and thus brings together two very gentle procedures. The aim is to produce fine chemicals with a high degree of enantiomeric purity, as required in applications such as pharmaceuticals or agrochemicals. Here, the research team exploits cascade-like reactions made possible by coupling the two catalytic methods. The consortium hopes to achieve a high degree of synergy for the synthesis of complex molecules in the future.
Basic arithmetic: The basics
The Globe and Mail article
Basic arithmetic is a foundational subject that is often taken for granted, but it is really important, and there are a lot of misconceptions about how it works. You can help dispel some of them by reviewing the basics of basic arithmetic.
What is arithmetic? The basics of arithmetic are defined by a small set of rules written in terms of symbols. These symbols are called operators, and they are used to add, subtract, multiply, divide and round numbers. The addition sign adds one number to another: if you add the number 1 to the number 2, you get the number 3. Subtraction undoes addition: subtracting 1 from 3 gives back 2. You can likewise use the multiplication operator to multiply two numbers, and the division operator to split one number by another. And so on. For more information on basic arithmetic, see "How is arithmetic explained?" or "Why are some things more complicated than others?" Arithmetic was not invented by any single person in the modern era; it developed over thousands of years, and mathematicians such as Carl Friedrich Gauss later formalised its deeper principles (his Disquisitiones Arithmeticae appeared in 1801). Over time these methods were taken up by other mathematicians and used to develop mathematical tools for everything from accounting to the production of financial products. The basic principles of basic math are simple. For instance: add or multiply two values; divide two numbers and then add the result to a third. Multiplication is repeated addition - if you multiply two numbers together, you are adding one of them to itself that many times. The square root of a number is the value that, multiplied by itself, gives the number. The hypotenuse of a right triangle is its longest side, the one opposite the right angle. A fraction expresses a part of a whole as the ratio of two integers, and the quotient of two numbers is the result of dividing one by the other. The product of a positive and a negative number is negative - a fact used in accounting, finance and other fields of study. The decimal point separates the whole-number part of a number from its fractional part. You might have heard that the first digit of a binary number must be zero, but this is incorrect: a binary digit can be 0 or 1. A negative number is simply written with a minus sign, so the negative of 1 is -1.
In fact, the decimal system has ten digits, zero through nine, and the same digits are used to write both positive and negative numbers; negative numbers simply carry a minus sign. You need to be able to work with both types of numbers, because the sign flips when we multiply or divide by a negative value. So let's say we take 2, which is positive, and -3, which is negative. Multiply them and you get -6. Divide -6 by 3 and you get -2. This brings us to the first basic point about arithmetic: the rules do not change with the size or sign of the numbers. The same digits and the same operations let you build any value you like. It's a basic rule of arithmetic. But how do we know a result is correct? One useful tool is the logarithm, which tells us to what power a base must be raised to produce a certain number - roughly speaking, how many digits a number needs. For example, log base 2 of 8 is 3, because multiplying 2 by itself three times gives 8.
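As a quick check on the worked examples above, here is a minimal Python sketch; the numbers are simply the ones used in the text.

    import math

    # Basic operators on the numbers from the text.
    print(1 + 2)    # addition                      -> 3
    print(3 - 1)    # subtraction                   -> 2
    print(2 * -3)   # a positive times a negative   -> -6
    print(-6 / 3)   # division                      -> -2.0

    # A logarithm answers "to what power must the base be raised?"
    print(math.log2(8))   # -> 3.0, because 2 ** 3 == 8

    # The square root is the value that, times itself, gives the number.
    print(math.sqrt(9))   # -> 3.0

Running the snippet confirms each result shown in the comments.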
The strange orbits of 'Tatooine' planetary disks Astronomers using the Atacama Large Millimeter/submillimeter Array (ALMA) have found striking orbital geometries in protoplanetary disks around binary stars. While disks orbiting the most compact binary star systems share very nearly the same plane, disks encircling wide binaries have orbital planes that are severely tilted. These systems can teach us about planet formation in complex environments. In the last two decades, thousands of planets have been found orbiting stars other than our Sun. Some of these planets orbit two stars, just like Luke Skywalker's home Tatooine. Planets are born in protoplanetary disks - we now have wonderful observations of these thanks to ALMA - but most of the disks studied so far orbit single stars. 'Tatooine' exoplanets form in disks around binary stars, so-called circumbinary disks. Studying the birthplaces of 'Tatooine' planets provides a unique opportunity to learn about how planets form in different environments. Astronomers already know that the orbits of binary stars can warp and tilt the disk around them, resulting in a circumbinary disk misaligned relative to the orbital plane of its host stars. For example, in a 2019 study led by Grant Kennedy of the University of Warwick, UK, ALMA found a striking circumbinary disk in a polar configuration. "With our study, we wanted to learn more about the typical geometries of circumbinary disks," said astronomer Ian Czekala of the University of California at Berkeley. Czekala and his team used ALMA data to determine the degree of alignment of nineteen protoplanetary disks around binary stars. "The high resolution ALMA data was critical for studying some of the smallest and faintest circumbinary disks yet," said Czekala. The astronomers compared the ALMA data of the circumbinary disks with the dozen 'Tatooine' planets that have been found with the Kepler space telescope. To their surprise, the team found that the degree to which binary stars and their circumbinary disks are misaligned is strongly dependent on the orbital period of the host stars. The shorter the orbital period of the binary star, the more likely it is to host a disk in line with its orbit. However, binaries with periods longer than a month typically host misaligned disks. "We see a clear overlap between the small disks, orbiting compact binaries, and the circumbinary planets found with the Kepler mission," Czekala said. Because the primary Kepler mission lasted 4 years, astronomers were only able to discover planets around binary stars that orbit each other in fewer than 40 days. And all of these planets were aligned with their host star orbits. A lingering mystery was whether there might be many misaligned planets that Kepler would have a hard time finding. "With our study, we now know that there likely isn't a large population of misaligned planets that Kepler missed, since circumbinary disks around tight binary stars are also typically aligned with their stellar hosts," added Czekala. Still, based on this finding, the astronomers conclude that misaligned planets around wide binary stars should be out there and that it would be an exciting population to search for with other exoplanet-finding methods like direct imaging and microlensing. (NASA's Kepler mission used the transit method, which is one of the ways to find a planet.) Czekala now wants to find out why there is such a strong correlation between disk (mis)alignment and the binary star orbital period. 
"We want to use existing and coming facilities like ALMA and the next generation Very Large Array to study disk structures at exquisite levels of precision," he said, "and try to understand how warped or tilted disks affect the planet formation environment and how this might influence the population of planets that form within these disks." "This research is a great example of how new discoveries build on previous observations," said Joe Pesce, National Science Foundation Program Officer for NRAO and ALMA. "Discerning trends in the circumbinary disk population was only made possible by building on the foundation of archival observational programs undertaken by the ALMA community in previous cycles."
Understanding the eye
To understand how contact lenses work, you first need to understand just a little about the eye. At its most basic, light reflects off an object and passes through the cornea, the clear covering of the eye. Next it travels through the pupil, the black part of the eye, and then through the lens, which focuses the rays on the retina at the back of the eye. The retina is filled with cells called rods and cones, and these take the light and convert it to electrical impulses which are then sent to the brain for processing. Blurred vision is due to what are called 'refractive errors': the shape of the eye stops the light from reaching the retina directly and leads to distortion.
Fixing those errors
To fix those errors, we use a tiny plastic lens that corrects them by forming a film that makes contact with the eye (hence - contact lenses). Contact lenses work in the same way as glasses: they focus the light on the retina and allow the eye to work properly. There are a number of different types of refractive error, which lead to different problems and require different prescriptions to correct. Myopia, or near-sightedness, is where objects far away appear blurred while objects close by appear clear. It happens when the light entering the eye is not accurately focused, and it is caused by the shape of the eye. This is one of the most common conditions, and there are many contact lenses that correct it. Hyperopia, or farsightedness, is the opposite: objects close by are blurred but those further away are clear. Again, this is relatively common and there are many lenses to correct it. Presbyopia is an age-related condition similar to hyperopia, in which items close by become blurry, but it has a different cause: while hyperopia comes from the shape of the eye, presbyopia comes from the eye's lens hardening with age. To correct the condition, multifocal lenses are often used, as they can clear both the near and the far blurriness. Finally, astigmatism is a condition where the irregular shape of the cornea or the lens stops the eye from focusing the light on the retina. A special class of lens, called a toric contact lens, is used to correct this condition. Often, the image of a ball is used to visualise the difference: a sphere lens is like a beach ball, whereas a toric lens is more like a rugby ball. An example of lenses to correct this condition is the Biofinity Toric lens.
For children, self-esteem comes from:
- knowing that they're loved and that they belong to a family and a community that values them
- spending quality time with their families
- being encouraged to try new things, finding things they're good at and being praised for things that are important to them.
The most important thing you can do to foster your child's self-esteem is to tell your child that you love him. Say it often and for no reason other than to show you appreciate your child.
Relationships, connections, belonging and your child's self-esteem
Being connected to other people who care about her is good for your child's self-esteem. It gives her a stronger sense of her place in your immediate and extended family. And being connected to friends and people in the community helps your child learn how to relate to others and can boost her confidence. Here are some ideas for nurturing your child's self-esteem through relationships:
- Strengthen your child's sense of his family, culture and community. For example, show your child family photos and share family stories, take part in community or cultural events like religious festivals, and encourage your child to join a local sporting club or interest group, or join as a family.
- Encourage your child to value being part of your family. One way to do this is by involving your child in chores. When everyone contributes to the smooth running of the household, you all feel important and valued.
- Make your child's friends welcome and get to know them. Encourage your child to have friends over to your house, and make time for your child to go to their houses.
Quality time and your child's self-esteem
When you spend quality time with your child you let your child know she's important to you. Doing things together as a family can help strengthen a sense of belonging and togetherness in your family, which is also good for your child's self-esteem. Here are some ideas:
- Develop family rituals. These could include a story at bedtime, a special goodbye kiss or other ways of doing things that are special to your family.
- Let your child help you with something, so that he feels useful. For example, your preschooler could help you set the table for dinner.
- Plan some regular one-on-one time with your child, doing something that she enjoys, whether it's drawing, doing puzzles, kicking a soccer ball or baking cakes.
Achievements, challenges and your child's self-esteem
Success and achievements can help your child feel good about himself. But your child can also build self-esteem doing things he doesn't always enjoy or succeed at. You can still praise his effort and determination - and remind him that these will help him succeed in other areas, or next time. There are lots of ways to help your child succeed, achieve and cope well with failure:
- When your child has a problem, encourage her to think calmly, listen to other people's points of view and come up with possible solutions to try. This builds important life skills.
- Help your child learn new things and achieve goals. When your child is younger, this might mean praising and encouraging him when he learns something new, like riding a bike. When he's older, it might be taking him to sport and helping him practise.
- Celebrate big and small achievements and successes. And remember to praise your child's effort, not just her results. For example, 'You tried that puzzle piece in lots of different spots and you finally got it right. Well done!'.
- Keep special reminders of your child's successes and progress. You can go through them with your child and talk about your special memories, and the things he has achieved.
- Teach your child that failing is a part of learning. For example, if she keeps missing the ball when she's learning to catch, say 'You're getting closer each time. I can see how hard you're trying to catch it'.
- Teach your child to treat himself kindly when he does fail. You could be a role model here. For example, 'I tried a new recipe, and the cake looks a bit funny. But that's OK. It smells delicious'.
Things that can damage children's self-esteem
Messages that say something negative about children are bad for their self-esteem - for example, 'You are slow, naughty, a bully, a sook ...'. When children do something you don't like, it's better to tell them what they could do instead. For example, 'You haven't done your homework. You need to sit down now and finish your maths questions'. Messages that imply that life would be better without children might harm their self-esteem. For example, 'If it weren't for the children, we could afford a new car'. Ignoring children, treating them like a nuisance and not taking an interest in them are likely to be bad for children's self-esteem. An example might be, 'I am sick and tired of you'. Frowning or sighing all the time when children want to talk to you might have the same effect. Negative comparisons with other children, especially brothers and sisters, are also unlikely to be helpful. Each child in your family is different, with individual strengths and weaknesses. It's better if you can recognise each child's successes and achievements. All parents feel frustrated and tired sometimes. But if parents send the message that they feel like this about their children all the time, children get the message that they're a nuisance. Changes like moving house, school or country, or separation or divorce, might affect your child's self-esteem. If your family is going through experiences like these, try to keep up family rituals and your child's activities, as well as giving your child lots of love. This will help your child feel OK about herself and her identity even as things around her are changing.
Language - the most important aspect of human existence - is never static; it is very dynamic. As such, the world's major events and occurrences impact language significantly. They change vocabulary usage in our day-to-day communication. With the onslaught of Covid-19 on almost all aspects of human life, language is no exception, and neither is the English language. With over 1.35 billion speakers worldwide and over 800 million speakers in Asia, English is one of the languages bearing the imprint of the global Covid-19 pandemic. As a principal language used in business and many formal settings globally, English saw a remarkable shift in commonly used terms; it has also been the main medium for international communication during the pandemic. The vocabulary arising from this unprecedented public health crisis includes new coinages and a plethora of medical terms, phrases, acronyms, abbreviations and collocations. This even led the editors of the Oxford English Dictionary to move from regular quarterly updates to monthly updates to keep pace with the evolving language use and the enormous change in daily vocabulary. Without a doubt, we have also seen novel nuances being attached to some old words to describe our predicaments, fears, griefs, uncertainty and more. Based on historical facts, major events like wars and natural disasters have proven to impact language significantly. Researchers at Michigan State University and other English-language research institutions therefore believe the Covid-19 pandemic, too, will change the way we communicate and bring new additions to English dictionaries.
Change in Commonly Used English Terms
From casual conversations and media reports to written communication, many obscure terms, phrases and abbreviations have been used since early 2020 to convey the intended information effectively. However, not every uncommon word used during the pandemic is new. Some words or phrases existed before but came into wider use during the Covid-19 pandemic, while others are blends of already existing vocabulary. As the pandemic spread from Asia to other parts of the world, the editors of the Oxford English Dictionary and other online dictionaries noticed an epic spike in the search volume of many pandemic-related terms, but in most cases readers could not find the meaning of such words. In their analysis, the researchers noted that keywords such as infection, vaccine, immune, symptoms, virus, swabs, droplets and testing had become part of the basic vocabulary for many English speakers worldwide from the time the deadly virus was first reported. Let's break it down. In January 2020, the researchers discovered that most of the words with high search volumes were related to the name of the novel Coronavirus. Such words included SARS, Coronavirus, virus, respiratory, human-to-human and flu-like. In February 2020, terms like Covid 19, COVID 19, self-quarantine, quarantine, pandemic, epicentre, self-isolate and other words describing the spread of the virus became more common. From March 2020 onwards, words such as lockdown, social distancing, self-quarantine, self-isolation, non-essential, postpone, WFH (work from home), PPE, workers, frontline warriors and ventilator became more frequent when referring to the issues surrounding medical responses to the global pandemic. Some words, such as keyworkers, support bubbles and circuit-breaker, were commonly used when referring to the management of the disease.
Old Words, New Meanings
Researchers noted that some English words have received new meanings. Such terms include self-isolation, first recorded in 1834, and self-isolating, recorded in 1841. While these two words were initially applied to countries detaching themselves from the world, in the pandemic they refer to self-imposed isolation. Self-isolate is much preferred in British English, while self-quarantine is commonly used in the U.S. Social distance, a term first used in 1957 to mean a deliberate attempt to keep a social distance from others, is now applied to keeping physical distance from others to avoid infection. Also, elbow bump has shifted from meaning a celebratory gesture to a way of avoiding hand-touching when greeting each other. Bubble, which previously described an insular set of ideas such as an ideological bubble or a political bubble, has also been put to new use during the pandemic. Currently, 'bubble' refers to a small group or family that avoids contact with others. 'Travel bubble' refers to exclusive travel between two or more countries that have been relatively successful in handling the pandemic. But that's not all. Lockdown, which was initially related to security and crime, is currently used to refer to a temporary condition imposed by governmental authorities requiring people to stay in their homes during a disease outbreak.
Old Words Blended to Form New Words
Most of the new pandemic-related terms are blends of existing ones. Such words include:
- Maskne: an acne outbreak caused by wearing face masks
- Zoombombing: the act of strangers breaking into a video meeting
- Covidiot: someone who keeps ignoring public safety guidelines
- Quarantini: a cocktail usually consumed by those in quarantine
- Doomscrolling: the act of skimming anxiety-inducing Coronavirus-related stories
While these words are common during the pandemic, it is not yet apparent whether they will remain in use after it.
COVID 19 or Covid 19?
Wondering which is correct? Well, it depends on your geographical location. According to dictionary editors, the word has regional variation. "Covid" is more prevalent in the U.K., New Zealand, Ireland and South Africa, whereas "COVID" is the most common version in the U.S., Australia and Canada. Asian countries vary in their use of "Covid"/"COVID", with India preferring the terms "Coronavirus" and "Corona". Meanwhile, the British version "Covid" is what you're likely to find in the Oxford English Dictionary, since it's edited and published in England. Also, many news outlets prefer using "Covid-19" on their online news platforms and in print magazines. Regardless of how COVID-19 is written, one thing we know for sure: it has brought new words and new meanings into our lives. Only time will tell when this pandemic (and with it, its influence on the English language) will subside.
For other substances, like sodium chloride, there is only a small change in solubility with temperature. Supersaturated solutions are usually unstable, and the excess material crystallises out if the crystals have something to form around.
Quantification of solubility
Solubility is commonly expressed as a concentration; for example, as grams of solute per kilogram of solvent, grams per deciliter (100 mL) of solvent, molarity, molality, mole fraction, etc. In the same way, compounds with low solubility will dissolve over extended (geological) time, resulting in significant effects such as extensive cave systems or karstic land surfaces. It is also possible to predict solubility from other physical constants, such as the enthalpy of fusion. On a normal solubility curve, temperature is on the horizontal axis and the vertical axis shows the solubility in grams per 100 g of water - a measure of concentration. There would never be any liquid, whatever proportions of salt and water you had. In contrast, table salt (NaCl) has a higher Ksp and is, therefore, more soluble. The term "solute" refers to the chemical substance that is dissolved in a solution, while "solvent" is the component that does the dissolving; the solute (sugar) is dissolved in the solvent (water). All the potassium nitrate will stay in solution. A solution is saturated if it won't dissolve any more of the salt at that particular temperature - in the presence of crystals of the salt. If you aren't sure about the critical temperature of water, you could read the page about the phase diagrams of pure substances. Eutectic mixtures and the eutectic temperature are discussed in more detail on the page about the tin-lead system. In the next diagram, the first salt crystals will form when the temperature hits the boundary line. As with other equilibrium constants, temperature can affect the numerical value of the solubility constant. Benzoic acid is more soluble in an organic solvent such as dichloromethane or diethyl ether, and when shaken with such an organic solvent in a separatory funnel will preferentially dissolve in the organic layer. For most substances, solubility increases with temperature, although exceptions do exist. The ratio of mass of solute to mass of water remains the same at a given temperature. The relationship between solubility and temperature can be expressed by a solubility curve. Obviously, the composition of the solution has changed because it contains less water - some of it has frozen to give ice. Learn what solubility is, as well as the definitions of 'saturated', 'unsaturated' and 'supersaturated', and learn how to determine the solubility of a substance in water by using a solubility curve. Solubility is the property of a solid, liquid or gaseous chemical substance, called the solute, to dissolve in a solid, liquid or gaseous solvent. The solubility of a substance fundamentally depends on the physical and chemical properties of the solute and the solvent, as well as on the temperature, pressure and presence of other chemicals (including changes to the pH) in the solution.
Basics. Uses: Solubility curves allow a scientist to determine the amount of a solute that can dissolve in 100 grams of water at a given temperature.
Graph: grams of solute per 100 g of water versus temperature (°C). Slope: a steeper slope reflects a greater effect of temperature on solubility. Solid solutes vs. gas solutes: as temperature increases, the solubility of a solid typically increases, while the solubility of a gas decreases. Base your answers to questions 68 through 70 on the information and table below. A student conducts an experiment to determine how the temperature of water affects the rate at which an antacid tablet dissolves in the water.
Solubility curves
Two typical solubility curves: a solubility curve shows how the solubility of a salt like sodium chloride or potassium nitrate varies with temperature.
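To show how a solubility curve is read in practice, here is a minimal Python sketch that linearly interpolates tabulated solubility data. The potassium nitrate values are approximate textbook figures, so treat them as illustrative rather than authoritative.

    # Approximate solubility of potassium nitrate (KNO3) in grams per
    # 100 g of water, read off a typical textbook solubility curve.
    CURVE = [(0, 13.3), (20, 31.6), (40, 63.9), (60, 109.0), (80, 169.0)]

    def solubility_at(temp_c):
        # Linearly interpolate the curve at temp_c (degrees Celsius).
        for (t1, s1), (t2, s2) in zip(CURVE, CURVE[1:]):
            if t1 <= temp_c <= t2:
                return s1 + (s2 - s1) * (temp_c - t1) / (t2 - t1)
        raise ValueError("temperature outside tabulated range")

    # Is 50 g of KNO3 in 100 g of water saturated at 30 degrees C?
    limit = solubility_at(30)   # ~47.8 g per 100 g of water
    print(limit, "saturated" if 50 >= limit else "unsaturated")

At 30 °C the curve allows roughly 48 g per 100 g of water, so a 50 g sample cannot fully dissolve: the solution sits at saturation with excess crystals.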
Truck & Diesel Engine Computers
Diesel Engines vs. Gasoline Engines
I'll bet that you've heard of diesel engines (and have probably seen the diesel pumps at the gas station), but I would also bet that you don't really know what a diesel engine is. At the core of their function, both gas and diesel engines are pretty similar. They both convert chemical energy into mechanical and kinetic energy, and do so through the process of internal combustion. For this reason, they are both called internal combustion engines. The difference is how that combustion is achieved. In gasoline engines, the air and fuel mixture is compressed until, at a critical point in the timing of the engine cycle, a spark plug ignites the air-gas mixture. In diesel engines, on the other hand, there are no sparks or spark plugs involved. Instead, the diesel fuel and air are compressed together to the point where the mixture combusts. Basically, the extreme compression generates enough heat that the mixture spontaneously combusts, in a process known as "compression ignition."
What Does a Diesel ECM/PCM Do?
In diesel engines, the diesel computer controls the injection of the fuel and air into the cylinder, as well as the compression of the mixture. This is especially important when starting a diesel engine in cold weather, as the compression process required for combustion may not raise the air to a high enough temperature to ignite the fuel. The diesel computer, or diesel control module (DCM), can sense the ambient air temperature and adjust the timing of the engine in cold weather so the injector sprays the fuel at a later time. The air in the cylinder is compressed more, creating more heat, which aids in starting. Additionally, the computer in your diesel engine controls or regulates the following functions:
• Ignition system
• Fuel injection
• Emission system
• Mechanical positioning of the rotating assembly
• Exhaust system
• And any other functions related to the operation of the engine and transmission
Just as with the car computer in a gasoline-engine vehicle, a diesel computer is able to control and regulate all these functions by utilizing a vast system of sensors and switches throughout the engine. These sensors provide a constant stream of information and data that the computer uses to make adjustments to diesel engine components in order to ensure smooth operation in the face of changing driving and engine conditions. A diesel engine computer will also log and store an error code related to the system or component it thinks is causing a problem. This makes the job of a mechanic or technician that much easier, because they can better pinpoint which part of the engine is having trouble.
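The cold-weather timing adjustment described above can be pictured as a simple lookup in the control module's software. The Python sketch below is hypothetical pseudologic, not code from any real ECM: the temperature breakpoints and timing offsets are invented for illustration.

    def injection_timing_offset(ambient_temp_c):
        # Hypothetical cold-start map: delay (retard) injection timing,
        # in crank degrees, as ambient temperature drops, so the charge
        # is compressed further - and heated more - before fuel sprays.
        # Real ECMs use calibrated multi-dimensional lookup tables.
        if ambient_temp_c <= -20:
            return 6.0   # very cold: inject much later
        if ambient_temp_c <= 0:
            return 3.0   # cold: inject somewhat later
        if ambient_temp_c <= 10:
            return 1.0   # cool: slight delay
        return 0.0       # warm: no adjustment

    for t in (-25, -5, 5, 20):
        print(f"{t:>4} C -> retard {injection_timing_offset(t):.1f} degrees")

A real calibration also folds in coolant temperature, engine speed and load, but the principle - sense a condition, look up a correction, adjust an actuator - is the same.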
A small stone flake marked with intersecting lines of red ochre pigment some 73,000 years ago, found in a cave on South Africa's southern coast, represents what archaeologists on Wednesday called the oldest-known example of human drawing. Researchers say the abstract design, vaguely resembling a hashtag, was drawn by hunter-gatherers who periodically dwelled in Blombos Cave overlooking the Indian Ocean, roughly 190 miles (300 km) east of Cape Town. The drawing predates the previous oldest-known drawings by at least 30,000 years. While the design appears rudimentary, the fact that it was sketched so long ago is significant, suggesting the existence of modern cognitive abilities in our species, Homo sapiens, during a time known as the Middle Stone Age, the researchers said. The cross-hatched design, drawn with ochre - a pigment whose use by our species dates back at least 285,000 years - consists of a set of six straight lines crossed by three slightly curved lines. The coarse-grained stone flake measures about 1-1/2 inches (38.6 mm) long and 1/2-inch (12.8 mm) wide. "The abrupt termination of all lines on the fragment edges indicates that the pattern originally extended over a larger surface. The pattern was probably more complex and structured in its entirety than in this truncated form," said archaeologist Christopher Henshilwood of the University of Bergen in Norway and the University of the Witwatersrand in South Africa, who led the research published in the journal Nature. Henshilwood said the drawing is definitely an abstract design but probably not an example of art. He added that the drawing almost certainly had some meaning to the maker and probably formed part of a common symbolic system understood by other people in this group. "All these findings demonstrate that early Homo sapiens in the southern Cape used different techniques to produce similar signs on different media," Henshilwood said. "This observation supports the hypothesis that these signs were symbolic in nature and represented an inherent aspect of the advanced cognitive abilities of these early African Homo sapiens, the ancestors of all of us today." Homo sapiens first appeared more than 315,000 years ago in Africa, later trekking to other parts of the world.
When we say there aren't enough teachers, we recruit and hire them; but when we say there aren't enough competent teachers, we're talking about finding something that can't be cooked up on short order. It requires a generation or two to create competent teachers. When regenerating a forest, seeds must fall, sprout and grow. Leaves must fall and accumulate inch by inch to create a topsoil thick and rich enough for the next generation to take root. This is not so different from an education system. Old-growth forests are accumulated biological histories, so it's hard to know how they start and how they develop, but in some rare cases land is reduced to ground zero, as in the case of volcanic eruptions. Some thirty years ago, Mount St. Helens in Washington State collapsed on itself and erupted violently enough to wipe out every identifiable living thing in a gigantic swath of destruction. It became an ecological lesson in how, once destroyed, nature is not easily regenerated. The first plants have to colonize bare ground and must survive without soil. Lichens can live on rocks and are called pioneer species because they scrape out the first foothold for other species to follow. Then shrubs give shelter for the seedlings of taller trees to germinate, which eventually top the low growth to form forest cover. This process can take centuries. With Mount St. Helens, scientists believed that regrowth could be sped up with the introduction of outside species, but evidence shows that "biological legacies" in the form of fallen trees, buried seeds and surviving amphibians were instrumental as restarters of green cover. Ecology is not simply a metaphor for human systems. Natural cycles of devastation and regeneration help us to understand how culture and education are also fragile ecological systems that are sustained by more than superficial elements. A human knowledge base depends on resources, parents, communities and a consensual commitment to learning. The life source of this cycle centers on the quality of the teacher. There will always be books and repositories of knowledge, but in the case of the survivors of the Khmer Rouge, it was only a handful of tenacious artists who could pass on centuries of cultural knowledge on the verge of disappearing. They were biological legacies, resilient as lichen and as important as the last genetic evidence in a seed bank. A healthy education system is like an old-growth forest that is fertile from the deepest roots to the highest forest canopy, one that can provide homes for the widest variety of species. In contrast, plantations are easily started and appear green from a satellite image, but monocropping will eventually leach the soil of its nutrients, exporting its wealth away without natural regeneration. To see the future, we can take a look at our schools and make a quick assessment. What do the students and classrooms look like? Is it a virgin forest full of life or a factory for agricultural products? What do the teachers look like? Do they look more interested in sowing seeds and cultivating growth, or more concerned about production rates and output? This difference matters because it is the difference we will see play out in 20, 50 or 100 years.
Lignin (sometimes "lignen") is a complex chemical compound most commonly derived from wood and an integral part of the cell walls of plants. The term was introduced in 1819 by de Candolle and is derived from the Latin word lignum, meaning wood. It is the most abundant organic polymer on Earth after cellulose, accounting for 30% of non-fossil organic carbon and constituting from a quarter to a third of the dry mass of wood. The compound has several unusual properties as a biopolymer, not least its heterogeneity in lacking a defined primary structure. Lignin fills the spaces in the cell wall between the cellulose, hemicellulose and pectin components, especially in tracheids, sclereids and xylem. It is covalently linked to hemicellulose and thereby crosslinks different plant polysaccharides, conferring mechanical strength to the cell wall and, by extension, to the plant as a whole. It is particularly abundant in compression wood, but curiously scarce in tension wood. Lignin plays a crucial part in conducting water in plant stems. The polysaccharide components of plant cell walls are highly hydrophilic and thus permeable to water, whereas lignin is more hydrophobic. The crosslinking of polysaccharides by lignin is an obstacle to water absorption by the cell wall. Thus, lignin makes it possible for the plant's vascular tissue to conduct water efficiently. Lignin is present in all vascular plants, but not in bryophytes, supporting the idea that the original function of lignin was restricted to water transport. Lignin is indigestible by mammalian and other animal enzymes, but some fungi and bacteria are able to biodegrade the polymer. The details of the reaction scheme of this biodegradation are not fully understood to date. These reactions depend on the type of wood decay - in fungi, either brown rot, soft rot or white rot. The enzymes involved may employ free radicals for depolymerization reactions. Well-understood ligninolytic enzymes are manganese peroxidase, lignin peroxidase and cellobiose dehydrogenase. Furthermore, because of its cross-linking with the other cell wall components, lignin minimizes the accessibility of cellulose and hemicellulose to microbial enzymes. Hence, lignin is generally associated with reduced digestibility of the overall plant biomass, which helps defend against pathogens and pests. Lignin peroxidase (also "ligninase", EC number 1.14.99) is a hemoprotein from the white-rot fungus Phanerochaete chrysosporium that catalyses a variety of lignin-degrading reactions, all dependent on hydrogen peroxide to incorporate molecular oxygen into reaction products. Several other microbial enzymes are also believed to be involved in lignin biodegradation, such as manganese peroxidase, laccase and cellobiose dehydrogenase. Lignin plays a significant role in the carbon cycle, sequestering atmospheric carbon into the living tissues of woody perennial vegetation. Lignin is one of the most slowly decomposing components of dead vegetation, contributing a major fraction of the material that becomes humus as it decomposes. The resulting soil humus generally increases the photosynthetic productivity of plant communities growing on a site as the site transitions from disturbed mineral soil through the stages of ecological succession, by providing increased cation exchange capacity in the soil and expanding the capacity of moisture retention between flood and drought conditions. Highly lignified wood is durable and therefore a good raw material for many applications.
It is also an excellent fuel, since lignin yields more energy when burned than cellulose. Mechanical, or high-yield, pulp used to make newsprint contains most of the lignin originally present in the wood. This lignin is responsible for newsprint yellowing with age. Lignin must be removed from the pulp before high-quality bleached paper can be manufactured from it. Other uses include:
- Raw material for several chemicals, such as vanillin, DMSO, ethanol, torula yeast, xylitol sugar and humic acid
- Environmentally sustainable dust suppression agent for roads
The first investigations into commercial use of lignin were done by Marathon Corporation in Rothschild, Wisconsin (USA), starting in 1927. The first class of products which showed promise were leather tanning agents. The lignin chemical business of Marathon was operated for many years as Marathon Chemicals. It is now known as LignoTech USA, Inc., and is owned by the Norwegian company Borregaard, itself a subsidiary of the Norwegian conglomerate Orkla AS. Lignin removed via the kraft process (sulfate pulping) is usually burned for its fuel value, providing more than enough energy to run the mill and its associated processes. More recently, lignin extracted from shrubby willow has been successfully used to produce expanded polyurethane foam. Lignin is a large, cross-linked, racemic macromolecule with molecular masses in excess of 10,000 u. It is relatively hydrophobic and aromatic in nature. The degree of polymerisation in nature is difficult to measure, since lignin is fragmented during extraction and the molecule consists of various types of substructures which appear to repeat in a haphazard manner. Different types of lignin have been described depending on the means of isolation. There are three monolignol monomers, methoxylated to various degrees: p-coumaryl alcohol, coniferyl alcohol and sinapyl alcohol (Figure 3). These are incorporated into lignin in the form of the phenylpropanoids p-hydroxyphenyl (H), guaiacyl (G) and syringyl (S) respectively. Gymnosperms have a lignin that consists almost entirely of G with small quantities of H. The lignin of dicotyledonous angiosperms is more often than not a mixture of G and S (with very little H), and monocotyledonous lignin is a mixture of all three. Many grasses have mostly G, while some palms have mainly S. All lignins contain small amounts of incomplete or modified monolignols, and other monomers are prominent in non-woody plants. Lignin biosynthesis (Figure 4) begins in the cytosol with the synthesis of glycosylated monolignols from the amino acid phenylalanine. These first reactions are shared with the phenylpropanoid pathway. The attached glucose renders them water-soluble and less toxic. Once transported through the cell membrane to the apoplast, the glucose is removed and the polymerisation commences. Much about lignin anabolism is not understood, even after more than a century of study. The polymerisation step, a radical-radical coupling, is catalysed by oxidative enzymes. Both peroxidase and laccase enzymes are present in plant cell walls, and it is not known whether one or both of these groups participate in the polymerisation. Low-molecular-weight oxidants might also be involved. The oxidative enzyme catalyses the formation of monolignol radicals. These radicals are often said to undergo uncatalyzed coupling to form the lignin polymer, but this hypothesis has recently been challenged.
The alternative theory, involving an unspecified biological control, is however not accepted by most scientists in the field. Pyrolysis of lignin during the combustion of wood or charcoal production yields a range of products, of which the most characteristic ones are methoxy phenols. Of those, the most important are guaiacol and syringol and their derivatives; their presence can be used to trace a smoke source to a wood fire. In cooking, lignin in the form of hardwood is an important source of these two chemicals, which impart the characteristic aroma and taste to smoked foods.
Scientists working with data from the New Horizons spacecraft have published a series of papers revealing for the first time detailed information and analysis of the geology, atmosphere and behaviour of Pluto and its moons. New Horizons has been sending back data and images of the distant dwarf planet and its satellites since the spacecraft carried out a successful fly-by in July 2015, collecting 50GB of measurements in the process. About half of that data has now been transmitted back to Earth, and all the remaining readings are expected to arrive by the end of 2016. The team was able to date the age of Pluto's surface by counting how many craters were visible. They found that the dwarf planet has been geologically active throughout the past four billion years. There are signs of relatively recent geological formations, too. Nasa said that "the surface of Pluto's informally-named Sputnik Planum, a massive ice plain larger than Texas, is devoid of any detectable craters and estimated to be geologically young - no more than ten million years old." The dwarf planet's surface proved to be far more diverse and active than anyone had anticipated. Jeff Moore of Nasa's Ames Research Center said that "observing Pluto and Charon up close has caused us to completely reassess thinking on what sort of geological activity can be sustained on isolated planetary bodies in this distant region of the solar system, worlds that formerly had been thought to be relics little changed since the Kuiper Belt's formation." Its icy landscape is primarily made up of a combination of highly volatile and mobile methane, nitrogen and carbon monoxide ices, alongside inert and sturdy water ice. This leads to what Will Grundy of the Lowell Observatory describes as "fascinating cycles of evaporation and condensation" that are "a lot richer than those on Earth, where there's really only one material that condenses and evaporates - water. On Pluto, there are at least three materials, and while they interact in ways we don't yet fully understand, we definitely see their effects all across Pluto's surface." Grundy and his team's paper concluded that "although Pluto's durable [water] ice is probably not active on its own, it appears to be sculpted in a variety of ways through the action of volatile ices of [nitrogen] and [carbon monoxide]. [Methane] ice plays a distinct role of its own, enabled by its intermediate volatility. [Methane] ice condenses at high altitudes and on the winter hemisphere, contributing to the construction of some of Pluto's more unusual and distinctive landforms." New Horizons revealed that Pluto's atmosphere is about 21 degrees colder than anticipated by previous Earth-based studies, as well as being more compact, although the reason for its frigidity is not yet known. These characteristics mean that Pluto is less exposed to the solar wind - streams of charged particles from the Sun - and that less of its atmosphere is being lost to that wind than previously thought. "We've discovered that pre-New Horizons estimates wildly overestimated the loss of material from Pluto's atmosphere," said Fran Bagenal of the University of Colorado, Boulder. "The thought was that Pluto's atmosphere was escaping like a comet, but it is actually escaping at a rate much more like Earth's atmosphere." Researchers also found that methane, rather than nitrogen, was the primary gas escaping Pluto's atmosphere, even though the atmosphere near the dwarf planet's surface is 99 percent nitrogen.
New Horizons observed distinct, bluish layered hazes in the atmosphere, thought to be produced by the methane, acetylene, ethylene and ethane gases that make up abundant minor constituents of the dwarf planet's atmosphere. Scientists have concluded that the haze layers are most likely caused by "buoyancy waves" that are created by winds blowing across Pluto's mountainous surface, which in turn compress and concentrate haze particles into distinct layers. Pluto is orbited by one large moon, Charon, which has a diameter of 1,172 km, and four small, irregularly-shaped moons: Styx, Nix, Kerberos and Hydra. These range from around 40 km in diameter in the case of Nix and Hydra, to tiny Styx and Kerberos, which come in at around 10 km across. The moons' reflectivity, when compared to small bodies common to the nearby Kuiper Belt, indicates they are unlikely to have been captured from the Belt, and instead formed when even smaller bodies merged. Their surfaces date from at least four billion years ago. "These latter two results reinforce the hypothesis that the small moons formed in the aftermath of a collision that produced the Pluto-Charon binary system," said Hal Weaver, New Horizons project scientist from the Johns Hopkins University Applied Physics Laboratory. Charon itself has a similarly ancient surface. The smooth plains at its equator, informally named Vulcan Planum, are thought to have come from cryovolcanoes that spewed icy material onto the moon's surface four billion years ago. It's thought that such eruptions were caused by an internal ocean that froze and ruptured Charon's crust.
When people talk or write about the transatlantic slave trade, they usually concentrate on what happened in the Caribbean and North America. Usually Central and South America are not included in the story, even though they were involved. Europeans colonised what are known as 'the Americas'. The continent includes the Caribbean, Canada, the USA, and the Central and South American countries such as Mexico and Brazil. A mixture of English, French, Spanish and Portuguese people travelled across the Atlantic Ocean to settle in the many parts of the Americas. Spain and Portugal colonised parts of Central and South America; most of the area south of Mexico was owned by Spain or Portugal. The settlers developed a 'slave economy', that is, they used slaves to work on their land. This 'slave economy' was different from that found in the Caribbean islands and in North America. Many more slaves were bought. Many were sent to the gold and silver mines rather than to the plantations. More of the enslaved Africans were able to buy their freedom and work as free men rather than as slaves. But in other ways the slave economy in this part of the Americas was the same as that in the Caribbean islands and North America. Many of the enslaved Africans died on the plantations and at the mines, and more were brought in from Africa to replace them.
By virtue of being religious, the Puritans had built a church for and within their colony. The colonies' goals showed certain differences as well as certain similarities. For instance, the Jamestown and Plymouth colonies were more similar because both were company-based settlements, while Massachusetts Bay Colony was slightly different because it was primarily a religious colony (Katz, 1973). All three colonies were used as permanent settlement areas for immigrants. However, Jamestown and Plymouth were used as agricultural centers to turn profits for their parent companies, while Massachusetts Bay Colony was used to resettle Puritan religious members. The settlements were also used as bases for exploration into the interior of the United States. John Smith explains how his exploration of the Chickahamania River led him to be held captive by the local Indians for over six weeks. William Bradford states that shortly after their arrival in 1620, the colonists set off on the 15th of November, and when they had covered about a mile by the seaside they were frustrated in their search for water (Woods, 2004). These early settlements succeeded in bringing about the renowned Great Migration of the following decades, as is evident from the fact that they still survive today. Exploration was one of the major successes in finding new resources, and the local people greatly helped the settlers survive their new environments. John Smith, after making friends with Pocahontas, aided the members of Jamestown in getting regular provisions of resources like water.
SAT II Subject Test Overview
SAT II Subject Tests are 20 multiple-choice standardized tests given by The College Board on individual subjects. They are taken to improve a student's credentials for admission to colleges in the United States. Many colleges use the SAT Subject Tests for admission, course placement, and to advise students about course selection. Some colleges specify the SAT Subject Tests that they require for admission or placement; others allow applicants to choose which tests to take. Students typically choose which tests to take depending upon college entrance requirements for the schools to which they plan to apply. Many schools don't require SAT Subject Tests, but most of the most competitive ones do. So, in many cases, while the SAT and ACT exams can get you rejected from great schools but very rarely get you accepted, the SAT Subject Tests can't get you rejected from great schools, but they can get you accepted.
SAT Tests Overview
Each Subject Test is a multiple-choice, hour-long test scored on a 200-800 scale. SAT Subject Tests are generally given six times in any given school year, on the same days and in the same test centers as the SAT, but not all 20 tests are offered on every SAT date. SAT Language Tests are available in two forms: with and without Listening. The reading-only form gauges how well you understand the written language, while the Listening form also features a recorded section devoted to the comprehension of the spoken language. Current SAT Subject Tests are:
- Korean with Listening
- Chinese with Listening
- Japanese with Listening
- Mathematics Level 1
- Mathematics Level 2
- French with Listening
- Spanish with Listening
- United States History
- German with Listening
- World History
- Modern Hebrew
- Biology E (Ecological)
- Biology M (Molecular)
Parliament's SAT Subject Test Tutoring Program Offers:
Parliament's SAT Preparation and Tutoring Program recognizes that even the most gifted youngster can be intimidated by a formal testing process and be sensitive to pressures to do well. A Parliament tutor will give your student the individual attention needed to feel comfortable and confident with the examination chosen, and to achieve the highest score possible. In tests where calculators are used, your Parliament tutor can also review calculator skills to ensure they are at their peak.
- Access to Parliament Online, where you can practice with custom-designed, sample full-length SAT II Subject Tests to better diagnose your strengths and weaknesses, and communicate with your tutor online and retrieve practice work and assignments in between sessions
- Lesson packages of personalized SAT II Subject instruction from the most qualified and personable SAT II Subject tutors in the industry, all in the comfort of your home
- A customized lesson plan to meet your individualized needs
- Expert feedback on college prep and the admission process
Even before Einstein theorized that time is relative and flexible, humanity had already been imagining the possibility of time travel. "People think of time travel as something of fiction. And we tend to think it's not possible because we don't actually do it," Ben Tippett, a theoretical physicist and mathematician from the University of British Columbia, said in a UBC news release. "But, mathematically, it is possible." Essentially, what Tippett and University of Maryland astrophysicist David Tsang developed is a mathematical formula that uses Einstein's theory of general relativity to prove that time travel is possible, in theory. "My model of a time machine uses the curved space-time to bend time into a circle for the passengers, not in a straight line," Tippett explained. "That circle takes us back in time." Simply put, their model assumes that time could curve around high-mass objects in the same way that physical space does in the universe. Kemo D. 7
Wednesday, March 6th 2019, 2:19 pm - Research indicates that just a two-degree Celsius increase in air temperature could cause millions of people worldwide to lose access to frozen lakes. There's nothing quite like an afternoon game of shinny on the lake, especially when the weather's right and you're stripped down to your sweater, partially wet from sweat, partially from being playfully shoved into the snowbank, and partially from hopping through the knee-deep snow to fetch the puck that missed the net. No matter what age you are, days like these will always bring you back to being a kid. But climate change could threaten fond winter moments like these - and even make them a thing of the past for future generations, according to research published in Nature Climate Change. "Our study illustrates that an extensive loss of lake ice will occur within the next generation," the researchers write. To draw this conclusion, the international team assessed the lake-ice records of more than 500 freshwater lakes around the world to develop a model that could be applied to millions of others and predict just how susceptible they are to ice loss. Of the lakes they studied, twenty-eight stood out, including the mighty Lake Superior, which has failed to freeze over three times since the 1850s (in the winters of 1997/98, 2011/12, and 2015/16), and was recorded as the second-fastest warming lake in the world in an earlier study. That's because "deeper lakes have a greater heat capacity and take longer to cool down in the winter," says Sapna Sharma, a biology professor at York University and lead author of the study. In fact, the research indicates that just a two-degree Celsius increase in air temperature could cause millions of people worldwide to lose access to frozen lakes, and it's estimated that over 40 percent of the lakes expected to see significant ice loss in the 21st century will be in Canada. This would, of course, affect a lot more than beloved winter pastimes like hockey - it would also have a profound impact on remote communities that rely on ice for transportation, food and supplies, as well as the aquatic species that live in these lakes. "Loss of lake ice can have consequences for water quantity and water quality," says Sharma. Lakes that lose ice cover or experience a shorter period of ice cover are warmer in the summer, which can have devastating effects on the ecosystem. "If water temperatures continue to warm, there may be an increased likelihood of algae bloom formation in the summer," she says. Harmful algae blooms deplete oxygen levels in the water, which can be detrimental for cold-water fish, like lake trout, and cool-water fish like walleye. According to Sharma, this can also provide more habitat for non-native species like smallmouth bass, which can have devastating consequences for native fish diversity. But it's the rapidity at which we may experience this change, and the number of people it could affect - culturally, socio-economically, and ecologically - that really concerns Sharma, and why she and the other researchers involved in the study hope that their results illustrate "the importance of climate mitigation strategies to preserve ecosystem structure and function, as well as local winter cultural heritage." This story was written for Cottage Life by Jenna Wootton.
Produce an electrical potential difference!
- Small group learning kit
- Student copymasters and teacher guide included
Students will study the electrochemical series. The electrochemical series is built up by arranging various redox equilibria in order of their standard electrode potentials (redox potentials). When a strip of metal (an electrode) is placed in water, the metal has a tendency to go into solution as ions, with a simultaneous build-up of electrons on the metal strip. This process produces an electrical potential difference between the metal and the solution, which is called an electrode potential (Eº).
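The arithmetic behind the kit is simple enough to check in code. The sketch below is illustrative only and is not part of the kit: it uses standard textbook reduction potentials for a zinc-copper (Daniell) cell to predict the potential difference a pair of electrodes should produce.

```python
# Minimal sketch: predicting a cell potential from standard electrode potentials.
# Values are standard textbook reduction potentials in volts (vs. the standard
# hydrogen electrode); the zinc-copper cell is used purely as an illustration.
STANDARD_POTENTIALS = {
    "Zn2+/Zn": -0.76,
    "Cu2+/Cu": +0.34,
}

def cell_potential(cathode, anode):
    """E(cell) = E(cathode) - E(anode), both given as reduction potentials."""
    return STANDARD_POTENTIALS[cathode] - STANDARD_POTENTIALS[anode]

if __name__ == "__main__":
    # The electrode with the more positive reduction potential acts as the cathode.
    e = cell_potential("Cu2+/Cu", "Zn2+/Zn")
    print(f"Predicted Daniell cell potential: {e:.2f} V")  # about 1.10 V
```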
Several weeks ago, Professor SK Gupta of the University of Maryland finally had a breakthrough in the design of a robot bird that he and his students had been working on for eight years. The end result is a flying robot that is almost indistinguishable from a bird. Professor Gupta explains: Our new robot is based on a fundamentally new design concept. We call it Robo Raven. It features programmable wings that can be controlled independently. We can now program any desired motion patterns for the wings. This allows us to try new in-flight aerobatics that would have not been possible before. For example, we can now dive and roll. The new design uses two actuators that can be synchronized electronically to achieve motion coordination between the two wings. The use of two actuators required a bigger battery and an on-board micro controller. All of this makes our robotic bird overweight. So how do we get Robo Raven to "diet" and lose weight? We used advanced manufacturing processes such as 3D printing and laser cutting to create lightweight polymer parts to reduce the weight. However, this alone was not sufficient. We needed three other tricks to get Robo Raven to fly. First, we programmed wing motion profiles that ensured that wings maintain the optimal velocity during the flap cycle to achieve the right balance between the lift and the thrust. Second, we developed a method to measure aerodynamic forces generated during the flapping cycle. This enabled us to quickly evaluate many different wing designs to select the best one. Finally, we had to perform system level optimization to make sure that all components worked well as an integrated system. Robo Raven will enable us to explore new in-flight aerobatics. It will also allow us to more faithfully reproduce observed bird flights using robotic birds. I hope that this robotic bird will also inspire more people to choose "bird making" as their hobby! Robotic birds (i.e., flapping wing micro air vehicles) are expected to offer advances in many different applications such as agriculture, surveillance, and environmental monitoring. Robo Raven is just the beginning. Many exciting developments lie ahead. The exotic bird that you might spot on your next trip to Hawaii might actually be a robot!
55 Cancri A is a Sun-like star some 40 light years away. It has an apparent magnitude of about 6 and so is visible to the naked eye in the constellation of Cancer. This star is unusual in that it is just one of a handful that are known to have at least 5 planets. The innermost of these planets - 55 Cancri e - was discovered in 2004 and has since had plenty of attention from astronomers. Various groups have observed the changes in radial velocity that it causes in its parent star. This tells them that it orbits its star every 18 hours and that its mass is about 8 times Earth's, or about half Neptune's. But without a measurement of the planet's radius, it's not possible to determine the planet's density. So 55 Cancri e could be an ice giant like Neptune or a terrestrial planet more like our world. Today, Michael Gillon at the University of Liege in Belgium and a few pals reveal some interesting new data about this exoplanet. These guys have observed 55 Cancri e in a different way, by watching it pass in front of its parent star using both NASA's Spitzer space telescope and Canada's MOST space telescope. These kinds of observations are important because the amount of light the planet blocks during each transit is essentially a measure of its radius. Consequently, Gillon and co are able to say that the radius of 55 Cancri e is about twice Earth's. That makes it almost certain that this is a rocky planet. But it's possible to make a few more deductions. A rocky planet is likely to be made from a combination of iron and magnesium-silicon-oxides, like the rocky planets in our Solar System. The density of these materials is well known, and this raises a problem. "We find that 55 Cnc e is too large to be made out of just rocks," say Gillon and co. "Therefore, it has to have an envelope of volatiles." These guys look at two possibilities. The first is an atmosphere of hydrogen and helium, rather like the atmospheres of our ice giants. But they rule this out because such an atmosphere would escape into space in just a few million years. The second possibility is an envelope of water with a mass some 20 per cent of the planet's total. (By contrast, the water on Earth makes up only 0.023 per cent of its mass.) This, say Gillon and co, is more likely because the water is less likely to escape into space and so would hang around for billions of years. So 55 Cancri e must be a waterworld. However, this waterworld is nothing like the planet envisioned in the Kevin Costner movie. 55 Cancri e is so close to its sun that the water is likely to be in a supercritical state, when the liquid and gas phases become one. The planet may also be tidally locked so that one side is in permanent sunshine while the other is in permanent darkness. That should make for some interesting weather, not to mention some interesting chemistry too. We should know more in a few years. Gillon and co say the planet's envelope should be directly visible to the next generation of space telescopes. Ref: arxiv.org/abs/1110.4783: Improved precision on the radius of the nearby super-Earth 55 Cnc e
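The deduction above is easy to reproduce. The following sketch is an illustration using the quoted round numbers (roughly 8 Earth masses and 2 Earth radii), not the authors' actual calculation, which uses more precise values.

```python
import math

# Quoted round numbers from the passage; the paper's values differ slightly.
EARTH_MASS_KG = 5.972e24
EARTH_RADIUS_M = 6.371e6

mass = 8 * EARTH_MASS_KG      # ~8 Earth masses
radius = 2 * EARTH_RADIUS_M   # ~2 Earth radii

volume = (4.0 / 3.0) * math.pi * radius ** 3
density = mass / volume       # bulk density in kg/m^3

print(f"Bulk density: {density:.0f} kg/m^3")  # ~5500 kg/m^3, close to Earth's mean density
# At 8 Earth masses, self-compression would make a pure rock-iron planet
# considerably denser than Earth, so an Earth-like mean density at this mass
# points to a substantial envelope of lighter volatiles (the paper's argument).
```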
History of Clocks
Even though sundials were discovered and initially developed in ancient Babylon, it was in Egypt and Greece that this timekeeping device received the most attention. Sadly, after the fall of the Roman Empire, sundials and other simple time-measuring devices received only limited use. Change came in the 12th and 13th centuries, when the trade expeditions of the early Renaissance brought to Europe knowledge of Islamic clocks and intricate Chinese water clocks. This provided European inventors with a basis to produce their own improved designs. Mechanical clocks first started appearing in the second half of the 14th century, but they had a problem of weak power sources - weights. However, after the invention of the first mainspring in the early 1500s and of small portable clocks by the German locksmith Peter Henlein, clocks finally started spreading across Europe. Even though they were hard to make, imprecise and easy to break, they created the basis for all future watches and enabled the watchmaking industry to spread across the world. After the mainspring, the most important invention made before the 1600s was the introduction of screws, which enabled the manufacture of much smaller and more compact watches. During this time, clocks and watches entered the "Age of Decoration". They changed very little in the mechanical sense, but their high production cost attracted the attention of wealthy people, nobility and royalty all across Europe. Extravagant design and the use of precious stones and metals made watches a desirable object for every person of high status. During that time, the Italian custom of dividing one day into 24 separate pieces (hours) spread across the world.
1675 - 1700
The introduction of the balance spring (and, in clocks, the pendulum) finally eliminated one major flaw of watches - low accuracy. With this invention, clocks finally started measuring hours very accurately, and only fractions of minutes were lost to mechanical inefficiencies. Because of this great increase in accuracy, the minute hand finally became standard on all watches. As for fashion, this 25-year period became known as the first time that men started carrying pocket watches fastened with a small chain to their belt or coat. This marked the first time that men stopped carrying watches like pendants on the neck.
1700 - 1775
This period was marked by steady innovation in watchmaking, greatly accelerated by the needs of maritime navigators and scientists. The most famous person from that period was without a doubt John Harrison, the English clockmaker who managed to produce one of the most important clocks of all time - the marine chronometer. With the power to calculate longitude by means of celestial navigation, the famous Age of Sail finally entered its height. Watches also became sufficiently accurate to be used in scientific experiments, and the lower price of regular pocket, table and wall watches enabled their spread across the population. The most important mechanical innovation of that time was without a doubt the lever escapement. With rising industrial manufacture, watches became even cheaper and more reliable, and quicker cycles of fashion styles caused a rapid expansion of watchmaking facilities.
1900 and beyond
Modern metallurgy and industrial manufacture enabled watches to finally become available to everyone. Electric clocks became widespread, atomic clocks defined the second as an exact number of oscillations of the cesium atom, and computer-controlled digital watches became ever-present.
Part II: Miocene Epoch
Sahelanthropus tchadensis (<7 mya) ("human from the sahel" / Chad)
The Sahelanthropus tchadensis specimen (see Figure 6.2) was discovered in 2001 at the site of Toros-Menalla, in the Djurab Desert of northern Chad, by Michel Brunet and associates. Brunet's incredible years-long quest for hominins in that area is documented in the NOVA series, Becoming Human (www.pbs.org/wgbh/nova/evolution/becoming-human.html). The species name translates to "human from the sahel of Chad." The sahel is the region of dry grasslands south of the Sahara desert. The skull has been nicknamed "Toumai" in the Dazaga language, meaning "hope of life." The location of the fossil material came as a surprise, in that only one species of hominin had ever been discovered west of the Great Rift Valley of East Africa, i.e. Australopithecus bahrelghazali (see Chapter 12). However, in 1998, Noel Boaz speculated that, contrary to the Rift theory for the origin of the hominins, a portion of the ancestral stock that gave rise to the chimp and human lineages became isolated in a riparian (i.e. riverine or gallery) forest zone in Chad that was surrounded by arid, open land. At a later point in time, a forest corridor allowed their movement into East Africa. Part of the problem at that point in paleoanthropology was that no species of hominins prior to the australopiths had been discovered in East Africa. They seemingly appeared de novo in the fossil record, beginning about 3.5 mya, with no intervening stages or "missing links" in evidence. We now have much older hominin species from Kenya and Ethiopia, i.e. Orrorin tugenensis and the ardipiths, respectively. While the phylogeny of S. tchadensis is unknown, some researchers believe that it may represent a stem or basal hominin, i.e. one of the earliest members of our tribal tree. (Note: Once a genus is used the first time in a document, it can subsequently be abbreviated.) Just as we do not know the ancestry of the species, we do not have any species that are good contenders for its descendants.
DISCOVERY AND GEOGRAPHIC RANGE
As mentioned, the holotype (the fossil(s) from a particular individual that are assigned to and used to define the characteristics of a species) was discovered at the desert site of Toros Menalla (see Figure 6.3). Unless fossils are discovered elsewhere, it is impossible to speculate about the extent of the geographic range of the species. The skull of S. tchadensis is very robust, with a chimp-sized brain and pronounced ape-like muscle attachments. While only fragmentary postcranial material has been discovered, some researchers claim that the foramen magnum is anteriorly oriented, suggesting an upright and bipedal hominin. Pronounced brow ridges are also concordant with early hominin status. The facial profile is surprisingly orthognathic, and the jaws lack the honing complex, leading some researchers to speculate that S. tchadensis may lie near the base of our family tree, versus other phylogenetic scenarios. However, the pronounced posterior neck muscle attachments have led others to suggest that S. tchadensis may have been quadrupedal.
ENVIRONMENT AND WAY OF LIFE
Based upon fossilized faunal remains at the site, such as freshwater fish, rodents, and monkeys, it is likely that S. tchadensis inhabited a forest environment in close proximity to an ancient lake (Wayman 2012). Their way of life was likely that of a forest-dwelling ape.
Like that of the ardipiths (see Chapter 8), their molar enamel was thinner than that of the later australopiths, and they thus likely had a chimp-like diet consisting of fruit, young leaves, and tender shoots.
Summary and Keywords
Household air pollution from use of solid fuels (biomass fuels and coal) is a major problem in low and middle income countries, where 90% of the population relies on these fuels as the primary source of domestic energy. Use of solid fuels has multiple impacts, on individuals and households, and on the local and global environment. For individuals, the impact on health can be considerable, as household air pollution from solid fuel use has been associated with acute lower respiratory infections, chronic obstructive pulmonary disease, lung cancer, and other illnesses. Household-level impacts include the work, time, and high opportunity costs involved in biomass fuel collection and processing. Harvesting and burning biomass fuels affects local environments by contributing to deforestation and outdoor air pollution. At a global level, inefficient burning of solid fuels contributes to climate change. Improved biomass cookstoves have for a long time been considered the most feasible immediate intervention in resource-poor settings. Their ability to reduce exposure to household air pollution to levels that meet health standards is however questionable. In addition, adoption of improved cookstoves has been low, and there is limited evidence on how the barriers to adoption and use can be overcome. However, the issue of household air pollution in low and middle income countries has gained considerable attention in recent years, with a range of international initiatives in place to address it. These initiatives could enable a transition from biomass to cleaner fuels, but such a transition also requires an enabling policy environment, especially at the national level, and new modes of financing technology delivery. More research is also needed to guide policy and interventions, especially on exposure-response relationships with various health outcomes and on how to overcome poverty and other barriers to wide-scale transition from biomass fuels to cleaner forms of energy.
In the case of the child with ADHD, the teacher might give the child some strategies to stop screaming out answers in class. The teacher might try positive behavior reinforcement. For example, every time the child raises his hand before giving the teacher an answer, she could reward the child in some way, such as allowing him to be her helper when she passes out papers to the students in class or giving him extra minutes of free reading time. After using these strategies to cut down on the student's negative behaviors, the teacher would once again measure how often the child blurts out answers instead of waiting to be called on in class. After using behavior modification strategies, the teacher finds that the child now only blurts out answers in class about five times a day. This lets the educator know that her intervention plan is working. If the child continued to blurt out answers 11 times per day, the same number of times as when she took the baseline measurement of his behavior, the teacher would know that she needs to come up with a different intervention method to correct the child's behavior. Teachers and parents should consider alternatives when a behavior modification plan goes awry. Instead of using positive reinforcement alone to reduce the number of outbursts the child with ADHD has in class, perhaps the child also needs to face negative consequences for his outbursts. The teacher may determine that other modifications need to be made to help with the student's behavior problems. Moving the child away from a particular student may help if it's determined that the classmate is egging the child on. Or perhaps the child is seated in the back of the classroom and feels that shouting is the only way for him to be heard. The founding of the Journal of Applied Behavior Analysis in 1968 served as an important marker for this period of growth. The development, application, and expanded use of research strategies associated with applied behavior analysis served as a catalyst for the systematic study of behavioral procedures in classrooms. Referred to as single-subject or single-case experimental designs, these approaches allow researchers to examine the impact of interventions on individual students. Continuous assessment demands repeated observations of the dependent measure, typically accomplished by daily observations. The establishment of stable baseline levels of performance is crucial to any further effort to determine whether the manipulation of the independent variable has a functional effect on the dependent variable. Stability in this case implies that the rate at which the targeted behavior occurs is essentially flat or shows a clear trend of deterioration during the baseline phase. If researchers are confident that repeated observations of behavior during a baseline phase show a stable or worsening trend in behavior, the introduction of the intervention in question can then be evaluated in the context of a number of different single-case designs. We describe and provide examples from the empirical literature of the four single-case experimental research designs most commonly used in behavioral research: reversal designs, multiple baseline designs, changing criterion designs, and multielement or alternating treatment designs. The ABAB or reversal design is perhaps the simplest single-case experimental design.
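As a toy illustration of this logic, the sketch below simulates daily counts of a target behavior across the four ABAB phases and compares phase means; the numbers are invented, not data from any study cited here. A treatment effect shows up as means that drop in each intervention (B) phase and rebound in the return to baseline.

```python
import statistics

# Hypothetical daily counts of a target behavior (e.g., call-outs) per phase.
phases = {
    "A1 (baseline)":       [11, 10, 12, 11, 13],
    "B1 (intervention)":   [6, 5, 4, 5, 5],
    "A2 (reversal)":       [10, 11, 9, 12, 10],
    "B2 (reintroduction)": [5, 4, 5, 4, 3],
}

for name, counts in phases.items():
    print(f"{name}: mean = {statistics.mean(counts):.1f}")

# The inference is structural rather than statistical: behavior improves when
# and only when the intervention is in place, and worsens on its withdrawal.
```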
While improvement in behavior during the intervention phase provides some evidence of a treatment effect, the strength of this inference is increased dramatically if a second demonstration occurs during the reversal phase. In nearly all experimental situations, a reintroduction of the intervention (B2) is called for, not only because it allows a stronger demonstration of the functionality of the intervention, but because it is consistent with the goals of applied behavior analysis, which include fostering positive behavior change. Powell and Nelson provided an example of a reversal design in which an intervention consisting of assignment choice was evaluated using a reversal (ABAB) design with a second-grade student who was diagnosed with attention deficit hyperactivity disorder (ADHD). During baseline, the student participated with his classmates by completing the same assignment given to the entire class, but was found to display high rates of undesirable behavior, defined as noncompliance, being away from his desk, disturbing others, or simply not doing his work. The intervention, assignment choice, consisted of the teacher offering the student a choice from among three appropriate assignments taken directly from the class curriculum during language arts periods. As can be seen in Fig. 3, rates of undesirable behavior fell when assignment choice was introduced and rose again when it was withdrawn. Figure 3. Example of an ABAB or reversal design. From Powell, S., Effects of choosing academic assignments on a student with attention deficit-hyperactivity disorder. Journal of Applied Behavior Analysis, 30. Reprinted with permission. Multiple baseline designs allow repeated demonstrations of a functional relationship between independent and dependent variables without necessarily invoking a reversal or withdrawal of the intervention. This is especially useful when a return to baseline is either impossible (in the case where learning has occurred) or unethical (in the case where a destructive or dangerous behavior has been reduced with an intervention). In a multiple baseline design, the researcher establishes two or more baselines before implementing an intervention phase. These baselines may be for different participants (multiple baseline across subjects design), for different behaviors displayed by the same subject (multiple baseline across behaviors design), or for the display of a behavior in different settings (multiple baseline across settings design). The intervention is then implemented in a staggered fashion across these multiple baselines. That is, the intervention will be implemented at different points in time for each participant, behavior, or setting. To the degree that an observed dependent variable targeted for change improves when and only when the intervention is introduced to that subject (or behavior or setting), the case for a functional relationship is enhanced. In one study using a multiple baseline across subjects design, the intervention consisted of showing students a prepared videotape of their voluntary hand-raising in response to questions asked by the teacher during large-group instruction. In preparing the videotapes, students had been prompted to raise their hands, but the prompts were edited out of the videotapes used during intervention, so that the students appeared to be raising their hands spontaneously when teachers asked general questions of the group. As shown in the study's figure, hand-raising increased for each student only after the self-modeling videotape was introduced.
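The staggered logic of a multiple baseline across subjects design can be sketched the same way. Again the numbers are invented, not data from the study just described: each simulated student's intervention starts on a different day, and the case for a functional relationship rests on improvement appearing for each student only after that student's start day.

```python
# Hypothetical multiple-baseline-across-subjects data: daily counts of
# hand-raising for three students, with the intervention introduced on a
# different (staggered) day for each student.
DAYS = 15
START_DAY = {"Student 1": 4, "Student 2": 7, "Student 3": 10}

def simulated_series(start):
    # Low, stable baseline; a clearly higher level once intervention begins.
    return [1 if day < start else 6 for day in range(DAYS)]

for student, start in START_DAY.items():
    series = simulated_series(start)
    baseline, treatment = series[:start], series[start:]
    print(f"{student}: baseline mean {sum(baseline)/len(baseline):.1f}, "
          f"treatment mean {sum(treatment)/len(treatment):.1f} "
          f"(intervention began on day {start})")
```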
The implementation of intervention at three different points in time helps to rule out alternative explanations for behavior change (such as changes in teacher behavior, routine, or curriculum), and in this case offers three replications of treatment effect for this intervention. Example of multiple baseline across subjects design. From Hartley, E., Self-modeling as an intervention to increase student classroom participation. Psychology in the Schools, 35(4). A second common application of the multiple baseline design involves a single participant, but multiple settings. Fabiano and Pelham used a multiple baseline across settings design to evaluate the effects of three simple changes to an existing behavior management plan for a third-grade student diagnosed with ADHD who was reported by his regular classroom teacher to display high rates of disruptive, noncompliant classroom behavior. A less common application of the multiple baseline design involves applying a particular intervention across multiple behaviors of a single student. Magee and Ellis suggested that the extinction intervention itself may have contributed to higher rates of the subsequent behaviors. Although the scope of behaviors to which it can be applied is somewhat limited (Rusch et al.), the essential feature of the changing criterion design is that the intervention phase is divided into a number of subphases that have increasingly rigorous criteria for the dependent measure. Treatment is implemented with the goal of moving baseline levels of performance to an initial criterion level; once criterion is reached for a predetermined number of days or sessions, the subsequent phase begins with a more stringent criterion. Such designs may be particularly suited to negative behaviors that occur at a high rate and need to be gradually reduced (Rusch et al.). Deitz and Repp used a changing criterion design to successfully decrease inappropriate talking in a high school classroom. Within this design, the criterion was lowered each week, requiring that students meet a more stringent standard to earn the reinforcer. As can be seen in the figure, the reinforcement program, known as differential reinforcement of low rates (DRL; Kazdin), resulted in a systematic decrease in the targeted behavior across these phases, as well as an increase in the negative talking when the program was withdrawn with a return to baseline. Example of a changing criterion design. From Deitz, S., Decreasing classroom misbehavior through the use of DRL schedules of reinforcement. Journal of Applied Behavior Analysis, 6. The multielement or alternating treatments design is used when researchers wish to evaluate the relative effects of two interventions in a single experimental phase, something that is not possible in other single-case designs. In the alternating treatments design, the baseline phase is followed by an intervention phase in which the two interventions are applied at different times. To enhance the analysis of a functional relationship, the treatments are also balanced across the intervention phase so that neither occurs consistently first, nor always under the same conditions.
McQuillan, DuPaul, Shapiro, and Cole used an alternating treatments design to examine the relative effects of two forms of a self-management intervention and a teacher-evaluation intervention on the mathematics performance and time on task of three adolescent students with behavior disorders (see the figure). After seven days of baseline, during which the teacher-evaluation management system already in use in the school remained in effect, an alternating treatments phase was implemented in which the teacher-evaluation system and the two forms of self-management were counterbalanced across daily sessions. Following three weeks of this phase, the optimal condition (self-evaluation) was implemented in a subsequent phase. Example of an alternating treatments or multielement design. From McQuillan, G. DuPaul, E. Shapiro, and C. Cole, Journal of Emotional and Behavioral Disorders, 4. We have touched briefly on the extensive literature base underlying a behavioral approach to classroom management and have also noted that a research-to-practice gap plagues classroom management just as it does all of education. Some writers have suggested that as a field we really do not know all that we purport to know about how to teach and manage behavior. Three issues seem to be at the heart of concerns about the behavioral view of classroom management: (a) generalization, (b) concerns about coercion and bribery, and (c) ethical concerns about the potential for misuse of behavioral operations. The failure of researchers to produce treatment effects that routinely generalize to other settings, times, and responses has been a sharp and essentially legitimate criticism of behavioral programming since its early application to classroom settings. Even when teachers experience great success in fostering positive change in important academic and social behavior in one context or setting, there is no guarantee that effects will generalize across time (maintenance), or to other settings or responses. In what is probably the classic treatment of the problems associated with generalization, Stokes and Baer reviewed scores of studies and described nine generalization promotion strategies that researchers reported using. These included such strategies as "program common stimuli", in which elements of the new environment (tasks, materials, trainers, directions, etc.) are included in the training setting. Unfortunately, "train and hope", essentially a failure to program for generalization, was noted as a common strategy in the literature reviewed. In essence, the criticism that behavioral operations do not produce generalizable effects was shown to be true by default; if educators do not actively program for generalization in their interventions, as often appears to be the case, then generalization will be lacking. But as a number of authors have since summarized, active programming for generalization, using among other strategies those noted by Stokes and Baer, can result in generalized responding. Ducharme and Holborn, for example, used prompting, modeling, and verbal praise with preschoolers with hearing impairments to teach social interaction skills such as sharing, cooperating, or assisting other children.
While the skills were learned and displayed successfully by the children in their preschool training setting, these newly learned skills did not generalize to other teachers, children, or play settings. Ducharme and Holborn used two generalization promotion strategies to engender such transfer. First, they trained sufficient exemplars by using multiple and different play activities (games), different teachers, and several different peers during their training of the targeted social skills. Second, they introduced children to natural contingencies by systematically fading the teacher praise used initially to teach the new behaviors. These strategies resulted in generalized responding in a different setting with new peers, teachers, and play activities, even with no additional prompting or reinforcement such as that used in the initial training. The larger remaining challenge for behavioral researchers lies in making sure that behavioral interventions routinely include explicit programming for generalization (Rusch et al.). As should be obvious, though, failure to generalize calls into question the true worth of any contextually limited behavior change. Among the more frequent criticisms of the behavioral view of classroom management are concerns that teachers become too controlling, and merely coerce or bribe students to behave in ways that the teacher chooses. That said, even behavioral procedures as innocuous as contingent teacher attention are subject to misuse, but this is no different from the teacher who does not use proper and scientifically sound literacy research to guide instruction for emergent readers.
From an environmental point of view, the development and use of fossil fuels is one of the main causes of atmospheric and other types of environmental pollution and ecological damage. How to develop and use energy while protecting the human living environment and the ecological environment has become an important global issue. Since the middle of the twentieth century, countries around the world have taken measures to improve energy efficiency and the energy structure in order to address these major environmental issues closely related to energy consumption; energy developed under these constraints is known as clean energy. Global climate change is a major global environmental concern. Since the era of industrialization, human material civilization has developed enormously, and at the same time the earth's ecological environment has been severely damaged and degraded. In the past 100 years, the global average temperature rose 0.3-0.6 °C and the global sea level rose by an average of 10-25 cm: the so-called "greenhouse effect". According to research reports, about 80% of greenhouse gases are attributable to human activities, and CO2 accounts for about 60% of the effect. CO2 is thus the main greenhouse gas in the atmosphere, and fossil fuel combustion is the main source of CO2 emissions. In 1990, coal, oil and natural gas accounted for 27.3%, 38.6% and 21.7% of the world's primary energy consumption respectively, and burning these fossil fuels emitted a large amount of CO2 into the atmosphere. It is expected that if measures to curb greenhouse gas emissions are not taken, the global average temperature will rise by 0.2 °C every 10 years from the beginning of the 21st century, rising by 1-3.5 °C by 2100. Solar energy can be used directly, with essentially no pollutant emissions, and is clean. At present, the carbon emission rates of the various power generation methods are:
- coal-fired power generation: 275
- oil-fired power generation: 204
- natural gas power generation: 181
- solar thermal power generation: 92
- solar photovoltaic power generation: 55
- wave power generation: 41
- ocean thermal power generation: 36
- (unspecified): 35
- wind power generation: 20
- geothermal power generation: 11
- nuclear power generation: 8
- hydropower: 6
These figures are life-cycle values: they account for the energy consumed in mining and transporting the raw fuels, manufacturing the generation equipment, building the power grid, operating and maintaining the power supply, and discharging and disposing of waste over the working life of each generation method. The carbon emission rates of the various power generation methods are shown in the figure. Solar emissions are much lower than those of coal, oil and natural gas.
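For comparison, the rates quoted above can be tabulated and sorted in a few lines of code. This is purely illustrative; the source gives the values without units (figures of this kind are often grams of carbon per kilowatt-hour, but that is an assumption), and the entry whose generation type is not named in the source is carried through as "(unspecified)".

```python
# Life-cycle carbon emission rates quoted in the text (units not stated in the
# source). One entry's generation type is missing there and is kept unnamed.
emission_rates = {
    "coal": 275, "oil": 204, "natural gas": 181,
    "solar thermal": 92, "solar photovoltaic": 55, "wave": 41,
    "ocean thermal": 36, "(unspecified)": 35, "wind": 20,
    "geothermal": 11, "nuclear": 8, "hydro": 6,
}

# Print a simple text bar chart, highest emitters first.
for source, rate in sorted(emission_rates.items(), key=lambda kv: -kv[1]):
    bar = "#" * (rate // 5)
    print(f"{source:>20}: {rate:>3} {bar}")

ratio = emission_rates["coal"] / emission_rates["solar photovoltaic"]
print(f"Coal's rate is about {ratio:.0f}x that of solar PV per unit generated.")
```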
There are two reasons why the subject-to-image distance is not 40 cm at unit magnification:
- the focal length of the lens may not be 100 mm
- the distance between the principal planes may not be zero.
Which of these reasons is the more important is impossible to tell without detailed information on the optical design of the lens. The value "100 mm" written on the lens itself is a nominal focal length, which is normally a rounded value of the real focal length when the lens is focused at infinity. Some lenses, usually called "unit focusing" lenses, achieve focus by moving the optical assembly as a whole. These lenses have a focal length which does not vary with focusing. However, many complex lenses, including virtually any modern macro lens, have some sort of "close range correction" (in Nikon parlance): their optical formula changes as you focus, which enables better correction of aberrations. These lenses have a focal length which varies as you focus. These two facts, the rounding of the nominal focal length and the fact that it varies when you focus, mean you do not know what the actual focal length of the lens is at unit magnification. The Wikipedia page you cite defines do and di as the distance from the lens to the object (resp. image), but note that these definitions appear in a section that is specifically about thin lenses. Your lens being a thick compound lens, this begs the question of the applicability of the formula. It turns out that the thin lens approximation is not applicable in this situation. However, the formula is still valid if interpreted in the context of the thick lens model. In this model, the plane of the thin lens is replaced by two planes, which are called "principal planes":
- the "front" (or "primary", or "object side") principal plane is used for measuring distances in object space
- the "back" (or "secondary", or "image side") principal plane is used for measuring distances in image space
These are conjugate planes with unit magnification. In the figure below (source), they are the vertical planes that go through H1, N1 and H2, N2: Note that this way of describing an optical system in terms of its cardinal points (the Fi, Hi and Ni above) is also applicable to compound lenses. See for example this old drawing of a telephoto lens (source) where both principal planes (the vertical planes through Ni and No) are on the left side of the leftmost element: Thus, your formula is still valid provided you define:
- do as the distance from the subject to the primary principal plane
- di as the distance from the secondary principal plane to the image
This gives the subject-to-image distance as do + e + di = 4f + e at unit magnification, where e is the (possibly negative) distance between the principal planes. Note that the thin lens approximation essentially says that the principal planes are coincident (e = 0), but it is not applicable to your case.
The thin lens misconception
I wrote this answer mostly to help clear a popular misconception, which appears in some of the answers here, including the one you accepted: that a photographic lens is equivalent to a thin lens. It turns out that in most photographic situations (basically all non-macro situations), the subject-to-lens distance is much larger than any characteristic distance of the lens itself. In such situations it doesn't really matter which reference point you use for measuring the distance to the subject.
It is then convenient to forget about the distance that separates the principal planes and consider that the rear principal plane is the only one that matters. This is equivalent to setting e = 0, which is basically the thin lens approximation. Sticking to this approximation makes learning optics a lot simpler, as you don't need to understand notions such as principal planes, principal or nodal points, object space, image space, and so on. Considering that:
- the approximation is good enough for most (non-macro) purposes
- knowledge in optics is only useful to a photographer at a qualitative level, as you are not going to design lenses, and you don't need optics expertise to become a great photographer
it is understandable that the thin lens is the model most commonly taught to photographers. And yet the approximation breaks down when dealing with a complex thick lens at macro distances. The answers that tell you that the focal length is one quarter of the subject-to-image distance illustrate how this misconception leads to people posting wrong answers.
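To make the bookkeeping concrete, here is a minimal numeric sketch of the thick-lens relation described above. The focal length and principal-plane separation are invented values for illustration, not measurements of any real 100 mm macro lens.

```python
# Thick-lens model at magnification m: distances are measured from the
# respective principal planes, so the Gaussian lens equation 1/f = 1/do + 1/di
# still applies. With m = di/do, this gives do = f(1 + 1/m) and di = f(1 + m);
# at unit magnification do = di = 2f and subject-to-image distance = 4f + e,
# where e is the (possibly negative) separation of the principal planes.
def subject_to_image_distance(f_mm, e_mm, m=1.0):
    do = f_mm * (1 + 1 / m)
    di = f_mm * (1 + m)
    return do + e_mm + di

# Illustrative values only: a lens whose true focal length at this focus
# setting is 95 mm (nominal "100 mm"), with principal planes 12 mm apart.
print(subject_to_image_distance(f_mm=95.0, e_mm=12.0))   # 392 mm, not 400 mm

# The thin lens approximation corresponds to e = 0 and the nominal focal length:
print(subject_to_image_distance(f_mm=100.0, e_mm=0.0))   # the textbook 4f = 400 mm
```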
General relativity (GR) is a theory of gravitation that was developed by Albert Einstein between 1907 and 1915. According to general relativity, the observed gravitational attraction between masses results from the warping of space and time by those masses. Before the advent of general relativity, Newton's law of universal gravitation had been accepted for more than two hundred years as a valid description of the gravitational force between masses. Under Newton's model, gravity was the result of an attractive force between massive objects. Although even Newton was bothered by the unknown nature of that force, the basic framework was extremely successful at describing motion. However, experiments and observations show that Einstein's description accounts for several effects that are unexplained by Newton's law, such as minute anomalies in the orbits of Mercury and other planets. General relativity also predicts novel effects of gravity, such as gravitational waves, gravitational lensing and an effect of gravity on time known as gravitational time dilation. Many of these predictions have been confirmed by experiment, while others are the subject of ongoing research. For example, although there is indirect evidence for gravitational waves, direct evidence of their existence is still being sought by several teams of scientists in experiments such as the LIGO and GEO 600 projects. General relativity has developed into an essential tool in modern astrophysics. It provides the foundation for the current understanding of black holes, regions of space where gravitational attraction is so strong that not even light can escape. Their strong gravity is thought to be responsible for the intense radiation emitted by certain types of astronomical objects (such as active galactic nuclei or microquasars). General relativity is also part of the framework of the standard Big Bang model of cosmology. Although general relativity is not the only relativistic theory of gravity, it is the simplest such theory that is consistent with the experimental data. Nevertheless, a number of open questions remain: the most fundamental is how general relativity can be reconciled with the laws of quantum physics to produce a complete and self-consistent theory of quantum gravity. In ordinary three-dimensional space the formula for distance in Cartesian coordinates is ds² = dx² + dy² + dz². Now one can change the coordinate systems if one wants. If one rotates the coordinate system or stretches or shrinks it, the values for x, y, and z may change, but the distances will not. We can even conceive of more radical changes, like going into spherical coordinates, where ds² = dr² + r²dθ² + r²sin²θ dφ². In special relativity we learned that physics is described by another invariant, in which ds² = -c²dt² + dx² + dy² + dz². Again, we are free to change the coordinates of x, y, z and t to anything we want, but the underlying geometry and distances don't change. The next step is to incorporate gravity into this picture. While the mathematical details can be complex, the basic idea is that the effects of gravity are equivalent to the effects of acceleration on an observer. From this equivalence principle, Einstein was able to show that what matter does is to change the rules for distances. The formula we showed above is strictly true only when matter is not present; when matter is present, the rules for determining distances change, and the effect of these changes is to produce the effects of gravity we all know. This picture of gravity is powerfully simple and elegant.
However, there is one problem with it; in order for it to be usable, it is necessary to learn many new mathematical concepts to understand how this picture works. In our daily lives, we have become very familiar with the properties of three-dimensional Euclidean space because that is the world we live in. In order to do anything such as walking, moving, or catching balls, our brains have to deal with 3-space and so we have a great deal of intuitive knowledge about how this sort of geometry works. Even when we are doing mathematics in three-dimensional space, we are helped by the fact that our minds have this sort of knowledge built in. However when we discuss other types of space, our normal intuition fails us, and we are forced to follow the much more difficult path of trying to figure out what happens by describing the situation through precise mathematical statements, and this involves learning several new mathematical concepts and techniques. To give an example of the mathematical techniques we will have to learn, imagine you are on the surface of a flat plane. One formula for distance is ds² = dx² + dy². Another formula for distance in the plane, in polar coordinates, is ds² = dr² + r²dθ². Now these two formulas look quite different, but they are really two different descriptions of the same situation. On the surface of a cylinder of radius R we again have a formula which looks very similar to the distance in the plane expressed in polar coordinates: ds² = dz² + R²dθ²; locally, the cylinder cannot be distinguished from the plane. Later on, we will give this a name: we call these flat surfaces. However if we were on the surface of a sphere of radius R, then the distance for small changes in φ and θ is ds² = R²dθ² + R²sin²θ dφ². Now in this situation, the difference in the distance formula is not merely one in which we are using different coordinates to talk about the same thing; the thing that we are talking about is actually different. So this brings up a lot of questions. How do we know if the differences in distance formulas are real or are just differences in coordinate systems? Can we talk about distance formulas in a way that lets us naturally distinguish between real differences rather than ones that are the result of our descriptions? How can we classify different geometries? All of this may be intuitively obvious when we are talking about three-dimensional spaces or two-dimensional objects such as spheres embedded in three-dimensional space. However, in order to talk about the behavior of four-dimensional space-time geometries, we need to rely on mathematical statements to get us answers. Fortunately, mathematicians such as Riemann worked this all out in the 19th century. However to understand how to deal with weird geometries, we will need to learn a few more concepts, such as the concept of a tensor.
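One concrete way to ask the geometry itself whether a distance formula describes a flat or a curved surface, independently of coordinates, is to compare the circumference of a circle with its geodesic radius. The sketch below is illustrative (it is not taken from any text cited here): on a sphere the ratio C/r falls below 2π, and no change of coordinates can hide that.

```python
import math

def circumference_ratio_plane(r):
    # Flat plane: C = 2*pi*r exactly, so C/r is always 2*pi.
    return 2 * math.pi

def circumference_ratio_sphere(r, R):
    # On a sphere of radius R, a circle of geodesic radius r (distance measured
    # along the surface) has circumference C = 2*pi*R*sin(r/R), which follows
    # from the distance formula ds^2 = R^2*dtheta^2 + R^2*sin^2(theta)*dphi^2.
    return 2 * math.pi * R * math.sin(r / R) / r

R = 1.0
for r in (0.01, 0.5, 1.0):
    print(f"r = {r:4}: plane C/r = {circumference_ratio_plane(r):.4f}, "
          f"sphere C/r = {circumference_ratio_sphere(r, R):.4f}")
# As r -> 0 the two ratios agree (every smooth surface looks flat locally);
# at larger r the sphere's circumference deficit reveals intrinsic curvature.
```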
Social studies is defined by the National Council for Social Studies as "the integrated study of the social sciences and humanities to promote civic competence." General social studies establishes a foundation for all of the subsequent, more specific classes that students will take in history, civics and the like. Typically, students take general social studies in elementary school, then move to more specific areas of study in middle school, and even more in-depth subjects in high school and college. In elementary school, students take social studies every year, beginning with the most basic elements of geography and history, and gradually progressing to more specific and detailed subjects as years go on. In middle school, students take a specific social studies class each year, usually revolving around world history and U.S. history, and in high school, classes are more dedicated to completing a thorough study of a particular subject, like modern American history. A few of the different areas social studies covers are geography, history, government and current events. Geography is the study of different countries, which includes factors like population, culture, location, climate, economy and physical land properties. In elementary school, general concepts of geography are incorporated into social studies such as different land forms and the basics of the world's map and population. Middle schools tend to go more in depth on the topics covered in elementary schools. Some middle schools will devote an entire class to geography, which involves much more memorizing of locations on maps, and an in-depth study of physical conditions and climates. Many school districts that offer geography as a specific class in middle school do not offer a class in high school. Oftentimes, aspects of geography in high school are also incorporated into earth science and history classes. History is a general branch of social studies that is taught in the upper levels of elementary school and in middle school. In middle school and high school, however, it is typically broken down into two different categories: world history and U.S. history. The foundation for U.S. history is incorporated into social studies in elementary school, where a basic timeline of United States history from before the Revolutionary War up to the present day is constructed. In middle school, this timeline is built upon and different ideas within the study of America are fleshed out and developed. In high school, the history of America can be taught over the course of two years, and involves a deep analysis of historical events, systems of government and important figures. World history, on the other hand, takes a global perspective and covers a broad range of topics including the ancient history of eastern and western civilizations, the secular history of religions, globalization, colonialism and major international conflicts. The study of government includes the history of governments, the basic principles and types of governments, and the current state of both the American government and governments worldwide. Oftentimes, government is incorporated into other social studies classes, such as U.S. history, world history and current events. However, some schools have a specific class dedicated to the study of the government. In elementary school social studies, students learn about the branches of the U.S. government and other basic topics, such as the Bill of Rights and the Constitution.
Middle school classes build on these principles, going more in depth into the study of government, though usually still focusing on the United States. In high school, however, students may begin to learn about other types of government around the world and other political models, such as communism, socialism, dictatorships and monarchies. They may also learn about political revolutions and conflicts between governments. Current events is the branch of social studies that examines the present world. This subject analyzes a wide range of current social, ethical, political, legal, educational and environmental issues. Typically, a current events class blends presentations from both the instructor and the students to keep students actively engaged. In elementary school, social studies classes will generally cover current events on a basic level to promote awareness. The teacher will frequently report on recent developments, or ask students to keep an eye on and present interesting happenings. In middle school and high school, current events becomes a specialized class that actively develops the students’ ability to monitor and interpret the pressing issues occurring in the world around them.
Presenting The Case For Music
Whilst music is under pressure both in state and private education, contemporary science is increasingly discovering the vital educational importance of music in lifelong learning. Here Professor Paul Robertson provides an overview of current research and offers compelling arguments for an increase in music making in the classroom.
Many traditional and ancient civilisations placed music at the very heart of their philosophy, culture and education. The ancient Greeks, whose social and scientific philosophy has so influenced our own, considered music an integral and universal paradigm for both cosmic events and personal activities. Such music modelling, based on the physics of sound and the observation of sound frequencies, gave rise to the notion of a harmony of the spheres in which planetary motions and conjunctions directly relate to specific musical pitches and scales. (Interestingly, contemporary computer-generated mathematical computations show remarkable congruencies between planetary movement and tonal musical relationships.) Later followers of Platonic theory applied and developed these cosmologically inspired musical principles into such universally accepted periodicities as the seven days of the week (the steps of the major scale) and the division of white light into a spectrum of seven colours. The seven muses and seven ages of man etc. almost certainly also reflect these same musical principles. In such spiritually inspired systems, the seven-step system combined with the other fundamental numerical principle of the trinity is universally considered as having special significance: hence the triad of Father, Son and Holy Ghost in Christianity, and the Rajas, Tamas and Sattvic energies in Hinduism. In western music this tri-fold principle manifests as the musical 'accidentals' - sharps, flats and naturals. This combination of energies in the form of interrelated pitch patterns makes up our modulating musical system. Further numerical application of this frequency system also allows for the division of the octave into the twelve semitone steps of the chromatic scale. This principle also finds its way into our common currency, as months of the year, hours on the clock and the number of disciples, as well as signs of the zodiac. Through such major figures as Kepler and Newton, musical systems continued to inform our astronomical history, whilst the artists and architects of the Renaissance also drew heavily on musical Platonism in forming the aesthetics and abiding principles of the Renaissance. Closer to home and nearer in history, even Harrison, the renowned clockmaker, applied musical mathematical principles involving the tempering of pitch to the intricacies of energy loss within his prize-winning clock mechanism. This very brief survey of the history of musical formulations may indicate just how pervasive musical models are in our culture. It is true to say that, both consciously and unconsciously, music informs our world view, and musical structures underlie many of our aesthetic and scientific paradigms.
The contemporary view of music and development
Just as music forms a fundamental part of our earliest social organisation, so we are discovering that musical response and awareness are involved in our earliest individual development. Extensive research shows that even within the womb musical (i.e. pitched) tones are recognised by, and become familiar to, the unborn child.
Such information is therefore significantly at play in our earliest pre-cognitive neurological development. Trehub and others have shown that within weeks of birth very young infants can process and discriminate complex musical tasks. Such universal gifts are no accident of nature but rather a vital species survival skill, enabling us to recognise and interpret the complex emotional prosodic voice information that underlies speech. Such precocious abilities are also reflected neurologically. Tramo and others have established that the primary auditory areas of the brain (where the kinetic energy of sound is converted into the electro-chemical energies of the brain) show coherent firing patterns when processing concordant intervals, whilst discordant intervals create erratic firing patterns. It is also now further established that all mammals prefer concords. Successful mapping of these preferences and processing patterns is also increasing at more complex levels of brain organisation. This very technical and specialist work is at the cutting edge of science. Some relevant principal findings, however, can be summarised. Using E.R.P.s (measurements of electrical activity in the brain), Besson has shown that all listeners, be they musically skilled or not, show very significant brain responses to incongruous 'wrong' notes. However, the individual's ability to verbalise or cognitively recognise this information differs according to their degree of training. The essential importance of this study is that all people have intrinsic musical responses.
Tonal music and the raising of intelligence
The ability to infer pattern is, of course, a fundamental part of intelligence. Dr. Frances Rauscher's discovery that patterned tonal music (in this case Mozart) significantly raises spatial IQ (by 17%) in non-musical subjects is based on the theory that the neurones in the brain themselves communicate in a way that closely reflects the pitch patterns of repetition and novelty that constitute western classical music. These findings strongly support the view that musical structures both reflect and alter our neuronal structures, and that both passive and active musical activity can be used to make specific neural changes. Indeed, without such a high congruence it would be difficult to explain the universal practice of music.
Musical processing builds language skills
I have already indicated the reasons for believing that our musical responses pre-date our verbal competence. Diana Deutsch has shown that subjects make musical inferences relative to electronically treated tri-tone intervals depending upon where the individual has learnt their mother tongue. (In this experiment there was an overwhelming statistical result showing that English speakers who learnt their language in the Southern Counties 'hear' differently to those whose English was learnt in California.) This is highly suggestive that 'musical' response not only pre-dates language acquisition but also plays an important role in it.
Tomatis and Suggestopoedia
In this area Alfred Tomatis' work is of prime importance. Through methodical study and meticulous scientific observation, Tomatis has proved that an individual's auditory response and tonality of voice reflect and chart their developmental history. Early traumas inevitably leave behind them auditory deficits which can often be corrected or alleviated by a listening programme of specially electronically filtered music which 're-educates' auditory response.
Tomatis has had outstanding success with dyslexic children and in helping various brain-damaged individuals. His method is also much used for language learning, where the overtone systems specific to each language can be presented to the student, thus maximising acquisition. Music is also used very effectively for accelerated learning (particularly of language) by the methods of Suggestopoedia. This well-established teaching method is widely used in continental Europe and Scandinavia, where enlightened music-based educational methods such as Kodaly, Steiner, Colour-Strings etc. are the accepted norm. Most of these systems reinforce learning synaesthetically by combining sound, movement, colour etc.
Mind/Brain models in education - learning from brain damage
Howard Gardner's seminal work proposes that we are each differently gifted in domains of skill, such as the mathematical, verbal, spatial and musical, each of which is associated with relatively discrete neurological systems. Effective education must recognise and utilise these different individual tendencies and use them to create supportive and creative opportunities. A mass of evidence from brain-damaged children also reinforces this view. When considering early brain damage it is important to bear in mind that the human brain develops sequentially and in stages, with each successive stage normally dependent on the previous ones. Many syndromes, such as Down's, Cerebral Palsy and Autism, whilst very different in their causes and giving rise to different handicaps, do reflect gross interruptions of the developmental path. Subsequent compensations and development are not just incalculably valuable for the affected individuals and their carers but also reveal vital clues as to the nature of brain function and potential for us all. Similarly, certain very autistic individuals develop extraordinary musical skills which gradually allow them to increase verbal and even emotional communication (Tony de Blois and others: the so-called idiot savant). This process, which can be most moving for those involved, does powerfully illustrate the power of music in higher-skill learning. To illustrate, Professor George Odam mentions a case history that closely involves him. This young man of 15 has Down's Syndrome - like many such individuals he is highly musical but limited academically. In fact this young man was really suffering at school because he could not grasp the concept of mathematical division. This very real problem for teachers and pupil was put in a constructive light when Professor Odam pointed out that this pupil could creditably perform a Beethoven piano sonata and was therefore already computing divisions both of time (the complex musical rhythms) and of space (the divisions of the keys on the keyboard). The limitation can therefore be seen not as fundamental but as one of application across domains. The inbuilt capacity of the developing brain to find support and inform itself across different domains of skill, and synaesthetically between the senses, is of the greatest possible educative importance.
Further clinical support
Children suffering profound brain damage from cerebral palsy may be taught to sing their names and short phrases describing how they feel even whilst they remain unable to speak (Marienne Berel - New York). It is just because of these ancient anatomical connections that pre-literate societies (and pre-literate children) carry so much of their history in song.
The powerful combination of repeated rhythmic tonal patterns supporting words creates a significant and emotive connection between the older 'mammalian' systems of the brain and the more evolutionarily recent cognitive ones that process and generate words and logic. (The power of advertising jingles and TV theme tunes to recur unbidden to our minds pays acknowledgement to this.) Clinical observation also shows that different areas of brain damage (from trauma or stroke) can cause specific deficits, such as loss of speech and of the ability to generate written language, and yet leave musical ability (including the use of written musical notation) intact. This graphically illustrates the multi-domain thesis. So, at one and the same time, musical ability may underlie other cognitive skills and yet not be dependent upon them.
Music - emotion and learning
Damasio and others offer compelling evidence for the importance of emotion in informing and supporting cognitive function. In this area music has a unique role and potential. The relationships between emotion, cognition and learning are currently the area of some of the most exciting research.
Motion and emotion - moving the paradigm
Epstein and others have clearly shown the close connections between musical rhythms and physiological and neurological pulse. Such congruencies not only form the template of human empathy, with all its concomitant gifts of social and group activity, but also connect to the rewarding area of gesture, emotion and creative expression. (In his study, music is also shown to materially affect our subjective sense of passing time.) Manfred Clynes has superbly shown the exact connection between the physiology of emotional experience and its precise counterparts in musical gesture. This important work allows us to say with authority that learning music teaches emotional management. His work in computing also shows the subtle links between such emotional skills and the development of intelligence itself (including artificial computer intelligence; see Minsky and others). Clynes' remarkable interactive software 'Superconductor' will prove an invaluable tool for learning, allowing as it does the untrained user to develop their own aesthetic and change at will the emotional expression of the computer performance by means of a simple cursor.
Education - the direct evidence
Educationally we can already state with full confidence that exploring and developing innate musical potential will enhance and improve emotional behaviour (see recent Swiss experiments and the vast archive of music therapy and social research). Music also improves and educates good social activity - neurologically. This is because of the strong associations of musicality with the limbic system (including the cingulate gyrus, amygdala, etc., known to be involved with emotional response, memory and social ability). Music will enhance spatial and mathematical skills - in part because all these are strongly right-brain associated. (It is surely significant that classically infant prodigies tend to be in Music, Mathematics or Chess, further implying common neurological features between these skills.) Music can also assist spatial IQ and cognitive function because of the congruence between musical patterns and the patterns of neurological function, and the reinforcement of pattern recognition, memory and creativity. Because of its largely non-verbal emotional rewards, music can provide a great intrinsic reward and sense of well-being and self-worth.
This is partly because of its focus within the brain's systems but also because it allows a vital means of non-verbal, non-academic self-expression and identity. Even the non-educationally disposed can and will strongly identify and define both themselves and their peer groups by means of the music they share. The same emotional power that draws individuals to the cultural icons of the Spice Girls, Rap and Rock stars, football chants or 'Land of Hope and Glory' at the last night of the Proms - this same vital, deeply affective force so glibly used in the advertising and film industries - is an available resource to educate, enlighten and raise the lot of all our citizens. We diminish or neglect this universal gift at a grave risk to ourselves and our children. Human history and even contemporary science are telling us what the Greeks believed: music is fundamental to a full human life. We should begin again to listen.
a. The Panama Canal Authority
In 1880 Ferdinand de Lesseps began a 20-year effort to construct a waterway across the Isthmus of Panama, in fulfillment of a dream that began with Vasco Nuñez de Balboa's first sighting of the Pacific. After this failed, the United States took up the work in 1903 under a treaty which granted it rights to a strip of territory running 5 miles on each side of the Canal. At the opening of the Canal in 1914, the Panama Canal Company, together with the government, had integrated the operation of the waterway and adjacent lands into the Canal Zone, a separate state within the Republic of Panama. Wholly U.S. in character, the Canal Zone had its own police force, judicial system, and customs and immigration services, and remained unchanged until the implementation of the Torrijos/Carter treaties on October 1st, 1979. The new arrangement disestablished the governmental institutions of the Canal Zone and the territories reverted to the full sovereignty of the Republic of Panama, but the treaty continued the right of the United States to operate the waterway until the end of the century. The final act of transfer to the Republic of Panama took place on 31 December 1999. An autonomous legal entity, the Panama Canal Authority, established under public law, is now in charge of the administration, operation, conservation, maintenance, and modernization of the Panama Canal and its related activities. By law the Panama Canal constitutes an inalienable patrimony of the Panamanian Nation and it shall remain open to the peaceful and uninterrupted transit of ships of all nations. Vessels approaching the Canal should do so with the clear understanding that they are entering the legal and governmental jurisdiction of the Republic of Panama.
b. Facts about the Panama Canal
The Panama Canal connects the Atlantic Ocean with the Pacific Ocean. The Atlantic entrance is at Cristobal and the Pacific entrance at Balboa. The Canal has a length of about 83 Km from ocean to ocean. The channel is maintained to a min. width of 152.4m and a depth of 12.80m at MLW. The lock chambers are 304.8m long and 33.53m wide, with a depth of water over the mitre sills of 12.4m at the most restrictive point, the south end of Pedro Miguel Locks. A system of whistle buoys to mark the Atlantic entrance to the canal has been installed; these are especially effective in rough weather and where there are few physical landmarks to be seen. From the Atlantic terminal, Cristobal Harbour or Limon Bay, the channel extends to Gatun Locks, a distance of about 12 Km, where vessels enter a 3-lift lock and are raised 25.90m to the level of Gatun Lake, which is the summit elevation of the Canal. All locks have two parallel lanes. The channel from Gatun Locks through the lake extends 37.6 Km to Gamboa, where vessels enter the Gaillard Cut, which runs approx. 12.8 Km to Pedro Miguel. At Pedro Miguel vessels enter a single-lift lock and are lowered 9.45m to a small lake, through which they pass to Miraflores Locks, a distance of about 1.6 Km. Here they enter a 2-lift lock and are lowered to sea level, passing out through a channel about 11 Km long to the Pacific. Vessels are towed through the locks by electric locomotives assisted by the ship's engines. High-mast lighting is installed at all locks. A vessel of medium size can pass through the Canal in about 9 hours, and Canal capacity is now about 42 vessels per day. The convoy system is not employed.
Vessels are dispatched for transit under a fairly complex system resulting from the need to schedule traffic in accordance with vessel type, size and/or cargo, which governs pilot and equipment requirements and restrictions on transit time and conditions. Large vessels and dead tows, which require a clear Cut and/or daylight passage, are usually dispatched during the early morning, with smaller vessels commencing transit later in the day and during the night.
During the early 1600s, Jesuit missionaries arrived in New France (modern-day eastern North America) to convert North American Indians to the Roman Catholic faith. Beginning in 1632, the Jesuits began to publish yearly accounts of their missionary activities in New France. These accounts became known as the Jesuit Relations. The Jesuit Relations provided Europeans interested in settling in North America with information on life in the New World. These writings also have provided subsequent historians with an abundance of information on the Jesuits' experiences in New France. In addition, the Jesuit Relations give modern researchers some of the earliest and most detailed written accounts of the American Indian people who resided in modern-day Ohio.
What's a Rain Garden? Why is a Rain Garden Important?
A rain garden is an attractive native plant garden with a purpose: to protect local streams, rivers and the Chesapeake Bay. Rain water (or snowmelt) is routed to the garden and filtered by the plants and soils in the garden. Rain gardens use a combination of soils and water-tolerant native plants to catch and hold runoff, a concept known as bioretention. The soils and plants then naturally filter out pollutants found in rain and runoff, helping to protect local streams, rivers and the Chesapeake Bay. Impervious surfaces, like rooftops, roads and parking lots, do not absorb or allow the infiltration of rainfall. As a result, more rainwater travels over the surface, washing various pollutants like excess nutrients, lead, copper, engine oil, gasoline and engine coolant collected on these surfaces into local streams, rivers and eventually the Chesapeake Bay. Planting a rain garden in your yard may seem like a small thing, but capturing the first inch of water from a storm in a rain garden keeps 90% of pollutants and nutrients out of the local streams and rivers. Keeping rain where it falls by putting it into a rain garden will help protect our rivers, streams and the Chesapeake Bay. - Matt Fleming, Chesapeake & Coastal Service Unit Director
Sustainability Tip: Install a rain garden or rain barrel to catch rainwater. Stormwater rushing off of roofs and lawns carries pollution directly into streams and the Bay. Rain gardens and rain barrels slow the water down and keep pollution out of our waters.
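To make the "first inch" figure concrete, here is a rough back-of-envelope sketch in Python (my own illustration; the roof size is hypothetical and the article itself gives no such numbers):

roof_area_sqft = 1000                     # hypothetical roof footprint
first_inch_ft = 1 / 12                    # one inch of rainfall, in feet
runoff_cubic_ft = roof_area_sqft * first_inch_ft
runoff_gallons = runoff_cubic_ft * 7.48   # 1 cubic foot is about 7.48 US gallons
print(f"{runoff_gallons:.0f} gallons")    # prints: 623 gallons

Even a modest roof sheds hundreds of gallons in the first inch of a storm, which is why routing that water into a garden rather than onto pavement matters.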
No matter how much you praise Michelangelo, Raphael or any other artist who created wonderful masterpieces, they can never match their level of artistry to that of nature. This is because nature puts millions of years of effort into creating its geological wonders, carving and molding every speck present on the canvas of the earth. Whether it is a shiny round pebble or the mighty Grand Canyon, our vocabulary becomes limited and falls short of words to describe these marvelous creations. Two such wonders are stalactites and stalagmites, which are formed in caves that are over a million years old.
THE FORMATION: Both stalactites and stalagmites are found in limestone caves.
1. Limestone caves are composed of a mineral called calcite. Calcite is basically calcium carbonate (the same compound as marble, in simple terms).
2. When rain water falls over the cave, the water flows over the rock's surface and dissolves carbon dioxide (from the air) and calcite (from the rocks) as it flows.
3. A chemical reaction between the water, carbon dioxide and calcite converts the calcite into a soluble compound called calcium hydrogen carbonate.
4. If there is a crack in the ceiling of the cave, this water (which has dissolved the calcium hydrogen carbonate) flows down through the crack into the cave.
5. The water trickling down from the ceiling releases carbon dioxide, which causes another reaction (the reverse reaction) that converts the calcium hydrogen carbonate back into calcite. This calcite is deposited and sticks around the ceiling's crack, forming a tube-like structure that gradually grows into an inverted solid cone: a stalactite.
6. The water dripping from the end of a stalactite falls to the floor of the cave and deposits more calcite into a mound. The mound also grows, forming another cone on the floor of the cave: a stalagmite. That is why the two usually occur in pairs.
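Written as chemical equations, the two reactions described in steps 3 and 5 are (standard cave chemistry, added here for clarity; the article describes them only in words):

Dissolution (rainwater picks up calcite): CaCO₃ + H₂O + CO₂ → Ca(HCO₃)₂
Deposition (the drip loses CO₂ inside the cave): Ca(HCO₃)₂ → CaCO₃ + H₂O + CO₂

The second reaction is simply the first run in reverse, which is why the same water that dissolves limestone above ground rebuilds it, drip by drip, as stalactites and stalagmites below.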
Sometimes it's useful to have a simple program around to answer whether a certain number is a prime number or not. With a few lines of Python this is easy to build. The following small program prompts for a number and answers whether it's a prime or not. You can also read the exit code of the program in case you want to use it from another program or a shell script. The program will quit with exit code 0 if it's a prime number, and 1 if it's not. This program only works for numbers greater than 1, since it starts checking by dividing by 2. I think this is okay, since we all know (or nowadays agree) that 1 is not a prime number and that 2 is the first one.

#!/usr/bin/env python3
import sys

n = int(input("Enter a number: "))
for i in range(2, n):
    if n % i == 0:
        # Exiting with a message prints it to stderr and sets exit code 1.
        sys.exit(str(n) + " is not a prime")
print(n, "is a prime")

How it works
To find out if a number is a prime number or not, one can divide it by every number from two up to the number itself minus 1. If none of these divisions yields a whole number (that is, every division leaves a remainder), it's a prime number. Let's start with 5 as an example: 5 / 2 = 2.5, 5 / 3 ≈ 1.67 and 5 / 4 = 1.25. As we can see, we didn't get any integers, or whole numbers, and hence 5 is a prime number. Now let's try 9: 9 / 2 = 4.5 and 9 / 3 = 3. We don't need to go any further here, since we got a whole 3, and hence the number 9 is not a prime number.
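Used from a shell, the exit codes look like this (assuming the script above is saved as prime.py; the filename is my own choice):

$ python3 prime.py
Enter a number: 7
7 is a prime
$ echo $?
0
$ python3 prime.py
Enter a number: 8
8 is not a prime
$ echo $?
1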
The alkaline earth metals are a group of chemical elements in the periodic table with very similar properties. They are all shiny, silvery-white, somewhat reactive metals at standard temperature and pressure and readily lose their two outermost electrons to form cations with charge 2+ and an oxidation state, or oxidation number, of +2. In the modern IUPAC nomenclature, the alkaline earth metals comprise the group 2 elements. The alkaline earth metals are beryllium (Be), magnesium (Mg), calcium (Ca), strontium (Sr), barium (Ba), and radium (Ra). This group lies in the s-block of the periodic table as all alkaline earth metals have their outermost electron in an s-orbital. The periodic table is a tabular arrangement of the chemical elements, organized on the basis of their atomic numbers, electron configurations, and recurring chemical properties. Elements are presented in order of increasing atomic number (the number of protons in the nucleus). The standard form of the table consists of a grid of elements laid out in 18 columns and 7 rows, with a double row of elements below that. The table can also be deconstructed into four rectangular blocks: the s-block to the left, the p-block to the right, the d-block in the middle, and the f-block below that. The rows of the table are called periods; the columns are called groups, with some of these having names such as halogens or noble gases. Since, by definition, a periodic table incorporates recurring trends, any such table can be used to derive relationships between the properties of the elements and predict the properties of new, yet to be discovered or synthesized, elements. As a result, a periodic table—whether in the standard form or some other variant—provides a useful framework for analyzing chemical behavior, and such tables are widely used in chemistry and other sciences. A chemical element is a pure chemical substance consisting of one type of atom distinguished by its atomic number, which is the number of protons in its nucleus. Elements are divided into metals, metalloids, and non-metals. Familiar examples of elements include carbon, oxygen (non-metals), silicon, arsenic (metalloids), aluminium, iron, copper, gold, mercury, and lead (metals). The lightest chemical elements, including hydrogen, helium (and smaller amounts of lithium, beryllium and boron), are thought to have been produced by various cosmic processes during the Big Bang and cosmic-ray spallation. Production of heavier elements, from carbon to the very heaviest elements, proceeded by stellar nucleosynthesis, and these were made available for later solar system and planetary formation by planetary nebulae and supernovae, which blast these elements into space. The high abundance of oxygen, silicon, and iron on Earth reflects their common production in such stars, after the lighter gaseous elements and their compounds have been subtracted. While most elements are generally viewed as stable, a small amount of natural transformation of one element to another also occurs at the present time through decay of radioactive elements as well as other natural nuclear processes. The Internet is a global system of interconnected computer networks that use the standard Internet protocol suite (TCP/IP) to serve several billion users worldwide. It is a network of networks that consists of millions of private, public, academic, business, and government networks, of local to global scope, that are linked by a broad array of electronic, wireless and optical networking technologies.
The Internet carries an extensive range of information resources and services, such as the inter-linked hypertext documents of the World Wide Web (WWW), the infrastructure to support email, and peer-to-peer networks. Most traditional communications media, including telephone, music, film, and television, are being reshaped or redefined by the Internet, giving birth to new services such as voice over Internet Protocol (VoIP) and Internet Protocol television (IPTV). Newspaper, book and other print publishing are adapting to website technology, or are being reshaped into blogging and web feeds. The Internet has enabled and accelerated new forms of human interactions through instant messaging, Internet forums, and social networking. Online shopping has boomed both for major retail outlets and small artisans and traders. Business-to-business and financial services on the Internet affect supply chains across entire industries. In journalism, a human interest story is a feature story that discusses a person or people in an emotional way. It presents people and their problems, concerns, or achievements in a way that brings about interest, sympathy or motivation in the reader or viewer. Human interest stories may be "the story behind the story" about an event, organization, or otherwise faceless historical happening, such as the life of an individual soldier during wartime, an interview with a survivor of a natural disaster, a random act of kindness or a profile of someone known for a career achievement.
Meekness is an attribute of human nature and behavior. It has been defined several ways: righteous, humble, teachable, and patient under suffering; long-suffering; willing to follow gospel teachings; an attribute of a true disciple. Meekness has been contrasted with humility as referring to behavior towards others, whereas humility refers to an attitude towards oneself – meekness meaning restraining one's own power, so as to allow room for others.
- The Israelite Apostle Paul gave an example of meek behavior when writing to Timothy: "The servant of the Lord must be gentle, apt to teach, patient, in meekness instructing those that oppose themselves." (2 Tim. 2:24–25)
- Sir Thomas Browne explained: "Meekness takes injuries like pills, not chewing, but swallowing them down." This indicates that meekness allows a person to overlook or forgive perceived insults or offenses.
- The meek feature in the Beatitudes, and were linked thereby to the classical virtue of magnanimity by Aquinas.
- Latter Day Saint Apostle of Jesus Christ, Elder David A. Bednar, said in April 2018, "Meekness is a defining attribute of the Redeemer and is distinguished by righteous responsiveness, willing submissiveness, and strong self-restraint." He further said, "Whereas humility generally denotes dependence upon God and the constant need for His guidance and support, a distinguishing characteristic of meekness is a particular spiritual receptivity to learning both from the Holy Ghost and from people who may seem less capable, experienced, or educated, who may not hold important positions, or who otherwise may not appear to have much to contribute."
- Beethoven rejected meekness and equality in favor of cultural elitism: "Power is the moral principle of those who excel others."
- Nietzsche rejected Christian meekness as part of a parasitic revolt by the low against the lofty, the manly, and the high.
- Buddhism, like Christianity, strongly values meekness – the Buddha himself (in an earlier life) featuring as the 'Preacher of Meekness' who patiently had his limbs lopped off by a jealous king without complaining.
- Taoism valorized the qualities of submission and non-contention.
- Book of Numbers, chapter 12, verse 3: Now the man Moses was very meek, above all the men which were upon the face of the earth.
- In Islam, faqr, sometimes translated as "poverty", is one of the central attitudes of a Faqeer. It was also one of the attributes of the Prophet. He said "faqr is my pride". In a spiritual sense, faqr is defined as the absence of desire for wealth, recognition or for the blessings of the otherworld. One of the aspects of one who has embodied the true essence of faqr is that the mystic will never ask anything of anyone else. The reason for this is that were one to ask someone else for anything, they would be relying on a created being. To receive something from that same being would produce gratitude in the heart which would be geared toward the giver, not towards God.
- The classical Greek word used to translate meekness was that for a horse that had been tamed and bridled.
- The buffalo was to the Buddhists a lesson in meekness.
- Meekness is used to characterise the nature of Tess in Tess of the D'Urbervilles.
- The heroine of Possession: A Romance judges the hero as "a gentle and unthreatening being. Meek, she thought drowsily, turning out the light. Meek."
- The Free Dictionary, Meekness
- LDS.org Guide to the Scriptures, meekness
- Neal A. Maxwell, Meekness -- A Dimension of True Discipleship, 1982
- E. A.
Cochran, Receptive Human Virtues (2011) p. 82
- Matthew (1806). A Discourse Concerning Meekness. Hilliard
- K. D. Bassett, Doctrinal Insight to the Book of Mormon (2008) p. 197
- The Free Dictionary, Usages of meekness
- C. S. Titus, Resilience and the Virtue of Fortitude (2006) p. 320
- David A. Bednar (April 2018). "Meek and Lowly of Heart". The Church of Jesus Christ of Latter-day Saints.
- Quoted in Maynard Solomon, Beethoven Essays (1988) p. 204
- W. Kaufman ed., The Portable Nietzsche (1987) p. 626-30
- J. B. Carman, Majesty and Meekness (1994) p. 124
- D. Schlinghoff, Studies in the Ajanta Paintings (1987) p. 219
- D. C. Lau ed., Lao Tzu (1963) p. 25-9
- Annemarie Schimmel (2011). Mystical Dimensions of Islam (reprint). University of North Carolina Press. p. 121. ISBN 978-0-8078-9976-2.
- Khadim Sultan-ul-Faqr, Mohammad Najib-ur-Rehman (2015). Sultan Bahoo: The Life and Teachings, page 145. Sultan-ul-Faqr Publications, Lahore. ISBN 978-969-9795-18-3.
- J. K. Bergland, The Journeys of Robert Williams (2010) p. 53
- D. Schlinghoff, Studies in the Ajanta Paintings (1987) p. 144
- H. Bloom, Thomas Hardy (2010) p. 84
- A. S. Byatt, Possession: A Romance (1991) p. 141
Repelling rivals with just a song
Two species of tawny brown singing mice that live deep in the mountain cloud forests of Costa Rica and Panama set their boundaries by emitting high-pitched trills, researchers at UT's Center for Brain, Behavior and Evolution have discovered. Although males of both the Alston's singing mouse (Scotinomys teguina) and Chiriqui singing mouse (S. xerampelinus) sing to attract mates and repel rivals within their respective species, the findings show for the first time that communication is being used to create geographic boundaries between species. In this case, the smaller Alston's mouse steers clear of its larger cousin, the Chiriqui. "Most people are puzzled by the existence of singing mice, but in reality many rodents produce complex vocalizations, including mice, rats and even pet hamsters," said Bret Pasch, a postdoctoral fellow in the Department of Integrative Biology and lead author on the paper, which was published online in The American Naturalist. "Often they're high-pitched and above the range of human hearing." Both singing mouse species produce vocalizations that are barely audible to humans. Alston's singing mice are smaller and more submissive than Chiriqui singing mice, and they have longer, higher-pitched songs than their larger cousins. "Songs consist of a set of rapidly repeated notes, called trills," said Pasch. "Notes are produced each time an animal opens and closes its tiny mouth, roughly 15 times per second." The two mouse species share similar diets and live in similar forest habitats. Such overlap in lifestyle often leads to conflict. "A long-standing question in biology is why some animals are found in particular places and not others. What factors govern the distribution of species across space?" said Pasch. Using field and laboratory experiments, Pasch and his colleagues discovered that temperature regimes appear to limit how far down the mountain the larger Chiriqui mice can spread. They do not tolerate heat well and prefer the cooler temperatures of the higher altitudes. These dominant mice sing in response to potential intruders of either species and actively approach both types of songs. Conversely, temperature-tolerant Alston's mice will readily spread higher into cooler habitats if their larger cousins are removed from the equation. However, when an Alston's mouse hears the call of his bigger cousin, he ceases singing and flees to avoid a confrontation, declaring defeat before the battle has even begun. "The use of communication in mediating species limits is the major finding of our study and provides insight into how large-scale patterns are generated by individual interactions," said Pasch.
Rudimentary elevators, or hoists, were in use during the Middle Ages and can be traced back to the third century BC. They were operated by animal and human power or by water-driven mechanisms. The elevator as we know it today was first developed during the 1800s and relied on steam or hydraulic plungers for lifting capability. In the latter application, the cab was affixed to a hollow plunger that lowered into an underground cylinder. Liquid, most commonly water, was injected into the cylinder to create pressure and make the plunger elevate the cab, which would simply lower by gravity as the water was removed. Valves governing the water flow were manipulated by passengers using ropes running through the cab, a system later enhanced with the incorporation of lever controls and pilot valves to regulate cab speed. The "granddaddy" of today's traction elevators first appeared during the 19th century in the U.K., a "lift" using a rope running through a pulley and a counterweight tracking along the shaft wall. Give Us the Power... The power elevator debuted mid-19th century in the U.S. as a simple freight hoist operating between just two floors in a New York City building. By 1853, Elisha Graves Otis was at the New York Crystal Palace exposition, demonstrating an elevator with a "safety" to break the cab's fall in case of rope failure, a defining moment in elevator development. By 1857, the country's first Otis passenger elevator was in operation at a New York City department store, and, ten years later, Elisha's sons went on to found Otis Brothers and Company in Yonkers, NY, eventually to achieve mass production of elevators in the thousands. Various other elevator designs appeared on the landscape, including screw-driven and rope-geared, hydraulic models. Later in the 1800s, with the advent of electricity, the electric motor was integrated into elevator technology by German inventor Werner von Siemens. With the motor mounted at the bottom of the cab, this design employed a gearing scheme to climb shaft walls fitted with racks. In 1887, an electric elevator was developed in Baltimore, using a revolving drum to wind the hoisting rope, but these drums could not practically be made large enough to store the long hoisting ropes that would be required by skyscrapers. Motor technology and control methods evolved rapidly. In 1889 came the direct-connected geared electric elevator, allowing for the building of significantly taller structures. By 1903, this design had evolved into the gearless traction electric elevator, allowing hundred-plus story buildings to become possible and forever changing the urban landscape. Multi-speed motors replaced the original single-speed models to help with landing-leveling and smoother overall operation. Electromagnet technology replaced manual rope-driven switching and braking. Push-button controls and various complex signal systems modernized the elevator even further. Safety improvements have been continual, including a notable development by Charles Otis, son of original "safety" inventor Elisha, that engaged the "safety" at any excessive speed, even if the hoisting rope remained intact. Today, there are intricate governors and switching schemes to carefully control cab speeds in any situation. "Buttons" have been giving way to keypads. Virtually all commercial elevators operate automatically and the computer age has brought the microchip-based capability to operate vast banks of elevators with precise scheduling, maximized efficiency and extreme safety. 
Elevators have become a medium of architectural expression as compelling as the buildings in which they're installed, and new technologies and designs regularly allow the human spirit to soar!
Readers Theatre With Jan Brett
Grades: 1–2 | Lesson Plan Type: Standard Lesson | Estimated Time: Five 60-minute sessions
In this lesson, students in grades 1–2 interact with the book Hedgie's Surprise by Jan Brett and create a Readers Theatre that is performed for an audience. Students make predictions about the story prior to reading and listen to a read-aloud of the story. Postreading, they make observations about the characters, setting, and plot. The focus on the literary elements of the story leads students to create costumes, props, and sets for the final Readers Theatre performance. Although Readers Theatre does not typically employ such devices, the use of costumes and sets affords early elementary students a better understanding of the story.
Readers Theatre script for Hedgie's Surprise: The sample script for Hedgie's Surprise provided with this lesson can be used for the Readers Theatre or as a model for students to write their own script.
Aaron Shepard's RT page: For helpful tips on conducting Readers Theatre and some additional sample scripts, see Aaron Shepard's RT page.
Gambrell, L.B., Morrow, L.M., & Pennington, C. (2002). Early childhood and elementary literature-based instruction: Current perspectives and special issues. Reading Online, 5(6). Available: http://www.readingonline.org/articles/art_index.asp?HREF=handbook/gambrell/index.html
- Literature-based instruction provides authentic learning experiences and activities by using high-quality literature to teach and foster literacy development.
- A guiding principle of the literature-based perspective is that literacy acquisition occurs in a book-rich context where there is an abundance of purposeful communication and meaning is socially constructed (Cullinan, 1987).
- Student participation in storybook readings (e.g., a Readers Theatre performance) increases comprehension and the sense of story structure, thereby enabling students to more thoroughly integrate the information.
- The element of drama enables students to realize that reading is an activity that permits experimentation: they can try reading words in different ways to produce different meanings. As they practice their roles, readers are also given the opportunity to reflect on the text and to evaluate and revise how they interact with it.
- Educators have long elaborated on the benefits of using Readers Theatre and related strategies for increasing reading fluency and sight-word vocabulary, improving reading comprehension, providing opportunities to interpret dialogue and communicate meaning, and increasing awareness and appreciation of plays as a form of literature.
Cullinan, B.E. (1987). Children's literature in the reading program. Newark, DE: International Reading Association.
Human Rights Day History
Human Rights Day commemorates the day on which the United Nations issued the Universal Declaration of Human Rights (UDHR), a document drafted by representatives from all regions of the world, which outlined fundamental human rights to be universally protected. The Declaration contains 30 articles that touch on rights to freedom, justice, peace, dignity, education and health care, amongst other rights. On December 10, 1948, the United Nations proclaimed the UDHR in an effort to define the equal rights that all humans on the planet deserve and to help the world achieve lasting freedom, justice and peace. Human Rights Day was officially declared by the United Nations in 1950. It is celebrated on December 10th each year and is marked by speeches and activities designed to bring attention to the most pressing human rights issues worldwide.
Human Rights Day Facts & Quotes
- The United Nations Declaration of Human Rights was one of the organization's first declarations and came about after the atrocities perpetrated upon humans during World War II were brought to light.
- Over the past decade, armed conflict has killed 2 million children, disabled another 4-5 million, left 12 million homeless and orphaned another million.
- Whenever I hear anyone arguing for slavery, I feel a strong impulse to see it tried on him personally. - Abraham Lincoln
- America did not invent human rights. In a very real sense... human rights invented America. - Jimmy Carter
- I have cherished the ideal of a democratic and free society... it is an ideal for which I am prepared to die. - Nelson Mandela, President of South Africa, who was imprisoned from 1964-1990.
Human Rights Day Top Events and Things to Do
- Educate yourself on current human rights fights such as genocide by terrorist groups, slavery and trafficking and child labor around the world.
- Get involved with a local human rights organization.
- Hold a candlelight vigil for those who have had their human rights violated.
- Watch a documentary about human rights issues and violations. Some recommendations: Invisible Children (2006), Girl Rising (2013) and Nefarious (2011).
- Attend an Amnesty International Human Rights Event near you to support the battle to uphold human rights throughout the world.
Wool and Stuff
Peasant men wore stockings or tunics, while women wore long gowns with sleeveless tunics and wimples to cover their hair. Sheepskin cloaks and woolen hats and mittens were worn in winter for protection from the cold and rain. Leather boots were covered with wooden patens to keep the feet dry. The outer clothes were almost never laundered, but the linen underwear was regularly washed. The smell of wood smoke that permeated the clothing seemed to act as a deodorant. Peasant women spun wool into the threads that were woven into the cloth for these garments.
Being an American
The Declaration of Independence
In this lesson, students will explore the structure, purpose, and significance of the Declaration of Independence. Students will analyze the concepts of inalienable or natural rights and government by consent to begin to understand the philosophical foundations of America's constitutional government.
The United States Constitution
In this lesson, students will study the Constitution from three perspectives: structure, content, and underlying principles. They will study the purpose, content, underlying ideas, and constitutional principles of each Article in the Constitution.
The Bill of Rights
For the Bill of Rights to remain more than what Madison referred to as a "parchment barrier," citizens must understand the purpose, content, and meaning of this important American document. In this lesson, students will identify and analyze the protections in the Bill of Rights as well as evaluate Supreme Court decisions in cases centered on Bill of Rights protections.
America's Civic Values
This lesson offers students the opportunity to reflect on the virtues the Founders considered fundamental to a free society. After reflecting on the meaning of these values, students will analyze situations where civic values can be exercised and identify modern examples of those values in practice.
American Heroes: Past and Present
Students will examine how a diverse group of Americans have exemplified the responsibilities of citizenship. Students will consider how these historic figures defended the principles of the Constitution and Bill of Rights through their choices and actions.
A Personal Response to American Citizenship
This lesson challenges students to reflect on the meaning of American citizenship and understand its many forms, including private action and public service. Students will explore avenues for maintaining individual responsibility and civic engagement before articulating responses to the challenges of citizenship.
Franklin D. Roosevelt was in his second term as governor of New York when he was elected as the nation’s 32nd president in 1932. With the country mired in the depths of the Great Depression, Roosevelt immediately acted to restore public confidence, proclaiming a bank holiday and speaking directly to the public in a series of radio broadcasts or “fireside chats.” His ambitious slate of New Deal programs and reforms redefined the role of the federal government in the lives of Americans. Reelected by comfortable margins in 1936, 1940 and 1944, FDR led the United States from isolationism to victory over Nazi Germany and its allies in World War II. He spearheaded the successful wartime alliance between Britain, the Soviet Union and the United States and helped lay the groundwork for the post-war peace organization that would become the United Nations. The only American president in history to be elected four times, Roosevelt died in office in April 1945.
STEM Adventure in Nature – Integration of Nature and STEM Activities
Our STEM Adventure in Nature project aims to provide STEM integration to our students through nature-oriented activities. The subjects that students are most curious about in nature constitute the starting point of our project. The project subjects are plant cultivation, what the roots of plants do, the water cycle, the importance of water for living things, what the wind does in nature, what soil is, clean and dirty soil and its importance for our future, and waste management. Why is nature important in STEM activities? Nature offers us great opportunities to create the preliminary knowledge that children can use in the STEM approach. In early childhood, children are most curious about nature and want to spend time outside. Some examples of this are: watching the ants when you go out to play in the garden, the shadow play you play together, the bean experiments we watched grow in amazement, the wind wheel we spin in the wind, toys made of clay… The main objective of our project was to enable our students to discover nature with STEM-based activities and to raise awareness about the importance of nature for a sustainable future. Therefore, we aimed to increase students’ interest in and knowledge of nature, as well as to influence their level of environmental awareness and their positive attitudes towards nature. At the beginning of the process, we identified the subjects that our students were most curious about in their environment. After doing that, we created a brochure addressed to their parents, containing information about nature and STEM. To start the project, we made seed balls, studied how plants grow and what plant roots do, and produced dyes from plant roots. After that, we watered the plants and observed through experiments how water forms in the water cycle. We investigated which living things other than plants need water. In addition, we created aquariums ourselves from simple materials. In the third part of the project, we dealt with the subject of soil. We started by investigating how soil is formed, exploring clean and polluted soil. We used soil as an educational material, touching it and creating shapes with it. We also made wind wheels and set them spinning, exploring how wind can be used as an energy source. Our project developed the scientific skills of our students through STEM-based activities and supported all areas of development in early childhood. Our students were able to use Web 2.0 tools through technology integration. We strengthened our students’ ties with nature and made them look at nature more curiously.
About the Author
Havva Düzenli is a preschool teacher in Turkey, specialised in STEM. She has been a Scientix Ambassador for two years.
When it comes to writing, one essential aspect to consider is the voice you use in your sentences. The two most common voices are “passive” and “active.” Understanding the difference between these two can significantly improve the clarity and impact of your writing. In this blog post, we’ll break down the passive and active voice using simple language, so you can confidently choose the right voice for your writing. Let’s start with the active voice, which is the more straightforward and direct way of constructing a sentence. In the active voice, the subject of the sentence performs the action, and the verb shows that action. Since the subject is the one performing the action, the sentence structure is clear and concise. Example: John (subject) wrote (verb) a book (object). In this sentence, John is the doer of the action (he wrote), and the sentence is easy to follow. Active voice is usually preferred because it makes your writing more engaging and concise. On the other hand, the passive voice can be a bit trickier to understand. In the passive voice, the subject receives the action rather than performing it. The sentence structure is reversed, and the focus is often on the object rather than the subject. This can sometimes lead to ambiguity and wordiness. Example: The book (object) was written (verb) by John (subject). In this sentence, the book is the one receiving the action (it was written), and the subject, John, is not the primary focus. Passive voice can be useful in certain situations, such as when the doer of the action is unknown, or when the focus needs to be on the object. However, overusing passive voice can make your writing sound weak and less engaging. When to Use Active Voice: Active voice is generally preferred in most types of writing because it creates a stronger and more direct connection between the subject and the action. Active voice is ideal for: - Clear and straightforward communication. - Engaging and compelling storytelling. - Concise and direct sentences. When to Use Passive Voice: While passive voice is not as commonly used, there are instances where it can be appropriate. Passive voice can be suitable for: - Emphasizing the object or the receiver of the action. - Being tactful or diplomatic when the doer of the action is not important or needs to be downplayed. - Emphasizing a sequence of events, where the subject is consistent throughout. How to Identify Passive Voice: Spotting passive voice in your writing is relatively simple. Look for these common indicators: - Forms of “to be”: am, is, are, was, were, being, been, etc. - Past participles: words ending in “-ed” or “-en” (e.g., written, spoken). If you find these elements in your sentence, you might be using passive voice. Using active voice in your writing generally results in more compelling, clear, and direct communication. While passive voice has its place, it’s essential to use it judiciously. By understanding the difference between active and passive voice and knowing when to use each, you can elevate the quality of your writing and make it more impactful for your readers.
Registered users download this PDF file for free. Use this describing food domino game as a complement to our describing foods lesson plan or as an addition to your own lesson plan. This activity functions as a basic domino game you can play with your A2 students, or some ambitious A1s, to practice describing food.
- Cut out the dominos and shuffle them in a pile.
- Give your students each 4-5 dominoes.
- Students take turns matching the illustrations with relevant words describing food.
- Please note, there are many overlapping pairs, which makes the game much more playable.
- You can play the easy version (students need only associate one of the adjectives with the illustration) or the harder version (they must associate both adjectives with the illustration).
- If there is any doubt, have the class decide if the student has made a legal move!
- A student can play only one domino at a time.
- If a student is blocked, they end their turn by drawing a domino from the pile.
- The first student to get rid of all their dominos wins.
It's truly a pity we don't have an online domino game engine that would be easy to program…
The second activity plays like Taboo.
- Students must describe a food without using any of the words in the box connected to it.
- If someone successfully names the food, then both earn a point.
- Check the box so that it cannot be used again.
Looking for more activities to talk about food and drinks? Check out our collection by clicking here.
Assignment: Measures of Variability
Measures of central tendency are some of the most widely used statistics for describing data. Recall that measures of central tendency capture what a typical case or score looks like. An equally important characteristic of data, however, is how the cases or scores are distributed and how much they vary from one another. Measures of variability—including the range, interquartile range, variance, and standard deviation—describe the distribution and variability of data.
Consider again the arrest records of inmates. It is possible that some inmates have committed many offenses, whereas others are one-time offenders. To describe how the number of offenses of inmates is distributed, you could calculate the range and the interquartile range. The range would show the difference between the highest and the lowest number of offenses. The interquartile range would show the middle 50 percent of the number of offenses. To describe how much the offenses vary from each other, you could calculate the variance and the standard deviation. The variance is typically not part of data interpretation; rather, it is a statistic that is calculated when determining the standard deviation. The standard deviation would show, on average, how far each inmate's number of arrests deviates from the mean number of arrests of all inmates.
In this Assignment, you calculate the range, interquartile range, variance, and standard deviation of a hypothetical set of data.
- Review the 10 cases in the sample listed below. Calculate the range, interquartile range, variance, and standard deviation.
- Consider how your calculations might be used to explain the distribution and variance of the data in this sample.
- In 500 words, using the sample provided, respond to the following:
- What is the range of the sample?
- What is the interquartile range of the sample?
- What is the variance of the sample?
- What is the standard deviation of the sample?
- Based on your calculations, explain the distribution and variance of data.
- Explain how the ability to analyze the data in these ways can affect your criminal justice practice.
- Explain how the ability to understand data presented in these ways can affect citizen understanding of crime.
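Because the 10-case sample itself is not reproduced in this excerpt, the sketch below uses an invented set of arrest counts purely for illustration; substitute the assignment's actual data. It shows how each requested statistic can be computed in Python. Note that statistics.variance and statistics.stdev use the sample (n - 1) formulas, and statistics.quantiles defaults to the 'exclusive' quartile method, so confirm which conventions your course expects:

import statistics

arrests = [1, 2, 2, 3, 4, 5, 6, 8, 9, 15]  # hypothetical sample of 10 inmates

data_range = max(arrests) - min(arrests)              # highest minus lowest
q1, _median, q3 = statistics.quantiles(arrests, n=4)  # quartile cut points
iqr = q3 - q1                                         # spread of the middle 50 percent
variance = statistics.variance(arrests)               # average squared deviation (n - 1)
std_dev = statistics.stdev(arrests)                   # square root of the variance

print(data_range, iqr, round(variance, 2), round(std_dev, 2))

A large standard deviation relative to the mean would indicate that arrest counts vary widely across inmates, which is the kind of interpretation the write-up asks for.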
Indigenous poverty in Canada is a generational problem that has lowered living standards and created wide gaps in financial security and literacy. Indigenous populations still experience disproportionately high rates of poverty, despite multiple initiatives and legislation designed to reduce such gaps. In the Canadian Constitution, three distinct groups of Indigenous peoples are officially recognized: First Nations, Inuit and Métis. The poorest Canadians are those of Indigenous descent: in Canada, one in four Indigenous people and four in 10 Indigenous children face poverty.
Intergenerational Trauma within Indigenous Communities
Indigenous communities in Canada have long been victims of colonial policies that suppressed their cultural identity and assimilated Indigenous children into Euro-Western culture through the residential school system. Profound intergenerational distress and trauma have persisted and compounded over time, passing across successive generations within kinship groups and becoming entrenched in Indigenous families across Canada. This trauma has manifested in symptoms including anxiety, depression and substance abuse, and addressing these challenges has proven difficult for mental health professionals in Canada. At the community level, there is a need to recognize the impact of colonization, allocate resources to community-based initiatives on Indigenous reserves and continue promoting reconciliation with Indigenous communities.
The Remoteness of Indigenous Communities
Responsibility for Indigenous reserves lies with the federal government of Canada. Indigenous reserves are mostly in isolated northern Canadian provinces and territories. Because of this remoteness, these communities have difficulty acquiring basic resources, including food, shelter and education, which are more expensive than in southern communities. In some communities, employment opportunities are few. Indigenous Services Canada (ISC) works to enhance First Nations, Inuit and Métis services. The mission of the ISC is to facilitate the self-sufficiency of Indigenous communities in delivering essential services and addressing socio-economic circumstances within their respective communities. At present, the federal government is working on measures to promote the provision of clean water on reserves; it has also established helplines for mental health services and implements non-insured health benefits.
Systemic Discrimination and Institutional Racism
Institutional racism and prejudice deepen Indigenous poverty in Canada, as discrimination in justice, health care and employment restricts resources and opportunities. The 2017–2018 Annual Report of the Office of the Correctional Investigator revealed a concerning surge in Indigenous imprisonment: the proportion of Indigenous federal prisoners rose from 20% in 2008–09 to 28% in 2017–18. Despite experiencing higher victimization rates, Indigenous individuals are not inherently more prone to committing crimes than their non-Indigenous counterparts. The 2019 General Social Survey (GSS) reported that Indigenous people faced discrimination at a rate 33% higher than non-Indigenous, non-visible-minority individuals. The government initiative Budget 2021 allocated $126.7 million over three years to combat anti-Indigenous racism in Canada's health systems.
Among the initiatives is the Federation of Sovereign Indigenous Nations First Nations Health Ombudsperson Office. Advocates from this agency work with patients and families to address systemic concerns with federal and provincial health institutions. They also assist in identifying solutions to conflicts and concerns, ultimately leading to improvements in the overall system.
Indigenous peoples face lifelong educational hurdles created by colonialism, marginalization, poor schooling on reserves and limited financing, especially in rural locations with few schools and programs. Nearly half of Indigenous reserve residents in Ontario lack a high school diploma. Indigenous and Northern Affairs Canada currently conducts youth employment, job experience and skills development programs, and these initiatives finance First Nations and Inuit post-secondary students. The programs try to overcome educational inequities and improve employability for Indigenous students, yet weak educational systems in Indigenous communities continue to perpetuate economic instability and poverty.
The Long-Term Consequences of Residential Schools
From the 17th century through the late 1990s, Canada ran Indigenous residential schools. These Christian-run institutions aimed to eradicate Indigenous culture and incorporate children into Euro-Western civilization. Survivors and their descendants continue to suffer from emotional trauma and the loss of language, culture and mental well-being long after the closure of residential schools. The Canadian government has repeatedly apologized to Indigenous people for residential school abuse, and Pope Benedict XVI apologized to the Assembly of First Nations' National Chief in 2009 for Indigenous people's suffering in residential schools. Additionally, many Indigenous people turn to substance use to cope with mental health issues caused by the residential school system. The 2006 Indian Residential Schools Settlement Agreement created the Indian Residential Schools Resolution Health Support Program to assist Indigenous communities in coping with this emotional trauma. Former students of residential schools may seek cultural and emotional assistance through the program's crisis hotlines, which foster a positive outlet.
Indigenous poverty in Canada persists due to a variety of circumstances, including residential institutions, educational challenges, isolation on Indigenous reserves, racial discrimination and the long-term repercussions of intergenerational trauma. Nonetheless, thanks to ongoing effort, there are positive signs of improvement in these communities in terms of reconciliation, empowerment and inclusion.
– Valentina Ornelas
What are the symptoms of PTSD? In general, posttraumatic stress disorder can be seen as an overwhelming of the body's normal psychological defenses against stress. Thus, after the trauma, there is abnormal function (dysfunction) of the normal defense systems, which results in certain symptoms. The symptoms are produced in three different ways:
- Re-experiencing the trauma
- Persistent avoidance
- Increased arousal
First, symptoms can be produced by re-experiencing the trauma, whereby the individual has distressing recollections of the event. For example, the person may relive the experience as terrible dreams or nightmares or as daytime flashbacks of the event. Furthermore, external cues in the environment may remind the patient of the event. As a result, the psychological distress of the exposure to trauma is reactivated (brought back) by internal thoughts, memories, and even fantasies. Patients can also experience physical reactions to stress, such as sweating and rapid heart rate. (These reactions are similar to the "fight or flight" responses to emergencies.) The patient's posttraumatic symptoms can be identical to those symptoms experienced when the actual trauma was occurring.
The second way that symptoms are produced is by persistent avoidance. The avoidance refers to the person's efforts to avoid trauma-related thoughts or feelings and activities or situations that may trigger memories of the trauma. This so-called psychogenic (emotionally caused) amnesia (loss of memory) for the event can lead to a variety of reactions. For example, the patient may develop a diminished interest in activities that used to give pleasure, detachment from other people, a restricted range of feelings, and a sad affect that leads to the view that the future will be shortened.
The third way that symptoms are produced is by an increased state of arousal of the affected person. These arousal symptoms include sleep disturbances, irritability, outbursts of anger, difficulty concentrating, increased vigilance, and an exaggerated startle response when shocked.
4-H County Events gives youth ages 8 and up the opportunity and experience of preparing and delivering a 4-H related speech or presentation. Youth can exhibit their knowledge by presenting on a topic related to their 4-H project through public speaking, demonstrations, illustrated talks and share the fun (talent). County Events allows youth to:
- Develop skills in gathering, preparing and presenting educational information
- Gain confidence in public speaking
- Exhibit 4-H project knowledge and skills
Explore the different kinds of County Events presentations:
Demonstrations: Also thought of as "show and tell," a demonstration puts words into action and shows "how to" do something, including the steps involved and how to use the appropriate supplies. You can use visual aids to enhance your demonstration. You should actually "do" something and have a finished product to show the audience at the end.
Illustrated Talk: Tells "how to" do something by illustrating a process. You should teach the audience how to do something through the use of posters, graphs, charts, models, equipment, and PowerPoint presentations. No finished product is required.
Public Speaking: A prepared speech that "tells about" something and shares what you have done or learned through the project. During a public speech, you teach, entertain or inform your audience about a topic.
Share the Fun: Share your talent with your audience. Talents include instrumental, vocal, dance, dramatic and novelty acts. Examples include musical instruments, singing, dance, skits, stunts, monologue, impersonation, etc.
Unlocking the Power of Speech and Language: A Guide to Your Child's Health and Development
Parents all want the best for their children, and one crucial aspect of their growth and development is speech and language. Communication skills play a pivotal role in shaping their overall health and development. In this evidence-based blog, we will explore the importance of speech and language in your child's life, the milestones to watch for, red flags to be aware of, and practical tips to support their linguistic journey.
Why Speech and Language Matter
- Cognitive Development: Language is not just a tool for communication; it is the cornerstone of cognitive development. A study by Hart and Risley (1995) found that children exposed to rich language environments during their early years had significantly higher IQ scores and better academic performance later in life.
- Social Interaction: Effective communication is essential for building secure and deep relationships. Children who develop good language skills engage confidently with peers, form friendships, and interact successfully with adults.
- Emotional Expression: Language enables children to express their emotions and needs, reducing frustration and promoting emotional well-being.
- Academic Success: Strong language skills are the foundation for academic achievement. Children with advanced vocabulary and language comprehension find learning to read and write easier.
Language Development Milestones
Understanding typical speech and language milestones can help parents monitor their child's development and spot potential delays. It is important to remember that children develop at different rates, but general guidelines include:
- 6-12 Months: Babbling, imitating sounds, recognising their name, and responding to simple commands.
- 1-2 Years: Saying single words, following simple instructions, and beginning to use basic pronouns (e.g., "me," "mine").
- 2-3 Years: Combining words to form short sentences, using plurals and verbs, and asking simple questions (e.g., "What's that?").
- 3-4 Years: Engaging in conversations, using more complex sentences, and understanding basic concepts of time and space.
- 4-5 Years: Telling stories, using future tense, and understanding more abstract language concepts.
Red Flags for Speech and Language Delay
While children develop at different rates, certain red flags may indicate a potential speech or language delay. Consider seeking professional evaluation if your child:
- Does not babble or imitate sounds by 12 months.
- Speaks no words by 18 months.
- Does not combine words into two-word sentences by 2 years.
- Demonstrates difficulty understanding simple commands.
- Experiences persistent stuttering beyond 5 years.
Supporting Your Child's Speech and Language Development
As a parent, you play a vital role in fostering your child's language skills. Here are some evidence-based strategies to support their speech and language development:
- Talk and Read: Engage in frequent conversations with your child and read books together. The more words they hear, the richer their vocabulary will become.
- Active Listening: Show genuine interest in what your child is saying and encourage them to express themselves. Active listening promotes their confidence and self-esteem.
- Limit Screen Time: Excessive screen time can hinder language development. Ensure a healthy balance between screen activities and interactive play.
- Play and Imitate: Encourage imaginative play and participate in their games.
Pretend play helps build language skills and creativity.
- Seek Professional Advice: If you have concerns about your child's speech and language development, don't hesitate to consult a speech-language therapist or paediatrician for assessment and guidance.
Speech and language development are vital for a child's health and well-being. By understanding the importance of language skills, recognising developmental milestones, and actively supporting your child's growth, you are unlocking the potential for a bright future. Remember, every child's linguistic journey is unique. With your love, care, and guidance, they can reach new heights in their communication abilities. If you have any questions about this blog, please email me or contact us.
Students will explore shapes and pathways to create movement inspired by different birds that inhabit the Great Salt Lake. Demonstrate the shape of different bird beaks and how they move through space with creative movement.
Objectives: Create patterns, recall movement sequences, use images as inspiration for movement, and describe the shape and function of different bird beaks.
Materials: Open space, drum, photographs and videos of different birds that live at the Great Salt Lake.
Establish class goals and expectations. Ask students what they picture in their minds when they think of a bird's beak. Spread the students out into the space and have them create still shapes in response to the following words: curved, straight, wide, narrow, hooked and scooped. Try using the whole body and then just individual body parts to make the shapes. Next, have them try moving shapes instead of still shapes, using similar words: round, straight, small, wide, hook and scoop. Have students try these movements in different pathways (straight, curved and zig-zag).
Discuss the Great Salt Lake and how it is an important nesting and feeding location for many different types of birds. Show some photos of birds that live in the Great Salt Lake region. Examine the different beaks of the birds. What do you notice? Have each student select a bird and create a shape, a movement and a pathway to show a movement description of the bird's beak. Perform the movement creations and link them together to form a bird beak ballet! Use music for further inspiration, or the sounds of birds.
Extension to the Lesson
This same idea could be done with other animals that live on or near the Great Salt Lake, examining different ways of swimming, flying, walking, etc.
Great Salt Lake Bird Refuge
The distinctive characteristic of a topographic map is that the shape of the Earth's surface is shown by contour lines. Contours are imaginary lines that join points of equal elevation on the surface of the land above or below a reference surface, such as mean sea level. Contours make it possible to measure the height of mountains, depths of the ocean bottom, and steepness of slopes. A topographic map shows more than contours. The map also includes symbols that represent such features as streets, buildings, streams, and vegetation. This 1:24,000 scale topographic map, also known as a 7.5 minute topographic, covers approximately 7 by 9 miles. The contours and elevations on this scale map are shown in feet. Media Type: Paper Map Location: Tooele County
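As a worked illustration of what the 1:24,000 scale means, one map unit represents 24,000 of the same units on the ground, so one map inch represents 2,000 feet. The short Python sketch below applies that ratio and also estimates the ground coverage of a 7.5-minute quadrangle; the 69-miles-per-degree figure and the example latitude for Tooele County are rough assumed values for illustration only:

import math

SCALE = 24_000            # 1:24,000 - one map unit equals 24,000 ground units
MILES_PER_DEG_LAT = 69.0  # approximate miles per degree of latitude

def ground_feet(map_inches):
    """Ground distance in feet for a distance measured on the map in inches."""
    return map_inches * SCALE / 12  # 12 inches per foot

def quad_size_miles(latitude_deg, minutes=7.5):
    """Approximate north-south and east-west extent of a quadrangle, in miles."""
    deg = minutes / 60
    north_south = deg * MILES_PER_DEG_LAT
    east_west = deg * MILES_PER_DEG_LAT * math.cos(math.radians(latitude_deg))
    return north_south, east_west

print(ground_feet(1))         # 2000.0 - one map inch is 2,000 feet on the ground
print(quad_size_miles(40.5))  # roughly (8.6, 6.6), matching "approximately 7 by 9 miles"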
Learning to use a scientific calculator can seem daunting, but if a project or exam is looming ahead, one can't run away from it forever. Even if you are great at solving math problems and know all the formulas, using scientific calculators does not come naturally to everyone. Besides, one cannot expect to solve complex problems within a limited time using just pen and paper. For instance, if you are preparing for your SATs, you'll need a high-quality calculator for the SAT exam, as not all calculators are SAT approved. So, learning how to use scientific calculators is a must if you're preparing for such a test. A scientific calculator makes life easier for professionals as well as students, as it can help get calculations done faster. And learning the various scientific functions on a calculator can be incredibly beneficial and time-saving. In this brief guide, we'll help you learn how to use scientific calculators.
What Is A Scientific Calculator?
Table of Contents
- What Is A Scientific Calculator?
- How To Use A Scientific Calculator?
- 1. Learn The Basic Functions
- 2. Initialize The Calculator
- 3. Regular Calculations
- 4. Decimals And Fractions
- 5. Powers
- 6. Correcting Errors
- 7. Negative Numbers
- 8. Longer Calculations
- 9. Reusing Previous Results
- 10. Calculator Memory
- 11. Scientific Notation
- 12. Roots/Square Root
- 13. Trigonometric Ratios
- 14. Logarithms
- How To Use Scientific Calculator Frequently Asked Questions?
- How To Use Scientific Calculator Final Words
Before we begin with the basic functions and advanced applications, let's take a look at what qualifies as a scientific calculator. The first thing to understand is that scientific calculators are distinctly different from other calculators, even if they may appear similar. Broadly speaking, there are three categories – basic, scientific, and business calculators. Basic calculators are used for regular multiplication, addition, subtraction, and division. But it is impossible to work on detailed problems grounded in trigonometry, physics, or engineering with a regular calculator, and a business calculator is equally unable to solve such complicated math problems. Scientific calculators are designed to work with logs, natural logs, exponents, and trig functions, along with memory; these functions let you enter calculations in scientific notation and display the results quickly on the calculator screen. Business calculators, by contrast, have buttons for interest rates, which are of no help with formulas that have a geometry component.
How To Use A Scientific Calculator?
If you wish to make the most of your calculator and use it often for daily arithmetic or complex problems, read through the entire guide carefully. Remember that calculators vary by brand, so the exact functions or keys may differ, but we have used a popular model, the Casio fx-83ES. This calculator is readily available in stores, but if you cannot access this particular model, don't fret, for the basic operations will remain similar.
1. Learn The Basic Functions
First up, let's get the common functions clear with several examples of keys, their functions, and their usage in a problem. So, take out your brand new (or existing) scientific calculator and take a close look at the keys, calculator screen, and keypad.
Typically, the device should have the following elements/keys:
- Mode key
- On key
- Cursor control button
- Function keys
- Delete key
- All clear key
- Basic operation keys (multiplication/addition)
- Equals key
- Last answer key
- Number keys
- Alpha key
- Shift key
Basic Key Functions
Most of these keys will be evident once you see them, and a few have their names mentioned too. For instance, the delete key is shown as "DEL" in red, and the all-clear key is written as "AC." The "On" key is used to switch on the power on all calculators and is found in the top right corner of most devices. On the lower half of the keypad, you have the number keys and the basic operation keys for addition, subtraction, multiplication, and division. The "=" sign is called the equals key and gives you the final result of the calculation.
Note that some keys often have more than one function. The primary function is written on the key in white, while the secondary use is sometimes mentioned in yellow above the key. To access these alternate functions, press the "SHIFT" key, and the "S" symbol will appear in the top left corner of the display. This sign appears only to indicate that the key has been pressed and goes away when you press something else. Sometimes keys have three functions, with the third written in red above the key. The number values of previous calculations are stored in the memory and can be retrieved by pressing the "ALPHA" button. When "A" appears on the screen, it confirms the button has been pressed.
Screen Menus And Key Sequences
Some options may not have a dedicated button on the pad, so they can be viewed on the calculator screen. You can select a function from the menu options on the display by pressing the corresponding number key. A key sequence is the process of combining particular keys to get one function. To make things easy, we'll stick to mentioning the key sequence and explaining it, while placing the sequence name in brackets. For instance, if you press "SHIFT" and then press "AC," this is the key sequence for turning the device off, so we'll mention the purpose (OFF) in brackets like this. Essentially, there is no button on the keypad called "OFF," but this method will help you remember the name of the function for each key sequence.
2. Initialize The Calculator
Start with the default settings of the calculator and initialize by pressing the "ON" key and entering two key sequences:
- Sequence 1 – "SHIFT" 9 (CLR) 1 (setup) = "AC"
- Sequence 2 – "SHIFT" "MODE" (SETUP) 8 (norm) 2
In sequence 1, CLR stands for clear, which is the second function of the numerical key 9. Basically, you need to do this to clear any previous settings that may be stored in the calculator. After pressing the correct keys, you can start using the device in math mode. The word "math" will show up on the right side of the screen to confirm this.
3. Regular Calculations
We're sure everyone is familiar with this step of using calculators, because it is as basic as it gets. All the numeric keys you type are displayed clearly on the screen, and pressing the "=" button gives the result on the right side of the screen. If the calculation is particularly long and all the numbers do not fit on the screen, scrolling symbols will appear, and you can scroll right or left as required. Locate the word "REPLAY" on the device and use the keys on its left and right to scroll. To avoid confusion, break down the calculation into smaller parts to get accurate results.
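As a quick aside, the 'break it into smaller parts' advice is also easy to practice outside the calculator. The following minimal Python sketch (offered purely as a cross-checking aid, not as anything the calculator itself does) evaluates a longer expression in parts and then in one go, so you can confirm the two agree:

# Evaluate a longer calculation in smaller parts, as suggested above.
part1 = 48 / 6
part2 = 7 * 3
total = part1 + part2
print(part1, part2, total)  # 8.0 21 29.0

# Entered in one go, the same expression should give the same result.
print(48 / 6 + 7 * 3)       # 29.0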
4. Decimals And Fractions
When a number in the calculation is not whole, it will appear as a fraction on the calculator display. You can see something like ¼ or ¾ on the screen when this occurs, but you can convert it to decimal form for easy understanding. Press "SHIFT" and "=" instead of only the equals key to get the answer in decimal form. It is recommended to keep the scientific calculator in math mode for navigating through the calculations with ease.
If you need to enter a fraction, there is a dedicated button that looks like one empty square on top of another with a line dividing them. This key is located on the left side of the function key area on your calculator. The two squares denote the two numbers you wish to fill in as a fraction, such as ⅕. On the first press of the button, enter the numerator (1), then move the cursor downwards to fill in the denominator (5). To continue with the rest of the calculation, get out of the denominator box by pressing the right-side cursor.
Similarly, it is possible to use mixed numbers like 2 ¾ by using a specific template/key sequence. Pressing "SHIFT" followed by the fraction key gives three boxes to fill, the first being the whole number and the other two being the numerator and denominator, in that order.
5. Powers
Now, let's move on to using powers in calculations on a scientific calculator. The device will have a dedicated button for the smaller powers, square and cube, found in the function key area. Just as one would scribble on paper, here too the main number comes first, followed by the power. For example, if you need to calculate 3², type the number 3 first and then press the x² key on the calculator. For powers higher than 3, make use of the general power key to enter any power. So, if you wish to enter 2⁶, type 2, followed by the general power key and the number 6. After pressing the power key (the one that looks like an x followed by a raised empty box), there should be a flashing cursor that looks like this: "|". This indicates that you are applying the power in the correct place.
6. Correcting Errors
Typing in the numbers quickly can lead to incorrect data entry, which is not a problem, since editing is simple. To correct any unwanted numbers in the calculator, the sideways scrolling buttons will come in handy. The right and left scrolling keys, along with the flashing cursor that looks like this "|", allow for quick edits without redoing the entire sum. Insert new numbers when the cursor is at the appropriate place and delete unwanted items using the "DEL" delete button. Conveniently, this works even after the "=" sign is pressed, making it possible to swiftly go back and rectify the error. In some cases, it is best to clear the display and start afresh by pressing the "AC" key.
If the calculation does not register with the device due to an erroneous entry, it will inform the user by flashing "Syntax error." The error pop-up also gives two options – to clear the sum or scroll through the calculation. Similarly, if you face a "Math error" or "Stack error," it means the scientific calculator cannot process the calculation, and it is best to redo the sum.
7. Negative Numbers
You will already know that the minus sign is used in two contexts in mathematics – as a symbol of subtraction or to mark a negative number. If you look closely at the scientific calculator, you'll find two distinct minus signs, each for a different purpose.
- Minus (–) is used to indicate subtraction between two numbers, as in 8 – 5 = 3
- Bracketed minus (-) indicates a negative number like -2 or (-)2
Some calculators use the same key for both purposes interchangeably, so you can check the manual that comes in the packaging to verify this. On this model, however, if you try to use the bracketed minus (-) for subtraction, the device will flag it as a "Syntax error."
8. Longer Calculations
Depending on your area of work or study, you may be required to solve complex calculations that need specific functions on the scientific calculator. Let us assume you need to figure out the volume V of matter in a metal rod that is L meters long, where the distance around the midpoint is D. Here, you are faced with this formula:
V = L × D² / (4π)
If the metal rod is 2 m in length and the distance around the center is 90 cm, then you need to find
V = 2 × 0.90² / (4π)
As you can see from this example, the value of π (pi) is required to find the volume of the metal material. You could type out an approximate value of pi each time, but that quickly becomes inconvenient over multiple complex calculations. Instead, use the shortcut "SHIFT" "×10^x" (π) to quickly enter the value of pi.
9. Reusing Previous Results
Calculations on paper will have you using multiple pages to note down intermediate results, and you may be doing the same while using calculators. Keep the paper away, because scientific calculators retain the result of the previous calculation. So, instead of making errors by writing and retyping each time, you can use the "Ans" key on the device to retrieve the latest result of your calculations.
10. Calculator Memory
Pressing the "Ans" key will only reveal the most recent calculation result, but there is more to a scientific calculator than that. The memory function of calculators allows users to divide a calculation into parts for ease. This way, it is possible to calculate the values of various parts of an expression without having to note down the results each time. The thing to note is that calculators have different types of memories, and the basic one (M) involves only one key, "M+." This is simple enough to use, but you must make sure to clear the memory of previous information, since the device might still retain it. Press the key sequence "SHIFT" 9 (CLR) 2 (memory) = "AC". After this command, all the previous calculator memories will be wiped out.
Here's how you can store an answer in the M memory of the calculator – press "SHIFT," "RCL," and M+. In this sequence, the second function of the "RCL" (recall) button is used to store the information. If you need to check the values stored in the memory, press "RCL" and M+ to display them on the screen.
11. Scientific Notation
Some numbers are so huge that the calculator automatically shows them using scientific notation. However, smaller numbers can also be viewed using scientific notation, depending on which mode you are currently working in. The two modes are "Norm 1" and "Norm 2," where Norm is short for normal. Norm 1 mode will use scientific notation for any number less than 0.01 (and greater than -0.01). On the other hand, Norm 2 does the same for any number less than 0.000000001.
12. Roots/Square Root
We have already explained how the power key allows users to apply powers greater than two and three. Similarly, there are dedicated keys on the calculator for finding the root of any number.
The button visible on the keypad directly gives the square root of the required number, and the same button can be used to find cube roots too. If higher roots are required, you can use the second function of the general power key (the one that looks like an x with a raised box). If you've got the hang of using fractions on a scientific calculator, this process will feel easy too: it has the same left and right scrolling system that allows users to place the correct numbers in the right spots.
13. Trigonometric Ratios
One can measure angles in degrees and find the values of trigonometric ratios on the calculator. To work with degrees, press the key sequence "SHIFT" "MODE" (SETUP) followed by 3 (degree). After this, a "D" indicator will be visible on the display, and the device is ready to work on trigonometric ratios. Press the "sin," "cos," and "tan" keys as and when required. If the formula is simple, pressing the keys followed by = will be enough to get the answer. However, if these ratios are meant to be part of a larger calculation, use brackets around them. Alternatively, if your calculation is based on radians, it is possible to change the setting from degrees to radians on the device: press "SHIFT" "MODE" (SETUP) and 4.
14. Logarithms
The "log" button on the calculator is used to calculate base-10 logarithms, and the key sequence goes as follows: press "log" 100 = to get log(100), which is 2. Note that the sequence will require closing brackets if it is part of a larger calculation.
How To Use Scientific Calculator Frequently Asked Questions?
Can the same key sequence provide varying answers on different calculators?
Yes, and that is why it is crucial to know the functioning of the scientific calculator you use. Logically, one may think that a mathematical expression should give the same answer across all devices, but this is not true, because calculators can be built to process key sequences differently. For instance, if you need to calculate 3 + 5 x 4, you may know that, according to the correct order of operations, multiplication comes before addition, so the correct answer is 23. But a simpler calculator may not apply this rule: it will add 3 and 5 first and multiply by 4 later, giving you the incorrect answer 32.
How To Use Scientific Calculator Final Words
While these steps should be easy enough for anyone to start using scientific calculators, a user's manual can be a lifesaver. As we mentioned previously, the keypad of calculators may vary depending on the brand, and if you can find the manual, it will help immensely. In any case, we hope this guide proved helpful to those of you trying out a scientific calculator for the first time.
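To tie the sections above together, here is a short Python sketch that reproduces the guide's worked examples: the fraction-to-decimal switch, powers and roots, the rod-volume formula with pi, degree-based trigonometry, base-10 logarithms, and the precedence pitfall from the FAQ. It is offered purely as a cross-checking aid; note that Python's math.sin expects radians, so degrees are converted explicitly:

import math
from fractions import Fraction

# Fractions vs decimals (section 4): exact until you ask for a decimal.
x = Fraction(1, 5)
print(x, float(x))                  # 1/5 0.2

# Powers and roots (sections 5 and 12).
print(3 ** 2, 2 ** 6)               # 9 64
print(math.sqrt(49), 8 ** (1 / 3))  # 7.0 and ~2.0 (cube root as a fractional power)

# Longer calculation with pi (section 8): V = L x D^2 / (4 pi).
rod_length = 2.0        # meters
distance_around = 0.90  # meters
volume = rod_length * distance_around ** 2 / (4 * math.pi)
print(round(volume, 4))             # ~0.1289 cubic meters

# Reusing the previous result (section 9), like the calculator's "Ans" key.
ans = volume
print(round(ans * 2, 4))            # e.g. the volume of two identical rods

# Trigonometry in degrees (section 13) and base-10 logs (section 14).
print(round(math.sin(math.radians(30)), 4))  # 0.5
print(math.log10(100))                       # 2.0

# The FAQ's precedence pitfall: multiplication binds more tightly than addition.
print(3 + 5 * 4, (3 + 5) * 4)       # 23 32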
Puberty | Science homework help
This assignment addresses the topic of puberty and explores how children develop physically, psychosocially, and cognitively during this time. Often, parents/caregivers are not comfortable talking with children about the changes they experience during puberty and, consequently, leave children to figure things out on their own. As a parent/guardian, what do you think would be important to tell a child about puberty? Describe at least one thing you would explain from each of the following categories:
- physical changes
- psychosocial changes
- cognitive changes
Building is a fascinating and fun activity for children of all ages. When children build they are learning geometry, practicing spatial skills, and engaging their executive function. STEM (Science, Technology, Engineering and Math) education has become increasingly necessary to thrive in today’s world and therefore a national priority. Yet, a mistaken notion that complex thinking skills are beyond the ability of young children means that infants, toddlers & preschoolers aren’t always exposed to STEM concepts or activities during their early education. By providing libraries with the materials necessary to easily create play-based learning opportunities that focus on age appropriate STEM concepts, the Colorado State Library hopes to address that gap. This kit will allow library staff to present play-based, interactive programs on measurement for young children, ages 3 to 7. - 1 set of 6 Squishy Shapes - 4 Fort Building Kits - 4 bed sheets - 20 Flashlights - 1 copy of each of the following books: - Little Red Fort by Brenda Maier - It’s Fort Building Time by Megan Wagner Lloyd - Builders and Breakers by Steve Light
Media - Day 2 grades 7+ learning activities
These activities will help youth build a positive view of themselves and recognize their strengths while thinking about media. They can be done alone or with friends over a video chat such as Skype, Zoom, Facetime, etc.
Riddle: Which weighs more, a pound of feathers or a pound of bricks? See the mindfulness activity answer at the end of the lesson plan.
Mindfulness activity: Think about your 'inside weather'. If you had inside weather, what would it be right now? Would it be sunny? Cloudy? Rainy? Stormy?
Supplies:
- markers or pencil crayons
Make your own magazine about any topics you want. Try these topics or come up with your own:
- Top 6 things to do at home during quarantine
- Write a short story/comic hero story
- Sports/Entertainment theme
- Things I miss most…
- My favorite video games
To make your magazine:
- Fold two sheets of paper in half and place one inside the other. Now you have eight pages to decorate (front and back).
- Fill in your blank booklet and turn it into a fun and creative magazine.
Once you have completed your magazine, share it with your family or friends.
This activity will test whether or not you have the detective skills to see whether news is real or fake. Fake news is created to purposely mislead or deceive readers. Fake news is often created to influence social views or for political motives.
How to spot fake news:
- Look for weird URLs - If the ending is something unfamiliar like .lo or .com.co, the site is probably not authentic.
- Look at the text - If there are grammatical errors, incorrect dates, or bold claims with no source, the source is probably not legitimate.
- Dig deeper - Check the 'about us' section of the website and research the source online. If you can't find that information, it's probably not legitimate.
- Cross check - Use fact checking sites to confirm information and see if other credible sites are sharing the same info.
- Reverse image search - If the same image appears in unrelated stories, you may have reason to be suspicious.
Use the tips above to research this news article and answer these questions:
- What is the source?
- Who wrote it?
- When was it written?
- What's the background?
- Is it meant to be a joke?
- Is it based on rumours?
- Why are you interested in the story?
- What other sources are reporting on this story?
Supplies:
- markers or pencil crayons
Interview a friend or family member you want to get to know. You can Facetime or Zoom a friend to complete the interview.
- Start by asking questions like "what is your favourite sport or TV show?" or "If you could go anywhere in the world, where would you go and why?"
- Practice active listening and make sure the person you are interviewing knows you are interested in what they are saying.
- Next, try asking a few more personal questions, like "what is something you've always wanted to try, but haven't had the courage to do it yet?" or "if you had unlimited money, how would you spend your time?"
Use the answers to create a biography of each person (you can create a magazine for them like in the previous activity). Share it with your family or friend to see how correct your information is.
- How does it feel to use media to share your interests?
- When do you think sharing information from the internet, magazines or newspapers is helpful? When is it not helpful?
- How can you use media to communicate positively?
Answer: Neither. They both weigh one pound.
How Old is Old?
You may never have wondered where the oldest trees in the world stand; the question had not occurred to me until I visited Israel recently. In Gethsemane, I discovered olive trees that have been around for more than 800 years. I have, however, since learned of trees in Nevada which, at nearly 5,000 years of age, are the oldest on the planet.
The ancient Great Basin bristlecone pines have twisted trunks that resemble thick ropes, shaped by centuries of gusting wind and rain. They tend to thrive in this area because little else does. At 3,400 metres (11,000 feet), the altitude is devoid of grass and brush, and there are virtually no pests. In other words, there is no competition for these trees to survive. There are no people to start wildfires, and no nearby trees to spread pathogens. Standing solitary year after year, these ancient wonders are left alone to simply exist. They store water in needles that can live for decades and pack on the tiniest bit of mass at a time. The wood grows so slowly that it becomes too dense for beetles or disease to penetrate. The bristlecone pines on Mount Washington, in Nevada's Great Basin National Park, have become so iconic that their image is stamped on the back of an American quarter.
One such tree, named Prometheus, was nearly 4,900 years old when it was accidentally destroyed in 1964. Donald Rusk Currey, an American professor of geography, got his tree corer stuck in Prometheus, and a park ranger tried to help him get it out by cutting the tree down. They did a ring count and found it had sprouted over 4,800 years earlier… I wonder how he felt. The oldest known living bristlecone, a California tree named Methuselah, has had its location kept secret, as tourism would endanger the tree, not to mention the other bristlecone pines nearby.
Patagonian cypresses, also known as alerces, native to Chile and Argentina, have long been recognized as the world's second longest-lived tree species. In the early 1990s, by counting tree rings on a cut stump, a Patagonian cypress was found to be more than 3,600 years old.
It is hard to imagine a tree, or anything for that matter, being that old. Methuselah, the bristlecone pine, would have sprouted before the pyramids were erected at Giza, almost three thousand years before the birth of Christ. Wow, and I thought I was getting old.
Jonathan van Bilsen is a television host, award-winning photographer, published author, columnist and keynote speaker. Watch his show, 'Jonathan van Bilsen's photosNtravel', on RogersTV, the Standard Website or YouTube.
Coffee beans typically lose 14–20% of their mass during roasting. Darker roasts lose relatively more mass; in fact, the amount of mass lost correlates linearly with the roast colour. Fast, high-temperature roasts in fluid-bed roasters lose less mass than slower roasts for a given roast colour, partly because they lose less water in the short time they spend in the roaster. Over longer periods of time, however, roasting at high temperatures leads to a higher maximum mass loss; thus the roasting temperature has more of an effect on mass loss than the amount of time the beans spend in the roaster (Schenker 2000).
Figure (adapted from Schenker 2000): Mass loss depends on the final roast colour and the roasting temperature. The mass lost correlates linearly with roast colour (left), but fast, high-temperature roasts to the equivalent roast colour lose less mass. The maximum mass lost during roasting (right) is much higher in fast, high-temperature roasts.
Most of the mass lost represents the bean's initial moisture content, which escapes as steam. The remainder is made up of carbon dioxide, water that was created during reactions during roasting, and small amounts of volatile compounds. In practical measurements in the roastery rather than in a lab, the mass loss will also include any chaff, bean fragments, dust, or small stones that escape the roaster. Because moisture makes up the biggest part of the lost mass, the initial moisture content has a big effect on the mass lost during roasting. Researchers often refer to 'organic roast loss' or 'dry mass loss', meaning the amount of mass lost after excluding water. The dry mass loss is typically around 4–6% (Fernandes 2019), but for very dark roasts it could be as high as 12% (Clarke 1987). Some studies have found that the rate of mass loss accelerates at the end of the roast, perhaps because the chemical reactions that break down the beans' structure begin at high temperatures.
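As a worked illustration of the 'dry mass loss' idea, the organic loss can be back-calculated from batch weights and moisture readings. The following is a minimal Python sketch under stated assumptions: the function and variable names are invented for the example, and the numbers are illustrative rather than taken from Schenker, Fernandes, or Clarke:

def dry_mass_loss(m_green, w_green, m_roasted, w_roasted):
    """Fraction of the beans' dry matter lost during roasting.

    m_green / m_roasted: batch mass before and after roasting (same units)
    w_green / w_roasted: moisture fractions, e.g. 0.11 for 11% water
    """
    dry_before = m_green * (1 - w_green)
    dry_after = m_roasted * (1 - w_roasted)
    return (dry_before - dry_after) / dry_before

# Example: 100 kg of green coffee at 11% moisture roasts to 86 kg at 2% moisture.
total_loss = (100 - 86) / 100                        # 14% total mass loss
organic_loss = dry_mass_loss(100, 0.11, 86, 0.02)
print(round(total_loss, 3), round(organic_loss, 3))  # 0.14 and ~0.053

With these invented numbers, the total loss (14%) sits in the typical 14–20% range while the dry mass loss (~5.3%) falls in the typical 4–6% range quoted above.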
Teachers, parents and tutors will often tell students to revise – perhaps even when, how often and how long for. The 'how' of revision, though, is often based on what students think good revision looks like, which isn't always backed by evidence of effectiveness. Parents can support their children in revising by ensuring that they approach the content in small, manageable chunks, which allows the student time to practice and build skills and knowledge gradually. Remember that revision is a marathon and not a sprint! Cramming sessions before an assessment will just leave students tired the next day. In addition, you can help by ensuring your child has a purposeful place to revise, free from distractions such as social media. Below you will find resources that teachers have put together to ensure that what the students are revising is high quality and will help them develop knowledge and skills based upon gaps that their teachers have identified. If you would like to discuss revision further, please contact your child's form tutor.
10 Easy Tips that Work: Expanding Choices for Middle School
As kids get into middle school, they have more choices. They are spending more time away from their parents to be with their peers. Their choices also become bigger, with more impact on their lives. All of this happens at the same time that their peers' attitudes take on a larger role in their lives. How can we help? Read along for some tips on decision-making that work for middle school students, especially those who have emotional disorders.
Be a Role Model
The good news is that parents, teachers, and trusted adults still have an important role to play in the lives of pre-teens and teenagers. We can be their sounding boards for decision-making, pointing out possible consequences of decisions that they may not have thought about. And we are still role models for them, even if they don't seem to be paying attention! We can offer advice, but older students do not welcome being told what to do. It is important to offer advice about the options students can choose, delineating both the positive and negative possible consequences related to each choice. By discussing these factors and letting our youngsters make the decision, we are empowering them on their paths to becoming independent adults.
10 Tips: Working with Difficult Students
- Avoid unnecessary confrontations over behaviors. Stay calm and point out the negative consequences of the behaviors that the student is choosing to exhibit. (These hints apply to behaviors that are not dangerous or injurious.)
- Repeat the request or the direction and give the student time and space to make a different choice.
- If the student chooses to continue the inappropriate behavior, follow up with the appropriate negative consequence.
- If the student chooses to make a more appropriate choice, continue from wherever you were, unobtrusively helping them to catch up. It also helps to find a quiet way to thank the student for making a better choice.
- Consider what kind of factors contributed to the student's inappropriate behaviors. Was it due to internal factors or a reflection of the difficulty level of the work presented? Often students will act out rather than admit that something is difficult or that they need help.
- Have choices available for the students' work that day. When students are having a difficult day, they may respond better to addressing their goals via a game or a video clip rather than a worksheet.
- Communicate that you are aware that something is wrong when your difficult student walks in the door looking upset. Ask the student if they would like to talk about it or get the work done.
- Be willing to barter on difficult days. Getting a smaller number of responses than you hoped for is a better use of a session than having the student lose it and not accomplish anything at all.
- Try to put a fun spin on some review work. Often students are willing to use a skill in a role-play situation. Have them "be" the SLP and give you the directions. Or you can engage in an online activity on days when they refuse to complete typical or challenging work.
- Spend some time getting ready for difficult days with ideas related to each of the goal areas on your IEPs.
Expanding Middle School Choices
Students with emotional disorders who are having problematic days don't need to deal with more pressure in their lives. But sometimes you can reach them on a difficult day with fun or humor! Keep a few fun backup games for general language skills on your shelves.
Pictured are some fun games to have around for those difficult days. These will let you review some goals on your student's IEP, which is an improvement on doing nothing at all. Best of all, it lets you observe your students' skill use in real-life situations.
YouTube clips usually appeal to middle school students and can be used for a variety of language skills as well. Putting the links to these websites in one document can be helpful for finding them quickly on days when attention spans and tempers are short. If needed, you can bargain by getting students to watch your video before letting them show their favorite school-appropriate one to you. See if they can summarize what the video is about and explain why they like it. Functional use of language! This is also a great way to expand your list.
Have a list of websites for making your own stories or comic strips ready to go at any time. These sites help you work on a variety of language skills. They also provide your students with the option of visually 'journaling' what they are upset about or getting their mind off their problem entirely.
Expand Language Skills for Problem Solving
It helps to work on problem-solving skills when your students are able to settle down and get something accomplished. Practicing the language skills for problem-solving on days when students can learn makes using that language easier on difficult days. Most kids would much rather figure out someone else's problems than discuss their own in a group. The trick is to find materials that they can relate to in their lives. And it can be tough to find appropriate topics with the language levels you need to work on. That is why I made my own! Look at the buyers' feedback on the Social Inferences Bundle to see how engaged students are! Or check out Language Skills for Conflict Resolution, as pictured above.
I know that it can be confusing or scary to purchase materials online. Always be sure to thoroughly read the description and look at the previews to help in the decision. And, to make it even easier, try out the related freebies. Then be sure to leave kind feedback as a thank you!
California is known for its contrasting weather patterns. These distinct patterns are closely tied to the climate variability that has shaped California's water supply for decades. California has a long coastline and many streams, rivers, and lakes, which have played a big role in shaping the natural landscape as well as the overall climate. Chapter two of our textbook, 'The Natural Setting,' discusses the shortage of water and the history of drought in California, and the impact these have had on the hydraulic system. The Elusive Eden goes into great detail regarding California's prolonged water shortages, their impact on the natural surroundings, and the lack of rainfall. It also demonstrates the complex drought patterns in California and the significant role they play in water quality.
California has more contrasting climates and landscapes than anywhere else in the United States, and as a result it experiences droughts almost every year. A drought generally occurs when there has been a decline in available water or a decrease in rainfall for a significant amount of time, which can last for months or even years. California has often had to deal with a tremendous number of water management issues due to its distinctive climate and lack of rainfall, and this long-term challenge will continue to persist in the state. A combination of water shortages and a growing population has only added to these complications, making it difficult for California to create laws or policies that would combat these long-term issues.
The hydraulic era was established to create a system that balances changes in population and the overall economy while focusing on water management issues in California. With California's population growing rapidly, flood and water policies needed to be changed in order to cover a larger area. Water companies have faced a number of challenges trying to create workable water policies due to the state's changing ecosystem, and they constantly have to come up with new strategies to adapt to these changes. One of the first conflicts of the hydraulic era was the ever-changing climate in California and the effort to come up with solutions to an ongoing water crisis. Growing cities all over California fought for new water policies to suit their ever-growing populations, and the state had trouble dealing with new environmental statutes. Water companies aimed to transform water management policies to suit the environment. According to 'Floods, Droughts, and Lawsuits: A Brief History of California Water Policy,' "In the Water Commission Act of 1913, however, it endeavored to devise a comprehensive system for regulating water rights. The act created a State Water Commission with the power to issue permits and licenses to govern the exercise of water rights" (Floods, Droughts, and Lawsuits, page 37). This quote shows what policies California used in order to resolve conflicts during the hydraulic era. The Water Commission Act of 1913 was one of the first water policy acts in California and paved the way for the regulation of water rights and water usage. Beyond laws addressing water shortages, the courts also had to protect aquatic life and water quality in order to keep the state's water sources well accounted for and clean.
The amount of water use was an ongoing concern in court, as was how much water could be used during a shortage or a long-term drought.
The federal government took charge in the next step of the hydraulic era by turning to two of the state's largest water projects: the Boulder Canyon Project and the Central Valley Project, which helped transport water from the mountains to the cities and farms. The Boulder Canyon Project built a large dam that allowed the United States to control the Colorado River and use it as a main water source for cities and farms. This helped agriculture grow and provided more than enough water to California's crowded cities. However, California's large and growing population drew the Colorado River down quickly, killing fish and wildlife and leaving more of a puddle than a lush stream; the river could no longer keep up with the state's demands.
The Central Valley Project was influenced by farmers who needed a large amount of water to satisfy their crops. This water was taken from the Sierra Nevada, but across the vast acreage of the San Joaquin Valley, farmers relied solely on groundwater, which was detrimental to the valley's aquifers. At the time, California was in the middle of a large drought as well as the midst of the Great Depression, which had a significant impact on the state's economy and made it nearly impossible to fund the Central Valley Project. The purpose of the Central Valley Project was to control flooding, secure the water supply, and control where water was sent.
The PPIC report 'Managing California's Water: From Conflict to Reconciliation' discusses how the Central Valley has continued to expand, which has left farmers without surface irrigation relying on groundwater for their crops; because of this, groundwater has become limited. A significant number of conflicts have arisen from this issue. For example, new pumps and deeper drilling have led to declines in groundwater, and the management of California's groundwater has been a persistent issue for water managers. William Kahrl's Water and Power argues that the history of California in the twentieth century is the story of a state inventing itself with water; it discusses the important elements of groundwater, the broader issues of water management in California, the ways the state has tried to adapt to water shortages, and what led to modern water management.
California's diverse range of climates has a significant impact on natural precipitation, which can swing from months-long drought to potential flooding. The climate is not the only cause of the lack of natural groundwater in California. The state's natural landscape ranges from mountains and deserts to the Great Valley and the coastal plains, all of which have a significant impact on water policies and water management. Due to this diverse landscape, the state relies heavily on groundwater to meet its water supply needs. California uses more groundwater than any other state in America, which is a concern, considering that this overuse causes the natural water table to drop until wells can no longer reach the water.
When too much groundwater is pumped, the land can subside, meaning there is no longer enough water below the surface to hold up the ground. Overuse of groundwater can also cause streams and lakes to dry up, killing animals and reducing natural water sources, and it can degrade natural water quality, contaminating the surrounding area and polluting drinking water throughout the state. The hydraulic system has had a huge impact on the state's water supply and has faced a large number of problems for decades. California has struggled to keep its water policies current and has often had trouble adapting them to the state's climate and large population. The hydraulic era was an era of conflict in California because water agencies struggled to develop strategies that could keep up with the state's high demands, its environmental statutes, and the overuse of its groundwater. The state's diverse landscape has repeatedly produced long-term droughts that reduce the water supply available within the state. Climate was not the only difficulty of the hydraulic era; creating water policies and keeping up with California's high demands created a significant number of problems of their own. California will continue to face challenges in dealing with the hydraulic system and will have to keep searching for solutions to these long-term issues.
- Bullough, William A., Mary Ann Irwin, Richard J. Orsi, and Richard B. Rice. "The Natural Setting." The Elusive Eden. 2012.
- Hanak, Ellen, Jay Lund, Ariel Dinar, Brian Gray, Richard Howitt, Jeffrey Mount, Peter Moyle, and Barton "Buzz" Thompson. Managing California's Water: From Conflict to Reconciliation. Public Policy Institute of California. 2011.
- Kahrl, William L. Water and Power. 1982.
- Null, Sarah, Eleanor Bartolomeo, Jay Lund, and Ellen Hanak. Managing California's Water: Insights from Interviews with Water Policy Experts. February 2011.
It's long been a mystery for astronomers: why aren't galaxies bigger? What regulates their rates of star formation and keeps them from becoming even more chock-full of stars than they already are? Now, using a worldwide network of radio telescopes, researchers have observed one of the processes on the short list of suspects: a supermassive black hole's jets are plowing huge amounts of potential star-stuff clear out of its galaxy. Astronomers have theorized that many galaxies should be more massive and have more stars than is actually the case. Scientists proposed two major mechanisms that would slow or halt the process of mass growth and star formation — violent stellar winds from bursts of star formation and pushback from the jets powered by the galaxy's central, supermassive black hole. "With the finely-detailed images provided by an intercontinental combination of radio telescopes, we have been able to see massive clumps of cold gas being pushed away from the galaxy's center by the black-hole-powered jets," said Raffaella Morganti, of the Netherlands Institute for Radio Astronomy and the University of Groningen. The scientists studied a galaxy called 4C12.50, nearly 1.5 billion light-years from Earth. They chose this galaxy because it is at a stage where the black-hole "engine" that produces the jets is just turning on. As the black hole, a concentration of mass so dense that not even light can escape, pulls material toward it, the material forms a swirling disk surrounding the black hole. Processes in the disk tap the tremendous gravitational energy of the black hole to propel material outward from the poles of the disk. At the ends of both jets, the researchers found clumps of hydrogen gas moving outward from the galaxy at 1,000 kilometers per second. One of the clouds contains as much as 16,000 times the mass of the Sun, while the other contains 140,000 times the mass of the Sun. The larger cloud, the scientists said, is roughly 160 by 190 light-years in size. "This is the most definitive evidence yet for an interaction between the swift-moving jet of such a galaxy and a dense interstellar gas cloud," Morganti said. "We believe we are seeing in action the process by which an active, central engine can remove gas — the raw material for star formation — from a young galaxy," she added. The researchers published their findings in the September 6 issue of the journal Science. Source: NRAO press release
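The quoted figures make the scale of this outflow easy to check. Below is a back-of-the-envelope estimate (my own sketch, not from the press release) of the kinetic energy carried by the larger cloud, using the quoted mass and outflow speed; the solar-mass constant is a standard value, everything else comes from the article.

M_SUN = 1.989e30                # mass of the Sun in kg (standard value)
cloud_mass = 140_000 * M_SUN    # larger cloud: ~140,000 solar masses (quoted)
speed = 1_000e3                 # 1,000 km/s (quoted), converted to m/s

# Kinetic energy E = (1/2) m v^2
energy_joules = 0.5 * cloud_mass * speed**2
print(f"Kinetic energy of the larger cloud: {energy_joules:.1e} J")
# Prints roughly 1.4e47 J, on the order of a thousand supernovae
# (a typical supernova releases ~1e44 J of kinetic energy).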
Lithographs exploit the incompatibility of water and grease to transfer marks. First, greasy marks are drawn with crayon or ink onto a flat block of limestone. Water is applied to the surface, absorbed by the bare stone, but rejected by the greasy marks. Then greasy ink is rolled over the surface, rejected by the wet stone surface, but accepted by the greasy marks. The ink, lying only on the drawn marks, is pressed onto paper. (The complete process is quite a bit more complicated than this!) Serigraphs (or silkscreen prints) employ a stencil technique. Fine, open-weave silk is stretched across a frame, like a window screen. Some areas of the screen are blocked with paper, film or other material. When ink is drawn across the screen with a squeegee, it will pass through only the unblocked areas, and onto paper. Monotypes share the characteristics of painting and printmaking. Typically, the artist will create an image directly on a smooth, unmarked plate, using printing ink or other pigmented material. The artist can freely build upon, alter, or even eradicate the image while it remains on the plate. Upon completion, the image is transferred to paper in a press, revealing its true character. Only one impression is made, since most of the pigment has been transferred.
Since 1998, globally averaged surface temperatures have remained relatively flat, despite continued warming of the climate system and carbon dioxide concentrations reaching a new high of 400 parts per million in 2013. Scientists are debating how and why the global atmosphere seems to be bucking the influence of steadily increasing greenhouse gases. As Kevin Trenberth and John Fasullo, scientists at the National Center for Atmospheric Research (NCAR), point out in a 2013 paper, the climate system's innate variability and dynamics make this a less-than-surprising reality. Seasonal fluctuations don't seem out of the norm, with warmer-, wetter-, or sunnier-than-normal summers occurring one year, and cooler-, drier-, or cloudier-than-normal summers occurring the next; yet many people, write Trenberth and Fasullo, expect that human-made climate change will make seasonal temperatures grow steadily warmer each year. Natural variability within a dynamic system, and people's own experience of seasonal fluctuations, should suggest that such steady year-over-year warming is neither typical nor likely. Over the last several years, scientists including Trenberth, Fasullo, and NCAR colleague Jerry Meehl have used observations and models to show that "pauses" in global atmospheric warming lasting a decade or more can be expected, thanks in large part to the huge role of oceans in modulating Earth's climate. Most climate scientists agree that the current warming "hiatus" does not indicate a stalling of the effects of a warming world. Instead, this heating hiatus, a result of natural variability, may be caused by fluctuating patterns linked to both the atmosphere and ocean, like the El Niño/Southern Oscillation that produces El Niño and La Niña events. For instance, the 1997–98 El Niño event caused notable changes in global weather patterns because heat came out of the oceans, cooling them and invigorating weather systems, while recent La Niña events have reduced sea surface temperatures, resulting in cooler global average surface temperatures even as the ocean as a whole warms. In other words, the warming of the surface ocean often runs opposite to the global mean surface temperature. Additionally, natural events such as volcanic eruptions, and the reduced solar activity of the Sun's recent quieter-than-average state, can reduce the amount of incoming radiation. A number of studies, including Trenberth and Fasullo's, indicate that the excess heat generated by anthropogenic emissions is melting the Arctic sea ice and warming the world's oceans, with the deep ocean – below 700 meters – currently taking up a third of the excess heat. This appears to be related to the cool (negative) phase of the Pacific Decadal Oscillation (PDO) that has prevailed since the late 1990s. When the PDO is in its cool phase, there tends to be a net storage of heat in the global oceans—similar to La Niña, but on a longer time scale. The PDO switches from warm to cool (positive to negative) about every 20 to 30 years. Trenberth and Fasullo speculate that the record-strong El Niño of 1997–98 released so much heat from the ocean that the PDO's switch to a heat-storing negative mode may be some type of compensating response. They caution that the dynamics that drive shifts in the PDO have not been conclusively determined, and climate models don't yet seem fully capable of predicting such shifts.
Trenberth believes an El Niño might be the trigger that pushes the current PDO in the other direction. If this happens, some of the "missing" atmospheric warming may once more be felt, potentially causing global temperatures to rise at rates on a par with those experienced from the 1970s to the 1990s and pushing global average readings to new record highs. This will likely cause global decision-makers some concern, given that even with the current hiatus in rising land-surface temperatures, the first decade of the 21st century was the warmest since at least the 1850s, when instruments began regularly and reliably measuring weather phenomena. Even in the current global pause, the United States experienced by far its warmest year on record in 2012, accompanied by widespread and costly drought. The evidence suggests that global warming of the planet is continuing, explains Trenberth; it just gets manifested in different ways at times.
The nineteenth and early-twentieth century asylum was most likely to be run on a system of 'moral management'. The term 'moral' is used here in a somewhat insidious way: it refers to a system of bodily and mental health, but has its roots in a conventional Victorian morality which insisted upon self-discipline above all else. There were, however, many advantages to the system of moral management: it offered patients the opportunity to take responsibility for their own actions, something which earlier centuries would have considered impossible for the insane (no matter the degree of insanity). The system also offered patients freedom from isolation, physical restraints, and other torturous 'treatments', instead offering them routine, exercise, good food, fresh air and regular occupation (a programme which still sounds quite practical today). Consequently, asylums ideally were large buildings with airy communal rooms, and, significantly, grounds in which patients might exercise, take walks, and even do some gardening. The Victorian belief in fresh air and exercise became more pronounced into the twentieth century, and consequently the grounds of an asylum were extremely important. Treatments available, in addition to this healthy routine, might include some rudimentary medication (such as sedatives, usually bromide), frequent immersion in cold or lukewarm water, and hypnotism. Restraints might be used where a patient was dangerous or likely to hurt him or herself, but – at least in theory – were meant to be used only rarely and in extremis. The emphasis was on the moral regime, however, through which a well-behaved patient might earn privileges, feel himself or herself a useful member of the community, and thus restore reason through self-discipline. Such treatment was effective in some cases, particularly milder ones, or illnesses such as post-partum depression or alcoholism, but was less effective for the criminally-inclined or the seriously disturbed. When asylums became the standard place of care for the mentally ill, in the early 1800s, there was a sharp rise in the number of asylum buildings, followed by another boom after the 1845 Lunatics Act. They were commonly built on regimented lines, yet often in imitation of the English country house. Large, airy common rooms, such as a lounge, recreation room and dining room, would be central to this (though patients were likely to remain segregated by sex throughout their time in the asylum). Most asylums would also have a chapel, since in the nineteenth and early twentieth century religion was seen as helpful to patients, offering them ritual, faith and hope. The grounds would be laid out to facilitate outdoor activities for the patients, whilst ensuring that they did not leave the view of the staff; the building was thus itself an intrinsic part of treatment. Violent or dangerous patients would be isolated, possibly in padded rooms, or restrained, while those perceived as less of a threat would commonly sleep in dormitories, where they could be seen easily by staff. Rooms for treatment, isolation, administration and sleeping would be laid out along corridors, with the communal rooms placed centrally. This plan was common throughout the nineteenth century, differing from the radial layout common before it (particularly for prisons), which was based on Jeremy Bentham's Panopticon and permitted all patients to be seen from a central point.
This design was unpopular, however, for its cramped and institutional feel, and lack of natural light.
Religion in the United States: Religious Discrimination
Although religious toleration is a cornerstone of American society, religious discrimination has also been a part of America's history. Most Americans, from early colonists to members of the Bureau of Indian Affairs in the 20th century, have viewed Native American spiritual beliefs as superstition. Even the most well-intentioned of American policy makers sought to replace traditional native beliefs with Christianity by breaking up native families, enforcing the use of English, and educating children in boarding schools dedicated to Christianization and Americanization. European immigrants also sometimes faced religious intolerance. Roman Catholics suffered from popular prejudice, which turned violent in the 1830s and lasted through the 1850s. Americans feared that the hierarchical structure of the Roman Catholic Church was incompatible with democracy. Many felt that separate parochial schools meant that Roman Catholics did not want to become Americans. Irish Catholics were thought to be lazy and prone to heavy drinking. At its peak, the nativist movement—which opposed foreigners in the United States—called for an end to Catholic immigration, opposed citizenship for Catholic residents, and insisted that Catholic students be required to read the Protestant Bible in public schools. The nativist American Party, popularly called the Know-Nothings because of the secrecy of its members, won a number of local elections in the early 1850s, but disbanded as antislavery issues came to dominate Northern politics. In the early part of the 20th century, the Ku Klux Klan sought a Protestant, all-white America. The Klan was a white supremacist organization first formed in the 1860s. It was reorganized by racists in imitation of the popular movie The Birth of a Nation (1915), which romanticized Klansmen as the protectors of pure, white womanhood. The Klan preached an antiblack, anti-Catholic, anti-Semitic message and sometimes used violence to enforce it. Burning crosses, setting fires, and beating, raping, and murdering innocent people were among the tactics used. Many Protestant congregations in the South and in the Midwest supported the Klan. The Klan attracted primarily farmers and residents of small towns who feared the diversity of the nation's large cities. Anti-Catholic feelings reappeared during the unsuccessful presidential campaign of Alfred E. Smith in 1928 and in the 1960 presidential campaign, in which John F. Kennedy became the first Roman Catholic president. Jews were subjected to anti-Semitic attacks and discriminatory legislation and practices from the late 19th century into the 1960s. The Ku Klux Klan promoted anti-Semitic beliefs, there was an anti-Semitic strain in the isolationism of the 1920s and 1930s, and the popular radio sermons of Father Charles Coughlin, a Roman Catholic priest, spread paranoid fears of Jewish conspiracies against Christians. President Franklin D. Roosevelt was the target of anti-Semitic attacks, despite the fact that he was not a Jew. Both the fight against fascism during World War II and the civil rights movement of the 1950s and 1960s helped to diminish anti-Semitism in the United States. Court decisions and civil rights legislation removed the last anti-Jewish quotas on college admissions, ended discrimination in corporate hiring, and banned restrictive covenants on real estate purchases.
Far right-wing movements at the end of the 20th century have revived irrational fears of Jewish plots and promoted anti-Semitic statements, as have some African American separatist groups. However, right-wing militias and Klan groups have paid less attention to American Jews than to African Americans, homosexuals, and conspiracies allegedly funded by the federal government. In the 1990s, the demise of the Soviet Union as the "evil empire" (as President Ronald Reagan named it in 1983) left a void in American political life that has been partially filled by a sporadic antagonism towards certain Muslim nations. Foreign policy crises have coincided with an influx of Muslims into the United States and popular revulsion at the antiwhite rhetoric of the American Nation of Islam. An oil crisis created in the 1970s when Arab oil-producing nations raised prices astronomically triggered anti-Arab, anti-Muslim diatribes in the United States. International crises in the Middle East during the 1980s continued these sentiments. There were outbursts of anti-Muslim feeling during the Persian Gulf War (1990-1991), and many Muslims felt the war was an attack on Islam rather than a dispute with the government of Iraq. This sense that U.S. policy was attacking the Islamic faith was a factor when the World Trade Center in New York City was bombed in 1993 and destroyed in 2001. American ideals of religious toleration and freedom of conscience have not always been endorsed in particular cases and in certain periods of American history, but the goal of inclusiveness and liberty remains an important theme in the development of the United States.
These days we take for granted that a meter in our own country is exactly equal to a meter on the other side of the globe. If you buy 5 meters of that beautiful cloth online from Canada, you (rightfully) assume that the Canadian meter is the same as the one in your country. If your energy supplier imports electricity from a neighboring country, it is taken for granted that the ampere is the same in both countries. Straightforward as this may seem, it has only been so for a relatively short period of time. In the Middle Ages traveling salesmen had to beware, as the unit of length (for instance, of the cloth they were selling) was determined by the length of the arm of the mayor of the village where they were staying. As mayors came in all sizes, so did the units of length. Over the centuries, the first attempts were made to standardize units, though it is estimated that by the end of the eighteenth century, in France alone, approximately 250,000 different units were in use. One of the ideals of the French Revolution was to achieve measuring standards "for all times, for all peoples". This led to the metric system. In 1875, 17 countries signed the Metre Convention, in which, among other things, they designated physical platinum-iridium artifacts as the "world standards", the prototypes, for the kilogram and the meter. These prototypes were famously stored in Sèvres, near Paris, at the newly founded Bureau International des Poids et Mesures (BIPM). Over the course of nearly a century, the scope of the BIPM was broadened to other units, and in 1960 the metric system was superseded by the Système International d'Unités, commonly known as the SI. Since then, several adaptations to the SI have been made. Currently the SI contains seven base units: the kilogram (kg), the meter (m), the second (s), the kelvin (K), the ampere (A), the mole (mol) and the candela (cd). From these seven base units, all other units (newton, pascal, joule, volt and many others) can be derived.
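As a small illustration of that last point, here is a sketch in Python (my own construction, not from the source) that treats each unit as a vector of exponents over the seven base units and derives a few named units from them; the derivations shown follow the standard SI definitions of the newton, pascal, joule, watt and volt.

BASE = ("kg", "m", "s", "A", "K", "mol", "cd")  # the seven SI base units

def unit(**exps):
    """Build a unit as exponents over the base units, e.g. unit(m=1, s=-1) for m/s."""
    return {b: exps.get(b, 0) for b in BASE}

def mul(u, v):
    """Multiply two units: exponents add."""
    return {b: u[b] + v[b] for b in BASE}

def div(u, v):
    """Divide two units: exponents subtract."""
    return {b: u[b] - v[b] for b in BASE}

newton = unit(kg=1, m=1, s=-2)   # force = mass * acceleration
pascal = div(newton, unit(m=2))  # pressure = force / area
joule = mul(newton, unit(m=1))   # energy = force * distance
watt = div(joule, unit(s=1))     # power = energy / time
volt = div(watt, unit(A=1))      # electric potential = power / current

# The volt reduces to kg * m^2 * s^-3 * A^-1, exactly as the SI defines it.
assert volt == unit(kg=1, m=2, s=-3, A=-1)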
William Tell, a hero of Swiss folklore, became a symbol of Switzerland's national pride and independence. He is best known for shooting an arrow through an apple sitting on his son's head. Tell's feat of archery supposedly took place around 1300, when Switzerland was under Austrian rule. The independent-minded Tell refused to salute an Austrian official, who then ordered Tell to take the nerve-wracking shot. Afterward, the official spotted a second arrow. Tell said that if his first arrow had missed, he would have used the second one to kill the official. As punishment, Tell was sent to prison, but he escaped and killed the Austrian official. This act inspired the rebellion that eventually ended Austrian rule in Switzerland. Some accounts name Tell a leader in that fight. William Tell first appeared in legends and songs of the 1400s. By the 1700s, various Swiss histories featured the story. The play Wilhelm Tell (1804) by the German poet Friedrich von Schiller brought the Swiss hero to world attention, as did the opera Guillaume Tell (1829) by Italian composer Gioacchino Rossini. Despite these works, however, there is no historical evidence that William Tell existed, although the stories about him may have been based on a kernel of reality. The famous test of marksmanship, with a cherished life at stake, is similar to stories from Norse and British folklore.
Unit 9: My community
Day three: Review
Objective: Using phrases or short sentences, students will be able to respond orally to a series of questions about a fictitious town or city.
Setting the Stage (5 minutes)
Teacher reintroduces the exchange student featured in some activities of the "MY HOME" unit, and asks students to volunteer aloud what they remember about that student and that unit.
Input (20 minutes)
Teacher presents a slide show about a community. (The text of the PowerPoint provided with the unit is in English, but Teacher may change the text to the target language as appropriate.) Teacher encourages student involvement and participation by asking students to create a story about the exchange student's home town or city. Each new fact and event of the story is reinforced with yes/no, either/or, and who/what/where questions. Teacher encourages responses from the whole class as well as from individual students.
Guided Practice (15 minutes)
Teacher divides the class into two teams. The teams stand in a line on opposite sides of the classroom. The first student in each line stands at the back of the room, away from the front board, and holds a dry-erase pen or a piece of chalk. Teacher explains that the students with the writing tool must run to the front board and write their answer to Teacher's question as fast as possible. The first student to write a correct response (correct both in information and in language) wins a point for his or her team. Those students then run back to their line and pass the writing tool to the next student. This process continues until Teacher calls "time." The team with the most points at the end of the game wins extra-credit participation points for the day. Teacher asks the following types of questions:
- Which day is the first day of the week?
- What can we see in a garden?
- What does your family recycle?
- Where is Armenia?
- Why are trees good for the earth?
- What do you like to buy in a supermarket?
- Do you like freeways or expressways? Why or why not?
- Which is better for the earth, riding on the bus or in the car?
- Which is better, a gas-powered car or an electric car?
- Where do you want to go to college?
- What do you do when you go to a park?
- Do you like to go skate-boarding?
- When was the last time you and your family went to the beach?
- What do you like to do at the beach?
- What do you and your family do to keep our earth beautiful?
Closure (2 minutes)
Students, in pairs, tell each other which slide in the PowerPoint presentation they liked the best and why.
This work is licensed under a Creative Commons License.
- You may use and modify the material for any non-commercial purpose.
- You must credit the UCLA Language Materials Project as the source.
- If you alter, transform, or build upon this work, you may distribute the resulting work only under a license identical to this one.
New technique may open up an era of atomic-scale semiconductor devices
(Phys.org) — Researchers at North Carolina State University have developed a new technique for creating high-quality semiconductor thin films at the atomic scale – meaning the films are only one atom thick. The technique can be used to create these thin films on a large scale, sufficient to coat wafers that are two inches wide, or larger. "This could be used to scale current semiconductor technologies down to the atomic scale – lasers, light-emitting diodes (LEDs), computer chips, anything," says Dr. Linyou Cao, an assistant professor of materials science and engineering at NC State and senior author of a paper on the work. "People have been talking about this concept for a long time, but it wasn't possible. With this discovery, I think it's possible." The researchers worked with molybdenum sulfide (MoS2), an inexpensive semiconductor material with electronic and optical properties similar to materials already used in the semiconductor industry. However, MoS2 is different from other semiconductor materials because it can be "grown" in layers only one atom thick without compromising its properties. In the new technique, the researchers place sulfur and molybdenum chloride powders in a furnace and gradually raise the temperature to 850 degrees Celsius, which vaporizes the powders. The two substances react at high temperature to form MoS2. While still at high temperature, the vapor is then deposited in a thin layer onto the substrate. "The key to our success is the development of a new growth mechanism, a self-limiting growth," Cao says. The researchers can precisely control the thickness of the MoS2 layer by controlling the partial pressure and the vapor pressure in the furnace. Partial pressure is the tendency of atoms or molecules suspended in the air to condense into a solid and settle onto the substrate. Vapor pressure is the tendency of solid atoms or molecules on the substrate to vaporize and rise into the air. To create a single layer of MoS2 on the substrate, the partial pressure must be higher than the vapor pressure; the higher the partial pressure, the more layers of MoS2 settle onto the substrate. If the partial pressure is higher than the vapor pressure of a single layer of atoms on the substrate, but not higher than the vapor pressure of two layers, the balance between the two pressures ensures that thin-film growth automatically stops once the monolayer is formed. Cao calls this "self-limiting" growth. Partial pressure is controlled by adjusting the amount of molybdenum chloride in the furnace – the more molybdenum chloride is in the furnace, the higher the partial pressure. "Using this technique, we can create wafer-scale MoS2 monolayer thin films, one atom thick, every time," Cao says. "We can also produce layers that are two, three or four atoms thick." Cao's team is now trying to find ways to create similar thin films in which each atomic layer is made of a different material. Cao is also working to create field-effect transistors and LEDs using the technique. Cao has filed a patent on the new technique. The paper, "Controlled Scalable Synthesis of Uniform, High-Quality Monolayer and Few-layer MoS2 Films," was published online May 21 in Scientific Reports, a journal of the Nature Publishing Group.
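To make the growth rule concrete, here is a hypothetical sketch in Python (the function name and all pressure values are invented for illustration; the paper gives no such code) of the self-limiting condition described above: each successive layer is deposited only while the precursor partial pressure exceeds that layer's vapor pressure, so growth stops by itself at a predictable thickness.

def layers_grown(partial_pressure, layer_vapor_pressures):
    """Count how many layers form before growth self-limits.

    layer_vapor_pressures[i] is the assumed vapor pressure of
    layer i+1 on the substrate; layer i+1 grows only while the
    precursor partial pressure exceeds it.
    """
    layers = 0
    for vapor_pressure in layer_vapor_pressures:
        if partial_pressure > vapor_pressure:
            layers += 1
        else:
            break  # vapor pressure wins: deposition stops here
    return layers

# Illustrative (made-up) vapor pressures for the first three layers:
vp = [1.0, 2.5, 4.0]
print(layers_grown(1.8, vp))  # 1: growth self-limits at a monolayer
print(layers_grown(3.0, vp))  # 2: a higher partial pressure gives a bilayer

The point the sketch captures is that thickness is set by a thermodynamic balance rather than by timing: as the article notes, raising the partial pressure (by adding more molybdenum chloride) moves the stopping point from one layer to two, three or four.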