Introduce the slideshow: Look at the cover photo and list objects that were used to create this ofrenda (shrine) for Dia de los Muertos, the Day of the Dead, the holiday that honors the memory of loved ones who have died. Questions to guide a pre-reading discussion: Looking at this photo, how do you think Mexicans prepare for the memorial holiday, Dia de los Muertos (Day of the Dead)? How do people around the world honor loved ones who have died? What objects help us celebrate this kind of holiday?

As you read the slideshow, discuss: Cultures have different beliefs about death and different ways of remembering loved ones who have died. How do Mexicans celebrate the Day of the Dead, their memorial holiday in November? Encourage students to look closely at each photo and ask questions about how ofrendas are made, why marigold flowers are gathered, how candles are used, what traditional foods and drinks are made for the festivities, and more. Explore the customs and the unique ways this annual holiday is celebrated.

The Day of the Dead at Our School in Angangueo may or may not have mentioned Halloween, which occurs at the same time of year. The holiday's historic roots, beliefs, and traditions have given way in the U.S. and Canada to a celebration of candy and costumes. Explore the similarities and differences between Halloween and the Mexican Day of the Dead celebrations; make comparisons to customs and traditions for other memorial events from around the world. Compare and contrast core beliefs, main activities, foods, clothing, and other items that help people celebrate during these events.

Discuss seasonal connections: fall holidays have their early roots in yearly seasonal changes and final harvests. People stockpiled food for the cold winter months, when the sun set early and rose late and nature "died" until its rebirth in the spring. What is happening to plants at this time of year? How is day length changing?
Research how other cultures honor the memory of loved ones: Discuss the fact that many cultures have traditions for honoring the dead. For instance, in Afghanistan, people prepare and eat the favorite food of a deceased relative once a week for a month after he or she has died. Have students conduct research to learn about other customs.
Criteria is part of the Academic Word List. It is important for students in college and university. - The plural form of criterion; more than one criterion. - (sometimes singular) Criteria are a set of principles that you use to judge something or to decide about something. - One of the criteria for measuring the success of a democracy is the percentage of people who vote in that society. - The project proposal does not meet current criteria for approval.
Phases of Research
Research studies are generally divided into four phases. Phase I trials are designed to confirm the safety of a new medication. Phase I trials generally involve a small group of healthy volunteers and typically last three to four months. Phase II trials help to confirm whether or not a medication is effective. These studies are "randomized": participants are divided into two groups, one of which receives the active treatment while the other receives a "control" (usually the standard treatment available). Phase III trials are initiated once a treatment has demonstrated both safety and efficacy. Phase III trials help determine whether a medication is effective over a longer period of time. These trials are longer and larger in terms of the number of participants. Phase III trials can provide enough data for a product to be submitted for approval by the Food and Drug Administration (FDA). Phase IV clinical trials are conducted to obtain more information on a new medicine that has been submitted to the FDA for approval. Phase IV trials are also conducted when a company is gathering more information about a product already on the market.
Plants Do Photosynthesis
Photosynthesis is the process of converting light energy to chemical energy and storing it in the bonds of sugar. This process occurs in plants, some algae (Kingdom Protista), and the cyanobacteria (also known as "blue-green algae," Kingdom Monera). Photosynthesis requires only light energy, CO2, and H2O to make sugar. The process of photosynthesis takes place in the chloroplasts, which contain chlorophyll, the green pigment involved in photosynthesis. Photosynthesis takes place primarily in plants' leaves, and little to none occurs in stems, etc. The parts of a typical leaf include the upper and lower epidermis, the mesophyll, the vascular bundle(s) (veins), and the stomates. The upper and lower epidermal cells do not have chloroplasts, thus photosynthesis does not occur there. They serve primarily as protection for the rest of the leaf. The stomates are holes which occur primarily in the lower epidermis and are for air exchange: they let CO2 in and O2 out. The vascular bundles or veins in a leaf are part of the plant's transportation system, moving water and nutrients around the plant as needed. The mesophyll cells have chloroplasts, and this is where photosynthesis occurs. As you hopefully recall, the parts of a chloroplast include the outer and inner membranes, intermembrane space, stroma, and thylakoids. The chlorophyll is built into the membranes of the thylakoids. The light reaction happens in the thylakoid membrane and converts light energy to chemical energy. This chemical reaction must, therefore, take place in the light. Chlorophyll and several other pigments such as beta-carotene are organized in clusters in the thylakoid membrane and are involved in the light reaction. Each of these differently-colored pigments can absorb a slightly different color of light and pass its energy to the central chlorophyll molecule to do photosynthesis.
Pigments Involved in Photosynthesis
In this lab you will be examining the pigments present in plant leaves, separating/isolating these pigments from each other, and determining absorption spectra for each of them.
Chlorophyll A (chloro = green, phyll = leaf) is the pigment used by plants to convert energy from the sun into chemical energy useful to the plant, but other pigments present in leaves also help to "harvest" light energy. This energy is stored by converting carbon dioxide and water to sugar. The chemical reaction for this is 6 CO2 + 12 H2O (+ light energy) → C6H12O6 + 6 O2 + 6 H2O. This sugar is stored by the plant as starch (thus the occurrence of photosynthesis could be demonstrated using the iodine test for starch). Benedict's solution could be used to test for the presence of sugar (usually found in the leaf veins, indicating transfer of sugar from one part of the plant to another). The central part of the chemical structure of a chlorophyll molecule is a porphyrin ring, which consists of several fused rings of carbon and nitrogen with a magnesium ion in the center. Chlorophyll looks green because it absorbs red and blue light, making these colors unavailable to be seen by our eyes. It is the green light which is NOT absorbed that finally reaches our eyes, making chlorophyll appear green. However, it is the energy from the red and blue light that is absorbed that is, thereby, able to be used to do photosynthesis. The green light we can see is not/cannot be absorbed by the plant, and thus cannot be used to do photosynthesis.
(Figure: structures of Vitamin A and β-Carotene.)
Besides chlorophylls A and B, various other pigments, including carotenes (carot = carrot), xanthophylls (xantho = yellow), and anthocyanins (antho = a flower, cyano = blue, dark blue), are often found in plant leaves. The chemical structures of these molecules are illustrated in many organic chemistry and cell physiology books. Because of their different colors, many of the carotenes and xanthophylls are capable of "capturing" solar energy that the chlorophyll cannot and transferring that energy to the chlorophyll, enabling photosynthesis to occur. Anthocyanins are not involved in photosynthesis.
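As a quick arithmetic check that the photosynthesis equation given above is balanced, the atoms on each side can be tallied. The short Python sketch below is illustrative only (it is not part of the lab protocol) and counts atoms for 6 CO2 + 12 H2O → C6H12O6 + 6 O2 + 6 H2O:

```python
# Each formula is written as a dict of element -> atoms per molecule.
CO2 = {"C": 1, "O": 2}
H2O = {"H": 2, "O": 1}
GLUCOSE = {"C": 6, "H": 12, "O": 6}
O2 = {"O": 2}

def tally(side):
    """Sum atom counts over (coefficient, formula) pairs."""
    totals = {}
    for coeff, formula in side:
        for element, n in formula.items():
            totals[element] = totals.get(element, 0) + coeff * n
    return totals

reactants = tally([(6, CO2), (12, H2O)])
products = tally([(1, GLUCOSE), (6, O2), (6, H2O)])
print(reactants == products)  # True: 6 C, 24 H, 24 O on each side
```

Both sides come to 6 carbons, 24 hydrogens, and 24 oxygens, confirming the equation as written.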
Once a mixture of these pigments has been extracted from a leaf together, because each of these pigments, including the chlorophylls, has a different chemical structure and formula, the mixed pigments can be separated from each other by a process known as paper chromatography (chromo = color; graph = to write). In this process, the mixed pigments are dissolved in a mixture of two (or more) solvents and allowed to soak into a piece of paper by capillary action. Typically, one of the solvents used is more covalent while the other is more polar or ionic, and their molecular weights differ considerably. The various leaf pigments, thus, are more, or less, soluble in the different solvents, so as the solvent system wets the paper, the various pigments move into/across the paper at various rates depending on their sizes (molecular weight), relative number of covalent or ionic bonds in the molecule, and other factors based on their chemical structures: normally, the smallest molecules move fastest and the largest move slowest. Once the pigments are separated, a tentative identification of each may be made (to be confirmed by obtaining an absorption spectrum of each). Chlorophyll A appears as a blue-green band while chlorophyll B is a yellow-green band. Carotenes are bright yellow to orange while xanthophylls are a slightly greenish yellow. Anthocyanins are reddish, violet, or blue, and are not soluble in organic solvents, thus typically do not move up the chromatogram at all.
Light Absorbed by Each of the Pigments
The fact that each of these pigments appears as a different color is an indication that each is absorbing different wavelengths of light. Remember, the color(s) that we see is whatever the plant has NOT absorbed (for example, chlorophyll A looks green because it is not absorbing and using green light).
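Although this protocol identifies bands by their colors and relative positions, a common way to quantify how far each pigment travels is the Rf value: the distance the pigment moved divided by the distance the solvent front moved, both measured from the original pigment stripe. The protocol itself does not require Rf values; the sketch below, with hypothetical measurements, is just to show the calculation:

```python
def rf_value(pigment_distance_cm, solvent_front_distance_cm):
    """Rf = distance the pigment moved / distance the solvent front moved,
    both measured from the original pigment stripe. Rf is always <= 1,
    and faster-moving (smaller) molecules have larger Rf values."""
    return pigment_distance_cm / solvent_front_distance_cm

# Hypothetical measurements from a finished chromatogram (not real data):
print(rf_value(4.5, 9.0))   # 0.5  -> a faster-moving band
print(rf_value(2.25, 9.0))  # 0.25 -> a slower-moving band
```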
According to the literature, chlorophyll A has two absorption peaks (absorbs the most light) at around 428 nm (blue-violet range) and at around 660 to 700 nm (red range), while chlorophyll B absorbs best at around 453 and 643 (to 650) nm. Beta-carotene, the most common carotene (and precursor of vitamin A), has an absorption peak at a wavelength of 451 nm (at the blue-violet end of the spectrum). Each of these pigments or the mixture as extracted from the leaf can be examined with a spectrophotometer to determine its absorption spectrum, thus confirming its identity.
Effects of Wavelength on Plant Growth
While the present experiment will not test this, it has been shown that the blue/violet light absorbed by plants is responsible for foliage growth. Plants grown in only blue light are compact with lush, dark-green leaves but few flowers. Red and far-red light affect the growth processes (elongation and expansion) in various plant parts. Consequently, these colors are responsible for flower development among other things. Incandescent light is a good source of light in the red range but lacks output in the blue range, while fluorescent light tends to be better in blue and lacking in the red end of the spectrum. Vitalight® and similar special fluorescent-type bulbs are specially designed for high output in both the blue and red ranges, yet are low in the green range (hence tend to look purplish or pinkish in color). Many plant growers combine incandescent and fluorescent lights.
The chemicals in the solvent system being used in Part A and the ethanol used in Part B are flammable, thus should be kept away from any open flames. Pouring of solvent system to/from the Erlenmeyer flask should take place in a fume hood, and the reagent bottle and your flask should immediately be capped. It probably is also a good idea to not breathe too much of it.
Part A — Paper Chromatography
- Fresh spinach (or other leaves of one's choice; optionally, if available, leaves may be picked from outdoors. Brightly-colored autumn leaves could be interesting to test.)
- Freshly-made solution of 90% petroleum ether + 10% acetone
- Whatman #1 chromatography paper (continuous strip)
- Penny (or other coin)
- 250-mL Erlenmeyer flask with rubber stopper (#8) to fit it, and T-pin
- Forceps and scissors
- 95 or 100% EtOH
- 13×100 (small) test tubes and rack
- Hand-held UV light
Part B — Spectra of Pigments
- Tubes of pigments from Part A
- 95 or 100% EtOH
- Fresh or dried leaves of spinach, parsley, or kale
- Mortar and pestle
- Funnel and circular filter paper
- (Optional: red, yellow, blue, and green food color, methylene blue, riboflavin, and/or other pigment solutions of interest)
- Additional 13×100 test tubes
- Spectrophotometer, cuvettes in plastic rack, lens paper
Part A — Paper Chromatography
(Figure: solvent in flask.)
- A 250-mL Erlenmeyer flask, #8 stopper, and T-pin should be obtained. Working in the fume hood, a few millimeters (depth) of the 90% petroleum ether + 10% acetone solution should be poured into the bottom of the flask and the stopper placed on the flask so that the "fumes" can start to accumulate in the flask while the next few steps are performed. The air in the flask must become saturated with fumes from the solvent or the chromatography won't run properly.
(Figures: measuring the paper to the needed length; first and second folds.)
- A piece of chromatography paper slightly longer than what will fit into the flask should be obtained. It is important to touch it as little as possible, preferably only at the edges. The bottom edge of the paper should be cut as straight as possible. The paper should be long enough that when a T-pin is used to attach it to the underside of the stopper, the tip of the paper will reach to within a few millimeters of the bottom of the Erlenmeyer flask.
This paper must be kept as clean as possible, handled only by the edges, and only set on clean paper. If it is necessary to mark on the chromatography paper, only pencil should be used, not pen (ballpoint ink is alcohol-soluble).
(Figures: spinach leaf, penny, and paper; rolled line of pigment on the paper; darker line after repeated rolling.)
- A spinach (or other) leaf should be obtained. The chromatography paper should be laid on a piece of clean paper and the leaf laid over the chromatography paper. The edge of a penny (or other coin) may be used to roll (smash) a stripe of color across the paper about 1.5 to 2 cm above the end. The leaf should be moved so a new portion of the leaf is over the stripe, and re-rolled with the penny over/onto the same place on the paper to darken the stripe. This process should be repeated several times as needed to obtain a dark stripe. The stripe should be allowed to dry before proceeding.
(Figures: pinning the paper onto the stopper, two views.)
- The chromatography paper should be held up next to the flask to judge the exact length of paper needed such that when the paper is pinned to the bottom side of the stopper, the bottom end of the paper will be just below the surface of the solution. If needed, the top end of the strip should be folded over at the right place. The flask should not be left open while pinning the paper to the stopper. With the flask placed in its "permanent" location, the stopper should be quickly flipped upside-down on top of the flask so the flask remains sealed. The flask should be kept open the minimum amount of time possible. The T-pin should be used to attach the top (non-pigment) end of the paper to the center bottom of the rubber stopper.
(Figures: stopper inserted into flask; solvent front below the pigment line.)
When the paper is securely attached, the stopper should quickly be flipped right-side-up and inserted into the flask in such a way that the paper does not touch the sides. The bottom of the paper should be barely in the solvent so that the solvent will be soaked up.
It is imperative that you not move, jostle, or slosh the flask once the paper is soaking!
(Figures: solvent front above the pigment line; done when the solvent front nears the "bottom" end of the pin.)
- As the solution is absorbed into the paper by capillary action, it will carry the various pigments up the paper from the original stripe. When the farthest band is about 0.5 to 1.0 cm away from the top of the paper or close to touching the T-pin, the chromatography may be stopped by removing the paper from the flask and replacing the stopper. The chromatogram should be observed and drawn, especially noting the colors of the various bands that are visible. The paper should be handled carefully, and no marks should be made on it.
(Figures: chromatogram removed and drying; finished chromatogram ready to cut apart.)
- As a class, the identical bands will be put together and the pigments re-dissolved. One labeled 13 × 100 test tube will be supplied for each band. Using (CLEAN) scissors, the various bands should be cut apart from each other (remember which is which). Each should then be placed as far into the bottom of the designated 13 × 100 test tube as possible. Everyone's identical bands (i.e., all outer yellow bands) should go into the same tube to make the solutions as concentrated as possible. After everyone's bands have been collected, the instructor (or an appointed class member) should place about 5 mL of 100% ethanol into each tube. Each tube should be labeled (if not done previously) and covered with Parafilm®. Each label should include the order and color of the band that tube contains (for example, "outer yellow"). The covered tubes should be placed in the designated rack for storage until the next lab period.
- All tubes being saved must be properly labeled and covered, then placed into a rack and stored in an appropriate location until next period. Chromatography solvent should be returned to the reagent bottle for reuse. All glassware should be washed and placed in the racks to dry.
All scraps of chromatography paper and spinach should be disposed of properly, and any other general clean-up should be done.
- Once the pigments have been re-dissolved in ethanol, your instructor may use the UV light to examine the tubes to demonstrate how chlorophyll (and any of the other pigments?) fluoresce.
Part B — Spectra of Pigments
- For this part of the experiment, students will be working in groups, based on the number of spectrophotometers available and the number of students in the lab section. (Optionally, as a class, a drop of each color of food coloring may be diluted with 100% EtOH so these may also be tested.) Someone in the class may grind a piece of spinach leaf or some dried parsley with a mortar and pestle, then add 100% EtOH to extract the plant pigments. Then, a small test tube, rack, glass funnel, and a piece of circular filter paper should be obtained. The test tube should be placed into the rack and the funnel into the test tube. The paper should be folded in half, then in quarters (half of half) as demonstrated by the instructor, then inserted into the funnel. The newly-extracted pigment solution should be poured through the filter paper to remove any particles. If this solution is very dark green, it will need to be diluted with more ethanol (see below).
- The tubes containing the isolated bands from the chromatograms (and those containing the diluted food coloring) will be distributed among the groups of students so that each group should have at least one of the redissolved, isolated bands and "something else" (mixed spinach or parsley pigments and/or food coloring and/or methylene blue or riboflavin) to test.
- One (CLEAN – without methylene blue stains) cuvette should be obtained for each solution the group will be testing, plus one for plain EtOH, making sure to match glass colors (types/brands of cuvettes).
Each cuvette should first be tested for the presence of unwanted, left-over methylene blue by placing a small amount of 100% EtOH in it, swirling, and holding the tube against a white surface. If a cuvette needs to be cleaned, do not use water to rinse it, because all the solutions we will be testing are dissolved in EtOH, and water could interfere with the readings. For this experiment, only EtOH should be used to clean out cuvettes. Because it is so difficult to remove markings from cuvettes, and considering the possibility that any marks on them might interfere with the readings that will be taken, it is better to just line them up in the test tube rack in a pre-determined order corresponding to the labeled test tubes of the pigments being tested. In the unlikely case that it is necessary to label the cuvettes, ONLY PENCIL SHOULD BE USED, lightly writing only on the white area provided. DO NOT USE WAX MARKER OR LAB PEN! The cuvette that will serve as the blank should have about 4 or 5 mL of 100% ethanol added to it. Later, each pigment solution to be tested will be poured directly from its test tube into its own cuvette.
- While the redissolved, individual bands are probably dilute enough, if you are testing a solution of freshly-extracted, mixed parsley or spinach pigments, that may be too concentrated and may need to be diluted so the readings for it are not off the scale. Remember, Beer's Law says that, by diluting a sample, its absorbance (at all wavelengths) will decrease. The chlorophylls in the mixture cause it to have an absorbance peak (highest amount of light absorbed) near 425 nm, so we want to adjust the concentration of the solution so that the A425 is not over 1.00. To do that, first, the wavelength on the spectrophotometer should be set to 425 nm. The zero and blank (using EtOH so that the machine subtracts out readings for whatever light the solvent absorbs) should be adjusted.
Then, the absorbance of the mixed pigment solution should be read, and if the A425 is greater than about 1.000, it is necessary to dilute the sample. Ethanol should be added to dilute the solution and decrease the A425 to no more than 1.000. While it is possible to just "play around" with adding more ethanol or pigment solution until the absorbance is acceptable, if great accuracy is desired, the amount of alcohol needed can be calculated as follows. Recall that Beer's Law says that, for example, if the absorbance reading is 2.00, theoretically, the addition of an equal volume of EtOH should dilute the sample to half its original concentration, such that the absorbance is half of the original, or only 1.00. Assuming you're starting with 4 mL of pigment solution, the amount that you might typically use in a cuvette, Beer's Law would mean that

  (A425 of 2.00) / [(x amt) / ~4 mL soln]  =  (A425 of 1.00) / [(x amt) / (~4 mL soln + ~4 mL EtOH)]

where "(x amt)/# of mL" is an expression of concentration. The units used to express "x" could be moles, grams, or whatever, and since it's the same on both sides, it cancels out and is not even necessary to know. This equation can be rewritten in general terms as

  Ai (A425 observed) / [(x amt) / Vi (initial mL of soln)]  =  Af (A425 desired of 1.00) / [(x amt) / Vf (initial mL of soln + mL of EtOH needed)]

and may be used to determine the amount of EtOH needed.
When solved for milliliters of ethanol needed, this equation becomes:

  Ai (A425 observed) × Vi (initial mL of soln) = Af (A425 desired of 1.00) × Vf (initial mL of soln + mL of EtOH needed)

which rearranges to:

  mL of EtOH needed = Vi (initial mL of soln) × [Ai (A425 observed) – Af (A425 desired of 1.00)] / Af (A425 desired of 1.00)

If the desired, final absorbance reading is 1.00, this can be simplified to:

  mL of EtOH needed = Vi (initial mL of soln) × [Ai (A425 observed) – 1.00]

Whether the amount of EtOH needed is calculated as just explained or whether a "guesstimate" amount is used, approximately the needed amount of alcohol should be added and a new A425 reading obtained. As needed, the volume should be adjusted further and another reading taken. When the A425 is 1.000 or slightly less, the concentration has been properly adjusted. About 4 to 5 mL of the diluted solution should be kept in (or placed in) a cuvette for testing.
- The isolated pigment bands are dilute enough that they are OK as is and do not need to be diluted. Each should be decanted into a separate, clean (check first for methylene blue) cuvette, taking care not to include any of the paper pieces.
- Absorbance readings should be obtained for all pigments at 25-nm intervals, and the process of obtaining readings will be quicker if all samples for which the group is responsible are tested at a given wavelength before changing to the next wavelength (all tested at 350 nm, then all at 375 nm, etc.). It is much more time-consuming to test one sample at all wavelengths, then go back and "start over" to test a second sample, etc. Initially, the wavelength should be set to 350 nm and the zero and blank rechecked. Absorbance readings should be obtained for all specimens the group is testing. Then the spectrophotometer should be set at 375 nm, the zero and blank readjusted, and another set of readings obtained. Readings of absorbance should be taken at 25-nm intervals from 350 to 800 nm (350, 375, 400, etc.).
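The dilution arithmetic above can be sketched in a few lines of code. Assuming, as the text does, that absorbance scales linearly with concentration (Ai × Vi = Af × Vf), the volume of ethanol to add follows directly; this sketch is illustrative, not part of the protocol:

```python
def ethanol_to_add_ml(a_initial, v_initial_ml, a_desired=1.00):
    """Volume of EtOH to add so a pigment solution's absorbance drops
    from a_initial to a_desired, per Beer's Law: Ai * Vi = Af * Vf.
    Returns 0 if the solution is already dilute enough."""
    if a_initial <= a_desired:
        return 0.0
    return v_initial_ml * (a_initial - a_desired) / a_desired

# Example from the text: 4 mL of solution reading A425 = 2.00 and a
# target A425 of 1.00 -> add an equal volume (4 mL) of ethanol.
print(ethanol_to_add_ml(2.00, 4.0))  # 4.0
```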
Each time the wavelength is changed, it is necessary to recheck both the zero and the blank to get correct readings. Readings should be obtained for each of the bands being tested before changing wavelength. Readings should be recorded in students' lab notebooks in chart form with columns for wavelength and for each of the samples. Also, data should be entered online.
Part A — Paper Chromatography
The resulting bands on the chromatography paper should be drawn (then colored with colored pencils?) and described (color, location with respect to the solvent front and/or original spot). A tentative identification should be assigned to each of the pigments based on the list of pigment colors mentioned in the Background. From the colors of the individual bands on the chromatogram, which pigment does each of these bands appear to represent? Which is the smallest or fastest-moving molecule? Which is the slowest? Remember to draw any new equipment used.
Part B — Spectra of Pigments
- All spectrophotometer readings should be recorded in group members' lab notebooks. A suggested format is a chart with columns for: Wavelength | Pigment #1 (Name?) | Pigment #2 (Name?) | ...
- Data for the absorption spectra of all solutions/bands tested should also be entered online (once per group, per set of data, not multiple entries of the same data). When all data have been entered, you may then return to the Web site to print out the class data.
- For each sample the group tested, a graph of wavelength (on the X- or horizontal axis) versus absorbance (on the Y- or vertical axis) should be constructed. The graphing protocol should be used as a reference on proper graphing techniques. Because this graph represents data which do not exhibit a proportional correlation, sequential points should be connected in "dot-to-dot" fashion, and the graph will not be a straight-line graph. Absorption maxima (peaks) and minima for each of the solutions tested should be noted.
The example, above, is a graph of the spectra for two concentrations of Chlorophyll A, represented by the black line and the greenish line, and the spectrum for Carotene, represented by the pinkish line. Because the concentrations of the solutions were not standardized in any way, the heights of the peaks (which, you should recall, are merely concentration-dependent) are not significant (differences in concentrations of solutions are not being examined in this experiment); rather, as notated in the example above, the locations of the peaks (the maxima), as well as the minima, relative to the wavelengths tested, are the important data. Thus it is important that Chlorophyll A's maximum is at 425 nm as compared to Carotene's maximum at 450 nm, and it is important that Chlorophyll A's minimum is at 525 nm as compared to Carotene's minimum at 600 nm. For this experiment, we don't care that at 425 nm, the absorbance for one of the Chlorophyll A solutions was 0.59 and the other was 0.29 — all that means is that one solution was about twice as concentrated as the other; the important thing is that Chlorophyll A had a maximum absorbance peak at 425 nm. It is also important that Chlorophyll A has a second maximum at 675 nm. Also, because these glass cuvettes are only good in the visible range (UV takes special quartz cuvettes and IR takes special salt cuvettes), the "drift" at the beginnings and ends of the graph, where the wavelength is approaching the ultraviolet or infrared range, is "meaningless" for this experiment.
- At what wavelength(s) did each of the isolated pigments absorb the most/least light? Do the observed absorption maxima and minima correspond to those reported in the literature for each of those pigments? Were the tentative identifications of the bands correct — do the absorption data support the identifications made based on color/appearance? To what colors do these wavelengths correspond?
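Once readings at 25-nm intervals are tabulated, the absorption maxima and minima discussed above can also be located programmatically. A minimal sketch follows; the spectrum below uses made-up absorbance values (a single peak near 425 nm), not real chlorophyll data:

```python
# Wavelengths sampled every 25 nm from 350 to 800 nm, as in the protocol.
wavelengths = list(range(350, 801, 25))

def absorbance_extremes(readings):
    """Given a dict of wavelength -> absorbance, return the wavelengths
    of the absorption maximum (peak) and minimum."""
    peak = max(readings, key=readings.get)
    trough = min(readings, key=readings.get)
    return peak, trough

# Hypothetical single-peaked spectrum centered at 425 nm (illustrative only):
demo = {wl: 1.0 / (1 + abs(wl - 425) / 50) for wl in wavelengths}
print(absorbance_extremes(demo))  # (425, 800)
```

Note that, as the text warns, readings near the UV and IR ends of the range are unreliable with glass cuvettes, so in practice the extremes should be judged within the visible region.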
- The absorption spectrum of the mixed pigments tested should be compared with the spectra from the various "known" pigments. By matching the peaks, which of the individual pigments does the mixed pigment solution contain? Also, at which wavelengths did the mixed plant pigments absorb the most light — where were the absorbance peaks? To what colors do these wavelengths correspond? At which wavelength did they absorb the least light — where was the absorbance closest to zero? To what color does this correspond?
- If methylene blue, food coloring, or any other pigments were also tested, the same analysis should be done for each pigment tested. The absorption maxima and minima should be determined for each of the colors tested. What wavelength(s) of light is/are each of the colors absorbing (therefore unavailable to a plant), and what wavelength(s) is/are each color not absorbing (therefore reflecting or transmitting, and available to a plant)? If a plant was placed into a solution containing this/these pigment(s), what wavelength(s)/color(s) of light would be available to the plant to use?
- Any other significant notes, observations, and data should be included.
Optional Additional Experiment(s)
- The anthocyanins in leaves such as those of red cabbage are soluble in water, and may be extracted by putting red cabbage in a blender with water, then straining off the pulp. Adding acid changes the color of the cabbage "juice" to a bright, cherry red; adding a base turns it a dark, forest green; and adding tap water sometimes changes it to blue. Anthocyanins are also soluble in methanol and ethanol, and so may be extracted using one of those, but at least with ethanol, within a few hours the solution fades to clear. Once spotted onto chromatography paper (either from a methanol or ethanol extract or using a penny to apply pigment directly), it appears that the anthocyanins will not move using the solvent systems typically used for chromatography of plant pigments.
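Returning to the analysis of the mixed-pigment spectrum described above, the peak-matching step can be sketched in code: given the literature peak wavelengths quoted in the Background (chlorophyll A near 428 and 660 nm, chlorophyll B near 453 and 643 nm, β-carotene near 451 nm) and a measured mixed spectrum, one can check which known peaks line up with strong absorbance in the mixture. The threshold and sample readings below are illustrative assumptions, not real data:

```python
# Literature absorption peaks (nm) quoted in the Background section.
KNOWN_PEAKS = {
    "chlorophyll A": [428, 660],
    "chlorophyll B": [453, 643],
    "beta-carotene": [451],
}

def likely_components(mixed_spectrum, threshold=0.5):
    """Return the pigments all of whose literature peaks fall at wavelengths
    where the mixed spectrum absorbs strongly (>= threshold). Each literature
    peak is matched to the nearest sampled wavelength (25-nm spacing here)."""
    sampled = sorted(mixed_spectrum)
    def nearest(wl):
        return min(sampled, key=lambda s: abs(s - wl))
    return [name for name, peaks in KNOWN_PEAKS.items()
            if all(mixed_spectrum[nearest(p)] >= threshold for p in peaks)]

# Hypothetical mixed-pigment readings at 25-nm intervals (not real data),
# with strong absorbance in the blue-violet and red regions:
demo = {wl: 0.9 if wl in (425, 450, 650, 675) else 0.1
        for wl in range(350, 801, 25)}
print(likely_components(demo))
```

With these made-up readings all three pigments match, since the strong bands at 425/450 nm and 650/675 nm sit nearest each literature peak; with real class data the list would reflect which peaks actually appear.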
Optionally, a spectrum could be obtained for freshly-extracted pigment from red cabbage leaves. A comparison of spectra for red cabbage juice in acidic and basic solutions would also be interesting. Experimentation with various chromatography solvents could be done to try to find a system that would allow anthocyanins to move.
- Blue-green algae (bacteria-relatives in Kingdom Monera) like Spirulina also contain a bluish pigment, phycocyanin, which is water-soluble and is not used in photosynthesis. Because Spirulina is a microscopic organism, using a penny to "roll" pigments onto chromatography paper wouldn't work. The pigments in Spirulina would have to be extracted in some solvent (as was done for the spinach/dried parsley in Part B), then spotted onto chromatography paper if so desired. Phycocyanin appears to be insoluble in EtOH (or only slightly soluble). Optionally, a spectrum of water-extracted Spirulina could be taken (use water as the blank) and/or experimentation undertaken to try to find a chromatography solvent with which phycocyanin could be separated from the various photosynthetic pigments also present in Spirulina.
- It might be interesting to try to extract pigments from carrots or other vegetables high in β-carotene, then perform paper chromatography on the extract and obtain absorption spectra for the pigments thus obtained.
- Health-food stores often sell "chlorophyll extract". It might be interesting to obtain some of that to use for chromatography and spectral analysis.
Things to Include in Your Notebook
Make sure you have all of the following in your lab notebook:
- all handout pages (in separate protocol book)
- all notes you take during the introductory mini-lecture
- drawing (yours!) of chromatography set-up
- optional sample of chromatography paper &/or Parafilm backing &/or the leaf you used
- notes on chromatography results
- labeled (which band was which and where were they?) drawing (yours!)
of finished chromatogram - your in-class spectrophotometer data - your properly-constructed graph of your group’s spectrophotometer data - any other notes and data you gather as you perform the experiment - print-out of class data (available online) - answers to all discussion questions, a summary/conclusion in your own words, and any suggestions you may have - any returned, graded pop quiz Copyright © 2011 by J. Stein Carter. All rights reserved. Based on printed protocol Copyright © 1986 D. B. Fankhauser and © 1989 J. L. Stein Carter. Chickadee photograph Copyright © by David B. Fankhauser
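As an optional extension (not part of the original protocol), the peak-matching analysis from the discussion questions — finding where a pigment absorbs the most and least light, and checking whether a known pigment's peak appears in a mixture's spectrum — could be prototyped in code. This is an illustrative sketch: the wavelength/absorbance values and function names below are hypothetical, not measured data.

```python
# Illustrative sketch of the peak-matching analysis from the discussion
# questions. All wavelength/absorbance values here are hypothetical.

def absorbance_extremes(spectrum):
    """Return (wavelength of max absorbance, wavelength of min absorbance)."""
    peak = max(spectrum, key=spectrum.get)
    trough = min(spectrum, key=spectrum.get)
    return peak, trough

def matches_known_pigment(mixture, known, tolerance_nm=10):
    """True if the known pigment's main absorbance peak appears in the mixture."""
    mix_peak, _ = absorbance_extremes(mixture)
    known_peak, _ = absorbance_extremes(known)
    return abs(mix_peak - known_peak) <= tolerance_nm

# Hypothetical spectra: {wavelength in nm: absorbance}
chlorophyll_a = {430: 0.90, 500: 0.10, 660: 0.75, 700: 0.05}
mixture       = {430: 0.80, 500: 0.15, 660: 0.60, 700: 0.08}

peak, trough = absorbance_extremes(mixture)
print(f"mixture absorbs most at {peak} nm, least at {trough} nm")
print("chlorophyll a present?", matches_known_pigment(mixture, chlorophyll_a))
```

A fuller version would compare all local peaks, not just the global maximum, but the idea — match peak positions within a tolerance — is the same one you apply by eye to the plotted spectra.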
The years between ages 10 and 14 are known as adolescence. It is a time characterized by rapid change and development, as it is the transition between childhood and young adulthood. Changes can be inconsistent and also uncomfortable. Adolescents experience physical, social, as well as personal and emotional changes. Cognitive processes will also begin to differ. The rate at which adolescents experience changes will vary depending on gender, genetics, environmental factors and health. Physical change is a primary characteristic of adolescence. Preteens will experience growth spurts, changes in skeletal structure, muscle and brain development, as well as sexual and hormonal development. Gender differences play a role in when these changes occur. For girls, physical changes begin to happen at about age 12, while boys typically begin to see changes at about age 14. Eating disorders, drug use and sexual activity can pose serious health risks if teens engage in these behaviors during these rapid physical changes. Socialization is another characteristic of adolescence, as preteens begin to socialize more with their peers and separate themselves from their family. During childhood, kids have a loyalty to their adult role models, such as parents or teachers. However, during adolescence, this loyalty shifts, making preteens more loyal to their friends and peers. For adolescents, self-esteem is largely dependent on their social lives. Girls tend to stick to small groups of close friends, while boys build larger social networks. Adolescents are highly aware of others and how they are perceived during this stage. Changes in cognitive processes are characteristic during adolescence. Preteens experience higher thinking, reasoning and abstract thought. Preteens develop more advanced language skills and verbalization, allowing for more advanced communication. Abstract thought allows adolescents to develop a sense of purpose, fairness and social consciousness.
Adolescents also decide how moral and ethical choices will guide their behaviors during this time. Cognitive processes are affected by overall socialization, meaning that adolescents will develop differently during this stage based on individual factors. Personal and Emotional Characteristics Adolescence is a time when emotions begin to run high. Parents and teachers may begin to notice argumentative and aggressive behaviors due to sudden and intense emotions. Adolescents are also characteristically self-absorbed. They are preoccupied with themselves because they are beginning to develop a sense of self, but they are also scrutinizing their own thought processes and personalities. Possibilities begin to look endless during adolescence, leading some teens to become overly idealistic. They also believe that their thoughts and feelings are unique, doubting that others could possibly understand what they are experiencing.
Probably you missed it, but last week there was a fascinating interview on the NPR program Talk of the Nation. The segment featured a scientist named David Goldberg, who answered questions about his research concerning the plausibility of storing massive amounts of carbon dioxide in basalt formations deep below the earth’s oceans. In a paper that is available online and will be published in an upcoming issue of The Proceedings of the National Academy of Sciences, Goldberg and his colleagues write about how a basalt formation off the coast of Oregon and Washington could potentially store in its cavities anywhere from 120 to 150 years of carbon produced by the United States (assuming current U.S. emission rates do not increase). While initially I was extremely skeptical of this idea (because I thought that it might cause all kinds of unintended ecological havoc), by the end of the interview, I was somewhat more optimistic. The idea of storing carbon dioxide under the ocean reminds me of a rather naive idea I once had as a kid. I thought that a good way to rid the earth of our human waste would be to rocket it into space in huge bundles. Then a giant incinerator in space would burn the garbage. Every time I see a bumper sticker that says “Earth First: We’ll destroy the other planets later,” I chuckle as I am reminded of what a silly idea shooting our litter into space would be. The Basic Science Behind Storing CO2 Under the Ocean The aforementioned NPR interview does an excellent job of explaining how carbon would be stored underneath the ocean in basalt rock formations. Essentially, the CO2 would need to be pumped into fissures in the rocks in pressurized liquid form. It would become trapped inside for a long period of time, reacting with iron, calcium, and sea water in the rocks to make chalk (known more technically as carbonates).
Laboratory reactions indicate that this process would most likely occur quickly and would not influence the oceans in a negative way or allow the carbon to escape easily. According to Goldberg, in its liquid form CO2 is denser than sea water and would subsequently be more likely to sink than rise while trapped within the basalt formations. The formations in discussion are part of the Juan de Fuca Plate, and are close to 10,000 feet below the ocean off the coast of Oregon and Washington. Goldberg says that this particular formation has the advantage of potentially holding about ten times more carbon than other areas of this kind underneath the ocean. He therefore thinks that research efforts should be focused here. But he readily admits that there are other challenges that might derail this potential solution to help alleviate global warming. He explains that a pilot study is needed to see if assumptions are correct that carbon can be stored safely and efficiently this far below the ocean. He and his fellow researchers are currently seeking funding to commence this introductory research. What Are Some of the Other Challenges and Alternatives to Storing Carbon this Way? The process of storing carbon in such ways that it won’t enter the atmosphere is known as carbon sequestration. What Goldberg and his fellow researchers are proposing is referred to as geochemical trapping. If it is discovered that the basalt formations below Oregon and Washington’s coastal areas can in fact store carbon dioxide safely, there is still the issue of transporting the CO2 from destinations on the ground in the U.S. all the way to the depths of the ocean. Goldberg admits that creating and financing this infrastructure would be a necessary precursor to making carbon sequestration and geochemical trapping a reality. But he suggests that if his research and that of others is fast-tracked, then perhaps it is achievable. He is right that the research has to begin somewhere, though.
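The "120 to 150 years" figure quoted from the paper can be turned into a rough order-of-magnitude check. Note that the annual emission figure below is my own assumption (U.S. CO2 emissions around the time of the study were on the order of 6 billion tonnes per year), not a number from the article:

```python
# Back-of-envelope check on the storage-capacity claim.
# Assumed input: U.S. CO2 emissions of roughly 6 billion tonnes/year
# (an estimate for the late 2000s, not a figure from the study).

US_ANNUAL_CO2_TONNES = 6.0e9
YEARS_LOW, YEARS_HIGH = 120, 150  # range quoted from the study

low = US_ANNUAL_CO2_TONNES * YEARS_LOW
high = US_ANNUAL_CO2_TONNES * YEARS_HIGH
print(f"Implied capacity: {low:.1e} to {high:.1e} tonnes of CO2")
# Roughly 7e11 to 9e11 tonnes, i.e. hundreds of billions of tonnes.
```

Even as a crude estimate, this conveys the scale of reservoir the researchers are describing, and why transporting that much liquefied CO2 to the sea floor is itself a major infrastructure problem.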
When asked by NPR’s interviewer about how this idea was different from or better than storing CO2 in empty oil wells, he replied that while the strategy he is researching would be more inconvenient, it would represent a component of global warming alleviation far more significant in magnitude than any storage that old oil wells might provide. He wisely cautions listeners, however, by saying that his idea would just be one of various solutions and actions needed to address global warming and climate change, not a silver bullet to relieve us of attempting to reduce our greenhouse emissions. You can listen to the NPR interview online. Your thoughts on this potential method of combating global warming are appreciated. Read More About Carbon Sequestration on the Green Options Network: - EPA Drafts Rule for Carbon Sequestration - Lots of Room to Sequester CO2 - CO2 Capture and Technology of the Future - Wyoming Passes Carbon Capture & Sequestration Legislation - Wining about Global Warming - Carbon Sequestration Could Be $8B Business for Agriculture
Cladistics illustrates the ever-developing evolutionary relations between life forms, while the Linnaean system classifies organisms according to fixed physical traits. Cladistics organizes all life within a frame of evolutionary theory, while the Linnaean system reaches back to an Aristotelian concept of scientific names. Cladistics is a method developed to pair the evolutionary model with the practical need for classification of organisms. Cladistics shows branches of evolution and allows users to see how closely related one species is to another species at any point of their evolutionary histories. Cladistics organizes life in accordance with the history of development rather than with the similarity of one life form to another at any one time. Cladistics is not intended for use outside work involving the theory of evolution. Carl Linnaeus developed his system in an attempt to give every distinct form of life a universal name and a place within a structure of all life according to physical similarities. This structure designates a species, genus, family, order, class and kingdom for each organism. Linnaeus relied on a previous model of classification offered by Aristotle, though Linnaeus changed and enlarged Aristotle's model. The Linnaean system groups species with physical similarities together regardless of the different ways those traits were developed in each species.
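The contrast between the two systems can be made concrete with a toy data model — a hypothetical sketch, not real taxonomy software. A Linnaean record is a fixed ladder of ranks, while a cladogram is a nested branching structure from which relative relatedness can be read off:

```python
# Toy contrast between the two classification systems (hypothetical data).

# Linnaean classification: a fixed set of ranks for one species.
linnaean_wolf = {
    "kingdom": "Animalia", "class": "Mammalia", "order": "Carnivora",
    "family": "Canidae", "genus": "Canis", "species": "Canis lupus",
}

# Cladogram: nested tuples, where each pairing is a branching event.
# (("wolf", "dog"), "fox") says wolf and dog share a more recent
# common ancestor than either does with fox.
cladogram = (("wolf", "dog"), "fox")

def depth_of(tree, name, depth=0):
    """Depth of a species in the cladogram = branch points above it."""
    if tree == name:
        return depth
    if isinstance(tree, tuple):
        for branch in tree:
            d = depth_of(branch, name, depth + 1)
            if d is not None:
                return d
    return None

print(depth_of(cladogram, "wolf"))  # 2: more branching events above it
print(depth_of(cladogram, "fox"))   # 1
```

The Linnaean dictionary answers "what is it called and where does it file?"; the cladogram answers "who shares the most recent common ancestor with whom?" — which is exactly the distinction the article draws.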
This is a book for anyone who has wondered about the lines on the maps of the United States. In it Andro Linklater, a British writer and journalist, provides a history of the surveying of America. This is necessarily a two-part task, as not only does he describe the development and importance of surveying in shaping America, but it also requires him to explain the simultaneous development of uniform measurement in the Western world. For while people were familiar with units of measurement, those units themselves were not standardized, as lengths, along with weights and volumes, differed from place to place during the colonial period. Yet the colonists already had access to the first standard measurement, the 66-foot-long (22-yard) chain introduced by the 17th-century mathematician Edmund Gunter. His chain was the first element of precision that made possible the surveying – and through that, the selling – of the vast American territories England claimed in North America. Linklater describes this tandem development well, conveying both the importance of surveying and measurement in shaping the history of the country, as well as the numerous frustrations involved in getting it right. What began as an often haphazard assessment gradually became a more professional, systematic approach by the mid-19th century, creating the checkerboard pattern and straight lines visible from the skies overhead today. Linklater’s book is a readable history of a mundane yet critical aspect of American history. With a scope spanning from Tudor England to a land office in modern-day Sacramento he conveys something of the long process of development that brought us to where we are now. Yet his examination of surveying rests in a bed of outdated interpretations about American history. These are minor and do little to affect the author’s argument, yet they are a weakness that diminishes the overall value of the book.
All of this makes Linklater’s book a useful look at a long overlooked element shaping American history, yet one that is strongest when focusing on its main subject and not when discussing American history more broadly.
The annual return of the Asian monsoons is one of nature’s great cycles of renewal. Each summer, the onset of the wet season brings much-needed rain to millions of people across the continent. But scientists have noticed a puzzling trend in recent decades. Some of the monsoons, including the annual rains in India and parts of China, seem to be weakening over time, raising concerns about the long-term effects on water supplies and agriculture. It’s the exact opposite of what should be expected in a warming world. On a basic level, the monsoons are caused by a difference in temperature between the air over the Asian continent versus the surrounding oceans. This gradient changes with the seasons, and allows for an influx of warm, moist air to blow across the land during the hotter summer months. That brings stormy weather. As the climate warms, land masses are expected to heat up faster than surrounding waters, strengthening the monsoon season in the process. But some are getting weaker, and now scientists think they know why. Research increasingly suggests that rising air pollution in parts of Asia is powerful enough to alter the weather—sometimes in ways that work against the influence of global warming. The result is a kind of tug of war between greenhouse gases and pollution particles in the atmosphere. And for now, when it comes to the annual monsoons, pollution seems to be winning. A recent study, published last month in the journal Geophysical Research Letters, is the latest to highlight the phenomenon. It focuses on the Asian summer monsoon, which brings annual rains to large regions of China. The researchers, led by Yu Liu of the Chinese Academy of Sciences, compiled a large collection of data from tree rings in northern China. They revealed 448 years of the region’s climate history. Tree ring samples contain a variety of information about the conditions that occurred over the course of a tree’s life. 
The tree rings showed that the Asian summer monsoon has been decreasing in strength for the past 80 years—the most substantial weakening observed in the entire 450-year record. The researchers then used a series of climate model simulations to determine the causes. The models indicate that the growing influence of aerosols—tiny pollution particles in the atmosphere—is likely to blame. Without the air pollution, the models suggest, the strength of the monsoon should be growing as the climate warms. The major mechanism likely involves the cooling influence of certain types of aerosols, particularly sulfate particles, according to study co-author Steven Leavitt of the University of Arizona. “The sulfate particles turn out to be quite reflective to sunlight,” he noted in an email to E&E News—meaning they beam sunlight away from the Earth, causing a cooling effect on the local climate. This cooling effect works against the influence of climate change, dampening the warming that’s occurring over the continent and causing the monsoon to weaken. The research reaffirms an idea that many other studies have also suggested, according to climate and monsoon system expert Andrew Turner of the University of Reading in the United Kingdom. “Modelling studies consistently show that aerosol emissions over Asia, and in particular sulphate aerosol emissions, lead to reduced monsoon rainfall,” he noted in an email, adding that the effect isn’t limited to northern China. Multiple studies have pointed to a similar phenomenon affecting the South Asian summer monsoon, as well, which brings rain to millions of people across the Indian subcontinent. Observations suggest that the South Asian monsoon has also weakened in recent decades. And as with the rains in northern China, models indicate that aerosols are likely a major factor. That’s not to say there aren’t remaining mysteries about the behavior of the Asian monsoons.
The East Asian monsoon has exhibited some perplexing trends in recent years, according to atmospheric physicist Yi Ming of NOAA’s Geophysical Fluid Dynamics Laboratory. Some of Ming’s own modeling studies have suggested that the influence of aerosols over the Asian continent should be driving a drying trend over southern China, similar to northern China and the Indian subcontinent. But real-life observations show that it’s actually growing wetter, he said in an interview. Scientists are still working to understand the discrepancy, but many believe that certain natural climate variations may be part of the reason. On the whole, though, research increasingly points to the acute influence of air pollution on monsoon systems affecting broad swaths of the Asian continent. A global phenomenon Scientists are becoming more aware of the profound ways in which air pollution can affect global weather and climate patterns, and the ways in which they may work with or against the influence of climate change. Different types of pollutants may have different effects in the atmosphere—black carbon particles, for instance, can actually absorb heat and increase climate warming. But the cooling influence of particles like sulfates has proven among the most globally significant effects so far. Recent studies suggest that air pollution may be masking some of the influence of climate change, so to speak, and that the climate would be significantly warmer if it didn’t exist. One particularly jarring 2018 study estimated that eliminating all human aerosol emissions could cause the planet to warm by as much as an additional half to 1 degree Celsius. Other studies have suggested that changes in air pollution have had significant effects on other kinds of climate patterns—not just monsoons—in various parts of the world. One recent study, published earlier this month in Nature, found the likely signature of aerosols in a century of global drought records. 
Also relying on tree ring data, the study found that the influence of human-caused global warming on droughts around the world has been clear for at least 100 years. But its influence temporarily declines for a few decades, starting around 1950. The researchers suggest that an increase in global air pollution during this time was probably counteracting some of the effects of climate change. On a smaller scale, research suggests that some air pollution particles may also be able to affect the weather by altering the formation of clouds. That’s even more complicated. Depending on the types of particles and their concentrations in the air, studies have found that pollution can sometimes enhance rainfall and sometimes suppress it. Still, the effect on clouds has had some globally significant consequences. One oft-cited 2014 paper, for instance, found that pollution from Asia may be strengthening storms in the Pacific Ocean—including weather systems that eventually make it all the way to North America—largely by altering the formation and structure of clouds. On the other hand, some of the weakening trend in the Asian monsoons may actually be linked to the opposite process, according to Wenju Cai of Australia’s Commonwealth Scientific and Industrial Research Organisation, one of the new paper’s co-authors. Some types of aerosol particles over the continent may actually work to decrease the size of water droplets in the storm clouds, which he notes is “not conducive to rainfall.” That’s one of the complicated things about air pollution—there’s a wide variety of different particles released into the atmosphere, and they don’t all behave in the same ways. That makes modeling their effects on already complicated aspects of the Earth system, like weather and climate, a big challenge. What may be an equally important question is what will happen to global weather and climate patterns if air pollution goes away. 
Despite the variety of different particles and their behaviors, it’s clear that the climate-cooling properties of some of the most common aerosols remain among the most significant global consequences of air pollution around the world. Currently, that effect is working at odds with the progression of global warming. But aerosols only last a short time in the atmosphere compared with carbon dioxide and other greenhouse gases, and they disappear quickly when emissions are halted. Some scientists have pointed out that as efforts to clean up the air become more successful, it could be followed by rapid climate warming. And that extra warming could bring about a spate of unforeseen changes in regional weather patterns, as well. Recent patterns in Atlantic hurricanes may hold some hints about the potential future of the Asian monsoons, Ming suggested. While climate change is expected to increase the strength of tropical cyclones, some research has also suggested that recent reductions in air pollution from North America and Europe may have removed some of the cooling influence of aerosols over the Atlantic Ocean. This cooling effect may have been suppressing storms during much of the 20th century, and recent clean-up efforts could partly explain why hurricanes seem to be growing stronger over the last few decades. “In that sense, this is also like an anecdote, or somewhat preview, of what’s to come over East Asia,” Ming suggested. Widespread concern about air quality across Asia, particularly China and India, may eventually result in similarly successful efforts to reduce pollution. When that happens, the weakening trend in the monsoons may begin to reverse itself, particularly as the climate continues to warm. That might seem like a good thing for water availability across the continent. But there are two sides to every coin, Ming noted. Monsoon season is often associated with sudden severe storms, as opposed to just continuous rainfall. 
If the monsoons strengthen, there may be an increase in flooding and storm-related damage. Researchers have frequently emphasized that it’s important to improve air quality despite the potential climate side effects. But they would like to project the warming outcomes before they occur, so there’s time to prepare. And that starts with science aimed at understanding how air pollution is affecting the climate system now, and in what ways it’s working with or against the continued progression of climate change. “If our past understanding is right, that means you will have this unmasking of the aerosol effect—so that means everything will be reversed,” Ming said of the monsoons. “So that is not a tug of war anymore between aerosols and greenhouse gases. They will all be working in concert toward the same direction.” Reprinted from Climatewire with permission from E&E News. E&E provides daily coverage of essential energy and environmental news at www.eenews.net.
What to do: Collect 8 plastic drinking straws. If they are the bendy type, cut off the bend and use the straight part of the straw. The first straw will need no cutting. Cut about 2 cm off the end of the next straw, 4 cm off the third straw, 6 cm off the fourth straw and so on until all 7 straws have been cut. Lay a piece of clear tape on the table, sticky side up, and arrange the straws on the tape from longest to shortest, with the tops of the straws all lined up with each other. Wrap more tape around the straws to secure them together. Blow over the top of the straws. Which straw makes the highest pitch noise? Which straw makes the lowest pitch noise? Why do you think this is? What is happening? The pitch of a sound corresponds to the frequency of the sound wave: the higher the frequency, the higher the pitch. The shorter the straw, the higher the frequency of the sound wave and the higher the pitch. The longer the straw, the lower the frequency of the sound wave and the lower the pitch.
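For older students, the pitch pattern can even be estimated with a little physics. This is a rough sketch only: it assumes each straw behaves like an idealized pipe open at both ends (fundamental frequency f = v / 2L, with v the speed of sound), and the 20 cm starting length is a guess at a typical straw; real straws and blowing technique will shift the numbers.

```python
# Estimate the fundamental frequency of each straw in the pan pipe.
# Model assumption (not from the activity): each straw acts as a pipe
# open at both ends, so f = v / (2 * L).

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def straw_frequency(length_cm):
    """Approximate fundamental frequency (Hz) of an open-open pipe."""
    length_m = length_cm / 100.0
    return SPEED_OF_SOUND / (2.0 * length_m)

# Assume a 20 cm straight straw; each subsequent straw is 2 cm shorter.
lengths = [20 - 2 * i for i in range(8)]  # 20, 18, ..., 6 cm
for length in lengths:
    print(f"{length:2d} cm straw -> about {straw_frequency(length):5.0f} Hz")
```

Halving a straw's length doubles its frequency — the same relationship your ears pick up when you blow across the row of straws.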
In the early solar system, terrestrial planets like Mercury, Venus, Earth and Mars are thought to have formed from planetesimals, small early planets. These early planets grew over time, through collisions and mergers, to make them the size they are today. The material released from these violent collisions is commonly thought to have escaped and orbited around the sun, bombarding the growing planets and altering the composition of the asteroid belt. But the asteroid belt does not seem to contain a record of this impact debris, which is a mystery that has been stumping astronomers and astrophysicists for decades. Two researchers from Arizona State University’s School of Earth and Space Exploration, former NewSpace Postdoctoral Fellow Travis Gabriel and doctoral student Harrison Allen-Sutter, were curious about this discrepancy and set about creating high-end computer simulations of the collisions, with surprising results. “Most researchers focus on the direct effects of impacts, but the nature of the debris has been underexplored,” Allen-Sutter said. Instead of creating rocky debris, the simulations showed that large collisions between planets vaporize the rocks into gas. Unlike solid and molten debris, this gas more easily escapes the solar system, leaving little trace of these planet-smashing events. Their work, which has been published in the Astrophysical Journal Letters, provides a potential solution to this decades-old paradox, dubbed the “Missing Mantle Problem” or the “Great Dunite Shortage.” “It has long since been understood that numerous large collisions are required to form Mercury, Venus, Earth, the moon and perhaps Mars,” said Gabriel, who is the principal investigator of this project. 
“But the tremendous amount of impact debris expected from this process is not observed in the asteroid belt, so it has always been a paradoxical situation.” Their results may also help us to better understand how the moon was formed, which is thought to have been born from the aftermath of a collision that released debris into the solar system. “After forming from debris bound to the Earth, the moon would have also been bombarded by the ejected material that orbits the sun over the first hundred million years or so of the moon’s existence,” Gabriel said. “If this debris was solid, it could compromise or strongly influence the moon’s early formation, especially if the collision was violent. If the material was in gas form, however, the debris may not have influenced the early moon at all.” Gabriel and Allen-Sutter hope to continue this line of research to learn more about not only our own planets, but also the large population of planets observed outside our solar system. “There is growing evidence that certain telescope observations may have directly imaged giant impact debris around other stars,” Gabriel said. “Since we cannot go back in time to observe the collisions in our solar system, these astrophysical observations of other worlds are a natural laboratory for us to test and explore our theory.” The case of the missing mantle: How impact debris may have disappeared from the solar system (2021, September 1), retrieved 1 September 2021. This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
Metabarcoding allows scientists to extract DNA from the environment, in order to rapidly detect species inhabiting a particular habitat. While the method is a great tool that facilitates conservation activities, few studies have looked into its applicability in monitoring species’ populations and their genetic diversity, which could actually be critical to assess negative trends early on. The potential of the method is confirmed in a new study, published in the peer-reviewed scholarly journal Metabarcoding & Metagenomics.

For the first time, a complete time-calibrated phylogeny for a large group of invertebrates is published for an entire continent. A German-Swedish team of scientists provide a diagrammatic hypothesis of the relationships and evolutionary history for all 496 European species of butterflies currently in existence. Their study provides an important tool for evolutionary and ecological research, meant for the use of insect and ecosystem conservation.

In recent years, the concept of Ecosystem Services (ES), the benefits people obtain from ecosystems (such as pollination provided by bees for crop growing, timber provided by forests or recreation enabled by appealing landscapes), has been greatly popularised, especially in the context of impending ecological crises and constantly degrading natural environments. Hence, there has been […]

Earlier this year, a research article triggered a media frenzy by predicting that, as a result of an ongoing rapid decline, nearly half of the world’s insects will soon be no more. Amidst worldwide publicity and talks about ‘Insectageddon’ (the extinction of 40% of the world’s insects, as estimated in a recent scientific review), a critical […]
First raised in 1810 over the fort at Baton Rouge, Louisiana, by a band of Florida troops, the Bonnie Blue served as a symbol of southern independence, and as an unofficial flag of the Confederacy, until the adoption of the Stars and Bars in 1861. The Bonnie Blue was used by the Republic of Texas from 1836 to 1839. In 1861, it flew over the capitol building in Jackson, Mississippi, inspiring the southern patriotic song - "The Bonnie Blue Flag," composed by Harry Macarthy. It was also used in one form or another by numerous southern confederate states. Stars and Bars The white stars on the blue field represent the original Confederate States of Alabama, Florida, Georgia, Louisiana, Mississippi, South Carolina and Texas. Stars and Bars (final version) The thirteen stars represented the original seven Confederate States, as well as the states of Arkansas, North Carolina, Tennessee and Virginia. Note that Kentucky and Missouri each have a star, but efforts to secede from the Union within those states eventually failed. Regardless, the stars remained. Confederate Battle Flag Perhaps the most recognizable flag from the Civil War period was the Confederate Battle Flag. It was carried by Confederate troops throughout the war. Confederate Navy Jack Beginning in 1863, this flag was used at sea by the navy, and became (in many ways) the recognizable symbol of the southern states.
What is Temperature? Temperature (represented as “T” in scientific formulae) is a physical quantity for measuring heat or, in other words, the degree of hotness or coldness of a surface, object or environment. Why is Temperature important? Temperature is important in all fields of natural science. Temperature is an integral part of the natural physical laws that govern the observable phenomena in our world, including weather and climate. Equally relevant, though smaller in scale, is the importance of temperature (heat) in the laws that govern the interactions between the smallest of particles (atoms and molecules) that make up our universe: the Laws of Thermodynamics. It follows that phenomena that we observe in our environment on a daily basis are these same atomic interactions, played out at an immensely larger scale. Water is the best example to illustrate the above statement. Temperature affects the physical state of water (as a result of its effect on the interactions between water molecules), resulting in three distinct physical states: water vapour (gas), liquid water and ice (solid). These physical states are seen in everyday weather phenomena such as clouds. Clouds are formed by water vapour condensing into liquid water as it cools; later, in larger quantities, this water falls as precipitation such as rain and, provided it is cold enough, snow or hail (ice). Therefore, temperature plays a key role in the water cycle, which is critical to life on Earth. As demonstrated by the example of the water cycle, it is important to note the relationship of temperature with both humidity and atmospheric pressure, as it is difficult to separate their influence from temperature readings, as well as to separate the influence of temperature on the measurement of atmospheric pressure and humidity. How is Temperature measured? Temperature is measured using a thermometer, where several scales and units are used.
The most common of these scales are the Celsius scale (units denoted as ℃), the Fahrenheit scale (℉) and, particularly in science, the Kelvin scale (K). In the case of ☒CHIPs, temperature is measured by the advanced weather sensor on the ☒CHIP SW01.
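The three scales named above are related by fixed linear formulas, which a short sketch can make concrete. This is generic Python, not tied to the SW01 sensor:

```python
# Standard conversions between the Celsius, Fahrenheit, and Kelvin scales.
# These formulas are exact by definition of the scales.

def celsius_to_fahrenheit(t_c):
    """Celsius to Fahrenheit: scale by 9/5, then offset by 32."""
    return t_c * 9 / 5 + 32

def fahrenheit_to_celsius(t_f):
    """Fahrenheit to Celsius: inverse of the above."""
    return (t_f - 32) * 5 / 9

def celsius_to_kelvin(t_c):
    """Celsius to Kelvin: Kelvin is Celsius shifted so 0 K is absolute zero."""
    return t_c + 273.15

# Sanity checks at water's freezing and boiling points:
print(celsius_to_fahrenheit(0))    # 32.0
print(celsius_to_fahrenheit(100))  # 212.0
```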
Uses of the Rainforest

Rainforests do not only provide material goods; they also provide invaluable services for us. Around 25% of the people in the world depend in one way or another on water from the rainforest. The trees and soil of the rainforest act like a sponge, storing water from frequent downpours and preventing the precious water from flowing directly into the rivers, seas and oceans. When rainforests are cleared, millions of people lose a precious source of water: there may be severe water shortages in nearby areas, while areas further away may suffer major floods, as there is no longer a rainforest to soak up the water. The rainforest also helps to lower temperatures around the world, acting as a huge, global air conditioner. Observed from a distance, a rainforest presents a dark surface; this dark surface absorbs heat from the sun and reduces temperatures. When the trees are cut down, there is no longer a dark surface to absorb the heat. Instead, the lighter surface formed by the remaining crops and plants reflects the heat back into the atmosphere, causing temperatures to rise. This change in temperature can alter weather patterns on a global scale: droughts, famines and floods may occur in different parts of the world, causing massive damage. Moreover, the rainforest works against the enhanced greenhouse effect. One of the major greenhouse gases, carbon dioxide, traps the sun's rays in the atmosphere, leading to a rise in world temperatures and global warming, which is heating the planet and threatening our future. Carbon dioxide is generally given out during energy usage, material consumption, driving, and the cutting and burning of wood or vegetation. The situation is made worse when uncontrolled logging occurs and when tropical rainforests are burnt.
These account for almost one-third of the world's carbon emissions. Conserving the rainforest helps to keep global temperatures stable; by stopping deforestation, we can make the world a cooler and better place to live in.
What is intravenous immunoglobulin, or IVIG? It is a blood product that contains immune factors that will counteract some of the autoimmune attack that the body puts on its own parts. So it's actually pooled blood products that get concentrated as antibodies, and they're used to stop the autoimmune attack within a person's body. It's made from donor blood; one product may come from anywhere from 2,000 to 16,000 pooled blood donations. Most of the IVIG applications have been in the neuromuscular field of neurology. Guillain-Barré syndrome, the acute autoimmune attack on the nerves, and chronic inflammatory demyelinating neuropathy, an autoimmune attack on the nerves on a more chronic basis, are two common applications. It's been used in some cases in inflammatory muscle disease, particularly dermatomyositis. It's been used in stiff person syndrome, an autoimmune attack in which patients become rigid. It has been used in the autoimmune attack on the neuromuscular junction, in a disease called myasthenia gravis. In addition, in neurology, it's also been used at times in multiple sclerosis, and in some cases vasculitis, an inflammatory process affecting the blood vessels that results in stroke. So there's a host of conditions in neurology where it's become important. It could also be a potential treatment for other autoimmune diseases. For example, peripheral neuropathies, where the nerves aren't working in the arms and legs: more and more of those conditions have been found to be due to an autoimmune process, and immunoglobulin has become an important treatment for that. There are major and minor risks to immunoglobulin. Minor side effects could be things like headaches, muscle aches, stiff neck, rash, slight fever. Some of these are related to infusion rate; some of them are just reactions. They could be treated with some preceding Benadryl, aspirin, things like that.
Then there are major side effects like venous thrombosis, or blood clots in the veins, and acute renal failure. There have been instances of heart attacks and stroke, and of very dangerous rashes from deposition of immune complexes. Those are much less common, but possible, especially in predisposed people. For example, people who have underlying renal insufficiency may be more predisposed to kidney problems as a side effect, because of the load, that is, the concentration of the immunoglobulin, so you've got to pick your patients carefully. Older people may not be able to handle the volume load and may develop congestive heart failure if their heart can't handle that influx of volume. Source: Ivanhoe Newswire
Market House of Louisville, Georgia Located in the center of Louisville, Georgia, this open-air market has been the site of trade since the late 1700s and features a bell made in 1772 that was used to warn residents and traders of impending dangers such as fire and possible raids by indigenous tribes. All manner of goods were exchanged here, including human beings sold as chattel slaves. The structure is the only remaining slave trading post in the state of Georgia. Louisville was the original state capital from 1796 to 1806, a time when lands were illegally given to speculators in what is now known as the Yazoo Land Fraud. Louisville was also the center of the antebellum cotton trade and slave trade in this section of Georgia. Today, the refurbished market house in the middle of the town square still stands as it did more than two hundred years ago. This trading post saw an increase in the number of human beings sold following Congress's ban of the trans-Atlantic slave trade in 1808. With no new slaves coming legally into the nation and coastal slave markets in Savannah closing, this site became a place where slave smugglers found sanctuary. As a result, many of the slaves who were sold here were victims of kidnapping in Africa or sent to the US from Latin America in violation of federal law. The illegal trade was difficult to prevent as slavery itself remained legal until the end of the Civil War. Backstory and Context The open air building that is now commonly known as the “Old Slave Market” was refurbished in the 1990s and still contains materials from the original building that was constructed over two hundred years ago. It serves as the symbol for the community organization known as the Friends of Historic Downtown Louisville as it is the most widely recognized structure associated with the town. 
This pavilion is also the oldest standing structure in the area and possesses an eighteenth-century French bell with an interesting backstory: the bell, which was intended for a convent in New Orleans, never arrived there. Instead, it was relocated to Louisville, Georgia after being stolen by pirates on its voyage from France to Louisiana. The bell that hung in the slave market was used to warn of Indian attacks on the town and still hangs from the structure's rafters today. Due to its open-air nature, this historic site never really opens or closes, so visitors can stop and reflect upon its history and significance at their leisure. The bell was made in 1772, only a few years before the birth of a new nation in 1776. The structure reflects both the strengths of that nation in commerce and trade and the worst aspects of a nation built largely upon slave labor.
Certain grasshoppers are generally mild-mannered and solitary creatures.1 However, when their population becomes dense enough, their bodies physically change and they clump together, sometimes forming the kinds of swarms that have plagued mankind throughout history. The swarming capacity of these insects is well known, but "there has been no convincing general explanation for the evolution of these density-dependent switches in spatial distribution."2 In a recent study published in Current Biology, researchers have attempted to explain the origins of this behavior. Andy Reynolds of Rothamsted Research and his colleagues propose that locusts evolved their clumping and swarming capacity in order to prevent predators from eating them continuously. If grasshoppers always remained solitary, they would stay spread out over wide regions, where predators could easily devour them. An increase in the grasshoppers' population density triggers the swarming response, causing the insects to move en masse to a new location out of the reach of their immediate enemies. However, this explanation fails to answer the key question: How did these grasshoppers obtain the ability to transform their individual behaviors, social behaviors, and external and internal physiologies so radically, and within a span of just hours?3 Even if locusts form swarms for the purpose of eluding predators, this only shows that there is a purpose involved; it does not, by itself, connect any evolutionary mechanism to the locusts' behavior. In fact, the relevant point is precisely that no real-world natural mechanism has been established, either by this research or in any other field, that would translate a purpose (such as evading predators) into a new biological structure or instinct, let alone a completely integrated organism. Thus, the pertinent question for evolution remains unanswered, and even unexplored: How, step by step, did locusts evolve their transforming capacity?
It may well be the case that these insects clump and swarm partly for the purpose of avoiding predators. However, another study indicates that locusts begin swarming because of the threat of being eaten by neighboring locusts.4 Research does not demonstrate that either of these general explanations has, or could have, anything to do with an evolutionary development of grasshopper behavior or physiology. It makes more sense that the purposes for locust swarms were conceived in the mind of the One who programmed locust behavior. Moreover, this Creator, unlike any undirected natural process, understands and even formulated the broad plans of the complex ecosystems in which locusts can effectively function, either as solitary insects or as swarming hordes.
- Those within the family Acrididae, including locusts (swarming grasshoppers).
- Reynolds, A. et al. 2008. Predator Percolation, Insect Outbreaks, and Phase Polyphenism. Current Biology. Published online December 18, 2008.
- Rogers, S. et al. 2003. Mechanosensory-induced behavioural gregarization in the desert locust Schistocerca gregaria. Journal of Experimental Biology. 206 (22): 3991-4002.
- Cannibals drive locust march. Oxford University press release, May 9, 2008, regarding Bazazi, S. et al. 2008. Collective Motion and Cannibalism in Locust Migratory Bands. Current Biology. 18 (10): 735-739.
* Mr. Thomas is Science Writer. Article posted on January 8, 2009.
Countries around China have adopted administrative divisions based on or named after the jùn. See 郡 for further information.

History and development

During the Eastern Zhou's Spring and Autumn period, from the 8th to 5th centuries BCE, the larger and more powerful of the Zhou's vassal states, including Qin, Jin and Wei, began annexing their smaller rivals. These new lands were not part of their original fiefs and were instead organized into counties (xiàn). Eventually, jun were developed as marchlands between the major realms. Despite having smaller populations and ranking lower on the official hierarchies, the jun were larger and boasted greater military strength than the counties. As each state's territory gradually took shape in the 5th- to 3rd-century BCE Warring States period, the jun at the borders flourished. This gave rise to a two-tier administrative system with counties subordinate to jun. Each state's territory was by now comparatively large, so there was no need for the military might of a jun in the inner regions, where counties were established; the border jun's military and strategic significance was greater than that of the counties. Following the unification of China in 221 BCE under the Qin Empire, the Qin government still had to engage in military activity, because rebels from among the six former states were unwilling to submit to Qin rule. As a result, the First Emperor set up 36 jun in the Qin Empire, each subdivided into counties. This established the first two-tier administrative system known to exist in China.

Han dynasty and Three Kingdoms period

When the Han dynasty triumphed over Chu in 206 BCE, the Zhou feudal system was initially reinstated, with Emperor Gao recognizing nearly independent kings and granting large territories to his relatives.
These two sets of kingdoms were placed under hereditary rulers assisted by a chancellor (xiàng). Parallel to these, some Qin jun were continued, each placed under a governor appointed directly by the central government. Over the first three centuries CE, during the Eastern Han dynasty and Three Kingdoms period, the jun were subordinated to a new provincial division, the zhōu. Based upon legendary accounts of the Yellow Emperor's Nine Provinces, there were usually 13 zhōu and many more jun.

Jin dynasty and the Southern and Northern Dynasties

During the following five centuries, spanning the Jin and the Southern and Northern Dynasties, the number of administrative districts was drastically increased and a three-tier system, composed of provinces, jun, and counties, was established. To limit the power of any one local lord, China was divided into more than 200 provinces, 600 jun, and 1,000 counties. Each province consisted of two or three jun, and each jun had two or three counties under its jurisdiction.

Sui and Tang dynasties

After the Tang was established in 618, the former jun became prefectures, referred to as zhōu. Emperor Xuanzong reversed these changes during his reign from 712 to 756, restoring the term jun. After Emperor Suzong ascended the throne in 756, he changed the commanderies back to prefectures, and from then on the term jun was no longer used in the administrative division system. During 1920–1945, when Taiwan was under Japanese rule, there were divisions called 郡 (Mandarin: jùn, Japanese: gun). They were based on the districts of Japan (郡 gun), which in turn were based on the ancient Chinese jùn. Their officers were known as 郡守 (Mandarin: jùnshǒu, Japanese: gunshu). This was the title of the ancient administrators of the Chinese jun (see below), and had never previously existed in Japan. By the end of 1945, there were 51 jun/kun in Taiwan.
In the Warring States period, the chief administrative officers of these areas were known as jun administrators (郡守, jùnshǒu, literally "defender of the jun"). In the Han dynasty, the position of junshou was renamed grand administrator (太守, tàishǒu, "Grand Defender"). Both terms are also translated as "governor". A grand administrator drew an annual salary of 2,000 dan (石) of grain according to the pinzhi (品秩, pǐnzhì) system of administrative rank. Many former grand administrators were promoted to the posts of the Three Ducal Ministers or Nine Ministers later in their careers. In contemporary Chinese, the word 郡 jùn is also used to translate the English administrative division "shire"; the counties of the United Kingdom and the United States are likewise translated as jùn.
- zhou, also translated as prefecture, and often poetically referred to as jun after the Tang dynasty, alluding to its historical equivalents
- Fu, also translated as prefecture, and often poetically referred to as jun in the Ming and Qing dynasties
- Government of the Han dynasty
- 郡, for administrative divisions in other countries that are also called 郡. All of them were based on or inspired by the ancient Chinese jun, but their nature has become quite different from the original concept.
Assessing with Rubrics

Providing detailed explanations of an assignment using an online rubric can assist students both in completing tasks and in improving future performance. Online rubric tools allow teachers to create rubrics quickly with a greater level of feedback, allowing for student interaction in the process. Online rubrics can also easily be shared among teachers in schools and saved or modified for future assignments. They allow teachers and students to communicate more effectively about specific performance goals and improvement over time, with easy-to-keep records that integrate into Learning Management Systems. There are many online tools that can help both teachers and students through the assessment process:
- Use their premade rubrics for various types of projects, or customize a rubric to fit your specific needs.
- Rubric Machine (beta) - Type a topic into a search box, and choose from a vast number of rubrics.
- Rubrics for Assessment - University of Wisconsin-Stout provides rubrics for assessing web and multimedia projects. Topics include wikis, web pages, podcasts, writing, oral presentations, and research.
- Teachnology Rubric Tools - Provides an extensive list of rubric generators and collections to choose from.
- Tell Me a Story - This site provides a variety of tools designed to assess a multimedia digital storytelling project. One of the goals is to model how multimedia projects can be used as a form of reflective assessment.
- Digital Media Scoring Guides - This interactive scoring guide prompts you to choose a communication type and then customize your scoring guide by checking ONLY the traits and elements you want to use.
For those interested in developing 21st-century communication and collaboration skills online, those skills are in many ways real-world, problem-solving skills. Thus, we are going to need to measure them as performances of understanding.
There are simply no multiple-choice tests that we can give students that will evaluate how well, for instance, they collaborate with others. The only way we can measure how well students collaborate with others is . . . to have students collaborate with others. Thus, performance-based assessment is central to the process of evaluating "21st century skills." (It is certainly not the only measure of student achievement, and other types of assessments will continue to have their place.) Performance is most often assessed through formative and summative assessment. Formative assessment is ongoing and provides the information needed to adjust teaching and learning; it not only helps monitor student progress throughout an activity, but can help gauge student understanding and readiness to proceed to further tasks. Summative assessment, as mentioned previously, focuses on a particular point in time, often the conclusion of an activity. Both types of assessment are valuable tools when designing tasks to demonstrate mastery or understanding.
In this lab you will find some of the components you'll use frequently when making electronic circuits. For more on any given component, please check out its datasheet. There are no specific activities in this lab other than to examine the components and to familiarize yourself with them. A datasheet or spec sheet is a document (printed or PDF) that describes the technical characteristics of a sensor, electronic component, product, or material. It includes details on how to use the component in a circuit and other useful design information on how to integrate it into a system, together with specifications on performance and other characteristics that are important to know. Voltage regulators take a range of DC voltage and convert it to a constant voltage. For example, this regulator, a 7805, takes an input of 8–15 volts DC and converts it to a constant 5-volt output. Note the label on the regulator that reads "7805". Check the label on every component: this physical form factor, called the package, is used by many different components, and not all of them are voltage regulators. This is a TO-220 package. The 7800 series regulators come in many different voltages: 7805 is a 5-volt regulator, 7809 is a 9-volt regulator, 7812 is a 12-volt regulator. All the regulators of this family have the same pin connections. In the image above, the left leg is connected to the input voltage, the middle leg is connected to ground, and the right leg is the output voltage. 3.3V regulators are also common. Note that these don't have the same pin configuration as the 7805 regulators! LEDs, or Light Emitting Diodes, are diodes that emit light when given the correct voltage. Like all diodes, they are polarized, meaning that they only operate when oriented correctly in the circuit. The anode of the LED connects to voltage, and the cathode connects to ground. The anode in the LEDs in this photo is the longer leg on each LED. LEDs come in many different packages.
The packages above have built-in lenses. These LEDs are the cheapest you can buy, and they're not very bright. You can get superbright LEDs as well, which are much brighter. If you're working on applications that need very small light sources, you can also get LEDs in a surface mount package. LEDs can only handle a limited amount of current and voltage. The details should be covered in each LED's datasheet, but if not, here's a link to a handy LED current calculator. For most common LEDs running at 5 volts, a resistor between 220 ohms and 1 kilohm will do the job. Solderless breadboards are reusable prototyping tools for electronics that allow you to build and experiment with circuits simply by plugging components in and out of their rows and columns. They come in different shapes and sizes. Resistors resist the flow of electrical current. When placed in series, they reduce the voltage and limit the current. The bands on a resistor indicate the resistor's value. Here's a handy resistor color code calculator. Potentiometers are variable resistors. The two outside terminals act as a fixed resistor. A movable contact called the wiper moves across the resistor, producing a variable resistance between the center terminal and either of the two sides. Trimmer potentiometers are designed to be mounted on a circuit board and are deliberately difficult to turn, so you can use them to adjust a circuit. They're handy to use as physical variables, to tune your project. Switches are one form of digital input. There are many kinds of switches. The two most useful categories are momentary switches, which remain closed only when you press them, and toggle switches, which stay in place after you switch them. Pushbuttons are a common type of momentary switch. The pushbuttons in the photo above are designed to be mounted on a circuit board. They are very small, less than 1 centimeter on a side. They have four pins.
When the button is facing you, the top two pins are connected to each other, and the bottom two are connected to each other. Pushing the button connects the top pins to the bottom pins. Toggle switches stay in one position when you flip them. Wall light switches are common examples of toggle switches. Unlike a momentary switch, a toggle switch can be used to turn a device on or off, because it stays in one state when you remove your hand. The toggle switches below each have three connectors, also called pins or legs. They're usually labeled C for common, NO for Normally Open, and NC for Normally Closed. When you flip the switch, it opens the connection between the common pin and the normally closed pin, and closes the connection between the common pin and the normally open pin. Flip it the other way, and you reverse the connections. Photocells, also known as light-dependent resistors, are variable resistors whose resistance changes with the intensity of the light falling on them. Thermistors are variable resistors whose resistance changes as the temperature changes; you measure the resistance between the two legs, which varies as the sensor's temperature varies. Capacitors store electrical energy while there's energy coming in, and release it when the incoming energy stops. They have a variety of uses. One common use is to smooth out the dips and spikes in an electrical supply; this use is called decoupling. Ceramic capacitors are cheap and unpolarized. They generally have very small capacitance values. They're useful decoupling caps in a low-current circuit. You often see them used to decouple the power going into a microcontroller or other integrated circuit. The number on a ceramic cap gives you its value and order of magnitude. For example, 104 indicates a 0.1 microfarad (uF) cap. 103 indicates a 0.01 microfarad cap.
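Two of the rules of thumb above lend themselves to direct calculation: sizing an LED's current-limiting resistor with Ohm's law, and decoding a ceramic capacitor's three-digit marking. A small sketch (the component values are typical examples, not taken from any specific datasheet):

```python
# Two quick calculations from the discussion above. Values are illustrative.

def led_resistor_ohms(supply_v, forward_v, current_a):
    """Ohm's law across the resistor: R = (Vsupply - Vforward) / I."""
    return (supply_v - forward_v) / current_a

def ceramic_cap_pf(code):
    """Decode a three-digit ceramic cap marking: two significant digits,
    then a power-of-ten multiplier, in picofarads. '104' -> 10 * 10^4 pF."""
    digits = int(code[:2])
    multiplier = int(code[2])
    return digits * 10 ** multiplier

# A typical red LED (~2 V forward drop) at 20 mA from a 5 V supply:
print(led_resistor_ohms(5.0, 2.0, 0.020))  # 150.0 -> use the next common value up
print(ceramic_cap_pf("104"))               # 100000 pF, i.e. 0.1 uF
print(ceramic_cap_pf("103"))               # 10000 pF, i.e. 0.01 uF
```

This is also why the text suggests 220 ohms to 1 kilohm for common LEDs at 5 volts: anything at or above the computed minimum simply dims the LED a little while protecting it.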
Electrolytic capacitors can generally store more charge than ceramic caps. They're usually polarized, meaning that they have a positive leg and a negative leg, because current flows more efficiently through them one way than the other. An electrolytic cap will have a + or – marking on one side, as shown above. Diodes permit current to flow in one direction and block it in the other direction. LEDs are a type of diode, as are the 1N4001 diodes shown here. They're useful for stopping voltage from going somewhere you don't want it to go. Zener diodes have a breakdown voltage past which they allow current to flow in both directions. They're used to chop off excess voltage from a part of a circuit. Transistors act as electronic switches. When you put a small voltage across the base and emitter, the transistor allows a larger current and voltage to flow from the collector to the emitter. The transistor shown above, a TIP120, is a type of transistor known as a Darlington transistor. It is usually used to control high-current loads like motors. DC power jacks are used to connect your breadboard to a DC power supply that you can plug into a wall. They're less common in microcontroller circuits now that USB power connectors and USB wall plugs are common, but they are still very handy when you have only a DC power supply to work with. The one above has screw terminals on the back, to which you can connect wires that run to your breadboard. This one is a 2.1mm inside diameter, 5.5mm outside diameter jack, which is a very common size. Battery connectors like the ones shown above are good for connecting batteries to your project. They commonly have either two round terminals for a 9-volt battery, or a DC power jack like the one shown above. A servo motor is a motor paired with an encoder that provides position/speed readings; a controller (e.g. an Arduino) reads those values and sends control messages in a feedback loop.
This loop is used to precisely control the servo's degree of rotation. RC servomotors like the one shown here can only turn 180 degrees. They are often used for the rudder control on remote control planes and cars. The plastic bits shown in the photo are called horns, and they attach to the shaft to let you attach the motor to the mechanism that you want to control. DC motors utilize induction (an electromagnetic field generated by current flowing through a wire coil) to rotate a central shaft. You can reverse the direction that the shaft rotates by reversing the leads powering it. An H-bridge is an electronic circuit that enables a voltage to be applied across a load in either direction. They are often used to control the direction of DC motors. This H-bridge is a model L293D in a DIP (Dual Inline Package), meaning that there are pins on either side of the component. Like transistors, relays are electronic switches. Electromechanical relays contain a small coil that, when energized, creates a magnetic field that moves a small metal armature to open or close an electrical contact. Relays can handle higher current than transistors and can be used for AC or DC loads. However, because they rely on a physical mechanism, they are slower and more prone to wearing out. If you want to control a relay with an Arduino, you will need to use a transistor as an intermediary, because most relays draw more current than the Arduino's output pins can supply. Relays come in many packages. The ones shown above are for controlling relatively low power loads. For more on relays, see the Transistors, Relays, and Controlling High-Current Loads topic page. Screw terminals are electrical connectors that hold wires in place with a clamping screw. They allow for a more secure connection than female headers and more flexibility than soldering a wire in place. There is one screw, socket, and pin per connection.
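The direction-control idea behind an H-bridge can be summarized as a truth table over its two control inputs. The following is a simplified conceptual model, not the pinout of any particular driver; real chips like the L293D also have enable pins, which this sketch omits:

```python
# Toy model of H-bridge direction control: two logic inputs select which
# way current flows through the motor, and hence which way it turns.
# State names here are descriptive labels, not terms from any datasheet.

def h_bridge_state(in1, in2):
    """Map the two control inputs to the motor's behavior."""
    if in1 and not in2:
        return "forward"    # current flows one way through the motor
    if in2 and not in1:
        return "reverse"    # current flows the other way
    if in1 and in2:
        return "brake"      # both motor terminals tied to the same rail
    return "coast"          # no current path through the motor

print(h_bridge_state(True, False))   # forward
print(h_bridge_state(False, True))   # reverse
```

Reversing a DC motor, as noted above, is just a matter of reversing the leads; the H-bridge does that swap electronically.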
A habitable environment on a Martian volcano? The Martian volcano Arsia Mons may have been home to one of the most recent habitable environments yet found on the Red Planet, geologists say. The research shows that volcanic eruptions beneath a glacial ice sheet would have created substantial amounts of liquid water on Mars's surface around 210 million years ago, and where there was water, there is the possibility of past life. A recent paper by Brown University researchers calculates how much water may have been present near the Arsia Mons volcano and how long it may have remained. Nearly twice as tall as Mount Everest, Arsia Mons is the third tallest volcano on Mars and one of the largest mountains in the solar system. The new analysis of the landforms surrounding Arsia Mons shows that eruptions along the volcano's northwest flank happened at the same time that a glacier covered the region, around 210 million years ago. The heat from those eruptions would have melted massive amounts of ice to form englacial lakes, bodies of water that form within glaciers like liquid bubbles in a half-frozen ice cube. The ice-covered lakes of Arsia Mons would have held hundreds of cubic kilometers of meltwater, according to calculations by Kat Scanlon, a graduate student at Brown who led the work. And where there's water, there's the possibility of a habitable environment. "This is interesting because it's a way to get a lot of liquid water very recently on Mars," Scanlon said. While 210 million years ago might not sound terribly recent, the Arsia Mons site is much younger than the habitable environments turned up by Curiosity and other Mars rovers. Those sites are all likely older than 2.5 billion years.
The fact that the Arsia Mons site is relatively young makes it an interesting target for possible future exploration. “If signs of past life are ever found at those older sites, then Arsia Mons would be the next place I would want to go,” Scanlon said. A paper describing Scanlon’s work is published in the journal Icarus. Via Science Daily
Fretting is a special wear process that occurs at the contact area between two materials under load and subject to slight relative movement by vibration or some other force. This article focuses on measures to avoid or minimize crack initiation and fretting fatigue. It lists the factors that are known to influence the severity of fretting and discusses the variables that contribute to shear stresses. These variables include normal load, relative displacement (slip amplitude), and coefficient of friction. The article describes the general geometries and loading conditions for fretting fatigue. It presents the types of fretting fatigue tests and the effect of variables on fretting fatigue from different research test programs. The article also lists the general principles and practical methods for the abatement or elimination of fretting fatigue.
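In the simplest contact model, the variables listed above combine through Coulomb friction: the shear traction at a fully slipping contact is the coefficient of friction times the contact pressure, and the frictional energy dissipated per cycle grows with slip amplitude. The sketch below uses purely illustrative numbers, not values from the article:

```python
# Simplest estimate of fretting contact quantities via Coulomb friction.
# shear traction:      tau = mu * p          (fully slipping contact)
# energy per cycle:    E  ~= 4 * mu * N * d  (rectangular hysteresis loop,
#                      where d is the slip amplitude; a common idealization)

def shear_traction(mu, contact_pressure):
    """tau = mu * p for a fully slipping Coulomb contact."""
    return mu * contact_pressure

def friction_work_per_cycle(mu, normal_force, slip_amplitude):
    """Frictional energy dissipated per fretting cycle in the simple model."""
    return 4 * mu * normal_force * slip_amplitude

# mu = 0.5, contact pressure 100 MPa:
print(shear_traction(0.5, 100e6))  # 50000000.0 Pa, i.e. 50 MPa
```

This is why all three variables named in the article matter: normal load and friction coefficient set the shear stress level, while slip amplitude sets how much frictional work each vibration cycle does at the contact.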
Module 8—Introduction to Amplifiers Figure 3-6.—Differential amplifier. Even though this circuit is designed to have two inputs and two outputs, it is not necessary to use both inputs and both outputs. (Remember, a differential amplifier was defined as having two possible inputs and two possible outputs.) A differential amplifier can be connected as a single-input, single-output device; a single-input, differential-output device; or a differential-input, differential-output device. Q-1. How many inputs and outputs are possible with a differential amplifier? Q-2. What two transistor amplifier configurations are combined in the single-transistor, two-input, single-output difference amplifier? Q-3. If the two input signals of a difference amplifier are in phase and equal in amplitude, what will the output signal be? Q-4. If the two input signals to a difference amplifier are equal in amplitude and 180 degrees out of phase, what will the output signal be? Q-5. If only one input signal is used with a difference amplifier, what will the output signal be? Q-6. If the two input signals to a difference amplifier are equal in amplitude but neither in phase nor 180 degrees out of phase, what will the output signal be? SINGLE-INPUT, SINGLE-OUTPUT, DIFFERENTIAL AMPLIFIER Figure 3-7 shows a differential amplifier with one input (the base of Q1) and one output (the collector of Q2). The second input (the base of Q2) is grounded and the second output (the collector of Q1) is not used. Figure 3-7.—Single-input, single-output differential amplifier. When the input signal developed by R1 goes positive, the current through Q1 increases. This increased current causes a positive-going signal at the top of R3. This signal is felt on the emitter of Q2.
Since the base of Q2 is grounded, the current through Q2 decreases with a positive-going signal on the emitter. This decreased current causes less voltage drop across R4. Therefore, the voltage at the bottom of R4 increases and a positive-going signal is felt at the output. When the input signal developed by R1 goes negative, the current through Q1 decreases. This decreased current causes a negative-going signal at the top of R3. This signal is felt on the emitter of Q2. When the emitter of Q2 goes negative, the current through Q2 increases. This increased current causes more of a voltage drop across R4. Therefore, the voltage at the bottom of R4 decreases and a negative-going signal is felt at the output. This single-input, single-output, differential amplifier is very similar to a single-transistor amplifier as far as input and output signals are concerned. This use of a differential amplifier does provide amplification of a.c. or d.c. signals but does not take full advantage of the characteristics of a differential amplifier. SINGLE-INPUT, DIFFERENTIAL-OUTPUT, DIFFERENTIAL AMPLIFIER In chapter one of this module you were shown several phase splitters. You should remember that a phase splitter provides two outputs from a single input. These two outputs are 180 degrees out of phase with each other. The single-input, differential-output, differential amplifier will do the same thing. Figure 3-8 shows a differential amplifier with one input (the base of Q1) and two outputs (the collectors of Q1 and Q2). One output is in phase with the input signal, and the other output is 180 degrees out of phase with the input signal. The outputs are taken from the two collectors, as shown in figure 3-8. Figure 3-8.—Single-input, differential-output differential amplifier. This circuit’s operation is the same as for the single-input, single-output differential amplifier just described. However, another output is obtained from the bottom of R2.
As the input signal goes positive, thus causing increased current through Q1, R2 has a greater voltage drop. The output signal at the bottom of R2 therefore is negative going. A negative-going input signal will decrease current and reverse the polarities of both output signals. Now you see how a differential amplifier can produce two amplified, differential output signals from a single input signal. One further point of interest about this configuration is that if a combined output signal is taken between outputs number one and two, this single output will be twice the amplitude of the individual outputs. In other words, you can double the gain of the differential amplifier (single output) by taking the output signal between the two output terminals. This single-output signal will be in phase with the input signal. This is shown by the phantom signal above R5 (the phantom resistor connected between outputs number one and two would be used to develop this signal). DIFFERENTIAL-INPUT, DIFFERENTIAL-OUTPUT, DIFFERENTIAL AMPLIFIER When a differential amplifier is connected with a differential input and a differential output, the full potential of the circuit is used. Figure 3-9 shows a differential amplifier with this type of configuration (differential-input, differential-output). Figure 3-9.—Differential-input, differential-output differential amplifier. Normally, this configuration uses two input signals that are 180 degrees out of phase. This causes the difference (differential) signal to be twice as large as either input alone. (This is just like the two-input, single-output difference amplifier with input signals that are 180 degrees out of phase.) Output number one is a signal that is in phase with input number two, and output number two is a signal that is in phase with input number one. The amplitude of each output signal is the input signal multiplied by the gain of the amplifier.
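The gain-doubling claim is easy to check numerically. This sketch assumes a single-ended gain of 10 and a 10 mV peak-to-peak input (the same values used in the quiz questions below) and computes the peak-to-peak amplitude of the difference between two equal sinusoidal outputs separated by an arbitrary phase:

```python
import math

GAIN = 10.0      # assumed single-ended voltage gain
VIN_PP = 10e-3   # assumed 10 mV peak-to-peak input

def combined_output_pp(phase_deg):
    """Peak-to-peak amplitude of (output1 - output2) for two equal
    sinusoidal outputs separated by phase_deg degrees."""
    # A*sin(wt) - A*sin(wt - phi) is a sinusoid with peak 2*A*sin(phi/2)
    a = GAIN * VIN_PP
    return 2.0 * a * math.sin(math.radians(phase_deg) / 2.0)

# Anti-phase outputs (180 degrees): the combined signal is 0.2 V p-p,
# twice the 0.1 V p-p available at either collector alone.
print(combined_output_pp(180.0))
```

The same identity answers the later question about inputs that are neither in phase nor anti-phase: the combined output is still a sine wave, just scaled by 2·sin(phase/2) instead of 2 (about 1.41 for a 90-degree separation).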
With 180-degree-out-of-phase input signals, each output signal is greater in amplitude than either input signal by a factor of the gain of the amplifier. When an output signal is taken between the two output terminals of the amplifier (as shown by the phantom connections, resistor, and signal), the combined output signal is twice as great in amplitude as either signal at output number one or output number two. (This is because output number one and output number two are 180 degrees out of phase with each other.) When the input signals are 180 degrees out of phase, the amplitude of the combined output signal is equal to the amplitude of one input signal multiplied by two times the gain of the amplifier. When the input signals are not 180 degrees out of phase, the combined output signal taken across output one and output two is similar to the output that you were shown for the two-input, single-output, difference amplifier. The differential amplifier can have two outputs (180 degrees out of phase with each other), or the outputs can be combined as shown in figure 3-9. In answering Q7 through Q9 use the following information: All input signals are sine waves with a peak-to-peak amplitude of 10 millivolts. The gain of the differential amplifier is 10. Q-7. If the differential amplifier is configured with a single input and a single output, what will the peak-to-peak amplitude of the output signal be? Q-8. If the differential amplifier is configured with a single input and differential outputs, what will the output signals be? Q-9. If the single-input, differential-output, differential amplifier has an output signal taken between the two output terminals, what will the peak-to-peak amplitude of this combined output signal be? In answering Q10 through Q14 use the following information: A differential amplifier is configured with a differential input and a differential output. All input signals are sine waves with a peak-to-peak amplitude of 10 millivolts.
The gain of the differential amplifier is 10. Q-10. If the input signals are in phase, what will be the peak-to-peak amplitude of the output signals? Q-11. If the input signals are 180 degrees out of phase with each other, what will be the peak-to-peak amplitude of the output signals? Q-12. If the input signals are 180 degrees out of phase with each other, what will the phase relationship be between (a) the output signals and (b) the input and output signals? Q-13. If the input signals are 180 degrees out of phase with each other and a combined output is taken between the two output terminals, what will the amplitude of the combined output signal be? Q-14. If the input signals are 90 degrees out of phase with each other and a combined output is taken between the two output terminals, (a) what will the peak-to-peak amplitude of the combined output signal be, and (b) will the combined output signal be a sine wave? An OPERATIONAL AMPLIFIER (OP AMP) is an amplifier which is designed to be used with other circuit components to perform either computing functions (addition, subtraction) or some type of transfer operation, such as filtering. Operational amplifiers are usually high-gain amplifiers with the amount of gain determined by feedback. Operational amplifiers have been in use for some time. They were originally developed for analog (non-digital) computers and used to perform mathematical functions. Operational amplifiers were not used in other devices very much because they were expensive and more complicated than other circuits. Today many devices use operational amplifiers. Operational amplifiers are used as d.c. amplifiers, a.c. amplifiers, comparators, oscillators (which are covered in NEETS, Module 9), filter circuits, and many other applications. The reason for this widespread use of the operational amplifier is that it is a very versatile and efficient device.
As an integrated circuit (chip) the operational amplifier has become an inexpensive and readily available "building block" for many devices. In fact, an operational amplifier in integrated circuit form is no more expensive than a good transistor. CHARACTERISTICS OF AN OPERATIONAL AMPLIFIER Schematic symbols for an operational amplifier are shown in figure 3-10. View (A) shows the power supply requirements while view (B) shows only the input and output terminals. An operational amplifier is a special type of high-gain, d.c. amplifier. To be classified as an operational amplifier, the circuit must have certain characteristics. The three most important characteristics of an operational amplifier are: 1. Very high gain 2. Very high input impedance 3. Very low output impedance Figure 3-10A.—Schematic symbols of an operational amplifier. Figure 3-10B.—Schematic symbols of an operational amplifier. Since no single amplifier stage can provide all these characteristics well enough to be considered an operational amplifier, various amplifier stages are connected together. The total circuit made up of these individual stages is called an operational amplifier. This circuit (the operational amplifier) can be made up of individual components (transistors, resistors, capacitors, etc.), but the most common form of the operational amplifier is an integrated circuit. The integrated circuit (chip) will contain the various stages of the operational amplifier and can be treated and used as if it were a single stage. BLOCK DIAGRAM OF AN OPERATIONAL AMPLIFIER Figure 3-11 is a block diagram of an operational amplifier. Notice that there are three stages within the operational amplifier. Figure 3-11.—Block diagram of an operational amplifier. The input stage is a differential amplifier. The differential amplifier used as an input stage provides differential inputs and a frequency response down to d.c. Special techniques are used to provide the high input impedance necessary for the operational amplifier.
The second stage is a high-gain voltage amplifier. This stage may be made from several transistors to provide high gain. A typical operational amplifier could have a voltage gain of 200,000. Most of this gain comes from the voltage amplifier stage. The final stage of the OP AMP is an output amplifier. The output amplifier provides low output impedance. The actual circuit used could be an emitter follower. The output stage should allow the operational amplifier to deliver several milliamperes to a load. Notice that the operational amplifier has a positive power supply (+VCC) and a negative power supply (-VEE). This arrangement enables the operational amplifier to produce either a positive or a negative output. The two input terminals are labeled "inverting input" (-) and "noninverting input" (+). The operational amplifier can be used with three different input conditions (modes). With differential inputs (first mode), both input terminals are used and two input signals which are 180 degrees out of phase with each other are used. This produces an output signal that is in phase with the signal on the noninverting input. If the noninverting input is grounded and a signal is applied to the inverting input (second mode), the output signal will be 180 degrees out of phase with the input signal (and one-half the amplitude of the first mode output). If the inverting input is grounded and a signal is applied to the noninverting input (third mode), the output signal will be in phase with the input signal (and one-half the amplitude of the first mode output). Q-15. What are the three requirements for an operational amplifier? Q-16. What is the most commonly used form of the operational amplifier? Q-17. Draw the schematic symbol for an operational amplifier. Q-18. Label the parts of the operational amplifier shown in figure 3-12. Figure 3-12.—Operational amplifier.
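The three input modes can be illustrated with a toy open-loop model in which the output is simply the gain times the difference between the noninverting and inverting inputs. The gain figure and input amplitudes below are illustrative assumptions:

```python
# Toy open-loop model of the op amp's differential input stage:
# the output is the gain times (v+ minus v-). The 200,000 gain and
# the microvolt-level inputs are illustrative assumptions.

GAIN = 200_000.0

def op_amp(v_plus, v_minus, gain=GAIN):
    """Ideal open-loop output voltage for the two input terminals."""
    return gain * (v_plus - v_minus)

# Mode 1: differential inputs, 180 degrees out of phase (+1 uV and -1 uV).
print(op_amp(+1e-6, -1e-6))   # output follows the + input

# Mode 2: noninverting input grounded, signal on the inverting input.
print(op_amp(0.0, +1e-6))     # inverted, half the mode-1 amplitude

# Mode 3: inverting input grounded, signal on the noninverting input.
print(op_amp(+1e-6, 0.0))     # in phase, half the mode-1 amplitude
```

Note how the second and third modes each produce half the output of the first mode, exactly as the text states: only one input terminal is being driven, so the input difference is half as large.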
CLOSED-LOOP OPERATION OF AN OPERATIONAL AMPLIFIER Operational amplifiers can have either a closed-loop operation or an open-loop operation. The operation (closed-loop or open-loop) is determined by whether or not feedback is used. Without feedback the operational amplifier has an open-loop operation. This open-loop operation is practical only when the operational amplifier is used as a comparator (a circuit which compares two input signals or compares an input signal to some fixed level of voltage). As an amplifier, the open-loop operation is not practical because the very high gain of the operational amplifier creates poor stability. (Noise and other unwanted signals are amplified so much in open-loop operation that the operational amplifier is usually not used in this way.) Therefore, most operational amplifiers are used with feedback (closed-loop operation). Operational amplifiers are used with degenerative (or negative) feedback which reduces the gain of the operational amplifier but greatly increases the stability of the circuit. In the closed-loop configuration, the output signal is applied back to one of the input terminals. This feedback is always degenerative (negative). In other words, the feedback signal always opposes the effects of the original input signal. One result of degenerative feedback is that the inverting and noninverting inputs to the operational amplifier will be kept at the same potential. Closed-loop circuits can be of the inverting configuration or noninverting configuration. Since the inverting configuration is used more often than the noninverting configuration, the inverting configuration will be shown first. Inverting Configuration Figure 3-13 shows an operational amplifier in a closed-loop, inverting configuration. Resistor R2 is used to feed part of the output signal back to the input of the operational amplifier. Figure 3-13.—Inverting configuration. 
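As a preview of where the closed-loop analysis leads: assuming the usual inverting topology (an input resistor R1 from the signal source to the inverting input, with R2 feeding back from the output to that same node), the virtual-ground condition reduces the closed-loop voltage gain to -R2/R1, independent of the op amp's huge open-loop gain. A minimal sketch with assumed resistor values:

```python
# Sketch of the closed-loop gain of the inverting configuration.
# Topology assumed: R1 from source to inverting input, R2 from output
# back to the inverting input. With the inverting input held at virtual
# ground, gain = -R2/R1. Resistor values are illustrative assumptions.

def inverting_gain(r1_ohms, r2_ohms):
    """Closed-loop voltage gain of an ideal inverting op-amp stage."""
    return -r2_ohms / r1_ohms

def output_voltage(vin, r1_ohms, r2_ohms):
    return inverting_gain(r1_ohms, r2_ohms) * vin

# R1 = 1 kOhm, R2 = 10 kOhm gives a gain of -10,
# so a +0.5 V input produces a -5 V output.
print(output_voltage(0.5, 1_000.0, 10_000.0))
```

This is the stability trade described above in action: the circuit throws away most of the open-loop gain in exchange for a gain set entirely by two resistors.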
At this point it is important to keep in mind the difference between the entire circuit (or operational circuit) and the operational amplifier. The operational amplifier is represented by the triangle-like symbol while the operational circuit includes the resistors and any other components as well as the operational amplifier. In other words, the input to the circuit is shown in figure 3-13, but the signal at the inverting input of the operational amplifier is determined by the feedback signal as well as by the circuit input signal. As you can see in figure 3-13, the output signal is 180 degrees out of phase with the input signal. The feedback signal is a portion of the output signal and, therefore, also 180 degrees out of phase with the input signal. Whenever the input signal goes positive, the output signal and the feedback signal go negative. The result of this is that the inverting input to the operational amplifier is always very close to 0 volts with this configuration. In fact, with the noninverting input grounded, the voltage at the inverting input to the operational amplifier is so small compared to other voltages in the circuit that it is considered to be VIRTUAL GROUND. (Remember, in a closed-loop operation the inverting and noninverting inputs are at the same potential.) Virtual ground is a point in a circuit which is at ground potential (0 volts) but is NOT connected to ground. Figure 3-14, views (A), (B), and (C), shows an example of several circuits with points at virtual ground. Figure 3-14A.—Virtual ground circuits. Figure 3-14B.—Virtual ground circuits. Figure 3-14C.—Virtual ground circuits. In view (A), V1 (the left-hand battery) supplies +10 volts to the circuit while V2 (the right-hand battery) supplies -10 volts to the circuit. The total difference in potential in the circuit is 20 volts.
The total resistance of the circuit can be calculated from the resistor values given in the figure. Now that the total resistance is known, the circuit current can be calculated using Ohm's law (I = E/R).
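The resistor values for figure 3-14 live in the figure rather than the text, so the sketch below completes the calculation with assumed values: two equal 10 kOhm resistors in series between the +10 V and -10 V supplies. The midpoint then sits at exactly 0 volts even though it is not wired to ground, which is precisely the virtual-ground idea.

```python
# Virtual-ground calculation for a series circuit between +10 V and
# -10 V supplies. The two resistor values are assumed (R1 = R2 = 10 kOhm),
# since the actual values appear only in figure 3-14.

V1, V2 = +10.0, -10.0            # supply voltages, volts
R1, R2 = 10_000.0, 10_000.0      # assumed series resistors, ohms

total_resistance = R1 + R2                  # 20 kOhm total
current = (V1 - V2) / total_resistance      # 20 V / 20 kOhm = 1 mA

# Voltage at the junction of R1 and R2, walking down from V1:
v_junction = V1 - current * R1              # 10 V - (1 mA)(10 kOhm) = 0 V

print(total_resistance, current, v_junction)
```

Change either resistor and the junction voltage moves away from zero, which is why the virtual-ground point in an inverting amplifier depends on the feedback network holding the balance.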
- The Persian Wars involved a prolonged struggle between Greece and Persia between 499 BC and 449 BC. The wars consisted of two large Persian invasions and a series of legendary battles. - The main reason for the invasions appears to have been concern for the security of the western border of the Persian Empire. Many Greeks lived in these regions, including the Ionian Greeks of Asia Minor. These Ionians rebelled and sparked a series of related military revolts by other Greek regions, provoked by Persian-installed tyrants who denied the Greeks their freedom. - The Ionians were given support from Athens and Eretria, who attacked the Persian stronghold of Sardis but were eventually defeated. The Persians wanted to silence the Greeks once and for all. - The Assyrian army was the first to use iron to help them protect and attack. At the time, bronze was the main material for weapons, but it did not offer the same hard protection and was easier to destroy. Iron was much harder, so they used it for almost everything, including swords and blades. This new material played a key role in the Assyrian victories. - The Assyrians' weapons were also effective in practice. Assyrian archers had a new type of bow that allowed them to fire and hit the enemy at long distances, giving them a big advantage. They also used slings and could hurl stones in a deadly manner, like bullets. - Under King Darius I the Persians, after a Greek refusal to let its enemy take over, decided to invade Greece in 490 BC. At the battle of Marathon, the Persian army consisted of around 90,000 men compared to a Greek force that numbered no more than 20,000 men. - The Persians resorted to their tried and tested method of launching a long-range archer attack on the Greek soldiers. However, the Greeks created a massive wall of bronze with their large round shields.
They stood in a strong line, holding spears and swords which were longer and heavier than those of the Persians, and were well protected as a group and as individuals. - The Greek Athenian army had been put together very quickly, but its general, Miltiades, was a thoughtful and strong leader. Miltiades tricked the Persians by deliberately weakening the centre of his battle line. The Greeks then attacked the startled Persians from the wings and went in behind them, causing chaos among their ranks. The Persians were then easily defeated, losing around 6,400 men compared to the Greek loss of 192 men. The start of another invasion: Thermopylae - After ten years the Persians, under their new King Xerxes, were thirsty for revenge. In 480 BC Xerxes assembled a massive army to lead a further attack against the Greeks. This invasion began on the east coast of Greece at Thermopylae. The Spartan King Leonidas led the small Greek army for several days, but, sensing defeat, he sent most of his army away while he and 300 others stayed and were killed. - There was also a naval battle between the two sides at Artemision in which neither side could claim victory. Although not successful as such, both of these conflicts allowed the Greeks more time to prepare for further battles. - After their victory at Thermopylae, the Persians could now move further into the heart of Greece. The Persians started to make important gains, including Athens, which was overwhelmed. Leonidas’ brother Kleombrotos led his own army and started to build a huge wall to defend the Greeks near Corinth, but it was becoming clear that the next major battle would be at sea. - The Battle of Salamis took place in September 480 BC in the Saronic Gulf. This battle is regarded as one of the most important naval struggles in ancient history. Although historians do not agree on the numbers, we can be sure that the Greeks had a much smaller fleet than the Persians. - The Greeks won through strategy.
The trireme was a very quick and easy-to-control Greek warship with a bronze ram at the front designed to split enemy boats in half. - The Greeks succeeded in luring the Persians into the narrow straits of Salamis, where their ships would be easier to ram and destroy. Unlike the Greeks, the Persians had nowhere to retreat to and were trapped. Their fleet was picked off and many of them drowned in the chaos that followed. - Although they had experienced defeat at Salamis, the Persians still held control over large parts of mainland Greece. The stage was now set for a final land battle at Plataea in August 479 BC. - This time, the Greek army consisted of around 110,000 men from all over the country. Despite a formidable Persian opposition, the Greeks managed to overcome their enemy. - The Persians had been defeated and, while some minor battles followed, the so-called ‘Peace of Callias’ agreement was signed in 449 BC. The Greek victory was a vital moment in ancient history. The Greeks enjoyed a long period of development that would lay the foundations for Western civilisation.
Starting with 2 rectangles of drawing paper and markers, students drew mini-concentric squares. They just finished a large concentric square project last week, so this was not a new concept. One rectangle was folded in half, creating 2 back-to-back squares, and they cut the other rectangle in half to draw on each of the squares. These single squares were eventually folded in half diagonally to create triangles. I asked the children to choose 2 or 3 colors next to each other on the color wheel for each square. We bent and shaped 2 wires from which the folded shapes would hang. Balance and Motion is a science concept often studied in 1st or 2nd grade, so this project ties in nicely with other curriculum. |Students punched a hole in the smaller foam core using a push pin.| |Once the small wire was pushed through the hole in the smaller, top foam core, kids spread the wires apart in back (like a brad fastener) and glued the 2 foam cores together, sandwiching the wire between the pieces of foam core.| The assemblage was attached to a 6" X 6" board using a small, bent wire and 2 small pieces of foam core. Students colored the 2 pieces of foam core, repeating at least one color they had used for their concentric square drawings. They used a push pin to make a hole in the top (smaller) foam core. Then they attached one of their larger wires to the small loop, which they used (much like a brad fastener) to insert through the top piece of foam core. We used Glue All to attach the foam core to the backing board. Here are a few that are drying:
First observed in the 1970s, crop circles became a hot topic, generating much debate over various paranormal and naturalistic causes. In late 1991 Doug Bower and Dave Chorley announced that they had been constructing crop circles since 1970. Circlemakers.org, a UK-based art collective, has created complex crop circles since the early 1990s. Crop circles are constructed using simple tools such as wooden planks, rope, and wire. Using a four-foot-long plank attached to a rope, circle makers can easily create circles eight feet in diameter. After the public admission of the original creators, crop circle activity skyrocketed, with each new design seeking to be more complex than earlier designs. Today crop circle designs have increased in complexity to the point where they have become an art form in and of themselves. In an interview with Mark Pilkington, crop circle maker John Lundberg spoke about this change in crop circle designs: "I am rather envious of circle makers in other countries. Expectations about the size and complexity of formations that appear in the UK are now very high, whereas the rather shabby-looking Russian formation made the national news. Even Vasily Belchenko, deputy secretary of the Russian Security Council, was on site gushing about its origin: 'There is no doubt that it was not man-made... an unknown object definitely landed there.' If the same formation appeared in the UK it would undoubtedly be virtually ignored by researchers and the media alike." Because the majority of circles occur in the Avebury area, close to ancient sites such as earth barrows or mounds, white horses carved into the chalk hills, and stone circles, it has been hypothesised that crop circles are of paranormal origin. Some of the alternative theories that have been bounced around include alien spacecraft landings, UFOs, mini tornadoes, ball lightning, and "plasma vortices".
How the Patterns are Created Crop artists tend to use a map and computers to plot their circles out beforehand, and some use GPS devices to help them create large patterns. Some also use more traditional techniques such as dowsing rods to identify prime locations. Siting a crop circle above underground streams or in magnetic fields is thought to add authenticity to the circle while also helping to baffle those who want to unravel their mystery. Venturing out in broad daylight can ruin the mystery of how a crop circle appeared, so they tend to be created at night, away from the prying eyes of unsuspecting locals. Torches can attract attention, so instead the crop circle creators prefer to work by the light provided by the stars or moon after allowing time for their eyes to become accustomed to the dark. By entering fields on existing tractor tracks and paths, crop circle creators can help to disguise how the crop circle got there. Sticking to hard ground also avoids leaving footprints, while careful movement through the growing plants helps to minimise signs that they were there. Once at the location for the pattern, they try to only walk in areas where the crop will be flattened so their presence can go undetected, helping to add to the mystery. Using tape or string, most crop artists will measure out the design. Some crop circle creators use surveying equipment to help ensure their shapes are perfectly geometric and to keep lines straight. Traditionally, rope attached to the plank is looped over the shoulders and a foot is pressed onto the wood, pressing it forward and down. This folds the stems and bends them in a regular pattern. By advancing in a shuffling gait they can bend all of the plants in the same, regular way.
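The measuring-out step lends itself to a small worked example. The helper below is purely hypothetical (not a tool the article describes) and simply illustrates the geometry: it computes evenly spaced peg positions for laying out a ring of a given radius with tape or string anchored at a center point.

```python
import math

# Hypothetical layout helper: (x, y) positions of pegs evenly spaced
# around a circle of a given radius, as one might stake out a ring
# with string before flattening the crop along it.

def ring_pegs(radius_m, n_pegs, cx=0.0, cy=0.0):
    """Return n_pegs (x, y) coordinates evenly spaced on the circle."""
    pegs = []
    for k in range(n_pegs):
        angle = 2.0 * math.pi * k / n_pegs
        pegs.append((cx + radius_m * math.cos(angle),
                     cy + radius_m * math.sin(angle)))
    return pegs

# Eight pegs on a 10 m circle; the first peg sits due east of center.
pegs = ring_pegs(10.0, 8)
print(len(pegs), [round(c, 2) for c in pegs[0]])
```

The same parametric idea scales to the nested rings and arcs of more elaborate designs, which is why a plan drawn up on a computer translates so directly to string-and-peg work in the field.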
The United Nations has long recognized that climate change is a scientific fact, that it is caused by human activities, and that by working together, the people of the world can effect real change in protecting our planet. In December 2015, 195 nations gathered in Paris for COP 21, the 21st Conference of the Parties to the UN Framework Convention on Climate Change, and now they have made history with the adoption of a binding international climate change agreement. Key Outcomes from the Paris Agreement Perhaps the most striking feature of the agreement is the goal of limiting the global temperature rise to 1.5 degrees Celsius above pre-industrial levels. While scientists have widely agreed that holding warming to 2 degrees is key to preventing major climate disasters, more recent consensus and research indicate that a more aggressive limit of 1.5 degrees may help to prevent severe and long-lasting effects such as the melting of the entire Greenland ice sheet and the inundation of island nations by rising seas. Additional important notes from the agreement: Preservation of forests, including payments for tropical countries if they succeed in reducing or limiting destruction of their forests due to logging or clearance for food production. Developed countries will be required to take the lead in mobilizing climate finance and supporting developing countries' efforts. Mutual trust and confidence will be enhanced by a transparency framework for global accountability. A clear message has been issued that most of the existing reserves of coal, oil, and gas must stay in the ground. Because we know that the fossil fuels that have already been extracted will be burned, offsets must be made by planting new forests and eliminating further deforestation. Countries will be legally required to come back every five years with new reduction targets for emissions that will be evaluated. This keeps a tighter schedule than some countries wanted, but will increase global accountability.
Unfortunately, text regarding indigenous rights was removed from the final versions of the agreement, due to concerns about legal implications if climate change is judged to have violated those rights. As indigenous peoples stand to be among those most significantly affected by the progression of climate change, this is a major disappointment and unacceptable to the indigenous people who attended COP21 and held a strong presence throughout the discussions. As with any agreement among widely diverse parties, there are both strong and weak points to the agreement. But overall, it represents a massive step forward in global progress toward mitigating climate change. The recognition of the 1.5 degrees Celsius limit on global warming, as well as the requirement for 5-year check-ins for all countries, are aggressive parameters. This should give everyone hope for a brighter and more sustainable future for all of us. Photograph of Eiffel Tower by Alberto Otero Garcia
What occurs in both photosynthesis and cellular respiration? Photosynthesis makes the glucose that is used in cellular respiration to make ATP. The glucose is then turned back into carbon dioxide, which is used in photosynthesis. While water is broken down to form oxygen during photosynthesis, in cellular respiration oxygen is combined with hydrogen to form water. In which cell organelles does photosynthesis occur? In plants, photosynthesis takes place in chloroplasts, which contain the chlorophyll. Chloroplasts are surrounded by a double membrane and contain a third inner membrane, called the thylakoid membrane, that forms long folds within the organelle. In which organelle does cellular respiration occur? What organelle is used in cellular respiration if oxygen is present? What is cellular respiration, in simple terms? : any of various energy-yielding oxidative reactions in living matter that typically involve transfer of oxygen and production of carbon dioxide and water as end products. Cellular respiration is a series of reactions, occurring under aerobic conditions, during which large amounts of ATP are produced. What are the inputs of cellular respiration? Unit 5: Photosynthesis & Cell Respiration |What are the inputs of cellular respiration?||Glucose, oxygen| |What are the outputs of cellular respiration?||Carbon dioxide, water, energy (ATP)| |What is the site of cellular respiration?||Mitochondria| What organelle is used in cellular respiration if oxygen is not present? What ingredients are needed for cellular respiration to occur? Oxygen and glucose are both reactants in the process of cellular respiration. The main product of cellular respiration is ATP; waste products include carbon dioxide and water. What cell organelle is similar to the respiratory system? Mitochondria are known as the powerhouses of the cell. They are organelles that act like a digestive system which takes in nutrients, breaks them down, and creates energy-rich molecules for the cell.
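The relationship described above is often summarized with the two overall (net) equations, which mirror each other:

```latex
% Net equations only; each process is really a long series of reactions.
\begin{align*}
\text{Photosynthesis:} \quad
  6\,\mathrm{CO_2} + 6\,\mathrm{H_2O}
  &\xrightarrow{\text{light}} \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}\\
\text{Cellular respiration:} \quad
  \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
  &\longrightarrow 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} + \text{energy (ATP)}
\end{align*}
```

Reading the two lines against each other shows exactly the exchange the text describes: the outputs of one process are the inputs of the other.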
The biochemical processes by which cells break down nutrients to release energy are known as cellular respiration. What are two storage organelles? Two storage organelles are vesicles and vacuoles. What organelle takes in raw materials? The ER helps make proteins (with ribosomes) and also lipids. Chloroplasts turn sunlight, carbon dioxide, and water into food (glucose). What organelle is considered a "factory" because it takes in raw materials and converts them to cell products that can be used by the cell? Which is a list of organelles? Within the cytoplasm, the major organelles and cellular structures include: (1) nucleolus (2) nucleus (3) ribosome (4) vesicle (5) rough endoplasmic reticulum (6) Golgi apparatus (7) cytoskeleton (8) smooth endoplasmic reticulum (9) mitochondria (10) vacuole (11) cytosol (12) lysosome (13) centriole. Which organelle is used for storage? The vacuole is the storage tank of the cell. A chromosome is made of DNA and directs all activities in the cell. The Golgi body sorts and packages things to be delivered, like a mailroom. The lysosome pushes trash vacuoles out of the cell, digests old cell parts, and breaks food down into smaller pieces. What cell structure produces proteins? Which organelle is the jelly-like fluid that fills the cell and surrounds all the organelles? What is any living thing with one or more cells? A living thing, whether made of one cell (like bacteria) or many cells (like a human), is called an organism. Thus, cells are the basic building blocks of all organisms. Do viruses have cells? A virus is a tiny, infectious particle that can reproduce only by infecting a host cell. Viruses do not have cells: they are very small, much smaller than the cells of living things, and are basically just packages of nucleic acid and protein. Is an onion living or non-living? Onions are made of cells, so onions must be a living thing. Are potatoes living or nonliving? Yes, the potato is a living organism; in fact, it is a tuber (an underground stem) from which a new potato plant develops.
After harvesting, a potato is still alive and is in a dormant state. Are tomatoes living or nonliving? Plants are living things, and they need air, nutrients, water, and sunlight. Other living things are animals, and they need food, water, space, and shelter. Plants include dandelions, grass, corn, tomatoes, and much more. Non-living things include things that do not need food, eat, reproduce, or breathe. Is yogurt living or non-living? Yogurt is chock-full of protein, vitamins, and calcium. It's also a superb source of good, helpful bacteria. The good bacteria found in yogurt are known as live cultures. That means they are still alive when you eat them. Do vegetables feel pain? Given that plants do not have pain receptors, nerves, or a brain, they do not feel pain as we members of the animal kingdom understand it. Uprooting a carrot or trimming a hedge is not a form of botanical torture, and you can bite into that apple without worry. Are bananas living or nonliving? Fruits and vegetables, while they are on the plant, grow and hence are living things. But once plucked from the plant or tree, they do not grow and hence become non-living things. Are sand, wood, and glass living or nonliving? Sand, wood, and glass are all non-living things. None of them shows any of the characteristics listed above. Non-living things can be divided into two groups. First come those which were never part of a living thing, such as stone and gold. Is an egg living or nonliving? The egg we get from a grocery shop is not alive, as it is an unfertilised egg. After fertilisation, the egg cell divides, grows, and produces a chick. These are the properties of a living organism, so a fertilised egg can be considered living. If it is fertilised it is living; the one we get from the market is not fertilised. Is a sunflower seed living or nonliving? Seeds are living!
They are just typically in a dormant state, which means they require very little of the resources necessary to stay alive until they are in the appropriate conditions to grow. Inside a seed is an embryo: a baby plant. How do you introduce living and nonliving things?
- Ask the class if they are living or nonliving.
- Ask students if their pets at home are living or nonliving.
- Ask students to identify what they need to survive. Write "food," "water," "shelter," and "air" on the board.
- Explain to students that today they will be learning about living and nonliving things.
KS1 English Curriculum Evening October 2016 These slides explain what the English curriculum in KS1 involves and provide an overview of the curriculum and how it is taught. Tips for Parents Some ideas about how you can help your child at home. A Summary of English in Year 2 Phonics are the building blocks of reading and writing, and a fundamental part of the English curriculum. All children are taught phonics daily: three days a week they are split into differentiated groups, and two days the teaching is whole-class. This ensures some exposure to a breadth of sounds, and some more focused teaching. We follow a programme called 'Letters and Sounds', which breaks the learning down into six phases. Information on each phase can be found below. A link to a video of clear enunciation of sounds is also below: parents can support teachers by watching this to clarify how sounds should be pronounced!
With this great book, students will go online and explore WebQuests on the book, work together in cooperative reading groups, and work in centers to explore the story further. Themes in the story include good against evil and not fitting in. Cooperative Group Activities Working in cooperative groups gets students involved and working with others. Groups should contain approximately four students each. The children should have different academic levels, so that students who are at a higher reading level can help those at a lower reading level understand the story better. Each child is given a role such as leader, secretary, fact finder, or presenter. Students take turns reading the story and actively work together to answer questions related to the story or to summarize each chapter. Students can work in groups to learn about space travel. The teacher should write questions on the board related to space travel. These include:
- How far is the sun from the earth?
- Why does the sky appear to be black when a person is in space?
- Why don't stars appear in photos of astronauts in space?
- What are the reasons that the space shuttle doesn't take trips to the moon?
- Can you list the benefits of space programs as well as what was learned from them?
Have the children work in their groups to find the answers. Tell them they can use encyclopedias, online sources, and scientific magazines to find their answers. When complete, students present their findings. Set Up for Learning in Centers Create learning centers for students to actively explore different themes and elements that make up the story A Wrinkle in Time. For the reading center, have students explore this and other books related to space travel. In the science center, put magazines and books about space out on a table. Have students create a collage. In the computer center, students go online and read a study guide on the book or take an online quiz.
In the writing center, set up a writing prompt for students by asking questions about the characters in A Wrinkle in Time (find more about the characters in this study guide). Students can also act out parts of the story in the play area. WebQuests on the Book There are WebQuests available online to use when planning activities. The A Wrinkle in Time WebQuest gives students a thorough synopsis of the book, its characters, and the themes in the story. Students work together in small groups to complete it. The Literary Explorer's A Wrinkle in Time WebQuest gives a complete summary and thorough analysis of the characters in the story. Included are a synopsis, an author biography, literary concepts, time travel, math and science activities, the Universe, the history of space travel, a "creature feature" writing workshop, "light bearers" historical figures, and a diagram of a hypercube. Students will learn a lot about the story, its themes, and science from visiting this WebQuest. Do you have any fun activities to share on A Wrinkle in Time? Please include information in the comment section.
Area of Study: Fine Arts, Language Arts, Math, Science, Social Studies, Fine Motor Skills. Ages: 3-6 (extensions for older kids). The scarecrow-themed lesson plan is designed for children ages 3-6, but many activities can be adapted for older children (extensions for older children will be in red text). This lesson plan includes: Introductions, Literacy, Math, Science, Crafts, Music/Action Song, and Scarecrow Book Suggestions. Using scarecrows as a fall theme for learning is not only fun for kids, but it is also a wonderful way to incorporate lessons on feelings at home or in the classroom. A scarecrow gets its name from the duty it performs: a scarecrow is designed to SCARE away crows from a field or a garden. To introduce the lesson, ask your children about a time when they were scared. What did being scared feel like? Brainstorm words or phrases with your child that might explain how being scared felt. While children may know about a feeling, translating that knowledge into words is sometimes difficult. Brainstorming allows children to investigate words to help them describe what they feel inside. Materials needed: One copy of the Wikki Stix Scarecrow Matching Cards (separate download here) and scissors. Print the Wikki Stix Scarecrow Matching Cards on heavy paper and laminate for durability. Have the children cut along the dotted lines to make 8 individual cards. Discuss the scarecrow "faces" on the cards with your child. Invite the children to explain how the scarecrow might be feeling simply by looking at one of the cards. The cards have faces for: Happy, Sad, Scared, and Mad. Ask the children about times when they have felt happy, sad, scared, or mad. We can learn a great deal about our children by allowing them time to reflect and discuss. For older children, have the children select a scarecrow card and write about a time when they experienced a feeling that corresponds with the face on the card.
Scarecrow Matching Game: Print several copies of the Scarecrow Matching Cards (separate download here) on heavy paper. Laminate the cards for durability and cut them out. Lay all the cards face down on a table or the floor. Have the children turn the cards face up (two at a time) to see if the scarecrow faces form a matching pair. If they do not, the cards are turned face down and play continues to another player. The matching game is over when all pairs are located. Beginning Letters: Print the Scarecrow Beginning Letters (separate download here) on heavy paper. Laminate the cards for durability and cut them out. As the children become familiar with the scarecrow faces, ask what letter the "feelings" words begin with: Happy, Sad, Scared, and Mad. Have the children make the beginning letter with Wikki Stix and place it beside the corresponding scarecrow card. Older children may wish to create the entire word from Wikki Stix. Wikki Stix provides a tactile layer for learning that will enhance letter and word formations. Materials needed: One copy of the Scarecrow Patterns (separate download here), scissors, and assorted Wikki Stix. Print the patterning pages for each child. Set out an assortment of Wikki Stix for the children to use. The Wikki Stix will need to be cut into smaller pieces to complete the patterns (see photo above). Safety scissors will work to cut Wikki Stix, but younger children may need assistance. Invite the children to look at the scarecrow faces in each of the rows. The children can use Wikki Stix to create the "face" that is needed to finish the pattern in each of the rows (see photo above). The second table in the file is intentionally left blank. Older children can create Wikki Stix scarecrows for their own patterning page (see photo below). Many farmers use metal pie plates or other metal items to decorate a scarecrow. Ask the children if they might know why farmers add pie plates (or other metal items) to the scarecrow.
Items are generally hung from the scarecrow so that when the wind blows, the metal items will clang together and the sound will scare the crows (and birds in general). Ask the children what sounds have scared them in the past. Most young children do not like very loud sounds (such as sirens or loud thunder claps). Sound Experiment. Items needed: craft sticks, Wikki Stix, and a variety of items found around the house (or classroom) that can be attached to Wikki Stix (item suggestions: large paper clips, keys, metal cookie cutters, miniature animals, binder clips, silverware, jingle bells, straws, cotton balls, and craft feathers). Thread or attach similar items to strands of Wikki Stix (see photo above). Wrap the Wikki Stix around a craft stick (if desired, the scarecrow pictures from the patterning page can be cut out and used as toppers for the Wikki Stix Sound Sticks). Invite the children to predict whether the items on the individual sound sticks will make a LOUD or a SOFT sound. Lay the sticks out on a table or the floor so the children can explore the sound sticks one at a time. The children can use markers and Wikki Stix to complete the recording sheet (separate download here). For older children: Have the children close their eyes and listen to the Wikki Stix Sound Sticks (one at a time). See if the children can determine what items are attached to the Wikki Stix by the sound each one makes. Wikki Stix Paper Plate Scarecrow Craft The scarecrow craft can be created and enjoyed by children of all ages! Materials needed: 2 paper plates per craft, scissors, and assorted colors of Wikki Stix. Have the children cut out the inner circle of a paper plate. The remaining edge of the plate should be cut in half (see photo above). The children can cut a large oval or rectangular shape from the second paper plate (this will become the top of the scarecrow's hat).
Set out assorted Wikki Stix and invite the children to completely cover the rippled edge of the paper plate. The Wikki Stix will hold the last two parts of the craft together (no glue is necessary). Attach the Wikki Stix covered paper plate edge around the top and sides of the inner paper plate circle. The upper part of the scarecrow's hat can be attached behind the paper plate edge (see photo). The children can add facial features, yellow Wikki Stix "straw", or other decorations as desired. The scarecrows make adorable crafts to display for fall celebrations or Thanksgiving! Scarecrow Craft Extension Activity: Attach the paper plate scarecrow craft to a poster board and remove any facial features. Have the children cut pieces of Wikki Stix that can be used to create a variety of faces for the scarecrow. Observe the children and the faces they choose to create for the scarecrow. Invite the children to share verbally; we can garner valuable information from our children through discussions as they design and create! SCARECROW ACTION SONG (to the tune of "I'm a little teapot…") I'm a little scarecrow all stuffed with hay. (children stand and rub their tummy as if stuffing hay) Here I stand in the field all day! (children stand and sway back and forth with arms out to the side) When the crows come out you'll hear me shout, (children cup hands near mouth) "HEY YOU CROWS, YOU'D BETTER GET OUT!" (children point as if talking to the crows) ~Original Author Unknown Suggested Books to Accompany the Wikki Stix Scarecrow Lesson Plan The Little Scarecrow Boy by Margaret Wise Brown The Little Old Lady Who Was Not Afraid of Anything by Linda D. Williams The Scarecrow's Dance by Jane Yolen For more Fall and Harvest Craft Ideas, visit the Wikki Stix Blog!
Tokyo, December 20: Japanese scientists have detected evidence of water in 17 asteroids for the first time using data from the infrared satellite AKARI. This discovery will contribute to our understanding of the distribution of water in our solar system, the evolution of asteroids, and the origin of water on Earth. Researchers from the Japan Aerospace Exploration Agency (JAXA) and the University of Tokyo found that water is retained in asteroids as hydrated minerals, which were produced by chemical reactions of water and anhydrous rocks that occurred inside the asteroids. Our Earth is an aqua-planet and is the only planet in our solar system where the presence of water on the planet's surface has been confirmed. However, scientists are not yet sure how our Earth acquired its water. Recent studies have shown that other celestial bodies in our solar system have, or used to have, water in some form. Asteroids are considered to be one of the candidates that brought water to Earth. Hydrated minerals are stable even above the sublimation temperature of water ice. The Japanese infrared satellite AKARI, which was launched in February 2006 and ended operations in 2011, was equipped with the Infrared Camera (IRC), which allowed researchers to obtain spectra at near-infrared wavelengths from two to five micrometers. Infrared wavelengths contain characteristic spectral features of various substances, such as molecules, ice, and minerals, which cannot be observed at visible wavelengths. Observing at infrared wavelengths is therefore indispensable for the study of solar system objects. The observations detected absorption features attributed to hydrated minerals in 17 C-type asteroids. C-type asteroids, which appear dark at visible wavelengths, were believed to be rich in water and organic material, but the present observations with AKARI are the first to directly confirm the presence of hydrated minerals in these asteroids.
Many C-type asteroids display this trend, suggesting that C-type asteroids were formed by the agglomeration of rocks and water ice, that aqueous alteration then occurred in their interiors to form hydrated minerals, and that finally the C-type asteroids were heated and dehydrated.
Axial piston pump An axial piston pump is a positive displacement pump that has a number of pistons in a circular array within a cylinder block. It can be used as a stand-alone pump, a hydraulic motor, or an automotive air conditioning compressor. An axial piston pump has a number of pistons (usually an odd number) arranged in a circular array within a housing which is commonly referred to as a cylinder block, rotor, or barrel. This cylinder block is driven to rotate about its axis of symmetry by an integral shaft that is, more or less, aligned with the pumping pistons (usually parallel but not necessarily).
- Mating surfaces. One end of the cylinder block is convex and wears against a mating surface on a stationary valve plate. The inlet and outlet fluid of the pump pass through different parts of the sliding interface between the cylinder block and valve plate. The valve plate has two semi-circular ports that allow inlet of the operating fluid and exhaust of the outlet fluid respectively.
- Protruding pistons. The pumping pistons protrude from the opposite end of the cylinder block. There are numerous configurations used for the exposed ends of the pistons, but in all cases they bear against a cam. In variable displacement units, the cam is movable and commonly referred to as a swashplate, yoke, or hanger. For conceptual purposes, the cam can be represented by a plane, the orientation of which, in combination with shaft rotation, provides the cam action that leads to piston reciprocation and thus pumping. The angle between a vector normal to the cam plane and the cylinder block's axis of rotation, called the cam angle, is one variable that determines the displacement of the pump, or the amount of fluid pumped per shaft revolution.
Variable displacement units have the ability to vary the cam angle during operation, whereas fixed displacement units do not.
- Reciprocating pistons. As the cylinder block rotates, the exposed ends of the pistons are constrained to follow the surface of the cam plane. Since the cam plane is at an angle to the axis of rotation, the pistons must reciprocate axially as they precess about the cylinder block axis. The axial motion of the pistons is sinusoidal. During the rising portion of the piston's reciprocation cycle, the piston moves toward the valve plate. During this time, the fluid trapped between the buried end of the piston and the valve plate is vented to the pump's discharge port through one of the valve plate's semi-circular ports (the discharge port). As the piston moves toward the valve plate, fluid is pushed or displaced through the discharge port of the valve plate.
- Effect of precession. When the piston is at the top of the reciprocation cycle (commonly referred to as top-dead-center or just TDC), the connection between the trapped fluid chamber and the pump's discharge port is closed. Shortly thereafter, that same chamber becomes open to the pump's inlet port. As the piston continues to precess about the cylinder block axis, it moves away from the valve plate, thereby increasing the volume of the trapped chamber. As this occurs, fluid enters the chamber from the pump's inlet to fill the void. This process continues until the piston reaches the bottom of the reciprocation cycle, commonly referred to as bottom-dead-center or BDC. At BDC, the connection between the pumping chamber and inlet port is closed. Shortly thereafter, the chamber becomes open to the discharge port again and the pumping cycle starts over.
- Variable displacement. In a variable displacement pump, if the vector normal to the cam plane (swash plate) is set parallel to the axis of rotation, there is no movement of the pistons in their cylinders. Thus there is no output.
Movement of the swash plate controls pump output from zero to maximum. There are two kinds of variable-displacement axial piston pumps:
- Direct displacement control pump: a kind of axial piston pump with a direct displacement control. A direct displacement control uses a mechanical lever attached to the swashplate of the axial piston pump. Higher system pressures require more force to move that lever, making direct displacement control suitable only for light- or medium-duty pumps. Heavy-duty pumps require servo control. A direct displacement control pump contains linkages and springs, and in some cases magnets, rather than a shaft to a motor located outside the pump (thereby reducing the number of moving parts), keeping parts protected and lubricated and reducing the resistance against the flow of liquid.
- Servo control pump.
- Pressure. In a typical pressure-compensated pump, the swash plate angle is adjusted through the action of a valve which uses pressure feedback, so that the instantaneous pump output flow is exactly enough to maintain a designated pressure. If the load flow increases, pressure will momentarily decrease, but the pressure-compensation valve will sense the decrease and then increase the swash plate angle to increase pump output flow so that the desired pressure is restored. In practice, most systems use pressure as a control for this type of pump. The operating pressure reaches, say, 200 bar (20 MPa or 2900 psi) and the swash plate is driven towards zero angle (piston stroke nearly zero); with the inherent leaks in the system, the pump stabilises at the delivery volume that maintains the set pressure. As demand increases the swash plate is moved to a greater angle, piston stroke increases, and the volume of fluid delivered increases; if demand slackens the pressure will rise, and the pumped volume diminishes as the pressure rises. At maximum system pressure the output is once again almost zero.
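The cam-angle geometry described above fixes the pump's theoretical displacement: each piston's stroke equals the pitch-circle diameter times the tangent of the cam angle, and total displacement is stroke times piston area times piston count. A minimal sketch in Python; the piston count, bore, and pitch radius here are illustrative values, not taken from any particular pump:

```python
import math

def displacement_per_rev(n_pistons, bore_m, pitch_radius_m, cam_angle_deg):
    """Theoretical displacement (m^3 per shaft revolution).

    stroke = pitch-circle diameter x tan(cam angle);
    displacement = n_pistons x piston area x stroke.
    """
    stroke = 2 * pitch_radius_m * math.tan(math.radians(cam_angle_deg))
    area = math.pi * (bore_m / 2) ** 2
    return n_pistons * area * stroke

# Illustrative 9-piston pump: 13 mm bore, 40 mm pitch radius, 15 degree cam angle.
# At zero cam angle the stroke, and therefore the displacement, is zero.
d = displacement_per_rev(9, 0.013, 0.040, 15.0)
print(f"{d * 1e6:.1f} cc/rev")
```

This also makes the "zero angle, zero output" statement above concrete: with the swash plate normal parallel to the shaft, the tangent term vanishes and no fluid is displaced.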
If the fluid demand increases beyond the capacity of the pump to deliver, the system pressure will drop to near zero. The swash plate angle will remain at the maximum allowed, and the pistons will operate at full stroke. This continues until system flow demand eases and the pump's capacity is greater than demand. As the pressure rises, the swash-plate angle modulates so as not to exceed the maximum pressure while meeting the flow demand. Designers have a number of problems to overcome in designing axial piston pumps. One is managing to manufacture a pump with the fine tolerances necessary for efficient operation. The mating faces between the rotary piston-cylinder assembly and the stationary pump body have to form an almost perfect seal while the rotary part turns at perhaps 3000 rpm. The pistons are usually less than half an inch (13 mm) in diameter, with similar stroke lengths. Keeping the wall-to-piston seal tight means that very small clearances are involved and that materials have to be closely matched for a similar coefficient of expansion. The pistons have to be drawn outwards in their cylinders by some means. On small pumps this can be done by means of a spring inside the cylinder that forces the piston up the cylinder. Inlet fluid pressure can also be arranged so that the fluid pushes the pistons up the cylinder. Often a vane pump is located on the same drive shaft to provide this pressure, and it also allows the pump assembly to draw fluid against some suction head from the reservoir, which is not an attribute of the unaided axial piston pump. Another method of drawing pistons up the cylinder is to attach the cylinder heads to the surface of the swash plate. In that way the piston stroke is totally mechanical. However, the designer's problem of lubricating the swash plate face (a sliding contact) is made even more difficult. Internal lubrication of the pump is achieved by use of the operating fluid, normally called hydraulic fluid.
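The pressure-compensation behaviour described above can be sketched as a toy feedback loop: a proportional compensator trims the swash-plate angle in response to pressure error, so delivered flow settles at the load demand with the pressure resting slightly below the setpoint (a droop). All constants here are illustrative, not taken from any real pump datasheet:

```python
def simulate(load_flow, p_set=200.0, gain=0.5, steps=5000, dt=1e-3):
    """Toy pressure-compensated pump model.

    Pressure integrates the net flow (pump output minus load demand);
    the compensator sets the swash angle in proportion to the pressure
    error, clamped to the mechanical range 0..15 degrees.
    """
    pressure = 0.0        # system pressure, bar
    capacitance = 0.01    # fluid capacitance: tunes how fast pressure builds
    angle = 15.0
    for _ in range(steps):
        # Compensator: larger pressure error -> larger swash angle.
        angle = min(15.0, max(0.0, gain * (p_set - pressure)))
        pump_flow = angle / 15.0                    # normalized: 1.0 at full stroke
        pressure += (pump_flow - load_flow) * dt / capacitance
    return pressure, angle

# Half-capacity load: the swash plate settles near half stroke and the
# pressure settles just below the 200 bar setpoint.
p, a = simulate(load_flow=0.5)
print(f"pressure ~ {p:.1f} bar at swash angle ~ {a:.2f} degrees")
```

The startup transient matches the prose: while demand exceeds what the pump can deliver at the set pressure, the swash plate sits pinned at full stroke; only as pressure approaches the setpoint does the angle modulate back.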
Most hydraulic systems have a maximum operating temperature, limited by the fluid, of about 120 °C (250 °F), so using that fluid as a lubricant brings its own problems. In this type of pump, the leakage from the face between the cylinder housing and the body block is used to cool and lubricate the exterior of the rotating parts. The leakage is then carried off to the reservoir or to the inlet side of the pump again. Hydraulic fluid that has been used is always cooled and passed through micrometre-sized filters before recirculating through the pump. Despite the problems indicated above, this type of pump can contain most of the necessary circuit controls integrally (the swash-plate angle control) to regulate flow and pressure, be very reliable, and allow the rest of the hydraulic system to be very simple and inexpensive. Axial piston pumps are used to power the hydraulic systems of jet aircraft, being gear-driven off the turbine engine's main shaft. The system used on the F-14 used a 9-piston pump that produced a standard system operating pressure of 3000 psi and a maximum flow of 84 gallons per minute. Automotive air conditioning compressors for cabin cooling are nowadays mostly based on the axial piston pump design (others are based on the scroll compressor or rotary vane pump instead) in order to contain their weight and space requirements in the vehicle's engine bay and to reduce vibrations. They are available in fixed displacement and dynamically adjusted variable displacement variants. Axial reciprocating motors are also used to power many machines. They operate on the same principle as described above, except that the circulating fluid is provided under considerable pressure and the piston housing is made to rotate and provide shaft power to another machine. A common use of an axial reciprocating motor is to power small earthmoving plant such as skid loader machines. Another use is to drive the screws of torpedoes.
PART ONE: SLAVERY IN ANTEBELLUM AMERICA A: SLAVERY IN ANTEBELLUM AMERICA 1818: The year of the birth of Frederick Douglass, slavery was already an old institution in America. Two centuries had passed since the first 20 Africans landed in Virginia from a Dutch ship. After the abolition of slavery in the North, slavery had become the "peculiar institution" of the South, that is, an institution unique to Southern society. SLAVERY IN ANTEBELLUM AMERICA Despite the hopes of some of the Founding Fathers that slavery might die out, in fact the institution survived the crisis of the American Revolution and rapidly expanded westward. On the eve of the Civil War, the slave population had risen to 4 million, its rate of natural increase more than making up for the prohibition in 1808 of further slave imports from Africa. SLAVERY IN ANTEBELLUM AMERICA In the South as a whole, slaves made up 1/3 of the total population, and in the cotton-producing states of the Deep South about 1/2. 1850: Slavery had crossed the Mississippi River and was expanding rapidly in Arkansas, Louisiana, and eastern Texas. 1860: 1/3 of the nation's cotton crop was grown west of the Mississippi River. "COTTON IS KING" The Old South was the largest and most powerful slave society the modern world has known. Its strength rested on a virtual monopoly of cotton, the South's "white gold." By the 19th century, cotton had assumed an unprecedented role in the world economy. "COTTON IS KING" About 3/4 of the world's cotton supply came from the Southern USA. 1830: Cotton had become the most important American export. On the eve of the Civil War, it represented well over 1/2 the total of American exports.
1860: The economic investment represented by the slave population exceeded the value of the nation's factories, railroads, and banks combined. B: SLAVERY AND THE NATION 1816: Henry Clay stated, "Slavery forms an exception … to the general liberty prevailing in the United States." But Clay, like many of his contemporaries, underestimated slavery's impact on the entire nation. SLAVERY AND THE NATION The "free states" had ended slavery, but they were hardly unaffected by it. The Constitution enhanced the power of the South in the House of Representatives and the Electoral College and required all states to return fugitive slaves to bondage (3/5 Compromise / Fugitive Slave Clause). SLAVERY AND THE NATION Slavery shaped the lives of all Americans, white as well as black. It helped determine where they lived, how they worked, and under what conditions they could exercise their freedoms of speech, assembly, and press. SLAVERY AND THE NATION Northern merchants and manufacturers participated in the slave economy and shared in its profits. Money earned in the cotton/slave trade helped finance industrial development in the North. Northern ships carried cotton to New York and Europe, northern bankers financed cotton plantations, northern companies insured slave property, and northern factories turned cotton into cloth. Northern manufacturers supplied cheap fabrics ("Negro cloth") to clothe the South's slaves. SLAVERY AND THE NATION Slavery led the South down a very different path of economic development than the North, limiting the growth of industry, discouraging immigrants from entering the region, and inhibiting technological progress. Southern banks existed primarily to help finance the plantations. THE OLD SOUTH: SOME GENERALIZATIONS The further north, the cooler the climate, the fewer the slaves, and the lower the commitment to maintaining slavery. The further south, the warmer the climate, the more slaves, and the higher the commitment to maintaining slavery.
D: REGIONS OF THE SOUTH

THE MIDDLE SOUTH
There were many plantations in eastern VA and western TN. 1850: Slaves accounted for 30% of the population of the Middle South. There was an average of 8 slaves per slaveholder. 36% of white families owned slaves.

THE LOWER SOUTH
Secessionists would prevail here after Lincoln's election in 1860. 1850: Slaves accounted for 47% of the Lower South's population. There was an average of 12 slaves per slaveholder. 43% of white families owned slaves.

THE LOWER SOUTH
Less than 2% of the Lower South's blacks were free. The Lower South was the area where the brutality of slavery was harshest.

THE PLANTER ARISTOCRACY
The South was ruled politically and economically by wealthy plantation owners. 1850: Only 1,733 families owned more than 100 slaves, yet they dominated Southern politics. The South was the least democratic region of the country.

THE PLANTER ARISTOCRACY
There was a huge gap between rich and poor. The South had a very poor public education system; thus planters sent their children to private schools. Planters carried on the "cavalier" tradition of early VA. Planters: a landed gentry class.

THE SOUTHERN WHITE MAJORITY
75% of white Southerners owned no slaves. Most were subsistence farmers who did not participate in the market economy. The poorest were called "white trash," "hillbillies," or "crackers." They fiercely defended the slave system, as it seemed to prove white superiority.

THE SOUTHERN WHITE MAJORITY
Poor whites took comfort that they were "equal" to the planter class. They hoped someday to own slaves. Slavery proved effective in controlling blacks, and ending slavery might result in race mixing and in blacks competing with whites for jobs.

F: FREE BLACKS OF THE SOUTH
By 1860: Numbered about 250,000. In the Border South, emancipation increased starting in the late 18th century. In the Lower South, many free blacks were mulattos – white father and black mother. This was evidence of the sexual intimidation and abuse by male slaveholders.
FREE BLACKS OF THE SOUTH
Some were able to buy their freedom with earnings from their labor after hours (Task System). Some owned property. A few even owned slaves, though this was very rare.

FREE BLACKS OF THE SOUTH
They faced discrimination in the South. They were prohibited from certain occupations and from testifying against whites in court. They had no political rights. They were always in danger of being forced back into slavery by slave traders.

G: FREE BLACKS OF THE NORTH
Free blacks numbered about 250,000. Some states forbade their entrance or denied them public education. Most states denied them suffrage.

THE PRO-SLAVERY IDEOLOGY
Even those who had no direct stake in slavery shared with planters a deep commitment to white supremacy. Indeed, racism – the belief that blacks were innately inferior to whites and unsuited for life in any condition other than slavery – formed one pillar of the pro-slavery ideology.

THE PRO-SLAVERY IDEOLOGY
Most slaveholders also found legitimation for slavery in Biblical passages such as the injunction that servants should obey their masters. Others argued that slavery was essential to human progress: without slavery, planters would be unable to cultivate the arts, sciences, and other civilized pursuits.

THE PRO-SLAVERY IDEOLOGY
Still other defenders of slavery insisted that the institution guaranteed equality for whites by preventing the growth of a class doomed to the life of unskilled labor. They claimed to be committed to the ideal of freedom.

THE PRO-SLAVERY IDEOLOGY
Slavery for blacks, they claimed, was the surest guarantee of "perfect equality" among whites, liberating them from the "low, menial" jobs, like factory labor and domestic service, performed by wage laborers of the North. Slavery made possible the considerable degree of economic autonomy enjoyed not only by planters but by non-slaveholding whites.
I: LIFE UNDER SLAVERY

SLAVES AND THE LAW
For slaves, the "peculiar institution" meant a life of incessant toil, brutal punishment, and the constant fear that families would be destroyed by sale (slavery's greatest psychological horror). Before the law, slaves were property.

SLAVES AND THE LAW
Although they had a few legal rights (all states made it illegal to kill a slave except in self-defense, and slaves accused of serious crimes were entitled to their day in court before all-white juries), these were haphazardly enforced. Slaves could be sold or leased by their owners at will and lacked any voice in the governments that ruled them.

SLAVES AND THE LAW
By 1830, it was a crime to teach a slave to read or write. Not all these laws were rigorously enforced. Some members of slaveholding families taught children to read and write – although rather few, since well over 90% of the slave population was illiterate in 1860.

SLAVE LABOR
Slavery was a system of labor, "from sunup to first dark"; with only brief interruptions for meals, work occupied most of the slaves' time. The large majority of slaves – 75% of women and nearly 90% of men – worked in the fields. Large plantations were diversified communities where slaves performed all kinds of work.

SLAVE LABOR
The precise organization of their labor varied according to the crop and the size of the holding. On small farms, the owner often toiled side-by-side with his slaves.

SLAVE LABOR
The largest concentration of slaves, however, lived and worked on the plantations of the Cotton Belt, where men, women, and children labored in gangs, often under the direction of an overseer and perhaps a slave "driver" who assisted him.

SLAVE LABOR
Among slaves, overseers had a reputation for meting out harsh treatment. Solomon Northup, a free black who was kidnapped from the North and spent twelve years in slavery, recalled: "The requisite qualifications for an overseer are utter heartlessness, brutality, and cruelty.
It is his business to produce large crops, no matter [what the] cost."

J: MAINTAINING ORDER

MAINTAINING ORDER
Slave owners employed a variety of means in their attempt to maintain order and discipline among their human property and persuade them to labor productively. Their system rested on force. Masters had almost complete discretion in inflicting punishment, and rare was the slave who went through his or her life without experiencing a whipping. Any infraction of plantation rules, no matter how minor, could be punished by the lash.

MAINTAINING ORDER
Subtle means of control supplemented violence. Owners encouraged and exploited divisions among slaves, especially between field hands and house servants. They created a system of incentives that rewarded good work with time off or even payments – in Virginia, one slaveholder paid 10 cents a day for good work.

MAINTAINING ORDER
The slave owed the master complete respect and absolute obedience. No aspect of slaves' lives, from the choice of marriage partners to how they spent their free time, was immune from the master's interference. The entire system of southern justice was designed to enforce the master's control over the person and labor of his slaves.

THE "CRIME" OF CELIA
Celia was a slave who killed her master while resisting a sexual assault. Missouri state law deemed "any woman" in such circumstances to be acting in self-defense. But the court ruled that Celia was not a woman.

THE "CRIME" OF CELIA
She was a slave, whose master had complete power over her person. The court sentenced her to death. However, since Celia was pregnant, her execution was postponed until her child had been born, so as not to deprive her owner's heirs of their property rights.

MAINTAINING ORDER
As the 19th century progressed, some southern states enacted laws to prevent the mistreatment of slaves, and their material living conditions improved.
With the price of slaves rising, it made economic sense for owners to become concerned with the health and living conditions of their human property.

MAINTAINING ORDER
Improvements in the slaves' living conditions were meant to strengthen slavery, not undermine it. Even as the material conditions and health of slaves improved, the South drew tighter and tighter the chains of bondage. More and more states set limits on voluntary manumission, requiring that such acts be approved by the legislature.

MAINTAINING ORDER
Few slave societies in history have so systematically closed all avenues to freedom as the Old South.

SLAVE CULTURE
Slaves never abandoned their desire for freedom or their determination to resist total white control of their lives. In the face of grim realities, they succeeded in forging a semi-independent culture, centered on family and church. This enabled them to survive the experience of bondage without surrendering their self-esteem and to pass from generation to generation a set of ideals and values fundamentally at odds with those of their masters.

SLAVE CULTURE
Slave culture drew on the African heritage. African influences were evident in the slaves' music and dances, styles of religious worship, and the use of herbs by slave healers to combat disease. Since most slaves in the USA were American-born and lived amidst a white majority, slave culture was a new creation, shaped by African traditions and American values and experiences.

THE SLAVE FAMILY
At the center of the slave community stood the family. In the USA, where the slave population grew from natural increase rather than continued importation from Africa, slaves had an even male-female ratio, making the creation of families more possible.

THE SLAVE FAMILY
The law did not recognize the legality of slave marriages. The master had to consent before a man and woman could "jump over the broomstick" (the slaves' wedding ceremony), and families stood in constant danger of being broken up by sale.
Nonetheless, most adult slaves married, and their unions, when not disrupted by sale, typically lasted a lifetime.

THE SLAVE FAMILY
Most slaves lived in two-parent families. But because of constant sales, the slave community had a significantly higher number of female-headed families than among whites, as well as families in which grandparents, other relatives, or even non-kin assumed responsibility for raising children.

THE SLAVE FAMILY
As the domestic slave trade expanded with the rise of the Cotton Kingdom, about one marriage in three in slave-selling states like VA was broken by sale. Fear of sale permeated slave life, especially in the Upper South.

THE SLAVE FAMILY
As a reflection of their paternalistic responsibilities, some owners encouraged slaves to marry. Others, however, remained unaware of their slaves' family connections, and their interest in slave children was generally limited to work in the fields.

SLAVE RELIGION
A distinctive version of Christianity also offered solace to slaves in the face of hardship and hope for liberation from bondage. Some blacks, free and slave, had taken part in the First and Second Great Awakenings.

SLAVE RELIGION
Even though the law prohibited slaves from gathering without a white person present, every plantation, it seemed, had its own black preacher. Usually the preacher was a "self-called" slave who possessed little or no formal education but whose rhetorical abilities and familiarity with the Bible made him one of the most respected members of the slave community.

SLAVE RELIGION
In Southern cities, slaves worshipped in biracial congregations with white ministers, where they were generally required to sit in the back pews or in the balcony. Urban free blacks established their own churches, sometimes attended by slaves.

SLAVE RELIGION
To masters, Christianity offered another means of social control.
Many required slaves to attend services conducted by white ministers, who preached that theft was immoral and that the Bible required servants to obey their masters.

SLAVE RELIGION
One slave later recalled being told in a white minister's sermon "how good God was in bringing us over to this country from dark and benighted Africa, and permitting us to listen to the sound of the gospel."

SLAVE RELIGION
But the slaves transformed the Christianity they had embraced, turning it to their own purposes. A blend of African traditions and Christian belief, slave religion was practiced in secret nighttime gatherings on plantations and in "praise meetings" replete with shouts, dances, and frequent emotional interchanges between the preacher and the congregation.

SLAVE RELIGION
The Biblical story of the Exodus played a central role in black Christianity. Slaves identified themselves as a chosen people whom God, in the fullness of time, would deliver from bondage.

SLAVE RELIGION
At the same time, the figure of Jesus Christ represented to slaves a personal redeemer, one who truly cared for the oppressed. The Christian message of brotherhood and the equality of all souls, in the slaves' eyes, offered an irrefutable indictment of the institution of slavery.

L: RESISTANCE TO SLAVERY

RESISTANCE TO SLAVERY
With the entire power structure of government, federal, state, and local, committed to preserving the institution of slavery, slaves could only rarely express their desire for freedom by outright rebellion. Compared to Brazil and the West Indies, which experienced numerous uprisings involving hundreds or even thousands of slaves, revolts in the USA were smaller and less frequent.

RESISTANCE TO SLAVERY
This does not mean that slaves in the USA placidly accepted the system under which they were compelled to live. Resistance to slavery took many forms in the Old South, from individual acts of defiance to occasional uprisings.
These actions posed a constant challenge to the slaveholders' self-image as benign paternalists and to their belief that slaves were obedient subjects grateful for their owners' care.

FORMS OF RESISTANCE
The most widespread expression of hostility to slavery was "day-to-day resistance" or "silent sabotage" – doing poor work, breaking tools, abusing animals, and in other ways disrupting the plantation routine. Many slaves made believe that they were too ill to work – although almost no slaves reported themselves sick on Sunday, their only day of rest.

FORMS OF RESISTANCE
Then there was the theft of food, a form of resistance so common that one southern physician diagnosed it as a hereditary disease unique to blacks. Less frequent, but more dangerous, were serious crimes committed by slaves, including arson, poisoning, and armed assaults against individual whites.

FUGITIVE SLAVES
Even more threatening to the stability of the slave system was running away. Formidable obstacles confronted the prospective fugitive.

FUGITIVE SLAVES
Solomon Northup recalled: "Every white man's hand is raised against him, the patrollers are watching for him, the hounds are ready to follow in his track." Slaves had little or no knowledge of geography, apart from understanding that the North Star led to freedom.

FUGITIVE SLAVES
No one knows how many slaves succeeded in reaching the North or Canada – the most common rough estimate is around 1,000 per year.

FUGITIVE SLAVES
Not surprisingly, most of those who succeeded lived, like Frederick Douglass, in the Upper South, especially MD, VA, and KY, which bordered on the free states.

FUGITIVE SLAVES
The large majority of runaways were young men. Most women were not willing to leave children behind, and to take them along on the arduous escape journey was nearly impossible.

FUGITIVE SLAVES
In the Deep South, fugitives tended to head for cities like New Orleans or Charleston, where they hoped to lose themselves in the free black community.
Other escapees fled to remote areas like the Great Dismal Swamp of VA or the Florida Everglades, where the Seminole Indians offered refuge before they were forced to move west.

FUGITIVE SLAVES
In TN, a study of newspaper ads for runaways finds that 40% remained in the local neighborhood, 30% headed to other locations in the South, and only 25% tried to reach the North.

FUGITIVE SLAVE ADS

THE UNDERGROUND RAILROAD
The Underground Railroad, a loose organization of sympathetic abolitionists who hid fugitives in their homes and sent them on to the next "station," assisted some runaway slaves.

THE UNDERGROUND RAILROAD
A few courageous individuals made forays into the South to liberate slaves. The best known was Harriet Tubman. Born in Maryland in 1820, she escaped to PA in 1849.

THE UNDERGROUND RAILROAD
During the next decade of her life, she risked her life by making some 20 trips back to her state of birth to lead relatives and other slaves to freedom.

THE UNDERGROUND RAILROAD
But most who managed to reach the North did so on their own initiative, some showing remarkable ingenuity. William and Ellen Craft impersonated a sickly owner traveling with her slave.

THE UNDERGROUND RAILROAD
Henry "Box" Brown packed himself inside a crate and literally had himself shipped from Virginia to freedom in the North.

THE AMISTAD
In a few instances, large groups of slaves collectively seized their freedom. The most celebrated instance involved 53 slaves who took control of the Amistad – a ship transporting them from one port in Cuba to another – and tried to force the navigator to steer it to Africa.

THE AMISTAD
The Amistad wended its way up the Atlantic coast until an American vessel seized it off the coast of Long Island. The slaves were placed in jail. Their fate rested with the US court system.

THE AMISTAD
The slaves were led by Cinque, of the Mende tribe. The central issue was: were the captives free men or slaves?
If it was determined they were slaves, they would be returned to Cuba/Spain. If not, they would be free. President Martin Van Buren favored returning them to Cuba.

THE AMISTAD
But abolitionists, such as Lewis Tappan, brought the case to the Supreme Court, where former President, now Congressman, John Quincy Adams argued on behalf of the slaves. Adams argued that since the slaves had been recently brought from Africa in violation of international treaties banning the slave trade, the captives should be freed. The Court accepted Adams' reasoning, and most of the captives made their way back to Africa.

N: THE YEAR 1831 AND SLAVERY
1831: A turning point for the Old South. The English Parliament, led by William Wilberforce, launched a program for abolishing slavery throughout the British Empire (completed in 1838). This underscored the South's growing isolation in the Western world.

THE YEAR 1831 AND SLAVERY
In 1831, William Lloyd Garrison, a Boston abolitionist, published the first issue of The Liberator. From 1831 to the outbreak of war, the nation would be confronted with a vigorous movement to abolish slavery.

PART TWO: THE ABOLITIONIST MOVEMENT

A: THE ABOLITIONIST MOVEMENT
The Abolitionist Movement began in the North. The goal was to end slavery. Some abolitionists called for an immediate end to slavery. Others called for a gradual end and the colonization of freed slaves outside of America.

THE ABOLITIONIST MOVEMENT
The Movement was influenced by the reform fervor of the Second Great Awakening. Yet at first the greatest evil in American society attracted the least attention from reformers. For many years, it seemed that the only Americans willing to challenge the existence of slavery were the Quakers, slaves, and free blacks.

THE ABOLITIONIST MOVEMENT
While the issue of slavery influenced the politics of the early Republic, efforts to abolish it waned.
The institution of slavery remained central to American life, but any vigorous movement to abolish it after the American Revolution died out. The slavery question faded from national life, with occasional eruptions like the Missouri controversy of 1819–1821.

COLONIZATION
Before the 1830s, those white Americans willing to contemplate an end to bondage almost always coupled calls for abolition with the "colonization" of freed slaves – their deportation to Africa, the Caribbean, or Central America. 1816: Proponents of the idea founded the American Colonization Society.

COLONIZATION
The ACS promoted the gradual abolition of slavery and the settlement of black Americans in Africa. It soon established Liberia, on the coast of West Africa, an outpost of American influence whose capital, Monrovia, was named for President James Monroe.

COLONIZATION
Numerous prominent political leaders of the Jackson Era, such as Henry Clay and President Jackson, supported the colonization society. Many northerners saw colonization as the only way to rid the nation of slavery. Southern supporters devoted most of their energy to persuading those African-Americans who were already free to leave the United States.

COLONIZATION
Slavery and racism were so deeply embedded in American life, colonizationists believed, that blacks could never achieve equality if freed and allowed to remain in the country. Like Indian removal, colonization rested on the premise that America was fundamentally a white society.

COLONIZATION
In the decades before the Civil War, several thousand black Americans did emigrate to Liberia with the aid of the ACS. Some were slaves emancipated by their owners on condition that they depart. Others left voluntarily, motivated by a desire to spread Christianity in Africa or to enjoy rights denied them in the United States.

COLONIZATION
But most African-Americans adamantly opposed the idea of colonization.
1817: Some 3,000 free blacks assembled in Philadelphia for the first national black convention.

COLONIZATION
Their resolutions insisted that blacks were Americans, entitled to the same freedom and rights enjoyed by whites: "We have no wish to separate from our present homes." In the years that followed, several black organizations removed the word "African" from their names to eliminate a possible reason for being deported from the land of their birth.

SPREADING THE ABOLITIONIST MESSAGE
The abolitionist movement expanded rapidly throughout the North. Antislavery leaders took advantage of the rapid development of print technology and the expansion of literacy, thanks to common school education, to spread their message. They recognized the democratic potential in the production of printed material. They seized upon the recently invented steam press to produce millions of copies of pamphlets, newspapers, petitions, novels, and broadsides.

SPREADING THE ABOLITIONIST MESSAGE
1833: The American Anti-Slavery Society was founded. Between the founding of the Anti-Slavery Society and the end of the decade, some 100,000 northerners joined local groups devoted to abolition.

SPREADING THE ABOLITIONIST MESSAGE
Most were ordinary citizens – farmers, shopkeepers, and laborers. Others were prominent businessmen like Arthur Tappan of NY.

SPREADING THE ABOLITIONIST MESSAGE
If Garrison was the movement's most notable propagandist, Theodore Dwight Weld helped to create its mass constituency. Weld was a young minister who had been converted by the evangelical preacher Charles Finney.

SPREADING THE ABOLITIONIST MESSAGE
Weld trained a band of speakers who brought the abolitionist message into the heart of the rural and small-town North. Their methods were those of the revivals – fervent preaching, lengthy meetings, calls for individuals to renounce their immoral ways – and their message was a simple one: SLAVERY WAS A SIN.
SPREADING THE ABOLITIONIST MESSAGE
Identifying slavery as a sin was essential to replacing the traditional strategies of gradual emancipation and colonization with immediate abolition. The only proper response to the sin of slavery, abolitionist speakers proclaimed, was the institution's immediate elimination.

SPREADING THE ABOLITIONIST MESSAGE
Weld supervised the publication of abolitionist pamphlets. 1839: Weld published his own American Slavery As It Is, a compilation of accounts of the maltreatment of slaves. Since he took all his examples from the southern press, they could not be dismissed as figments of the northern imagination.

SPREADING THE ABOLITIONIST MESSAGE
Many southerners feared that the abolitionists intended to spark a slave insurrection. This belief was strengthened by the outbreak of Nat Turner's Rebellion a few months after The Liberator made its appearance in 1831. But Turner was completely unknown to Garrison.

SPREADING THE ABOLITIONIST MESSAGE
Nearly all abolitionists, despite their militant rhetoric, rejected violence as a means of ending slavery. Many were pacifists or "non-resistants," who believed that coercion should be eliminated from all human relationships and institutions.

SPREADING THE ABOLITIONIST MESSAGE
Their strategy was "moral suasion," and the arena was the public sphere. Slaveholders must be convinced of the sinfulness of their ways, and the North of its complicity in the peculiar institution. Some critics charged that this approach left nothing for the slaves to do in seeking their own liberation but await the nation's moral regeneration.

SPREADING THE ABOLITIONIST MESSAGE
Abolitionists adopted the role of radical social critics. They focused their efforts not within the existing political system, but on awakening the nation to the moral evil of slavery.
Their language was deliberately provocative, calculated to seize public attention.

E: ABOLITIONISTS AND THE IDEA OF FREEDOM

ABOLITIONISTS AND THE IDEA OF FREEDOM
The abolitionist crusade both reinforced and challenged common understandings of freedom during the antebellum years. Abolitionists helped to popularize the concept that personal freedom derived not from the ownership of productive property such as land but from ownership of one's self and the ability to enjoy the fruits of one's labor.

ABOLITIONISTS AND THE IDEA OF FREEDOM
Abolitionists repudiated the idea of "wage slavery," which had been popularized by the era's labor movement. Compared to slavery, they insisted, the person working for wages was an embodiment of freedom: the free laborer could change jobs if he wished, accumulate property, and enjoy a stable family life.

ABOLITIONISTS AND THE IDEA OF FREEDOM
Only slavery, wrote William Goodell, deprived human beings of their "grand central right – the inherent right of self-ownership."

ABOLITIONISTS AND THE IDEA OF FREEDOM
Abolitionists argued that slavery was so deeply embedded in American life that its destruction would require fundamental changes in the North as well as the South. They insisted that the inherent, natural, and absolute right to personal liberty, regardless of race, took precedence over other forms of freedom, such as the right of citizens to accumulate and hold property or self-government by local political communities.

F: A NEW VISION OF AMERICA

A NEW VISION OF AMERICA
In a society in which the rights of citizenship had become more and more closely associated with whiteness, the antislavery movement sought to reinvigorate the idea of freedom as a truly universal entitlement. The antislavery movement viewed slaves and free blacks as members of the national community.

A NEW VISION OF AMERICA
The Movement's position was summarized by Lydia Maria Child in her treatise of 1833, An Appeal in Favor of That Class of Americans Called Africans.
Child's text insisted that blacks were fellow countrymen, not foreigners or a permanently inferior caste. Abolitionists maintained that the slaves, once freed, should be empowered to participate fully in the public life of the United States.

A NEW VISION OF AMERICA
The abolitionists debated the Constitution's relationship to slavery. Garrison burned the document, calling it a covenant with the devil.

A NEW VISION OF AMERICA
Frederick Douglass came to believe the Constitution offered no national protection to slavery. But despite these differences of opinion, abolitionists developed an alternative, rights-oriented view of constitutional law, grounded in their universalistic understanding of liberty. Abolitionists invented the concept of equality before the law regardless of race, one all but unknown in Antebellum America.

G: BLACK AND WHITE ABOLITIONISM
Blacks played a leading role in the antislavery movement. James Forten, a successful sailmaker, helped to finance The Liberator. As late as 1834, northern blacks made up a majority of the journal's subscribers.

BLACK ABOLITIONISTS
Several blacks served on the board of directors of the American Anti-Slavery Society. Northern-born blacks and fugitive slaves emerged as major organizers and speakers.

FREDERICK DOUGLASS
The greatest of the black abolitionists. Born into slavery in 1818, he became a major figure in the crusade for abolition, the drama of emancipation, and the effort during Reconstruction to give meaning to black freedom. He was the son of a slave mother and an unidentified white man, possibly his master. Douglass experienced slavery in all its variety, from work as a house servant and as a skilled craftsman in a Baltimore shipyard to labor as a plantation field hand. He taught himself to read and write.

FREDERICK DOUGLASS
When he was 15, his owner sent him to a "slave breaker" to curb his independent spirit. After numerous whippings, Douglass defiantly refused to allow himself to be disciplined again.
This confrontation, he recalled, was "the turning point in my career as a slave." It rekindled his desire for freedom. In 1838, having borrowed the free papers of a black sailor, he escaped North.

FREDERICK DOUGLASS
Douglass lectured against slavery throughout the North and the British Isles, and he edited a succession of antislavery publications. He published a widely read autobiography that offered an eloquent condemnation of slavery and racism.

FREDERICK DOUGLASS
Douglass at first was a follower of the Garrisonian philosophy, but by 1848 he and Garrison split. Also in 1848, Douglass began to publish his own abolitionist newspaper, The North Star.

FREDERICK DOUGLASS
Throughout his career, he insisted that slavery could only be overthrown by continuous resistance. He argued that in their desire for freedom, the slaves were truer to the nation's underlying principles than the white Americans who annually celebrated the Fourth of July while allowing the continued existence of slavery.

BLACK AND WHITE ABOLITIONISM
The first racially integrated social movement in American history and the first to give equal rights for blacks a central place in its political agenda, abolitionism was nonetheless a product of its time and place. Racism was pervasive in 19th-century America, and white abolitionists could not free themselves entirely from this prejudice.

BLACK AND WHITE ABOLITIONISM
The black spokesman Martin R. Delany charged that white abolitionists monopolized key decision-making, relegating blacks to a secondary role. By the 1840s, blacks sought an independent role within the movement, regularly holding their own conventions.

BLACK AND WHITE ABOLITIONISM
Henry Highland Garnet proclaimed at one convention in 1843 that slaves should rise in rebellion to throw off their shackles. His position was so at odds with the prevailing belief in moral suasion that the published proceedings omitted his speech.
It was not until 1848 that Garnet's speech, along with Walker's Appeal, appeared in print in a pamphlet partially financed by John Brown – at that time an obscure white abolitionist.

BLACK AND WHITE ABOLITIONISM
Black abolitionists developed an understanding of freedom that went well beyond the usage of most of their white contemporaries. They worked to attack the intellectual foundations of racism, seeking to disprove pseudoscientific arguments for black inferiority.

BLACK AND WHITE ABOLITIONISM
They challenged the prevailing image of Africa as a continent without civilization. Many black abolitionists called on free blacks to seek out skilled and dignified employment, to demonstrate the race's capacity for advancement.

H: SLAVERY AND LIBERTY

SLAVERY AND LIBERTY
Black abolitionists rejected the nation's pretensions as a land of liberty. Many blacks dramatically reversed the common association of the USA with the progress of freedom. They offered a stinging rebuke to white Americans' claims to live in a land of freedom.

SLAVERY AND LIBERTY
Northern blacks devised an alternative calendar of "freedom celebrations" centered on January 1, 1808 – the date the international slave trade was abolished – rather than the 4th of July. With its abolition of slavery in 1838, Great Britain became a model of liberty and justice, while the USA remained a land of tyranny.

SLAVERY AND LIBERTY
The greatest oration on American slavery and American freedom was delivered in Rochester, NY, on July 5, 1852, by Frederick Douglass. Douglass posed the question: "What, to the Slave, is the Fourth of July?"

SLAVERY AND LIBERTY
He answered that 4th of July festivities revealed the hypocrisy of a nation that proclaimed its belief in liberty yet daily committed "practices more shocking and bloody" than any other country on earth. He also laid claim to the founders' legacy.
SLAVERY AND LIBERTY The Revolution had left a “rich inheritance of justice, liberty, prosperity, and independence,” from which subsequent generations had tragically strayed. Only by abolishing slavery and freeing the “great doctrines” of the Declaration of Independence from the “narrow bounds” of race could the USA recapture its original mission. I: NORTH AND SOUTH REACTION TO ABOLITIONISM NORTH AND SOUTH REACTION TO ABOLITIONISM At first, abolitionism aroused violent hostility from northerners who feared the movement threatened to disrupt the Union, interfere with profits wrested from slave labor, and overturn white supremacy. Led by businessmen and local merchants, mobs disrupted abolitionist meetings in northern cities. NORTH AND SOUTH REACTION TO ABOLITIONISM In 1835, a Boston crowd led Garrison through the streets with a rope around his neck; Garrison barely escaped with his life. NORTH AND SOUTH REACTION TO ABOLITIONISM In 1836, a Cincinnati mob destroyed the printing press of James G. Birney, a former slaveholder who had been converted to abolitionism by Theodore Dwight Weld. NORTH AND SOUTH REACTION TO ABOLITIONISM 1837: Antislavery editor Elijah P. Lovejoy became the movement’s first martyr when he was killed by a mob in Alton, Ill., while defending his press. In his editorials Lovejoy repeatedly called slavery an evil and a sin. Mobs destroyed his press four times, only to see Lovejoy resume publication; the fifth attack ended in Lovejoy’s death. NORTH AND SOUTH REACTIONS TO ABOLITIONISM Crowds of southerners burned abolitionist literature that they had removed from the US mail. 1836: Abolitionists began to flood Congress with petitions calling for emancipation. Congress responded with the notorious “gag rule,” which prohibited any discussion of slavery and emancipation. The rule was reauthorized in 1840 but repealed in 1844.
NORTH AND SOUTH REACTIONS TO ABOLITIONISM Mob attacks and attempts to limit abolitionists’ freedom of speech convinced many northerners that slavery was incompatible with the democratic liberties of white Americans. It was the murder of Elijah Lovejoy that led Wendell Phillips, who became one of the movement’s greatest orators, to associate with the abolitionist cause. NORTH AND SOUTH REACTIONS TO ABOLITIONISM The abolitionist movement now broadened its appeal so as to win the support of northerners who cared little about the rights of blacks but could be convinced that slavery endangered their own cherished freedoms. The “gag rule” aroused considerable resentment in the North. NORTH AND SOUTH REACTIONS TO ABOLITIONISM The fight for the right to debate slavery openly and without reprisal led abolitionists to elevate “free opinion” (freedom of speech and of the press, and the right to petition) to a central place in what William Lloyd Garrison called the “gospel of freedom.” In defending free speech, abolitionists claimed to have become the custodians of the “rights of every freeman.” J: THE END OF ABOLITIONISM The abolitionist movement failed in its ultimate goal of ending slavery itself, though it kept the issue in public view throughout the antebellum years and into the Civil War. In the end, as many black abolitionists had long recognized, slavery would be abolished through a violent struggle.
This engaging chapter book study of Charlie and the Chocolate Factory is sure to win your students over. The best part is that we made it for the busy teacher. It includes:
- 23 fun assessments aligned with Common Core standards
- 1 engaging comprehension game
- Lesson ideas aligned to Common Core for each chapter
Your kids will love reading this book with you while you make sure they have mastered the standards for the year. This unit study can be completed in one week if you allow for 45-minute shared reading blocks each day. Please let us know if you need anything else for this unit. Customer service is very important to us.
A category of mental health problems that includes all types of depression and bipolar disorder, mood disorders are sometimes called affective disorders. During the 1980s, mental health professionals began to recognize symptoms of mood disorders in children and adolescents, as well as in adults. However, children and adolescents do not necessarily experience or exhibit the same symptoms as adults. It is more difficult to diagnose mood disorders in children, especially because children are not always able to express how they feel. Today, clinicians and researchers believe that mood disorders in children and adolescents remain one of the most under-diagnosed mental health problems. Mood disorders in children also put them at risk for other conditions (most often anxiety disorder, disruptive behavior, and substance abuse disorders) that may persist long after the initial episodes of depression are resolved. What causes mood disorders in children is not well known. There are chemicals in the brain that are responsible for positive moods, and other brain chemicals, called neurotransmitters, regulate them. Most likely, depression (and other mood disorders) is caused by a chemical imbalance in the brain. Life events (such as unwanted changes in life) may also help cause this chemical imbalance. Affective disorders aggregate in families and are considered to be multifactorially inherited. Multifactorial inheritance means that "many factors" are involved: the factors that produce the trait or condition are usually both genetic and environmental, involving a combination of genes from both parents. Often one gender (either males or females) is affected more frequently than the other in multifactorial traits. There appears to be a different threshold of expression, which means that one gender is more likely than the other to show the problem. Anyone can feel sad or depressed at times.
However, mood disorders are more intense and difficult to manage than normal feelings of sadness. Children, adolescents, or adults who have a parent with a mood disorder have a greater chance of also having a mood disorder. However, life events and stress can expose or exaggerate feelings of sadness or depression, making the feelings more difficult to manage. Sometimes, life's problems can trigger depression. Being fired from a job, getting divorced, losing a loved one, a death in the family, and financial trouble, to name a few, can all be difficult, and coping with the pressure may be troublesome. These life events and stress can bring on feelings of sadness or depression or make a mood disorder harder to manage. The chance for depression in females in the general population is nearly twice as high (12 percent) as it is for males (6.6 percent). Once a person in the family has this diagnosis, the chance for their siblings or children to have the same diagnosis is increased. In addition, relatives of persons with depression are also at increased risk for bipolar disorder (manic depression). The chance for manic depression (or bipolar disorder) in males and females in the general population is about 2.6 percent. Once a person in the family has this diagnosis, the chance for their siblings or children to have the same diagnosis is increased. In addition, relatives of persons with manic depression are also at increased risk for depression. The following are the most common types of mood disorders experienced by children and adolescents: Children, depending upon their age and the type of mood disorder present, may exhibit different symptoms of depression. The following are the most common symptoms of a mood disorder. However, each child and adolescent may experience symptoms differently. Symptoms may include: In mood disorders, these feelings appear more intense than what adolescents normally feel from time to time.
It is also of concern if these feelings continue over a period of time, or interfere with an adolescent's interest in being with friends or taking part in daily activities at home or school. Any adolescent who expresses thoughts of suicide should be evaluated immediately. Other signs of possible mood disorders in adolescents may include: The symptoms of mood disorders may resemble other conditions or psychiatric problems. Always consult your child's physician for a diagnosis. Mood disorders are real medical conditions. They are not something a child will likely just "get over." A child psychiatrist or other mental health professional usually diagnoses mood disorders following a comprehensive psychiatric evaluation. An evaluation of the child's family, when possible, in addition to information provided by teachers and care providers may also be helpful in making a diagnosis. Specific treatment for mood disorders will be determined by your child's physician based on: Mood disorders can often be effectively treated. Treatment should always be based on a comprehensive evaluation of the child and family. Treatment may include one, or more, of the following: Parents play a vital supportive role in any treatment process. Preventive measures to reduce the incidence of mood disorders in children are not known at this time. However, early detection and intervention can reduce the severity of symptoms, enhance the child's normal growth and development, and improve the quality of life experienced by children with mood disorders.
Calculating Kinetic Energy in an Ideal Gas Molecules have very little mass, but gases contain many, many molecules, and because they all have kinetic energy, the total kinetic energy can pile up pretty fast. Using physics, can you find how much total kinetic energy there is in a certain amount of gas? Yes! Each molecule has this average kinetic energy:

KE_avg = (3/2)kT

To figure the total kinetic energy, you multiply the average kinetic energy by the number of molecules you have, which is nN_A, where n is the number of moles:

KE_total = (3/2)nN_AkT

N_Ak equals R, the universal gas constant, so this equation becomes the following:

KE_total = (3/2)nRT

If you have 6.0 moles of ideal gas at 27 degrees Celsius, here’s how much internal energy is wrapped up in thermal movement (make sure you convert the temperature to kelvin):

KE_total = (3/2)(6.0 mol)(8.31 J/(mol·K))(300 K) ≈ 2.2 × 10^4 J

This converts to about 5 kilocalories, or Calories (the kind of energy unit you find on food wrappers). Suppose you’re testing out your new helium blimp. As it soars into the sky, you stop to wonder, as any physicist might, just how much internal energy there is in the helium gas that the blimp holds. The blimp holds 5,400 cubic meters of helium at a temperature of 283 kelvin, and the pressure of the helium is slightly greater than atmospheric pressure. So what is the total internal energy of the helium? The total kinetic energy formula tells you that KE_total = (3/2)nRT. You know T, but what’s n, the number of moles? You can find the number of moles of helium with the ideal gas equation:

PV = nRT

Solving for n gives you the following:

n = PV/(RT)

Plug the pressure, volume, and temperature into this expression to find the number of moles, then put that result into KE_total = (3/2)nRT (equivalently, KE_total = (3/2)PV) to get the total internal energy of the helium. That’s about the same energy stored in 94,000 alkaline batteries.
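These formulas are easy to check numerically. Here is a short sketch: the 6.0-mole figures match the worked example in the text, while the blimp pressure is an assumed illustrative value, since the chapter's exact pressure figure is not preserved in this copy.

```python
# Total translational kinetic energy of an ideal gas: KE_total = (3/2) n R T
R = 8.31  # universal gas constant, J/(mol*K)

def total_kinetic_energy(n_moles, temp_kelvin):
    """Total kinetic energy in joules for n moles at temperature T."""
    return 1.5 * n_moles * R * temp_kelvin

# The 6.0-mole example at 27 degrees Celsius (300 K):
ke = total_kinetic_energy(6.0, 300)
print(round(ke))            # 22437 J, i.e. about 2.2 x 10^4 J
print(round(ke / 4186, 1))  # 5.4 kilocalories (food Calories)

# For the blimp, n comes from PV = nRT. The pressure below is an ASSUMED
# value slightly above atmospheric, purely to illustrate the method:
P, V, T = 1.05e5, 5400, 283
n_helium = (P * V) / (R * T)
print(round(total_kinetic_energy(n_helium, T)))  # equals (3/2) P V here
```

Note the shortcut in the last line: since n = PV/(RT), the total kinetic energy reduces to (3/2)PV, so the temperature drops out entirely.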
There are three main categories of rechargeable batteries: Automotive batteries: Used to supply primary power to start engines on cars, boats and other vehicles. They provide a short burst of high current to get the engine started. Standby/industrial batteries: Designed to be permanently connected in parallel with a critical load and a rectifier/charger system, where the rectifier/charger forms the primary source of power for the load and the battery provides the secondary source in the event of a primary source failure. Portable batteries: Designed to power portable equipment such as consumer products and tools such as drills, mobile phones, laptop computers and so on. Amtex Electronics provide a wide range of battery chargers for just about any application, but today we will discuss the use of lead-acid batteries in standby applications and some of the terminology used in such applications. Terms Associated with Standby Batteries: Cell: A cell comprises a number of positively and negatively charged plates immersed in an electrolyte that produces an electrical charge by means of an electrochemical reaction. Lead acid cells generally produce an electrical potential of 2V. Battery: A battery is a number of cells connected together. String/bank: A battery string or bank comprises a number of cells/batteries connected in series to produce a battery or battery string with the required usable voltage/potential, e.g. 6V, 12V, 24V, 48V, 110V. Ah: The Ah or ampere-hour capacity is the current a battery can provide over a specified period of time, e.g. 100Ah @ C10 rate to EOD of 1.75V/cell. This means the battery can provide 10 Amps for 10 hours to an end-of-discharge voltage of 1.75V per cell. Different battery manufacturers will use different Cxx rates depending on the market or application at which their batteries are targeted. Typical rates used are C3, C5, C8, C10 and C20.
Because of this it is important, when comparing batteries from different manufacturers having the same Ah rating, to confirm on what Cxx rate this figure is based. Nominal voltage: The cell voltage that is accepted as an industry standard. For lead acid batteries this is 2V. Charge voltage: The voltage that is applied to each cell during normal float charging conditions. The charge voltage for lead acid batteries is typically 2.25V ~ 2.3V per cell. Float charge: Similar to trickle charge. When a battery is fully charged, the float charge compensates for the self-discharge of a lead acid battery. Boost charge: A charge given to a battery to correct voltage imbalance between individual cells and to restore the battery to a fully charged state. This is typically 2.4V per cell. EOD voltage: The end-of-discharge voltage is the level to which the battery string voltage or cell voltage is allowed to fall before affecting the load, i.e. 1.75V per cell, or 21V on a nominal 24V system. End-of-life factor: This is a factor included within the battery sizing calculation to ensure the battery is able to support the full load at the end of the battery design life, applied by multiplying the required Ah by 1.25. Temperature derate factor: As the energy stored within a battery cell is the result of an electrochemical reaction, any change in the electrolyte temperature has an effect on the efficiency or rate of reaction, i.e. an increase in temperature increases the efficiency/rate whereas a decrease in temperature reduces the efficiency/rate of reaction. As a result of this, all battery manufacturers’ discharge data will be specified at a recommended temperature (typically 20-25°C) with temperature corrections provided for operation above and below these values. Temperature compensation: The simplest way of maintaining the rate of reaction within design parameters is to alter the charge voltage at a rate proportional to the change in temperature, i.e.
decrease the charge voltage with an increase in temperature above 20-25°C, and increase the charge voltage with a decrease in temperature below 20-25°C. The typical change in charge voltage is 3 mV/°C. Minimum information required to select and size a battery: In order to size a standby battery the following data is generally required: System nominal voltage: The nominal voltage that the load requires, e.g. 24V, 48V. Load rating: Either the current or the power taken by the load during normal operation and primary power source failure. Battery standby or autonomy time: The period for which the battery is required to provide power to the load during a primary power failure. Normal operating or ambient temperature: The operating temperature in which the battery will be working. Load voltage limits: The voltage range over which the load will safely operate. Understanding the above terms is the 1st step in the selection of a suitable battery and battery charger for both cyclic and standby applications. The 2nd step is to communicate as much information as possible about your application to your battery supplier and your battery charger manufacturer. Their knowledge and expertise will guide you to the right solution. Further details can be found on our website, in the Application Notes section.
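As a rough sketch of how the terms above fit together: the 1.25 end-of-life factor and the 3 mV/°C compensation figure come from the text, but the function names, the default derate of 1.0, and the example numbers are illustrative only.

```python
# Hedged sketch of standby-battery sizing using the terms defined above.
def required_capacity_ah(load_current_a, autonomy_h,
                         end_of_life_factor=1.25,
                         temperature_derate=1.0):
    """Minimum Ah capacity to carry the load for the autonomy time,
    including the end-of-life margin and any temperature derating."""
    return load_current_a * autonomy_h * end_of_life_factor / temperature_derate

def temperature_compensated_float(v_per_cell_25c, temp_c, mv_per_deg_c=3.0):
    """Float voltage per cell: lowered above 25 C, raised below it."""
    return v_per_cell_25c - (mv_per_deg_c / 1000.0) * (temp_c - 25.0)

# A 10 A load with 8 hours of autonomy:
print(required_capacity_ah(10, 8))                        # 100.0 (Ah)
# A 2.27 V/cell float setting operated at 35 degrees C:
print(round(temperature_compensated_float(2.27, 35), 3))  # 2.24 (V/cell)
```

Real sizing must also respect the load voltage limits and the manufacturer's Cxx discharge tables, so treat this as a first-pass estimate only.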
Welcome to Week 2 This week we’ll investigate Hans Christian Andersen’s fairy tales from a genre perspective. In the following steps, we will introduce you to the folk tale genre and to useful analysis models. As we shall see, Hans Christian Andersen was profoundly inspired by the folk tale tradition. Thus, getting familiar with this special genre is fundamental for your understanding of his fairy tales. We will introduce you to the typical forms of the folk tale and to a couple of models which are useful when analysing its elements and structures: the so-called actantial model and the narrative pattern called ‘home-away-home’. The latter is not only specific to the folk tale but can also be found in the Bildungsroman. This German term is used in the English language and refers to novels like Goethe’s ‘Wilhelm Meister’s Lehrjahre’ or ‘Wilhelm Meister’s apprenticeship’. You will also get the opportunity to test your analysis skills on a folk tale which is particularly important in our context: ‘The Blue Light’, collected and published by the Grimm Brothers. This tale has a lot of features in common with one of Hans Christian Andersen’s earliest fairy tales, ‘The Tinderbox’. Comparing the two tales, we will be able to study Hans Christian Andersen’s use and manipulation of the folk tale. Finally, PhD student Torsten Bøgh Thomsen will propose his analysis and interpretation of ‘The Tinderbox’. Week 2 contains all the theory you will need to participate. So don’t worry, the rest of the course will be lighter. As you go through this week, here are some questions to bear in mind: - How can we understand the interest that Hans Christian Andersen takes in the orally transmitted folk tales? - In what way is he manipulating this genre and why? - What is characteristic of Hans Christian Andersen’s special way of telling fairy tales? © The Hans Christian Andersen Centre
As a kid in Trinidad, the first time I heard the idea of Columbus “discovering” the Americas challenged was by my Caribbean History teacher Askia Amon Rah! Decades later, I would come to appreciate the magnitude and the significance of what he was trying to teach us! Indigenous Peoples’ Day Indigenous Peoples’ Day is a holiday that honours and memorializes the shared history and experience of American Indigenous Peoples on the second Monday in October. This day is part of a campaign to communicate a fuller and more accurate version of American history, both highlighting the history and participation of Native peoples and bringing awareness to the true facts of the exploration of America. Columbus Day became a federal holiday in 1937, in part because of efforts by Roman Catholic Italian Americans who lobbied to memorialize Christopher Columbus in American history. Unfortunately, this effort, when combined with prevailing attitudes of the 20th century, contributed to an inexact and incomplete account of American history, though recent efforts to recognize Native Peoples have been bringing more awareness. To learn more and get involved: - Visit the Smithsonian National Museum of the American Indian (NMAI). The NMAI cares for one of the world’s most expansive collections of Native artifacts, including objects, photographs, archives, and media covering the entire Western Hemisphere, from the Arctic Circle to Tierra del Fuego. - Visit the Smithsonian National Museum of American History. Devoted to the scientific, cultural, social, technological, and political development of the United States, the museum traces the American experience from colonial times to the present.
In this lesson, students will use a simulation to observe and manipulate the variables of a lever and fulcrum as well as force vectors and torque. - Students will be able to identify a lever and fulcrum. - Students will solve one-step equations for balancing different masses on a teeter-totter balance. - simple machine About the Lesson Adapted from a PhET™ simulation, this lesson involves students using TI-Nspire technology to simulate, observe and manipulate the variables of a lever and fulcrum as well as force vectors and torque. As a result, students will: - Describe a lever and fulcrum - Determine the positions for different mass arrangements to balance on the teeter-totter. PhET is a trademark owned by the Regents of the University of Colorado, which was not involved in the production of, nor does it endorse, this product.
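The one-step equation behind the balancing task can be sketched as follows. This is a minimal illustration of the torque balance m1·d1 = m2·d2, not part of the TI-Nspire activity itself; the function name is hypothetical.

```python
# Balance condition about the fulcrum: m1 * g * d1 = m2 * g * d2.
# g cancels on both sides, leaving the one-step equation d2 = m1 * d1 / m2.
def balancing_distance(m1_kg, d1_m, m2_kg):
    """Distance from the fulcrum at which mass m2 balances m1 placed at d1."""
    return m1_kg * d1_m / m2_kg

# A 30 kg child sitting 2 m from the fulcrum balances a 60 kg adult at:
print(balancing_distance(30, 2, 60))  # 1.0 (metres)
```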
How Big? – An estimating and measuring game. You will need: 1 non-clear ruler, 2 players (you and an adult), and your homework book or a sheet of paper. 1. Using an upside-down ruler, draw five straight lines of different lengths in your homework book. 2. Then, you and your adult each write an estimate of how long you think each line is. 3. Now, measure each line accurately. 4. If a player has estimated the exact measurement, they score 10 points. 5. If a player has estimated within 2 mm of the measurement, they score 5 points. 6. If neither player has scored 5 or 10 points, then the player whose estimate was closest scores 1 point. Note: If you need to borrow your 30cm ruler from school you can – just make sure you bring it back for Monday’s lessons.
Mutation (genetic algorithm) Mutation is a genetic operator used to maintain genetic diversity from one generation of a population of genetic algorithm chromosomes to the next. It is analogous to biological mutation. Mutation alters one or more gene values in a chromosome from its initial state. In mutation, the solution may change entirely from the previous solution. Hence GA can come to a better solution by using mutation. Mutation occurs during evolution according to a user-definable mutation probability. This probability should be set low. If it is set too high, the search will turn into a primitive random search. The classic example of a mutation operator involves a probability that an arbitrary bit in a genetic sequence will be changed from its original state. A common method of implementing the mutation operator involves generating a random variable for each bit in a sequence. This random variable tells whether or not a particular bit will be modified. This mutation procedure, based on the biological point mutation, is called single point mutation. Other types are inversion and floating point mutation. When the gene encoding is restrictive, as in permutation problems, mutations are swaps, inversions, and scrambles. The purpose of mutation in GAs is preserving and introducing diversity. Mutation should allow the algorithm to avoid local minima by preventing the population of chromosomes from becoming too similar to each other, which would slow or even stop evolution. This reasoning also explains the fact that most GA systems avoid only taking the fittest of the population in generating the next generation, but rather use a random (or semi-random) selection with a weighting toward those that are fitter. For different genome types, different mutation types are suitable: - Bit string mutation: The mutation of bit strings ensues through bit flips at random positions.
Example:
1 0 1 0 0 1 0
↓
1 0 1 0 1 1 0
The probability of a mutation of a bit is 1/l, where l is the length of the binary vector. Thus, a mutation rate of 1 per mutation and individual selected for mutation is reached.
- Flip bit: This mutation operator takes the chosen genome and inverts the bits (i.e. if the genome bit is 1, it is changed to 0 and vice versa).
- Boundary: This mutation operator replaces the genome with either the lower or upper bound, chosen at random. It can be used for integer and float genes.
- Non-uniform: The probability that the amount of mutation will go to 0 with the next generation is increased by using a non-uniform mutation operator. It keeps the population from stagnating in the early stages of the evolution, and it tunes the solution in the later stages of evolution. This mutation operator can only be used for integer and float genes.
- Uniform: This operator replaces the value of the chosen gene with a uniform random value selected between the user-specified upper and lower bounds for that gene. This mutation operator can only be used for integer and float genes.
- Gaussian: This operator adds a unit Gaussian distributed random value to the chosen gene. If the result falls outside of the user-specified lower or upper bounds for that gene, the new gene value is clipped. This mutation operator can only be used for integer and float genes.
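A minimal sketch of the bit-flip mutation described above, assuming a list-of-bits genome; the function name, the optional rate override, and the seed are illustrative.

```python
import random

def mutate(bits, rate=None):
    """Return a copy of `bits` with each bit flipped independently.

    By default the per-bit flip probability is 1/l, where l is the
    genome length, matching the mutation rate discussed above."""
    if rate is None:
        rate = 1.0 / len(bits)
    return [b ^ 1 if random.random() < rate else b for b in bits]

random.seed(1)
parent = [1, 0, 1, 0, 0, 1, 0]
print(mutate(parent))  # on average, one bit of the offspring is flipped
```

Note that calling `mutate(parent, rate=1.0)` inverts every bit, which reproduces the flip-bit operator as a special case.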
This is a type of respiratory infection in which the mucous membranes of the bronchial passages of the lungs become inflamed and irritated. The passages can become constricted, or the lungs’ airways can be blocked, which induces breathlessness, phlegm, and coughing episodes. The ten most common bronchitis signs and symptoms include: 1. Coughing This is an unexpected reflex that both animals and human beings experience so that they can clear the throat and passages of any particles. These include foreign particles, mucus, microbes, fluids, and irritants. The bronchial tubes that ferry air into the lungs become inflamed in bronchitis. The irritated membranes then produce a lot of mucus, blocking the tiny airways. To be able to breathe easily, a cough occurs to remove this mucus build-up. 2. Shortness of breath Air is transported to the lungs by the bronchial tubes. These tubes become inflamed due to bronchitis, resulting in the buildup of phlegm and mucus. You may experience shortness of breath since the airways become blocked. Although this feeling is common in bronchitis, it is more evident during strenuous activities like exercise or sexual activity. 3. Swollen lymph nodes Your lymph nodes will likely be affected anytime your body battles an infection. These are tiny structures located all over the body, and they sieve out harmful substances. They contain immune cells that battle infection by confronting the germs carried in this fluid. The lymph nodes become swollen due to this battle. A bronchitis patient will experience swelling of the lymph nodes, although they normally go back to their initial size after the infection has cleared. Visit your doctor if swelling continues beyond the infection period. 4. Fatigue and reduced energy The body uses a lot of energy every time it battles an infection. As an outcome of this, the patient becomes lethargic and fatigued. This is a common bronchitis symptom.
Since your body constantly requires oxygen, when the supply is insufficient, it starts to slow down, saving its energy and making you feel tired. Bronchitis can lead to lengthy feelings of fatigue. 5. Fever Your body increases its temperature to battle an infection, since the infection will find it hard to spread in a warmer environment. A slight fever is also associated with bronchitis. 6. Chills As the human body tries to warm itself up, you will experience chills. 7. Inflammation of the bronchial tubes This is the main symptom of bronchitis. Smoke, chemicals, air pollution, tobacco, and bacterial and viral infections may contribute to this symptom. 8. Discomfort in the chest If you have bronchitis, your body overworks to move air in and out of the lungs because of the inflammation, heavy breathing, and partially blocked airways that the condition brings. This in turn causes discomfort, pain, and strain in your chest and in the muscles of the abdomen that help expel air. Discomfort can also be felt as a result of the inflammation of the lymph nodes and lungs, which makes the affected areas painful to the touch. 10. Sputum generation Bronchitis can be indicated by sputum. The sputum associated with bronchitis is clear white or yellowish-grey. It acts as one of your body’s defenses against invading microbes. Your body collects, bundles and eliminates unwanted viruses and bacteria by generating mucus. The sputum can contain traces of blood in rare cases, but this should not be a cause for concern unless it persists. You should, however, visit your doctor if the sputum appears in large amounts or occurs frequently.
- Your body constantly requires oxygen.
When the supply is insufficient, this process begins to slow down, saving energy and making you feel tired.
- The sputum can contain traces of blood in rare cases, but this should not be a cause for concern unless it continues.
- Your body collects, bundles, and eliminates unwanted viruses and bacteria by generating mucus.
Learning spelling words can be a dreaded activity for many students, their teachers and even their parents. Luckily, there are ways to make learning spelling words fun, from worksheet activities to classroom games and hands-on activities. Use some simple ideas to make learning spelling words fun, and you'll never have to worry about your child or students complaining about having to learn them again. Use spelling words in crossword or word search puzzles. Your child will have to think about how to spell each word in order to fit it properly into the crossword boxes, or to find it in a word search. You can create your own crosswords and word searches using graph paper, or you can use a free online program such as Puzzlemaker, which is popular with teachers (see Resources). Many spelling word games can be played in a classroom. One such game is called "Sparkle." The teacher says the spelling word, and each student recites one letter of the word in order. If a student says the wrong letter, he or she is out and must sit down. Once the word is spelled correctly, the next student must say "Sparkle" and he or she is also out (this last step makes the game go faster and ensures an end to the game). This game can also be played between a parent and child. Another way for a child to learn spelling words in a game format is to create a cheer for each word, or to play "Spelling Charades." As the parent acts out the word, the child guesses the word, then has to spell it correctly. Hands-on activities are especially good for children who have learning disabilities. Instead of writing the word, have your child spell it using alphabet blocks or magnetic letters. Alternately, the child could create letters out of clay, Popsicle sticks or any other manipulative tools. Visual learners do well if they can draw a picture for each word, incorporating the letters of the word into the picture.
POWERING BATTERIES WITH PROTONS – A POTENTIAL DISRUPTION IN THE ENERGY INDUSTRY Climate change was a crucial factor taken into consideration by the Australian researchers from the Royal Melbourne Institute of Technology before creating the first rechargeable proton battery. After considering all available options regarding the cost and availability of the materials needed, the researchers in Melbourne decided to make a proton battery to meet the alarming increase in the world’s energy needs. Lead researcher Professor John Andrews says, “Our latest advance is a crucial step towards cheap, sustainable proton batteries that can help meet our future energy needs without further damaging our already fragile environment. As the world moves towards inherently variable renewable energy to reduce greenhouse emissions and tackle climate change, requirements for electrical energy storage will be gargantuan”. The proton battery is one among many potential contributors towards meeting this enormous demand for energy storage. Powering batteries with protons has the potential to be more economical than using lithium ions, which are made from scarce resources. Carbon, which is the primary resource used in our proton battery, is abundant and cheap compared to both metal hydrogen storage alloys and the lithium needed for rechargeable lithium-ion batteries. Here’s how the battery works: During charging, protons generated during water splitting in a reversible fuel cell are conducted through the cell membrane and directly bond with the storage material with the aid of electrons supplied by the applied voltage, without forming hydrogen gas. In electricity supply mode, this process is reversed. Hydrogen atoms released from the storage lose an electron to become protons once again. These protons then pass back through the cell membrane where they combine with oxygen and electrons from the external circuit to re-form water.
In simpler terms: whenever water is split by the power supply, carbon in the electrode bonds with the protons produced, with the help of the supply's electrons. On discharge, those protons pass back through the reversible fuel cell, combine with oxygen to reform water, and generate power.
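For readers who want the chemistry in symbols, the description above corresponds roughly to the following half-reactions. This is a simplified sketch inferred from the text, not the authors' published notation; C⟨H⟩ stands for a hydrogen atom bound to the carbon electrode.

```latex
% Charging: water is split at the oxygen-side electrode ...
\mathrm{H_2O \rightarrow 2\,H^{+} + \tfrac{1}{2}\,O_2 + 2\,e^{-}}
% ... and the protons are stored directly in the carbon electrode
\mathrm{C + H^{+} + e^{-} \rightarrow C\langle H \rangle}
% Discharging reverses both steps: stored hydrogen releases a proton
% and an electron, and water is reformed at the oxygen electrode
\mathrm{C\langle H \rangle \rightarrow C + H^{+} + e^{-}}
\qquad
\mathrm{2\,H^{+} + \tfrac{1}{2}\,O_2 + 2\,e^{-} \rightarrow H_2O}
```

Note that no hydrogen gas (H2) appears at any step, which is exactly what distinguishes the proton battery from a conventional hydrogen fuel-cell system.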
The Human-Computer Interface The user interface is not strictly speaking part of the operating system, but an intermediate layer of software that allows the user to interact effectively with the operating system. In some operating systems, the user interface software is closely integrated with the operating system software (examples include Microsoft Windows and Mac OS). Other operating systems keep the user interface distinctly separate from the operating system, allowing users to select from a range of available user interfaces (Linux is a good example). Virtually all operating systems, however, provide a command-line interface, usually called a command shell, which can be used to enter short text commands to execute system commands, run programs or manage files and directories. Windows 7, for example, provides a command line facility called cmd.exe. Although most users today prefer to use the more intuitive graphical user interface (GUI), with its application windows, program icons, and mouse-driven menus, a command line environment provides some powerful features, particularly from the point of view of system administrators, who can use it to quickly perform low-level system management and configuration tasks. The GUI, on the other hand, takes advantage of the fact that users can recognise and respond to visual cues, and removes the need to learn obscure commands by allowing a pointing device (usually a mouse) to be used to perform operations such as opening a user application or navigating the file system at the click of a button. Modern application software is usually written for a particular operating system. This allows the programmer to take advantage of the operating system's application programming interface (API), which provides a standard set of functions for creating the user interface for the application. The result is that all applications written for a particular operating system present a standardised interface to the user. 
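As a concrete illustration of the kind of short text commands a command shell accepts, here is a small file-management session. POSIX shell syntax is shown (the cmd.exe equivalents, noted in the comments, differ slightly); the file and directory names are invented for the example.

```shell
mkdir reports                      # create a directory      (cmd.exe: mkdir)
echo "draft" > reports/q1.txt      # create a small file     (cmd.exe: echo ... >)
ls reports                         # list its contents       (cmd.exe: dir)
cp reports/q1.txt reports/q1.bak   # copy a file             (cmd.exe: copy)
rm reports/q1.bak                  # delete the copy         (cmd.exe: del)
```

Terse as they are, commands like these can be combined and scripted, which is why system administrators still reach for the shell for low-level management tasks.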
This facilitates ease of use, because a user encountering a new software application can concentrate on learning the salient features of the application immediately, without having to get used to a completely different style of user interface. The study of human-computer interaction (HCI) is concerned with the design and implementation of interactive systems, and with the interaction between humans and computers. It is often unclear whether it is a branch of computer science, cognitive psychology, sociology or even industrial design. From the point of view of this discussion, we are mostly interested in the computer science aspects, although other disciplines have a supporting role to play, such as communication theory, graphic design, linguistics and the social sciences. Chiefly, we are concerned with effective cooperation between human beings and computers in jointly carrying out various tasks. Interface design, specification and implementation must consider the ways in which human beings can effectively communicate with machines, and how they learn to use the interface through which this is achieved. The human-computer interface in a modern operating system may well represent more than half of the operating system's program code. The application of software engineering and design principles is obviously important to the development of the operating system software, but consideration must also be given to the performance of the user. A graphical user interface is not necessarily the most efficient kind of interface, but for most people it is the preferred way of working with the computer. It is usually far easier and quicker to learn how to use a graphical user interface than to become reasonably proficient in a command line environment. You will notice if you look at your keyboard that the first row of letters spells "QWERTYUIOP". 
This layout was originally devised for typewriters, and was designed by the inventor Christopher Sholes of Milwaukee sometime around 1872. The keys are arranged in this fashion for a reason. In the first typewriters, the keys were connected to a set of hammers arranged in a circle, which struck an inked ribbon; the ribbon was forced onto the paper, leaving a printed character. Because the hammers would often jam if adjacent keys were pressed in quick succession, Sholes arranged the layout so that any pair of letters the typist was likely to hit one after the other were not adjacent to each other. The layout became standard for typewriters, and subsequently migrated to computer keyboards. So far, no more ergonomic keyboard has found widespread use, although an alternative has been around since 1932 in the form of the Dvorak keyboard. The main claims made for the Dvorak layout are that it is more comfortable to use and may help to reduce the number of repetitive strain injuries.

[Image: An early QWERTY typewriter keyboard]

The mouse was invented in 1964, and has been a standard feature of the human-computer interface since 1973, although it was not until the 1980s that it began to be used with IBM-based PCs. The graphical user interface (GUI) was first popularised by Apple's Macintosh computer in 1984, and has become a standard feature of all modern desktop operating systems. The GUI uses metaphors such as the desktop, and its characteristic features include windows, icons, menus, and pointers (the term GUI replaces the older acronym WIMP, which stood for those four features). Windows are the workspaces inside which each application runs, and can typically be resized, moved around, and hidden as required by the user. Icons are small images used as shortcuts, for example to open applications, files or directories. A menu contains a short list of functions, allowing the user to easily select the task they wish to perform.
Pointers are moveable images, controlled by a pointing device such as a mouse, that allow the user to track their position on the screen and to select windows, icons or menu items at the click of a button.

[Image: The Windows 7 desktop]

Interface design principles

A number of design principles are involved when creating a user interface:
- User profiling - it is important to know who your user is. What skills and experience do they have? How will you help them achieve their goals? A desktop operating system will have many kinds of user.
- Metaphors - metaphors borrow behaviours from familiar systems, like the tape-deck metaphor seen in audio applications. If you start to use a metaphor, however, you must be consistent, and use it throughout the application. Beware also of negative associations and cultural boundaries.
- Visibility - make sure the user can clearly see what functions are available, for example by providing graphical toolbars. The main program features should be completely exposed, with secondary features (e.g. a menu item) exposed by a trivial user gesture. Submenu items may be exposed by a more involved user gesture. Any underlying complexity should be hidden from the user (for example, error messages should be meaningful and not describe the details of a low-level fault).
- Coherence - the interface should be logical, consistent and easy to follow. If one object attribute is modified using a pop-up menu, so should the other object attributes (internal consistency). Application programs should adopt behaviours that are consistent with the operating system environment in which they reside (external consistency). Where possible, existing user interface standards should be followed.
- State visualisation - any change in the behaviour of the program should be accompanied by a change in the appearance of the interface, so that different modes of operation can be distinguished from one another.
It should be obvious, for example, what the current selection is, since this is the object that will be affected by the next command. A selection should be dimmed, however, when the window in which it resides does not have the focus.
- Shortcuts - menu systems make life easier for inexperienced users by breaking complex actions into a series of simple steps. As users gain experience, shortcuts should be available that provide fast access to powerful functions. A recordable macro facility is often provided. Shortcuts are powerful and more abstract methods, and should not be the most exposed methods.
- Focus - the human eye is highly non-linear, and possesses both edge-detection and motion-detection "hardware". For this reason, it is drawn to animated display areas, and changes to these display areas are readily noticed. Users acquire the habit of tracking the mouse pointer, and changes are often signalled by changing the appearance of the pointer.
- Grammar - this defines the rules of the user interface. With an action-object grammar, an action (or tool) is selected first. The tool subsequently operates on any object selected, i.e. the mode persists until the action is deselected. An example would be the selection of a particular drawing tool in a graphics program. With an object-action grammar, the object is selected first, and subsequent actions operate on the selected object - a method used in many word processing applications. Many actions are therefore available, since the mode can change. Direct manipulation is a special case, in which the object itself is a kind of tool - i.e. dragging or resizing a window.
- Help - this can be:
  - Goal oriented - "What can I do with this program?"
  - Descriptive - "What is this? What does it do?"
  - Procedural - "How do I do this?"
  - Interpretive - "Why did this happen?"
  - Navigational - "Where am I?"
- Safety - provides a safety net.
The human mind has an envelope of risk which varies for different people and situations. A level that is comfortable for a beginner may make an experienced user feel restricted. New users should feel safe ("Are you sure . . . ?"), while expert users should be able to turn off safety checks.
- Context - the current document, selection or dialog box. Limit the user to one well-defined context. Two different states, for example, could be selection of a paragraph of text and selection of an individual character. Dim a selection that is not applicable in the current context, and avoid any sudden shift of context, which may be confusing.
- Aesthetics - the program interface should not offend the eye. Apply graphic design principles, and do not include anything that looks like a mistake, such as similar buttons that are not quite the same size. The program should respond in a timely fashion, and should not appear sluggish.

Future developments in interface design

Attempts to predict the future of technology have often proved to be spectacularly inaccurate. It is therefore useful to bear in mind that any speculation about the future of the human-computer interface can be based only on what is currently known to be possible. The current economic model is one in which the cost of hardware is decreasing while speed and capacity are increasing, suggesting that computational facilities will become increasingly ubiquitous, and that the degree of human-computer interaction will continue to increase. The miniaturisation of hardware components, together with ever lower power requirements, will make it possible to deploy embedded computer systems in an increasing range of applications, including hand-held mobile devices, vehicles, household appliances and personal accoutrements. New display technologies have already appeared, making it possible to view multimedia content or take part in a video conference using a mobile phone, for example.
TFT monitors, whilst currently still significantly more expensive than CRT monitors, have many advantages, including being lighter, having a smaller footprint, using considerably less power, and emitting less heat and radiation. The assimilation of computation into the environment means that it is already possible to automate the monitoring and control of temperature, humidity and lighting levels in an office building. New developments in I/O techniques may mean that the keyboard and mouse will soon be obsolete, as our computers learn to recognise voice commands, or even learn to converse with us. Alternative I/O techniques are already being developed to improve the availability of computing resources to various disadvantaged groups.
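The action-object and object-action grammars described in the design-principles list above can be sketched in code. The `Canvas` class and its operations below are illustrative assumptions, not a real toolkit API; the point is the difference between a persistent tool mode and a persistent selection.

```python
class Canvas:
    """Toy drawing surface illustrating two user-interface grammars."""

    def __init__(self):
        self.objects = []       # shapes drawn so far
        self.tool = None        # action-object: the persistent mode
        self.selection = None   # object-action: the current selection

    # --- action-object grammar: pick a tool, then every click uses it ---
    def select_tool(self, tool):
        self.tool = tool        # mode persists until the tool is deselected

    def click(self, x, y):
        if self.tool == "rectangle":
            self.objects.append(("rect", x, y))
        elif self.tool == "circle":
            self.objects.append(("circle", x, y))

    # --- object-action grammar: pick an object, then apply an action ---
    def select_object(self, index):
        self.selection = index

    def delete_selected(self):
        if self.selection is not None:
            self.objects.pop(self.selection)
            self.selection = None

c = Canvas()
c.select_tool("rectangle")   # action first ...
c.click(1, 2)                # ... and it operates on every subsequent click
c.click(3, 4)
c.select_object(0)           # object first ...
c.delete_selected()          # ... then the action applies to that object
```

The tool mode persisting across clicks is the action-object behaviour of a drawing program; selecting text and then applying bold or delete is the object-action behaviour of a word processor.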
Rubella or German Measles

Rubella, sometimes known as German measles or three-day measles, is caused by the rubella virus, a member of the family Togaviridae. Note that German measles is not the same as measles. Although it is usually transmitted from one person to another by respiratory droplets, the rubella virus can also be passed through the blood from an infected mother to her baby during pregnancy. Rubella contracted this way, referred to as congenital rubella, can cause significant and permanent damage to the baby, especially if the infection occurs early in the pregnancy. This is why pregnant mothers and women planning to become pregnant are routinely tested for evidence of past rubella infection or vaccination. If a person has had either a rubella vaccination or a rubella infection in the past, they are considered protected against this infection.

After a 14-21 day incubation period, rubella infections start with mild, nonspecific, cold-like symptoms, with pain along the lymph nodes of the neck region and throat. After about 24 hours, a rash develops all over the body; it usually disappears within 3 days. People with rubella are contagious from one week before to one week after the rash appears. There is no available treatment, and the infection usually resolves on its own without any consequences. Rare complications of rubella include arthritis, infection of the nerves (neuritis) and, in some extremely rare cases, chronic brain infections.

The best approach is ensuring that your child is immunized against rubella. The rubella vaccine is part of the routine childhood vaccine series, and thanks to it, rubella cases are now relatively rare. Women who plan to become pregnant and have not yet been vaccinated against rubella, or are unsure whether they have had rubella or a vaccine against the disease, should speak to their healthcare provider to ensure they are protected against rubella.
Learning logs are a personalized learning resource for children. In their learning logs, children record their responses to learning challenges set by their teachers, so each log is a unique record of the child's thinking and learning. The logs are a visually oriented development of earlier established models of learning journals; they can become an integral part of the teaching and learning program, and they have had a major impact on the drive to develop more independent learners.

The approach draws on several strands of earlier work: the importance of learners becoming aware of their own thought processes and gaining insight into the strategies they use to resolve problems or overcome difficulties; the critical need for students to become actively involved in the process of learning; research findings indicating that journals of this type are likely to increase metacognition by making students more aware of their own thought processes; and research using a "thinking book" to investigate the development of reflective thinking skills in children.

This model of learning logs differs significantly from these earlier models by giving children greater opportunity to use colourful graphic and physical representations to illustrate their thinking and learning, rather than relying solely on the written word. Much of the development of learning logs built on practical classroom applications of mapping and visual tools. This has motivated students who have had more difficulty expressing themselves through the conventional written form to engage in the process of reflective learning. The use of learning logs has now extended to schools in Australia, Canada and Thailand, in addition to their extensive use in schools throughout the UK. The sections below outline some of the practical applications of learning logs, along with a number of illustrations of the innovative thinking which has emerged as a product of this visual learning tool.
Use and scope

The process of using learning logs involves developing thinking and learning skills, which are enhanced by a peer partnership system. In this peer system, the children are encouraged to discuss and share their thoughts, as well as to develop their learning logs in a collaborative way. Learning logs also give students the opportunity to provide feedback to their teachers, helping to extend and elaborate their understanding. The learning log allows teachers to quickly and easily share weekly teaching objectives with the children. Once it is set up, the children take the lead role in sharing and developing their knowledge and understanding, and displaying it in a range of styles. The learning log is not an in-depth assessment tool but more of a snapshot of what the students have or have not understood in their lesson material. It can be used at any key stage and for a range of learning activities. It addresses the current creative agenda and has had considerable success with more challenging students.

References
- Blagg, Nigel (1991). Can We Teach Intelligence?. Lawrence Erlbaum Associates. ISBN 0-8058-0793-4.
- Ashman, Adrian F.; Robert N. F. Conway (1993). Using Cognitive Methods in the Classroom. Routledge. ISBN 0-415-06835-5.
- McCrindle, A. R.; C. A. Christensen (1995). "The Impact of Learning Journals on Metacognitive and Cognitive Processes and Learning Performance". Vol. 5, pp. 167-185.
- Susan; Richard White (1994). The Thinking Books. Falmer Press. ISBN 0-7507-0295-8.
- Caviglioli, Oliver; Ian Harris (2000). Mapwise: Accelerated Learning Through Visible Thinking. Network Educational Press. ISBN 1-85539-059-0.
- Caviglioli, Oliver; Ian Harris (2002). Thinking Skills & Eye Q: Visual Tools for Raising Intelligence. Network Educational Press. ISBN 1-85539-091-4.
- Powell, Robert (2006). Personalised Learning in the Classroom. Robert Powell Publications Ltd. ISBN 1-901841-25-1.
For years scientists have hypothesized that a rise in CO2 levels will cause the world's forests to use water more efficiently, but only recently was this theory confirmed, after Harvard University researchers performed the most comprehensive study of its kind to date. The team, led by researchers Trevor Keenan and Andrew Richardson, found that the world's forests are even more efficient than expected.

“This could be considered a beneficial effect of increased atmospheric carbon dioxide,” said Keenan, the first author of the paper. “What’s surprising is we didn’t expect the effect to be this big. A large proportion of the ecosystems in the world are limited by water. They don’t have enough water during the year to reach their maximum growth. If they become more efficient at using water, they should be able to take more carbon out of the atmosphere due to higher growth rates.”

How does CO2 relate to water use? It's all tied to photosynthesis, the fundamental process plants use to transform CO2 taken from the atmosphere into sugars, releasing O2. During photosynthesis, plants open tiny pores on their leaves, called stomata, to collect CO2. With more CO2 in the air than earlier generations of plants were used to, plants do not need to open their stomata as wide, or for as long. Less water escapes through the stomata, which translates into an increase in water-use efficiency. Farmers, for instance, have known this for a while, which is why many greenhouses pump in extra CO2 to promote crop growth.

So does this mean that the extra CO2 is actually good for the world's vegetation? In the short term, yes; in the long term, however, the chain of events triggered by an accelerated rise in CO2, such as the one we're currently on, will have an overall detrimental effect. “We’re still very concerned about what rising levels of atmospheric carbon dioxide mean for the planet,” Richardson cautioned.
“There is little doubt that as carbon dioxide continues to rise — and last month we just passed a critical milestone, 400 ppm, for the first time in human history — rising global temperatures and changes in rainfall patterns will, in coming decades, have very negative consequences for plant growth in many ecosystems around the world.”

Testing the CO2 fertilization effect on the world's forests, however, is much more difficult, since it requires a heck of a lot of data. Luckily, Harvard has been monitoring forests in the northeastern United States for 20 years using towers extending above the forest canopy, which allow researchers to determine how much carbon dioxide and water are going into or out of the ecosystem. The researchers also employed data from another 300 towers positioned in forests all over the globe; these haven't been deployed for nearly as long as those installed by Harvard, but they still provide valuable data.

When Keenan, Richardson, and their colleagues began to examine those records, they found that forests were storing more carbon and becoming more efficient in how they used water. These findings were not limited to a particular region of the globe; the trend was observed everywhere. “We went through every possible hypothesis of what could be going on, and ultimately what we were left with is that the only phenomenon that could cause this type of shift in water-use efficiency is rising atmospheric carbon dioxide,” Keenan said.

Next, the researchers plan to improve their assessment by gaining access to data collected from tropical and Arctic regions. “This larger dataset will help us to better understand the extent of the response we observed,” he said. “That in turn will help us to build better models, and improve predictions of the future of the Earth’s climate. Right now, all the models we have underrepresent this effect by as much as an order of magnitude, so the question is: What are the models not getting?
What do they need to incorporate to capture this effect, and how will that affect their projections for climate change?” The findings were presented in a paper published in the journal Nature.
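The water-use efficiency metric at the heart of the study can be illustrated with a toy calculation: carbon taken up per unit of water lost. The numbers below are invented for illustration only; real analyses use decades of flux-tower records, not three annual totals.

```python
def water_use_efficiency(carbon_uptake, water_loss):
    """WUE = carbon taken up per unit of water transpired."""
    return carbon_uptake / water_loss

# Hypothetical annual totals (carbon uptake in gC/m^2,
# water loss in kg H2O/m^2) for three sample years:
years  = [1995, 2005, 2015]
carbon = [1200, 1260, 1320]   # uptake rises slightly ...
water  = [400, 390, 380]      # ... while water loss falls
wue = [water_use_efficiency(c, w) for c, w in zip(carbon, water)]
# In this toy series, efficiency increases year over year,
# mirroring the trend the researchers report.
```

Even modest simultaneous changes in the two quantities compound: here a 10% rise in uptake and a 5% fall in water loss yield roughly a 16% gain in efficiency.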
Relief for Our Reefs By MARGARET WERTHEIM Brainless, immobile and with only the most primitive nervous systems, coral polyps have built some of the most magnificent structures on our planet. SILENTLY and steadily, a tragedy is unfolding beneath the ocean’s waves: Coral reefs around the world are disappearing. According to some projections, there could be few, if any, left by the end of the century. This dire and credible prediction has shocked many marine scientists, who had not realized how close to the tipping point coral reefs are. The news is especially disheartening because 2008 is the International Year of the Reef. The culprit here is carbon dioxide, the greenhouse gas that is responsible for global warming and that also is turning our oceans into an acid bath. Remember your mother’s warning that too much Coke would dissolve your teeth? Well, too much acid in the oceans prevents corals from growing their calciferous skeletons. In a December Science magazine article, researchers reported results of models in which they simulated the effects of carbon dioxide emissions over the next century. By 2050, the projections revealed, oceans will be too acidic for coral reefs to grow. Why should we care if coral reefs continue to grow? After all, they cover only 0.1 percent of the Earth’s surface. Unlike rain forests, they are tiny on a global scale. In terms of biodiversity, however, coral reefs are the rain forests of the ocean. Reefs are home to between 1 million and 9 million species. Nobody knows the exact number, says Nancy Knowlton, a coral reef expert at the Smithsonian Institution in Washington, D.C., because scientists have only just begun to seriously map marine biodiversity. That’s one of the goals of the Census of Marine Life being conducted by a network of researchers from more than 50 nations. If reefs disappear, at least half the species that live on them also might go extinct, according to the Science article. Here’s the problem. 
When carbon dioxide enters the ocean, it reacts with water to form carbonic acid. A few other chemical steps ensue, with the outcome that fewer carbonate ions are available for biological systems. Corals are not the only organisms that suffer; all shell-forming marine creatures are adversely affected. Taking a human analogy, it would be as if your bones could no longer keep growing. We are already seeing the effects of ocean acidification. Today, the concentration of carbon dioxide in the Earth's atmosphere is more than 380 parts per million. That's more than at any time during the last 20 million years. About 25 percent of this carbon dioxide ends up being absorbed by the oceans. As carbon dioxide levels have risen during the industrial era, the average pH level in the ocean, an indicator of acidity, has dropped by 0.1 pH unit. (On the pH scale, a lower number means more acidic.) That might not sound like much, but evidence from Antarctic ice cores shows that the global average is lower than at any time over almost half a million years. As the Science article notes, changes in atmospheric carbon dioxide over the last century "are two or three orders of magnitude higher than most of the changes seen in the past 420,000 years." Until recently, many ocean scientists had imagined that as global temperatures rise, corals might begin to adapt. But acidification is a far more serious problem for these inherently delicate organisms. "It's just not possible for organisms to adapt rapidly to such fundamental chemical changes in their environment," Knowlton says. Imagine, by way of comparison, being suddenly told that instead of drinking water, you'd have to settle for Coke all the time.

Major drop in growth

The corrosive effects of acidification are evident in the Great Barrier Reef in the Coral Sea off Queensland, Australia. Here, massive Porites corals have experienced a 20 percent drop in growth in the last 16 years.
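Because pH is a logarithmic scale, a drop of 0.1 pH units is larger than it sounds. A quick calculation (the function name here is ours, for illustration) shows it corresponds to roughly a 26 percent increase in hydrogen-ion concentration:

```python
def h_ion_increase(ph_drop):
    """Fractional increase in hydrogen-ion concentration [H+]
    for a given drop in pH. pH = -log10([H+]), so a drop of
    d units multiplies [H+] by 10**d."""
    return 10 ** ph_drop - 1

increase = h_ion_increase(0.1)   # ~0.259, i.e. about a 26% rise in [H+]
```

That is why a change that looks tiny on the pH scale is a fundamental shift in ocean chemistry for shell-forming organisms.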
The best-case scenario from the Intergovernmental Panel on Climate Change, which tracks global warming, predicts that carbon dioxide concentrations in the atmosphere will rise to 450 parts per million this century unless we change our consumption of fossil fuels quickly. Most models predict a rise to at least 500 parts per million if we don't change our consumption habits. That will spell disaster for coral reefs. Many developing nations rely on reef tourism as a crucial part of their economies. Brainless, immobile and with only the most primitive nervous systems, coral polyps have built some of the most magnificent structures on our planet. They protect us, feed us and astound us with their beauty. Now they need our help, and time is running out.

Margaret Wertheim is the co-creator, with her sister Christine, of the Crochet Coral Reef Project, now showing in New York. This appeared in the Los Angeles Times. (c) 2008 The Record, Bergen County, N.J. Provided by ProQuest Information and Learning. All Rights Reserved.
(painting by Robert Lobenwein)

Psychological first aid (PFA) is a humane, supportive response to a fellow human being who is suffering and who may need support. It is especially useful in the aftermath of a terrorist attack or a natural disaster. PFA involves the following themes:
- providing practical care and support which does not intrude;
- assessing needs and concerns;
- helping people to address basic needs (for example, food, water and information);
- listening to people, but not pressuring them to talk;
- comforting people and helping them to feel calm;
- helping people connect to information, services and social supports;
- protecting people from further harm.

PFA does not necessarily involve a discussion of the event that caused the distress, or any kind of psychological analysis. It is not counselling or a psychiatric intervention. It is psychosocial support, which can be provided by anyone who has undergone a training of about 12 hours' duration. PFA promotes the factors that seem to be most helpful to people's long-term recovery. These include:
- feeling safe, connected to others, calm and hopeful;
- having access to social, physical and emotional support;
- feeling able to help themselves, as individuals and communities.

World Mental Health Day is on October 10. Theme: "Dignity in Mental Health: Psychological and Mental Health First Aid".
How and Why Do Crickets Chirp?

Why do crickets chirp? The main, most important reason that crickets chirp is to attract and court a mate to reproduce with. Each species has its own unique chirp that is identifiable to the females of that species (only the males chirp). Scientists have observed that female crickets are more attracted to a particular type of chirp: one from a dominant male. More recent studies even show that female crickets prefer the higher-pitched, louder sounds of younger males over the deeper chirps of older males. When a female is interested in a male's chirp, she will turn her body to face the direction of the chirp, a response known as phonotaxis. New research suggests that crickets actually respond to sound in a way similar to dolphins!

But not all male crickets chirp. Researchers from the University of California have been studying crickets in Hawaii for over 20 years, and they have discovered that certain species of crickets have stopped chirping in order to avoid a parasitic predator. A tachinid fly known as Ormia ochracea targets singing crickets and lays its eggs right on them. When the eggs hatch, the maggots invade the cricket's body and live inside it until they become adults, at which point they tear their way out of the poor cricket, killing him (if he has not already died). As the research group continued monitoring Hawaii's crickets, they discovered that more and more of the crickets were becoming silent. Within 10 years, 90% of the crickets of these species had developed flat wings that were incapable of producing sound. That is just one example of how incredible Mother Nature's adaptations can be.

How do crickets chirp? Male crickets create their chirps by rubbing their forewings together. One wing has a jagged edge; when the flat edge of the other wing rubs against it, the chirp sound is produced. Male crickets generally have three distinct song types.
The “calling song” is the rhythmic, familiar chirp that you typically hear on a summer night. Its main purpose is simply to attract females. Then there is the “courtship song”, which features faster, deeper-sounding chirps; this song is used when a male is just about to mate. Finally, there is the “aggressive song”, a loud trill most often produced when two male crickets fight. Did you know crickets have ears on their knees?
Scurvy is a disease that results when people do not get enough vitamin C (also called ascorbic acid) in the diet over a period of weeks or months. Some of the effects of scurvy are spongy gums, loose teeth, weakened blood vessels that cause bleeding under the skin, and damage to bones and cartilage, which results in arthritis-like pain.

What Is Scurvy?

Scurvy was one of the first recognized dietary deficiency diseases. During the sea voyages of the fifteenth to eighteenth centuries, many sailors suffered from scurvy. The Portuguese navigator Vasco da Gama (ca. 1460-1524) lost half his crew to the disease during their voyage around the Cape of Good Hope, and the British admiral Sir Richard Hawkins (1532-1595) lost 10,000 sailors to scurvy. In 1747, the British naval physician James Lind conducted experiments to see which foods or liquids might be able to prevent scurvy. He found that lemons and oranges enabled sailors to recover from scurvy. Both of these citrus fruits are rich sources of vitamin C.

What Is the Role of Vitamin C in the Body?

Vitamin C is necessary for strong blood vessels; healthy skin, gums, and connective tissue; the formation of red blood cells; wound healing; and the absorption of iron from food.

Have You Ever Heard Anyone Called a "Limey"?

In Treatise of the Scurvy, published in 1753, James Lind described the first example of a research experiment set up as a controlled clinical trial. To study the treatment of scurvy, Lind divided sailors who had it into several groups and then fed each group different liquids and foods. He discovered that the group fed lemons and oranges was able to recover from scurvy. By the end of the eighteenth century, the British navy had its sailors drink a daily portion of lime or lemon juice to prevent scurvy. The American slang term for the English, "limeys," originated from that practice.

What Are the Symptoms of Scurvy?
The main symptom of scurvy is bleeding (hemorrhaging). Bleeding within the skin appears as spots or bruises, and wounds heal slowly. The gums become swollen, and gingivitis (jin-ji-VY-tis), inflammation of the gums, usually occurs. Bleeding can take place in the membranes covering the large bones, and in the membranes of the heart and brain; bleeding in or around vital organs can be fatal. Scurvy develops slowly. In the beginning, a person usually feels tired, irritable, and depressed. In the advanced stages of scurvy, laboratory tests show a complete absence of vitamin C in the body.

Who Is at Risk for Scurvy?

Scurvy is less prevalent today than it was in the time of Vasco da Gama and Richard Hawkins, but people whose diets lack a diversity of foods may develop scurvy or scurvy-like conditions. Infants who depend solely on processed cow's milk for nutrition and are not given vitamin C supplements are at risk for scurvy. Elderly people, whose diets often lack citrus fruits or vegetables that contain vitamin C, represent another at-risk group, as do people who follow diets that limit them to very few food choices.

How Is Scurvy Treated?

To treat scurvy, people take vitamin C supplements (vitamin pills) and eat foods rich in vitamin C. In addition to citrus fruits like oranges and grapefruit, good sources of vitamin C include broccoli, strawberries, cantaloupe, and other fruits and vegetables.
How does one say “for” in Spanish? Simple question, huh? Not really. In fact, understanding the answer to that seemingly simple question is one of the more difficult problems facing many Spanish students. The problem is that two Spanish prepositions, por and para, are frequently used for the English word “for.” (Actually, there are a number of other words that can also fit the bill, but we won't concern ourselves with them here because they don't seem to cause as much confusion.) The differences between them are sometimes subtle. If it's any consolation, prepositions can be just as difficult for people learning English. Why do we sometimes say something is under control, and sometimes say something is in control? Why are we in the house but at home? The rules sometimes escape logic. In Spanish, the key to understanding which preposition to use is to think of the meaning you want to convey. If I use a phrase such as “three for a dollar” in English, the “for” has a different meaning than it does in “this book is for you.” In the first case, “for” indicates an exchange or a rate, while in the second case it indicates an intention or direction. Thus the Spanish translations of the two phrases are different: “tres por un dólar” and “este libro es para ti.” The following chart shows some of the major uses of these two prepositions.

Uses for por:
- Expressing movement along, through, around, by or about: Anduve por las calles de la ciudad. I walked through the streets of the city.
- Denoting a time or duration when something occurs: Viajamos por tres semanas. We're traveling for three weeks.
- Expressing the cause (not the purpose) of an action: Me caí por la nieve. I fell down because of the snow.
- Meaning per: Dos por ciento. Two percent.
- Meaning supporting or in favor of: Trabajamos por derechos humanos. We work for human rights.
- Introducing the agent of an action after a passive verb: Fue escrito por Bob Woodward. It was written by Bob Woodward.
- Indicating means of transportation: Viajaré por avión. I will travel by plane. - Used in numerous expressions: Por ejemplo. For example. Por favor. Please. Uses for para: - Meaning for the purpose of or in order to: Para bailar la bamba, necesita una poca de gracia. In order to dance the bamba you need a little grace. - With a noun or pronoun as object, meaning for the benefit of or directed to: Es para usted. It’s for you. - Meaning to or in the direction of when referring to a specific place: Voy para Europa. I’m heading to Europe. - Meaning by or for when referring to a specific time: Necesito el regalo para mañana. I need the gift for tomorrow. Vamos a la casa de mi madre para el fin de semana. We’re going to my mother’s for the weekend.
Work is done only when a force moves an object, i.e. when you push, lift, or throw an object. WORK is A FORCE ACTING THROUGH A DISTANCE: you do work whenever you move something from one place to another.

Work = Force x Distance

Force is measured in newtons and distance is measured in meters, so the unit of work is the newton-meter. In the metric system the newton-meter is called the JOULE. A force of 1 newton exerted on an object that moves a distance of 1 meter does 1 newton-meter, or 1 joule, of work.

EXAMPLE of a joule calculation: if you lifted an object weighing 200 N (newtons) through a distance of 0.5 m (meters): 200 N x 0.5 m = 100 J

POWER is work divided by time; its unit is the joule per second. One watt is equal to 1 joule per second (1 J/sec).

EXAMPLE of power: this is why a bulldozer has more power than a person with a shovel. The bulldozer does more work in the same amount of time. As the process of doing work is made faster, power is increased.

The work that goes into the machine (the WORK INPUT) comes from the force that is applied to the machine, the effort force. The machine exerts a force, called the output force, over some distance. The WORK OUTPUT is used to overcome the resistance force, the force you and the machine are working against, which is often the weight of the object being moved.

Do machines increase the work you put into them? No. The work that comes out of a machine can never be greater than the work that goes into the machine: work is conserved. How do machines make work easier? Machines make work easier because they change either the SIZE or the DIRECTION of the force put into the machine. What you gain in force you pay for in distance, and what you gain in distance comes at the expense of force.

EFFICIENCY of the machine: the comparison of work output to work input is called the EFFICIENCY of the machine; it tells you how much work is lost to FRICTION.
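The definitions above can be expressed as a short worked example. This is an illustrative sketch (the function names are mine, not from the text), using the 200 N x 0.5 m example given above.

```python
# Work, power, and efficiency as defined in the notes above.
# SI units assumed: force in newtons (N), distance in meters (m), time in seconds (s).

def work_done(force_n: float, distance_m: float) -> float:
    """Work (J) = force (N) x distance (m)."""
    return force_n * distance_m

def power(work_j: float, time_s: float) -> float:
    """Power (W) = work (J) / time (s); 1 watt = 1 joule per second."""
    return work_j / time_s

def efficiency(work_output_j: float, work_input_j: float) -> float:
    """Efficiency = work output / work input (never greater than 1)."""
    return work_output_j / work_input_j

# Lifting a 200 N object through 0.5 m, as in the example above:
w = work_done(200, 0.5)
print(w)            # 100.0 (joules)

# Doing that work in 2 seconds:
print(power(w, 2))  # 50.0 (watts)
```

The same functions make the bulldozer comparison concrete: the same work done in less time gives a larger power value.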
MECHANICAL ADVANTAGE — the number of times a machine multiplies the effort force. The mechanical advantage tells you how much force is gained by using the machine. The more times a machine multiplies the effort force, the easier it is to do the job.

INCLINED PLANE — a flat, slanted surface, e.g. a ramp. An inclined plane is a simple machine with no moving parts. The ramp decreases the amount of force you need to exert, but it increases the distance over which you must exert your force. YOU solved the problem of how to get the snowblower into the truck for Dad when you used a table in the garage as a ramp.

WEDGE — an inclined plane that moves. Instead of an object moving along the inclined plane, the inclined plane itself moves to raise the object. As the wedge moves a greater distance, it raises the object with greater force. A wedge is usually a piece of wood or metal that is thinner at one end (think of a doorstop). The longer and thinner a wedge is, the less effort force is required to overcome the resistance force. Examples: a lock or a zipper.

SCREW — an inclined plane wrapped around a central bar or cylinder to form a spiral. A screw rotates, and with each turn moves a certain distance up or down. A screw multiplies an effort force by acting through a long distance. The closer together the threads, or ridges, of a screw, the longer the distance over which the effort force is exerted and the more the force is multiplied. Thus the mechanical advantage of a screw increases when the threads are closer together. Examples: corkscrews, nuts and bolts, faucets, and jar lids.

LEVER — a rigid bar that is free to pivot, or move about, a fixed point (the fulcrum). When a force is applied to a part of the bar by pushing or pulling it, the lever swings about the fulcrum and overcomes a resistance force.

FIRST-CLASS LEVERS — crowbars, seesaws, and pliers: the fulcrum is between the effort force (your push) and the resistance force (the nail).

SECOND-CLASS LEVERS — wheelbarrows, doors, nutcrackers, and bottle openers: the fulcrum is at the end of the lever. The resistance force is the weight of the load, and the effort force (at the other end) is the force you apply to the handles. Because the wheelbarrow decreases the distance, the force is increased. A second-class lever does not change the direction of the force applied to it.

THIRD-CLASS LEVERS — the fulcrum is at the end of the rod where you are holding it, as in a surf-casting rod. The effort force is applied by your other hand as you pull back on the rod, and the resistance force is at the top of the rod. A third-class lever multiplies the distance through which the output force moves, at the expense of effort force. Examples: shovels, hoes, hammers, tweezers, and baseball bats. These levers cannot multiply force.

PULLEY — a rope, belt, or chain wrapped around a grooved wheel. A pulley can change the direction of a force or the amount of force.

FIXED PULLEY — a pulley that is attached to a structure. It does NOT multiply the effort force; it only changes the direction of the effort force.

MOVABLE PULLEY — made by attaching a pulley to the object you are moving. For each meter the load moves, you must pull the rope two meters. This is because as the load moves, both the left and the right ropes move: two ropes each moving one meter equals two meters.

PULLEY SYSTEM — by combining fixed and movable pulleys, greater mechanical advantage can be obtained. As more pulleys are used, more sections of rope are attached to the system; each additional section of rope helps to support the object, so less force is required. A block and tackle is an example of a pulley system.
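The force/distance trade-off described above can be sketched numerically. This is an illustrative example, not from the text: the function names and the 400 N load are my own assumptions, using the ideal (frictionless) pulley relationships described above.

```python
# Mechanical advantage and the ideal pulley trade-off: what you gain in
# force you pay for in distance. Names and numbers are illustrative.

def mechanical_advantage(resistance_force_n: float, effort_force_n: float) -> float:
    """How many times the machine multiplies the effort force."""
    return resistance_force_n / effort_force_n

def effort_force_for_pulleys(load_n: float, supporting_ropes: int) -> float:
    """Ideal pulley system: each supporting rope section shares the load."""
    return load_n / supporting_ropes

def rope_pulled_for_lift(lift_m: float, supporting_ropes: int) -> float:
    """Distance of rope pulled to raise the load by lift_m."""
    return lift_m * supporting_ropes

# A movable pulley (2 supporting rope sections) lifting a 400 N load:
print(effort_force_for_pulleys(400, 2))  # 200.0 N of effort
print(rope_pulled_for_lift(1, 2))        # pull 2 m of rope per meter lifted
print(mechanical_advantage(400, 200))    # 2.0
```

Adding pulleys (more supporting rope sections, as in a block and tackle) reduces the effort force further while increasing the length of rope you must pull.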
Slide 1: Cognitive behavioral therapy

It is believed that most human emotions and behaviors, whether rational or irrational, functional or dysfunctional, are largely the result of what people think, imagine, or believe. The essence of CBT is to help clients change their cognitive processes in a way that enables them to overcome their maladaptive behaviors.

Slide 1, bullet 1 cont'd: ABC theory of emotions

A = activating event/situation, B = belief about the situation, C = emotional consequence of the thoughts/beliefs. If B is an irrational thought about A, it leads to irrational consequences, and the client will experience A as extremely stressful, upsetting, or catastrophic. In CBT, the therapist helps the client dispute the irrational part of the thoughts and replace it with rational beliefs, which should lead to a new evaluation of the problem and the activating event.

Slide 1, bullet 2: Origin of CBT

It all started with a Greek philosopher who observed that people are not disturbed by things but by their perceptions of things. In the 1960s this idea was tested for purposes of treatment, and CBT has since become a third force between the psychoanalytic and behaviorist approaches. CBT holds that thinking shapes behavior, and that our perceptions are based on what we have learned through observation and modeling (social learning theory).

Slide 1, bullet 3: Effectiveness

Extensive research has shown that CBT in group format is most effective for adolescent substance abusers because it addresses not only the event that caused them to react in a negative way but also the thought behind it. What makes it differ from individual CBT is the social force of cohesiveness.

Slide 2: Interventions

These are some of the interventions used with adolescent substance abusers. Although they all have their positive side, the most effective has been group therapy, because adolescents rely heavily on their peers.
Slide 3: Research bullet 2

If only antisocial adolescent substance abusers are placed together in a group, it does not necessarily mean that they will have positive outcomes, because they will feed off of each other's maladaptive behaviors. It is encouraged to put a mixture of types of adolescents into the group so there is a higher possibility of a positive outcome.

Slide 3: Research bullet 4

There are factors in group therapy that help adolescent substance abusers comply with treatment: the realization that others share similar problems, the development of socializing techniques, role modeling, rehearsal, and peer/therapist feedback.

Slide 4: Effectiveness bullet 2

The National Institute on Drug Abuse has identified 13 principles that should inform effective treatment:
- (1) No single treatment is appropriate for all individuals.
- (2) Treatment should be readily available.
- (3) Effective treatment needs to attend to the multiple needs of the individual, not just his/her drug abuse.
- (4) An individual's treatment and service plan must be assessed often and modified to meet the person's changing needs.
- (5) Remaining in treatment for an adequate period of time is critical for treatment effectiveness.
- (6) Counseling and other behavior therapies are critical components of virtually all effective treatments for addiction.
- (7) For certain types of disorders, medications are an important element of treatment, especially when combined with counseling and other behavioral therapies.
- (8) Addicted or drug-abusing individuals with coexisting mental disorders should have both disorders treated in an integrated way.
- (9) Medical management of withdrawal syndrome is only the first stage of addiction treatment and by itself does little to change long-term drug use.
- (10) Treatment does not need to be voluntary to be effective.
- (11) Possible drug use during treatment must be monitored continuously.
- (12) Treatment programs should provide assessment for HIV/AIDS, hepatitis B and C, tuberculosis, and other infectious diseases, and should provide counseling to help patients modify or change behaviors that place themselves or others at risk of infection.
- (13) As is the case with other chronic, relapsing diseases, recovery from drug addiction can be a long-term process and typically requires multiple episodes of treatment, including “booster” sessions and other forms of continuing care.
Great Minds, Great Lakes
- The Journey of the Lake Guardian
- The Lake Guardian Explores Lake Superior
- Investigating Lake Huron
- The Journey Continues on Lake Michigan
- The Lake Guardian Travels the Length of Lake Erie
- The End of the Journey, Lake Ontario

Who Governs the Great Lakes?

Because the United States and Canada share the Great Lakes as a border, many governments are involved with environmental problems in the Great Lakes Basin: on the federal level, the US Environmental Protection Agency and Environment Canada; eight state governments (Illinois, Indiana, Michigan, Minnesota, New York, Ohio, Pennsylvania, and Wisconsin); and two Canadian provinces (Ontario and Quebec). Having both Canada and the United States involved presents the unique situation of two nations responsible for managing and protecting a natural resource.

To officially agree on how to protect the Great Lakes, the United States and Canada signed a treaty in 1909 called the Boundary Waters Treaty. The treaty declared that neither Canada nor the United States has the right to pollute the resources of its neighbor. It also said that both countries have equal rights to the use of waterways that cross the international border of the Lakes. Despite the agreements made in the treaty, pollution problems began to mount, and by the early 1970s the two countries had to reconsider the Boundary Waters Treaty. The two countries decided to make a more specific commitment to restoring and maintaining the environmental health of the Great Lakes Basin. The agreement, called the Great Lakes Water Quality Agreement, was signed in 1972 and made a bi-national commission, the International Joint Commission (established under the 1909 treaty), responsible for reducing pollution in the Great Lakes and developing specific plans for cleaning up many of the pollution problems in the Basin. Making progress on the problems that affect the Great Lakes is not easy.
This is because the problems are not simple ones and because every proposal has ramifications that are both good and bad. For example, an environmental protection proposal that limits industrial growth may help prevent further pollution of the Great Lakes, but it may have negative effects on the economy and the availability of jobs.
Why is my water cloudy?

White, cloudy water can be caused by several things, but most commonly it is due to a "bacteria bloom." A bacteria bloom is usually associated with "new tank syndrome": ammonia builds up in the aquarium and the nitrogen cycle begins. As the aerobic bacteria establish themselves, they float through the water, creating a cloudy appearance. A bacteria bloom can also be caused by sudden increases in ammonia due to overfeeding or excess organic waste and decay, or by losses of large numbers of bacteria due to power outages or other circumstances.

Test the aquarium water for ammonia and nitrite. If either of these compounds is present, a bacteria culture should be added. Do not do a water change unless levels are dangerously high or fish show signs of stress; changing water will only lengthen the time needed for the bacteria to establish themselves. If the tank is an established aquarium (livestock has not been added in the past 2 months or longer), be sure you are not overfeeding. If the problem persists, there may be too many fish in the aquarium for the biological filter to handle adequately, which forces the bacteria to float freely throughout the aquarium. Additional biological filtration will need to be added, or some fish may need to be removed from the tank.

The Nitrogen Cycle

The nitrogen cycle is the most important and fundamental principle of controlling a closed aquatic environment. No one should begin an aquarium without fully understanding what the nitrogen cycle is and how it works. The illustration and the description explain the four steps in the nitrogen cycle. Fish waste, excess food and other decaying organic material break down into a toxic chemical compound called ammonia. Even at low levels, ammonia will increase the breathing rate of fish by irritating gill tissues. Damage to the body tissues of both fish and invertebrates will follow, causing disease and death.
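The troubleshooting steps above can be sketched as a small decision helper. This is an illustrative sketch, not from the source: the function and parameter names are my own, and the nitrate thresholds follow the figures quoted later in this article (about 50 ppm for freshwater, 10-15 ppm for marine aquariums).

```python
# Hypothetical helper encoding the troubleshooting advice above.
# Function name, parameters, and message wording are my own assumptions.

def water_advice(ammonia_ppm, nitrite_ppm, nitrate_ppm, marine=False):
    """Return a list of suggested actions based on test-kit readings."""
    advice = []
    # Any ammonia or nitrite means the biological filter is not yet established.
    if ammonia_ppm > 0 or nitrite_ppm > 0:
        advice.append("Add a bacteria culture; avoid water changes unless "
                      "levels are dangerously high or fish show stress.")
    # Nitrate limits quoted in the article: ~50 ppm freshwater, 10-15 ppm marine.
    nitrate_limit = 15 if marine else 50
    if nitrate_ppm > nitrate_limit:
        advice.append("Reduce nitrates: do a water change, clean filter "
                      "cartridges, vacuum the substrate, remove detritus.")
    if not advice:
        advice.append("Levels look fine; keep monitoring regularly.")
    return advice

for line in water_advice(ammonia_ppm=0.25, nitrite_ppm=0, nitrate_ppm=60):
    print(line)
```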
Aerobic (oxygen-needing) bacteria (Nitrosomonas) convert ammonia into nitrite. Nitrite is also a toxic chemical compound, equally as harmful to fish as ammonia. Nitrite destroys the hemoglobin in the blood of fish and invertebrates; without hemoglobin the blood cannot carry oxygen. The nitrite is converted by a second aerobic bacterium (Nitrobacter) into a far less toxic compound called nitrate. Nitrate levels in excess of 50 ppm in freshwater or 10-15 ppm in marine aquariums can cause stress, encourage disease and stunt growth. Be sure to regularly monitor nitrates with a nitrate test kit.

Nitrates can be removed by several means. Small amounts are absorbed naturally by plants and algae. The remaining nitrates can be effectively eliminated with a good aquarium maintenance program: regular water changes, cleaning filter cartridges, vacuuming substrates and removing detritus (organic waste buildups) will solve most nitrate problems. If problems persist, nitrate-removing media, denitrators and protein skimmers may be needed. Denitrators will biologically remove nitrates, while protein skimmers will remove organic waste before the nitrogen cycle breaks it down into nitrate. (Due to the low nitrate levels required in saltwater aquariums, a protein skimmer is recommended as a basic piece of equipment.) Note: some home water supplies contain nitrates. Water changes with this water will not be effective; a reverse osmosis or deionization unit may be necessary in these cases.

Starting the Nitrogen Cycle
A-level Physics/Forces, Fields and Energy/Thermal physics

Thermal physics deals with the changes that occur in substances when there is a change in temperature.
- 1 Internal energy
- 2 The thermodynamic temperature scale
- 3 Heating up substances
- 4 The gas laws

Internal energy

When you heat up a material, it may change state. The molecules vibrate with a greater amplitude and break apart from one another. The material has been supplied with energy, and you can feel it getting hotter. The increased kinetic and potential energy of the particles (the potential energy coming from their greater separation) is an increase in what we call internal energy. Internal energy is defined as the sum of the random kinetic and potential energies of all the particles in a substance. Therefore, an increase in temperature for a material means an increase in its internal energy.

The thermodynamic temperature scale

The Celsius scale of temperature depends on the properties of water: 0°C is the freezing point of water, and 100°C is the boiling point of water. It is a relative scale, because it is relative to the freezing and boiling points of water. The thermodynamic scale of temperature (represented by the letter T), however, is an absolute scale of temperature, and does not depend on the properties of any particular substance. It is also directly proportional to the amount of internal energy a substance possesses. This scale of temperature is defined in terms of internal energy, and is measured in kelvins (K). 0 K is defined as the temperature at which a substance has minimum internal energy, and is the lowest possible temperature. This temperature is known as absolute zero; it is the point at which molecules stop vibrating.

Converting between K and °C

The divisions of the kelvin scale are identical to the divisions of the Celsius scale, so that an increase of 1°C is equal to an increase of 1 K.
- K = °C + 273.15

This makes it simple to convert between the two, and since absolute zero is -273.15°C, you can simply rearrange the formula:

- °C = K − 273.15

to convert back from K to °C.

Heating up substances

When you apply heat to a substance, the temperature does not simply increase in a straight line. Some extra energy is required to break bonds between particles.

Energy and temperature changes

If we were to heat a block of ice at a steady rate and plot a graph of the temperature against time, we would get the following graph: [graph of temperature against time, with flat sections BC and DE at the melting and boiling points]

This shape is rather surprising. You would expect the line to increase in a straight line, with none of the breaks that you can see above. We should consider what is happening to the molecules of the water at each section of the graph to understand why this is so:
- The ice is below freezing point, but the temperature is increasing. The molecules are vibrating slowly, but begin to vibrate more.
- At 273 K (0°C) the ice is at melting point. The bonds between molecules are being broken and the molecules gain potential energy. This is the latent heat of fusion.
- The water now increases in temperature towards boiling point. The molecules vibrate even more and move around rapidly as their kinetic energy increases.
- At 373 K (100°C) the water is now at boiling point. Molecules completely break away from each other and their potential energy increases. DE is much larger than BC because ALL bonds need to be broken for a gas to form. (This is the latent heat of vapourisation.)
- The water is now steam and the molecules are moving around much faster than before. Their kinetic energy continues to increase as energy is supplied.

At the sections BC and DE, where there is a change of state, the molecules do not increase in kinetic energy, but increase in potential energy. The heat energy being supplied does not change the temperature at these sections, but is instead used to break the bonds between molecules.
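The conversion between the two scales, with absolute zero at -273.15°C, can be written as a pair of one-line functions (a trivial sketch; the function names are mine):

```python
# Kelvin/Celsius conversion as described above: K = °C + 273.15.

def celsius_to_kelvin(c: float) -> float:
    return c + 273.15

def kelvin_to_celsius(k: float) -> float:
    return k - 273.15

print(celsius_to_kelvin(0))    # 273.15 (freezing point of water)
print(celsius_to_kelvin(100))  # 373.15 (boiling point of water)
print(kelvin_to_celsius(0))    # -273.15 (absolute zero)
```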
Specific heat capacity

Some materials will heat up quicker than others. For example, metals are good conductors of heat, and provided they are the same mass and that the energy is supplied at the same rate, copper will increase in temperature quicker than water. The specific heat capacity tells us how much energy is required to increase the temperature of a substance, and is defined as the energy required to raise the temperature of 1 kg of the substance by 1 K. This can be represented as:

E = mcΔθ

where E is the energy supplied, m is the mass of the substance, c is the specific heat capacity, and Δθ is the change in temperature.

Measuring the specific heat capacity

To find the specific heat capacity of something, we can control all of the possible variables and then use them to calculate it. From the equation above, we can see that c = E / (mΔθ). This means that if we can supply a known amount of energy to a material of known mass, and measure the change in temperature, we can insert the values into the equation and obtain the specific heat capacity. To supply a known amount of energy, we can use an electric heater. You may recall that electrical energy can be found by E = VIt, so by measuring the voltage, the current and the time that the circuit is switched on, we will have a value for the energy supplied to the material. In the same time period that the circuit is switched on, we must take measurements of the change in temperature. An ordinary mercury thermometer may be used, although it is recommended to use a temperature sensor with a computer to make more precise and accurate measurements. Once we have taken readings of the temperature and energy at regular intervals of time, we can plot a graph of E against θ. We can calculate the gradient, making sure to use as much of the line in our calculation as possible, and divide it by the mass of the material to obtain the value of the material's specific heat capacity.

Specific latent heat

When you heat up a substance so that it changes state, the temperature stays the same during the change.
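The measurement described above reduces to c = E/(mΔθ) with E = VIt. Here is a minimal sketch with made-up illustrative readings (the 12 V, 3.5 A heater values are my own, chosen so the answer lands near water's well-known value):

```python
# Specific heat capacity from an electrical heating experiment:
# E = V * I * t, then c = E / (m * dT). Input values are illustrative.

def specific_heat_capacity(voltage_v, current_a, time_s, mass_kg, delta_t_k):
    energy_j = voltage_v * current_a * time_s   # E = VIt
    return energy_j / (mass_kg * delta_t_k)     # c = E / (m * dT)

# A 12 V, 3.5 A heater run for 200 s warms 0.5 kg of water by 4.0 K:
c = specific_heat_capacity(12, 3.5, 200, 0.5, 4.0)
print(c)  # 4200.0 J kg^-1 K^-1, close to water's accepted value
```

In the real experiment you would plot E against θ and take the gradient, as described above, rather than rely on a single pair of readings.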
Different substances will require more energy to change state than others. The specific latent heat tells us how much energy a substance requires to change state, and is defined as the energy required to change the state of 1 kg of the substance without a change in temperature. This can be written as the equation:

E = mL

where E is the energy supplied, m is the mass of the substance, and L is the specific latent heat.

The gas laws

There are four properties of a gas that are related to each other: the pressure, the temperature, the volume and the mass of the gas. These relationships are expressed as the gas laws.

Boyle's law relates the pressure of a gas to its volume. Specifically, it states that the pressure of a fixed mass of gas at constant temperature is inversely proportional to its volume. This can be expressed as pV = constant, or p1V1 = p2V2. You can picture this at the molecular level, if you were to imagine the number of collisions the particles of a gas make with the container of a particular size, and then imagine the increased number of collisions when the container is reduced in size but the number of particles remains the same. This is observed as an increase in the pressure of the gas.

Charles' law relates the volume of a gas with its temperature on the thermodynamic temperature scale, stating that the volume of a fixed mass of gas at constant pressure is directly proportional to its thermodynamic temperature. This can be expressed as V/T = constant, or V1/T1 = V2/T2. It is a little more difficult to understand why this is the case, because a gas will always take up the entire volume of its container. If you think about how a particle behaves when it is heated up, it will vibrate more and cause an increase in pressure, or harder and faster collisions of the molecules against the container. However, since pressure is to be kept constant in this case, the volume of the container will need to increase. Therefore, by increasing the temperature of the gas, we have increased its volume.

Equation for an Ideal Gas

pV = nRT

where n is the number of moles of gas, R is the Ideal Gas Constant, R ≈ 8.314 J/(mol·K), T is the ABSOLUTE temperature in K, p is the pressure in Pa (N m-2), and V is the volume in m3.
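The ideal gas equation above can be checked numerically; this sketch (function name mine) also illustrates Boyle's law, since halving the volume at constant temperature doubles the pressure:

```python
# pV = nRT with R = 8.314 J/(mol*K), as given above.

R = 8.314  # ideal gas constant, J/(mol*K)

def pressure(n_mol: float, temp_k: float, volume_m3: float) -> float:
    """p = nRT / V"""
    return n_mol * R * temp_k / volume_m3

# One mole at 273.15 K in 0.0224 m^3 (about 22.4 litres):
p1 = pressure(1, 273.15, 0.0224)
print(round(p1))  # about 101,000 Pa -- roughly atmospheric pressure

# Boyle's law: halving the volume at constant T doubles the pressure.
p2 = pressure(1, 273.15, 0.0112)
print(round(p2 / p1, 3))  # 2.0
```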
Properties of an Ideal Gas

1) Its particles should be monatomic.
2) The particles are infinitely small.
3) There are no interactions between the particles, hence all the energy is kinetic.
All of the above were important considerations in the elucidation of the structure of DNA.
1) Watson and Crick elucidated the structure of DNA in 1953. Their research built on and helped explain the findings of other scientists, including ________.

DNA is the molecular substance of genetic inheritance.
2) The transduction experiments done by Hershey and Chase, and the transformation experiments done by Griffith, supported the same conclusion, which was ________.

complementary base pairing
3) The fact that within a double-stranded DNA molecule, adenine forms two hydrogen bonds with thymine and cytosine forms three hydrogen bonds with guanine is known as ________.

4) DNA is synthesized through a process known as ________.

Meselson and Stahl
5) Who performed the classic experiments that proved DNA was copied by semiconservative replication?

6) DNA contains the template needed to copy itself, but as you learned in Chapter 4, it has no catalytic activity. What catalyzes the formation of phosphodiester bonds between adjacent nucleotides in the DNA polymer being formed?

The deoxyribonucleotide triphosphate substrates
7) What provides the energy for the polymerization reactions in DNA synthesis?

2, 1, 3, 5, 4
8) Put the following steps of DNA replication in chronological order. 1. Single-stranded binding proteins attach to DNA strands. 2. Hydrogen bonds between base pairs of antiparallel strands are broken. 3. Primase binds to the site of origin. 4. DNA polymerase binds to the template strand. 5. An RNA primer is created.

Telomerase ensures that the ends of the chromosomes are accurately replicated and eliminates telomere shortening.
Bodnar et al. (1998) used telomerase to extend the life span of normal human cells. Telomere shortening puts a limit on the number of times a cell can divide. How might adding telomerase affect cellular aging?

most normal somatic cells
Which of the following cells do not have active telomerase activity?
on average, 6 times each time the entire genome of a cell is replicated
DNA replication is highly accurate. It results in about one mistake per billion nucleotides. For the human genome, how often would errors occur?

The proofreading mechanism of DNA polymerase was not working properly.
Researchers found E. coli that had mutation rates 100 times higher than normal. What is a possible explanation for these results?

The parent strand is methylated.
In the mismatch repair process, enzyme complexes replace bases that were incorrectly inserted into the newly synthesized DNA strand. To function, they must be able to distinguish between the parent DNA strand and the new strand. How is this accomplished?

In humans, xeroderma pigmentosum is a disorder of the nucleotide excision repair mechanism. These individuals are unable to repair DNA damage caused by ultraviolet light. Which of the following are the most prominent types of mutations in individuals suffering from xeroderma pigmentosum?

adjacent pyrimidines on the same DNA strand that join by covalent bonding
What are pyrimidine dimers?

There are several enzymes involved in the nucleotide excision repair process. Recent studies have shown that xeroderma pigmentosum (an error in the nucleotide excision repair process) can result from mutations in one of seven genes. What can you infer from this finding?

decreased ability to repair certain DNA mutations
Hereditary nonpolyposis colorectal cancer (HNPCC) is an inherited disorder. The genetic defect identified is an error in the mismatch repair mechanism. Which of the following would be an expected result of this mutation?
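The arithmetic behind the first answer above (about 6 errors per replication) is simple to check; this sketch assumes one error per billion nucleotides over a human genome of roughly 6 billion nucleotides, consistent with the question:

```python
# One mistake per billion nucleotides across ~6 billion nucleotides
# replicated per cell division (diploid human genome).

genome_size = 6_000_000_000           # nucleotides replicated per division
errors_per_replication = genome_size / 1_000_000_000  # one error per 10^9

print(errors_per_replication)  # 6.0
```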
When the shuttle lifts off, thick clouds billow around the launch pad and the space plane appears to rise on white columns. A spectacular sight for the observer. But those clouds and columns are composed of a highly acidic vapour from the shuttle’s exhaust. Indeed, NASA tells visitors attending shuttle launches at the Kennedy Space Center that a powdery residue from the exhaust plumes could be deposited up to 10 kilometres from the launch pad. This highly acidic residue can irritate eyes and respiratory tracts; it can even damage the finish on your car. The agency suggests that visitors might like to buy a cover for their vehicles. Concern that the polluting effects of the exhaust could be far more widespread, however, and in particular that they are contributing to the destruction of the ozone layer, has provoked a diverse range of organisations to question the type of propellants being used to launch spacecraft. Groups ranging from local environmental associations to the National Space Council want NASA to investigate alternative propellants. They contend that because the current generation of launch vehicles has a finite lifetime and replacing them is inevitably going to be an expensive proposition, why not make the next generation of launchers at least relatively clean? The principal concern is the acidic exhaust that the shuttle discharges from the two boosters packed with solid propellant, its so-called solid-rocket boosters. These supplement the power from the shuttle’s main engines, providing more than 80 per cent of the thrust needed to get the craft off the ground. Most of the American space fleet, including the shuttle as well as the
A tsunami is an ocean disturbance resulting from seismic movement of the sea floor. The resulting wave moves across the ocean surface at hundreds of miles per hour. In deep water the passing wave may be only a foot or less in height; as it approaches the shallow shoreline, however, the wave grows large, bringing flooding and destruction. In recent years, sensitive pressure sensors have been placed on the seafloor to detect tsunami waves. The data must be transmitted to a surface buoy, and here a problem arises: it is difficult to send information through water. Dolphins, however, are experts at underwater communication. They are able to recognize specific calls up to 15 miles (25 km) away, and they send and receive several frequencies of sound or pressure waves. Dolphin receptors extract a clear message by overcoming signal interference and scattering. A company called EvoLogics has developed underwater electronics that mimic the communication ability of dolphins, and the system is now used in tsunami warning systems. Dolphins lead the way in warning and protecting coastal villages from tsunami waves.
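The "hundreds of miles per hour" figure follows from the standard shallow-water wave relation v = sqrt(g · d), which is not given in the original. A hedged sketch; the 4,000 m example depth is an assumed, typical open-ocean value:

```python
import math

def tsunami_speed_mph(depth_m: float) -> float:
    """Long-wavelength (shallow-water) wave speed v = sqrt(g * d), in mph.
    Tsunamis behave as 'shallow-water' waves even in the deep ocean,
    because their wavelength far exceeds the ocean depth."""
    g = 9.81                      # gravitational acceleration, m/s^2
    speed_ms = math.sqrt(g * depth_m)
    return speed_ms * 2.23694     # convert m/s to mph

# Example: an assumed open-ocean depth of 4,000 m
print(round(tsunami_speed_mph(4000)))  # about 443 mph: hundreds of miles per hour
```

As the depth shrinks near shore, the same relation makes the wave slow down and pile up in height, which is the flooding danger described above.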
The First Book of Maccabees is an excellent and reliable historical source on the period of the Maccabean revolt and its aftermath, 175–134 b.c. The book traces the struggle of the Judean Jews against the cruel oppression of the Seleucid rulers who had taken control of Judea away from the Ptolemaic Egyptian kings shortly after 200 b.c. At the center of the story are Mattathias, a country priest, and his five sons (Joannan, Simon, Judas, Eleazar, and Jonathan), who together lead a revolution against king Antiochus IV (surnamed Epiphanes), a murderous tyrant who was aggressively forcing Greek customs and culture on the Judeans, even to the extent of setting up an altar to Zeus in the Jerusalem temple. As the rebellion expands, the son Judas distinguishes himself as a courageous leader. By means of clever guerrilla warfare he eventually regains control of Jerusalem, cleanses the temple and rededicates it to God, an event commemorated annually to this day during the Festival of the Dedication (Hanukkah, 4.56-59). “Maccabeus” was Judas' nickname (2.4). It may mean “the hammer” or “chosen by the Lord.” The name was soon applied to all in this heroic family who gave their lives in this Jewish struggle for freedom from tyranny. It also came to be used as the name of this very important history book, and of several others dealing with the same period or similar themes. First Maccabees is preserved in Greek and other secondary versions, yet is clearly based on a Hebrew original, probably composed shortly after the reign of the Hasmonean priest-king John Hyrcanus I (134–104 b.c.), perhaps around 100 b.c. The history writing in this book is obviously modeled on the style of Kings and Chronicles in the Hebrew Bible. In a very brief nine-verse introduction, the author recaps events from the meteoric rise of Alexander the Great to the division of his conquered territories among his three top servants (generals).
History tells us that these three were Seleucus, taking command of Syria-Mesopotamia; Ptolemy, taking Egypt; and Antigonus, taking Macedonia. From 1.10 to the end of the book readers are thrown immediately into the flow of events precipitated by the revolt of the family of the Maccabees. After the death of Judas Maccabeus we see the movement of this heroic family into the power vacuum in Judea, styling themselves as priest-kings (later to be known as the Hasmoneans) and rulers of Judea. Of special importance is the mention in this book of the Hasideans (Hasidim), “the pious ones,” whose primary concern (then as now) is with the honoring of God by careful and pious observance of Torah in daily life (2.42; 7.13). The key message of this book is that God can be relied on for salvation, and to raise up courageous leaders like the Maccabees to bring rescue from oppression.

Outline:
- Introduction, Crisis, and Rebellion (1.1—2.70)
- Judas Maccabeus: Military Might (3.1—9.22)
- Jonathan: Political Power (9.23—12.53)
- Simon: Organizational Skill (13.1—16.24)
If you had a tiny set of probes drilled into the bore at various points and measured the air pressure, you would see the little variations in pressure that develop. Imagine this: air from the windway cuts across the labium, and some of the pressure rushes into the bore of the instrument; the length of the bore determines how long this filling takes. The air reaches a peak pressure and then depressurizes through the open tone holes. During the depressurization phase, the wind in the windway is deflected upward and actually creates a tiny vacuum that further depletes the bore. You get different notes by opening the tone holes with your fingers, which changes the effective bore length. Think of this: you're playing an "A" on your whistle, 440 Hz. That means this pressure cycle is occurring 440 times per second. Now, you take it outdoors on a cold day and the instrument plays a bit flat: temperature affects the speed of sound in the air (and, to a lesser degree, its viscosity). When voicing an instrument, you must keep in mind that air also pushes out of the bore through the opening near the windway called the "window". Some designers do not take this into consideration, and the result is a weakly voiced instrument. ALSO: note the chamfer in the block; this is critical to good voicing. Here is a simple CPVC close-bore whistle that you can build. Last Update: 1/30/2000
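The cold-day flatness is dominated by the speed of sound falling with temperature. A rough sketch, assuming an idealized open pipe with no end corrections, so the numbers are illustrative only, not a whistle design tool:

```python
import math

def speed_of_sound(temp_c: float) -> float:
    """Approximate speed of sound in dry air, m/s."""
    return 331.3 * math.sqrt(1 + temp_c / 273.15)

def pipe_frequency(length_m: float, temp_c: float) -> float:
    """Fundamental of an idealized open pipe, f = c / (2 * L).
    Real whistles need end corrections, so treat this as a sketch."""
    return speed_of_sound(temp_c) / (2 * length_m)

# Choose a bore length that sounds A (440 Hz) on a warm 20 C day
length = speed_of_sound(20) / (2 * 440)

print(round(pipe_frequency(length, 20)))  # 440
print(round(pipe_frequency(length, 0)))   # about 425: noticeably flat on a cold day
```

The roughly 15 Hz drop between 20 C and 0 C is over half a semitone, which matches the "plays a bit flat" experience described above.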
Yesterday, I shared photos of a young Bald Eagle showing off its hopping skills. The eagle left its perch when a more mature Bald Eagle showed up, and I managed to capture some photos of the mature eagle landing on a dead limb. This took place at Miner’s Cove, which is located at the Sequoyah National Wildlife Refuge in Oklahoma. However, the mature Bald Eagle didn’t stay around for very long, so I was only able to snap a few photos of it. Bald Eagles are one of the most iconic birds in North America and were once among the most endangered. By the early 1960s, there were only about 400 breeding pairs of Bald Eagles left in the lower 48 United States. Thanks to conservation efforts, their population has since rebounded to over 70,000 breeding pairs. Bald Eagles are large and powerful birds of prey, with a wingspan of up to 7 feet and a weight of up to 14 pounds. Adults have a dark brown body and wings, a white head and tail, yellow legs and feet, and a large yellow beak. Bald Eagles are carnivores and eat a variety of small animals, including fish, rabbits, and rodents. They are also known to eat carrion. These eagles are monogamous and mate for life, building their nests in tall trees near water. The female lays 2-3 eggs, which hatch after about 35 days, and the young eagles stay with their parents for about 10-12 weeks before they are ready to fly on their own. Bald Eagles are an important part of the ecosystem, helping to control populations of small animals, and they also clean up the environment by eating carrion. In the United States, Bald Eagles are a symbol of freedom and strength, and they serve as a reminder of the importance of conservation. Here are some additional facts about Bald Eagles:
- Bald Eagles are the national bird of the United States.
- Bald Eagles are found in North America, from Canada to Mexico.
- Bald Eagles are carnivores and eat fish, rabbits, rodents, and other small animals.
- Bald Eagles are monogamous birds and mate for life.
- Bald Eagles build their nests in tall trees near water.
- The female Bald Eagle lays 2-3 eggs, which hatch after about 35 days.
- The young Bald Eagles stay with their parents for about 10-12 weeks before they are ready to fly on their own.
- Bald Eagles are an important part of the ecosystem.
- Bald Eagles are a symbol of freedom and strength in the United States.
- Bald Eagles are a reminder of the importance of conservation.
- Camera: Canon EOS R7
- Lens: Canon RF 800 mm F11 IS STM
I was photographing this Bald Eagle from inside my pickup. I had a beanbag draped over the open window to support my camera and lens.
- Location: Sequoyah National Wildlife Refuge (Oklahoma)
- Date and Time Taken: April 2, 2023 (9:46 A.M.)
- Exposure Mode: Manual
- Aperture: f/11
- Shutter speed: 1/1600
- ISO: 800 (Auto)
- Exposure Compensation: +1/3
- Focal Length: 800 mm
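For readers curious how the settings above combine, the standard exposure-value formula summarizes them. This is generic photography arithmetic, not something stated in the post:

```python
import math

def exposure_value(f_number: float, shutter_s: float, iso: int) -> float:
    """ISO-adjusted exposure value: EV = log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# The settings listed above: f/11, 1/1600 s, ISO 800
print(round(exposure_value(11, 1 / 1600, 800), 1))  # about 14.6, i.e. bright daylight
```

An EV around 14-15 is consistent with the classic "sunny 16" conditions of a clear morning, which fits the shot described.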
Researchers are proposing gene alteration to save ocean ecosystems ravaged by rising temperatures. The Great Barrier Reef was recently proclaimed dead in a tongue-in-cheek obituary. While CNN reassured us this was an exaggeration, it brought the impending threat of coral annihilation to the forefront of public thought. Now, researchers are racing against the clock to find a solution to rampant coral bleaching, which is the main culprit. Smithsonian.com reports that Rachel Levin, a molecular biologist, has recently proposed a way to save these marine ecosystems in a paper in the journal Frontiers in Microbiology. Her strategy would repopulate bleached coral with lab-engineered symbionts, rather than attempting to find healthy specimens in the wild and relocate them. This method could actually work. Smithsonian explains that since coral bleaching is a breakdown of a symbiotic union, introducing new symbionts could restore an entire reef: The coral animal itself is like a building developer who constructs the scaffolding of a high-rise apartment complex. The developer rents out each of the billions of rooms to single-celled, photosynthetic microbes called Symbiodinium. In exchange for a safe place to live, Symbiodinium makes food for the coral using photosynthesis. A bleached coral, by contrast, is like a deserted building. With no tenants to make their meals, the coral eventually dies. Though bleaching can be deadly, it’s actually a clever evolutionary strategy of the coral. The Symbiodinium are expected to uphold their end of the bargain, but when the water gets too warm, they stop photosynthesizing. When that food becomes scarce, the coral sends an eviction notice. Rising water temperatures are causing the reefs to evict their Symbiodinium, but there are none around to replace them.
This has led Levin to the conclusion that she needs to create “super-symbionts” which can withstand rising temperatures and continue to produce food for the reefs. Levin identified a strain of Symbiodinium that was resistant to heat and inserted copies of its crucial heat-tolerance genes into a weaker strain, creating a new strain adapted to live with corals from temperate regions. However, inserting these genes was no easy task, since the cells are encased in armored plates, cell membranes and a cell wall. The hurdle became how to insert the genes without breaking apart and killing the cell. For this, Levin employed a virus, which she modified to carry the heat-resistant genes, and infected the cells with it. While this method may seem extreme, Levin does not consider herself a “crazy scientist,” citing gene alteration in mosquitoes as an even wilder use of science: to protect humans from devastating diseases like malaria or Zika, scientists have been willing to try more drastic techniques, such as releasing mosquitoes genetically programmed to pass on lethal genes. The genetic modifications needed to save corals, Levin argues, would not be nearly as extreme. She adds that much more controlled lab testing is required before genetically modified Symbiodinium could be released into the environment to repopulate dying coral reefs.
These Find the Letter Printables: V is for Vampire Bat Worksheets will help your preschool and early-elementary aged children work on recognizing the letter V among many other letters of the alphabet. This Rainforest animal themed worksheet is a perfect extra for your reading preparedness studies! Over the next few weeks, I will be sharing some free Rainforest Animals Find the Letter Printables and other resources to help your children learn all about the Alphabet in a rad In The Rainforest theme. I hope you find these homeschooling freebies useful in your homeschool adventures. You can have the kids learn shapes by putting a circle around the capital ‘V’ and a square around the lower case ‘v’. They could use different colors: for example, green around the “big V” and blue around the “little v”. You could also grab a set of Do A Dot Art Markers (affiliate link) so they can just dab over the right letters! However you decide to use these sheets, they are a simple way to reinforce alphabet recognition! By the way, my entire life I thought it was “Rainforest,” but when I type that into my blog I get the dreaded squiggles. Spellcheck wants me to put a space and make it two words, like “rain forest”… So, I did some research. The Oxford Dictionary shows it as one word and Merriam-Webster shows it as two; Google shows both. So, I will continue to write it as one word, but it appears both are right these days!
Rainforest Unit Study Resources:
- Rainforest Animals Facts Sheets
- Rainforest Animals Preschool Learning Kit
- Rainforest Activities for Preschoolers
More FREE Find the Letter Resources: You can also access all of the letter find worksheets here at 3 Boys and a Dog!
Find Letter of the Week Crafts at Crystal and Company
Adorable Fingerprint Alphabet Art at Easy, Peasy, and Fun
More Vampire Bat Resources:
- Preschool Science: Studying Bats
Enhance Your Rainforest Animals Unit Study: Looking for more fun items to help you in teaching your kids about the animals in the Rainforest? Check out my top affiliate picks from Amazon!
Rainforest Books for Kids
- If I Ran the Rain Forest: All About Tropical Rain Forests (Cat in the Hat's Learning Library)
- National Geographic Kids: Explore My World Rain Forests
- The Rainforest Grew All Around
- Afternoon on the Amazon (Magic Tree House, Book 6)
- Magic School Bus Presents: The Rainforest
Working on the implementation of the Aistear, Siolta, First 5, Aim and Better Start programs, we have always focused on our outdoor environments as creative learning spaces for children. Outdoor environments fully support the immersion of children in play as an active process regardless of the end product (Bruce, 2001). To facilitate role play and community belonging we follow an emergent curriculum focusing on the interests of the child. After 12 years of outdoor play, we have discovered what works well with children. Uninterrupted outdoor play-based learning is how children learn optimally, and by designing the right environments we provide opportunities for the development of multi-layered, complex levels of play. Children take more risks, which leads to a deeper exploration of their ability and their environment. Helping children to feel a sense of belonging and positive health and wellbeing is an important aspect of our work, even more so due to the pandemic. Practitioners recognise that cognitive and emotional development are interwoven for children (Zigler, 2007). Age-appropriate loose-part materials (sticks, leaves, sand, mud, stones, log walks, etc.) all promote engineering, construction, and creative skills for children. Natural loose parts also provide endless opportunities for sensory exploration through smell, touch, texture, strength, and flexibility. Plastic trugs and plastic boxes are durable and allow multiple play opportunities. When discussing loose parts we often tend to focus on small objects, when loose parts can also be of varying sizes, such as plastic or wooden train tracks, plastic cars, containers, and cups of all sizes. As time has gone on, through observation, planning, reflection, and discussion with the children here, we have learned together that free play is supported more through supplying loose parts in containers. This promotes a child's creativity and the multiuse of items through experimentation.
Involving children in all aspects of decision making minimizes the adults' interpretation of the child's interest (Cooke & Kothari, 2010). Giving the child a box of mixed train tracks of different sizes and textures, without trains, will promote their own imaginative play, and the results can be amazing. Wooden block cut-offs are used for building and constructing, or as phones or tablets. Incorporating wooden cut-offs can be a project with support from parents. We have 20 block pieces, all of varying sizes, that were donated by a carpenter parent 5 years ago and are used in multiple ways by the children. This is an example of using an emergent interest of the child (building) while also including the parent in the support of our curriculum. When setting up mud kitchens we always use real-life products which are recycled through the school. Kettles, lunch boxes, plant pots, plates, shovels, buckets, and pans all facilitate free play while also building strength and providing endless opportunities for maths skills development. Calculating how many buckets of mud they need to fill a kettle, a bigger bucket or a pot provides schema support as well as collaborative opportunities. The use of real-life artifacts also promotes role-play, story-making, and a culture of community. Within this community of practice, children have shared experiences, shared goals and values, and respect for each other's knowledge and inputs.
Small Scale Gardening
Learning through play offers many opportunities for PSED in children (Manning-Morton). Gardening is a holistic, interactive activity providing endless opportunities for learning through play. Children engage all senses while developing patience, fine motor skills, experimentation, memorising, problem-solving, responsibility, and collaboration. Children learn to co-operate and display the social rules of society when given the opportunity to play and socialise with their peers (Unicef, 2018).
Children learn about density, weight, permeability, and combination. Gardening supports STEAM in everyday activities. Composting fruit peels in our wormery is chemistry in action: over a number of weeks and months, children watch the peels break down, and the separation of water and food materials is visually observable. Exploration of seed types, flowers, nuts, vegetables, and seed potatoes allows children to learn about botany and sustainability. By monitoring the dryness of the soil when minding their plants, children become meteorologists, beginning to understand the effects of weather on their plants. On a wider global community level, the children of today are learning how to support sustainability. As a green nature school, our focus is on building sustainable living skills through our sustainable community of play and gardening. Through gardening, children become responsible consumers and producers. Gardening also allows the children to become decision-makers in daily activities that affect them. Participation not only in the activity but in the design of the content of the gardening plan provides meaning. As play practitioners, we are continually exploring with children how to make their environments more interactive and meaningful to them. There are a lot of small-scale garden projects appropriate for children. Filling pots with compost and adding seeds, or leaving nature to provide the pot with seeds, is a fantastic project for kids to watch the full life cycle of a plant. Potatoes can be grown in compost bags, window boxes are great for adding plants and seeds, and we also recycle pencil shavings as mulch to protect the soil. Empty twistable crayons become plant supports, and tea bags and coffee grounds feed the plants. Harvesting apples, nuts, berries, flowers and vegetables which the child has watched grow from a bud, bulb, or seed is hugely rewarding and empowering for a child.
Denise Sheridan (Author) Owner/Manager – Úlla Beag Preschool Denise works full-time as an Early Years Teacher specialising in creating learning environments where the child’s interests are paramount. She runs Úlla Beag in East Clare and is also currently completing a Masters in Early Childhood Studies through the Portobello Institute and the University of East London. Ulla Beag has been a member service of Early Childhood Ireland for 12 years.
Oxygen is a chemical element with the symbol O and atomic number 8. It is a highly reactive nonmetal that makes up about 21% of the Earth’s atmosphere. Oxygen is an essential element for life as we know it and is involved in many chemical and biological processes, including respiration, combustion, and oxidation. Despite being a nonmetal, oxygen can sometimes exhibit metallic properties, leading to confusion about its classification. Metals are characterized by their ability to conduct electricity and heat, their ductility and malleability, and their tendency to lose electrons in chemical reactions. Nonmetals, on the other hand, are generally poor conductors of electricity and heat, brittle, and tend to gain electrons in chemical reactions. While oxygen is not a metal, it can sometimes exhibit metallic properties under certain conditions. For example, when oxygen is subjected to high pressure, it can become a metallic solid with properties similar to those of metals, including conductivity and ductility. However, this is a highly unusual state of oxygen and not representative of its typical behavior. In its normal state, oxygen is a gas that is highly reactive with other elements, particularly metals. Oxygen readily combines with metals to form metal oxides, which are compounds that consist of a metal and oxygen. Metal oxides can be basic, acidic, or amphoteric, depending on the properties of the metal and the conditions under which the oxide is formed. In summary, oxygen is a nonmetal that is essential for life and involved in many chemical and biological processes. While it can exhibit metallic properties under certain conditions, this is not representative of its typical behavior, and it is not considered a metal. Oxygen is a highly reactive element that readily combines with other elements, particularly metals, to form metal oxides. 
Different States of Oxygen: Oxygen is a chemical element that exists in different states depending on the temperature, pressure, and other environmental conditions. The most common states of oxygen include:
- Gas: Oxygen is most commonly found in the form of a gas, which makes up approximately 21% of the Earth’s atmosphere. It is colorless, odorless, and tasteless in its gaseous state.
- Liquid: Oxygen can also exist as a liquid at very low temperatures or high pressures. Liquid oxygen is pale blue in color and has a boiling point of -183 °C.
- Solid: Oxygen can be frozen into a solid at very low temperatures (below about -218 °C at atmospheric pressure). Solid oxygen is a pale blue crystalline solid that cannot persist at room temperature and atmospheric pressure.
- Metallic: Under extreme pressure, such as that produced in laboratory experiments, oxygen can be transformed into a metallic state. Metallic oxygen is a highly unusual state and is not found in nature under normal conditions.
In summary, oxygen can exist in different states depending on the temperature, pressure, and other environmental conditions. The most common states of oxygen are gas, liquid, and solid, while metallic oxygen is a highly unusual and uncommon state.
Applications of the different states of oxygen:
- Liquid Oxygen: Liquid oxygen is used in various industrial and medical applications due to its ability to support combustion and enhance oxidation reactions. It is used in rocket propulsion systems, welding and cutting torches, and in the production of steel and other metals. Liquid oxygen is also used in the medical industry for respiratory support in patients with breathing difficulties.
- Solid Oxygen: Because it exists only at cryogenic temperatures, solid oxygen has little direct practical use and is chiefly of scientific interest; the powerful oxidizer used in rocket propulsion systems is oxygen in its liquid, not solid, form.
- Metallic Oxygen: The metallic state of oxygen is a highly unusual state and is not found in nature under normal conditions. However, researchers have synthesized metallic oxygen in laboratory experiments by subjecting it to extremely high pressure. Metallic oxygen has potential applications in superconductivity and high-energy physics, where its unique electronic and magnetic properties could be harnessed for advanced technologies.
In summary, the different states of oxygen have important applications in a variety of fields, from rocket propulsion systems to medical respiratory support. While the metallic state of oxygen is not found in nature under normal conditions, its potential applications in advanced technologies make it an area of active research and exploration.
Firstly, I appreciate that discussions about family violence can be a trigger for many people. Please make sure you feel comfortable proceeding before reading this article. Family violence takes many forms: physical and sexual violence, verbal abuse, and coercive and controlling behaviour. The overarching element is the use of power and control by the perpetrator. The most immediate impact of family violence on children is when it manifests as direct physical or sexual violence perpetrated on the child themselves. However, it is well accepted that children, when not the direct physical victim, do not simply witness family violence from a distance. It has been said that “children who experience family violence in their homes experience it with all their senses. They hear it, see it and experience the aftermath”. Over recent years more attention has been given by researchers to exploring the impact of exposure to the use of power and control by one parent over the other on children’s health, learning, wellbeing and development. Research has found that the impact on children of experiencing family violence manifests differently according to the age and developmental level of the child. For example:
- Infants whose mothers were subjected to violence had lower birth weights and higher rates of pre-term labour, foetal distress and death;
- Young children show delays in toilet training and in the development of verbal skills and memory;
- School-aged children have been found to have higher rates of conduct disorders and lower educational attainment, and are more likely to suffer from depression, display aggression and have difficulty developing positive peer relationships due to poor social skills;
- The impact of experiencing family violence can continue into adolescence and adulthood in any number of ways, including continuing to experience (or even developing) depression and anxiety.
A review of the studies shows that there are longer-term impacts as well.
One long-term impact is that children who experience family violence often experience violence in a number of other settings – such as in their school and in the community. Studies have also shown that another longer-term impact can be “intergenerational transmission”; US research has shown a correlation between childhood experiences of violence in young girls and their likelihood of becoming a victim of family violence as an adult; it also found that men whose mothers had experienced family violence were more likely to become perpetrators of family violence. Liability limited by a scheme approved under Professional Standards Legislation
ICSE Class 6 Maths MCQ based on the connection of negative numbers to daily life. Our free online maths test quiz for ICSE Class 6 will help you improve your maths skills on every concept in a fun, interactive way. Hints for the questions in this quiz include:
- The given number is positive.
- Find the difference between 15 and -4, and show the integer rising in the upward direction.
- When we move to the left on a number line, the value decreases.
- A point above the surface must have the opposite sign to one below the surface.
- Represent the withdrawn amount as a negative integer, then add the current balance and the withdrawn amount.
- Find the difference in temperature.
- Loss is the opposite of gain.
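Two of these daily-life uses of negative numbers, the bank withdrawal and the temperature difference, can be made concrete in a few lines of Python; the balance and amounts are made-up example values:

```python
def new_balance(balance: int, withdrawal: int) -> int:
    """Model a withdrawal as a negative integer added to the balance."""
    return balance + (-withdrawal)

def temperature_difference(first: int, second: int) -> int:
    """Difference between two temperatures, e.g. 15 and -4 degrees."""
    return first - second

print(new_balance(500, 120))           # 380
print(temperature_difference(15, -4))  # 19
```

Note that subtracting a negative number (15 - (-4)) gives a larger difference than subtracting a positive one, which is exactly the point of the temperature hint.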
A new international analysis of marine fossils shows that the warming of the polar oceans during the Eocene, a greenhouse period that provides a glimpse of Earth’s potential future climate, was greater than previously thought. By studying the chemical composition of fossilized foraminifera, tiny single-celled organisms that lived in shallow tropical waters, a team of researchers generated precise estimates of tropical sea surface temperatures and seawater chemistry during the Eocene Epoch, 56-34 million years ago. Using these data, researchers fine-tuned estimates from previous foram studies that captured polar conditions to show tropical oceans warmed substantially in the Eocene, but not as much as polar oceans. Importantly, when modern climate models – the same as those used in the United Nations’ recent Intergovernmental Panel on Climate Change reports – were run under Eocene conditions, many could not replicate these findings. Instead, the models consistently underestimated polar ocean warming in the Eocene.
Why Teach Evolution? Why is it so important to teach evolution? After all, many questions in biology can be answered without mentioning evolution: How do birds fly? How can certain plants grow in the desert? Why do children resemble their parents? Each of these questions has an immediate answer involving aerodynamics, the storage and use of water by plants, or the mechanisms of heredity. Students ask about such things all the time. The answers to these questions often raise deeper questions that are sometimes asked by students: How did things come to be that way? What is the advantage to birds of flying? How did desert plants come to differ from others? How did an individual organism come to have its particular genetic endowment? Answering questions like these requires a historical context—a framework of understanding that recognizes change through time. People who study nature closely have always asked these kinds of questions. Over time, two observations have proved to be especially perplexing. The older of these has to do with the diversity of life: Why are there so many different kinds of plants and animals? The more we explore the world, the more impressed we are with the multiplicity of kinds of organisms. In the mid-nineteenth century, when Charles Darwin was writing On the Origin of Species, naturalists recognized several tens of thousands of different plant and animal species. By the middle of the twentieth century, biologists had paid more attention to less conspicuous forms of life, from insects to microorganisms, and the estimate was up to 1 or 2 million. Since then, investigations in tropical rain forests—the center of much of the world's biological diversity—have multiplied those estimates at least tenfold. What process has created this extraordinary variety of life? The second question involves the inverse of life's diversity. How can the similarities among organisms be explained? 
Humans have always noticed the similarities among closely related species, but it gradually became apparent that even distantly related species share many anatomical and functional characteristics. The bones in a whale's front flippers are arranged in much the same way as the bones in our own arms. As organisms grow from fertilized egg cells into embryos, they pass through many similar developmental stages. Furthermore, as paleontologists studied the fossil record, they discovered countless extinct species that are clearly related in various ways to organisms living today. This question has emerged with even greater force as modern experimental biology has focused on processes at the cellular and molecular level. From bacteria to yeast to mice to humans, all living things use the same biochemical machinery to carry out the basic processes of life. Many of the proteins that make up cells and catalyze chemical reactions in the body are virtually identical across species. Certain human genes that code for proteins differ little from the corresponding genes in fruit flies, mice, and primates. All living things use the same biochemical system to pass genetic information from one generation to another. From a scientific standpoint, there is one compelling answer to questions about life's commonalities. Different kinds of organisms share so many characteristics of structure and function because they are related to one another. But how? Solving the Puzzle The concept of biological evolution addresses both of these fundamental questions. It accounts for the relatedness among organisms by explaining that the millions of different species of plants, animals, and microorganisms that live on earth today are related by descent from common ancestors—like distant cousins. Organisms in nature typically produce more offspring than can survive and reproduce given the constraints of food, space, and other resources in the environment. 
These offspring often differ from one another in ways that are heritable—that is, they can pass on the differences genetically to their own offspring. If competing offspring have traits that are advantageous in a given environment, they will survive and pass on those traits. As differences continue to accumulate over generations, populations of organisms diverge from their ancestors. This straightforward process, which is a natural consequence of biologically reproducing organisms competing for limited resources, is responsible for one of the most magnificent chronicles known to science. Over billions of years, it has led the earliest organisms on earth to diversify into all of the plants, animals, and microorganisms that exist today. Though humans, fish, and bacteria would seem to be so different as to defy comparison, they all share some of the characteristics of their common ancestors. Evolution also explains the great diversity of modern species. Populations of organisms with characteristics enabling them to occupy ecological niches not occupied by similar organisms have a greater chance of surviving. Over time—as the next chapter discusses in more detail—species have diversified and have occupied more and more ecological niches to take advantage of new resources. Evolution explains something else as well. During the billions of years that life has been on earth, it has played an increasingly important role in altering the planet's physical environment. For example, the composition of our atmosphere is partly a consequence of living systems. During photosynthesis, which is a product of evolution, green plants absorb carbon dioxide and water, produce organic compounds, and release oxygen. This process has created and continues to maintain an atmosphere rich in oxygen. Living communities also profoundly affect weather and the movement of water among the oceans, atmosphere, and land. 
Much of the rainfall in the forests of the western Amazon basin consists of water that has already made one or more recent trips through a living plant. In addition, plants and soil microorganisms exert important controls over global temperature by absorbing or emitting "greenhouse gases" (such as carbon dioxide and methane) that increase the earth's capacity to retain heat. In short, biological evolution accounts for three of the most fundamental features of the world around us: the similarities among living things, the diversity of life, and many features of the physical world we inhabit. Explanations of these phenomena in terms of evolution draw on results from physics, chemistry, geology, many areas of biology, and other sciences. Thus, evolution is the central organizing principle that biologists use to understand the world. To teach biology without explaining evolution deprives students of a powerful concept that brings great order and coherence to our understanding of life. The teaching of evolution also has great practical value for students. Directly or indirectly, evolutionary biology has made many contributions to society. Evolution explains why many human pathogens have been developing resistance to formerly effective drugs and suggests ways of confronting this increasingly serious problem (this issue is discussed in greater detail in Chapter 2). Evolutionary biology has also contributed to many important agricultural advances by explaining the relationships among wild and domesticated plants and animals and their natural enemies. An understanding of evolution has been essential in finding and using natural resources, such as fossil fuels, and it will be indispensable as human societies strive to establish sustainable relationships with the natural environment. Such examples can be multiplied many times.
Evolutionary research is one of the most active fields of biology today, and discoveries with important practical applications occur on a regular basis. Those who oppose the teaching of evolution in public schools sometimes ask that teachers present "the evidence against evolution." However, there is no debate within the scientific community over whether evolution occurred, and there is no evidence that evolution has not occurred. Some of the details of how evolution occurs are still being investigated. But scientists continue to debate only the particular mechanisms that result in evolution, not the overall accuracy of evolution as the explanation of life's history. Evolution and the Nature of Science Teaching about evolution has another important function. Because some people see evolution as conflicting with widely held beliefs, the teaching of evolution offers educators a superb opportunity to illuminate the nature of science and to differentiate science from other forms of human endeavor and understanding. Chapter 3 describes the nature of science in detail. However, it is important from the outset to understand how the meanings of certain key words in science differ from the way that those words are used in everyday life. Think, for example, of how people usually use the word "theory." Someone might refer to an idea and then add, "But that's only a theory." Or someone might preface a remark by saying, "My theory is …." In common usage, theory often means "guess" or "hunch." In science, the word "theory" means something quite different. It refers to an overarching explanation that has been well substantiated. Science has many other powerful theories besides evolution. Cell theory says that all living things are composed of cells. The heliocentric theory says that the earth revolves around the sun rather than vice versa. Such concepts are supported by such abundant observational and experimental evidence that they are no longer questioned in science.
Sometimes scientists themselves use the word "theory" loosely and apply it to tentative explanations that lack well-established evidence. But it is important to distinguish these casual uses of the word "theory" from its use to describe concepts such as evolution that are supported by overwhelming evidence. Scientists might wish that they had a word other than "theory" to apply to such enduring explanations of the natural world, but the term is too deeply engrained in science to be discarded. As with all scientific knowledge, a theory can be refined or even replaced by an alternative theory in light of new and compelling evidence. For example, Chapter 3 describes how the geocentric theory that the sun revolves around the earth was replaced by the heliocentric theory of the earth's rotation on its axis and revolution around the sun. However, ideas are not referred to as "theories" in science unless they are supported by bodies of evidence that make their subsequent abandonment very unlikely. When a theory is supported by as much evidence as evolution, it is held with a very high degree of confidence. In science, the word "hypothesis" conveys the tentativeness inherent in the common use of the word "theory." A hypothesis is a testable statement about the natural world. Through experiment and observation, hypotheses can be supported or rejected. As the earliest level of understanding, hypotheses can be used to construct more complex inferences and explanations. Like "theory," the word "fact" has a different meaning in science than it does in common usage. A scientific fact is an observation that has been confirmed over and over. However, observations are gathered by our senses, which can never be trusted entirely. Observations also can change with better technologies or with better ways of looking at data.
For example, it was held as a scientific fact for many years that human cells have 24 pairs of chromosomes, until improved techniques of microscopy revealed that they actually have 23. Ironically, facts in science often are more susceptible to change than theories—which is one reason why the word "fact" is not much used in science. Finally, "laws" in science are typically descriptions of how the physical world behaves under certain circumstances. For example, the laws of motion describe how objects move when subjected to certain forces. These laws can be very useful in supporting hypotheses and theories, but like all elements of science they can be altered with new information and observations. Glossary of Terms Used in Teaching About the Nature of Science
- Fact: In science, an observation that has been repeatedly confirmed.
- Law: A descriptive generalization about how some aspect of the natural world behaves under stated circumstances.
- Hypothesis: A testable statement about the natural world that can be used to build more complex inferences and explanations.
- Theory: In science, a well-substantiated explanation of some aspect of the natural world that can incorporate facts, laws, inferences, and tested hypotheses.
Those who oppose the teaching of evolution often say that evolution should be taught as a "theory, not as a fact." This statement confuses the common use of these words with the scientific use. In science, theories do not turn into facts through the accumulation of evidence. Rather, theories are the end points of science. They are understandings that develop from extensive observation, experimentation, and creative reflection. They incorporate a large body of scientific facts, laws, tested hypotheses, and logical inferences. In this sense, evolution is one of the strongest and most useful scientific theories we have. Evolution and Everyday Life The concept of evolution has an importance in education that goes beyond its power as a scientific explanation.
All of us live in a world where the pace of change is accelerating. Today's children will face more new experiences and different conditions than their parents or teachers have had to face in their lives. The story of evolution is one chapter—perhaps the most important one—in a scientific revolution that has occupied much of the past four centuries. The central feature of this revolution has been the abandonment of one notion about stability after another: that the earth was the center of the universe, that the world's living things are unchangeable, that the continents of the earth are held rigidly in place, and so on. Fluidity and change have become central to our understanding of the world around us. To accept the probability of change—and to see change as an agent of opportunity rather than as a threat—is a silent message and challenge in the lesson of evolution. The following dialogue dramatizes some of the problems educators encounter in teaching evolution and demonstrates ways of overcoming these obstacles. Chapter 2 returns to the basic themes that characterize evolutionary theory, and Chapter 3 takes a closer look at the nature of science. THE CHALLENGE TO TEACHERS Teaching evolution presents special challenges to science teachers. Sources of support upon which teachers can draw include high-quality curricula, adequate preparation, exposure to information useful in documenting the evidence for evolution, and resources and contacts provided by professional associations. One important source of support for teachers is to share problems and explore solutions with other teachers. The following vignette illustrates how a group of teachers—in this case, three biology teachers at a large public high school—can work together to solve problems and learn from each other. It is the first week of classes at Central High School. As the bell rings for third period, Karen, the newest teacher on the faculty, walks into the teachers' lounge. 
She greets her colleagues, Barbara and Doug. "How are your first few days going?" asks Doug. "Fine," Karen replies. "The second-period Biology I class is full, but it'll be okay. By the way, Barbara, thanks for letting me see your syllabus for Bio I. But I wanted to ask you about teaching evolution—I didn't see it there." "You didn't see it on my syllabus because it's not a separate topic," Barbara says. "I use evolution as a theme to tie the course together, so it comes into just about every unit. You'll see a section called 'History of Life' on the second page, and there's a section called 'Natural Selection.' But I don't treat evolution separately because it is related to almost every other topic in biology."1 "Wait a minute, Barbara," Doug says. "Is that good advice for a new teacher? I mean, evolution is a controversial subject, and a lot of us just don't get around to teaching it. I don't. You do, but you're braver than most of us." "It's not a matter of bravery, Doug," Barbara replies. "It's a matter of what needs to be taught if we want students to understand biology. Teaching biology without evolution would be like teaching civics and never mentioning the United States Constitution." "But how can you be sure that evolution is all that important? Aren't there a lot of scientists who don't believe in evolution? Say it's too improbable?" "The debate in science is over some of the details of how evolution occurred, not whether evolution happened or not. A lot of science and science education organizations have made statements about why it is important to teach evolution. …"2 "I saw a news report when I was a student," Karen interjects, "about a school district or state that put a disclaimer against evolution in all their biology textbooks. It said that students didn't need to believe in evolution because it wasn't a fact, only a theory.
The argument was that no one really knows how life began or how it evolved because no one was there to see it happen."3 "If I taught evolution, I'd sure teach it as a theory—not a fact," says Doug. "Just like gravity," Barbara says. "Now, Barbara, gravity is a fact, not a theory." "Not in scientific terms. The fact is that things fall. The explanation for why things fall is the theory of gravitation. Our problem is definitions. You're using 'fact' and 'theory' the way we use them in everyday life, but we need to use them as scientists use them. In science, a 'fact' is an observation that has been made so many times that it's assumed to be okay. How facts are explained is where theories come in: theories are explanations of what we observe. One place where students get confused about evolution is that they think of 'theory' as meaning 'guess' or 'hunch.' But evolution isn't a hunch. It's a scientific explanation, and a very good one." "But how good a theory is it?" asks Doug. "We don't know everything about evolution." "That's true," says Karen. "A student in one of my classes at the university told me that there are big gaps in the fossil record. Do you know anything about that?" "Well, there's Archaeopteryx," says Doug. "It's a fossil that has feathers like a bird but the skeleton of a small dinosaur. It's one of those missing links that's not missing any more." "In fact, there are good transitional fossils between primitive fish and amphibians and between reptiles and mammals," Barbara says. "Our knowledge of fossil intermediates is actually pretty good.4 And, Doug, it sounds like you know more about evolution than you're letting on. Why don't you teach it?" "I don't want any trouble. Every time I teach evolution, I have a student announce that 'evolution is against his religion.'" "But most of the major religious denominations have taken official positions that accept evolution," says Barbara. 
"One semester a friend of mine in the middle school started out her Life Science unit by having her students interview their ministers or priests or rabbis about their religion's views on evolution. She said that most of her students came back really surprised. 'Hey,' they said, 'evolution is okay.' It defused the controversy in her class." "She didn't have Stanley in her class," says Doug. "Who's Stanley?" asks Karen. "The son of a school board member. Given his family's religious views, I'm sure he would not come back saying evolution was okay." "That can be a hard situation," says Barbara. "But even if Stanley came back to class saying that his religion does not accept evolution, it could help a teacher show that there are many different religious views about evolution. That's the point: religious people can still accept evolution." "Stanley will never believe in evolution." "We talk about 'believing' in evolution, but that's not necessarily the right word. We accept evolution as the best scientific explanation for a lot of observations—about fossils and biochemistry and evolutionary changes we can actually see, like how bacteria become resistant to certain medicines. That's why people accepted the idea that the earth goes around the sun—because it accounted for many different observations that we make. In science, when a better explanation comes around, it replaces earlier ones." "Does that mean that evolution will be replaced by a better theory some day?" asks Karen. "It's not likely. Not all old theories are replaced, and evolution has been tested and has a lot of evidence to support it. The point is that doing science requires being willing to refine our theories to be consistent with new information." "But there's still Stanley," says Doug. "He doesn't even want to hear about evolution." "I had Stanley's sister in AP biology one year," Barbara replies. 
"She raised a fuss about evolution, and I told her that I wasn't going to grade her on her opinion of evolution but on her knowledge of the facts and concepts. She seemed satisfied with that and actually got an A in the class." "I still think that if you teach evolution, it's only fair to teach both." "What do you mean by both?" asks Barbara. "If you mean both evolution and creationism, what kind of creationism do you want to teach? Will you teach evolution and the Bible? What about other religions like Buddhism or the views of Native Americans? It's hard to argue for 'both' when there are a whole lot more than two options." "I can't teach a whole bunch of creation stories in my Bio class," says Doug. "That's the point. We can't add subjects to the science curriculum to be fair to groups that hold certain beliefs. Teaching ecology isn't fair to the polluter, either. Biology is a science class, and what should be taught is science." "But isn't there something called 'creation science'?" asks Karen. "Can creationism be made scientific?" "That's an interesting story. 'Creation science' is the idea that scientific evidence can support a literal interpretation of Genesis—that the whole universe was created all at once about 10,000 years ago." "It doesn't sound very likely." "It's not. Scientists have looked at the arguments and have found they are not supported by verifiable data. Still, back in the early 1980s, some states passed laws requiring that 'creation science' be taught whenever evolution was taught. But the Supreme Court threw out 'equal time' laws, saying that because creationism was inherently a religious and not a scientific idea, it couldn't be presented as 'truth' in science classes in the public schools."5 "Well, I'm willing to teach evolution," says Karen, "and I'd like to try it your way, Barbara, as a theme that ties biology together. But I really don't know enough about evolution to do it. 
Do you have any suggestions about where I can get information?" "Sure, I'd be glad to share what I have. But an important part of teaching evolution has to do with explaining the nature of science. I'm trying out a demonstration after school today that I'm going to use with my Bio I class tomorrow. Why don't you both come by and we can try it out?" "Okay," say Karen and Doug. "We'll see you then." Barbara, Doug, and Karen's discussion of evolution and the nature of science resumes following Chapter 2.
Notes
1. The National Science Education Standards cite "evolution and equilibrium" as one of five central concepts that unify all of the sciences. (See www.nap.edu/readingroom/books/nses)
2. Appendix C contains statements from science and science education organizations that support the need to teach evolution.
3. In 1995, the Alabama board of education ordered that all biology textbooks in public schools carry inserts that read, in part, as follows: "This textbook discusses evolution, a controversial theory some scientists present as a scientific explanation for the origin of living things, such as plants, animals, and humans. No one was present when life first appeared on earth. Therefore, any statement about life's origins should be considered theory, not fact." Other districts have required similar disclaimers.
4. The book From So Simple a Beginning: The Book of Evolution by Philip Whitfield (New York: Macmillan, 1993) presents a well-illustrated overview of evolutionary history. Evolution by Monroe W. Strickberger (Boston: Jones and Bartlett, 2nd edition, 1995) is a thorough text written at the undergraduate level.
5. In the 1987 case Edwards v. Aguillard, the U.S. Supreme Court reaffirmed the 1982 decision of a federal district court that the teaching of "creation science" in public schools violates the First Amendment of the U.S. Constitution.
Alcohol is found in beer, wine and spirits. Alcohol (ethyl alcohol or ethanol) is the ingredient that makes people intoxicated. It is produced by the fermentation of yeast, sugars, and starches. Consuming alcohol is associated with a number of short-term and long-term health risks, including addiction, motor vehicle crashes, violence, risky sexual behaviors, high blood pressure, and various cancers, including breast cancer. The risk of these harms increases with the amount of alcohol consumed. Underage drinking is the consumption of alcohol by any person under the legal age of 21 years. When men consume more than 5 standard drinks in a row, or women more than 4, within a short amount of time, it is considered binge drinking, which can lead to alcohol poisoning. Binge drinking causes the person’s blood alcohol level to rise to an unsafe and toxic level. When young people ages 12 to 20 drink alcohol, it is most often in the form of binge drinking. Too much alcohol in the body can cause a person’s brain to shut down, affecting the body functions the brain controls. The heart can stop beating, a person can stop breathing, and it can even cause death. Alcohol poisoning is a medical emergency. Warning signs of alcohol poisoning:
- Mental confusion
- Passing out and not waking up
- Slow breathing (less than 8 breaths per minute) or irregular breathing (10 seconds or more between breaths)
- Severe vomiting
- Cold, clammy or blue skin
If someone has been drinking and shows any of these warning signs – CALL 911! Do not leave the person alone, and keep checking to make sure they are breathing. To place them on their side in the recovery position so they do not choke:
- Raise the arm closest to you above the person’s head.
- Roll the person towards you.
- Rest their head in front of their arm and tilt it up so they can breathe.
- Bend their knees and tuck their nearest hand under their cheek.
Stay with them until medical help arrives.
For more information: - National Institute on Alcohol Abuse and Alcoholism (NIAAA) (NIH.gov) - NIDA.NIH.GOV | National Institute on Drug Abuse (NIDA) - SAMHSA - Substance Abuse and Mental Health Services Administration - Centers for Disease Control and Prevention (CDC.gov) - CADCA | Building Drug-Free Communities - U.S. Alcohol Policy Alliance | Turning Evidence Into Action - Alcohol Action Network | Protecting Communities Through Alcohol Policy Action - Center for Advancing Alcohol Science to Practice
NASA’s Mars Science Laboratory – Curiosity Rover – was sent to Mars some 16 months ago with a major objective of finding evidence of a past environment that would be well suited to supporting microbial life. Today, a team of mission researchers, writing in a series of papers published in the journal Science, said that they found evidence of what was once an ancient fresh water lake on Mars that might have been capable of supporting life. The findings were also announced this morning by members of the research team who addressed the annual meeting of the American Geophysical Union in San Francisco. The researchers studied a set of sedimentary rock outcrops that were found in an area on the floor of Gale Crater called Yellowknife Bay, near the Mars equator. These sedimentary rocks, which probably formed from ancient Martian mud or clay, have suggested to researchers that there was at least one lake that welled up with what could have been drinkable water inside of Gale Crater some 3.6 billion years ago, and that the lake could have lasted for tens or even hundreds of thousands of years. “Shortly after we landed, Curiosity found evidence that liquid water had flowed across the surface long ago in Gale crater,” said Jim Bell, from Arizona State University and an author of four of the papers. “These new results, however, come from the first drilling activities ever performed on Mars, and they show that in addition to surface water, there was likely an active groundwater system in Gale crater that significantly weathered ancient rocks and minerals.” The mudstones analyzed by the research team are normally formed in calm conditions, produced by very fine sediment grains settling on each other layer by layer in still water.
The team’s analysis of Yellowknife Bay’s clay-rich lake-bed region showed that a calm and fresh water lake that contained basic but crucial biological elements such as carbon, hydrogen, oxygen, nitrogen and sulfur existed at least once inside the Gale Crater. According to the team, a lake with these conditions could provide an ideal environment for simple microbial life. The researchers think that a lake like this could have provided perfect conditions for simple bacterial life such as chemolithoautotrophs, which are rock-eating microbes that live on and derive their energy from mineral compounds. The researchers pointed out that they did not find signs of ancient life itself on Mars. “It is exciting to think that billions of years ago, ancient microbial life may have existed in the lake’s calm waters, converting a rich array of elements into energy. The next phase of the mission, where we will be exploring more rocky outcrops on the crater’s surface, could hold the key to whether life did exist on the red planet,” said another of the paper’s co-authors, Sanjeev Gupta from Imperial College London, who is also a member of the MSL mission team. The researchers will continue to use the Mars roving science laboratory to explore Gale Crater for even more evidence of ancient lakes or other habitable environments.
Many educational technology researchers leverage social media data to answer questions about trends, collaboration or learning networks. If you are not a programmer, you will most likely use existing apps and tools to conduct quantitative data analysis and generate visualizations such as word clouds and clusters. As more and more educators are acknowledging coding as an important digital literacy, in this post we will explore some common techniques of statistical data visualization. In my last post on text mining, I described how to collect data from Twitter. In this post, I will describe how we can summarize a large set of tweets on a certain topic – for example the latest SITE conference. Background: Giving structure to your data Text data, such as tweets, comments or posts, usually comes with limited structure, compared to scores on Likert scales. To visualize and quantify the data, we have to give it structure first. Suppose we have a character vector like the following: "I am a member of the XYZ association" "Please apply for our open position" "The XYZ memorial lecture takes place on wednesday" "Vote for the most popular lecturer!" What is a character vector? You can think of a character vector as a container of all text pieces. Each piece represents the text from an individual and is assigned a number. You can access any piece by using its given number. This type of data is easy for humans to read, but not for machines. Machines prefer the same information structured as a document-term matrix. Each row in the matrix represents a word, while each column represents a document, which refers to all the texts from an individual. Each element in the matrix represents the number of times a particular word appears in a particular document. You may have noticed that all texts have been converted to lowercase in this matrix, while some words, like “a” or “the”, do not show up in the matrix.
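The transformation from a character vector to a document-term matrix can be sketched in a few lines. The original workflow here appears to be R-based (a "character vector" is an R type, and R's tm package builds these matrices), but the idea is language-agnostic; below is a minimal Python sketch using the four example texts, with a small hypothetical stop-word list:

```python
from collections import Counter

# A toy "character vector": each element is one person's text.
docs = [
    "I am a member of the XYZ association",
    "Please apply for our open position",
    "The XYZ memorial lecture takes place on wednesday",
    "Vote for the most popular lecturer!",
]

# Hypothetical stop-word list for illustration; real toolkits ship larger ones.
STOP_WORDS = {"a", "an", "that", "the", "i", "of", "for", "on", "our", "am"}

def tokenize(text):
    """Lowercase the text, strip punctuation, and drop stop words."""
    cleaned = "".join(ch if ch.isalnum() or ch.isspace() else " " for ch in text.lower())
    return [w for w in cleaned.split() if w not in STOP_WORDS]

# Rows = terms, columns = documents; each cell counts how often
# a term appears in a document (a Counter returns 0 for absent terms).
counts = [Counter(tokenize(d)) for d in docs]
terms = sorted({t for c in counts for t in c})
dtm = {term: [c[term] for c in counts] for term in terms}

print(dtm["xyz"])  # "xyz" appears in documents 1 and 3: [1, 0, 1, 0]
```

Note how the matrix already reflects the preprocessing described in the post: everything is lowercased, and stop words such as "the" never appear as rows.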
To convert the tweet texts you collect into a document-term matrix, the following steps are usually necessary:
- Remove nonsense characters.
- Convert all words to lowercase.
- Remove stop words, such as “a”, “an”, “that” and “the”.
As you can see, by delineating the text into single words, its meaning may change significantly. This is why it oftentimes makes sense to combine qualitative and quantitative approaches when analyzing data sets – simply looking at a word cloud is not a replacement for meaningful analysis of qualitative text data. Sample Data – Tweets on #siteconf Did you miss your favorite AACE conference? Would you like to find out what predominant topics people discussed? We collected 709 tweets using the hashtag “#siteconf”. Step 1: Word Clouds To take a quick look at our data, an initial visual representation with word clouds is helpful. As you can see, the word clouds present us with some key information as well as a lot of noise. We can spot some popular topics at a glance, but it is impossible to see how concepts are related. Step 2: Cluster Tree A more structured way to explore the data in an associational sense is to look at the collection of terms that frequently co-occur. This method is called cluster analysis. Cluster analysis is a way of finding associations between items and binding nearby items into groups. A typical visualization technique is a tree diagram called a dendrogram. The most common cluster analysis methods are K-means clustering and hierarchical clustering. K-means clustering requires you to specify how many groups you prefer to have in the result before the analysis, while hierarchical clustering doesn’t have this requirement. The density and shape of the dendrogram may vary depending on the sparsity. The dendrogram above uses a sparsity of 0.95. It is interesting that when people tweeted using the hashtag “#msueped”, they also tended to use “#site2015”.
“#msueped” stands for Educational Psychology and Educational Technology from Michigan State University. You can tell that many people from this program went to SITE 2015 conference. Did you gain a sense what the SITE community is talking about? Data visualization is certainly helpful to make sense of large datasets as it allows you to gain an overview from an elevated perspective. However, don’t mistake a set of images for the real thing. If you attended SITE 2015 in Las Vegas, your first hand experience is likely to be totally different and certainly more in-depth. Also keep in mind that while social media is becoming ever more popular, Twitter users are still only a sub-group of the whole audience. No approach is neutral in its analysis: Understanding the tools that we use helps us to interpret seemingly obvious connections more carefully. If you want to explore how we produced these visualizations use our sample data set with instructions.
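The hierarchical clustering behind a dendrogram can be sketched without any statistics library: start with every term in its own cluster and repeatedly merge the closest pair. The term vectors and hashtags below are invented for illustration (real analyses would use something like R's hclust or SciPy's linkage on a full document-term matrix), but they show why terms that co-occur, such as "#msueped" and "#site2015", end up grouped together:

```python
from itertools import combinations

# Toy rows of a document-term matrix: 1 means the term appears in that
# document. Hypothetical terms chosen so that co-occurring pairs cluster.
vectors = {
    "msueped":  [1, 1, 0, 0],
    "site2015": [1, 1, 0, 0],
    "keynote":  [0, 0, 1, 1],
    "mobile":   [0, 0, 1, 1],
}

def distance(a, b):
    """Squared Euclidean distance between two term vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Naive agglomerative clustering with single linkage: merge the two
# closest clusters until only two groups remain.
clusters = [{t} for t in vectors]
while len(clusters) > 2:
    i, j = min(
        combinations(range(len(clusters)), 2),
        key=lambda p: min(
            distance(vectors[a], vectors[b])
            for a in clusters[p[0]] for b in clusters[p[1]]
        ),
    )
    clusters[i] |= clusters[j]
    del clusters[j]

print(sorted(sorted(c) for c in clusters))
```

Each merge in this loop corresponds to one join point in a dendrogram; recording the distance at which each merge happens is what lets the tree diagram show how tightly related the groups are.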
SAFE SPORT RESOURCES AND EDUCATIONAL TOOLS
• Maltreatment – refers to deliberate acts that result in harm or the potential for physical or psychological harm, including, but not limited to, abuse, assault, neglect, harassment, bullying, hazing and discrimination. This does not include accidents.
• Vulnerable Sector – refers to those who, because of age, disability, or other circumstances, are less able to protect themselves from harm, such as children, people with disabilities, or the elderly.
• Risk Management – a detailed and organized process used to identify, assess and treat risks so as to better achieve desired outcomes in a way that reflects our values.
• The Universal Code of Conduct to Prevent and Address Maltreatment in Sport (UCCMS) is a cornerstone in ensuring all stakeholders commit to safety in sport. The UCCMS applies to all participants, administrators, athletes, coaches and officials who work within the national sports system in Canada, and is meant to achieve a safe and welcoming environment for all participants.
Flashcards designed to flip conventional teaching methods Many modern classrooms rely on reading, writing and lectures to impart information. These methods are more suited to those students with a preference for auditory learning. The consequence for those with a preference for the visual is that they may struggle to keep pace with the class. How do flashcards work in the classroom? Active recall involves actively stimulating memory during the learning process. It contrasts with passive review, where information is processed passively – for example by reading or watching. There is a lot of evidence to suggest that active recall is a very quick, efficient and effective way for students to learn. One of the studies most often cited is called “The Critical Importance of Retrieval for Learning”, published in 2008. Basically, a group of college students were each given the same 40 foreign language vocabulary word pairs to learn and then tested on their recall. A week later, they were tested again. It was found that those students who were guided to use active recall were able to remember about 80% of the words compared to 34% for the control group. Flashcards are also an ideal tool for providing feedback. When a teacher gives a student feedback about the accuracy of their answer, it gives the student the opportunity to reflect on their learning “Yes! I understand” or “No. I don’t”. This act of self-reflection is known as metacognition. Metacognition is essentially thinking about one’s thinking. It is associated with improved learning outcomes and also determines whether students can transfer their learning to new scenarios. This is the sort of thinking that gets students beyond surface learning and drives them to deeper understanding. Why use flashcards for teaching? Flashcards can assist with teaching in a variety of ways. Words are abstract and therefore more difficult for the brain to retain. In comparison, visuals are concrete and more easily remembered. 
Using visuals can increase the rate at which your student learns and also improve their ability to comprehend, remember and retrieve information. Another reason flashcards are popular with parents and teachers is that they are easily accessible, portable and inexpensive. Plus, they can be used for a range of teaching purposes with a variety of students. Flashcard activities for teaching Flashcards can be used in many ways to teach a variety of concepts.
Athletes may look great on the outside, with their muscular physiques and trim waistlines, but there is more than meets the eye. When dentists examine semi-professional, professional and even Olympic athletes, they have found that the athletes’ dental health is poor. This is surprising to many people because physicians recommend doing moderate-intensity physical exercise for 30 minutes per day on at least five days per week. Understanding why athletes are prone to bad teeth might help you to take care of your own dental health. Physical Exercises and the Teeth Athletes who spend many hours per day training for a long period of time experience considerable negative effects on their teeth. Scientists found that the more hours per week athletes train or play, the worse their tooth and periodontal health. This is true even when the researchers controlled for age and frequency of dental checkups. Mouth breathing and mouth guards may be partly to blame, but sugary drinks and bad nutrition deserve most of the blame for athletes’ poor tooth and gum health. Sugary Sports Drinks’ Role in the Oral Health of Athletes Intense athletic workouts can cause participants to become dehydrated, so many coaches encourage them to drink electrolyte drinks. These sugary sports drinks wreak havoc on oral hygiene. The sugar in the sports drinks gives bacteria the opportunity to grow, releasing acids that wear away the enamel of the teeth and eventually cause a cavity. It is not uncommon for an athlete to drink many servings of these sugary drinks in just one intense training session or competition. Most athletes do not stop in the middle of a long training session or competition to go and brush the sugar off of their teeth. Daily consumption of these drinks combined with a lack of oral hygiene creates a tooth health disaster. In a study of 187 British football players, 37 percent were found to have active dental cavities, and 53 percent had dental erosion.
Improper Dieting and Dental Health Athletes are known for bad nutrition practices. Some athletes need to bulk up, so they mix up protein powders or eat sugary and sticky energy bars. Those sticky and sugary substances stay on the teeth for a long time, contributing to the risk of dental cavities. Some athletes try to lose weight before a weigh-in at a competition. Improper dieting could deplete the body of calcium, which is needed for healthy teeth. Other types of improper dieting include avoiding certain food groups. For example, some athletes might avoid eating fats. However, fats are needed for your body to absorb vitamin D and calcium, which form dental enamel. Physical Therapy for Athletic Injuries During those intense workouts, athletes may become injured. Failing to warm up could lead to a torn muscle. Poor form could strain or sprain a ligament or tendon. Physical therapy aims to help athletes regain their strength and range of motion. With physical therapy, an athlete may return to play faster.
Maximilian I as ruler of the Habsburg Hereditary Lands and emperor of the Holy Roman Empire Maximilian united the possessions of all the Habsburg dynastic lines in his person. The Austrian patrimonial lands formed the solid basis for the emperor’s ambitious politics of Empire. Maximilian’s father, Frederick V from the Styrian line of the dynasty, had claimed the inheritance of the Austrian line, and in 1490, as successor to Duke Siegmund, Maximilian was able to unite Tyrol and the Forelands with the rest of the patrimonial dominions. The same year saw the death of Matthias Corvinus, Frederick’s long-standing enemy. To his father’s immense joy, Maximilian was able to re-establish Habsburg dominion over Vienna and Lower Austria, which had fallen to the Hungarian monarch in 1485. A military advance into Hungary in 1490/91 had little immediate consequence but demonstrated the new-found sense of dynamism in the dynasty. Despite older genealogical claims on Bohemia and Hungary, the two crowns went to the Polish-Lithuanian Jagiello dynasty. In compensation for these claims, the Treaty of Pressburg in 1491 gave Maximilian the pledge of a number of territories in western Hungary (present-day Burgenland) together with the guarantee that if the Jagiellos were to become extinct he would inherit the two crowns. This represented the first step towards the Habsburg-Jagiello double marriage of 1515. Maximilian’s reign also saw the expansion of Habsburg dominion in terms of the gradual acquisition of territories in the county of Gorizia around 1500. The last prince of the Meinhardiner dynasty, Leonhard of Gorizia, concluded a contract of inheritance with the Habsburgs that enabled them to take over large parts of present-day Upper Carinthia and eastern Tyrol together with the Puster Valley in southern Tyrol and territories in the present-day border area between Italy and Slovenia (Gorizia, Gradisca d’Isonzo, Flitsch-Tolmein). 
The Habsburgs’ strengthened position in this region led to conflict with Venice, as the mercantile republic was a powerful rival for influence in Friuli and the northern Italian Alpine region. Further territorial acquisitions included regions in northern Tyrol with Kufstein, Rattenberg and Kitzbühel, together with the area around the Mondsee, which Maximilian had claimed in the conflict over the hereditary succession in the Wittelsbach dynasty. As ruler over the hereditary lands, Maximilian initiated administrative reform, focusing in particular on the financial and juridical sectors. This saw the creation of the first administrative apparatus in these lands that was staffed by a bureaucracy and not, as heretofore, by functionaries from the ranks of the aristocracy and the Estates. Maximilian took his model from Burgundy, where he had been impressed by the effectiveness of a streamlined and strictly hierarchical administration. Maximilian planned to execute similar measures intended to strengthen the authority of the emperor in the Empire. He was attempting to limit the centrifugal forces in the structure of the Empire and to form closer ties between the imperial princes and the emperor and the Empire. However, here he met with increased resistance and was only able to implement a part of his plans. The result was a long-drawn-out conflict between the emperor and the imperial Estates and the princes of the Empire. Maximilian was soon forced to concede the limits of his power, and as a consequence increasingly devoted his efforts to strengthening his position in the patrimonial dominions. One lasting result of his attempts to reform the Empire was the introduction of new administrative institutions.
The Empire was divided into six (later ten) districts representing a new regional administrative tier to facilitate the levying of taxes demanded by the Empire, the implementation of decrees issued by imperial bodies, and the raising and remuneration of imperial military contingents. The Imperial Chamber Court (Reichskammergericht) was also initiated by Maximilian and arose from the negotiations for the ‘Eternal Peace’ at the Diet of Worms in 1495, which brought a ban on feuding. It also created a forum in which conflicts between the imperial Estates could be resolved.
The New Kingdom Period of Egypt is considered a golden era in ancient history, as it marks the peak of Egypt’s power and prosperity. Military power advanced with technical achievements introduced during the Hyksos period. Particularly in warfare, the Egyptians adopted new techniques and developed improved weapons, including the horse and chariot, bronze arrowheads and battle axes, and the composite bow. These types of arrowheads, rhombic in form with protruding barbs, have been found across the southern Mediterranean, from Thebes to Mycenae and Bologna. Unusual in form, their distinctive protruding, triangular knob at the base of the blade was intended to halt the penetration of the blade; the inclusion of barbs, however, shows the tool was still meant to penetrate and lodge. The inscription across the midrib does seem to be a feature unique to those found in Egypt and could possibly refer to the troop number. Their form changes over time, with the blade featuring straight edges, although examples still retain their triangular protrusion. Arrowheads with curved blade edges did not continue past 700 BC within Egypt.
Regeneration: what does it mean and how does it work? Some parts of our bodies can repair themselves quite well after injury, but others don’t repair at all. We certainly can’t regrow a whole leg or arm, but some animals CAN regrow – or regenerate – whole body parts. So what can we learn from these regenerative animals? Salamanders, planarians and a number of other species regrow damaged or missing body parts. This is regeneration. Some human organs, e.g. the liver and skin, also regenerate when they are damaged. Regeneration can happen in many different ways, using pluripotent or tissue-specific stem cells. Some regeneration happens without stem cells at all (e.g. the regeneration of zebrafish hearts). Studying regeneration in other species will help us understand how the human body heals and repairs itself. This could help researchers develop regenerative medicines that help the human body heal more fully. Researchers are investigating many aspects of regeneration, from the signals that turn on regenerative processes to why stem cells in humans don’t regenerate the way salamanders’ do. Many scientists are interested in understanding what prompts stem cells to form a blastema, an accumulation of stem cells at the point of tissue damage. Studies in animals like salamanders are also attempting to determine how stem cells know what parts of the body need to be regrown and where they are in the body’s ‘map’, two things stem cells in mammals don’t do. Researchers are very interested in understanding what signals turn stem cells ‘on’ when regeneration is needed, and keep them ‘off’ when they’re not needed. Regeneration means the regrowth of a damaged or missing organ part from the remaining tissue. As adults, humans can regenerate some organs, such as the liver. If part of the liver is lost to disease or injury, the liver grows back to its original size, though not its original shape. And our skin is constantly being renewed and repaired.
Unfortunately many other human tissues don’t regenerate, and a goal in regenerative medicine is to find ways to kick-start tissue regeneration in the body, or to engineer replacement tissues. There are many animals that can regenerate complex body parts with full function and form after amputation or injury. Invertebrates (animals without a spinal cord) such as the flatworm or planarian can regenerate both the head from a tail piece, and the tail from a head piece. Among vertebrates (animals with a spinal cord), fish can regenerate parts of the brain, eye, kidney, heart and fins. Frogs can regenerate the limb, tail, brain and eye tissue as tadpoles but not as adults. And salamanders can regenerate the limb, heart, tail, brain, eye tissues, kidney and spinal cord throughout life. How do these regenerative animals regrow such complex structures? After amputation, stem cells accumulate at the injury site in a structure called the blastema. An important subject of ongoing research is how signals from the injury site cause the stem cells to form the blastema and start dividing to rebuild the missing part. And what about the stem cells themselves? Do the animals use a single type of stem cell in the blastema that can differentiate into many different types of tissues (called a multipotent stem cell)? Or is a separate set of stem cells responsible for making each of the different tissues needed to make up the new body part? Recent research in different regenerating animals has shown that there are various stem cell strategies for regenerating body parts built from multiple tissues, such as muscle, nerve and skin. If we understand the principles and molecules these animals use to regenerate adult tissues, can these lessons be applied to regenerating or engineering human tissue? Scientist Peter Reddien’s research group in the USA recently solved a long-standing question in planarian (flatworm) regeneration – can a single stem cell regenerate a whole animal?
The answer is yes, it can. This shows that adult planaria have pluripotent stem cells – cells that can make ALL the cell types of the animal’s body. How these pluripotent cells are controlled in the flatworm's body so that they do not form tumors is an important question that several research groups are now studying. But not all animals use pluripotent cells in regeneration. The stem cells that regenerate a frog tail and a salamander limb have very different properties from a planarian stem cell. In these animals, each tissue – such as muscle, nerve, or skin – has its own set of stem cells that just make the different types of cells in that particular tissue. In other words, a muscle stem cell cannot make skin and skin stem cells can’t make muscle. These multipotent tissue-specific stem cells are probably very similar to the stem cells in our own bodies that renew or repair tissues such as our skin or muscle. Why can such stem cells regenerate an entire limb in a salamander, but only repair damage to a single tissue type in our own bodies? This is another question that scientists are working on now. As well as using stem cells, regeneration can work by causing differentiated cells that had stopped dividing to ‘go back’ to dividing and multiplying in order to replace the lost tissue. This has recently been shown to happen in heart regeneration in zebrafish, where a heart muscle cell called the cardiomyocyte divides to replenish missing cardiac tissue. This regenerative phenomenon has also been found in newly born mouse hearts, but is rapidly lost as the mice mature. More research is needed to understand how differentiated cells can be made to divide and produce new heart tissue, and why this capacity is lost in humans. By defining the properties of stem cells that regenerate complex body parts, scientists are learning how injury causes these stem cells to regenerate the missing part instead of just forming scar tissue. 
Future research may make it possible to apply this knowledge in new kinds of medical treatments.
Pluripotent stem cells
How similar are the pluripotent stem cells of the planarian to mammalian embryonic stem cells or induced pluripotent stem cells? By studying the planarian, maybe we will gain insight into how to control human embryonic stem cells to replace parts of our own bodies.
Tissue stem cells
Salamanders and frogs use tissue stem cells that may be much like our own, so why can they regenerate a whole limb whereas we form scars? Ongoing research indicates that regenerative animals keep a kind of map inside their adult tissues, telling cells where they are and what they should be. Parts of this map may have been lost in mammals, or perhaps our stem cells have lost the ability to read the map. Researchers hope to find out what exactly is missing or blocked in mammals, and whether such information can be restored to direct stem cells to take part in regeneration for medical applications.
Can we make adult, differentiated cells like heart muscle cells start dividing again, as in the zebrafish? It will be important to find out why mammalian heart cells lose this ability, and if it can be restored.
Children are the explorers, discoverers and creators of their own development process, learning at their own pace¹ within the common co-construction of other participants (children, carers, environment). Trust, reliability, availability In order for children to enjoy a healthy development and to learn to the best of their capabilities, they require a stimulating environment in which they feel safe and secure, as well as carers who are available and reliable, and who respond with competence to each child’s individual needs. Following this principle, we place great value on how we build relationships in our daily daycare practice. Documented observations of the child form the basis for regular discussions with parents, ensuring a successful working partnership focussed on the well-being of the child. The child in the daycare centre community In the daycare centres, we help the children to develop and progress, and to act happily and confidently in a small community. We encourage the children to be curious and inquiring about themselves and their surroundings. We challenge them, encourage them to believe in their strengths and allow them to take initiative. We respect the children in their individuality and guide them while they search to understand how the world works. This involves the ability to make pro-active contact with others, to maintain well-developed relationship structures, to exercise mutual respect and practise cooperation, and to gain pleasure from discourse. Together we seek answers, exploring and discovering the ordinary and extraordinary aspects of the everyday. Learning and education processes Learning is only a sustainable process if the children see it as significant and relevant, and if it relates to their experiences, desires and everyday problems. For us, pre-school education means stimulating a child’s resources until they reveal their full potential, enabling the child to tap into the world around him.
This appropriation process corresponds to the child’s instinct to be self-motivated, to probe, to observe, to question and to communicate, to acquire knowledge and to form a picture of the world for himself. Playing is learning by discovering from sensory experiences.² Children acquire knowledge from their interaction with adults and peers. Since the ability to educate themselves is accomplished by interacting with the outer world, we provide sufficient stimulation but also plenty of repeated exercises. A child should develop into an emotionally strong person. Development takes place when a child has successfully overcome challenges or difficulties, thus developing resilience, endurance and a strong will. In our work, we allow the children, as far as possible, to try things out for themselves and encourage them to solve tasks independently, in accordance with their abilities. By making targeted and sensitive observations of each individual child, we discover their strengths and preferences and offer each child specific chances to experience challenges that will help him grow. The balance between pedagogical activities and self-determined play Our pedagogical activities are primarily determined by the children’s interest in a particular subject, or by cultural and seasonal events. We work with the children on a project basis and provide them with a multi-sensory experience of the subject from various perspectives for an appropriate period of time. In doing so, we incorporate as many different educational areas as possible and consider the motor, cognitive, linguistic and social-emotional levels of the children’s development. Most importantly, we aim for a balance between planned activities and self-determined play.
¹ According to the Reggio principles’ image of the child (cf. Lingenauber, S. (ed.). (2013). Handlexikon der Reggio-Pädagogik. 5th edition. Freiburg: Projekt Verlag).
² cf. Stamm, M. (2010). Frühkindliche Bildung, Betreuung und Erziehung. 1st edition. Bern: Haupt.
Moving averages can smooth time series data, reveal underlying trends, and identify components for use in statistical modeling. Smoothing is the process of removing random variations that appear as coarseness in a plot of raw time series data. It reduces the noise to emphasize the signal that can contain trends and cycles. Analysts also refer to the smoothing process as filtering the data. Developed in the 1920s, the moving average is the oldest process for smoothing data and continues to be a useful tool today. This method relies on the notion that observations close in time are likely to have similar values. Consequently, the averaging removes random variation, or noise, from the data. In this post, I look at using moving averages to smooth time series data. This method is the simplest form of smoothing. In future posts, I’ll explore more complex ways of smoothing. What are Moving Averages? Moving averages are a series of averages calculated using sequential segments of data points over a series of values. They have a length, which defines the number of data points to include in each average. One-sided moving averages One-sided moving averages include the current and previous observations for each average. For example, the formula for a moving average (MA) of X at time t with a length of 7 is the following: MA(t) = [X(t) + X(t−1) + … + X(t−6)] / 7. In the graph, the circled one-sided moving average uses the seven observations that fall within the red interval. The subsequent moving average shifts the interval to the right by one observation. And so on. Centered moving averages Centered moving averages include both previous and future observations to calculate the average at a given point in time. In other words, centered moving averages use observations that surround a given point in both directions and, consequently, are also known as two-sided moving averages.
The formula for a centered moving average of X at time t with a length of 7 is the following: MA(t) = [X(t−3) + X(t−2) + … + X(t+3)] / 7. In the plot below, the circled centered moving average uses the seven observations in the red interval. The next moving average shifts the interval to the right by one. Centered intervals work out evenly for an odd number of observations because they allow for an equal number of observations before and after the moving average. However, when you have an even length, the calculations must adjust for that by using a weighted moving average. For example, the formula for a centered moving average with a length of 8 is as follows: MA(t) = [0.5·X(t−4) + X(t−3) + … + X(t+3) + 0.5·X(t+4)] / 8. For a length of 8, the calculation incorporates the formula for a length of 7 (t−3 through t+3). Then, it extends the segment by one observation in both directions (t−4 and t+4). However, those two observations each have half the weight, which yields the equivalent of 7 + 2*0.5 = 8 data points. Using Moving Averages to Reveal Trends Moving averages can remove seasonal patterns to reveal underlying trends. In future posts, I’ll write more about time series components and incorporating them into models for accurate forecasting. For now, we’ll work through an example to visually assess a trend. When there is a seasonal pattern in your data and you want to remove it, set the length of your moving average to equal the pattern’s length. If there is no seasonal pattern in your data, choose a length that makes sense. Longer lengths will produce smoother lines. Note that the term “seasonal” pattern doesn’t necessarily indicate a meteorological season. Instead, it refers to a repeating pattern that has a fixed length in your data. Time Series Example: Daily COVID-19 Deaths in Florida For our example, I’ll use daily COVID-19 deaths in the State of Florida. The time series plot below displays a recurring pattern in the number of daily deaths. This pattern likely reflects a data artifact. We know the coronavirus does not operate on a seven-day weekly schedule!
Instead, it must reflect some human-based scheduling factor that influences when causes of death are determined and recorded. Some of these activities must be less likely to occur on weekends because the lowest day of the week is almost always Sunday, and weekends in general tend to be low. Tuesdays are often the highest day of the week. Perhaps that is when the weekend backlog shows up in the data? Because of this seasonal pattern, the number of recorded deaths for a particular day depends on the day of the week you’re evaluating. Let’s remove this seasonal pattern to reveal the underlying trend component. The original data are from Johns Hopkins University. Download my Excel spreadsheet: Florida Deaths Time Series. The graph displays one-sided moving averages with a length of 7 days for these data. Notice how the seasonal pattern is gone and the underlying trend is visible. Each moving average point is the daily average of the past seven days. We can look at any date, and the day of the week no longer plays a role. We can see that the trend increases up to April 17, 2020. It plateaus, with a slight decline, until around June 22nd. Since then, there is an upward trend that appears to steepen at the end. Smoothing time series data helps reveal the underlying trends in your data. That process can aid in the simple visual assessment of the data, as seen in this article. However, it can also help you fit the best time series model to your data. The moving average is a simple but very effective calculation!
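The one-sided, centered, and weighted even-length averages described above can be sketched in a few lines of code. This is a minimal illustration of the formulas, not the author's spreadsheet; the function names and sample data are my own:

```python
def one_sided_ma(x, length):
    """One-sided moving average: each point averages the current
    observation and the previous (length - 1) observations."""
    return [sum(x[i - length + 1:i + 1]) / length
            for i in range(length - 1, len(x))]

def centered_ma(x, length):
    """Centered (two-sided) moving average for an odd length:
    each point averages length // 2 observations on either side."""
    half = length // 2
    return [sum(x[i - half:i + half + 1]) / length
            for i in range(half, len(x) - half)]

def centered_ma_even(x, length):
    """Weighted centered moving average for an even length: the two
    outermost observations each carry half weight, so e.g. length 8
    spans t-4 .. t+4 yet averages the equivalent of 8 data points."""
    half = length // 2
    result = []
    for i in range(half, len(x) - half):
        window = x[i - half:i + half + 1]
        weighted = 0.5 * window[0] + sum(window[1:-1]) + 0.5 * window[-1]
        result.append(weighted / length)
    return result

# Hypothetical daily counts with a strong day-of-week pattern:
daily = [2, 9, 8, 7, 7, 6, 3, 2, 10, 9, 8, 7, 6, 3]
print(one_sided_ma(daily, 7))  # the weekly dips and peaks cancel out
```

For a daily series with a weekly pattern, a length of 7 means every average contains exactly one of each weekday, which is why the day-of-week effect disappears from the smoothed line.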
Dr Shan Narayanan WHAT ARE PARANASAL SINUSES? Paranasal sinuses are made up of the ethmoid, maxillary, sphenoid and frontal sinuses. These are hollow cavities in the skull lined with mucous membrane. All babies have ethmoid and maxillary sinuses. Frontal and sphenoidal sinuses develop as the child gets older. They help to decrease the weight of the skull and improve our voices, and their main function is to produce mucus that moisturises the inside of the nose. This mucus layer protects the nose from pollutants, micro-organisms, dust and dirt. WHAT IS SINUSITIS? Sinusitis happens when one or more of the paranasal sinuses are inflamed or infected. Sinusitis usually occurs after a cold, upper respiratory tract infection or allergic inflammation. WHAT CAUSES SINUSITIS? An upper respiratory tract infection causes inflammation of the nasal passages that can block the opening of the paranasal sinuses and result in a sinus infection. Allergies can also lead to sinusitis because of the swelling of the nasal tissue and increased production of mucus. Other conditions that can block the normal flow of secretions out of the sinuses and can lead to sinusitis include enlarged adenoids and an abnormality in the structure of the nose. When the flow of secretions from the sinuses is blocked, bacteria may begin to grow. This leads to a sinus infection, or sinusitis. The most common bacteria that cause sinusitis include the following: Streptococcus pneumoniae, Haemophilus influenzae and Moraxella catarrhalis. WHAT ARE THE SIGNS AND SYMPTOMS OF SINUSITIS? The symptoms of sinusitis depend greatly on the age of the child. Children less than five years old have the following symptoms: a cold that lasts longer than seven to 10 days; nasal discharge that is usually thick green or yellow, but can be clear; and swelling around the eyes. Children older than five years have the above symptoms with the addition of headache, facial discomfort, bad breath and post-nasal drip.
HOW IS SINUSITIS DIAGNOSED? Your doctor will usually be able to make the diagnosis by asking about the symptoms your child is having and examining your child. Additional tests may be required, such as a sinus X-ray and Computed Tomography (CT scan) of the paranasal sinuses. HOW IS SINUSITIS TREATED? Sinusitis is treated with antibiotics, usually given for 10-14 days, and medication for relief of pain and congestion, depending on the symptoms and age of your child. In very rare situations, where the child does not improve, he or she is referred to the Ear, Nose and Throat surgeon for surgical intervention. The surgeon uses an instrument called an endoscope to open the natural drainage pathways of the sinuses and make the narrow passages wider, thus helping to clear out the secretions and resolve the infection. The secretion is sent to the laboratory to determine the organism, and the antibiotic is adjusted accordingly.
“Education is a natural process carried out by the child and is not acquired by listening to words but by experiences in the environment.” – Maria Montessori Our guides give our 3-6 year olds responsive, individualized attention to help them build their skills in these five important areas: - Practical Life: The exercises of Practical Life act as a natural bridge between the home and the school for children entering the primary environment. By participating in activities that they have seen at home many times over, children cultivate feelings of responsibility and love for themselves, others, and their environment. In addition, children develop greater coordination and motor skills, while the ability to concentrate naturally grows. There are four groups of Practical Life exercises: care of self, care of environment, grace and courtesy, and exercises of movement. - Sensorial: The sensorial area aids children in understanding, organizing, and categorizing the multitudes of sensorial stimulation they receive every day. Scientifically designed materials nourish the development of the child’s intellect in the way of reasoning, focusing attention, and stabilizing the mind. This self-motivated process of training the senses results in joy and delight, as well as the growth of consciousness. The Sensorial Area is like a passage from concrete thinking to abstraction. This does not come about through mathematical analysis or teaching, but through active manipulation, exploration, and repetition which leads to knowledge about an object. Through hands-on, sensory motor involvement, the child grows in the ability to understand ideas or concepts without a physical representation. - Language: Language development begins as soon as the child enters the Montessori environment. Through exercises of spoken language, children are given the keys to self-expression. In addition, they begin learning the phonetic sounds of the alphabet and building muscle memory to be able to write those sounds.
Written expression of thought is experienced, often before a child is actually able to control a writing instrument, through use of the Movable Alphabet. This involves physical manipulation of letters into words, phrases, and sentences. The children’s natural, enthusiastic interest in their language is nourished through writing and reading exercises that provide the opportunity for movement and hands-on stimulation. As they are ready, older children in the environment will have the opportunity to experience a more in-depth study of language in the way of word function, grammar, sentence diagramming, and reading purely for the joy of it. - Math: Maria Montessori studied the child’s natural propensity for mathematical concepts, and created materials specifically designed for building upon this innate ability. Children in a Montessori environment have the opportunity to experience mathematical ideas through a series of concrete, hands-on lessons which progress into more abstract ideas. Children begin by counting quantities and recognizing numbers, and gradually progress into work with the Golden Beads. With this material, children experience addition, subtraction, multiplication, and division through a process of combining, taking away, and sharing beads. In time, with use of appropriate materials, their work with the four operations becomes more and more symbolic, and they begin recording that work on paper. Children also gain experience squaring and cubing using concrete representations. By nurturing the child’s natural mathematical mind, the Montessori Method inspires interest, joyful learning, and a deep understanding of mathematical concepts. - Cultural Study: Cultural study in a Montessori environment is integrated throughout each day. Children are excited to learn the names and locations of continents, countries, and states by using puzzle maps.
In addition, through art, music, portfolios, and cultural celebrations, children form an awareness and appreciation of the diverse world in which they live.
As more K-8 programs focus on science, technology, engineering, and math, teachers are finding that chaos creates learning opportunities. The project was not exactly going as planned—Carrie Allen had a classroom overrun with fruit flies. Her first graders were studying composting, and they were getting more of an ecology lesson than they’d expected. But at Richfield STEM School, an inquiry-based K–5 school in Richfield, Minnesota, both teachers and students take fruit-fly invasions in stride. “The kids came up with the idea that we should make traps for the fruit flies,” explains Allen. Students then tested to see which traps worked the best—giving them a chance to incorporate the classic engineering-design process (ask, imagine, plan, create, improve). “I can’t imagine not teaching like this anymore,” says Allen. “It just opens up so many other possibilities for the kids.” STEM has been a hot topic lately, as politicians and business leaders worry over the lack of qualified workers in the sciences and engineering. Though much public discussion focuses on higher education and high school curriculum, educators and others are realizing that for students to really get hooked on the sciences, STEM instruction has to start early. That’s where Richfield STEM and other newly minted K–8 programs come into play. Elementary educators need not fear the shift in emphasis. In fact, as generalists, they are uniquely qualified to lead inquiry-based STEM lessons. Blur the Lines As the head of the National Center for STEM Elementary Education at St. Catherine University in St. Paul, Minnesota, Yvonne Ng is used to taking the intimidation factor out of STEM. She has found that one of the main challenges for teachers new to the curriculum is overcoming their discomfort with math, science, and, especially, engineering. The best STEM instruction is open-ended and inquiry-based, but this format, she says, can seem chaotic to elementary teachers. 
Monica Foss advises that teachers embrace the chaos. “It’s always messy in here,” says Foss, an engineering specialist at Cedar Park Elementary STEM School in Apple Valley, Minnesota. Teachers need to let go of the idea that they always have to have the answer, says Foss. “They have to be willing to live with mess and muddiness.” Good STEM instruction blurs the lines between subject areas. As a consequence, STEM projects can be integrated into lessons in language arts, culture, and history. In the Richfield district, all students are required to go through a unit on Duke Ellington; the STEM school adds another level, explains Principal Joey Page. After listening to Ellington’s music, students answer questions such as “How does sound work?” or “How did they make that instrument?” Page says the school is hoping to have students take apart one of its decommissioned pianos as part of the unit. Hilburn Academy, in Raleigh, North Carolina, is in its second year of making the transition from a traditional curriculum to a STEAM school (the A is for arts). Elements of the traditional classroom remain, says Principal Gregory Ford, but the engineering-design process is used for all subjects. For example, guided reading groups may be tasked with coming up with solutions for a problem posed in their informational texts. The biggest challenge for Ford’s teachers is finding time for open-ended learning. So they, like their students, work in groups to find solutions. “It requires lots and lots of planning and collaboration with your teammates,” Ford says. “There’s really no existing inventory of these highly integrated STEAM lessons.” And how does Hilburn Academy define STEAM? “STEAM is a philosophy of education, not a program,” Ford says. “It is not the ‘what’ of curriculum; it is actually the ‘how.’” Look Outside the iPad It takes work to develop a STEM program. But districts don’t have to be flush with cash and expensive digital technology to implement it. 
“Pretty much anything around us is technology,” says Richfield’s Allen. “That’s one thing we’re teaching the kids, too: Everything around us was created or engineered to solve a problem.” Sophisticated STEM projects can be built around a simple tool such as a temperature probe, says David Carter, coauthor of a number of lab manuals, including Elementary Science With Vernier. For example, third graders could set out to create a vessel that keeps water as warm as possible. The science part comes into play as students learn the concept of heat transfer; the engineering side involves designing the best thermos. The temperature sensor itself allows students to record data, track their experiments, and improve their designs. The motion-sensor project is another favorite of Carter’s. “They get the concept that this graph is telling a story,” he says. “They’re seeing this mathematical concept.” That, he explains, gets to the real advantage of STEM: “It’s easy because kids love it.” At Dr. Albert Einstein Academy in Elizabeth, New Jersey, technology can be as simple as a doorstop. Teachers often struggled to prop open heavy classroom doors, so they tasked students to design a better way to do it. (One early version was a sand-filled water bottle flattened in the middle. Another version made use of a cork-and-magnet device.) Tracy Espiritu, a science coach at the K–8 STEAM school, says a lot of teachers start with the question: “What is technology?” The school has three criteria for teaching STEAM (here, the A is for architecture): Projects should be about solving a problem; students must apply the engineering-design process; and technology should be considered a resource, not a subject. Perhaps the most important lesson they learn along the way: Failure is part of the process. The key to STEM (or STEAM) education is reinforcing the engineering-design process, says Espiritu, who worked in aerospace engineering before teaching middle school science. 
“Engineers, they don’t get it right the first time,” she says. The learning process is a cycle. With each iteration, the design improves, says Espiritu. “Students get frustrated because they want the answer right away. You need that frustration. That’s how you learn.” It took Allen a while to grasp the necessity of letting her kids fail. You want students to feel good about the experience, she says, but it’s okay for them to feel the discomfort that comes when something is not working. Students at Minnesota’s Cedar Park Elementary face their first design challenge in kindergarten by building a boat out of clay, says Foss, the engineering specialist. Introducing kids to the engineering process—having them start again and fix the mistakes—at that age is much easier because they haven’t yet developed a fear of failure. “We definitely need more scientists and engineers,” says Foss, but more than that, “we need a population that understands science and the engineering process.” “This Is What We Need to Do Today” STEM is continuing to gain steam, but will it sustain momentum? Ng has seen increasing demand for her organization’s elementary STEM teacher certification program, which is offered through St. Catherine University, but still, she says, “whether it’s here to stay is a really good question.” As with any new approach, challenges remain. Public education needs STEM to remain relevant, says Ford, of Hilburn Academy. And students immediately grasp that relevance. He recalls one second-grade teacher remarking that students used to come into class and ask, “What are we doing today?” Now they say, “This is what we need to do today.” Start with the basics. You don’t need a cartload of iPads to teach STEM. Begin by looking out your front door. Does your school have a courtyard? Start a garden. Try a “tech take-apart” lesson by disassembling old TVs or VCRs. 
Students can build bridges out of manila folders or boats out of clay (see above); they can incorporate the engineering-design process (ask, imagine, plan, create, improve) into a variety of art projects. Reach out to local institutions. Whether there’s a nature center or a tech company next door to your school, your neighbors are the best folks to start with when you’re seeking resources for STEM initiatives. And be sure to cultivate partnerships with local businesses and colleges, too. See what the state offers. Many state education departments have set up websites with STEM resources. Visit stemconnector.org and click on “State by State” to find links to organizations in your area. The site serves as a clearinghouse of resources offered by corporations, nonprofits, and professional organizations. From the Math Magazine, Scholastic.
Blood oxygen saturation is, in simple terms, a measure of the oxygen carried in the blood. Blood acts as a carrier, transporting oxygen to the organs and tissues throughout the body. Oxygen is essential: the oxygen we inhale reacts with the carbohydrates, vitamins, and minerals we take in through food, and these reactions release the energy our body needs. That is why oxygen is so important - it keeps the body supplied with energy. Let's talk about the normal range of blood oxygen. Oxygen saturation should normally be between 95% and 99%. Deviations from this range put the body at risk of disease. If the oxygen level falls below 80%, it is a matter of grave concern; similarly, a level above the normal upper limit of about 99% is also a cause for concern. Saturation means the maximum percentage of a soluble substance that can be dissolved in a solvent; beyond that point, no more can be absorbed by any means. In this case, the soluble substance is oxygen and the solvent is the blood. Saturated oxygen is carried by the red blood corpuscles, so oxygen saturation refers to a measure of the amount of oxygen stored in the blood. Oxygen is drawn into the body through the two passages of the nose. In the lungs, oxygen binds to the blood, which is in continuous motion through the body. All of the blood's routes pass through the heart, which pumps it onward. Blood leaving the heart reaches the lungs, where oxygen binds to the red blood corpuscles, and this oxygen-rich blood then travels to all parts of the body.
As blood passes through the lungs, a fixed quantity of oxygen binds to it; no further percentage can be dissolved, limiting any further absorption. The point at which further absorption stops is the oxygen saturation. A word of caution applies here: when saturated oxygen deviates from its normal limits, it is time to see a medical practitioner. Oxygen saturation level is a relative measure of the oxygen carried in the blood flow; as blood reaches every part of the body, so does oxygen. Symptoms of changes in blood oxygen levels: Blood oxygen does not always stay within its normal range - it can be too low or too high - so it is worth discussing both conditions. Low saturated oxygen level symptoms: If saturated oxygen falls below the normal range, it can be identified by the following symptoms: - Fast breathing - Shortness of breath - Excessive fatigue - Confusion - Nail beds turning blue - Skin turning blue - Mucosa turning blue These are a few symptoms that can help single out low saturated oxygen in the blood. Beyond these, the list of possible symptoms is long, because oxygen-carrying blood flows continuously through the body and a shortfall may show up in any part of it. Considering the particulars of each case, a doctor can give proper advice and treatment. High oxygen level symptoms: Now let us take a look at some symptoms of a high oxygen level in the blood: - Inflammation in the lungs - Visual disturbance - Disturbed sleep High blood oxygen, like its counterpart on the low side, is a major cause for concern. Fix an appointment with a doctor for the appropriate course of treatment. Causes of low and high blood oxygen levels: Some of the causes of high and low blood oxygen levels are: - Lung disease Tips for improving oxygen saturation levels in blood: If your oxygen saturation has deviated from normal levels, there is no need to panic.
Slight changes in lifestyle can produce marked changes in blood oxygen saturation. These tips just need a little care: - Deep breathing: Taking deep breaths helps improve oxygen levels in the blood. Inhaled oxygen goes directly to the lungs, which accumulate it; as blood passes through, the oxygen mixes with the blood. Deep breathing is helpful for normalizing oxygen values. - Adding plants to our surroundings: Through photosynthesis, plants produce oxygen, so surroundings with healthy plants mean more oxygen is generated - and more is available for us to inhale. This requires only a slight investment: a trip to a nursery to buy a plant, or your gardener can bring some good-quality plants. - Drinking water in good quantity: Water carries dissolved oxygen, and drinking enough of it during the day also flushes toxins from the body. As a rich carrier of dissolved oxygen, water can be a useful supporting source of the oxygen the body needs. - Taking healthy diet plans: Choose healthy diet plans; the healthier the food, the better the body can maintain the oxygen the blood needs. - Daily walks: Walk a mile - this should be our motto. The more we walk, the faster the heart works to meet the body's demand, and faster heartbeats draw more air in through the nose. That air goes straight to the lungs, where it mixes with the blood. - Jogging: Jogging draws in even more air than walking, giving the lungs more opportunity to mix oxygen into the blood, so oxygen-laden blood flows freely throughout the body.
- Aerobics: Another good way of improving the oxygen content of the blood is to take a few sessions of aerobic exercise. Just be on the lookout for a good place for your aerobics. Facts to remember: Some facts about saturated blood oxygen to remember: - Oxygen levels in the blood should be within the normal range. - Even small deviations from the normal range can be a cause for concern. - Deviations from normal blood oxygen values call for consultation with a registered medical practitioner. - Slight changes in our habits (eating practices, exercise) can help correct deviations in oxygen values.
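The ranges quoted above can be summarized in a short sketch. The function name and wording of the labels are my own; the thresholds (95-99% normal, below 80% grave concern) come from the figures in this article, and none of this is medical advice:

```python
def classify_spo2(saturation_percent: float) -> str:
    """Classify a blood oxygen saturation reading (percent).

    Thresholds follow the ranges quoted in the article:
    95-99% is normal, below 80% is a grave concern, and readings
    above the ~99% upper limit are also flagged. Illustrative
    sketch only, not a clinical tool.
    """
    if saturation_percent < 80:
        return "grave concern - seek medical help"
    if saturation_percent < 95:
        return "below normal - consult a doctor"
    if saturation_percent <= 99:
        return "normal"
    return "above normal - consult a doctor"

# A few example readings
for reading in (98, 92, 75, 100):
    print(reading, "->", classify_spo2(reading))
```

Real pulse oximeters report a measured SpO2 value; the point of the sketch is only that the article's advice reduces to comparing that value against a small set of thresholds.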
Life on Earth is supported by a thin layer of gases held to the planet’s surface by gravity. Our atmosphere provides the breath of life and regulates global temperature in the face of a constant onslaught of solar radiation. Aerosol particles—including smoke, ash, soot, mineral dust, and sea salt—play a key role in regulating atmospheric energy exchanges. The particles can directly affect the energy budget by raising the Earth’s albedo, scattering and reflecting solar radiation back into space and ultimately cooling the planet. Indirect effects of aerosols are more complex, such as the uncertain ways particles interact with clouds. A better scientific understanding of the role of aerosols in the atmosphere could give communities a vital tool to adapt to the Earth’s changing climate. Here Lacagnina et al. evaluate the single-scattering albedo of the atmosphere with measurements from the Polarization and Anisotropy of Reflectances for Atmospheric Sciences Coupled with Observations from a Lidar (PARASOL) satellite and compare these measurements with observations from the Ozone Monitoring Instrument and the ground-based Aerosol Robotic Network. This study is the first time PARASOL data have been compared alongside other observations, and the data cover almost the entire globe, offering valuable insight into aerosol influence. The scientists found that the data sets usually match real-world observations quite well but that the models slightly overestimate aerosol scattering relative to absorption. In other words, the models predicted an outsize role for aerosols in scattering solar radiation and cooling the upper atmosphere. The researchers suggest this bias implies that the direct and indirect effects of aerosols within the atmosphere may be bigger than previously simulated. The researchers hope their work highlights the potential and the importance of aerosols and the single-scattering albedo effect in evaluating the Earth’s energy exchange.
The success of this comparison between PARASOL data, observations, and model simulations may help open the door to more cohesive analyses of the atmosphere. (Journal of Geophysical Research: Atmospheres, doi:10.1002/2015JD023501, 2015) —Lily Strelich, Freelance Writer Citation: Strelich, L. (2015), Aerosols may play a big part in atmospheric absorption, Eos, 96, doi:10.1029/2015EO040423. Published on 2 December 2015.
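For readers unfamiliar with the central quantity, the single-scattering albedo compared across these data sets has a standard definition (general background, not taken from the study itself): it is the ratio of scattering to total extinction,

```latex
\omega_0 \;=\; \frac{\sigma_{\mathrm{sca}}}{\sigma_{\mathrm{ext}}}
        \;=\; \frac{\sigma_{\mathrm{sca}}}{\sigma_{\mathrm{sca}} + \sigma_{\mathrm{abs}}}
```

where the sigmas are the scattering, extinction, and absorption coefficients. A value of omega_0 near 1 describes an aerosol that mostly scatters sunlight (a cooling influence); lower values indicate more absorption. A model that overestimates scattering relative to absorption, as described above, is one whose single-scattering albedo is biased high.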
Dystonia is generally classified based on its cause, the age at which symptoms first occur, and the regions of the body affected. Based on cause, dystonia is classified as primary or secondary. Primary dystonia is a condition in which dystonia is the only clinical feature; there is no evidence of cell death or a known cause. It is also known as idiopathic torsion dystonia. The primary dystonias are often inherited from a parent. In non-primary or secondary dystonia, an acquired or exogenous cause is identified. This can be a prior stroke, a birth injury, or exposure to certain drugs. Secondary dystonia may also represent one symptom of another neurological disorder, such as Parkinson’s disease. Based on age of onset, dystonia is classified as early-onset, if it develops before age 21, or late-onset, if it develops after age 21. The age at onset is an important indicator of whether the dystonia is likely to spread to other body regions: the younger the patient at onset, the higher the likelihood that the dystonia will involve other areas. In patients with primary late-onset dystonia, dystonia often begins in the upper body, such as the head, neck, or an arm. Based on the regions of the body affected, dystonia is classified as: - Generalized Dystonia: the most widespread form of dystonia; it affects the legs (or one leg and the trunk) plus other regions, most commonly the arms. - Focal Dystonia: involves only one region of the body, such as the neck, vocal cords, or hand. Focal dystonia includes blepharospasm, oromandibular dystonia, cervical dystonia (or spasmodic torticollis), laryngeal dystonia (also called spasmodic dysphonia), and limb dystonia. - Hemidystonia: affects one half of the body. - Segmental Dystonia: affects two or more adjacent body regions, such as the neck and an arm. - Multifocal Dystonia: affects two or more distant regions of the body, such as the upper face and the hand.
For even more information on dystonia, please visit our "What is Dystonia" page.
Down syndrome is a chromosomal disorder that can be detected at an early stage, i.e., in the womb or at birth. It affects one’s physical and mental state. Take a look at the effects of Down syndrome. Down syndrome is a chromosomal disorder that results from the presence of an extra 21st chromosome. The British doctor John Langdon Down described the syndrome in 1866, and hence it is named Down syndrome. It produces many effects in children with this disorder. Let us look at these effects in detail. Effects on the Physique Children with Down syndrome exhibit major or minor differences in their physical structure. They have a single crease, instead of a double crease, across one or both palms. Those with Down syndrome have shorter limbs, poor muscle tone, and an abnormally large space between the first and second toes. Typically, they have almond-shaped eyes, a flat-bridged nose, a protruding tongue, and a small oral cavity. Susceptibility to Other Diseases Children with Down syndrome have a greater risk of heart disease. They are at risk for congenital heart disease and gastroesophageal reflux disease, and may suffer from frequent ear infections. Sleep apnea and thyroid problems are common among children with Down syndrome. Epilepsy, leukemia, and disorders of the immune system are among the less common effects of Down syndrome. Effects on Mental Ability Many children with Down syndrome have intellectual disabilities. Their cognitive development varies. There is a delay in acquiring speech and fine motor skills. Their language and communication skills differ markedly from those of other children, and they are recognizably less articulate. Their emotional and intellectual growth is often delayed. Children with Down syndrome often struggle with social abilities. Some may not be able to do the complex thinking that is required in the study of certain subjects; some may achieve the ability for complex thinking much later in life.
Children with Down syndrome show marked learning disabilities. There is a reduction in fertility among both men and women with Down syndrome; however, there have been three recorded instances of men with Down syndrome who have fathered children. People with Down syndrome have a comparatively lower risk of developing fatal cancers. Science has not yet found a clear reason for this, but the reduced incidence of cancers in patients with Down syndrome may be the result of tumor-suppressor genes on chromosome 21. The lower risk of cancers might also be because those with Down syndrome are less exposed to environmental risk factors. These people are also less prone to diabetic retinopathy and the hardening of arteries. Down syndrome can be identified during pregnancy or at birth. A healthy environment can support these children's development. After the detection of Down syndrome, the child should receive proper medical care. Parents must understand their child’s problems and provide him/her with suitable assistance. It is important to consider each case of Down syndrome individually, as the effects of this condition vary. Those with Down syndrome should be shown care and concern. Support from parents and educational aids can help make their lives better.
Now we’ll dig deeper into Fraction Finder-friendly processes. Intermediate Level Syllabus (Processes) Ready to dig deeper? The Intermediate Level consists of three sections: Relevant Processes, Hardware, and Software for the Fraction Finder. - Short Path Distillation Explained - Wiped Film Evaporation Explained - Ethanol Extraction Explained Short Path Distillation Explained Short path distillation (SPD) is a process of extracting individual chemical compounds from a larger material. In this process, technicians are looking to split a material, likely a botanical, into different chemical compounds so that they can keep the useful substances and remove the contaminants. It is called “short path” due to the short distance the chemicals travel through glassware before they fully separate. This process is commonly used for purifying small amounts of a compound, or when working with compounds that become unstable at high temperatures. Examples include organic compounds such as solvents and molecules utilized in the life sciences industry for biomedical or biological research, as well as botanical oil refinement. Every Standard Operating Procedure is different, but the basic steps include: 1. Set the temperature and vacuum level 2. Observe the flow and bubbling action through the head 3. Observe both the rate of flow and color of distillate in the collection flask 4. Determine when to switch flasks Here is a more detailed set of instructions, including where the Fraction Finder comes in: 1. Set up a clean short path distillation system using the following equipment: heating/receiving round-bottom flasks, distillation head, distribution condensers, short path distillation head, and the Fraction Finder 2. Place the liquid to be distilled in the heating flask 3. Pull a vacuum below 5 Torr 4. Heat the system: make sure the heating flask is no more than 1/2 full on a hot plate at around 180–200° Celsius; this will heat up the flask 5.
Vary temperature to get a steady flow of materials through the condenser, but not so much that it is violently boiling through the head 6. Use the Fraction Finder’s readings to determine when the “Main Body” fraction is flowing through the condenser – when this happens, change the collection flask to avoid cross contaminating the fractions 7. Use the Fraction Finder’s readings to determine when the “Main Body” fraction is decreasing – when this happens, change the collection flask in order to collect the residual “Tails” 8. Once the run is complete, clean your glassware and set it up for another pass Diagram of a Typical Short Path Distillation Setup Wiped Film Evaporation Explained Wiped film evaporation (WFE) is a continuous distillation process, in which a rotating wiper sends the distillate onto a heated surface within the system, thinning the material and separating it into two different pathways that are collected in separate flasks. The WFE process is commonly used for the production of refined fragrance, fats, and hemp oil, among other markets. Basic Steps of a Wiped Film Process 1. Crude oil is fed into a main chamber 2. A spinning wiper sends the distillate onto a heated surface within the system 3. The spinning wiper thins the crude oil to a film to maximize the heat uniformity and thermal transfer of the product, as well as to accelerate the heat-up times 4. This results in an efficient, uniform heating process 5. While the heated crude falls into a flask, the vaporized crude is sent through a different pathway, where it is re-condensed and collected. Currently, in WFE of botanical oils, processors look at the color and viscosity of the fluid to determine the quality of their separation during the process. Processors can control the quality of their process using four parameters: temperature, flow rate, vacuum pressure, and wiper speed (RPM).
During the process, processors primarily control and adjust temperature and flow rate, while vacuum pressure and wiper speed are fixed. The current control process is very subjective and based on experience, which is where the Fraction Finder comes in. Wiped Film Evaporation vs Short Path Distillation Other methods of refinement such as short-path distillation (SPD) have no feed stream, and thus are considered ‘batch processing’. WFE, however, is a process that is constantly separating, and thus can be operated in either ‘semi-batch’ or ‘continuous’ processing modes. Also unlike SPD, which requires sequential separation of all fractions, WFE separates in parallel, overcoming fundamental speed and efficiency limits of SPD. Furthermore, WFE has no big boiling flask, which reduces the amount of time the crude oil is kept at high temperature. This time reduction significantly decreases the thermal degradation of cannabinoids during the separation process. Lastly, while WFE is known to have higher throughput, this increase comes at a higher equipment cost. Wiped Film processors manage four core parameters: 1. Temperature 2. Flow rate 3. Vacuum pressure 4. Wiper speed (RPM) Ethanol Extraction Explained Ethanol extraction is a relatively quick process that comes prior to distillation. During this phase, cannabinoids are in their acidic form. Basic Steps of an Ethanol Extraction Process 1. Filling System – the extractor is filled with ethanol 2. Agitation then Soak Cycle – the extractor is agitated then goes through a soak cycle 3. Full Flow – the flow for recirculation begins 4. Recirculation Phase – the extractor recirculates ethanol over the system 5. Emptying System using N2 Gas – the extractor is emptied with nitrogen gas and the system is no longer chilled 6. Emptying Reservoir – the extractor reservoir is emptied
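The cut-point decisions in the SPD procedure above amount to watching a detector signal cross thresholds: switch flasks when the main-body reading rises, and switch again when it tapers into the tails. A minimal sketch of that logic follows; the function name, the normalized signal, and both threshold values are hypothetical, since the Fraction Finder's actual readings and interface are not described in this text:

```python
def fraction_state(signal: float,
                   on_threshold: float = 0.8,
                   off_threshold: float = 0.3) -> str:
    """Map a normalized detector reading to a fraction label.

    Hypothetical thresholds: the reading is assumed to rise above
    `on_threshold` while the main-body fraction is flowing and to
    fall below `off_threshold` as it tapers into the tails.
    """
    if signal >= on_threshold:
        return "main body - collect in main flask"
    if signal <= off_threshold:
        return "heads/tails - use the other flask"
    return "transition - watch closely"

# Simulated readings over a run: heads, ramp-up, main body, taper
readings = [0.1, 0.5, 0.9, 0.85, 0.4, 0.2]
for r in readings:
    print(r, "->", fraction_state(r))
```

In practice an operator (or controller) would also want hysteresis or smoothing so that a single noisy reading near a threshold does not trigger a flask change; the sketch only shows the bare decision rule.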
A team led by professor Arne Skerra at the Technical University of Munich (TUM) has developed an innovative strategy for preventing the anthrax bacterium from absorbing iron, which is crucial for its survival. It does so by neutralizing a special iron complexing agent produced by the bacterium. Because the anthrax pathogen only spreads in the body when it gains access to this essential element, the approach is expected to provide an effective treatment against the life-threatening infection. Anthrax is a disease caused by bacteria. Although the pathogen responsible for anthrax can be treated with antibiotics, the toxin it releases in the body is particularly dangerous. If the infection is recognized too late, it is often lethal. The anthrax pathogen can survive in the soil for decades in the form of spores. Grazing livestock, such as cows or sheep, ingest the spores and become infected with anthrax. People who work with these livestock animals or with animal products may become infected; however, it is very rare for anthrax to occur in animal herds in Germany today. Furthermore, humans may also become infected if the meat of infected animals is not sufficiently heated. In late August of this year, livestock in the southeast of France became infected with anthrax -- the most serious outbreak in 20 years, according to the French media. Populations of chimpanzees and gorillas living in the wild are also endangered by anthrax. Today, anthrax constitutes a global threat primarily due to its potential use as a bioweapon. In 2001, several letters containing anthrax spores were distributed in the United States of America; five people died at the time. Just like any cell in the body, bacteria require the essential trace element iron. However, in body fluids, iron is tightly bound to proteins and, therefore, not easily available.
Accordingly, bacteria produce special complexing agents called siderophores (iron carriers) in order to bind the few available iron ions and subsequently absorb them via their own import systems. The human immune system prevents this via a protein that circulates in the blood called siderocalin. It has a high affinity for common iron siderophores and scavenges them, allowing them to be removed via the kidneys. Petrobactin is a peculiar iron carrier produced by the anthrax pathogen which is not recognized by siderocalin. The aim of Prof. Skerra from the Department of Biological Chemistry was to disable this anthrax siderophore, thereby inhibiting the reproduction of the anthrax pathogen. With the aid of Anticalin® technology, which was developed by his department, he and his team were able to reconstruct the body's own siderocalin. The result was "petrocalin," which is able to neutralize the anthrax pathogen's siderophore. "The newly developed petrocalin captures petrobactin, thereby depriving the anthrax pathogen of access to vital iron and acting as a protein antibiotic," says Skerra. "In collaboration with professor Siegfried Scherer from the Department of Microbial Ecology, we have been able to demonstrate that this approach works in bacterial cultures." Skerra's strategy opens up a new avenue of treatment for anthrax infections by effectively suppressing the spread of the bacterium in the patient's body. The biochemical and protein structure analyses will be published by Skerra and his colleagues in the journal Angewandte Chemie, also providing insight into the molecular mechanisms. Source: Technical University of Munich
Personal, social, health and citizenship education deals with many real-life issues young people face as they grow up. It gives them the knowledge and skills needed to lead healthy and responsible lives as confident individuals and members of society. In Years 5 and 6, each class receives one hour of PSHCE per week. Over the course of two years, there will be a progression within each unit of work in order to build on and enhance the previous year's work. At KS3, PSHCE is part of the Ethics and Values program, where pupils receive three hours over a two-week cycle. Each term's lessons are grouped into units of work, which cover the following areas:
· perceiving themselves as growing and changing individuals with their own experiences and ideas, and as members of communities
· staying healthy and safe, managing risk
· the wider world and the interdependence of communities within it
· social justice and moral responsibility
· how their own choices and behaviour can affect local, national or global issues and political and social institutions
· how to make more confident and informed choices about their health, behaviour and environment
· taking more responsibility, individually and as a group, for their own learning
· defining and resisting bullying.
PSHCE is also responsible for educating the pupils about Drugs (KS2 & 3), Relationships Education (KS2) and Relationships and Sex Education (KS3). Effective relationships and sex education is essential if young people are to make responsible, informed and healthy decisions about their lives, both now and in the future. Drug, alcohol and tobacco education is an explicit, planned component of PSHCE. It enables pupils to increase their knowledge and understanding of drugs, alcohol and tobacco, and to explore attitudes and develop skills for making healthy, informed choices. Lessons within year groups may be enhanced using outside speakers and educational visits.
Illustration 3.4: Projectile Motion A purple ball undergoes projectile motion as shown in the animation (position is given in meters and time is given in seconds). The blue and red objects illustrate the x and y components of the ball's motion. Ghost images are placed on the screen every second. To understand projectile motion, you must first understand the ball's motion in the x and y directions separately (any multidimensional motion can be resolved into components). Consider the x direction. Notice that the x coordinate of the projectile (purple) is identical to the x coordinate of the blue object at every instant. What do you notice about the spacing between blue images? You should notice that the displacement between successive images is constant. So what does this tell you about the x velocity of the projectile? What does it tell you about the x acceleration of the projectile? This should tell you that the object moves with a constant velocity in this direction (which is also depicted on the left graph). Now consider the y direction. Notice that the y coordinate of the projectile (purple) is identical to the y coordinate of the red object at every instant. What do you notice about the spacing between successive images for the red object? You should notice that the displacement between successive images gets smaller as the object rises and gets larger as the object falls. This means that it has a downward acceleration. By studying the right-hand graph, we can also see that the y acceleration is constant. A particularly important point to understand for the motion of a projectile is what happens at the peak. What is the velocity of the projectile at the peak? This is a tricky question because you have a good idea that the y velocity is zero. However, does this mean that the velocity is zero? Remember that velocity has two components, vx and vy. At the peak, vx is not zero.
Therefore, the velocity at the peak is not zero. Illustration authored by Aaron Titus with support by the National Science Foundation under Grant No. DUE-9952323 and placed in the public domain.
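The component analysis above can be checked numerically. This sketch uses made-up initial conditions (vx0 = 3 m/s, vy0 = 9.8 m/s), not values from the illustration itself; it shows that at the peak the y velocity vanishes while the speed equals the constant x velocity:

```python
import math

# Resolve projectile motion into independent components:
# vx is constant; vy decreases at the constant rate g.
g = 9.8               # m/s^2, magnitude of downward acceleration
vx0, vy0 = 3.0, 9.8   # illustrative initial velocity components, m/s

def velocity(t):
    """Velocity components (vx, vy) at time t."""
    return vx0, vy0 - g * t

t_peak = vy0 / g               # time at which vy = 0
vx, vy = velocity(t_peak)
speed_at_peak = math.hypot(vx, vy)

# At the peak vy is zero, but the velocity is not: the speed equals vx.
assert vy == 0.0
assert speed_at_peak == vx0
```

The same decomposition explains the ghost images: equal x spacing (constant vx) and shrinking-then-growing y spacing (constant downward acceleration).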
Watch water droplets dance across a surface using electricity Cool things happen when you control water with a computer. Imagine drops of water sprinkled onto a pan’s Teflon surface. Those droplets will stay in place, or move around if you tilt the pan from side to side, thanks to gravity. But researchers at the Massachusetts Institute of Technology’s Media Lab have created a system by which droplets on a device follow a computer’s instructions and move in a controlled way across the horizontal surface, like pieces on a board game. The project was born out of a desire to “computationally reconfigure physical matter,” says the project’s lead, Udayan Umapathi, a research scientist at the MIT Media Lab. In other words, take a substance—in this case, water— and manipulate the way it moves with a computer. The result is a computer-controlled system that does just that: allows users to direct a water droplet across a board as they like, and even combine it with other droplets. It works using a 19th-century technique called “electrowetting.” The device itself is based on a circuit board, and its surface consists of hundreds of gold-plated copper pads, connected by wiring underneath. Upon those pads is a five-micron-thick sheet of smooth plastic that’s covered with Teflon on top to keep the surface hydrophobic, meaning that the water is repelled from the surface and thus doesn’t spread out in a messy pool. Sending a tiny electrical current to one of the many pads on the board creates a positive charge that lures the water droplet towards it. By charging the pads in a specific order, a water droplet can be pulled along whatever path the person controlling the computer wants. Think of it like putting a magnet beneath a board game to control a metal piece above, except in this case they are using a custom-made app on the connected computer to dictate the water’s behavior. So what’s the point? 
Basically, Umapathi says that it is interesting to take a common yet fascinating substance like water and use it in this way. “[Water] has all these beautiful properties,” he says, “but it’s not being put to use for a computer interface.” He also sees a potential application for the technology in biology, taking the place of a human who might use a pipette to carefully move or mix water droplets during experiments. Using technology to more closely join the physical world with the digital one, or to vary the way we input information to a device—beyond typing on a computer keyboard or iPhone screen, or speaking to a virtual assistant like Alexa—can produce fascinating results. The MIT Media Lab also revealed technology that can detect the words you speak silently to yourself, and Google and Levi’s have teamed up to produce a jacket in which you touch the sleeve to input instructions that it relays to your phone via Bluetooth. In this case, part of the idea is that the technology creates a peaceful, calming effect with a substance from nature. “We want to move away from the idea of just looking at pixels which are stuck behind glass,” he says. “We want people to be connected to the natural, and the most beautiful material, that we have on the planet.” “This is a way to actually program reality,” he says. “That’s one way to think about it.”
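The pad-sequencing idea described earlier, where energizing pads one after another pulls the droplet along a chosen path, can be sketched as a toy simulation. The `route_droplet` helper and the grid coordinates are illustrative assumptions, not MIT's actual control software:

```python
# Toy sketch of electrowetting pad sequencing: charging pads in order
# attracts the droplet from one pad to the next, tracing out a path.

def route_droplet(start, path):
    """Charge each pad coordinate in `path` in order; the droplet moves
    to each newly charged pad. Returns every position it visits."""
    position = start
    visited = [position]
    for pad in path:
        # Energizing `pad` creates the positive charge that lures the
        # droplet from its current (adjacent) pad.
        position = pad
        visited.append(position)
    return visited

# Move a droplet from (0, 0) three pads to the right, then one pad up.
trail = route_droplet((0, 0), [(1, 0), (2, 0), (3, 0), (3, 1)])
```

This mirrors the board-game analogy in the article: the "magnet" is whichever pad currently carries the charge, and the app simply decides the order in which pads are energized.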
A game in which players take it in turns to choose a number. Can you block your opponent?
Using your knowledge of the properties of numbers, can you fill all the squares on the board?
What is the smallest number of answers you need to reveal in order to work out the missing headers?
Make a line of green and a line of yellow rods so that the lines differ in length by one (a white rod).
Can you find a relationship between the number of dots on the circle and the number of steps that will ensure that all points are hit?
Play the divisibility game to create numbers in which the first two digits make a number divisible by 2, the first three digits make a number divisible by 3...
How many integers between 1 and 1200 are NOT multiples of any of the numbers 2, 3 or 5?
Some 4 digit numbers can be written as the product of a 3 digit number and a 2 digit number using the digits 1 to 9 each once and only once. The number 4396 can be written as just such a product. Can. . . .
Follow this recipe for sieving numbers and see what interesting patterns emerge.
Find a cuboid (with edges of integer values) that has a surface area of exactly 100 square units. Is there more than one? Can you find them all?
Find the frequency distribution for ordinary English, and use it to help you crack the code.
A game that tests your understanding of remainders.
A game for two people, or play online. Given a target number, say 23, and a range of numbers to choose from, say 1-4, players take it in turns to add to the running total to hit their target.
Data is sent in chunks of two different sizes - a yellow chunk has 5 characters and a blue chunk has 9 characters. A data slot of size 31 cannot be exactly filled with a combination of yellow and. . . .
Find the largest integer which divides every member of the following sequence: 1^5-1, 2^5-2, 3^5-3, ... n^5-n.
Given the products of adjacent cells, can you complete this Sudoku?
Take any pair of numbers, say 9 and 14. Take the larger number, fourteen, and count up in 14s. Then divide each of those values by the 9, and look at the remainders.
Gabriel multiplied together some numbers and then erased them. Can you figure out where each number was?
Can you find a way to identify times tables after they have been shifted up?
The clues for this Sudoku are the product of the numbers in adjacent squares.
I put eggs into a basket in groups of 7 and noticed that I could easily have divided them into piles of 2, 3, 4, 5 or 6 and always have one left over. How many eggs were in the basket?
Here is a machine with four coloured lights. Can you develop a strategy to work out the rules controlling each light?
Which pairs of cogs let the coloured tooth touch every tooth on the other cog? Which pairs do not let this happen? Why?
Do you know a quick way to check if a number is a multiple of two? How about three, four or six?
Substitution and Transposition all in one! How fiendish can these codes get?
The sum of the first 'n' natural numbers is a 3 digit number in which all the digits are the same. How many numbers have been summed?
Imagine we have four bags containing a large number of 1s, 4s, 7s and 10s. What numbers can we make?
Find some triples of whole numbers a, b and c such that a^2 + b^2 + c^2 is a multiple of 4. Is it necessarily the case that a, b and c must all be even? If so, can you explain why?
The number 8888...88M9999...99 is divisible by 7 and it starts with the digit 8 repeated 50 times and ends with the digit 9 repeated 50 times. What is the value of the digit M?
A three digit number abc is always divisible by 7 when 2a+3b+c is divisible by 7. Why?
Three people chose this as a favourite problem. It is the sort of problem that needs thinking time - but once the connection is made it gives access to many similar ideas.
Using the digits 1, 2, 3, 4, 5, 6, 7 and 8, two two-digit numbers are multiplied to give a four-digit number, so that the expression is correct. How many different solutions can you find?
A mathematician goes into a supermarket and buys four items. Using a calculator she multiplies the cost instead of adding them. How can her answer be the same as the total at the till?
Can you work out what size grid you need to read our secret message?
Rectangles are considered different if they vary in size or have different locations. How many different rectangles can be drawn on a chessboard?
The number 12 = 2^2 × 3 has 6 factors. What is the smallest natural number with exactly 36 factors?
You are given the Lowest Common Multiples of sets of digits. Find the digits and then solve the Sudoku.
Imagine we have four bags containing numbers from a sequence. What numbers can we make now?
Factor track is not a race but a game of skill. The idea is to go round the track in as few moves as possible, keeping to the rules.
Have you seen this way of doing multiplication?
The five digit number A679B, in base ten, is divisible by 72. What are the values of A and B?
Ben passed a third of his counters to Jack, Jack passed a quarter of his counters to Emma and Emma passed a fifth of her counters to Ben. After this they all had the same number of counters.
Explore the relationship between simple linear functions and their graphs.
Can you find any perfect numbers? Read this article to find out more...
The puzzle can be solved by finding the values of the unknown digits (all indicated by asterisks) in the squares of the $9\times9$ grid.
Each letter represents a different positive digit. AHHAAH / JOKE = HA. What are the values of each of the letters?
Find the highest power of 11 that will divide into 1000! exactly.
Twice a week I go swimming and swim the same number of lengths of the pool each time. As I swim, I count the lengths I've done so far, and make it into a fraction of the whole number of lengths I. . . .
Given the products of diagonally opposite cells - can you complete this Sudoku?
A number N is divisible by 10, 90, 98 and 882 but it is NOT divisible by 50 or 270 or 686 or 1764. It is also known that N is a factor of 9261000. What is N?
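This last puzzle can be verified by brute force. The sketch below, with helper names of my own choosing, searches every divisor of 9261000 for the stated divisibility conditions and confirms that exactly one candidate survives:

```python
import math

# Brute-force check of the puzzle above: find every divisor of 9261000
# that is divisible by 10, 90, 98 and 882 but NOT by 50, 270, 686 or 1764.

def divisors(n):
    """All positive divisors of n, via trial division up to sqrt(n)."""
    divs = set()
    for d in range(1, math.isqrt(n) + 1):
        if n % d == 0:
            divs.add(d)
            divs.add(n // d)
    return sorted(divs)

def candidates():
    """Divisors of 9261000 meeting the puzzle's divisibility conditions."""
    required = (10, 90, 98, 882)
    forbidden = (50, 270, 686, 1764)
    return [
        n for n in divisors(9261000)
        if all(n % r == 0 for r in required)
        and all(n % f != 0 for f in forbidden)
    ]

print(candidates())  # the list contains a single value, the puzzle's answer
```

The exclusions are what pin N down: each forbidden divisor caps the power of one prime (50 caps the 5s, 270 the 3s, 686 the 7s, 1764 the 2s), so only one divisor of 9261000 fits.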