The Montessori Method of education, developed by Dr. Maria Montessori, is a child-centered, development-based approach to education. Dr. Montessori observed that the child absorbs from the environment they are in. She also believed that when children work on their own, they can reach new levels of independence, becoming self-motivated and gaining a deeper understanding. Dr. Montessori used specially designed materials to incite the child's inner desire to learn.
During the period from birth to six years, the young child has special powers. The child learns by unconsciously taking in everything around him and so constructs himself, just like a sponge in water. Using his senses, he creates himself by absorbing everything from his environment simply through living in it. He does this easily and naturally.
From birth to three years old, the young child unknowingly or unconsciously acquires his basic abilities. The child’s work during this period is to become independent from the adult for his basic human functions. He learns to speak, to walk, to gain control of his hands and to master his bodily functions. Once the child achieves these basic skills, by about three years old, he moves into the next phase. During this period, the child starts developing his will. In this phase he needs freedom to move, freedom to choose and freedom to concentrate. This is the period when the child comes to the Montessori Environment.
Some elements are made up of single atoms:
He, Fe, and Na are the Chemical Symbols of the elements.
Some elements are made up of groups of atoms:
These groups of atoms are called molecules.
Molecules can also be made up of combinations of different types of atoms. These substances are called compounds:
O2, CH4 and NH3 are the Chemical Formulas of Oxygen, Methane and Ammonia respectively. CH4 means that a single molecule of methane contains one atom of Carbon and four atoms of Hydrogen. This chemical formula could have been written C1H4 but the 1 is never written. Similarly, a molecule of Ammonia (NH3) contains one atom of Nitrogen and three atoms of Hydrogen.
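To make this notation concrete, here is a small Python sketch (my own illustration, not part of the original article) that counts the atoms in simple formulas such as CH4 and NH3; the function name is arbitrary.

```python
import re

def parse_formula(formula):
    """Count atoms in a simple formula such as 'CH4' or 'NH3'.

    Handles an element symbol (a capital letter, optionally followed by a
    lower-case letter) with an optional count; it does not handle brackets.
    """
    counts = {}
    for symbol, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[symbol] = counts.get(symbol, 0) + (int(number) if number else 1)
    return counts

print(parse_formula("CH4"))    # {'C': 1, 'H': 4}
print(parse_formula("NH3"))    # {'N': 1, 'H': 3}
print(parse_formula("H2SO4"))  # {'H': 2, 'S': 1, 'O': 4}
```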
For example, if Carbon (C) is burnt in Oxygen (O2) to form Carbon Dioxide, a Chemical Reaction occurs. This reaction can be written:

C + O2 → CO2
This is called a Chemical Equation. The substances on the left hand side of the equation are called the Reactants. The substances on the right hand side are called the Products.
There is one very important rule with chemical equations: the number of atoms of each element must be the same on both sides.
On the left hand side, there is an atom of Carbon and a molecule of Oxygen (containing two atoms). On the right hand side there is a molecule of Carbon Dioxide (containing one atom of Carbon and two atoms of Oxygen). The number of atoms on the left hand side is equal to the number of atoms on the right hand side. All that has changed is the arrangement of the atoms. In a chemical reaction atoms are re-arranged; no atoms are destroyed or created.
In the next example, Hydrogen gas is mixed with Oxygen gas. If the mixture is sparked, it explodes to form water. This chemical reaction can be expressed as:

H2 + O2 → H2O (not yet balanced)
On the left hand side, there is a molecule of Hydrogen (containing two atoms) and a molecule of Oxygen (also containing two atoms). On the right hand side there is a molecule of water (containing two atoms of Hydrogen and one atom of Oxygen). The left hand side has one extra atom of Oxygen. This is not allowed by the Law of Conservation of Matter. Both sides must contain the same number of atoms.
To make the equation conform, we must balance the equation.
It is not possible to change the chemical formulas of the reactants or products. Water will always be H2O. Balancing the equation is achieved by changing the number of molecules involved. The balanced form of the above equation is:

2H2 + O2 → 2H2O
Now, on the left hand side, there are two molecules of Hydrogen (each containing two atoms, making four atoms) and a molecule of Oxygen (containing two atoms). On the right hand side there are two molecules of water (each containing two atoms of Hydrogen and one atom of Oxygen, making a total of four atoms of Hydrogen and two of Oxygen). The equation is now balanced.
In summary, when Hydrogen reacts with Oxygen, two molecules of Hydrogen react with one molecule of Oxygen to give two molecules of water.
This reaction gives out a lot of heat when it goes from left to right. It is said to be Exothermic (from two Greek words meaning out and heat). Because of the Law of the Conservation of Energy, if the reaction was made to go from right to left, energy would have to be added. Reactions that require energy are called Endothermic.
The following reaction is between Sulphuric Acid (H2SO4) and Sodium Hydroxide (NaOH) to give Sodium Sulphate (Na2SO4) and water (H2O):

H2SO4 + 2NaOH → Na2SO4 + 2H2O
This equation is balanced. Counting the individual atoms on both sides gives four atoms of Hydrogen, two of Sodium, one of Sulphur and six Oxygen. This is achieved because one molecule of Sulphuric Acid reacts with two molecules of Sodium Hydroxide.
The final reaction is between Nitrogen (N2) and Hydrogen (H2) to give Ammonia (NH3):

N2 + 3H2 ⇌ 2NH3
The equation is balanced since there are 2 Nitrogen atoms and 6 Hydrogen atoms on both sides.
This reaction is different from the previous ones. When Hydrogen and Nitrogen are mixed together at room temperature and pressure, very little happens. When the temperature and pressure are raised, a partial reaction occurs.
The reaction goes in both directions. While the Nitrogen and Hydrogen are combining to form Ammonia, Ammonia splits to form Hydrogen and Nitrogen. A mixture of all three substances results. This type of reaction is called an Equilibrium and is represented by arrows going in both directions.
The reaction can be speeded up by adding a Catalyst. A catalyst is a substance that helps a reaction without being used up; it allows equilibrium to be reached more quickly but does not change the position of the equilibrium.
If Ammonia is removed from the equilibrium mixture, the reaction will move to produce more Ammonia so that equilibrium is attained.
To answer these types of questions we must use a quantity called Relative Atomic Mass (RAM). In simple terms, this tells us how heavy different atoms are. In actual fact, it tells us their relative masses.
The Relative Atomic Masses of the elements used in the examples below are approximately: Hydrogen 1, Carbon 12, Nitrogen 14, Oxygen 16, Sodium 23 and Sulphur 32.
These values tell us that a Carbon atom is about 12 times heavier than a Hydrogen atom, while an Oxygen atom is 16 times as heavy as a Hydrogen atom.
Molecules have a Relative Molecular Mass. This is the sum of the relative atomic masses of its atoms. An Oxygen molecule (made up of two atoms of Oxygen, each with a relative atomic mass of 16) has a relative molecular mass of 32.
Using the table we can put figures to the chemical equations.
As an example, how many grams of Oxygen (O) will be used if 12g of Carbon (C) are burnt to form Carbon Dioxide (CO2)?
Remember, the chemical reaction is written:

C + O2 → CO2
From the table above we can see that Carbon has a relative atomic mass of 12 while the Oxygen molecule has a relative molecular mass of 32. Since a single atom of Carbon reacts with a single molecule of Oxygen, 12g of Carbon will react with 32g of Oxygen.
The relative molecular mass of Carbon Dioxide is 44 (C = 12, O2 = 32; 12 + 32 = 44). So 12g of Carbon react with 32g of Oxygen to give 44g of Carbon Dioxide.
If the question had been how much Oxygen 1g of Carbon requires to burn completely to Carbon Dioxide, all the figures can be divided by 12. Dividing all the figures by 12 gives:

1g of Carbon + 2.67g of Oxygen → 3.67g of Carbon Dioxide
Note that 2.67 is 32 ÷ 12 and 3.67 is 44 ÷ 12. Also, the reactants always weigh as much as the products. In a chemical reaction matter cannot be created or destroyed.
For the formation of water from Hydrogen and Oxygen we have:

2H2 + O2 → 2H2O

H2 has a relative molecular mass of 2 (1 + 1). 2H2 have a combined relative molecular mass of 4. O2 has a relative molecular mass of 32. H2O has a relative molecular mass of 18 (1 + 1 + 16). Two molecules of H2O have a combined relative molecular mass of 36. Therefore: 4g of Hydrogen react with 32g of Oxygen to give 36g of water.
Any chemical reaction (as long as the equation is balanced) can be analysed in this way.
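As an illustration of this kind of analysis, the following Python sketch (my own, not from the article; it uses the rounded relative atomic masses quoted above) computes relative molecular masses for the carbon-burning reaction and checks that the total mass of the reactants equals the total mass of the products.

```python
# Rounded relative atomic masses used in the examples above.
RAM = {"H": 1, "C": 12, "N": 14, "O": 16, "Na": 23, "S": 32}

def molecular_mass(atoms):
    """Relative molecular mass from a dict of atom counts, e.g. {'C': 1, 'O': 2}."""
    return sum(RAM[symbol] * count for symbol, count in atoms.items())

# C + O2 -> CO2, expressed as (coefficient, atom counts) pairs.
reactants = [(1, {"C": 1}), (1, {"O": 2})]
products = [(1, {"C": 1, "O": 2})]

mass_in = sum(n * molecular_mass(a) for n, a in reactants)   # 12 + 32 = 44
mass_out = sum(n * molecular_mass(a) for n, a in products)   # 44
print(mass_in, mass_out, mass_in == mass_out)
```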
© KryssTal 2001
Lesson 5: Common SAS Operators
In this lesson, we will explain the basic SAS operators (+-*/) and some simple calculation functions.
1. SAS Operators - Summation, Subtraction, Multiplication and Division
The dataset above contains the weekly salaries for Amy and John. We are going to add up each staff member's salary for the monthly pay cheque. Notice that Amy missed the last week of work, so her Week 4 salary is missing.
There are two methods to add up the numbers:
Method 1: A = B + C (Not Recommended)
Method 2: A = sum (of B, C) (Recommended)
SALARY1 = WEEK1 + WEEK2 + WEEK3 + WEEK4;   /* Method 1: result is missing if any week is missing */
SALARY2 = SUM(OF WEEK1, WEEK2, WEEK3, WEEK4);   /* Method 2: the SUM function skips missing values */
As you can see, the two summation methods return different results when there is a missing value (see Amy's Salary1 and Salary2). Salary1, using Method 1, returns a missing result because the Week 4 salary is missing. In contrast, Salary2 simply skips the missing value and adds up the three non-missing salaries from Week 1 to Week 3.
Method 2 is more commonly used because missing values are handled by the function.
Subtraction, Multiplication and Division
Subtraction, multiplication and division are quite straightforward:
The code below computes the BMI (weight in kilograms divided by the square of height in metres) for each patient:
BMI = WEIGHT / (HEIGHT*HEIGHT);   /* assumes WEIGHT in kilograms and HEIGHT in metres */
DONE! You have learned the basic operators in SAS!
1. GENERAL VIEW OF SPAIN.
The aggregate monarchy of Spain is composed of many distinct provinces, each of which in earlier times formed a separate and independent kingdom; although all are now united by marriage, inheritance, conquest, and other circumstances under one crown, the original distinctions, geographical as well as social, remain almost unaltered. The language, costume, habits, and local character of the natives vary no less than the climate and productions of the soil. Man, following, as it were, the example of the nature by which he is surrounded, has little in common with the inhabitant of the adjoining district; and these differences are increased and perpetuated by the ancient jealousies and inveterate dislikes which petty and contiguous states keep up with such tenacious memory. The general comprehensive term "Spain," which is convenient
Heraldry had, and still has, very specific rules as to how a coat of arms is made up. The fundamental unit of a coat of arms was the achievement. An achievement, in heraldic terms, was the complete display of arms, crests and accessories. An achievement was made up of eight parts, and there were very specific rules as to what colours could be used in a heraldic device.
The eight parts of an achievement were:
- The shield
- The helmet
- The mantling
- The wreath
- The crest
- The supporters
- The coronets
- The mottoes
A shield was considered to be the most important part of a coat of arms. Symbolic of its importance to a family’s coat of arms, a shield could appear by itself without any other part of an achievement. A helmet appeared above the shield and the type of helmet and its position indicated the rank of the owner. A mantling swept round from the top of the helmet and draped round the sides of a shield. It is thought that a mantling was meant to resemble the mantle worn by Crusader knights while in the Middle East to shield them from the sun. The wreath was a piece of twisted silk that covered the joint of the helmet. A crest in a heraldic shield was originally an object that knights used to wear attached to their helmet especially at jousts. A supporter was either a model of an animal or person that appeared to be holding up the shield. Coronets were on the achievements of peers only – dukes, earls, viscount and barons – and were symbolic of their rank. A motto was usually placed at the bottom of a shield within a scroll but occasionally it could be seen above it.
Heraldic colouring was also very specific. A shield was made up of tinctures, metals, colours and furs.
Tinctures were either a metal or a colour. A metal was either gold (or) or silver (argent). Colours were red (gules), blue (azure), black (sable), green (vert) and purple (purpure), while furs consisted of ermine (black ‘spots’ on white), ermines (white spots on black) and vair (alternating blue and silver). A general rule was that a colour should not appear immediately on another colour, nor a metal on another metal.
Shields were also designed on patterns called ordinaries. These were usually some form of band that went across a shield, be it vertically, horizontally or diagonally. It is thought that ordinaries originated from the bands of metal put across a shield to strengthen it for combat. Each style had a name. A chief or fess had a bar that went horizontally across a shield, either at the top (chief) or in the middle (fess). A pale was a bar that ran vertically down a shield. Other patterns were the pall, chevron, pile, cross and saltire. More complicated designs were known as subordinaries. Whereas ordinaries were basic shapes that would be recognised outside of heraldry, patterns such as a fret, flaunches or an inescutcheon would not be.
Whereas knights would have had a helmet above their shield, peers of the realm would have had some form of coronet that denoted their rank. A baron would have had a coronet bearing only silver balls on the pattern. An earl would have had strawberry leaves with silver balls above them; a marquess would have had a strawberry leaf followed by a silver ball followed by another strawberry leaf; while a duke's coronet had a pattern only of strawberry leaves.
Electron-beam lithography (often abbreviated as e-beam lithography) is the practice of scanning a focused beam of electrons to draw custom shapes on a surface covered with an electron-sensitive film called a resist ("exposing"). The electron beam changes the solubility of the resist, enabling selective removal of either the exposed or non-exposed regions of the resist by immersing it in a solvent ("developing"). The purpose, as with photolithography, is to create very small structures in the resist that can subsequently be transferred to the substrate material, often by etching.
The primary advantage of electron-beam lithography is that it can draw custom patterns (direct-write) with sub-10 nm resolution. This form of maskless lithography has high resolution and low throughput, limiting its usage to photomask fabrication, low-volume production of semiconductor devices, and research and development.
Electron-beam lithography systems
Electron-beam lithography systems used in commercial applications are dedicated e-beam writing systems that are very expensive (> US$1M). For research applications, it is very common to convert an electron microscope into an electron-beam lithography system using relatively low-cost accessories (< US$100K). Such converted systems have produced linewidths of ~20 nm since at least 1990, while current dedicated systems have produced linewidths on the order of 10 nm or smaller.
Electron-beam lithography systems can be classified according to both beam shape and beam deflection strategy. Older systems used Gaussian-shaped beams and scanned these beams in a raster fashion. Newer systems use shaped beams, which may be deflected to various positions in the writing field (this is also known as vector scan).
Lower-resolution systems can use thermionic sources, which are usually formed from lanthanum hexaboride. However, systems with higher-resolution requirements need to use field electron emission sources, such as heated W/ZrO2 for lower energy spread and enhanced brightness. Thermal field emission sources are preferred over cold emission sources, in spite of the former's slightly larger beam size, because they offer better stability over typical writing times of several hours.
Both electrostatic and magnetic lenses may be used. However, electrostatic lenses have more aberrations and so are not used for fine focusing. There is no current mechanism to make achromatic electron beam lenses, so extremely narrow dispersions of the electron beam energy are needed for finest focusing.
Stage, stitching and alignment
Typically, electrostatic deflection "lenses" are used for very small beam deflections, while larger beam deflections require electromagnetic scanning. Because of deflection inaccuracy and the finite number of steps in the exposure grid, the writing field is of the order of 100 micrometres to 1 mm. Larger patterns require stage moves. An accurate stage is critical for stitching (tiling writing fields exactly against each other) and pattern overlay (aligning a pattern to a previously made one).
Electron beam write time
The minimum time to expose a given area for a given dose is given by the formula T = D × A / I,
where T is the time to expose the object (which can be divided into an exposure time per step), I is the beam current, D is the dose and A is the area exposed.
For example, assuming an exposure area of 1 cm², a dose of 10⁻³ coulombs/cm², and a beam current of 10⁻⁹ amperes, the resulting minimum write time would be 10⁶ seconds (about 12 days). This minimum write time does not include time for the stage to move back and forth, time for the beam to be blanked (blocked from the wafer during deflection), or time for other possible beam corrections and adjustments in the middle of writing. To cover the 700 cm² surface area of a 300 mm silicon wafer, the minimum write time would extend to 7×10⁸ seconds, about 22 years. This is about 10 million times slower than current optical lithography tools. It is clear that throughput is a serious limitation for electron beam lithography, especially when writing dense patterns over a large area.
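The arithmetic above is easy to reproduce. The short Python sketch below (my own illustration; the function and variable names are arbitrary) simply evaluates T = D·A/I for the figures quoted in this example.

```python
# Minimum e-beam write time T = D * A / I (ignores stage moves, blanking, etc.)
def write_time(dose_c_per_cm2, area_cm2, current_a):
    return dose_c_per_cm2 * area_cm2 / current_a

t_1cm2 = write_time(1e-3, 1.0, 1e-9)      # 1e6 s, about 12 days
t_wafer = write_time(1e-3, 700.0, 1e-9)   # 7e8 s, about 22 years
print(f"{t_1cm2:.0f} s  (~{t_1cm2 / 86400:.0f} days)")
print(f"{t_wafer:.0f} s  (~{t_wafer / (86400 * 365):.0f} years)")
```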
E-beam lithography is not suitable for high-volume manufacturing because of its limited throughput. The smaller field of electron beam writing makes for very slow pattern generation compared with photolithography (the current standard) because more exposure fields must be scanned to form the final pattern area (≤mm2 for electron beam vs. ≥40 mm2 for an optical mask projection scanner). The stage moves in between field scans. The electron beam field is small enough that a rastering or serpentine stage motion is needed to pattern a 26 mm X 33 mm area for example, whereas in a photolithography scanner only a one-dimensional motion of a 26 mm X 2 mm slit field would be required.
As feature sizes shrink, the number of incident electrons at fixed dose also shrinks. As soon as the number drops to around 10,000, shot noise effects become predominant, leading to substantial natural dose variation within a large feature population. With each successive process node, as the feature area is halved, the minimum dose must double to maintain the same noise level. Consequently, the tool throughput would be halved with each successive process node.
[Table omitted: feature diameter (nm) versus the minimum dose for a one-in-a-million 5% dose error (μC/cm²).]
Note: 1 ppm of population is about 5 standard deviations away from the mean dose.
Ref.: SPIE Proc. 8683-36 (2013)
Shot noise is a significant consideration even for mask fabrication. For example, a commercial mask e-beam resist like FEP-171 would use doses less than 10 μC/cm2, whereas this leads to noticeable shot noise for a target CD even on the order of ~200 nm on the mask.
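As a rough check of these numbers, here is a small Python sketch (my own illustration, not from the source text) that counts the electrons delivered to a square feature at a given dose and estimates the 1-sigma dose fluctuation as 1/√N. The specific dose and feature-size values are example inputs only.

```python
import math

E_CHARGE = 1.602e-19  # coulombs per electron

def electrons_per_feature(dose_uc_per_cm2, feature_nm):
    """Mean number of electrons landing on a square feature of the given side."""
    dose_c_per_cm2 = dose_uc_per_cm2 * 1e-6
    area_cm2 = (feature_nm * 1e-7) ** 2   # 1 nm = 1e-7 cm
    return dose_c_per_cm2 * area_cm2 / E_CHARGE

for dose, size in [(10, 200), (10, 40), (80, 40)]:
    n = electrons_per_feature(dose, size)
    print(f"dose {dose} uC/cm2, {size} nm feature: N = {n:.0f}, "
          f"1-sigma dose variation = {100 / math.sqrt(n):.1f}%")
```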
Defects in electron-beam lithography
Despite the high resolution of electron-beam lithography, the generation of defects during electron-beam lithography is often not considered by users. Defects may be classified into two categories: data-related defects, and physical defects.
Data-related defects may be classified further into two sub-categories. Blanking or deflection errors occur when the electron beam is not deflected properly when it is supposed to, while shaping errors occur in variable-shaped beam systems when the wrong shape is projected onto the sample. These errors can originate either from the electron optical control hardware or the input data that was taped out. As might be expected, larger data files are more susceptible to data-related defects.
Physical defects are more varied, and can include sample charging (either negative or positive), backscattering calculation errors, dose errors, fogging (long-range reflection of backscattered electrons), outgassing, contamination, beam drift and particles. Since the write time for electron beam lithography can easily exceed a day, "randomly occurring" defects are more likely to occur. Here again, larger data files can present more opportunities for defects.
Photomask defects largely originate during the electron beam lithography used for pattern definition.
Electron energy deposition in matter
The primary electrons in the incident beam lose energy upon entering a material through inelastic scattering or collisions with other electrons. In such a collision the momentum transfer from the incident electron to an atomic electron can be expressed as dp = 2e²/(bv), where b is the distance of closest approach between the electrons and v is the incident electron velocity. The energy transferred by the collision is given by T = (dp)²/2m = e⁴/(E b²), where m is the electron mass and E is the incident electron energy, given by E = (1/2)mv². By integrating over all values of T between the lowest binding energy, E0, and the incident energy, one obtains the result that the total cross section for collision is inversely proportional to the incident energy E, and proportional to 1/E0 − 1/E. Generally, E >> E0, so the result is essentially inversely proportional to the binding energy.
By using the same integration approach, but over the range 2E0 to E, one obtains by comparing cross-sections that half of the inelastic collisions of the incident electrons produce electrons with kinetic energy greater than E0. These secondary electrons are capable of breaking bonds (with binding energy E0) at some distance away from the original collision. Additionally, they can generate additional, lower energy electrons, resulting in an electron cascade. Hence, it is important to recognize the significant contribution of secondary electrons to the spread of the energy deposition.
In general, for a molecule AB:
- e− + AB → AB− → A + B−
This reaction, also known as "electron attachment" or "dissociative electron attachment" is most likely to occur after the electron has essentially slowed to a halt, since it is easiest to capture at that point. The cross-section for electron attachment is inversely proportional to electron energy at high energies, but approaches a maximum limiting value at zero energy. On the other hand, it is already known that the mean free path at the lowest energies (few to several eV or less, where dissociative attachment is significant) is well over 10 nm, thus limiting the ability to consistently achieve resolution at this scale.
With today's electron optics, electron beam widths can routinely go down to a few nm. This is limited mainly by aberrations and space charge. However, the feature resolution limit is determined not by the beam size but by forward scattering (or effective beam broadening) in the resist, while the pitch resolution limit is determined by secondary electron travel in the resist. This point is driven home by the 2007 demonstration of double patterning using electron beam lithography in the fabrication of 15 nm half-pitch zone plates. Although a 15 nm feature was resolved, a 30 nm pitch was still difficult to do, due to secondary electrons scattering from the adjacent feature. The use of double patterning allowed the spacing between features to be wide enough for the secondary electron scattering to be significantly reduced. The forward scattering can be decreased by using higher energy electrons or thinner resist, but the generation of secondary electrons is inevitable. It is now recognized that for insulating materials like PMMA, low energy electrons can travel quite a long distance (several nm is possible). This is because, below the ionization potential, energy is lost mainly through phonons and polarons, although the latter is basically an ionic lattice effect. Polaron hopping could extend as far as 20 nm. The travel distance of secondary electrons is not a fundamentally derived physical value, but a statistical parameter often determined from many experiments or Monte Carlo simulations down to < 1 eV. This is necessary since the energy distribution of secondary electrons peaks well below 10 eV. Hence, the resolution limit is not usually cited as a well-fixed number as with an optical diffraction-limited system. Repeatability and control at the practical resolution limit often require considerations not related to image formation, e.g., resist development and intermolecular forces.
A study by the College of Nanoscale Science and Engineering (CNSE) presented at the 2013 EUVL Workshop indicated that, as a measure of electron blur, 50-100 eV electrons easily penetrated beyond 10 nm of resist thickness (PMMA or commercial resist); furthermore dielectric breakdown discharge is possible.
In addition to producing secondary electrons, primary electrons from the incident beam with sufficient energy to penetrate the resist can be multiply scattered over large distances from underlying films and/or the substrate. This leads to exposure of areas at a significant distance from the desired exposure location. For thicker resist, as the primary electrons move forward, they have an increasing opportunity to scatter laterally from the beam-defined location. This scattering is called forward scattering. Sometimes the primary electrons are scattered at angles exceeding 90 degrees, i.e., they no longer advance further into the resist. These electrons are called backscattered electrons and have the same effect as long-range flare in optical projection systems. A large enough dose of backscattered electrons can lead to complete exposure of resist over an area much larger than defined by the beam spot.
The smallest features produced by electron-beam lithography have generally been isolated features, as nested features exacerbate the proximity effect, whereby electrons from exposure of an adjacent region spill over into the exposure of the currently written feature, effectively enlarging its image, and reducing its contrast, i.e., difference between maximum and minimum intensity. Hence, nested feature resolution is harder to control. For most resists, it is difficult to go below 25 nm lines and spaces, and a limit of 20 nm lines and spaces has been found. In actuality, though, the range of secondary electron scattering is quite far, sometimes exceeding 100 nm, but becoming very significant below 30 nm.
The proximity effect is also manifest by secondary electrons leaving the top surface of the resist and then returning some tens of nanometers distance away.
Proximity effects (due to electron scattering) can be addressed by solving the inverse problem and calculating the exposure function E(x,y) that leads to a dose distribution as close as possible to the desired dose D(x,y) when convolved by the scattering distribution point spread function PSF(x,y). However, it must be remembered that an error in the applied dose (e.g., from shot noise) would cause the proximity effect correction to fail.
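One common way to attack this inverse problem is an iterative dose correction: repeatedly blur the trial exposure with the point-spread function and nudge it toward the target dose. The following Python sketch is only an illustrative toy (my own construction, using a simple two-Gaussian PSF with arbitrary parameters), not the algorithm of any particular proximity-correction tool.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def psf_blur(exposure, alpha_px=1.0, beta_px=8.0, eta=0.5):
    """Double-Gaussian proximity model: forward scatter (alpha) plus
    backscatter (beta), weighted by eta and normalized."""
    fwd = gaussian_filter(exposure, alpha_px)
    back = gaussian_filter(exposure, beta_px)
    return (fwd + eta * back) / (1.0 + eta)

def correct_dose(target, iterations=50):
    """Iteratively adjust the written exposure E so that PSF * E ~ target."""
    exposure = target.copy()
    for _ in range(iterations):
        exposure += target - psf_blur(exposure)
        np.clip(exposure, 0.0, None, out=exposure)  # doses cannot be negative
    return exposure

# Toy pattern: two nearby lines on a 128 x 128 grid.
target = np.zeros((128, 128))
target[:, 60:63] = 1.0
target[:, 66:69] = 1.0
corrected = correct_dose(target)
print("max applied dose relative to nominal:", corrected.max().round(2))
```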
Since electrons are charged particles, they tend to charge the substrate negatively unless they can quickly gain access to a path to ground. For a high-energy beam incident on a silicon wafer, virtually all the electrons stop in the wafer where they can follow a path to ground. However, for a quartz substrate such as a photomask, the embedded electrons will take a much longer time to move to ground. Often the negative charge acquired by a substrate can be compensated or even exceeded by a positive charge on the surface due to secondary electron emission into the vacuum. The presence of a thin conducting layer above or below the resist is generally of limited use for high energy (50 keV or more) electron beams, since most electrons pass through the layer into the substrate. The charge dissipation layer is generally useful only around or below 10 keV, since the resist is thinner and most of the electrons either stop in the resist or close to the conducting layer. However, they are of limited use due to their high sheet resistance, which can lead to ineffective grounding.
The range of low-energy secondary electrons (the largest component of the free electron population in the resist-substrate system) which can contribute to charging is not a fixed number but can vary from 0 to as high as 50 nm (see section New frontiers in electron beam lithography and extreme ultraviolet lithography). Hence, resist-substrate charging is not repeatable and is difficult to compensate consistently. Negative charging deflects the electron beam away from the charged area while positive charging deflects the electron beam toward the charged area.
Electron-beam resist performance
Due to the scission efficiency generally being an order of magnitude higher than the crosslinking efficiency, most polymers used for positive-tone electron-beam lithography will crosslink (and therefore become negative tone) at doses an order of magnitude larger than the doses used for positive-tone exposure. Such large dose increases may be required to avoid shot noise effects.
A study performed at the Naval Research Laboratory indicated that low-energy (10–50 eV) electrons were able to damage ~30 nm thick PMMA films. The damage was manifest as a loss of material.
- For the popular electron-beam resist ZEP-520, a pitch resolution limit of 60 nm (30 nm lines and spaces), independent of thickness and beam energy, was found.
- A 20 nm resolution had also been demonstrated using a 3 nm 100 keV electron beam and PMMA resist. 20 nm unexposed gaps between exposed lines showed inadvertent exposure by secondary electrons.
- Hydrogen silsesquioxane (HSQ) is a negative tone resist that is capable of forming isolated 2-nm-wide lines and 10 nm periodic dot arrays (10 nm pitch) in very thin layers. HSQ itself is similar to porous, hydrogenated SiO2. It can serve as a mask for etching silicon, but not for silicon dioxide or other similar dielectrics.
New frontiers in electron-beam lithography
To get around the secondary electron generation, it will be imperative to use low-energy electrons as the primary radiation to expose resist. Ideally, these electrons should have energies on the order of not much more than several eV in order to expose the resist without generating any secondary electrons, since they will not have sufficient excess energy. Such exposure has been demonstrated using a scanning tunneling microscope as the electron beam source. The data suggest that electrons with energies as low as 12 eV can penetrate 50 nm thick polymer resist. The drawback to using low energy electrons is that it is hard to prevent spreading of the electron beam in the resist. Low energy electron optical systems are also hard to design for high resolution. Coulomb inter-electron repulsion always becomes more severe for lower electron energy.
Another alternative in electron-beam lithography is to use extremely high electron energies (at least 100 keV) to essentially "drill" or sputter the material. This phenomenon has been observed frequently in transmission electron microscopy. However, this is a very inefficient process, due to the inefficient transfer of momentum from the electron beam to the material. As a result, it is a slow process, requiring much longer exposure times than conventional electron beam lithography. Also high energy beams always bring up the concern of substrate damage.
Interference lithography using electron beams is another possible path for patterning arrays with nanometer-scale periods. A key advantage of using electrons over photons in interferometry is the much shorter wavelength for the same energy.
Despite the various intricacies and subtleties of electron beam lithography at different energies, it remains the most practical way to concentrate the most energy into the smallest area.
There has been significant interest in the development of multiple electron beam approaches to lithography in order to increase throughput. This work has been supported by SEMATECH and start-up companies such as Multibeam Corporation, Mapper and IMS. However, the degree of parallelism required to be competitive would need to be very high (at least 10 million, as estimated above); this is far in excess of most scheduled demonstrations. A key difficulty is that the total supplied beam current needs to be multiplied by the number of parallel beams( e.g., 10 million), which dramatically increases cost of ownership. Also, the field size does not change, which means increasing the number of beams increases the strength of Coulomb interaction between beams.
- McCord, M. A.; M. J. Rooks (2000). "2". SPIE Handbook of Microlithography, Micromachining and Microfabrication.
- Parker, N. W.; et al. (2000). "High-throughput NGL electron-beam direct-write lithography system". Proc. SPIE. 3997: 713. doi:10.1117/12.390042.
- Faster and lower cost for 65 nm and 45 nm photomask patterning.
- M. L. Kempsell et al., J. Microlith/Nanolith. MEMS MOEMS, vol. 8, 043001(2009).
- H. Sunaoshi et al., Proc. SPIE vol. 6283, 628306 (2006).
- K. Ugajin et al., Proc. SPIE vol. 6607, 66070A (2007).
- F. T. Chen et al., Proc. SPIE vol. 8683, 868311 (2013).
- L. Feldman; J. Mayer (1986). Fundamentals of Surface and Thin Film Analysis. 54. pp. 130–133. ISBN 0-444-00989-2.
- Euronanochem. Retrieved on 2011-08-27.
- Stoffels, E; Stoffels, W W; Kroesen, G M W (2001). "Plasma chemistry and surface processes of negative ions". Plasma Sources Science and Technology. 10 (2): 311. Bibcode:2001PSST...10..311S. doi:10.1088/0963-0252/10/2/321.
- Seah, M. P.; Dench, W. A. (1979). "Quantitative electron spectroscopy of surfaces: A standard data base for electron inelastic mean free paths in solids". Surface and Interface Analysis. 1: 2. doi:10.1002/sia.740010103.
- Tanuma, S.; Powell, C. J.; Penn, D. R. (1994). "Calculations of electron inelastic mean free paths. V. Data for 14 organic compounds over the 50–2000 eV range". Surface and Interface Analysis. 21 (3): 165. doi:10.1002/sia.740210302.
- Broers, A. N.; et al. (1996). "Electron beam lithography—Resolution limits". Microelectronic Engineering. 32: 131–142. doi:10.1016/0167-9317(95)00368-1.
- K. W. Lee (2009). "Secondary electron generation in electron-beam-irradiated solids:resolution limits to nanolithography". J. Kor. Phys. Soc. 55 (4): 1720. Bibcode:2009JKPS...55.1720L. doi:10.3938/jkps.55.1720.
- SPIE Newsroom: Double exposure makes dense high-resolution diffractive optics. Spie.org (2009-11-03). Retrieved on 2011-08-27.
- Dapor, M.; et al. (2010). "Monte Carlo modeling in the low-energy domain of the secondary electron emission of polymethylmethacrylate for critical-dimension scanning electron microscopy". J. Micro/Nanolith. MEMS MOEMS. 9: 023001. doi:10.1117/1.3373517.
- P. T. Henderson; et al. (1999). "Long-distance charge transport in duplex DNA: The phonon-assisted polaron-like hopping mechanism". Proc. Natl. Acad. Sci. U.S.A. 96 (15): 8353–8358. Bibcode:1999PNAS...96.8353H. doi:10.1073/pnas.96.15.8353. PMID 10411879.
- H. Seiler (1983). "Secondary electron emission in the scanning electron microscope". J. Appl. Phys. 54 (11): R1–R18. Bibcode:1983JAP....54R...1S. doi:10.1063/1.332840.
- G. Denbeaux et al., 2013 International Workshop on EUV Lithography.
- J. A. Liddle; et al. (2003). "Resist Requirements and Limitations for Nanoscale Electron-Beam Patterning". Mater. Res. Soc. Symp. Proc. 739 (19): 19–30.
- Ivin, V (2002). "The inclusion of secondary electrons and Bremsstrahlung X-rays in an electron beam resist model". Microelectronic Engineering. 61–62: 343. doi:10.1016/S0167-9317(02)00531-2.
- Yamazaki, Kenji; Kurihara, Kenji; Yamaguchi, Toru; Namatsu, Hideo; Nagase, Masao (1997). "Novel Proximity Effect Including Pattern-Dependent Resist Development in Electron Beam Nanolithography". Japanese Journal of Applied Physics. 36: 7552. Bibcode:1997JaJAP..36.7552Y. doi:10.1143/JJAP.36.7552.
- Renoud, R; Attard, C; Ganachaud, J-P; Bartholome, S; Dubus, A (1998). "Influence on the secondary electron yield of the space charge induced in an insulating target by an electron beam". Journal of Physics: Condensed Matter. 10 (26): 5821. Bibcode:1998JPCM...10.5821R. doi:10.1088/0953-8984/10/26/010.
- J. N. Helbert et al., Macromolecules, vol. 11, 1104 (1978).
- M. J. Wieland et al., Proc. SPIE vol. 7271, 72710O (2009)
- F. T. Chen et al., Proc. SPIE vol. 8326, 83262L (2012)
- P. Kruit et al., J. Vac. Sci. Tech. B 22, 2948 (2004).
- Bermudez, V. M. (1999). "Low-energy electron-beam effects on poly(methyl methacrylate) resist films". Journal of Vacuum Science and Technology B. 17 (6): 2512. Bibcode:1999JVSTB..17.2512B. doi:10.1116/1.591134.
- H. Yang et al., Proceedings of the 1st IEEE Intl. Conf. on Nano/Micro Engineered and Molecular Systems, pp. 391–394 (2006).
- Cumming, D. R. S.; Thoms, S.; Beaumont, S. P.; Weaver, J. M. R. (1996). "Fabrication of 3 nm wires using 100 keV electron beam lithography and poly(methyl methacrylate) resist". Applied Physics Letters. 68 (3): 322. Bibcode:1996ApPhL..68..322C. doi:10.1063/1.116073.
- Manfrinato, Vitor R.; Zhang, Lihua; Su, Dong; Duan, Huigao; Hobbs, Richard G.; Stach, Eric A.; Berggren, Karl K. (2013). "Resolution limits of electron-beam lithography toward the atomic scale". Nano Lett. 13 (4): 1555–1558. doi:10.1021/nl304715p.
- C. R. K. Marrian (1992). "Electron-beam lithography with the scanning tunneling microscope". Journal of Vacuum Science and Technology. 10 (B): 2877–2881. Bibcode:1992JVSTB..10.2877M. doi:10.1116/1.585978.
- T. M. Mayer; et al. (1996). "Field emission characteristics of the scanning tunneling microscope for nanolithography". Journal of Vacuum Science and Technology. 14 (B): 2438–2444. Bibcode:1996JVSTB..14.2438M. doi:10.1116/1.588751.
- L. S. Hordon; et al. (1993). "Limits of low-energy electron optics". Journal of Vacuum Science and Technology. 11 (B): 2299–2303. Bibcode:1993JVSTB..11.2299H. doi:10.1116/1.586894.
- Egerton, R. F.; et al. (2004). "Radiation damage in the TEM and SEM". Micron. 35 (6): 399–409. doi:10.1016/j.micron.2004.02.003. PMID 15120123.
- Multibeam Corporation. Multibeamcorp.com (2011-03-04). Retrieved on 2011-08-27.
- Mapper Lithography. Mapper Lithography (2010-01-18). Retrieved on 2011-08-27.
- IMS Nanofabrications AG. IMS Nanofabrication AG (2011-12-07). Retrieved on 2012-01-15.
- M. L. Yu et al., JVST B 23, 2589 (2005).
GLOSSARY ADAPTED FROM WIKIPEDIA
Food preservation is the process of treating and handling foods in such a way as to stop or greatly
slow down spoilage to prevent food-borne illness while maintaining nutritional value, density, texture and flavor.
Preservation involves preventing the growth of bacteria, fungi, and other micro-organisms as well as
retarding the oxidation of fats that cause rancidity. It also includes processes to inhibit natural aging and discoloration
that occurs during food preparation, such as apples browning when sliced. Some preservation methods require the
food to be sealed after treatment to prevent re-contamination with microbes; others, such as drying, allow food to be stored
without any special containment for long periods.
Preservation processes include:
1) Heating to kill or denature organisms (e.g. boiling)
2) Oxidation (e.g. use of sulphur dioxide)
3) Toxic inhibition (e.g. smoking, use of CO2, vinegar, alcohol etc.)
4) Dehydration (e.g. drying)
5) Osmotic inhibition (e.g. use of syrups)
6) Low temperature inactivation (e.g. refrigeration, freezing)
7) Combinations of these methods
Common methods of applying these processes include drying, spray drying, freeze drying, refrigeration,
freezing, vacuum-packing, canning, preserving in syrup, sugar crystallization, food irradiation, adding preservatives or inert
gases such as carbon dioxide. Other methods that preserve food, and add flavor, include pickling, salting, smoking,
preserving in alcohol, sugar crystallization and curing.
Drying: One of the oldest food preservation methods is drying, which reduces water activity sufficiently to delay or prevent bacterial growth.
Most types of meat can be dried. In addition, many fruits can be dried. Drying is also the normal means of preservation
for cereal grains such as wheat, maize, oats, barley, rice, millet, and rye.
Smoking: Meat, fish,
and some other foods may be both preserved and flavored with smoke, which is typically infused in a smokehouse. The
combination of heat to dry the food without cooking it, and the addition of the aromatic hydrocarbons from the smoke, preserves the food.
Refrigeration & Freezing:
Refrigeration and freezing are two of the most commonly used processes commercially and domestically for preserving a very
wide range of foodstuffs including prepared foodstuffs, which would not have required freezing in their unprepared state.
For example, potato waffles are stored in the freezer, but potatoes themselves require only a cool dark place to ensure many
months' storage. Cold stores provide large volume, long-term storage for strategic food stocks held in case of national
emergency in many countries.
Vacuum Packing: Vacuum-packing
stores food in a vacuum environment, usually in an airtight bag or bottle. The vacuum environment strips bacteria of
oxygen needed for survival, hence preventing the food from spoiling. Home vacuum packing is available in bags, canisters,
Mason jars, and bottles using the FoodSaver Home Vacuum Packing System.
Salt: Salting, or
curing, draws moisture from the meat through a process of osmosis. Meat is cured with salt or sugar, or a combination
of the two. Nitrates and nitrites are also often used to cure meat.
Sugar: Sugar is used
to preserve fruits, either in syrup or in crystallized form where the preserved material is cooked in sugar to the point of
crystallization and the resultant product is then stored dry. This method is used for the skins of citrus fruit (candied
peel), angelica and ginger. A modification of this process produces glacÚ fruit such as glacÚ cherries where the fruit
is preserved in sugar but is then extracted from the syrup and sold, the preservation being maintained by the sugar content
of the fruit and the superficial coating of syrup. The use of sugar is often combined with alcohol for preservation
of luxury products such as fruit in brandy or other spirits.
Pickling: Pickling is a method of preserving food by placing it or cooking it in a substance that inhibits or kills bacteria and other micro-organisms.
This material must also be fit for human consumption. Typical pickling agents include brine (high in salt), vinegar,
ethanol, and vegetable oil, especially olive oil but also many other oils. Most pickling processes also involve heating
or boiling so that the food being preserved becomes saturated with the pickling agent. Frequently pickled items include
vegetables such as cabbage, peppers and some animal products such as corned beef and eggs. EDTA may also be added to
chelate calcium. Calcium is essential for bacterial growth.
Lye: Sodium hydroxide
(lye) makes food too alkaline for bacterial growth. Lye will saponify fats in the food, which will change its flavor.
Canning and Bottling: Canning
involves cooking fruits or vegetables, sealing them in sterile cans or jars, and boiling the containers to kill or weaken
any remaining bacteria as a form of pasteurization. Various foods have varying degrees of natural protection against
spoilage and may require that the final step occur in a pressure cooker. High-acid fruits like strawberries require
no preservatives to can and only a short boiling cycle, whereas marginal fruits such as tomatoes require longer boiling and
addition of other acidic elements. Many vegetables require pressure canning. Food preserved by canning or bottling
is at immediate risk of spoilage once the can or bottle has been opened. Lack of quality control in the canning process
may allow ingress of water or micro-organisms. Most such failures are rapidly detected as decomposition within the can
causing gas production and the swelling or bursting of the can. However, there have been examples of poor manufacture
and poor hygiene allowing contamination of canned food by the obligate anaerobe, Clostridium botulinum, which produces an
acute toxin within the food leading to severe illness or death. This organism produces no gas or obvious taste and remains
undetected by taste or smell. Food contaminated in this way has included corned beef and tuna.
Jellying: Food may be preserved
by cooking in a material that solidifies to a gel. Such materials include gelatin, agar, maize flour and arrowroot flour.
Some foods naturally form a protein gel when cooked.
Jugging: Meat can be preserved
by jugging, the process of stewing the meat in a covered earthenware jug or casserole. The animal to be jugged is usually
cut into pieces, placed into a tightly sealed jug with brine or gravy, and stewed. Red wine and/or the animal's own
blood is sometimes added to the cooking liquid. Jugging was a popular method of preserving meat up until the middle of the 20th century.
Irradiation: Irradiation is the treatment of food with X-rays or gamma radiation to kill bacteria and mold. It may be combined with vacuum packing
to seal out microbes. As with sunlight, exposure to the intense light from the lamps used for food irradiation is harmful
to human skin. As with sunlight, the light from the lamps used for food irradiation does not make the food "radioactive."
Food irradiation is effective against a wide variety of pathogens including bacteria, fungi, viruses, and parasites.
But the implications of irradiation are not fully understood, and the use of the technology is limited. Irradiation
of potatoes, strawberries, and meat is common in many countries where refrigerated facilities and trucks are not available. In 2002,
the Food and Drug Administration permitted irradiation of meat and poultry to reduce the spread of E. coli and salmonella.
In the US and most of Europe, irradiation of spices
is common, as the only alternative (treatment with gas) is potentially carcinogenic. The process is called "cold pasteurization"
because it is feared that the label "irradiation" would hurt sales. Foods may also carry labels saying "Picowaved For
Your Protection" as food processors may not want to openly label their foods as being irradiated. One should note that although
irradiation is effective at killing bacteria, fungi and other pathogens, there is still a danger that the food may contain
some of their toxins.
Modified Atmosphere: Modified atmosphere is a way to preserve food by operating on the atmosphere around it. Salad crops, which are notoriously difficult
to preserve, are now being packaged in sealed bags with an atmosphere modified to reduce the oxygen (O2) concentration and
increase the carbon dioxide (CO2) concentration. There is concern that although salad vegetables retain their appearance
and texture in such conditions, this method of preservation may not retain nutrients, especially vitamins.
Grains may be preserved using carbon dioxide.
A block of dry ice is placed in the bottom and the can is filled with grain. The can is then "burped" of excess gas.
The carbon dioxide from the sublimation of the dry ice prevents insects, mold, and oxidation from damaging the grain.
Grain stored in this way can remain edible for five years.
Nitrogen Gas: Nitrogen (N2)
at concentrations of 98% or higher is also used effectively to kill insects in grain through hypoxia. However, carbon
dioxide has an advantage in this respect as it kills organisms through both hypoxia and hypercarbia, requiring concentrations
of only 80%, or so. This makes carbon dioxide preferable for fumigation in situations where a hermetic seal is not maintainable.
Cellars: Many root
vegetables are very resistant to spoilage and require no preservation other than storage in cool dark conditions, usually in a cellar.
Some foods, such as traditional cheeses, keep for a long time without any special procedures. The preservation occurs due
to the presence in very high numbers of beneficial bacteria or fungi, which use their own biological defenses to prevent
other organisms from gaining a foothold.
Variously called Tribal Peoples, First Peoples, or Native Peoples, Indigenous Peoples constitute about 5% of the world's population, yet account for about 15% of the world's poor.
There are approximately 370 million Indigenous people in the world, belonging to 5,000 different groups, in 90 countries worldwide. Indigenous people live in every region of the world, but about 70% of them live in Asia.
There is no universally accepted definition for "Indigenous," though there are characteristics that tend to be common among Indigenous Peoples:
- They tend to have small populations relative to the dominant culture of their country. However, in Bolivia and Guatemala Indigenous people make up more than half the population.
- They usually have (or had) their own language. Today, Indigenous people speak some 4,000 languages.
- They have distinctive cultural traditions that are still practiced.
- They have (or had) their own land and territory, to which they are tied in myriad ways.
- They self-identify as Indigenous.
- Examples of Indigenous Peoples include the Inuit of the Arctic, Native Americans, hunter-gatherers in the Amazon, traditional pastoralists like the Maasai in East Africa, and tribal peoples in the Philippines.
Indigenous Peoples and the Environment
Indigenous Peoples are often thought of as the primary stewards of the planet's biological resources. Their ways of life and cosmovisions have contributed to the protection of the natural environment on which they depend. It is no coincidence that when the World Wildlife Fund listed the top 200 areas with the highest and most threatened biodiversity, it found that 95 percent are on Indigenous territories.
Indigenous communities and the environments they maintain are increasingly under assault from mining, oil, dam building, logging, and agro-industrial projects.
Cancer is an abnormal growth of cells. Cancer cells rapidly reproduce despite restriction of space, nutrients, or signals sent from the body to stop reproduction. Cancer cells are often shaped differently from healthy cells, do not function properly, and can spread to many areas of the body. Tumors, abnormal growths of tissue, are clusters of cells that are capable of growing and dividing uncontrollably; their growth is not regulated.
Oncology is the branch of medicine concerned with the diagnosis and treatment of cancer.
Tumors can be benign (noncancerous) or malignant (cancerous). Benign tumors tend to grow slowly and do not spread. Malignant tumors can grow rapidly, invade and destroy nearby normal tissues, and spread throughout the body.
Cancer is malignant because it can be "locally invasive" and "metastatic."
Locally invasive - the tumor can invade the tissues surrounding it by sending out "fingers" of cancerous cells into the normal tissue.
Metastatic - the tumor can send cells into other tissues in the body, which may be distant from the original tumor.
The original tumor is called the "primary tumor." Its cells, which can break off and travel through the body, can begin the formation of new tumors in other organs. These new tumors are referred to as "secondary tumors." The cancerous cells travel through the blood (circulatory system) or lymphatic system to form secondary tumors. The lymphatic system is a series of small vessels that collect waste from cells, carrying it into larger vessels, and finally into lymph nodes. Lymph fluid eventually drains into the bloodstream.
Cancer is named after the part of the body where it originated. When cancer spreads, it keeps this same name. For example, if kidney cancer spreads to the lungs, it is still kidney cancer, not lung cancer. (The cancer in the lung would be an example of a secondary tumor.) Staging is the process of determining whether cancer has spread and, if so, how far. There is more than one system used for staging cancer, and the definition of each stage will depend on the type of cancer.
Cancer is not just one disease but rather a group of diseases, all of which cause cells in the body to change and grow out of control. Cancers are classified either according to the kind of fluid or tissue from which they originate, or according to the location in the body where they first developed. In addition, some cancers are of mixed types. The following five broad categories indicate the tissue and blood classifications of cancer:
A carcinoma is a cancer found in body tissue known as epithelial tissue, which covers or lines surfaces of organs, glands, or body structures. For example, a cancer of the lining of the stomach is called a carcinoma. Many carcinomas affect organs or glands that are involved with secretion, such as breasts that produce milk. Carcinomas account for 80 to 90 percent of all cancer cases.
A sarcoma is a malignant tumor growing from connective tissues, such as cartilage, fat, muscle, tendons, and bones. The most common sarcoma, a tumor on the bone, usually occurs in young adults. Examples of sarcoma include osteosarcoma (bone) and chondrosarcoma (cartilage).
Lymphoma refers to a cancer that originates in the nodes or glands of the lymphatic system. The lymphatic system produces white blood cells and cleans body fluids. Some lymphomas start in lymph tissue in organs such as the brain or stomach. Lymphomas are classified into two categories: Hodgkin lymphoma and non-Hodgkin lymphoma.
Leukemia, also known as blood cancer, is a cancer of the bone marrow that keeps the marrow from producing normal red and white blood cells and platelets. White blood cells are needed to resist infection. Red blood cells are needed to prevent anemia. Platelets keep the body from easily bruising and bleeding. Examples of leukemia include acute myelogenous leukemia, chronic myelogenous leukemia, acute lymphocytic leukemia, and chronic lymphocytic leukemia. The terms myelogenous and lymphocytic indicate the type of cells that are involved.
Myeloma grows in the plasma cells of bone marrow. In some cases, the myeloma cells collect in one bone and form a single tumor, called a plasmacytoma. However, in other cases, the myeloma cells collect in many bones, forming many bone tumors. This is called multiple myeloma.
There is no one single cause for cancer. Scientists believe that it is the interaction of many factors together that produces cancer. The factors involved may be genetic, environmental, or lifestyle characteristics of the individual.
As mentioned, some cancers, particularly in adults, have been associated with certain risk factors. A risk factor is anything that may increase a person's chance of developing a disease. A risk factor does not necessarily cause the disease, but it may make the body less resistant to it. People who have an increased risk of developing cancer can help to protect themselves by scheduling regular screenings and check-ups with their physician and avoiding certain risk factors. Cancer treatment has been proven to be more effective when the cancer is detected early. The following risk factors and mechanisms have been proposed as contributing to the development of cancer:
Lifestyle factors such as smoking, a high-fat diet, and exposure to ultraviolet light (UV radiation from the sun) may be risk factors for some adult cancers. Most children with cancer, however, are too young to have been exposed to these lifestyle factors for any extended time.
Family history, inheritance, and genetics may play an important role in some adult and childhood cancers. It is possible for cancer of varying forms to be present more than once in a family. Some gene alterations are inherited. However, this does not necessarily mean that the person will develop cancer. It indicates that the chance of developing cancer increases. It is unknown in these circumstances if the disease is caused by a genetic mutation, other factors, or simply coincidence.
Exposures to certain viruses, such as the human papillomavirus (HPV) and human immunodeficiency virus (HIV; the virus that causes acquired immune deficiency syndrome, or AIDS), have been linked to an increased risk of developing certain types of cancers. Possibly, the virus alters a cell in some way. That cell then reproduces an altered cell and, eventually, these alterations become a cancer cell that produces more cancer cells. Cancer is not contagious and a person cannot contract cancer from another person who has the disease.
Environmental exposures have been linked to some cancers. For example, people who have certain jobs (such as painters, farmers, construction workers, and those in the chemical industry) seem to have an increased risk of some cancers, likely due to regular exposure to certain chemicals. Other exposures may occur in the home or elsewhere, such as radon (a radioactive gas) in some homes.
The discovery of certain types of genes that contribute to cancer has been an extremely important development for cancer research. Virtually all cancers are observed to have some type of genetic alteration. A small percentage (5 to 10 percent) of these alterations are inherited, while the rest are sporadic, which means they occur by chance or occur from environmental exposures (usually over many years). There are three main types of genes that can affect cell growth, and are altered (mutated) in certain types of cancers, including the following:
Oncogenes
These genes regulate the normal growth of cells, causing them to grow. Scientists commonly describe oncogenes as similar to a cancer "switch" that most people have in their bodies. What "flips the switch" to make these oncogenes suddenly allow abnormal cancer cells to begin to grow is unknown.
Tumor suppressor genes
These genes are able to recognize abnormal growth and reproduction of damaged cells, or cancer cells, and can interrupt their reproduction until the defect is corrected. If the tumor suppressor genes are mutated, however, and they do not function properly, tumor growth may occur.
Mismatch repair genes
These genes help recognize errors when DNA is copied to make a new cell. If the DNA does not "match" perfectly, these genes repair the mismatch and correct the error. If these genes are not working properly, however, errors in DNA can be transmitted to new cells, causing them to be damaged.
Usually, the number of cells in any of our body tissues is tightly controlled so that new cells are made for normal growth and development, as well as to replace dying cells. Ultimately, cancer is a loss of this balance due to genetic alterations that "tip the balance" in favor of excessive cell growth.
Diagnosis, treatment, and prognosis for childhood cancers differ from those for adult cancers. The main differences are the survival rate and the cause of the cancer. The five-year survival rate for childhood cancer is about 80 percent, while in adult cancers the five-year survival rate is 68 percent. This difference is thought to be because childhood cancer is more responsive to therapy, and a child can tolerate more aggressive therapy.
Childhood cancers often occur or begin in the stem cells, which are simple cells capable of producing other types of specialized cells that the body needs. A sporadic (occurs by chance) cell change or mutation is usually what causes childhood cancer. In adults, the type of cell that becomes cancerous is usually an "epithelial" cell, which is one of the cells that line the body cavity, including the surfaces of organs, glands, or body structures, and cover the body surface. Cancer in adults usually occurs from environmental exposures to these cells over time. Adult cancers are sometimes referred to as "acquired" for this reason.
Only a licensed professional electrician is qualified to install or repair the electrical system inside a home. It’s a complicated system that consists of a maze of electric circuits.
One Loop or Two?
An electric circuit consists of at least one closed loop through which electric current can flow. Every circuit has a voltage source such as a battery and a conductor such as metal wire. A circuit may have other parts as well, such as lights and switches. In addition, a circuit may consist of a single loop or of two or more loops.
A circuit that consists of one loop is called a series circuit. You can see a simple series circuit below. If a series circuit is interrupted at any point in its single loop, no current can flow through the circuit and no devices in the circuit will work. In the series circuit below, if one light bulb burns out, the other light bulb won’t work because it won’t receive any current. Series circuits are commonly used in flashlights.
Q: If one light bulb burns out in this series circuit, how can you tell which bulb it is?
A: It may not be obvious, because neither bulb will light if one is burned out. You can tell which one it is only by replacing first one bulb and then the other to see which replacement results in both bulbs lighting up.
A circuit that has two or more loops is called a parallel circuit. A simple parallel circuit is sketched below. If one loop of a parallel circuit is interrupted, current can still flow through the other loops. In the parallel circuit below, if one light bulb burns out, the other light bulb will still work because current can bypass the burned-out bulb. The wiring in a house consists of parallel circuits.
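To make the contrast concrete, here is a minimal sketch, not part of the original lesson, that models a burned-out bulb as a break (infinite resistance) and compares the two circuit types. The 6-volt battery and 10-ohm bulbs are hypothetical values chosen only for illustration.

```python
# A minimal sketch (not part of the original lesson) comparing how a series
# circuit and a parallel circuit respond when one bulb burns out. The 6-volt
# battery and 10-ohm bulbs are hypothetical values chosen for illustration.

def series_current(voltage, resistances):
    """Current in a single-loop (series) circuit; zero if any element is broken."""
    if any(r == float("inf") for r in resistances):  # a burned-out bulb breaks the only loop
        return 0.0
    return voltage / sum(resistances)

def parallel_currents(voltage, resistances):
    """Current in each loop of a parallel circuit; only the broken loop carries no current."""
    return [0.0 if r == float("inf") else voltage / r for r in resistances]

V = 6.0                                  # volts (hypothetical battery)
good, burned_out = 10.0, float("inf")    # ohms; infinite resistance models a burned-out bulb

print(series_current(V, [good, burned_out]))     # 0.0 -> neither bulb lights
print(parallel_currents(V, [good, burned_out]))  # [0.6, 0.0] -> the good bulb still lights
```

The series loop carries no current at all once it is broken, while in the parallel circuit only the burned-out loop goes dark, which is exactly why household wiring uses parallel circuits.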
- An electric circuit consists of one or more closed loops through which current can flow. It has a voltage source and a conductor and may have other devices such as lights and switches.
- A circuit that consists of one loop is called a series circuit. If its single loop is interrupted at any point, no current can flow through the circuit.
- A circuit that consists of two or more loops is called a parallel circuit. If one loop of a parallel circuit is interrupted, current can still flow through the other loops.
By stating that the standards are “a focus on results rather than means,” the authors of the Common Core handed the important work of implementing the standards to “teachers, curriculum developers, and states” (Common Core State Standards, p.4). This hand off has left many educators feeling apprehensive and asking questions, such as “What does a good Common Core aligned lesson look like?” and “What do I need to be thinking about as I make long- and short-term plans for instruction?” People are wondering, how are we going to achieve the “results” the Common Core sets forth?
Because implementation is an enormous and serious task, we are going to spend the next few days sharing some of the ideas, techniques and tools that we have developed to facilitate the implementation process. To launch this series, we offer you the following three qualities, which happen to be three C’s, that we think characterize great teaching as well as great Common Core aligned instruction.
1. Critical Thinking
With words like “analyze,” “evaluate” and “integrate” in abundance in the standards, planning instruction where students engage and think critically about whatever text they are “reading closely” is paramount. By virtue of selecting these words, the authors of the Common Core seemed intent on communicating that children not only become better readers, writers, speakers and listeners, but better thinkers, as well. When planning for Common Core aligned instruction, ask yourself these questions related to critical thinking:
- Who’s doing the work?
- How engaged are students in their learning?
- How do I know students are thinking deeply about their work?
2. Communication
If idea development is the outcome of “analyzing,” “evaluating,” and “integrating,” then it stands to reason that the next step is to communicate these ideas to others. Communication stands as a central theme of the Common Core State Standards, as evidenced throughout the ten standards for writing and twelve standards for speaking and listening and language development. When thinking about planning Common Core aligned instruction, ask yourself these questions related to communication:
- How often do students write?
- How much do students talk?
- How are students accountable for the conversations they have with one another?
3. Collaboration
Great thinking does not usually happen in a vacuum. In most cases, it is the result of the synergy that happens after listening to, talking about, and reading the ideas and work of others. Great thinking and learning rely on collaboration, and when we imagine powerful instruction, we always consider ways in which we can invite students to share their thinking and build on the ideas of others. When thinking about planning Common Core aligned instruction, ask yourself these questions about student collaboration:
- How often do students work with others to develop, expand, and/or share ideas?
- How do students use what they learn from others to expand their own thinking?
- How are students held accountable during collaboration?
YOUR NAME: ___________________________________
Circle the things that would make you sick if you ate them.
1: Why should you wash your hands before you eat anything?
2: Can you see germs?
3: See if you can use a microscope to view germs.
Educating children about germs is important. Sometimes children don't receive this instruction at home and it is left to teachers; at other times parents assume it was covered at school and find they need to teach it themselves. Either way, the resource provided here is suitable for both teachers and parents. With the growing demand for ready-to-eat foods, fast food and dining out, it is especially important to teach young ones about germs. Another activity you may like to complete is Healthy Art, designed for lower primary school aged students but very easily adapted for Kindergarten.
In a tiny village just a few kilometers outside of Dakar, farmers struggle to get by on the equivalent of $2 a day. They live off the milk of their cows, sell the wool of their sheep at local markets, and put their children to work tending the fields. Yet none of this is enough to raise them out of poverty. It’s like filling a leaking bucket with water: No matter how much effort they put in, they never succeed in making enough to meet their daily needs.
Now, for the first time, scientists have found a way to determine the root causes of this “poverty trap”: Disease, whether of humans, animals, or crops, tends to rob the world’s poorest people of their livelihood, keeping them destitute regardless of how hard they work or how much economic aid they get. But the study also suggests possible solutions.
The work provides important insights and implications for future interventions, says Chris Desmond, an expert on social development at the Human Sciences Research Council in Dalbridge, South Africa, who was not involved in the research. “Policymakers need to look at the public health situation, the access to primary health care, the condition of biological pests in the environment,” he says. “They need to look at all those things before they can decide what type of intervention to do.”
To conduct the research, scientists led by Calistus Ngonghala, a mathematician at the University of Florida in Gainesville, collected both economic and disease data from 83 of the most and least developed countries. The data included annual income per person and the impact of diseases in terms of financial cost, disease incidence, and mortality, which vary dramatically around the world. For example, the caterpillar of the armyworm moth destroys crops in places like Brazil and Zimbabwe, but can’t survive in places with temperate climates like Romania. Similarly, human diseases like malaria and dengue fever abound in places like Kenya and Cambodia, where the tropical climate favors their spread, and these countries also happen to offer limited access to health care. The researchers used these and related data to “train” mathematical models to determine how economic and disease factors, as well as ecological factors such as the growth rate of fish populations and other natural resources, affected poverty.
The models show that poor people who live in areas with limited human, animal, and crop disease might be able to lift themselves out of poverty either through their own means or with a bit of economic assistance, such as money to buy more crops and cattle. But in places of high disease and limited means of combating it, people could be stuck in poverty, no matter how much economic aid they receive, the team reports this month in Nature Ecology & Evolution.
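The trap dynamic the models describe can be illustrated with a toy simulation. The sketch below is not the authors' model; its parameter values, subsistence threshold, and functional forms are invented purely to show how a heavy disease burden can keep capital from accumulating even when two households start from the same point.

```python
# A toy, hypothetical illustration of a poverty trap, inspired by (but NOT taken
# from) the coupled economic-disease models described above. All parameter values
# and functional forms are invented for illustration only.

def simulate(capital, disease_burden, years=50):
    """Iterate a simple feedback loop: health limits income, income (after
    subsistence and illness costs) rebuilds capital, and capital depreciates."""
    for _ in range(years):
        health = 1.0 - disease_burden            # fraction of labor capacity available
        income = health * capital ** 0.5         # diminishing returns to capital
        illness_costs = 0.8 * disease_burden     # money lost to treatment and lost workdays
        savings = max(income - 1.0 - illness_costs, 0.0)  # 1.0 = subsistence needs
        capital = 0.95 * capital + savings       # 5% depreciation plus reinvestment
    return capital

print(round(simulate(capital=4.0, disease_burden=0.1), 1))  # low disease: capital grows
print(round(simulate(capital=4.0, disease_burden=0.5), 1))  # high disease: capital shrinks toward zero
```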
“If you’re a subsistence farmer, infectious diseases not only affect your health, they also affect your earning, because you depend on your physical labor to get an income,” Ngonghala says. “We were surprised when we realized that in some instances economic aid is not going to help at all.”
Considering that more than 10% of the world's population, or about 800 million people, lives in extreme poverty, the study suggests that most of them will never escape it unless issues beyond mere income are addressed.
One effective way to break poverty traps may be structural changes such as increasing access to health care by reducing health care costs, and preventing disease transmission through vaccine coverage. Once people are able to get well and safeguard their crops and livestock, they also might be able to dig themselves out of poverty, says study co-author Matthew Bonds, an economist at Harvard University.
Bonds uses as an example Rwanda, a sub-Saharan country that succeeded in reducing extreme poverty and hunger as part of the Millennium Development Goals, a series of international development goals for the year 2015. “[Rwanda] has had a major investment in health infrastructure and health systems,” he says. “Most people can get access to health insurance and to most forms of health care inexpensively.” In addition, foreign businesses are investing in the country’s energy and telecommunication sectors, helping to lift people out of poverty traps, he says.
However, the study can’t tell whether additional health interventions in Rwanda would result in more economic growth. To address that, the researchers would need a detailed survey of the country’s pests, epidemics, and per person income. “Our models simply show theoretical possibilities, they do not provide conclusive evidence,” Bonds says. Yet, he adds, the models predicted health interventions to be the most significant drivers of positive economic outcomes. “You can’t lose with health care.” |
In the Pacific: Mysterious Seabird Declines
Why are puffins, auklets and other seabirds declining off the U.S. West Coast?
Off the U.S. West Coast, several seabird species are in trouble. The birds, all members of the auk family, spend their lives at sea, coming ashore only to raise their young. More at home in the ocean than on land or in the air, they use their wings to “fly” underwater to catch prey, often diving as deep as 100 to 300 feet.
Over the past century, auk populations have been depleted by introduced predators on nesting islands, oil spills, pollution and fishing nets that entangle and drown them. But today there are new threats. The seabirds’ food web appears to be unraveling, and scientists suspect global warming is the cause.
In the 1970s, about 100,000 pairs of Cassin’s auklet nested on California’s remote Farallon Islands. Today only 20,000 pairs remain. In 2005 and 2006, not one of the birds’ eggs hatched, and only about a third of the pairs fledged young in 2007.
Most tufted puffins nest in North America from the Aleutians to California. The birds’ northernmost colonies are thriving but their southern colonies are declining dramatically. At Oregon’s Three Arches Rocks, puffin numbers have plummeted, and only a fraction remain. In Washington, nearly a third of all colonies have disappeared.
Scientists suspect that lack of food is causing both species’ troubles. Adult Cassin’s auklets leave their young alone during the day and fly back each night with krill to feed them. But krill have been in short supply the past few years when auklets are trying to raise their chicks. Puffins feed their chicks fish, which also have declined during the seabird’s breeding season.
Why is prey disappearing?
One explanation may be that the ocean is heating up. From 1937 to 2002, the sea surface temperature in the vicinity of British Columbia’s Triangle Island has fluctuated from year to year but increased overall by nearly 1 degree C. Scientists have monitored breeding seabirds on the island since 1975, including a colony of 50,000 tufted puffins. They’ve found the puffins’ fledgling success is virtually zero when the sea surface exceeds 9.9 degrees C. When the water is that warm, biologists believe prey fish move elsewhere, forcing adult puffins to follow and leave their eggs and chicks behind.
“Breeding birds are tied to specific places,” explains Julie Thayer of Petaluma, California-based PRBO Conservation Science. The young of puffins and auklets must stay in or near their burrows until they are able to fly. As a result, parents are tethered to their nesting island for as long as 3 months, usually unable to forage more than about 30 miles away. The timing of prey availability is very important to these birds, adds Thayer.
Changing climate seems to be throwing that timing out of kilter. In particular, changes to coastal upwelling—the critical movement of nutrient-rich water from the depths of the ocean to its surface—may be having disastrous consequences. Upwelling delivers food to phytoplankton, the single-celled plants that are the foundation of marine food chains. Because phytoplankton can live only in the top 100 feet or so of the ocean, where light permeates, their existence depends on upwelling.
Today it often takes longer for upwelling to occur, and sometimes it doesn’t happen at all. When mixing does take place, the waters usually come up from a shallower level, which means they are poorer in nutrients. This sets off a domino effect in the food chain: disappearing phytoplankton, a drop in krill and other animal plankton, a scarcity of prey—and starving seabirds.
Other auks facing tough times:
From California to British Columbia, rhinoceros auklets have had trouble finding enough food for their youngsters in recent years. In times past, when the birds laid their eggs, the waters surrounding their nesting islands were roiling with rockfish, sand lance, sablefish and squid. Today the fish are often AWOL.
Horned puffins breed primarily in Alaska, where they have begun nesting farther to the north as the summer pack ice shrinks. The birds winter in the Pacific, usually far from land, and are rarely found washed up on the coast. But that has changed over the past few years, mystifying researchers who wonder if more birds are dying or if shifting conditions are forcing them to forage closer to shore.
Canaries in a coal mine?
“We see signals in birds,” says seabird biologist Bill Sydeman. “They’re the best indicators of what’s going on.” As president of the Petaluma, California-based Farallon Institute for Advanced Ecosystem Research, he and his colleagues have been following the islands’ seabirds for the past 35 years. “We know there are changes to the ecosystem,” says Sydeman.
Adapted from "Seabird Signals" by Doreen Cubie, National Wildlife, August/September 2008. |
Students need the opportunity to work non-routine problems, but they also need to be taught what it sounds like for a mathematician to work through a tough problem.
Consider this item from the 2013 Texas 4th Grade STAAR test:
This item was coded as a perimeter problem, but because it has process skills embedded, a child could totally know how to find the perimeter of a figure and still miss the problem. Stop for a minute and consider the skills needed to solve this problem.
Here’s where modeling comes in. Students need to hear your thinking about how to work this problem. That’s the way to develop true problem solving/critical thinking skills. It might sound something like this…
[teacher reads the problem, which is projected on an overhead, document camera, or interactive white board]
Wow! That’s a pretty complicated problem! I heard several math vocabulary words as I read, so I’d better think carefully about the meaning of each word. Let me break this down and see if I understand each part.
[re-reading] Use the ruler provided to measure the side lengths of the figures below to the nearest centimeter. Okay, I need to measure the sides, and I’m using centimeters, not inches. Let me make sure I’m using the correct side of my ruler. [measuring the first side] I’d better be sure I line my ruler up carefully. I know that’s important when I’m measuring length. As I measure, I should write my measurements on each side. That way, I’ll have the information I need to work with. [measures all sides and record the measurements] Hmmm, I notice that all the sides on this hexagon, because it has six sides, are congruent. They’re all the same length.
I’m done with the measuring. I’ll read more and see if I can figure out what to do next. [re-reading] What is the difference between the perimeters of these figures? Okay, it mentions the perimeters of the figures. I know that perimeter is the distance around a figure. I’ll find the perimeter of each figure by adding up the side lengths. Oh! The sides on the hexagon are equal, so I can multiply those. [finds perimeter of each figure, recording the calculations].
Hmmm, now I’ve got the perimeters, but is that my answer? I’d better re-read the question again. [re-reading] What is the difference between the perimeters of these figures? Oh!! I need to find the difference between the perimeters. I remember that difference is like subtraction. I’m comparing the perimeters of the two shapes, so I’ll subtract the smaller perimeter from the larger perimeter. Oh! I already know that 29 can’t be the answer, because that number is too big. I’ll bet that’s adding the perimeters. The perimeter of the triangle is 17 cm and the perimeter of the hexagon is only 12 cm, so 17 – 12 equals 5.
Wow! That wasn’t really so hard after all. It had a lot of steps, but each of the steps was pretty easy once I read it and broke it down.
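For readers who want to check the arithmetic in the think-aloud, here is a quick sketch. The hexagon's six congruent sides were measured at 2 cm each; the triangle's individual side lengths are not given in the problem, so the 6, 6, and 5 cm below are hypothetical values chosen only to match the 17 cm perimeter in the narration.

```python
# A quick check of the arithmetic in the think-aloud. The hexagon's six congruent
# sides measured 2 cm each; the triangle's individual side lengths are not given,
# so 6, 6, and 5 cm are hypothetical values that match the 17 cm perimeter above.

hexagon_sides = [2] * 6        # six congruent sides, 2 cm each
triangle_sides = [6, 6, 5]     # hypothetical measurements, in cm

hexagon_perimeter = sum(hexagon_sides)      # 12 cm (or 6 * 2, as in the narration)
triangle_perimeter = sum(triangle_sides)    # 17 cm
difference = triangle_perimeter - hexagon_perimeter

print(hexagon_perimeter, triangle_perimeter, difference)   # 12 17 5
```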
Notice a couple of things:
- The teacher does all the talking. This is not the time to have students help you solve the problem–that can come later.
- It really doesn’t take all that long! Model one problem like this a day, and you’re sure to see your students’ problem solving abilities grow.
- Pack as much math into the problem as possible. This was not a geometry problem, but I took the opportunity to use the names of the geometric figures as well as the word congruent.
- Be sure to model the habits you want your students to develop. By recording all of your work when you model the problem, and commenting on why it’s important to do so, you are showing the students that mathematicians communicate their thinking using words, numbers, and pictures. That’s better than just telling your students, don’t forget to show your work.
Bottom line, our students don’t come to us knowing how to think critically. It’s part of our job to help students develop analytical skills right along with the math content. |
Temporal range: Upper Triassic
Riojasaurus was a herbivorous prosauropod dinosaur. It was one of the earliest of the large, plant-eating dinosaurs. Riojasaurus lived during the Upper Triassic, roughly 225 to 219 million years ago. Fossils have been found in La Rioja Province in Argentina. These include incomplete skeletons from about 20 individuals.
Riojasaurus had a heavy body, bulky legs, and a long neck and tail. Its leg bones were dense and massive for a prosauropod. By contrast, its vertebrae were lightened by hollow cavities, and unlike most prosauropods, Riojasaurus had four sacral vertebrae instead of three.
Definition of Chemical Equilibrium
Chemical equilibrium applies to reactions that can occur in both directions. In a reaction such as:
CH4 (g) + H2O (g) ⇌ CO (g) + 3 H2 (g)
The reaction can happen both ways. After some of the products are created, the products begin to react to form the reactants again. At the beginning of the reaction, the rate at which the reactants change into products is higher than the rate at which the products change back into reactants.
The net change is therefore an increase in the amount of products. Even though the reactants are continually forming products and vice versa, the amounts of reactants and products eventually become stable. When the net change in the products and reactants is zero, the reaction has reached equilibrium. This equilibrium is a dynamic equilibrium: the amounts of products and reactants remain constant even though the forward and reverse reactions continue.
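A short simulation makes the idea of a dynamic equilibrium easier to see. The sketch below uses an invented reversible reaction A ⇌ B with hypothetical rate constants; it is not a model of any reaction discussed in this experiment.

```python
# A minimal sketch of dynamic equilibrium for an invented reversible reaction
# A <-> B. The rate constants and starting concentrations are hypothetical and
# are not meant to model any reaction discussed in this experiment.

kf, kr = 0.20, 0.05      # hypothetical forward and reverse rate constants (1/s)
A, B = 1.0, 0.0          # mol/L: start with only the reactant A
dt = 0.1                 # time step (s)

for _ in range(2000):
    forward = kf * A * dt    # amount of A converted to B this step
    reverse = kr * B * dt    # amount of B converted back to A this step
    A += reverse - forward
    B += forward - reverse

print(round(A, 3), round(B, 3))              # concentrations settle near 0.2 and 0.8
print(round(kf * A, 3), round(kr * B, 3))    # the forward and reverse rates are now equal
```

The concentrations stop changing not because the reactions stop, but because the forward and reverse rates have become equal.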
The objective of the experiment is to show how experimental conditions affect chemical equilibrium. The effects of the reactant concentration (CH3COO-) and of common ion addition (H+) are examined for the reaction given below:
3 Fe3+ + 6 CH3COO- + 2 H2O ⇌ [Fe3(OH)2(CH3COO)6]+ + 2 H+
A solution of the complex [Fe3(OH)2(CH3COO)6]+ is an intense orange. A spectrophotometer can be used to assess the yield of the reaction: the higher the concentration of the coloured complex, the higher the absorbance of the examined solution.
The equilibrium condition is described in terms of the rate of the forward and reverse reactions for the reaction. The equilibrium constant (KP and KC) and the Law of Mass Action are introduced.
In a chemical reaction, chemical equilibrium is the state in which both reactants and products are present at concentrations which have no further inclination to change with time. This state marks when the forward reaction proceeds at the same rate as the reverse reaction. The reaction rates of the forward and backward reactions are generally not zero but equal. There are no net changes in the concentrations of the reactant(s) and product(s). Such a state is known as dynamic equilibrium.
Chemical equilibrium is the condition which occurs when the concentration of reactants and products participating in a chemical reaction exhibit no net change over time. Chemical equilibrium may also be called a steady state reaction. The quantities of reactants and products have achieved a constant ratio, but they are almost never equal. There may be much more product or much more reactant.
Equipment and Material
- Test tubes, glass stirring rod, Berol pipette
- Syringe with cap, beakers, hot plate
- Test tube holder, hot mitts
Experiment time: 60–90 min
Chemicals used for the experiment
- Bromothymol blue, phenolphthalein, 0.1 M NaOH, 0.1 M HCl
- M Zn(NO3)2, 15 mL club soda, 6 M NaOH, 6 M HCl
- Saturated NH4Cl solution, NH4Cl crystals, deionised water
This experiment is designed to determine the effects of disturbances on chemical systems at equilibrium. The response of the chemical systems will be explained in terms of Le Chatelier's principle.
Chemical equilibrium plays an important role in our lives. Many of the chemical changes involved in the metabolism of food are equilibrium processes. A number of important industrial processes involve chemical reactions that do not proceed to completion because of the reverse (back) reaction. The reversibility of a reaction competes with its forward progress: there is a point in a reaction when the products begin to react back to form reactants.
The extent of the reaction, whether 20% or 80%, can be determined by measuring the concentration of each component in solution once the amounts of product and reactant have stopped changing. The extent of the reaction is a function of temperature, concentration, and degree of organization, and is described by a constant value called the equilibrium constant.
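As a worked illustration of how an equilibrium constant follows from measured concentrations, the sketch below applies the Law of Mass Action to the reversible methane-reforming reaction quoted earlier. The equilibrium concentrations are hypothetical numbers, not data from the experiment described above.

```python
# A worked illustration of the Law of Mass Action for the reversible reaction
# quoted earlier, CH4(g) + H2O(g) <-> CO(g) + 3 H2(g). The equilibrium
# concentrations below are hypothetical numbers, not data from this experiment.

ch4, h2o, co, h2 = 0.30, 0.25, 0.10, 0.60   # mol/L, measured once amounts stop changing

# Kc = [CO][H2]^3 / ([CH4][H2O]); each concentration is raised to its coefficient.
Kc = (co * h2 ** 3) / (ch4 * h2o)
print(round(Kc, 3))   # about 0.288 for these hypothetical concentrations

# Evaluating the same expression with non-equilibrium concentrations gives the
# reaction quotient Q; comparing Q with Kc shows which way the reaction shifts.
```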
Teaching healthy eating habits
As all parents of toddlers quickly learn, teaching them healthy eating habits is an ongoing lesson. Between the challenges of tackling fussy eating and their discovery of independent thought (and the word 'no'!), making sure that they eat well can be a challenge. Here's how you can save the mealtime battles with your child:
Creating healthy eating tips
- A toddler's appetite is as inconsistent as their emotions. Today they'll eat anything, tomorrow, it's only square food! While this may well drive you nuts, it's also very normal. What you can do to help stabilise this behaviour is teach your toddler to understand their hunger levels.
- Nutritionists suggest you ask your child if they are hungry before serving a meal or snack so your child begins to make the link between feeling hungry and then eating.
- The Children's Hospital at Westmead, Sydney, points out that children are born with the ability to self-regulate food intake and safeguarding this ability is one of the biggest factors in preventing childhood obesity. So don't reprimand a child if they don't have an appetite or don't finish their meal, but do explain that an unfinished dinner means no dessert.
- If your child leaves their meal and you are worried they haven't eaten enough, put it aside and store appropriately so you can offer it later if your child comes asking for food.
- Don't give in to requests for different food. A toddler might also find it easier to cope with six small meals during the day rather than three bigger ones.
- A key nutrient toddlers need is EFA/DHA, the potent component of omega-3 fatty acids. Found in deep-sea oily fish such as salmon and tuna, these are essential for brain development, especially from birth to two years. And don't assume your child would never eat fish such as salmon.
- Once your child becomes familiar with a new food they may take to it. You can try introducing a new food by adding it to old favourites.
- Iron deficiency is common in children. They require three serves of lean red meat a week to get enough iron, which is essential for red blood cells to carry oxygen to every cell in the body. A lack of iron shows up as lethargy, poor concentration, recurring sickness and pale skin, particularly the undersides of eyes and nailbeds.
- Calcium is the nutrient children need lots of for strong teeth and bones. Kids should be given full-fat dairy until they are two years. After two years, the recommendation is for reduced-fat dairy products such as light milk, which is lower in fat but higher in calcium. Other calcium-rich food sources are nuts, seeds, tahini, leafy green vegetables and soft-boned fish such as sardines and anchovies.
More preschooler nutrition articles:
- Omega 3 and kids
- Facts about Vitamin D
- Fluoride and your child
- Teaching your preschooler healthy eating habits
- Household germ hideouts
- Sneaky but simple ways to get your kids to eat fruit
- 10 kid-friendly foods with super powers
- The five second rule: fact or fiction?
- 7 healthy habits every child should know
More preschooler firsts:
- Development milestones for preschoolers
- Ideas for fun at home with your preschooler
- All about how preschoolers learn
- What to expect from preschooler manners
- Preschool health and nutrition
- Heading out and about with preschoolers
- Recipe ideas to tempt preschool-aged kids
- Keeping your preschooler safe
- All about sleep and rest for preschoolers
- Helping your preschooler solve problems
This article was written for Kidspot, Australia's best parenting resource for babies, toddlers and preschoolers. |
Transcript for Hurricane Force - A Coastal Perspective, segment 02 of 12
America's hurricane-prone coasts stretch for over four thousand miles. Populations on these coasts have exploded to more than eighty million people. In spite of this, improved hurricane tracking and warning systems have greatly reduced the loss of life due to these storms. Nevertheless, hurricane property losses have escalated. Property destruction by hurricanes frequently results from high winds and, to a lesser extent, from flooding due to heavy rains. Historically, coastal flooding and wave attack known as storm surge has been a hurricane's chief threat to life and property in low-lying coastal regions. Today, coastal development, coastal resources, and the natural environment are also in jeopardy. Fragile habitats such as reefs and wetlands are particularly vulnerable. Mounting human pressures on the coastal zone add need and urgency to better understanding the range of forces shaping America's coasts, from the day-to-day work of tides to the work of great storms and hurricanes. Coastal geologists with the Department of the Interior, U. S. Geological Survey, have teamed with other scientists to study the impacts of hurricanes on America's coasts as part of a larger effort to understand erosion and the causes of coastal change. This film focuses on the varied impact of hurricanes on distinct coastal types, including Hurricane Andrew's nineteen ninety-two impact on coastal Louisiana, Hurricane Hugo's nineteen eighty-nine impact on the Puerto Rican island Culebra, and Hurricane Iniki's nineteen ninety-two impact on the Hawaiian island Kauai.
The word “amen” is what’s known as a transliteration. A transliteration happens when a word is taken from another language, as is, to mean the same thing in the new language as it did in the language from which it was taken.
The New Testament was originally written back in the first century, in Greek. Greek was then the "second language" of the world, much as English is today.
But "Amen" in the Greek was actually a transliteration from yet another language, Hebrew. It was also transliterated into the language Jesus used every day, Aramaic.
“Amen” means truly! It's a word that underscores the truth of a statement that comes before or after it's said. Or, as in the case of prayer, it denotes faith in the God to Whom prayer is offered.
When in English translations of the Bible, we see Jesus saying things like, “Verily” or “Truly,” it means that the original Greek cites Him as saying, “Amen” or even, “Amen, Amen!”
Of course, as mentioned above, Jesus’ everyday language was Aramaic. But He and the fishermen and tax collectors among His disciples, given the cosmopolitan region in which they grew up, probably were conversant with Aramaic, Hebrew (the language used in the synagogue), Greek (the international language of trade and scholarship), and Latin (the language of their Roman conquerors). His earliest followers composed the books and letters that now make up the New Testament. |
blueprint, white-on-blue photographic print, commonly of a working drawing used during building or manufacturing. The plan is first drawn to scale on a special paper or tracing cloth through which light can penetrate. The drawing is then placed over blueprint paper, prepared with a mixture of potassium ferricyanide and ammonium ferric citrate. When the attached drawing and the blueprint paper are exposed to a strong light, the unprotected ferric salt not lying beneath the lines of the drawing is changed to a ferrous salt that reacts with the ferricyanide to form Turnbull's blue. This blue is the background of the finished print. The ferric salt under the lines of the drawing, protected from the light, remains and is dissolved during the washing in water that follows exposure. As a result, the lines of the original drawing appear white in the finished blueprint.
The Lexile Framework for Reading is an educational tool that uses a measure called a Lexile to match readers of all ages with books, articles and other leveled reading resources. The Lexile Framework uses quantitative methods, based on individual words and sentence lengths, rather than qualitative analysis of content to produce scores. Accordingly, the scores for texts do not reflect factors such as multiple levels of meaning or maturity of themes, and the US Common Core State Standards recommend the use of alternative, qualitative, methods for selecting books for students at grade 6 and over. Lower scores are meant to reflect easier readability.
Lexile measures are reported from reading programs and assessments annually. Thus, about half of U.S. students in grades 3 through 12 receive a Lexile measure each year. Lexile measures are used in schools in all 50 states and abroad.
Components of the Lexile Framework
The Lexile Framework for Reading is made up of Lexile reader measures and Lexile text measures, both of which are put on the Lexile scale.
The Lexile scale runs from below 0L (Lexile) to above 2000L. Scores 0L and below are reported as BR (Beginning Reader).
A Lexile measure is defined as "the numeric representation of an individual’s reading ability or a text’s readability (or difficulty), followed by an “L” (Lexile)". There are two types of Lexile measures: Lexile reader measures and Lexile text measures. A Lexile reader measure typically is obtained when an individual completes a reading comprehension test. Once a field study has been performed to link Lexile Framework with the test, the individual’s reading score can be reported as a Lexile measure.
For an individual, a Lexile measure is typically obtained from a reading comprehension assessment or program. These range from the adolescent level (DIBELS: Dynamic Indicators of Basic Early Literacy Skills) to the adult level (TABE: Test of Adult Basic Education). A Lexile text measure is obtained by evaluating the readability of a piece of text, such as a book or an article. The Lexile Analyzer, a software program specially designed to evaluate reading demand, analyzes the text’s semantic (word frequency) and syntactic (sentence length) characteristics and assigns it a Lexile measure. Over 60,000 Web sites, 115,000 fiction and nonfiction books, and 80 million articles have Lexile measures, and these numbers continue to grow. Over 150 publishers including Capstone Publishers, Discovery Ed, Houghton Mifflin Harcourt, McGraw-Hill, Pearson PLC, Riverside Publishing, Scholastic Corporation, Simon & Schuster, Workman Publishing Company, and World Book offer certified Lexile text measures for their materials.
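The exact Lexile formula and its word-frequency corpus are proprietary, so they cannot be reproduced here, but the two kinds of features the Analyzer is described as using, sentence length and word frequency, are easy to illustrate. The sketch below computes rough versions of both from a tiny, invented frequency table; it shows only the type of measurement involved, not the actual Lexile calculation.

```python
# A rough, hypothetical sketch of the two kinds of text features described above:
# sentence length (syntactic) and word frequency (semantic). The real Lexile
# formula and its word-frequency corpus are proprietary, so this only shows the
# type of measurement involved, not the actual calculation.

import math
import re

# Invented mini word-frequency table (counts per million words). A real analyzer
# would draw on a large reference corpus instead.
FREQ = {"the": 60000, "cat": 40, "sat": 25, "on": 33000, "mat": 12,
        "photosynthesis": 3, "converts": 30, "sunlight": 18, "into": 20000, "energy": 150}

def text_features(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z]+", text.lower())
    mean_sentence_length = len(words) / len(sentences)
    mean_log_frequency = sum(math.log(FREQ.get(w, 1)) for w in words) / len(words)
    return mean_sentence_length, mean_log_frequency

print(text_features("The cat sat on the mat."))                        # short, common words
print(text_features("Photosynthesis converts sunlight into energy."))  # rarer words
```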
The maker claims that noting the Lexile measure of a text can assist in selecting “targeted” materials that present an appropriate level of challenge for a reader — not too difficult to be frustrating, yet difficult enough to challenge a reader and encourage reading growth.
There is no direct correspondence between a specific Lexile measure and a specific grade level.
History
The Lexile Framework was developed by MetaMetrics co-founders Stenner and Malbert Smith III, Ph.D. in 1989. Funding for developing a better measurement system for reading and writing was provided by the National Institutes of Health through the Small Business Innovation Research grant program. Over the twelve-year period from 1984 through 1996, Stenner and Smith received a total of five grants on measurement of literacy. Development of the Lexile Framework was fueled by conversations and comments from John B. Carroll (UNC-Chapel Hill) and Benjamin Wright (University of Chicago), and by mathematical and psychometric assistance from Donald S. Burdick, associate professor emeritus of Statistical Science at Duke University. Stenner founded MetaMetrics in 1997.
The measurement ideas embedded in the Lexile Framework can be found in two 1982/1983 articles by Stenner and Smith, written when they participated in the evaluation of Head Start, comparing different programs from across the country that used different outcome measures.
Independent evaluations
In Tools for Matching Readers to Texts: Research-Based Practices, Mesmer stated that the Lexile Framework for Reading was valid, reliable, and had "excellent psychometric properties."
Mesmer also cites Walpole, detailing a study which used Lexile to match 47 second-grade readers to textbooks. The study found that Lexile was successful at matching students to texts with respect to reading accuracy (93%), but not at matching readers to texts that they could read at an acceptable rate: "Without support, either in the form of fluency modeling or repeated reading, these texts would be too difficult for these students to read productively on their own."
In 2002, the Lexile Framework was evaluated by Dale Carlson. The independent consultant found that the Lexile Framework had a "well-delineated theoretical foundation." Both Carlson and Mesmer have remarked on the positive and unique characteristic of having both the student and text on the same scale.
In 2001, the National Center for Educational Statistics (NCES) formally reviewed Lexile measures. The report acknowledged the science behind Lexile measures: “The panel affirmed the value of both sentence length and word frequency as overall measures of semantic and syntactic complexity....” Additionally, according to one panel member, the Lexile Framework appears “…exceptional in the psychometric care with which it has been developed; the extent of its formal validation with different populations of texts, tests, and children; in its automation; and in its developers’ continual quest to improve it.” However, the report also identified a number of issues and the different authors identified a range of concerns, such as the exclusion of factors such as reader knowledge, motivation and interest: "The notion of purpose in reading is excluded in the Lexile Framework. This is a serious oversight because of the dramatic effects that purpose can have on reading"
Criticism
Stephen Krashen, educational researcher in language acquisition and professor emeritus at the University of Southern California, raised serious concerns with the Lexile rating system in his article, “The Lexile Framework: Unnecessary and Potentially Harmful.” Krashen argues that a reading difficulty rating system limits children’s choices and steers them away from reading books in which they may be interested.
Furthermore, like most reading formulas, the formula used to determine a book’s Lexile level can often lead to a flawed rating. For example, The Library Mouse, by Daniel Kirk, is a 32-page children’s picture book rated by Amazon.com as “for ages 4-8” and has a Lexile score of 830. However, Stephenie Meyer’s 498-page, young adult novel Twilight only garners a Lexile score of 720. Similarly, Beverly Cleary’s Ramona Quimby, Age 8, has a Lexile score of 860, while Michael Crichton’s Jurassic Park only has a score of 710.
Elfrieda H. Hiebert, Professor of Educational Psychology at University of California, Berkeley, noted in her study, "Interpreting Lexiles in Online Contexts and with Informational Texts," “The variability across individual parts of texts can be extensive. Within a single chapter of Pride and Prejudice, for example, 125-word excerpts of text (the unit of assessments used to obtain students’ Lexile levels) that were pulled from every 1,000 words had Lexiles that ranged from 670 to 1310, with an average of 952. The range of 640 on the LS [Lexile Scale] represents the span from third grade to college.”
Hiebert also demonstrated that slight changes in punctuation, such as changing commas to periods, resulted in “significant reclassification on the LS [Lexile Scale].”
Besides limiting children’s reading choices and misrepresenting books’ reading difficulty, the Lexile Scale has had negative effects at a systemic level. When school districts and states began to mandate specific readability programs, textbook publishers responded by manipulating texts to tailor them to the requirements of the readability formulas.
Furthermore, the Lexile Framework costs states and school districts valuable resources. Even though other readability formulas, such as the Flesch-Kincaid used in Microsoft Word’s software, are widely used to establish reading levels and difficulty, the Lexile Scale is the major method of establishing text difficulty in American schools. However, unlike readability formulas of the past, MetaMetrics, the creator of the Lexile Framework, “retained the processing of readability as intellectual property, requiring educators and other clients to pay for their services to obtain readability levels.” Mesmer lists the cost of using the Lexile inventory tools as one of the disadvantages of using the system.
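Unlike the Lexile formula, the Flesch-Kincaid grade-level formula mentioned above is published, so a rough version can be sketched directly. The syllable count below is a crude vowel-group estimate, so the grades it prints are only approximate; very simple text can score at or below zero.

```python
# The Flesch-Kincaid grade-level formula mentioned above is public, unlike the
# Lexile formula. This sketch uses a crude vowel-group syllable estimate, so the
# grades it prints are approximate; very simple text can score at or below zero.

import re

def flesch_kincaid_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

print(round(flesch_kincaid_grade("The cat sat on the mat. The dog ran."), 1))
print(round(flesch_kincaid_grade(
    "Readability formulas estimate text difficulty from sentence length and word length."), 1))
```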
Common Core Standards
Lexile measures are cited in the U.S. Common Core State Standards for English Language Arts to provide text complexity grade bands and corresponding Lexile ranges. These grade and Lexile ranges are used to help determine at what text complexity level students should be reading to help ensure students are prepared for the reading demands of college and careers. However, the standards also note that quantitative methods, including Lexile scores, often underestimate the challenges posed by complex narrative fiction, which may use relatively simple prose. The standards state that until quantitative methods are able to take into account the factors that make such texts challenging, preference should be given to qualitative measures of text complexity when evaluating narrative fiction intended for students in grade 6 and over.
Over 40 reading assessments and programs report Lexile measures, including many popular instruments from Scholastic, Pearson, CTB/McGraw-Hill and Riverside Publishing, as well as a growing number of year-end state assessments.
Reading assessments that report Lexile measures
- Arizona's Instrument to Measure Standards (AIMS)
- California English-Language Arts Standards Test
- Delaware Comprehensive Assessment System
- Florida Assessments for Instruction in Reading (FAIR)
- Georgia Criterion-Referenced Competency Tests and the Georgia High School Graduation Test (CRCT and GHSGT)
- Hawaii State Assessment
- Illinois Standards Achievement Test (ISAT)
- Kansas State Assessments of Reading
- Kentucky Core Curriculum Test (KCCT)
- Minnesota Comprehensive Assessments (MCA)
- New Mexico Standards-Based Assessment (SBA)
- North Carolina End-of-Grade and English I End-of-Course (NCEOG and NCEOC)
- Oklahoma Core Curriculum Test (OCCT)
- Oregon Assessment of Knowledge and Skills (OAKS)
- South Carolina Palmetto Assessment of State Standards (PASS)
- South Dakota State Test of Educational Progress (DSTEP)
- Tennessee Comprehensive Assessment Program (TCAP) Achievement Test
- Texas Assessment of Knowledge and Skills (TAKS)
- Virginia Standards of Learning Tests (SOL)
- West Virginia WESTEST 2
- Proficiency Assessments for Wyoming Students (PAWS)
- CTB/McGraw-Hill: TerraNova (CAT/6 and CTBS/5) and Tests of Adult Basic Education (TABE)
- ERB: Comprehensive Testing Program, 4th Edition (CTP 4)
- Pearson: Stanford 9, Stanford 10, MAT 8, and Aprenda 3
- Riverside Publishing: The Iowa Tests (ITBS and ITED) and Gates-MacGinitie Reading Tests, Fourth Edition
- American Education Corporation: A+ LearningLink assessment
- Dynamic Measurement Group: Dynamic Indicators of Basic Early Literacy Skills (DIBELS)
- Florida Center for Reading Research: Florida Assessments for Instruction in Reading
- Measured Progress: Progress Toward Standards (PTS3)
- NWEA: Measures of Academic Progress (MAP)
- Pearson: Stanford Diagnostic Reading Test, Fourth Edition (SDRT 4) and Stanford Learning First
- Scantron: Performance Series
- Scholastic: Scholastic Reading Inventory (SRI)
- Achieve3000: KidBiz3000; Grades 2-8, TeenBiz3000; Grades 9-12
- New Mexico Standards-Based Assessment Grades 3-9, 11
- Pearson: Aprenda 3
- Scholastic Reading Inventory
- Texas Assessment of Knowledge and Skills (TAKS)-Spanish; Grades 3-6
- E-LQ Assessment
- GL Assessment, Progress in English (PIE) assessment; ages 7–11
- ETS: TOEFL
- ETS: TOEIC
- Scholastic International
Assessments for Homeschoolers
- BJU Press Testing and Evaluation: Stanford and Iowa achievement tests
- EdGate: Total Reader (TR)
- Riverside Publishing: Gates-MacGinitie Reading Tests
- Riverside Publishing: Iowa Tests of Basic Skills (ITBS)
Reading programs that report Lexile measures
- Achieve3000: KidBiz3000 and TeenBiz3000
- Capstone Digital: myON reader
- Engaging English
- EdGate: Total Reader (TR)
- Hampton-Brown: The Edge and Insider
- Houghton Mifflin Harcourt: Earobics
- LaunchPad Learning
- Mindy's Bookworms
- Pearson/Longman/Prentice Hall: MyReadingLab
- Scholastic Reading Counts!, READ 180, and ReadAbout
- Sopris West: LANGUAGE!
- Thinkronize: netTrekker d.i.
- Voyager Expanded Learning: Passport Reading Journeys
Free tools
Both Barnes & Noble’s Lexile Reading Level Wizard and MetaMetrics’ Find a Book are free utilities that enable students to find books on subjects that interest them and are within their Lexile range. MetaMetrics also offers two tools free of charge to educators. The organization offers access to the Lexile Analyzer, a software program that is used to determine the Lexile measure of a text, and the Lexile Titles Database Download, a file containing Lexile measures for over one hundred thousand books.
- "Common Core Standards for English Language Arts & Literacy in History/Social Studies, Science, and Technical Subjects". Corestandards.org. Retrieved 2014-02-16.
- Hiebert, E.H. (2002). Standards, assessment, and text difficulty. In A. E. Farstrup & S. J. Samuels (Eds.). What research has to say about reading instruction (3rd Ed.). Newark, DE: International Reading Association.
"Lexile Guide". GL Assessment.
"Lexiles in Education". MetaMetrics. Retrieved 5 February 2010.
Lennon, C. & Burdick, H. (2004)."The Lexile Framework as an approach for reading measurement and success.". MetaMetrics.
"Measured Progress Adds Lexile and Quantile Measures to its Progress Toward Standards Online Assessment". Retrieved 5 February 2010.[dead link]
- "Facts for Features". US Census Bureau. Archived from the original on 27 June 2008. Retrieved 16 June 2008.
- "Lexile Measures at Home". Georgia Department of Education.
- White, S. & Clement,J."Assessing the Lexile Framework: Results of a Panel Meeting". U.S. Department of Education, National Center for Education Statistic. Retrieved August 2001.
- "Linking DIBELS Oral Reading Fluency with The Lexile Framework for Reading". MetaMetrics. Retrieved 2009.
- "News - Capstone". Capstonepub.com. 2009-04-15. Retrieved 2013-11-07.
- Andriani, Lynn (2009-08-10). "MetaMetrics Providing Lexile Measures for Simon & Schuster". Publishersweekly.com. Retrieved 2013-11-07.
- "World Book Adds Lexile Measures to World Book Web Articles - Internet@Schools Magazine". Internetatschools.com. 2010-04-23. Retrieved 2013-11-07.
- "Who Are Our Publisher Partners". Lexile.com. Retrieved 2014-05-20.
- Webster, L. (Spring 2000). "Jack Stenner: The Lexile King". Popular Measurement.
- "Management". MetaMetrics. Retrieved 10 February 2010.
- Smith, D.R., Stenner, A.J., Horabin, I., & Smith, M. (1989). The Lexile Scale in Theory and Practice. Final report for NIH grant HD-19448.
- Stenner, A. J. & Smith, M. (1982). "Testing Construct Theories". Perceptual and Motor Skills.
- Stenner, A. J., Smith, M., & Burdick, D. S. (1983). "Toward a Theory of Construct Definition". Journal of Educational Measurement.
- Mesmer, H. (2007). Tools for Matching Readers to Text: Research-Based Practices. Guilford Publications, Inc.
- Walpole, S., Hayes, L., and Robnolt, V. (2006). "Matching second graders to text: The utility of a group-administered comprehension measure". Reading Research and Instruction, Volume 46, Issue 1.
- Carlson, D. (2002). The Validity and Potential Usefulness of the Lexile Framework: A Brief Review Conducted for the Wyoming Department of Education.
- "The Lexile Framework: Unnecessary and Potentially Harmful - Page 1" (PDF). Retrieved 2014-02-16.
- "The Lexile Framework for Reading". Lexile.com. Retrieved 2013-11-07.
- "Interpreting Lexiles". Apexlearning.com. Retrieved 2014-02-16.
- "The Lexile Framework for Reading Map". Lexile.com. Retrieved 2014-06-13.
- "How To Get A Lexile Measure". Retrieved 10 February 2010.
- "How to get a Lexile Measure". Lexile.com. Retrieved 2013-11-07.
- "State Assessments". Lexile.com. Retrieved 2013-11-07.
- "Norm-Referenced Assessments". Lexile.com. Retrieved 2013-11-07.
- "Assessments for Homeschoolers". Lexile.com. Retrieved 2013-11-07.
- "Reading Programs". Lexile.com. Retrieved 2013-11-07. |
A Measurement Dictionary
Physics attempts to provide a logical, quantitative description of natural phenomena. This requires that our observations of physical phenomena include quantitative measurements. Physics is totally dependent upon these measurements. As a practical matter, however, it is impossible to measure any quantity exactly. In fact, it is also theoretically impossible to simultaneously measure some combinations of variables exactly. In order to discuss the problems that result from this dilemma, language has been adopted to describe measurements and their significance. The following is a short dictionary of these terms.
Replicable or Repeatable - Physicists are looking for general laws of nature. These laws should have wide applicability, and similar circumstances should produce similar results and experimental data. Experiments that cannot be repeated and measurements that cannot be consistently performed are not considered trustworthy. (We will always assume that many repeated measurements of a given experimental quantity are available.)
Accurate - If the average of a set of repeated measurements of a quantity is close to the true value, the measurement is said to be accurate. Example: A shotgun may scatter pellets all over a target, but if the center of the spread is close to the center of the target, then the shotgun is accurate.
Precise - If the individual measurements in a set of repeated measurements are close to each other, the measurement is said to be precise. Example: A shotgun that spreads pellets all over a target is not precise; a rifle in a bench stand should always hit the target in the same place, and is therefore precise.
Systematic error - The difference between the average value and the true value is called the systematic error. Example: If the sights on a rifle are poorly adjusted, then even if it is locked into a bench stand it will not hit the center of the target.
Random error (statistical uncertainty) - The deviation of the individual measurements from their average is called the statistical uncertainty or random error. Example: A shotgun firing pellets has a large random error.
Least count - The smallest division on the measuring scale being used is called the least count. This imposes a lower limit on the precision and accuracy possible with that scale. Example: Meter sticks are usually calibrated to the nearest millimeter.
Agreement - An experiment agrees with theory if the experimental result accurately matches the theoretical prediction to within the precision of the experiment.
Discrepancy - The difference between the results of two sets of measurements, or the difference between the results from one set of measurements and an accepted value, is called the discrepancy.
Reliable - Trustworthy, dependable, or replicable. Example: National Institute of Standards and Technology measurements are more reliable than most private web pages.
Validity - Correctly derived from accepted premises and correctly measured. Example: A measurement of the acceleration due to gravity that failed to consider the flotation of the test body in air, or the viscous drag, or the rotation of the body as it rolled down a plane, would be invalid at some level of precision.
Absolute vs. Relative error - A result may be quoted giving the uncertainty as a number, or as a percentage of the result. The former is an absolute statement of error, the latter is a statement of the relative error. Example: Absolute: 5 ± 1; relative: 5 ± 20%.
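To make these last definitions concrete, here is a small sketch that turns a set of repeated measurements into a mean, a random (statistical) uncertainty, and absolute and relative statements of that uncertainty. The five length readings and the accepted value are invented example data.

```python
# A small sketch that turns repeated measurements into the quantities defined
# above. The five length readings and the accepted value are invented examples.

import statistics

readings = [5.1, 4.9, 5.0, 5.2, 4.8]   # hypothetical repeated measurements (cm)
accepted = 5.5                          # hypothetical accepted ("true") value (cm)

mean = statistics.mean(readings)
random_uncertainty = statistics.stdev(readings)   # spread of the individual readings
relative_uncertainty = 100 * random_uncertainty / mean
discrepancy = mean - accepted                     # offset of the average from the accepted value

print(f"absolute: {mean:.2f} +/- {random_uncertainty:.2f} cm")
print(f"relative: {mean:.2f} cm +/- {relative_uncertainty:.0f}%")
print(f"discrepancy from accepted value: {discrepancy:.2f} cm")
```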
Three groups, doing three different experiments, have found values for the acceleration due to gravity:
Group 1 A Modified Atwood Machine Experiment
10 ± 1 m/s² (absolute)
10 ± 10% m/s² (relative)
In this experiment a cart was subjected to the gravitational force pulling on a small mass, and both the small mass and the cart were accelerated. The graph of the acceleration as a function of the ratio of the force to the inertial mass was linear, and the slope of the linear fit to the data produced this accurate but not very precise value for the acceleration due to gravity. Several possible experimental complications were ignored in the analysis. In particular, the inertia of the pulley, the friction in the pulley, and the inertia and friction in the wheels of the cart were all neglected, as was the mass of the line connecting the masses.
Group 2 Galileo’s Experiment
7.0 ± 0.5 m/s²
7.0 ± 7% m/s²
By rolling a ball down an inclined plane and repeatedly timing its arrival at different distances from
its starting point it was established that the speed of the ball was a linear function of time (good
correlation), and that the position was a quadratic function of time. By measuring the acceleration
for different angles of the inclined plane and extrapolating to a vertical plane a value of the
acceleration due to gravity was obtained. The value obtained, while reasonably precise, does not
agree with the accepted value. Given the large amount of data we collected, and the goodness of
the quadratic fits of position as a function of time, we believe that the experiment is reliable, and is
producing a valid result, but that the value we are obtaining is not a valid result for the acceleration
due to gravity, probably because of some effect that we have not yet encountered. (Perhaps the fact
that the ball rolled down the plane rather than slid is important!)
Group 3 Behr Free Fall Apparatus
9.777 ± 0.005 m/s²
9.777 ± 0.05% m/s²
Spark gap timing provided 32 positions as functions of time over a period of three-quarters of a
second for a body in free fall. By fitting the data to a quadratic curve we measured the acceleration
due to gravity directly. The quadratic fit to the data was excellent, showing an RMS average
discrepancy of only a few millimeters between the predicted and measured values of the position as
a function of time. This agrees with our estimate, based on the variability of the spark path, of the
experimental uncertainty in the position measurement. We achieved remarkable precision in our
value for the acceleration due to gravity. Unfortunately our result does not agree with the accepted
value. Perhaps there is some other effect of which we are not aware, such as air resistance. Maybe
we could do the experiment in vacuum. [Editorial note-the effect is the correct size to be due to the
weight of the air displaced by the falling object; a buoyancy force.]
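As an illustration of this kind of analysis, the sketch below shows how a quadratic fit might be used to extract the acceleration due to gravity and the RMS discrepancy from position-versus-time data. The data points and the use of numpy are assumptions for the example, not the actual spark-tape record.

```python
# Sketch: extracting g from free-fall position-vs-time data with a quadratic fit.
# The data points below are invented for illustration only.
import numpy as np

t = np.array([0.00, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30])        # seconds
x = np.array([0.000, 0.013, 0.050, 0.111, 0.197, 0.306, 0.440])  # metres

# Fit x(t) = a*t^2 + b*t + c; for free fall the quadratic coefficient a = g/2.
a, b, c = np.polyfit(t, x, 2)
g = 2.0 * a

# RMS discrepancy between the fitted curve and the measured positions.
residuals = x - (a * t**2 + b * t + c)
rms = np.sqrt(np.mean(residuals**2))

print(f"g = {g:.2f} m/s^2, RMS residual = {rms * 1000:.1f} mm")
```

Comparing the RMS residual with the estimated uncertainty in the position measurements is exactly the check described above: if the two are comparable, the quadratic model accounts for the data to within the precision of the apparatus.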
The Requirements of Honesty
If someone were to tell you that Lock Haven was in Ohio it would be a lie. On the other hand, if
you were told “Lock Haven is in Ohio or a neighboring state”, that would be the truth. The clause
"or a neighboring state" is a statement of uncertainty. Notice how it affects your reaction to the
statement. In order to judge the value and meaning of an experimental result the reader of an
experimental report must be told both the accuracy and precision of the experiment. We do not live
in Ohio; it is similarly dishonest to hide the uncertainty in results from the reader. Whenever a
quantity is measured an estimate of the uncertainty in that measurement must be made and
recorded. Uncertainty in measurements will naturally result in uncertainty in the final result of the
experiment. This uncertainty must be calculated and reported!
Almost no experiment agrees with theory precisely. The degree to which we trust the results, and
therefore trust the theory that is being tested, is always limited by our finite ability to measure
precisely and to include all relevant effects. In fact, when we get a discrepancy between
experiment and theory, we have either erred in our analysis, or perhaps we have discovered some
new physics. The former is cause for shame. The latter should be cause for rejoicing!
In the spring, trees are the primary source of airborne pollen. During the summer, grasses take the lead role, followed in the late summer and fall by weed pollen, including the dreaded ragweed.
The following five plants are among the worst offenders when it comes to spring allergies. Why? Because they rely on wind, not on insects, to disperse pollen. As a result, they produce a lot of it to increase the chances that grains will reach and fertilize a female flower.
Of course, all that airborne pollen also means more of it is likely to reach your nose and eyes. Read "10 Tips to Reduce Your Exposure" for ways to manage spring allergies.
Mountain Cedar and Eastern Red Cedar
"There are vast populations in central Texas, literally hundreds of thousands of acres are covered with mountain cedar trees. It is a major, major allergen there," says Estelle Levetin, chair of the Aerobiology Committee of the American Academy of Allergy, Asthma & Immunology and professor of biology at the University of Tulsa. The severity of cedar allergies in that region has spawned a movement, "People Against Cedars," to control the spread of the tree. If you live on the East Coast, you may be sensitive to its close relative, the Eastern red cedar (pictured above).
When does it peak? Mountain cedar: December through March; Eastern red cedar: February through April
Where does it grow? Mountain cedar grows in rocky, dry soil in the Southwestern United States (mainly Texas, southern Oklahoma, Arkansas and northern Mexico). Eastern red cedar prefers moist soils and can be found in states on the East Coast and in the Midwest.
Oak
The oak's hanging catkins, tassel-like appendages that are a few inches long, bear clusters of male flowers. Once these flowers disperse pollen, the catkins fall off the tree — you may have seen them on the sidewalk or piled up on your car's windshield in the spring. Pollen from the catkins fertilizes the oak's female flower to produce acorns. If you're allergic to oak, you may also be sensitive to beech, birch and alder trees, which are in the same family.
When does it peak? February through May
Where does it grow? Throughout the United States
Elm
The elm’s graceful vase-shape made it a popular shade tree for city streets. Although elms have been hit hard by Dutch elm disease over the last century, says Dr. Levetin, "it is still an important tree in many parts of the country, [so] we still see a lot of elm pollen in the air."
When does it peak? February to April (white elm or American elm), some species flower in the fall
Where does it grow? Eastern and Central United States
Red Alder
This is the tallest species of alder in North America, and the wood’s red tannin has been used as a dye. If you're allergic to alder, you may also have sensitivity to beech, birch and oak trees, which are in the same family.
When does it peak? February through April
Where does it grow? Cool and moist areas in the Western United States
Sweet Vernal Grass
Although summer is peak season for grass pollen, vernal is an early-blooming grass and is behind the sweet smell of freshly mown hay, according to The Encyclopedia of Herbs. The plant contains the natural compound coumarin, which is responsible for the grass's vanilla-like fragrance when cut and dried.
When does it peak? April to July
Where does it grow? In fields and on roadsides throughout the United States
A new study by UCLA planetary scientists and their colleagues in Germany overturns a longstanding scientific tenet and provides new insights into how convection controls much of what we observe in planets and stars.
The research unifies results from an extensive array of previous experiments. It appears in the Jan. 15 edition of the journal Nature.
"The Nature paper allows us new and meaningful predictions for where we should observe different behaviors throughout the universe wherever there are rotating convection systems, and that means planets and stars," said study co-author Jonathan Aurnou, a UCLA associate professor of planetary physics. "This allows us to make predictions for almost any body where we can measure the rotation rate and heat coming out. For me, that's exciting."
Convection describes the transfer of heat, or thermal energy, from one location to another through the movement of fluids such as liquids, gases and slow-flowing solids. As an example, when a bowl of water is heated on a stove, the heated portion of the water becomes buoyant and rises through the surrounding cooler water, while the cooler water drops down to be heated, creating a convection current.
On a larger scale, convection is an important process in the Earth's core, its atmosphere and its oceans, as well as the cores and atmospheres of other planets; it controls features such as the strength and structure of magnetic fields, atmospheric jets and heat flux patterns, according to lead study author Eric King, a UCLA graduate student in Earth and space sciences who works in Aurnou's lab.
It is known that convection is affected by planetary rotation, and for decades, scientists have believed that the influence of rotation on convection depends on the competition between two global-scale forces: the Coriolis force, which is the force that arises in all rotating systems, and the non-rotational buoyancy force. In the Nature paper, King, Aurnou and their colleagues dispute this, presenting results from laboratory and numerical experiments demonstrating that transitions between rotationally dominated and non-rotating convection behavior are determined instead by the relative thicknesses of fluids' thermal (non-rotating) and the Ekman (rotating) boundary layers.
There are two very different ways in which convecting fluids will generally behave. One is known as chaotic turbulence, which can be seen when a fluid is not rotating, as in the example of a boiling pot of water. The other occurs when a fluid is rapidly rotating, in which case the convection becomes well-organized. In the image above, figure "a" represents the movement of fluid in rapidly rotating convection, while "b" represents the non-rotating convection.
"Scientists have been arguing for decades that rotation should dominate all the convection, all the fluid dynamics on stars and planets, but nobody has systematically measured when this domination by the rotational Coriolis effects occur," Aurnou said. "When do the Coriolis effects take over? How does convection occur on a rotating body, such as a planet or star? All of these bodies are rotating; how does the rotation affect the convection?
"We actually went out and quantitatively measured when rotation controls the system," he said. "We are asking, what is controlling the convection and how do you apply that to planets and stars?"
To obtain such measurements, King, with help from Aurnou and Jerome Noir, a research associate in Aurnou's laboratory, designed and constructed a 10-foot-tall state-of-the-science device called the Rotating Magneto-Convection device, or RoMag, which allows for the study of complex interactions among planetary convection ingredients.
The RoMag, a cylinder that sits on a spinning pedestal with a computer that collects data in a rotating frame, is the only device of its kind in the world. Scientists can drive and control thermal convection in the device's tank by applying heat from below and can learn how efficiently heat is transferred by convection.
"In building this device, we had to become electricians, plumbers, engineers, materials scientists," King said.
"Eric [King] showed up to an empty lab more than four years ago," Aurnou said. "When Eric first visited our laboratory, I asked whether he does experimental work. He said, 'No, but I'm pretty good with my stereo.' I said, 'We can work with that.' Eric is not an engineer but is very good in the lab."
In the Nature paper, King, Aurnou and colleagues report how rapidly a fluid needs to rotate and what controls the transition when it goes from well-organized rotational convection ("a") to chaotic turbulence ("b"). The findings were surprising.
While scientists can predict how strong the Coriolis and buoyancy forces should each be in a rotating convective system, they have also thought that, based on these predictions, they could say whether "a" or "b" should occur.
"We have shown that is not right," King said. "And I think we have figured out what is right."
What is right, he explained, is that fluids' boundary layers, not the strength of the Coriolis and buoyancy forces, control the rotating convection system. A boundary layer is a sliver-thin layer of fluid between the bulk of the fluid and the boundary.
"If you take a cup of coffee and spin it, the fluid in the middle isn't moving, but at the very edge, right against the wall, it has to move," King said. "As I rotate the cup, a thin piece of the fluid is rotating with the coffee cup. Until that thin layer of fluid can communicate its rotation to the rest of the coffee, the interior is not going to rotate. If you have a piece of dirt in coffee and try to rotate it away from you, it won't work unless it is right against the wall; that is why. The boundary layer controls whether the rest of the fluid knows it should be rotating. The boundary layer is the layer of fluid that communicates the boundary physics to the interior fluid far from the boundaries."
This layer is, surprisingly, the key to the transition from "a" to "b".
"We're showing there is a boundary layer called an Ekman layer that is a thin, rotating boundary layer of fluid that lets the rest of the fluid know that it is in a rotating container," King said. "We have shown that in 'a' there is an Ekman layer. When we go to 'b', the Ekman layer is becoming partially destroyed, and therefore rotation can no longer be effectively communicated to the rest of the fluid. This becomes important with planets and stars.
"We have shown that it's much easier to get the chaotic convection ('b') than was previously thought," he said. "Scientists had incorrectly assumed that planets and stars, because they are so big and rotate so fast, must be dominated by the effects of rotation. They thought the fluid dynamics in the Earth's core, for example, must be completely dominated by the effects of rotation. We are showing that we have to rethink that."
"We have shown that the standard assumption, that 'b' is irrelevant for planetary and stellar bodies, is incorrect," Aurnou said. "We can now predict, based on our laboratory measurements and computer simulations, when a planetary and stellar body should be in one regime versus the other regime."
Aurnou and King believe the Earth's core is not far from the transition between well-organized rotational convection and chaotic turbulence.
"We don't know what the physical processes in the Earth's core are," Aurnou said. "Our findings allow us, given an estimate of the amount of heat coming from the core, to make a much better determination of where the dynamics exist. It looks like the Earth's core is not far from the transition, while everyone thought we are firmly and deeply in 'a'. That's what our findings suggest, and that is a big change. We will continue to study this question."
So far, the experiments have been conducted in water. King plans to rebuild RoMag so that the device can accommodate liquid metal. Planetary cores are predominantly composed of molten iron and often have strong magnetic fields. The scientists' research should give further insight into planetary cores, including the Earth's.
For the research reported in Nature, King and Aurnou used both experimental laboratory studies and numerical models. In addition to Noir, they worked with Stephan Stellmach of the University of California, Santa Cruz, who was formerly based in Germany, and with Ulrich Hansen in Germany.
This research was federally funded by the National Science Foundation in the U.S.
It is generally believed that the first manufactured glass was in the form of a glaze on ceramic vessels, about 3000 B.C. The first glass vessels were produced about 1500 B.C. in Egypt and Mesopotamia. The glass industry was highly successful for the next 300 years, and then declined. It was revived in Mesopotamia in the 700’s B.C. and in Egypt in the 500’s B.C. For the next 500 years, Egypt, Syria, and the other countries along the eastern shore of the Mediterranean Sea were glassmaking centers.
Early glassmaking was slow, laborious, and costly. Glass blowing and glass pressing were unknown, furnaces were small, the clay pots were of poor quality, and the heat was not sufficient for melting. Glassmakers were tenacious, however, and eventually learned how to make colored glass jewelry and cosmetics cases, along with tiny jugs and jars.
The priests and the ruling classes considered glass objects to be as valuable as jewels. Merchants soon learned that wines, honey, and oils could be carried and preserved far better in glass than in wood or clay containers.
The blowpipe was invented at about 30 B.C. This invention made glass production easier, faster, and cheaper. As a result, glass became available to the common people for the first time. Glass manufacture became important in all countries under Roman rule. In fact, the first four centuries of the Christian Era may justly be called the First Golden Age of Glass. The glassmakers of this time knew how to make a transparent glass, and they did offhand glass blowing, painting, and gilding (application of gold leaf). They knew how to build up layers of glass of different colors and then cut out designs in high relief.
The celebrated Portland vase, which was probably made in Rome about the beginning of the Christian Era, is an excellent example of this art. This vase is considered one of the most valuable glass art objects in the world.

The Middle Ages. Little is known about the glass industry between the decline of the Roman Empire and the 1200’s. Glass manufacture had developed in Venice by the time of the Crusades (A.D. 1096-1270), and by the 1290’s an elaborate guild system of glassworkers had been set up. Equipment was transferred to the Venetian island of Murano, and the Second Golden Age of Glass began. Venetian glass blowers created some of the most delicate and graceful glass the world has ever seen. They perfected Cristallo glass, a nearly colorless, transparent glass, which could be blown to extreme thinness in almost any shape. From Cristallo, they made intricate lacework patterns in goblets, jars, bowls, cups, and vases. In the 1100’s and 1200’s, the art of making stained-glass windows reached its height throughout Europe. By the late 1400’s and early 1500’s, glassmaking had become important in Germany and other northern European countries. Manufacturers there chiefly produced containers and drinking vessels. Northern forms were heavier, sturdier, and less clear than Venice’s Cristallo. During the late 1500’s, many Venetians went to northern Europe, hoping to earn a better living. They established factories there and made glass in the Venetian fashion. A new type of glass that worked well for copper-wheel engraving was perfected in Bohemia (now part of the Czech Republic) and Germany in the mid-1600’s, and a flourishing industry developed. Glassmaking became important in England during the 1500’s. By 1575, English glassmakers were producing Venetian-style glass. In 1674, an English glassmaker named George Ravenscroft patented a new type of glass in which he had changed the usual ingredients. This glass, called lead glass, contains a large amount of lead oxide. Lead glass, which is especially suitable for optical instruments, caused English glassmaking to prosper.

Early American glass. The first factory in what is now the United States was a glass plant built at Jamestown, Virginia, in 1608. The venture failed within a year because of a famine that took the lives of many colonists. The Jamestown colonists tried glassmaking again in 1621, but an Indian attack in 1622 and the scarcity of workers ended this attempt in 1624. The industry was reestablished in America in 1739, when Caspar Wistar built a glassmaking plant in what is now Salem County, New Jersey. This plant operated until 1780. Wistar is one of the great names of early American glass. The second great American glassmaker was Henry William Stiegel, also known by his nickname, “Baron” Stiegel. Stiegel made clear and colored glass, engraved and enameled glass, and the first lead glass produced in North America. A third important American glassmaker was John F. Amelung, who became best known for his elegant engraved glass. Another important early American glass, Sandwich glass, was made by the Boston and Sandwich Glass Company, founded by Deming Jarves in 1825. It was long believed to be the first company in America to produce pressed glass. But the first was actually the Bakewell, Page, and Bakewell Company of Pittsburgh, Pennsylvania, which began to make pressed glass earlier in 1825. These two companies and many others soon made large quantities of inexpensive glass, both pressed and blown.
Every effort was made to produce a “poor man’s cut glass.” In lacy Sandwich, for example, glassmakers decorated molds with elaborate designs to give the objects a complex, lacelike effect. In the early 1800’s, the type of glass in greatest demand was window glass. At that time, window glass was called crown glass. Glassmakers made it by blowing a bubble of glass, then spinning it until it was flat. This process left a sheet of glass with a bump called a crown in the center. By 1825, the cylinder process had replaced the crown method. In this process, molten glass was blown into the shape of a cylinder. After the cylinder cooled, it was sliced down one side. When reheated, it opened up to form a large sheet of thin, clear window glass. In the 1850’s, plate glass was developed for mirrors and other products requiring a high quality of flat glass. This glass was made by casting a large quantity of molten glass onto a round or square plate. After the glass was cooled, it was polished on both sides. Bottles and flasks were first used chiefly for whiskey, but the patent-medicine industry soon used large numbers of bottles. The screw-top Mason jar for home canning appeared in 1858. By 1880, commercial food packers began to use glass containers. Glass tableware was used in steadily increasing quantities. The discovery of petroleum and the appearance of the kerosene lamp in the early 1860’s led to a demand for millions of glass lamp chimneys. All these developments helped to expand the market for glass.

Modern glassmaking. Changes in the fuel used by the glass industry affected the location of glass factories. In the early days when wood was used as fuel, glassworks were built near forests. By 1880, coal had become the most widely used fuel for glassmaking, and glassmaking operations were near large coal deposits. After 1880, natural gas became accepted as the perfect fuel for melting glass. Today, most glass manufacturing plants are near the major sales markets. Pipelines carry petroleum and natural gas to the glass plants. After 1890, the development, manufacture, and use of glass increased rapidly. The science and engineering of glass as a material are now so much better understood that glass can be tailored to meet an exact need. Any one of thousands of compositions may be used. Machinery has been developed for precise, continuous manufacture of sheet glass, tubing, containers, bulbs, and a host of other products. New methods of cutting, welding, sealing, and tempering, as well as better glass at lower cost, have led to new uses of glass. Glass is now used to make pipelines, cookware, building blocks, and heat insulation. Ordinary glass turns brown when exposed to nuclear radiation, so glass companies developed a special nonbrowning glass for use in observation windows in nuclear power plants. More than 10 tons (9 metric tons) of this glass are used in windows in one nuclear power plant. In 1953, automobile manufacturers introduced fiberglass-plastic bodies. Today, such materials are used in architectural panels to sheathe the walls of buildings. They are also used to make boat hulls and such products as missile radomes (housings for radar antennas). Other types of glass have been developed that turn dark when exposed to light and clear up when the light source is removed. These photochromic glasses are used in eyeglasses that change from clear glasses to sunglasses when worn in sunlight.
During the late 1960’s, glass manufacturers established collection centers where people could return empty bottles, jars, and other types of glass containers. The used containers are recycled—that is, broken up and then melted with silica sand, limestone, and soda ash to make glass for new containers. Glass can be recycled easily because it does not deteriorate with use or age. In addition to the collection centers, some communities have set up systems to sort glass and other reusable materials from regular waste pickups. In the 1970’s, optical fibers were developed for use as “light pipes” in laser communication systems. These pipes maintain the brightness and intensity of light being transmitted over long distances. Types of glass that can store radioactive wastes safely for thousands of years were also developed during the 1970’s. The late 1900’s brought important new specialty glasses. Among the new specialty glasses were transparent glass ceramics, which are used to make cookware, and chalcogenide glass, an infrared-transmitting glass that can be used to make lenses for night vision goggles.
The Elizabethan Age
The Elizabethan Age is remembered as the time of a great wave of English nationalism, as well as a period in which the arts flourished. The time of Shakespeare was also the time of Elizabeth I, who is one of England’s most memorable monarchs.
The word ‘renaissance' literally means ‘rebirth'; the movement began in Italy in the 14th century and subsequently spread throughout Europe during the 15th, 16th, and 17th centuries. The feudal economies of the medieval period gave way to centralised political structures, and the dominance of the Church in aspects of cultural life such as music and the arts began to wane as secular interests rose. The Italian Renaissance was a product of urban centres that were becoming richer through commerce, such as Milan, Florence, and Venice.
The Renaissance in England coincided with the reign of Elizabeth I who was Queen of England and Ireland from 1558 until 1603, so it is often referred to as the Elizabethan period. Elizabeth I's reign saw a rise in the concept of ‘nationalism' in England and this can be seen in the increased interest that writers had in writing literary and dramatic works in the English language. As a result, Elizabethan England saw a significant growth in cultural developments.
A number of important historical events contributed to making England a powerful nation during this period. England made significant advances in the realm of navigation and exploration. Its most important accomplishment was the circumnavigation of the world by Sir Francis Drake between 1577 and 1580. England's reputation as a strong naval power was enshrined in history by its defeat of the Spanish Armada in 1588 and by the turn of the century England was at the forefront of international trade and the race for colonisation.
England's renaissance in the realm of thought and art is epitomised by the official recognition that Elizabeth I gave to Oxford and Cambridge. These universities were acknowledged as the focal point for the nation's learning and scholarly activities. Other historical developments which shaped the direction of Elizabethan literature include the introduction of the printing press to England in 1476, which helped to make literature more widely available, the growth of a wealthy middle class of people who had the time to write and read, and the opening up of education to the laity rather than being the exclusive domain of the clergy.
The arts flourished under Elizabeth I. Her personal love of poetry, music, and drama helped to establish a climate in which it was fashionable for the wealthy members of the court to support the arts. Theatres such as the Globe (1599) and the Rose (1587) were built and writers such as Ben Jonson, Christopher Marlowe, and William Shakespeare wrote comic and tragic plays.
Latin was still used for much of the literature early in the period. However, as the new nationalism began to influence literary production, works began to appear in English. Edmund Spenser's “The Faerie Queene” was written in English and it broke new ground with respect to what could be achieved with this language. It was created to flatter Elizabeth I. Another innovative writer of the period was Sir Philip Sidney. The new directions that the philosophy of Humanism was creating at the time influenced both Spenser and Sidney. The new literary style borrowed heavily from classical Greek writing. A form of sonnet called either the Shakespearean Sonnet or the Elizabethan Sonnet became fashionable.
Theatrical Conditions in Elizabethan England
Shakespeare is the best known of all of the Elizabethan Playwrights. Other writers of the period include Thomas Kyd, Christopher Marlowe, Ben Jonson, John Fletcher, and John Webster. Plays were usually performed in outdoor theatres in the afternoon. Poorer audience members were required to stand for the duration of the performance while wealthier people could sit in elevated seats. Often writers worked under the patronage of significant courtiers or wealthy noblemen. Experimentation with the English language led to the rise in favour of Blank verse (which is unrhymed iambic pentameter).
The theatrical conditions of the period were such that companies flourished. During the period 1585-1642 there were typically two companies performing in London (and sometimes up to four companies). The population of London was only about 200,000 people so theatre companies often struggled to maintain audiences.
Performances took place six days a week and plays commenced at 2pm. Typically a different play was staged each day. A new play would be introduced into the repertoire every seventeen days. Individual plays normally only had about ten performances before they were dropped from the repertoire. This meant that playwrights were in high demand. Most plays were not published during the writers' lifetimes. Indeed, there was little consideration of reading the plays. Plays were for performance.
As the climate changes, so will the places birds need.
Audubon scientists took advantage of 140 million observations, recorded by birders and scientists, to describe where 604 North American bird species live today—an area known as their “range.” They then used the latest climate models to project how each species’s range will shift as climate change and other human impacts advance across the continent.
The results are clear: Birds will be forced to relocate to find favorable homes. And they may not survive.
If we take action now, we can improve the chances for hundreds of bird species.
By stabilizing carbon emissions and holding warming to 1.5°C above pre-industrial levels, 76 percent of vulnerable species will be better off, and nearly 150 species will no longer be vulnerable to extinction from climate change.
It’s difficult to overstate the importance of reading. People who enjoy a book, especially classical literature, develop stronger empathy and critical-thinking skills. They also discover gripping stories with complex thematic threads that take on social class, moral dilemmas, the nature of civilized society, justice, revenge, greed, love, betrayal – and that’s just “Wuthering Heights” by Emily Brontë.
The school library also makes science fiction real every day. Students can take any book from the shelf and time travel into the middle of historical events or read the thoughts of someone who lived decades, even centuries ago.
As if that’s not enough value for words printed on a page, reading is also an immersive experience that gives the brain a workout. Research cited by the Open Education Database found that reading literature prompts multiple complex cognitive functions while reading for pleasure increases blood flow to different sections of the brain. The researchers concluded that “reading a novel closely for literary study and thinking about its value is an effective brain exercise.”
Given the importance of reading, Fresno Pacific University offers a wide variety of professional development courses about teaching reading strategies and courses on teaching classical literature, such as the novels of Dostoevsky.
Why Read Classic Literature?
Even people who read constantly may shy away from literature at times. But readers should challenge themselves with good books, according to the teaching-focused website Gifted Guru. Students who delve into classic literature enjoy many benefits.
Thought-provoking situations. Literature often tackles social issues, ethical dilemmas and moral choices, examining the ramifications of tough decisions and how they impact character development.
Understanding allusions. A common tactic in writing of all genres is to reference classic works. Readers won’t get the reference if they haven’t read the original. According to Gifted Guru, this experience “will lower dopamine levels, thus making you think you don’t like the book you’re reading.”
Cathartic reading. By following the lives of characters in classical literature as they struggle with difficult situations, readers can better persevere through their own struggles and learn from the mistakes of those characters.
Windows and mirrors. Books that give readers a look into the life of someone completely different from themselves are considered windows, while books about similar people are considered mirrors. A mirror book can give readers insight into themselves and also help them make decisions in their own life by reading about what someone they relate to did in a similar situation. A window book builds empathy for others by understanding the thoughts and motives of people completely unlike the reader. In both cases, classical literature effectively bridges the gaps between different cultures, races, geographies, ethnic origins and religions.
They stay with you. Ask a person who has read enough classical literature, and they will name the one (or more) novels that changed their lives. Teaching classical literature increases the chances of this happening for a student.
Tips For Teaching Classical Literature
In many cases, classical literature speaks for itself. Simply reading the novel and having lively discussions about the characters’ actions benefit students. Nonetheless, teachers can support students by keeping certain tips in mind.
Give Context to the Novel
It’s impossible to reap the benefits of classical literature without having some context for the time in which the works are set (especially historical novels such as “Les Misérables” by Victor Hugo) or the time the writer lived. The latter is especially true if the time and location play such a large part in the author’s writing (such as English society in the early 19th century for Jane Austen novels or Harlem in the 1960s for James Baldwin).
Provide Additional Resources
Especially for longer novels with complex events and a large cast of characters, providing students with character lists, plot summaries and insightful writing about themes and symbols can support a better understanding of the text.
Encourage Annotation
Annotation involves making a note on the text, typically written in the margins. It’s helpful for going back and re-reading important sections and in class discussions about the book. By actively thinking about and annotating what they are reading, students retain more and move through the text more efficiently.
Add Different Media
High-quality movies or television series based on literature can help students engage better with the story’s characters and themes, particularly for difficult novels. Even showing small clips in class of certain scenes can support student engagement.
Also, for teachers deciding what books to teach, Penguin maintains a list of the 100 “must-read” classics.
Fresno Pacific University Reading Courses
Fresno Pacific University offers almost two dozen online courses for educators that can improve their ability to teach students about reading while also earning professional development credits. These courses encompass many topics related to reading and writing, including improving vocabulary, developing adolescent readers and writers and expanding content literacy.
These courses include Content Comprehension: Helping Students Read & Understand, which is ideal for teachers who work with ESL, special needs or low-level reading students.
All these courses help teachers support students in expanding their ability to enjoy and learn from reading. Those are important steps toward the day when they are ready to take on classic literature, widening their worldview while enjoying some of the finest novels ever written.
The Reformation
Mr. Regan

Long Term Causes
• The growth in the power of the secular king and the decrease in the power of the Pope.
• The popular discontent with the seemingly empty rituals of the Church.
• The movement towards more personal ways of communicating with God, called lay piety.
• The fiscal crisis in the Church that led to corruption and abuses of power – IMPORTANT!
• Critics like Desiderius Erasmus -- “Laid the egg that Luther hatched.”

Short Term Causes
• John Wycliffe (1320 – 1384) was an English reformer who argued that the Church was becoming too remote from the people and advocated for simplification of its doctrines and less power for the priests.
– He believed that only the Scriptures declared the will of God and questioned transubstantiation, the ability of the priests to perform a miracle turning the wine and bread into Christ’s blood and body.
– His views were branded heretical, but he was able to survive in hiding, though his remains were dug up by the Church in 1428 and burned.

Short Term Causes
• Jan Hus (1369 – 1415) was a Bohemian (Czech) who argued that priests weren’t a holy group, claiming instead that the Church was made up of all of the faithful.
– He questioned transubstantiation, and said that the priest and the people should all have both the wine and the bread.
– He was burned at the stake in 1415, but his followers raised an army and won against the emperor, who let them set up their own church in which both the wine and bread were eaten by all.

Short Term Causes
• The Avignon Exile and Great Schism were both events that greatly undermined both the power and prestige of the Church, and made many people begin to question its holiness and the absolute power of the Papacy.
– People realized that the Church was a human institution with its own faults.

Short Term Causes
• The Printing Press – before the invention of the printing press in the mid-1400s, many people didn’t have access to information or changes in religious thought except through word of mouth and the village priests.
– With the printing press, new ideas and dissatisfaction with the church could spread quickly, and people could read the Bible for themselves.

Abuses of Church Power
1. Simony – the buying and selling of high church offices, producing revenue for the holder.
2. Indulgences – the sale of indulgences was the biggest moneymaker for the Church. When a person paid for an indulgence, it supposedly excused the sins they had committed (the more $, the more sins forgiven) even without them having to repent. Indulgences could even be bought for future sins not yet committed and for others, especially those who had just died, and were supposed to make a person’s passage into heaven faster.
3. Dispensations – payments that released a petitioner from the requirements of the canon law.
4. Incelibacy – church officials getting married and having children.
5. Pluralism – the holding of multiple church offices.
6. Nepotism – granting of offices to relatives.

Martin Luther
• Martin Luther (1483 – 1546) was born into a middle class family in Saxony, Germany. He got a good education and began studying law. After almost being hit by lightning, he decided to become a monk.
• As a monk, he became obsessed with his own sinfulness, and pursued every possible opportunity to earn worthiness in God’s eyes (for example, self-flagellation), but he was still not satisfied, for he felt that God would never forgive a sinner like himself (How can I be saved?).
Martin Luther
• Finally, he had an intense religious experience that led him to realize that justification in the eyes of God was based on faith alone and not on good works and sacraments.
• In 1517, he saw a friar named Johann Tetzel peddling indulgences and claiming that by buying them, people could save themselves time in purgatory. Since he said that by buying the indulgences, people could excuse sins, people were coming to buy the indulgences in droves.
• This outraged Luther, and on October 31st, 1517 he posted his Ninety-Five Theses on the church door.
• The theses explained that the Pope could remit only the penalties he or canon law imposed, and that for other sins, the faithful had only to sincerely repent to obtain an indulgence, not pay the Church.

Martin Luther
• The theses made the profits from the indulgences drop off, and angered the order that supported Tetzel. Luther and the rival monks began to have theological discussions, which were at first ignored. But, by 1520 Luther had written three radical pamphlets:
1. An Address to the Christian Nobility of the German Nation (1520) made a patriotic appeal to Germans to reject the foreign Pope’s authority.
2. The Babylonian Captivity (1520) attacked the belief that the seven sacraments were the only means of attaining grace, saying that only two, baptism and the Eucharist (which were mentioned in the Bible), were important.
3. On the Freedom of the Christian Man (1520) explained his principle of salvation by faith alone.

Martin Luther – Diet of Worms
• Luther’s writings could no longer be ignored, and, in 1520, Pope Leo X excommunicated him, and Luther responded by calling the Pope an anti-Christ. So, Charles V, the Holy Roman Emperor, ordered him to offer his defense against the decree at a Diet of the Empire at Worms.
• At Worms, Luther refused to retract his statements, asking to be proved wrong with the Bible. So, Charles ordered that Luther be arrested and his works burned, but Prince Frederick of Saxony came to Luther’s aid and allowed Luther to hide in his castle. There, Luther established the Lutheran doctrines.

Martin Luther
• Translated the Bible into German in the 1530’s. The Bible was free to be read by all, including women.
• “Demonstrated the fire of a theological revolutionary but the caution of a social and political conservative.”
– Fails to support the German Peasants’ Revolt of 1524 - 1526.

Lutheran Doctrine and Practices
• Codified in the Augsburg Confession, the Lutheran beliefs are as follows:
– Justification by faith alone (sola fide), or the belief that faith alone, without the sacraments or good works, leads to an individual’s salvation.
– The Bible (sola scriptura) is the only authority, not any subsequent works.
– All people (sola gratia) are equally capable of understanding God’s word as expressed in the Bible and can gain salvation without the help of an intermediary.

Lutheran Doctrine and Practices
• No distinction between priests and laity.
• Consubstantiation (the presence of the substance and Christ coexist in the wafer and wine and no miracle occurs) instead of transubstantiation.
• A simplified ceremony with services not in Latin.

The Appeal of Protestantism
• Appeal to the peasants
– Message of equality in religion, which they extended to life in general.
– A simplified religion with fewer rituals, which made it easier to understand.
– Luther rebelled, which inspired many of them to do the same.
• Appeal to the nobles: No tithe to pay, so $ stays in the country.
The Appeal of Protestantism
• Since they are against Charles V for political reasons, they can justify it by becoming Protestant.
– No more church-owned land, so they can get more land.
– No tithe for peasants, so they can tax them more.
• Appeal to the middle class:
– No tithe to pay, so more $ for them.
– Now they can read the Bible and interpret it in their own way.
– Concept of individualism – you are your own priest.

Other Forms of Protestantism
• Ulrich Zwingli (1484 – 1531)
• Established a reform movement in Zurich more radical in style than Luther’s.
• Denied consubstantiation, the real “presence” of Jesus in the bread and wine at mass. The sacrament was only “symbolic.”
• He believed that NONE of the sacraments bestowed grace, and that they were purely symbolic.
• Believers smashed organs and statues, and painted churches all white to better focus the believers’ attention on the Word of God.
• Met with Luther to resolve their differences (the Marburg Colloquy), but this failed.
• Killed in 1531 in the Swiss Civil War.

Anabaptists
• Believed that membership in a church community was an adult choice.
• Since some believed that Baptism should only be administered to adults who asked to be baptized, they were all called the Anabaptists (rebaptisers).
• Believed in a literal interpretation of the Bible.
• Hated by Catholics & other Protestants because they believed in adult Baptism and total separation of church & state.

John Calvin - Calvinism
• John Calvin (1509 – 1564) formed the second wave of the Reformation in Geneva, Switzerland.
• Lutheranism and Calvinism both believed in people’s sinfulness, salvation by faith alone, that all people were equal in God’s eyes, and that people should follow existing political authority.
• Calvin believed in predestination, or the concept that God, being all knowing, already knows if a person is going to go to heaven and become part of the elect or not.
• Though behavior on earth technically had no effect on the decision, it was established that moral people tended to be part of the ‘elect.’
• Calvinist communities were model places, with very strict moral codes that were vehemently imposed.
• The church and its doctrines were also very well defined in the Institutes of the Christian Religion (1536), and all Calvinists were supposed to make their communities worthy of the future elect.
• Calvin’s most famous disciple was John Knox, who brought Calvinism to Scotland.

England – Henry VIII (1509 – 1547)
• Early life -- strong Catholic, “Defender of the Faith.”
• No male heir; asked Pope Clement VII for a divorce - was not granted for political reasons.
• 1534 -- Act of Supremacy -- makes Henry, not the pope, the head of the Catholic Church in England.
• 1534 – Act of Succession -- legitimated the offspring of Henry and Anne Boleyn (the future Queen Elizabeth I).

England -- Edward VI (1547 – 1553)
• Reforms include a Book of Common Prayer and an Act of Uniformity which provided a simpler, more “Protestant” form of worship.

England – Mary I (1553 – 1558)
• England was turned back toward Catholicism.
• Mary was Catholic and her husband, Philip II, was the King of Spain and a staunch defender of Catholicism.
• Protestants persecuted.
• “Bloody Mary.”
• Did little in the long run to reestablish Catholicism in England.

England – Elizabeth I (1558 – 1603)
• Established a religious compromise in England, the Elizabethan Compromise, which called for toleration of all faiths except Catholicism.
• She represented a new type of leader emerging in Europe, a politique, or a leader who placed political unity above religious conformity.
• By the end of her reign, England was the leading Protestant power in Europe.

Social Impact of the Protestant Reformation – Family & Gender
• Family placed at the center of social life.
• Celibacy abolished in Protestant churches.
• Luther, Calvin, and others preached that women’s natural sphere was domestic.
– Education
• The Protestant Reformation spurred education -- emphasis on Bible reading made it important to ensure literacy for boys and girls.
• Luther’s colleague, Philip Melanchthon, advocated a system of basic schooling in the German states called the gymnasia.
• With the Jesuits, Catholic nations began to place more importance on education.

Social Impact of the Protestant Reformation – Social Classes
• Few reformers explicitly argued for social equality.
• The “Protestant Work Ethic” spurs the development of capitalism, strengthening the middle class.
• Ethic of hard work and capital accumulation developing.
– Religious Practices
• For centuries Europeans’ religious life had centered around the church calendar, with saints’ feast days, Carnival, Lent, sacraments, and rituals.
• In Protestant lands, these practices were abolished or modified.
• Protestant nations sought to eliminate “externals” such as relics, pilgrimages, and festivals.

Catholic Revival & Reform
• During the sixteenth century, the institutional “RCC fiddled while Rome burned.”
• Finally, under Pope Paul III (1534 – 1549), the RCC responded to the challenge of the Protestant Reformation.
– It was a multipronged and complex reform – the Catholic Reformation / Counter-Reformation.

Catholic Revival & Reform
• New religious orders -- the Society of Jesus (Jesuits), founded in the 1540’s by Ignatius of Loyola. They saw themselves as “troops of the pope” and missionaries in foreign lands. Worked through education & argument, re-Catholicizing large parts of Eastern Europe like Hungary and Poland.
– Angela Merici founded the Ursuline order of nuns to bring education to girls.
– Teresa of Avila founded the Carmelite nuns, who dedicated their lives to contemplation & service.

Catholic Revival & Reform
• The Council of Trent (1545 – 1563) finally puts the Church’s house in order.
– Church abuses eliminated and provided for better education & regulation of priests.
– Refused to compromise on religious doctrine, reaffirming distinct Catholic practices such as clerical celibacy, the importance of good works, the authority of the pope, and transubstantiation.
– Reinstituted the Inquisition in Italy (Galileo).
– Index of Prohibited Books (1559) -- the RCC clamps down on any printed material that threatened to mislead the faithful away from orthodox teachings of the RCC.
– Baroque art -- revives Catholic spirituality -- emphasizes grandeur, illusion, and dramatic religiosity. Artists like Giovanni Bernini helped to rebuild Rome as a showplace for Catholic piety.

Catholic success?
• By 1560, the religious divide in Europe was a fact -- the Catholic response was too little, too late.
• Some parts of Europe were re-Catholicized, and the Church emerged from its reforms stronger than before the Reformation.
• After the Council of Trent, no religious compromise was possible.
• An extended period of religious conflict lay on the horizon (French Wars of Religion, Thirty Years War).

In his Institutes of the Christian Religion, John Calvin sought to
a. answer the Roman Catholic Church’s doctrinal reforms formulated at the Council of Trent.
b. systematize Protestant doctrine as the basis for a reformed Christianity.
c. challenge the growing political authority of kings through the articulation of a theory of political resistance.
d. promote a dialogue with the Roman Catholic Church.
e. raise the cultural level of Europeans by supporting universal schooling.

Roman Catholics, Lutherans, and Calvinists condemned Anabaptists for their
a. belief in church-state separation.
b. support for infant baptism.
c. secular outlook on the world.
d. support for papal supremacy.
e. use of magic to achieve religious reform.

Which of the following was a major factor in the spread of humanist culture in the late fifteenth and early sixteenth centuries?
a. The creation of new religious orders by the papacy.
b. Annual meetings of humanist scholars in Italy.
c. A major increase in government funding for elementary education.
d. The development of the printing press.
e. The sale of basic textbooks written in the vernacular.

Salvation by faith alone, the ministry of all believers, and the authority of the Bible are principles basic to
a. the Christian humanism of Erasmus.
b. the Church of England.
c. Catholicism after the Council of Trent.
d. Lutheranism in the early sixteenth century.
e. the Society of Jesus (Jesuit order).

Martin Luther initially criticized the Roman Catholic Church on the grounds that it
a. supported priests as religious teachers.
b. sponsored translations of the Bible into vernacular languages.
c. reduced the number of sacraments.
d. used indulgences as a fund-raising device.
e. formed close associations with secular rulers.

A major difference between Calvinism and Lutheranism relates to
a. clerical marriage.
b. the place of women in society.
c. emphasis on predestination.
d. infant baptism.
e. monasticism.

Which of the following beliefs was central to Martin Luther’s religious philosophy?
a. Salvation by faith alone.
b. Saints as intermediaries between the individual Christian and God.
c. The sacrament of penance.
d. The priesthood defined as distinct from the laity.
e. The equality of men and women.

In Freedom of a Christian (1520), addressed to Pope Leo X, Martin Luther argued that
a. faith, not “good works,” saves believers from damnation.
b. a corrupt church and priesthood invalidated the Eucharist.
c. clerics should be free to pursue intellectual debate wherever it might lead.
d. Christians should be free to form their own churches and select their own priests.
e. predestination was the only way for the elect to achieve salvation.
The anthem (a musical term derived from antiphon) was originally an English development of the motet. It is a choral setting of a sacred or moral text usually sung during a service.
Whilst a four-part unaccompanied form is known from the mid-16th century, by about 1600 variations began to appear that added solo voices and instrumental (usually organ) accompaniment. The anthem became a formal part of the Anglican service and was acknowledged in the Book of Common Prayer of 1662. By then it was being written in a more dramatic style and strings began to appear. It reached full potential with Handel’s Coronation and Chandos Anthems in the first half of the 18th century. (Handel’s first Coronation Anthem, “Zadok the Priest”, is one of the most-loved choral pieces of all time.)
Like all forms of English music the anthem languished somewhat after Handel but enjoyed a revival in Victorian times culminating in the fine works of Stanford, then moved beyond the church in the 20th century with arrangements by Ralph Vaughan Williams, Arnold Bax and Benjamin Britten that are as much at home in the concert hall as in places of worship.
The anthem arrived in America during the 18th century and, by 1800, more works by American-born composers were being used than imported anthems, especially in New England. The form was enriched from input by German immigrants and still thrives in American churches.
Emotional intelligence needed for safer schools
March 1, 2023 - A recent international study by the University of Córdoba and the European BOOST project showed that when schools focus on securing a safer school climate, this also increases the emotional intelligence of students. Although the study did not specifically go into nondiscrimination, it is likely that antidiscrimination programs focusing on LGBTIQ+ should also focus on developing emotional intelligence.
Emotional intelligence is defined as the ability to manage emotions effectively. More formally, it is a system of mental abilities to access, perceive, understand, regulate, and process emotions to promote problem-solving in areas related to an individual’s affect. It can be measured by asking students (1) if they have attention for emotions (I usually care a lot about what I’m feeling); (2) about the clarity of their feelings (I can always explain how I feel); and (3) if they are able to manage their emotions, which is called mood repair (When I am angry I try to change my mood). Emotional intelligence has a clear impact on the well-being of schoolchildren. A large number of studies have shown how emotional intelligence impacts psychological and contextual variables, but less is known about how school contexts influence emotional development.
Safe school climate
School climate is defined as a “pattern of students’, parents’, and school personnel’s experience of school life that reflects norms, goals, values, interpersonal relationships, teaching, and learning practices, and organizational structures” (Cohen et al., 2009, p. 182). Cohen et al identified four dimensions of school climate: safety, relationships, teaching and learning, and institutional environment. The school climate can be measured by asking students for four factors: (1) Teacher-student relations (Teachers care about their students); (2) student-student relations (Students are friendly toward most other students); (3) liking of school (I like this school); and (4) fairness of school rules (Consequences of breaking school rules are fair). When students answer these questions consistently in a positive way, this gives a reliable view of a safe school climate. Strategies to foster a positive school climate can result in helping students learn to internalize the negative emotions they experience, analyze why they feel these negative emotions, and improve their social skills and emotional intelligence.
The BOOST study looked at the level of emotional intelligence and school climate for schoolchildren in primary education in Spain, Poland and Norway. Girls showed higher levels of emotional repair compared to boys. Emotional repair and clarity of emotions were the most important factors of emotional intelligence.
Multivariate analysis showed that higher levels of emotional intelligence in Spanish schoolchildren corresponded to higher ratings of school climate. The researchers suggest that creating a safer school climate can lead to the development of greater emotional intelligence. However, earlier studies also showed a correlation in the other direction: students who are guided to develop higher levels of emotional intelligence contribute to a safer school climate.
Although this study was done among primary school children, it is likely that the same trends will be found in secondary schools and vocational schools.
Relevance for sexual and gender diversity
Educational programs designed to combat discrimination related to sexual and gender diversity usually focus on giving information about LGBTIQ+ people and on correcting prejudice. In practice, however, a more effective way to combat intolerance towards sexual and gender diversity is to focus on emotions and on how students can deal with the negative feelings they experience when confronted with somebody who is different from them. For this reason, updated programs should be developed that create more acceptance of LGBTIQ+ people by focusing more on emotional intelligence. Such programs could especially focus on how students can engage in mood repair. It is also important that schools pay attention to the context in which emotional intelligence develops by providing an LGBTIQ+ inclusive, safer school climate. In turn, such a systemic safer school climate will increase the emotional intelligence of students, including their ability to cope with negative feelings towards minorities.
This approach is central to the European #UNIQUE project, which focuses on vocational education. An online course has been developed for teachers of schools in Cyprus, Greece, Croatia and Poland, and the project leaders have been trained in a more emotion-focused approach to combating homophobia and transphobia; the online course supports this perspective.
Source: Luque-González, R., Romera, E., Gómez-Ortiz, O., Wiza, A., Laudańska-Krzemińska, I., Antypas, K., & Muller, S. (2022). Emotional intelligence and school climate in primary school children in Spain, Norway, and Poland. Psychology, Society & Education, 14(3), 29-37.
The impact of global warming is particularly marked in the Alps. Like the Arctic, the mountain range is becoming greener. In an article published in the journal Science, researchers from the University of Lausanne (UNIL) and the University of Basel (UNIBAS) have shown, using satellite data, that the productivity of vegetation above the tree line has increased in almost 80% of the Alps. The snow cover at high altitudes has decreased, albeit only moderately.
The melting of the glaciers has become a popular symbol of climate change in the Alps. However, the reduction of the snow cover, although already visible from space, is not nearly as dramatic. The most marked change is a pronounced and widespread increase in vegetation at high altitudes in the Alps. This is the conclusion reached by a research team led by Professor Sabine Rumpf from UNIBAS and Professors Antoine Guisan and Grégoire Mariéthoz from UNIL.
In collaboration with research groups based in the Netherlands and Finland, the scientists examined changes in snow cover and vegetation using high-resolution satellite data collected from 1984 to 2021. Over this period, plant biomass has increased above the tree line in more than 77% of the Alps. This phenomenon of "greening" due to climate change is already well documented in the Arctic and is beginning to be identified in the mountains as well.
Increase in plant biomass in three quarters of the Alps
"The scale of change is absolutely massive in the Alps", says Sabine Rumpf, the first author of the study and, since February 2022, assistant professor at UNIBAS. The Alps are becoming greener because the vegetation is colonising new areas and becoming denser and higher overall.
Previous studies have focused mainly on the impact of global warming on Alpine biodiversity and on changes in the distribution of plant species. Until now, however, no one had conducted such a comprehensive analysis of the evolution of plant productivity in the Alps. The authors show that the increase in plant biomass is primarily attributable to changes in the precipitation regime and the lengthening of the plant growing season as a result of rising temperatures. "Alpine plants are adapted to harsh conditions, but they are not very competitive", explains Sabine Rumpf. As environmental conditions change, these highly specialised species lose their advantage and are overtaken by competition: "The unique biodiversity of the Alps is therefore under considerable pressure."
Slight reduction in snow cover
In contrast to the vegetation, the extent of snow cover above the tree line has changed only slightly since 1984. The experts excluded regions below 1700 metres, glaciers and forests from their analysis: they found that the snow cover decreased significantly in almost 10% of the remaining regions. This may seem moderate, but the scientists stress that it is nevertheless a worrying trend.
"Previous analyses of satellite data had not identified such a trend", says Antoine Guisan, professor at UNIL and one of the authors who co-led the study. Perhaps this is due to the insufficient resolution of the satellite images or the fact that the periods considered were too short.
"For years, local ground measurements have shown a decrease in the depth of the snow cover at low altitudes", adds Grégoire Mariéthoz, professor at UNIL and the second author to have co-directed the study. As a result of this reduction, some regions are now largely devoid of snow. Using satellite data, it is possible to distinguish the presence or absence of snow, but this data does not provide information on the depth of the snow cover.
With global warming, the Alps will become less and less white and more and more green, entering a vicious circle: "Greener mountains mean less reflection of sunlight, which will further increase warming and mechanically reduce the snow cover and its reflectivity", explains Sabine Rumpf. Warming also increases the melting of glaciers and the thawing of permafrost, which increases the risk of landslides, rockslides and mudslides. In addition, Sabine Rumpf emphasises the important role that snow and ice in the Alps play in the supply of drinking water, as well as in recreation and tourism.
This article is based on a press release issued by UNIL and UNIBAS on 2 June 2022 at 20:00.
Original publication: Sabine Rumpf et al. From white to green: Snow cover loss and increased vegetation productivity in the European Alps, Science (2022), doi: 10.1126/science.abn6697
Photo: View of the Swiss Alps from Pischahorn to the Plattenhörner (©Sabine Rumpf)
About the experts:
Sabine Rumpf, Head of the Ecology Research Group, Department of Environmental Sciences, UNIBAS
Antoine Guisan, Head of the Spatial Ecology Group, Department of Ecology and Evolution, Faculty of Biology and Medicine and Institute of Land Surface Dynamics, Faculty of Geosciences and Environment, UNIL
Grégoire Mariéthoz, Head of the Geostatistical Algorithms & Image Analysis research group and director of the Institute of Land Surface Dynamics, Faculty of Geosciences and Environment, UNIL
Since the year 2000, the Eurasian grey wolf, Canis lupus lupus, has spread across Germany. For Ines Lesniak, a doctoral student at the Leibniz Institute for Zoo and Wildlife Research (Leibniz-IZW), and her colleagues, this was a good reason to take a closer look at the small "occupants" of this returnee and to ask whether the number and species of parasites change as the wolf population grows. They do: the number of parasite species per individual wolf increased as the wolf population expanded. Furthermore, cubs had a higher diversity of parasite species than older animals. The good news: wolf parasites do not pose a threat to human health. The results of this study were published in "Scientific Reports", the scientific online journal of the Nature Publishing Group.
In the course of a long-term study of wolf health in Germany, the internal organs of 53 wolf carcasses were studied in detail. They came from wolves which had died in traffic accidents or were illegally killed between 2007 and 2014.
"Whereas tapeworms are recognisable with the naked eye, the identification of single-celled Sarcocystis parasites was a real challenge, since the species of this genus do not differ morphologically," explains Lesniak.
According to their developmental cycle, endoparasites can be grouped into two types: Some, such as many tapeworms, infect their hosts directly. Others, such as Sarcocystis parasites, first live in an intermediate host, the prey animal of the wolf, and reach their final host, the wolf, only if the intermediate host has been consumed by the final host. With the faeces of the final host, these parasites are released back into the environment. Potential prey animals of the wolf feed then on vegetation that was previously contaminated with the parasites. The parasites thereby invade the intermediate host and settle in the muscle flesh. Roe deer, red deer and wild boar are such intermediate hosts in central Europe. When these are eaten by a wolf, the parasites infect the final host -- the wolf -- and reproduce in its intestines.
By applying sophisticated molecular genetic analyses, the scientists identified 12 Sarcocystis species in the wolf carcasses. They also found four tapeworm species (cestodes), eight roundworm species (nematodes) as well as one fluke species (trematode). In order to examine parasite infections also in the wolf's large prey species, the team collected internal organs of shot prey animals from hunting parties.
In Germany, wolves mainly feed on roe deer, but also red deer and wild boars. Small mammals, such as hares, voles or mice, are very seldom "on the menu." The identified parasites provide indirect evidence for this insight, since fox tapeworms were found in only one of the 53 wolves. Fox tapeworms are transmitted by mice and can occur in all canids, but particularly frequently in foxes. "Good news," Lesniak says, because the larvae of fox tapeworms can cause severe diseases in humans.
The scientists found that the infestation of wolves with parasites varied during their lifetime. "Cubs carry many more parasite species than yearlings or adults." According to Ines Lesniak, such variation in parasite species prevalence can be explained by the more robust immune system of older wolves. Wolves, just like any other wild canid -- other than domestic dogs -- are never dewormed, after all.
Wolves that died at the beginning of the study period had a lower parasite diversity than those that died later. "The bigger the population, the more often wolves are in contact with each other and their prey, and the more often they become infected with different parasites," Lesniak summarises the results.
Currently, there are 46 wolf packs settled within Germany. A pack consists of the parents as well as the cubs of the current and the previous year and can comprise up to ten individuals. "Genetic analyses conducted by our cooperation partners for this study show that the ancestors of the Central European lowland population, which nowadays ranges from Germany to Poland, originated from Lusatia in eastern Germany," Lesniak says. This population was probably initiated by individuals who migrated from the Baltic region at the beginning of the millennium and settled between southern Brandenburg and northern Saxony. From there, they began to spread across northeastern Germany and southwestern Poland, a process which continues to this day.
"Wolves are shy, wild animals. Thus, contact between people and wolves is rare," Lesniak emphasises. "Nevertheless, hunters should boil the leftovers of shot game thoroughly before feeding this to their hunting dogs, in order to avoid possible parasite infections," warns Lesniak. It is also essential to regularly deworm hunting dogs in regions occupied by wolves.
Occasionally, it has been reported that wolves come closer to residential areas, and sheep farmers are complaining about losses. "It may well be that today's wolves have learnt that it is easier to find food closer to humans -- those who once eradicated their wolf forefathers," presumes Lesniak. Of course, it is more convenient for a wolf to break into a sheep enclosure than to chase roe deer in the forest. Therefore, the implementation of appropriate protective measures for domestic animals is very important; such measures are now also financially supported by the government in Germany.
Avoiding Fires in Poultry Litter Dry Stack Sheds
In today’s commercial poultry industry, dry stack litter sheds are important components of a waste management program. When litter is periodically removed from poultry houses, it must be handled in an environmentally sound manner. To obtain the most value from poultry litter, producers store it until the appropriate application time for ideal plant nutrient uptake and reduced environmental impact (Nottingham, 2012). Therefore, a litter storage structure becomes critical to a poultry operation’s nutrient management program. When properly managed, a storage facility protects litter from the elements, preserves nutrients in the litter, lessens the threat of runoff and water pollution, and allows for proper timing of land application to meet crop and forage needs.
Producers should be aware, however, of the fire danger associated with storing poultry litter (Figure 1). As microbial activity occurs within the litter, heat and methane gas are produced. Heat is also produced at the boundary layer between moist and dry litter in the storage pile. Spontaneous combustion (self-ignition) in a litter pile can occur from this buildup of heat and methane. Fires may also occur if litter is stacked too closely to the wooden walls of the shed, which can ignite if the temperature in the litter reaches the wood’s flash point. The process is similar to spontaneous combustion of hay bales or silage stored in barns or silos, respectively. However, less is known about spontaneous combustion of litter. Additionally, it is important not to drive a tractor on stored litter as this can compact the litter and increase the likelihood of a fire (Figure 2).
We have known for some time that heat is generated when microbial activity occurs in an insulated environment, such as a garden compost pile or dairy manure stored outside. Overheating and spontaneous combustion in hay barns, coal piles, landfills, and containers of oily rags are not uncommon occurrences. Both biological and chemical factors are likely associated with litter storage fires, although the exact causes are not well understood.
Fires and explosions have occurred before in sanitary landfills that generate combustible methane. For methane to be generated, conditions must be right for the growth of anaerobic bacteria. This includes proper moisture content (greater than 40 percent) and an oxygen-free or very-low-oxygen environment. Methane has a specific gravity less than air and, therefore, can escape to the atmosphere given a proper conduit (i.e., adequate pore space in the surrounding litter). However, litter that is compacted and insulated in a storage shed may not have adequate pore space.
Methane is flammable in air at concentrations of 5 to 15 percent. As such, production of methane in litter storage is a potential fire hazard. If the pile is compacted and insulated by additional litter being placed on top of compacted litter, overheating and spontaneous combustion may occur as temperatures rise above 190°F. While microbial activity may generate much of the heat, it is likely chemical reactions that cause the fire. Because most bacteria are killed between 130°F and 165°F, chemical reactions are most likely responsible for the processes that lead to the actual combustion.
Common Risk Factors
There are several common factors that are usually present when a litter storage shed fire occurs:
- Moisture. Moisture is a critical factor in all litter storage shed fires. Dry litter does not generate heat well, but wet litter does. Perhaps the most common mistake made by producers is adding moist litter to dry litter already in the shed. A second mistake is allowing wind-driven rain to reach the litter stored in the shed. The layering effect that occurs when new, moist litter contacts old, dry litter creates an insulated heat- and methane-producing area as the dry litter absorbs moisture. Anaerobic bacteria generate about 50 to 65 percent methane, about 30 percent carbon dioxide, and a smaller percentage of other gases (Hess et al., 2018). If the moisture content of stored litter is more than about 40 percent in a pile with little or no oxygen, anaerobic bacteria will grow and produce methane gas. Litter added to the pile at less than 40 percent moisture will lessen the risk of heating and methane production. If the pile is not compacted and has adequate pore space, any methane that is produced can escape into the atmosphere and will not concentrate in the pile.
- Pile size. Pile size will affect heat release. Height and width are more important than length of the pile. The larger the pile size, the greater the chance for excessive heat and fire. Small piles provide a larger surface area for heat release. Litter in the shed should not be stacked more than 7 feet high at the center of the pile.
- Compaction. Compacting litter encourages anaerobic conditions. Compacting traps heat in the pile and lessens the available pore space for dissipating heat and methane.
- Layering. Layering new, moist litter on top of old, dry litter creates a dangerous, heat-producing situation. Only dry litter should be added to litter already in the shed.
- Caked litter. Caked litter is often wet litter with a high moisture content and can increase the risk of litter storage fires. It is best to separate caked litter from dry litter in the shed until the caked litter has dried.
Best Management Practices
- Dry litter is best to lessen the fire danger. Protect litter in the shed from blowing rain. Do not add wet litter to dry.
- Do not compact wet or dry litter as this encourages anaerobic conditions and increased heat and methane production.
- Do not stack litter over 7 feet high.
- Store wet, caked litter in a separate area from dry litter.
- Stack litter away from wooden walls and support posts, to the degree possible.
- Monitor temperatures at various locations within the pile on a regular basis with a 36-inch compost thermometer (Figure 3). Temperatures of 160°F or less are normal. Temperatures above 160°F are an indication that closer attention and caution are needed. Remove any materials that have a temperature greater than 180°F. If temperatures are 190°F or greater, or if the pile is smoldering, notify the local fire department and get instructions on safely removing the material from the storage shed. Use extreme caution when digging into the pile because a smoldering pile can burst into flame when exposed to oxygen. Be aware that a garden hose is not adequate fire suppression equipment if a litter pile bursts into flame. Spread the litter on a field using caution to avoid catching dry grass or other combustible materials in the field on fire. (A simple worked sketch of these temperature thresholds follows this list.)
- Do not store expensive farming equipment such as tractors, combines, decaking machines, windrowing equipment, hay mowers, rakes, and balers under the litter storage shed.
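The temperature guidance above amounts to a simple decision rule. As a rough illustration only, and not part of the original extension guidance, the short Python sketch below classifies thermometer readings against the thresholds listed in this publication; the function name and messages are hypothetical.

```python
def classify_litter_temperature(temp_f):
    """Classify a litter pile temperature reading (degrees F) against the
    thresholds described above and suggest an action."""
    if temp_f <= 160:
        return "Normal: continue routine monitoring."
    elif temp_f <= 180:
        return "Caution: monitor this location more closely."
    elif temp_f < 190:
        return "Warning: remove material hotter than 180 F from the shed."
    else:
        return "Danger: notify the local fire department before disturbing the pile."

# Example readings from several locations in the pile (36-inch compost thermometer)
for reading in (145, 165, 185, 192):
    print(f"{reading} F -> {classify_litter_temperature(reading)}")
```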
Litter storage sheds are a vital part of every broiler operation’s nutrient management program. Litter storage allows flexibility in timing land applications and lessens the possibility of polluting surface and ground waters, as could occur with litter stored outdoors. Litter storage shed fires are possible because of heat and methane buildup in litter stacked in the shed. Spontaneous combustion in a litter pile can occur under the right conditions. Several common factors can lead to spontaneous combustion in a litter pile. The most critical of these factors is likely litter moisture content; however, pile size, caked litter, layering, and compacting the pile are also important. Proper precautions will greatly reduce the risk of a litter shed fire. Good management and common sense will help keep your litter shed intact and working for you for many years to come.
Hess, J. B., J. O. Donald, and J. P. Brake. 2018. Preventing fires in litter storage structures. Alabama Cooperative Extension Service Publ. No. ANR-0915.
Nottingham, R. 2012. Preventing fires in manure storage structures. University of Maryland Extension. Commercial Poultry Newsletter, 1(1):3.
Publication 3718 (POD-10-21)
By Tom Tabler, PhD, Extension Professor, Poultry Science; Jonathan R. Moyle, PhD, Extension Poultry Specialist, University of Maryland Extension; Jessica Wells, PhD, Assistant Clinical/Extension Professor, Poultry Science; and Jonathan Moon, Poultry Operations Coordinator, Poultry Science.
Lyme disease is a bacterial infection transmitted to humans from the bite of an infected blacklegged tick – also known as the deer tick. If left untreated, the infection can spread to joints, the heart, and the nervous system. Nonetheless, prompt treatment can help you recover quickly.
Lyme disease cases have traditionally been more prevalent in the Northeast, but have now been found in all 50 states. You’re more at risk to contract Lyme disease if you live or spend time in heavily wooded or grassy areas where ticks carrying Lyme disease can thrive.
Early signs of Lyme disease may include flu-like symptoms such as headache, fever and chills. Typically, but not always, the disease is marked by a red “bull’s-eye” rash. Later symptoms may include pain, weakness, numbness in the arms and legs, changes in vision, heart palpitations and chest pain, a rash, and facial paralysis (Bell’s palsy).
The key to preventing Lyme disease is to avoid being bitten by ticks. Follow these suggestions:
- Wear long sleeves and long pants with high socks when in the woods or doing gardening or raking leaves
- Apply a tick repellent containing DEET or oil of lemon eucalyptus to your skin and clothing
- When coming in from outside, check thoroughly for ticks and do the same with pets
- Shower within 2 hours after coming inside, if possible.
If you find a tick embedded in the skin, remove it with tweezers and clean the area with rubbing alcohol and anti-bacterial soap. You are unlikely to get infected with Lyme disease if you remove the tick within 36 hours. Consult your health care provider if you have questions.
The cell is the basic structural and functional unit of living organisms. While unicellular organisms (e.g., bacteria, protozoa) consist of a single cell capable of sustaining life, multicellular organisms (e.g., animals, land plants) consist of numerous highly specialized and diverse cells organized into various types of tissue. Cells are surrounded by a membrane composed of a lipid bilayer with embedded proteins. Depending on their cell structure, organisms are classified as prokaryotes or eukaryotes. Prokaryotes, which encompass the domains of the Bacteria and the Archaea, are unicellular organisms that lack membrane-bound organelles such as a nucleus and mitochondria (see bacteria overview). Eukaryotes are unicellular and multicellular organisms with a cell or cells containing various specialized, membrane-bound organelles such as nuclei and mitochondria.
Cell types are classified as either prokaryotic or eukaryotic. Prokaryotes are unicellular organisms that encompass the domains of Bacteria and Archaea. They consist of a single cytoplasm-filled compartment enclosed by a cell membrane. Eukaryotes contain a nucleus and other membrane-bound cell organelles. Eukaryotes encompass all multicellular organisms as well as some unicellular ones (protozoa). Eukaryotic cells are larger (100–10,000-fold) than prokaryotic cells and have a significantly more complex structure.
Overview of the eukaryote and prokaryote cell structure

| Factor | Eukaryotes (humans, protozoa, animals, and plants) | Prokaryotes (archaea and bacteria) |
| --- | --- | --- |
| Nucleus | Present (surrounded by a double membrane) | Absent |
| Location of DNA | Nucleus (and mitochondria) | Cytoplasm (nucleoid) |
| DNA storage form | Linear chromosomes packaged with histones (chromatin) | Usually a single circular chromosome |
| Amount of noncoding DNA | High | Low |
| Mitochondria | Present | Absent |
| Ribosomes | 80S (40S and 60S subunits) | 70S (30S and 50S subunits) |
| Cell wall | Absent in animal cells (present in plant cells and fungi) | Present in most species (e.g., peptidoglycan in bacteria) |
| Compartmentalization | Present (membrane-bound organelles) | Absent |
| Locomotive structures (flagellum) | Microtubule-based flagella and cilia | Flagellin-based flagella |
Prokaryotic cells do not have a nucleus.
Both prokaryotes and eukaryotes have cell membranes. The cell membrane provides a boundary between the outside environment and the cell interior and is an essential component of living systems. Eukaryotic cells also have intracellular membranes that envelop individual organelles and enable specialized processes to occur in separation from cytoplasmic processes. Furthermore, most prokaryotic and plant cells possess a cell wall, which envelops the cell membrane, stabilizes, and protects cells from the outside environment.
Cell membrane structure
The cell membrane (or plasma membrane) is composed of an asymmetric lipid bilayer with embedded or attached membrane proteins. The synthesis of membrane components occurs in the smooth endoplasmic reticulum (SER).
Structure: consists of amphiphilic lipids such as phospholipids or sphingolipids, which possess a polar head (e.g., phosphate, sphingosine) and hydrophobic tails (fatty acids).
- Distribution of nonpolar and polar groups: In an aqueous solution, the nonpolar hydrocarbon tails face inward, while the polar heads form a boundary to water in both directions. As a result, stable lipid bilayers develop, forming a spherical entity (e.g., cells or vesicles).
- Distribution of membrane lipids: The different types of lipids are distributed asymmetrically between the two leaflets of the membrane.
- Outer lipid layer: rich in phosphatidylcholine and sphingomyelin
- Inner lipid layer: rich in phosphatidylserine, phosphatidylethanolamine, and phosphatidylinositol
- Almost impermeable to polar molecules
- Highly permeable to nonpolar molecules and water
- Fluidity: The fluidity of the membrane lipid bilayer changes depending on the composition of bilayer and the temperature of the environment.
- Unsaturated fatty acids increase membrane fluidity.
- Cholesterol and glycolipids; (i.e., lipids with a carbohydrate attached by a glycosidic covalent bond) stabilize the membrane.
Diffusion (transport): The fluidity of the lipid bilayer allows for movement of individual molecules within the membrane.
- Lateral (parallel) diffusion: Individual lipid molecules diffuse freely within the lipid bilayer.
- Transverse diffusion: very slow; requires enzymatic support by flippases, floppases, or scramblases (phospholipid translocators)
- Flippases: move phospholipids from the outer to the inner surface
- Floppases: move phospholipids from the inner to the outer surface
- Scramblases: move phospholipids in both directions
- Facilitated diffusion: diffusion of molecules across the cell membrane via carrier proteins, channel proteins, or ions (e.g., glucose and fructose transport into cells via GLUT transporters)
- Proteins that are embedded in the lipid bilayer of membranes
- Usually glycoproteins
- Membrane protein content in the lipid bilayer: 20–80%
Types of membrane proteins
Integral membrane proteins
- Strongly bind to the lipid bilayer
- Partially penetrate into the membrane
- Transmembrane proteins: completely penetrate the lipid double layer (e.g., Na+/K+-ATPase)
Peripheral membrane proteins
- Poor binding to the lipid bilayer
- Usually bind via electrostatic affinity or hydrogen bonds between a peripheral and an integral membrane protein
- Distribution of membrane proteins: variable composition of the inner and outer membrane surface
Examples of asymmetrically distributed membrane components
- Integral membrane proteins: transmembrane proteins and integral monotopic proteins
- Peripheral membrane proteins: e.g., extracellularly directed proteins
Because of their fluidity, membranes are also permeable to water and some small molecules like O2, even without the use of specific channels or transporters. Accordingly, they are described as semipermeable.
- Definition: loose glycoprotein-polysaccharide layer covering the outside of the cell membrane in some eukaryotic and prokaryotic cells
- Long, branching network of polysaccharides
- Covalently bound to proteins and, to a lesser extent, lipids of the cell membrane
- Protects the cell from dehydration
- Enables immune cells to differentiate between host cells and foreign organisms
- At the RBC membrane: differentiation of blood groups
Protects the cell from the external environment
- Cell membrane: separates the cell from the external environment
- Membrane of cell organelles (endomembrane system): separates cell compartments within the cytosol
- Transport of substances into and out of the cell
- Signal transduction: conversion of extracellular signals into intracellular reactions
- Every cell expresses specific proteins on its surface that are mostly glycosylated (glycoproteins).
- These glycoproteins are highly specific for each cell type and allow self cells to be distinguished from one another as well as from foreign cells.
- Generation of an electrochemical gradient across the membrane creates a membrane potential.
- Excitation activates voltage-gated ion channels, temporarily decreasing the negative membrane potential (depolarization).
- Cell junctions: formed by anchor proteins (cell adhesion molecules), which are anchored to the cytoskeleton and protrude outside of the cell
Cellular organelles are compartments within cells that are enveloped by a membrane and have a highly specific function. Eukaryotes contain numerous organelles, whereas prokaryotes lack compartmentalization.
Overview of the most important cell organelles

| Organelle | Main functions |
| --- | --- |
| Nucleus | Storage of genetic information (chromatin); DNA replication, transcription, and rRNA synthesis (nucleolus) |
| Endoplasmic reticulum (ER) | RER: synthesis and modification of membrane, secretory, and lysosomal proteins; SER: synthesis of lipids and steroids, biotransformation, calcium storage |
| Golgi apparatus | Modification, sorting, and packaging of proteins; formation of lysosomes |
| Mitochondria | ATP production via oxidative phosphorylation; initiation of apoptosis |
| Lysosomes | Intracellular degradation of macromolecules by hydrolytic enzymes |
| Peroxisomes | Fatty acid oxidation and hydrogen peroxide metabolism |
The nucleus is the control center of the cell. It is surrounded by a double membrane and contains all of the cell's genetic material, except for the mitochondrial DNA.
The nuclear membrane consists of an inner and outer membrane, each composed of a lipid bilayer.
- Outer nuclear membrane: contains numerous ribosomes
Inner nuclear membrane: covered by the nuclear lamina, a network of intermediate filaments (lamins) that stabilizes the membrane
- Nuclear lamins provide mechanical support and are involved in various processes of the cell cycle (e.g., transcription, signal transduction, chromatin organization)
- A mutation in the gene encoding for lamin A results in Hutchinson-Gilford progeria syndrome
- Nuclear pores: The inner and outer nuclear membranes fuse at some points and form nuclear pores with the aid of large protein complexes.
- Chromatin: complex of DNA, histones, and nonhistone proteins
- Nucleolus: site of rRNA synthesis and ribosomal subunit assembly
- Storage of the entire genetic information of an organism in the form of chromatin (except mitochondrial DNA)
- Duplication of genetic information before cell division (DNA replication): See the cell cycle for further information.
- Transcription: initial step of protein synthesis
- Synthesis of rRNA in the nucleolus
- Packaging and protection of inactive DNA by histones
The endoplasmic reticulum (ER) is an extensive network of membranes that is directly connected to the outer nuclear membrane. The ER forms a channel system of elongated cavities. The most important function is the synthesis of cellular components and cell export products. The ER can be microscopically and functionally differentiated into the rough and smooth ER.
- Membranous channel system
- In direct contact with the outer nuclear membrane
- Composed of two microscopically and functionally different regions: the rough ER (RER) and the smooth ER (SER)
Rough endoplasmic reticulum (RER)
- Synthesis of membrane, secretory, and lysosomal proteins (translation) and their modification (e.g., N-linked glycosylation)
- Packaging of newly synthesized proteins into vesicles to transport to the Golgi apparatus (for further processing) or directly to a specific location
- All proteins found within cell organelles (e.g., Golgi apparatus, lysosomes, ER) have their origin in the RER.
- Cells rich in RER include exocrine pancreas cells, antibody-secreting plasma cells, and mucus-secreting goblet cells.
Nissl bodies: the RER found in the soma and dendrites of neurons
- Site of synthesis for peptide neurotransmitters that are transported to the presynaptic terminals
- Nissl stain: a cationic (basic) dye used to visualize negatively-charged ribosomes on light microscopy
Smooth endoplasmic reticulum (SER)
- Synthesis of phospholipids, fatty acids, cholesterol, and steroids
- Biotransformation of drugs, alcohol, and toxins in the liver
- Storage and release of carbohydrates
- Location of glucose 6-phosphatase
- Calcium storage
- Cells rich in SER include hepatocytes, and steroid-secreting cells (e.g., adrenal cortex or gonadal cells)
Enveloped, disc-shaped, slightly curved vesicle system with two sides:
Cis-Golgi face (convex side)
- Bends slightly around the ER
- Membrane vesicles from the ER that are loaded with proteins are received at the cis-Golgi side.
Trans-Golgi face (concave side)
- Faces the cell membrane
- Vesicles are detached from the trans-Golgi side and sent towards the cell membrane and lysosomes.
- Synthesis of lysosomes and their loading with enzymes
- Recycling of plasma membrane proteins via endocytosis
- Activation of hormones and other proteins
- Modification of glycoproteins and hormone precursors received from the RER
- O-linked glycosylation: attachment of O-oligosaccharides to serine or threonine
- Modification of N-oligosaccharides on asparagine after N-linked glycosylation in the RER
- Phosphorylation: Mannose residues on glycoproteins (e.g., lysosomal acid hydrolases) are phosphorylated to mannose-6-phosphate, allowing them to be trafficked to lysosomes. (Defects in this process lead to I-cell disease.)
- Sorting of proteins according to their target sequence or attached oligosaccharides
Vesicular trafficking proteins
- COPI protein: trans-Golgi network (TGN) → cis-Golgi network (CGN) → endoplasmic reticulum (retrograde trafficking)
- COPII protein: endoplasmic reticulum → CGN → TGN (anterograde trafficking)
Clathrin: formation of coated vesicles (endosomes) for transport within cells
- Receptor-mediated endocytosis: plasma membrane forms endosomes (e.g., mediated by LDL receptor)
- TGN can also form endosomes (endosomes can become lysosomes)
Defective labeling of lysosomal acid hydrolases in the Golgi apparatus leads to I-cell disease.
To remember that COPII facilitates anterograde (forward) transport from the rough endoplasmatic reticulum to the Golgi apparatus and COPI facilitates retrograde (backward) transport, think: “Two cops (COPII) go for (forward) a coffee to go (to the Golgi apparatus). One cop (COPI) goes back (backward) to the rough (rough ER) neighborhood.”
- Vesicular, membrane-enclosed cell organelles originating from the trans-Golgi face of the Golgi apparatus
- Subclassified into early and late endosomes depending on their stage of maturation
- Intracellular sorting and transport system
- Early endosomes
- Internalize materials from outside the cell via plasma membrane invagination
- Recycle receptors (e.g., LDL receptor) and transport them back to the cell surface membrane
- Can receive vesicles from the Golgi apparatus and send them back
- Late endosomes: fuse with lysosomes and thereby allow for lysosomal degradation of endosomal content
Mitochondria are often described as the powerhouses of the cell because of their central role in the synthesis of ATP, a vital source of energy for the body. They are composed of a double membrane, intramembranous space, and matrix. Various mitochondrial types can be differentiated based on the inner membrane structure.
The structure and DNA of mitochondria resemble the structure and DNA of prokaryotes. Mitochondria are believed to have been prokaryotes originally that evolved into endosymbionts living inside eukaryotes (see symbiogenesis).
There are two highly specialized mitochondrial membranes that surround the mitochondrion. They provide the framework for the electron transport chain and ATP production.
Outer mitochondrial membrane
- Structure: smooth
- Permeability: interspersed with pores, highly permeable for various molecules
Inner mitochondrial membrane
- Structure: convoluted
- Permeability: impermeable, especially to ions; however the inner membrane contains many different highly specific transport proteins
- Characteristic component: cardiolipin (stabilizes the enzymes of oxidative phosphorylation)
Types of inner mitochondrial membranes
- Cristae type: thin invaginations (cristae) of the inner membrane; present in most cells
- Tubular type: the inner membrane forms tubules; found mainly in cells that produce steroids
Carriers of the inner mitochondrial membrane
Specific transporters regulate the transport of substances through the inner membrane.
- Functional mechanism: antiporter of two molecules
In the malate-aspartate shuttle, only the electrons of NADH and not NADH itself are transported across the inner mitochondrial membrane.
- Contains mitochondrial DNA (mtDNA) and ribosomes responsible for the synthesis of ∼ 15% of the mitochondrial proteins
- The remaining mitochondrial proteins are encoded in the nucleus and are transported into the mitochondria in an unfolded state, where they take on their final folded structure.
- Energy production: The inner mitochondrial membrane contains the enzymes of the respiratory chain and the ATP synthase that together produce ATP (oxidative phosphorylation).
- Other metabolic pathways in the matrix
- Initiation of apoptosis: See section “Apoptosis” in the article on cellular changes and adaptive responses for more information.
To remember that Heme synthesis, the Urea cycle, and Gluconeogenesis take place in both the cytoplasm and the mitochondria, think: “If you cite (cytoplasm) my article, I might (mitochondria) give you a HUG.”
The DNA and ribosomes of mitochondria and prokaryotes have many similarities. The discovery of this resulted in the endosymbiotic theory of mitochondrial evolution, which is that mitochondria were originally independent prokaryotic bacteria with the special ability to produce energy through oxidative phosphorylation and were eventually engulfed by eukaryotic cells. As a result, the prokaryotic cells lost parts of their DNA and their ability to live independently, while the eukaryotic host cell became dependent on the energy produced by the incorporated bacterium.
Lysosomes can be regarded as the cell's waste disposal system. Their main function is intracellular digestion (e.g., the degradation of polymers into monomers).
- Small, spherical organelles that are surrounded by a lipid bilayer and filled with digestive hydrolytic enzymes, which are responsible for the degradation of macromolecules
Hydrolytic enzymes: lipases, glucosidases, acidic phosphatases, nucleases, endoproteases (e.g., cathepsins)
Origin of hydrolytic enzymes
- Enzymes are synthesized at the ribosomes of the rough ER and then transported to the Golgi apparatus.
- After translation, a mannose 6-phosphate molecule is attached to the enzymes by N-acetylglucosaminyl-1-phosphotransferase in the Golgi apparatus.
- The enzymes tagged with mannose 6-phosphate are packaged into vesicles (primary lysosomes).
Acidic environment (pH value of ∼ 5)
- Optimal pH value for hydrolytic enzymes
- Maintained by the active transport of H+ through the membrane H+-ATPase
The main enzyme stored in lysosomes is acidic phosphatase.
Intracellular degradation of macromolecules
- Primary lysosomes are vesicles with newly synthesized hydrolytic enzymes that bud from the Golgi apparatus.
- They fuse with vesicles that contain digestive materials, e.g., endosomes, phagosomes, and thereby form secondary lysosomes.
- The hydrolytic enzymes in the secondary lysosomes degrade the macromolecules.
- Cleavage products are emptied into the cytosol and can be reused for new synthesis processes.
Residual bodies: lipid-rich, undigested material (lipofuscin) left over from macromolecule degradation is expelled from the cell or stored in the cytosol in residual bodies.
- Intracellular lipofuscin deposits (yellow-brown pigmented granules) accumulate in neurons, hepatocytes, and cardiomyocytes with age.
Origin of macromolecules
- Receptor-mediated endocytosis: Endocytic vesicles from the plasma membrane fuse first with early endosomes and later with lysosomes.
- Phagocytosis: Particles are engulfed and taken up by phagocytic cells, forming phagosomes.
- Autophagy: Membranes fuse to form an autophagosome that sequesters intracellular debris (e.g., proteins, lipids, cell organelles); the autophagosome later fuses with lysosomes in order to degrade the macromolecules.
Lysosomes play an important role in adaptive immunity. Antigen-presenting cells (e.g., macrophages, dendritic cells) internalize antigens and degrade them through proteolysis within lysosomes. Afterwards, the resulting peptides are loaded onto MHC class II molecules, delivered to the cell surface and presented to naive T cells.
In the event of severe cellular damage, lysosomes release their contents into the cytosol, causing the cell to disintegrate (apoptosis).
Peroxisomes are spherical organelles surrounded by a single membrane; they play a key role in fatty acid oxidation and the biosynthesis and degradation of specific molecules.
- Relatively small, round, membrane-enclosed vesicles
Fatty acid oxidation
- α-oxidation of branched-chain fatty acids
- β-oxidation of very-long-chain fatty acids (VLCFA) to octanoyl-coenzyme A (CoA)
Hydrogen peroxide metabolism
- Mono-oxygenases convert substrates using molecular oxygen, which results in hydrogen peroxide synthesis.
- Catalases convert cytotoxic hydrogen peroxide (H2O2 ) to water and oxygen (2 H2O2 → 2 H2O + O2), which protects the cell from reactive oxygen species.
- Synthesis of steroid hormones, bile acids, and plasmalogens (ether phospholipids found in cell membranes, especially in white matter cells of the brain and in cardiac myocytes)
- Catabolic function: amino acid and ethanol metabolism
Zellweger syndrome is caused by impaired peroxisome formation, which results in the accumulation of very-long-chain and branched-chain fatty acids within the cells.
Refsum disease is caused by insufficient α-oxidation of branched-chain fatty acids.
Adrenoleukodystrophy is caused by insufficient β-oxidation of very-long-chain fatty acids.
Cytosol and ribosomes
The cytosol, also termed matrix, is part of the cytoplasm and enclosed by the cell membrane. In prokaryotes, almost all metabolic pathways occur directly in the cytosol. In eukaryotes, several of these processes occur in cell organelles that are separated from the cytosol by a membrane (compartmentalization).
- Water, dissolved ions, and small molecules (70%)
- Proteins, e.g., enzymes involved in metabolic pathways (30%)
- Glycolysis, hexose monophosphate shunt, gluconeogenesis
- Synthesis of nucleotides
- Translation, protein degradation
- Heme synthesis
- Urea cycle
The cytoplasm surrounds the nucleus and consists of the cytosol and the cell organelles.
Heme synthesis, the urea cycle, and gluconeogenesis take place in both the cytoplasm and the mitochondria.
Ribosomes are very large molecule complexes of RNA and proteins that are located in the cytosol, on the cytosolic side of the rough endoplasmic reticulum (rER) and within the mitochondria. The ribosome is the site of protein synthesis (translation).
Mass: The mass of the ribosomal subunits is measured using the sedimentation coefficient (unit: Svedberg, or S).
- Small subunit: 40S in eukaryotes, 30S in prokaryotes
- Large subunit: 60S in eukaryotes, 50S in prokaryotes
- Total mass: 80S in eukaryotes, 70S in prokaryotes
- Free ribosomes: not attached to a membrane; can be found floating in the cytosol or bound to the cytoskeleton
- Site of synthesis for a number of intracellular proteins (e.g., cytosolic and mitochondrial proteins)
- Membrane-bound ribosomes: bound to the RER
- Ribosomes constitute the structural prerequisites for protein synthesis and are catalytically active.
- The RNA components of ribosomes (rRNA) interact with mRNA and tRNA and catalyze peptide bond formation.
Cytosolic proteins (such as tubulin) are synthesized on free ribosomes. Lysosomal and membrane proteins are synthesized on ribosomes of the rER.
- Definition: a network of filaments (protein fibers) that extends throughout the cytosol.
- Stability and movement of the cell and its organelles
- Transport processes within the cell
- Essential for cell division
- Elongated cell structures composed of monomers
- RBCs contain a special kind of cytoskeleton filament on the cytosolic side of their cell membrane that consists of the filamentous protein spectrin. Spectrin forms a meshwork with other proteins (e.g., band 3, ankyrin, and band 4.1 proteins).
- Accessory proteins
- Responsible for various functions of the cytoskeleton (e.g., motion, attachment and detachment of monomers)
- Motor proteins: important accessory proteins responsible for filament motion
| Filament type | Structure | Functions |
| --- | --- | --- |
| Actin filaments (microfilaments) | Helical polymers of actin monomers | Cell movement, muscle contraction, cytokinesis, microvilli |
| Intermediate filaments (IFs) | Rope-like polymers of cell-type-specific proteins (e.g., vimentin, desmin, cytokeratin, lamins, neurofilaments) | Mechanical stability of the cell; maintenance of cell shape |
| Microtubules | Hollow tubes of α- and β-tubulin dimers | Intracellular transport (kinesin, dynein), cilia and flagella, mitotic spindle |
The spectrin-based cytoskeleton of RBCs is deficient in hereditary spherocytosis.
Intermediate filaments can be used as immunohistochemical tumor markers to detect the origin of a neoplasm.
To remember drugs that disrupt microtubules, think “Microtubules Get Constructed Very Poorly”: Mebendazole, Griseofulvin, Colchicine, Vincristine/Vinblastine, Paclitaxel.
Negative end Near Nucleus, while Positive end Points to the Periphery: The negative end of the microtubule is oriented towards the nucleus and the positive end is oriented towards the periphery of the cell.
Kin (keen) to go out (anterograde), Dying to come back home (retrograde). Kinesin transports anterograde (from – → +) along the microtubule. Dynein transports retrograde (from + → –) along the microtubule.
The cells of the body are connected to other cells and the surrounding structures by cell-cell junctions and cell-matrix junctions. The type and number of junctions varies between different cell types. While red blood cells do not form cell junctions, epithelial cells are tightly connected to one another and to the basal lamina.
Tight junction (zonula occludens): sealing contact that forms an intercellular barrier between epithelial cells
- Membrane proteins (claudins and occludins) of two cells interact.
- Connected to actin filaments of the cytoskeleton via adapter proteins
- Localization: usually at the apical surface between epithelial cells
- Seals adjacent epithelial cells together and thereby separates the apical from the basal side of the epithelium.
- Prevents the paracellular transport of ions and molecules
- Serves as diffusion barrier
Anchoring junctions (adhering junctions)
Anchoring junctions are mechanical attachments between cells. Several forms can be differentiated according to function.
Adherens junction (zonula adherens, belt desmosome)
- Description: tightly connects cells across a broader belt-shaped area
- Vinculin and catenin are located on the intracellular side of the cell membrane and connect the intracellular actin filaments with transmembrane adhesion proteins such as cadherins (mainly E-cadherin).
- Calcium-dependent transmembrane proteins responsible for adhesion of cells to other cells
- Loss of cadherins is associated with metastatic transformation in neoplasias.
- Function: connects, e.g., epithelial cells and endothelial cells in a continuous, belt-like manner
Desmosomes (macula adherens, spot desmosome)
- Description: linking of two cells via intermediate filaments
- Intermediate filaments radiate intracellularly and cadherins (mainly desmoglein and desmocollin) extracellularly from the desmosomal plaque, which is located on the cytoplasmic side of the cell membrane.
- Cadherins connect the desmosomal plaques of two cells.
- Primarily connect cells subject to high levels of mechanical stress (e.g., epithelial cells and cardiomyocytes)
- Pemphigus vulgaris: autoantibodies against desmoglein 1 and/or 3
Hemidesmosomes
- Description: do not connect two cells, but attach cells to the extracellular matrix
- Structure: Integrins connect the intracellular cytoskeleton (keratin) with molecules of the basement membrane (laminin, fibronectin, and collagen).
- Function: connect epithelial cells with the basal lamina and maintain the integrity of the basement membrane.
- Bullous pemphigoid: autoantibodies against hemidesmosomes
Communicating junctions permit the passage of electrical or chemical signals.
Gap junction (nexus): intercellular channels that connect two cells
Structure: formed by the interaction of the connexons of two neighboring cells
- Connexon: composed of six membrane-spanning proteins (connexins) with a central pore
- Found primarily in cardiomyocytes; control the passage of electrical stimuli in cardiomyocytes as well as in epithelial and retinal cells
- Chemical communication between cells with second messenger molecules (e.g., IP3, Ca2+)
- Synapse: areas where signals or action potentials are transmitted from a presynaptic to a postsynaptic structure (e.g., neurons, muscle)
Autoantibodies directed against components of the cell junctions are formed in autoimmune blistering diseases, e.g., in pemphigus vulgaris (antidesmosome antibodies) and bullous pemphigoid (antihemidesmosome antibodies).
CADherins are CAlcium dependent ADhesion proteins.
Concerns about a child’s development are an inevitable part of teaching. As teachers care for their students and only want the best for them, this is natural. Sharing those concerns with parents, however, can be an intimidating, yet vital, conversation to have.
First things First: Acclimate
At the beginning of the year, it is so important to build a positive rapport and relationship with both the child and the parents. Spend one-on-one time with each child getting to know his personality, as well as likes and dislikes. Share lots of positive news and send “happy grams” home with every child. Make sure that every family knows how much you care right from the start. It’s difficult to assess the developmental level of a child that you don’t know well.
If (err, when) problems start to arise in your classroom, try all the usual tactics:
- Double-check your expectations. Are most children able to meet them? Have you taken into account the child’s attention span, motor skills, culture, and home language?
- Model appropriate behaviors. Practice them together. Model again. Repeat!
- Provide visual cues (like this one for circle time) to make sure the children understand exactly what you expect.
- Give it time. For many young children, preschool is a whole new world: new people, new furniture, new rules, new expectations. It takes time to settle in.
Second: Observe and Document
Good observations are a crucial part of assessing all preschool children.
- Jot down notes on sticky paper.
- Have little checklists all over the room and at the end of the day, take the sticky notes and put them into a notebook.
- Sit back for a few minutes while a co-teacher leads the class and write directly into a notebook or computer.
When we are observing children about whom we have concerns, though, it is especially important to remember good observation skills:
- Be objective! (No emotions, no interpretation, just the facts)
- Keep it simple! (Who, what, when, where, etc.)
- Record both “positive” and “negative” observations.
Distinguishing between Typical and Concerning Behaviors
It is extremely important to make the distinction between undesirable preschool behaviors that are developmentally expected and ones that are of greater concern. It is also good to remember that some developmentally appropriate behaviors, such as tantrums, can become concerning when they are excessive in either intensity or frequency.
(Please note that these are only examples of some of the behaviors you might see.)
Some Typical Preschool Developmental Behaviors:
- Not sharing
Atypical Preschool Behaviors:
- Avoids eye contact
- Lack of communication
- Excessive aggression
- Hyperfocused behavior (will ONLY play by spinning wheels on cars, for example)
- Unable to be redirected to other activities
- Excessive tantrums, especially during transitions
Next: Conference with the Parents
If through your observations and knowledge of child development, you feel that it would benefit the child to have an expert take a closer look, it’s time to have a conference with the parents.
- Start on a positive note. Be genuine. Every child has great things about them, and now is the time to let the parent know that Billy builds the most amazing block creations, or Sanjay shows a real connection with the class pet.
- Let the parents take the lead. Ask the parents, “Do you have any questions or concerns about your child’s development?” Some will immediately lead you into the discussion you want, some will simply say, “Nope.”
- Share your viewpoint. Start with a phrase like, “I have observed a few things that I would like to share with you.”
- Reassure the parent. Tell them “First of all, Jill is so lucky that you are her mom – you’re doing a great job!” or something similar.
- Objectively compare to typical behavior. “Most of the class is able to clean up when I play the clean up song, but Billy throws blocks and screams at the teachers.” or “Most of the class loves it when I call out their names during the circle time song, but Jill doesn’t make eye contact or respond at all when I say her name.”
- Give the parents a direction to go. Share with them the Early Intervention or other agencies that can help with diagnostic screening or encourage them to make an appointment with the pediatrician to discuss the concerns and rule out any problems.
A quick note of caution:
Unless you are trained and licensed to do so, DO NOT MAKE ANY TYPE OF GUESS OR DIAGNOSIS.
Sometimes, a parent will become defensive or refuse to believe that there is any cause for concern. This is to be expected. Remember, it is extremely difficult for parents to hear that their child is struggling – even harder than it is for you to tell them. Other times, the parent will thank you profusely for validating their inner doubts. Regardless, it is up to the parent to take the next step.
Finally: Teamwork is the Best Answer
Follow up the conference with a short, positive email that summarizes the development concerns in a short list. Close with a personal invitation to let you know how the next step (a pediatrician appointment, developmental screening, or another specialist appointment) goes. It is often a good idea to remind the parent that you have their child’s best interest at heart. After all, you are spending hours with their child every week, and you want them to succeed as well. Let them know that you are on their team!
Click HERE if you’re in the U.S. to view a list of early intervention services by state.
Imagine you have a programming task that involves parsing and analyzing text. Nothing complicated: maybe just breaking it into tokens. Now imagine the only programming language you had available:
- has no text handling functions at all: you can pack characters into numeric types, but how they are packed and how many you get per type are system dependent;
- allows integers in variables starting with the letters I→N, with A→H and O→Z floating point;
- has IF but no ELSE (and no THEN blocks either), with the preferred form being
IF (expr) neg, zero, pos
where expr is the expression to evaluate, and neg, zero and pos are statement labels to jump to if the evaluation is negative, zero or positive, respectively (a small sketch follows this list);
- has only enough memory for (linear, non-associative) arrays of a couple of thousand entries;
- disallows recursion completely;
- charges for computing time such that a solo researcher’s work might cost many times their salary in a few weeks.
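That three-way arithmetic IF looks like this in use. A minimal sketch of my own, not an example from the book, in FORTRAN 66 style:
C     SET KLASS TO 1, 2 OR 3 ACCORDING TO WHETHER N - 100
C     IS NEGATIVE, ZERO OR POSITIVE (HYPOTHETICAL EXAMPLE).
      IF (N - 100) 10, 20, 30
   10 KLASS = 1
      GO TO 40
   20 KLASS = 2
      GO TO 40
   30 KLASS = 3
   40 CONTINUE
Three labels, two GO TOs and a CONTINUE just to express an if/else-if: every control structure has to be built by hand.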
Sounds impossible, right? But that’s the world described in Colin Day’s book from 1972, Fortran techniques with special reference to non-numerical applications.
The programming language used is USA Standard FORTRAN X3.9 1966, commonly known as Fortran IV after IBM’s naming convention. For all it looks crude today, Fortran was an efficient, sod-the-theory-just-get-the-job-done language that allowed numerical problems to be described as a text program and solved with previously impossible speed. Every computer shipped with some form of Fortran compiler at the time. Day wasn’t alone working within Fortran IV’s text limitations in the early 1970s: the first Unix tools at Bell Labs were written in Fortran IV — that was before they built themselves their own toolchain and invented the segmentation fault.
The book is a small (~ 90 page) delight, and is a window into system limitations we might almost find unimaginable. Wanna create a lookup table of a thousand entries? Today it’s a fraction of a thought and microseconds of program time. But nearly fifty years ago, Colin Day described methods of manually creating two small index and target arrays and rolling your own hash functions to store and retrieve stuff. Text? Hollerith constants, mate; that’s yer lot — 6HOH HAI might fit in one computer word if you were running on big iron. Sorting and searching (especially without recursion) are revealed to be the immensely complex subjects they are, all hidden behind today’s one-liner methods. Day shows methods to simulate recursion with arrays standing in for pointer stacks of GO TO targets (:coding_horror_face:). And if it’s graphics you want, that’s what the line printer’s for.
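To make the lookup-table point concrete, here is a sketch of the sort of two-array, linear-probing lookup Day describes. Again this is my own illustration in FORTRAN 66 style, not code from the book; KEY is assumed to hold the positive integer being looked up:
C     KEYS LIVE IN INDX, PAYLOADS IN ITARG; A ZERO IN INDX
C     MARKS AN EMPTY SLOT.  A FULL TABLE IS NOT HANDLED.
      INTEGER INDX(211), ITARG(211)
C     HASH THE KEY INTO THE RANGE 1..211, THEN PROBE
      I = MOD(KEY, 211) + 1
   10 IF (INDX(I) - KEY) 20, 30, 20
   20 IF (INDX(I)) 40, 40, 25
   25 I = MOD(I, 211) + 1
      GO TO 10
   30 IVALUE = ITARG(I)
      GO TO 50
   40 IVALUE = -1
   50 CONTINUE
Every comparison and branch is spelled out with labels and GO TOs, which is exactly the kind of bookkeeping the book spends its pages teaching you to get right.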
Why do I like this book enough to track down a used copy, import it, scan it, correct it and upload it to the Internet Archive? To me, it shows the layers we now take for granted, and the privilege we have with these hard problems of half a century ago being trivially soluble on a $10 computer the size of a stick of gum. When we run today’s massive AI models with little interest in the underlying assumptions but a sharp focus on getting the results we want, we do a disservice to the years of R&D that got us here.
The ‘charges for computing time’ comment above is from Colin’s website. Early central computing facilities had the SaaS billing down solid, partly because many mainframes were rented from the vendor and system usage was accounted for in minute detail. Apparently the system Colin used (when a new lecturer) was at another college, and it was the custom to send periodic invoices for CPU time and storage used back to the user’s department. Nowhere on these invoices did it say that these accounts were for information only and were not payable. Not the best way to greet your users.
(Incidentally, if you hate yourself and everyone else around you, you can get a feel of system billing on any Linux system by enabling user quotas. You’ll very likely stop doing this almost immediately as the restrictions and reporting burden seem utterly alien to us today.)
While the book is still very much in copyright, the copy I have sat unread at Lakehead University Library since June 1995; the due date slip’s still pasted in the back. It’s been out of print at Cambridge University Press since May 1987, even if they do have a plaintive/passive aggressive “hey we could totally make an ebook of this if you really want it” link on their site. I — and the lovely folks hosting it at the Internet Archive — have saved them from what’s evidently too much trouble. I won’t even raise an eyebrow if they pull a Nintendo and start selling this scan.
Colossal thanks to Internet Archive for making the book uploading process much easier than I thought it was. They’ve completely revamped the processing behind it, and the fully open-source engine gives great results. As ever, if you assumed you knew how to do it, think again and read the How to upload scanned images to make a book guide. Uploading a zip file of images is much easier than mucking about with weird command-line TIFF and PDF tools. The resulting PDF is about half the size of the optimized scans I uploaded, and it’s nicely tagged with metadata and contains (mostly) searchable text. It took more than an hour to process on the archive’s spectacularly powerful servers, though, so I hate to think what Colin Day’s bill would have been in 1972 for that many CPU cycles … or if even a computer of that time, given enough storage, could complete the project by now. |
Diagnosis of childhood schizophrenia involves ruling out other mental health disorders and determining that symptoms aren't due to substance abuse, medication or a medical condition. The process of diagnosis may involve:
- Physical exam. This may be done to help rule out other problems that could be causing symptoms and to check for any related complications.
- Tests and screenings. These may include tests that help rule out conditions with similar symptoms, and screening for alcohol and drugs. The doctor may also request imaging studies, such as an MRI or CT scan.
- Psychological evaluation. This includes observing appearance and demeanor, asking about thoughts, feelings and behavior patterns, including any thoughts of self-harm or harming others, evaluating ability to think and function at an age-appropriate level, and assessing mood, anxiety and possible psychotic symptoms. This also includes a discussion of family and personal history.
- Diagnostic criteria for schizophrenia. Your doctor or mental health professional may use the criteria in the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), published by the American Psychiatric Association. Diagnostic criteria for childhood schizophrenia are generally the same as for adult schizophrenia.
The path to diagnosing childhood schizophrenia can sometimes be long and challenging. In part, this is because other conditions, such as depression or bipolar disorder, can have similar symptoms.
A child psychiatrist may want to monitor your child's behaviors, perceptions and thinking patterns for six months or more. As thinking and behavior patterns and signs and symptoms become clearer over time, a diagnosis of schizophrenia may be made.
In some cases, a psychiatrist may recommend starting medications before an official diagnosis is made. This is especially important for symptoms of aggression or self-injury. Some medications can help limit these types of behavior and restore a sense of normalcy.
Schizophrenia in children requires lifelong treatment, even during periods when symptoms seem to go away. Treatment is a particular challenge for children with schizophrenia.
Childhood schizophrenia treatment is usually guided by a child psychiatrist experienced in treating schizophrenia. The team approach may be available in clinics with expertise in schizophrenia treatment. The team may include, for example, your:
- Psychiatrist, psychologist or other therapist
- Psychiatric nurse
- Social worker
- Family members
- Case manager to coordinate care
Main treatment options
The main treatments for childhood schizophrenia are:
- Medications
- Psychotherapy
- Life skills training
- Hospitalization
Most of the antipsychotics used in children are the same as those used for adults with schizophrenia. Antipsychotic drugs are often effective at managing symptoms such as delusions, hallucinations, loss of motivation and lack of emotion.
In general, the goal of treatment with antipsychotics is to effectively manage symptoms at the lowest possible dose. Over time, your child's doctor may try combinations, different medications or different doses. Depending on the symptoms, other medications also may help, such as antidepressants or anti-anxiety drugs. It can take several weeks after starting a medication to notice an improvement in symptoms.
Newer, second-generation medications are generally preferred because they have fewer side effects than do first-generation antipsychotics. However, they can cause weight gain, high blood sugar, high cholesterol and heart disease.
Examples of second-generation antipsychotics approved by the Food and Drug Administration (FDA) to treat schizophrenia in teenagers age 13 and older include:
- Aripiprazole (Abilify)
- Olanzapine (Zyprexa)
- Quetiapine (Seroquel)
- Risperidone (Risperdal)
Paliperidone (Invega) is FDA-approved for children 12 years of age and older.
First-generation antipsychotics are usually as effective as second-generation antipsychotics in controlling delusions and hallucinations. In addition to having side effects similar to those of second-generation antipsychotics, first-generation antipsychotics also may have frequent and potentially significant neurological side effects. These can include the possibility of developing a movement disorder (tardive dyskinesia) that may or may not be reversible.
Because of the increased risk of serious side effects with first-generation antipsychotics, they often aren't recommended for use in children until other options have been tried without success.
Examples of first-generation antipsychotics approved by the FDA to treat schizophrenia in children and teens include:
- Chlorpromazine for children 13 and older
- Haloperidol for children 3 years and older
- Perphenazine for children 12 years and older
First-generation antipsychotics are often cheaper than second-generation antipsychotics, especially the generic versions, which can be an important consideration when long-term treatment is necessary.
Medication side effects and risks
All antipsychotic medications have side effects and possible health risks, some life-threatening. Side effects in children and teenagers may not be the same as those in adults, and sometimes they may be more serious. Children, especially very young children, may not have the capacity to understand or communicate about medication problems.
Talk to your child's doctor about possible side effects and how to manage them. Be alert for problems in your child, and report side effects to the doctor as soon as possible. The doctor may be able to adjust the dose or change medications and limit side effects.
Also, antipsychotic medications can have dangerous interactions with other substances. Tell your child's doctor about all medications and over-the-counter products your child takes, including vitamins, minerals and herbal supplements.
In addition to medication, psychotherapy, sometimes called talk therapy, can help manage symptoms and help you and your child cope with the disorder. Psychotherapy may include:
- Individual therapy. Psychotherapy, such as cognitive behavioral therapy, with a skilled mental health professional can help your child learn ways to deal with the stress and daily life challenges brought on by schizophrenia. Therapy can help reduce symptoms and help your child make friends and succeed at school. Learning about schizophrenia can help your child understand the condition, cope with symptoms and stick to a treatment plan.
- Family therapy. Your child and your family may benefit from therapy that provides support and education to families. Involved, caring family members who understand childhood schizophrenia can be extremely helpful to children living with this condition. Family therapy can also help you and your family to improve communication, work out conflicts and cope with stress related to your child's condition.
Life skills training
Treatment plans that include building life skills can help your child function at age-appropriate levels when possible. Skills training may include:
- Social and academic skills training. Training in social and academic skills is an important part of treatment for childhood schizophrenia. Children with schizophrenia often have troubled relationships and school problems. They may have difficulty carrying out normal daily tasks, such as bathing or dressing.
- Vocational rehabilitation and supported employment. This focuses on helping people with schizophrenia prepare for, find and keep jobs.
During crisis periods or times of severe symptoms, hospitalization may be necessary. This can help ensure your child's safety and make sure that he or she is getting proper nutrition, sleep and hygiene. Sometimes the hospital setting is the safest and best way to get symptoms under control quickly.
Partial hospitalization and residential care may be options, but severe symptoms are usually stabilized in the hospital before moving to these levels of care.
Lifestyle and home remedies
Although childhood schizophrenia requires professional treatment, it's critical to be an active participant in your child's care. Here are ways to get the most out of the treatment plan.
- Follow directions for medications. Try to make sure that your child takes medications as prescribed, even if he or she is feeling well and has no current symptoms. If medications are stopped or taken infrequently, the symptoms are likely to come back and your doctor will have a hard time knowing what the best and safest dose is.
- Check first before taking other medications. Contact the doctor who's treating your child for schizophrenia before your child takes medications prescribed by another doctor or before taking any over-the-counter medications, vitamins, minerals, herbs or other supplements. These can interact with schizophrenia medications.
- Pay attention to warning signs. You and your child may have identified things that may trigger symptoms, cause a relapse or prevent your child from carrying out daily activities. Make a plan so that you know what to do if symptoms return. Contact your child's doctor or therapist if you notice any changes in symptoms, to prevent the situation from worsening.
- Make physical activity and healthy eating a priority. Some medications for schizophrenia are associated with an increased risk of weight gain and high cholesterol in children. Work with your child's doctor to make a nutrition and physical activity plan for your child that will help manage weight and benefit heart health.
- Avoid alcohol, street drugs and tobacco. Alcohol, street drugs and tobacco can worsen schizophrenia symptoms or interfere with antipsychotic medications. Talk to your child about avoiding drugs and alcohol and not smoking. If necessary, get appropriate treatment for a substance use problem.
Coping and support
Coping with childhood schizophrenia can be challenging. Medications can have unwanted side effects, and you, your child and your whole family may feel angry or resentful about having to manage a condition that requires lifelong treatment. To help cope with childhood schizophrenia:
- Learn about the condition. Education about schizophrenia can empower you and your child and motivate him or her to stick to the treatment plan. Education can help friends and family understand the condition and be more compassionate with your child.
- Join a support group. Support groups for people with schizophrenia can help you reach out to other families facing similar challenges. You may want to seek out separate groups for you and for your child so that you each have a safe outlet.
- Get professional help. If you as a parent or guardian feel overwhelmed and distressed by your child's condition, consider seeking help from a mental health professional.
- Stay focused on goals. Dealing with childhood schizophrenia is an ongoing process. Stay motivated as a family by keeping treatment goals in mind.
- Find healthy outlets. Explore healthy ways your whole family can channel energy or frustration, such as hobbies, exercise and recreational activities.
- Take time as individuals. Although managing childhood schizophrenia is a family affair, both children and parents need their own time to cope and unwind. Create opportunities for healthy alone time.
- Begin future planning. Ask about social service assistance. Most individuals with schizophrenia require some form of daily living support. Many communities have programs to help people with schizophrenia with jobs, affordable housing, transportation, self-help groups, other daily activities and crisis situations. A case manager or someone on your child's treatment team can help find resources.
Preparing for your appointment
You're likely to start by having your child see his or her pediatrician or family doctor. In some cases, you may be referred immediately to a specialist, such as a pediatric psychiatrist or other mental health professional who's an expert in schizophrenia.
In rare cases where safety is an issue, your child may require an emergency evaluation in the emergency room and possibly admission to a hospital specializing in child and adolescent psychiatry.
What you can do
Before the appointment make a list of:
- Any symptoms you've noticed, including when these symptoms began and how they've changed over time — give specific examples
- Key personal information, including any major stresses or recent life changes that may be affecting your child
- Any other medical conditions, including mental health problems, that your child has
- All medications, vitamins, herbs or other supplements that your child takes, including the doses
Questions to ask
Make a list of questions to ask the doctor, such as:
- What is likely causing my child's symptoms or condition?
- What are other possible causes?
- What kinds of tests does my child need?
- Is my child's condition likely temporary or long term?
- How will a diagnosis of childhood schizophrenia affect my child's life?
- What's the best treatment for my child?
- What specialists does my child need to see?
- Who else will be involved in the care of my child?
- Are there any brochures or other printed material that I can have?
- What websites do you recommend?
Don't hesitate to ask any other questions during your appointment.
What to expect from your doctor
Your child's doctor is likely to ask you and your child a number of questions. Anticipating some of these questions will help make the discussion productive. Your doctor may ask:
- When did symptoms first start?
- Have symptoms been continuous or occasional?
- How severe are the symptoms?
- What, if anything, seems to improve the symptoms?
- What, if anything, appears to worsen the symptoms?
- How do the symptoms affect your child's daily life?
- Have any relatives been diagnosed with schizophrenia or another mental illness?
- Has your child experienced any physical or emotional trauma?
- Do symptoms seem to be related to major changes or stressors within the family or social environment?
- Have any other medical symptoms, such as headaches, nausea, tremors or fevers, occurred around the same time that the symptoms started?
- What medications, including herbs, vitamins and other supplements, does your child take?
Sept. 29, 2016 |
History of Vacuum Diode
On November 16, 1904, the first vacuum diode was invented by Sir John Ambrose Fleming. It is also called the Fleming valve, and it was the first thermionic valve. In those days the p-n junction did not yet exist in the electronics field. A conceptual figure of a vacuum diode is shown below.
How Does Vacuum Tube Diode Work?
A vacuum diode works much like a modern diode, but it is considerably larger. It consists of an evacuated container with a cathode and an anode inside, and these two electrodes are connected across a voltage source.
The anode is held at a positive voltage with respect to the cathode. The device works on the principle of thermionic emission: a filament heats the cathode, so electrons are emitted from it and are attracted towards the anode. If the positive voltage applied to the anode is not sufficient, the anode cannot attract all of the electrons that the hot filament causes the cathode to emit.
As a result, a cloud of electrons accumulates in the space between the cathode and the anode. This is called the space charge. The space charge repels further emitted electrons back towards the cathode, so emission effectively stops and no current flows through the circuit.
If the voltage applied between the anode and the cathode is increased gradually, more and more of the space-charge electrons are drawn to the anode, leaving room for further emitted electrons. Increasing the anode-cathode voltage therefore increases the effective rate of emission.
At the same time the space charge gradually disappears, being neutralised at the anode. At a certain applied voltage the entire space charge vanishes and there is no longer any obstruction to emission from the cathode. Electrons then flow freely from cathode to anode through the vacuum, and conventional current flows from anode to cathode.
On the other hand, if the anode is made negative with respect to the cathode, the anode itself emits no electrons because it is cold rather than hot, and the electrons emitted from the heated cathode are not drawn to it. Repelled by the negative anode, a strong space charge accumulates between anode and cathode; this space charge in turn pushes further emitted electrons back to the cathode, so effectively no emission takes place and no current flows in the circuit. A vacuum diode therefore allows current to flow in one direction only.
Under reverse bias, then, the vacuum diode does not conduct. The vacuum tube was the basic component of electronics throughout the first half of the twentieth century, and was common in radio, television, radar, sound reinforcement, sound recording, telephone systems, analog and digital computers, and industrial process control.
V-I Characteristics of Vacuum Diode
The V-I characteristics of a vacuum diode is shown below.
The size of the space charge depends upon the rate at which electrons are emitted from the cathode while the space charge forms, and that emission in turn depends upon the temperature to which the cathode is heated. If the temperature is increased, the space charge grows, and the anode voltage required to neutralize it is correspondingly higher.
Thus the same vacuum diode has different V-I characteristic curves at different cathode temperatures; the figure shows only three of them: one for a cathode temperature T °C, one for a temperature above T °C and one for a temperature below T °C. As the anode voltage is gradually increased from zero, the anode-to-cathode current rises with it: because the space charge limits emission from the cathode, the current grows as the space charge weakens.
This zone of the characteristic is called the space charge limiting region, as shown in the figure. Once the space charge has vanished, the electron emission becomes constant and depends solely on the cathode temperature; here the current in the vacuum diode saturates. When no voltage is applied to the anode there should, in theory, be no current in the circuit, but in practice there is: because of statistical fluctuations in their velocities, some electrons are energetic enough to reach the anode even with no anode voltage. The small current caused by this phenomenon is known as the splash current.
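For readers who want the quantitative version, a standard textbook result (not stated in the article above) is that in the space charge limiting region the anode current of an idealised planar diode follows the Child-Langmuir three-halves-power law, Ia = K x Va^(3/2), where Ia is the anode current, Va is the anode-to-cathode voltage, and K (the perveance) is a constant fixed by the electrode geometry. The law holds only up to the temperature-limited saturation current described above.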
Use of Vacuum Tube Diodes
Gradually, p-n junction semiconductor devices came onto the market and largely replaced vacuum tubes. The vacuum diode is the most basic vacuum-tube structure. Vacuum tubes are, however, still used widely around the world. The applications for vacuum tubes include:
- Atomic Clocks
- Audio Systems
- Car Dashboards
- Cellular Telephone Satellites
- Computer Monitors
- DVD Players and Recorders
- Electromagnetic Testing
- Electron Microscopes
- Gas Discharge Systems
- Gas Lasers
- Guitar Amplifiers
- Ham Radio
- High-speed Circuit Switching
- Klystron Tubes
- Industrial Heating
- Ion Microscopes
- Ion Propulsion Systems
- LCD Computer Displays
- Microwave Systems
- Microwave Ovens
- Military Systems
- Mobile Phone, Bluetooth and Wi-Fi Microwave Components
- Musical Instrument Amplifiers
- Particle Accelerators
- Photo multiplier Tubes
- Plasma Panel Displays
- Plasma Propulsion Systems
- Professional Audio Equipment
- Radar Systems
- Radio Communications
- Radio Stations
- Recording Studios
- Solar Collectors
- Sonar Systems
- Strobe Lights
- Satellite Ground Stations
- Semiconductor Vacuum Electronic Systems
- TV Stations
- Vacuum Electron Devices
- Vacuum Panel Displays
Types of Vacuum Diodes
The vacuum diode tubes are classified as
- Frequency range wise (audio, radio, microwave)
- Power rating wise (small signal, audio power)
- Cathode/filament type wise (indirectly heated, directly heated)
- Application wise (receiving tubes, transmitting tubes, amplifying or switching)
- Specialized parameters wise (long life, very low microphonic sensitivity and low-noise audio amplification)
- Specialized functions wise (light or radiation detectors, video imaging tubes) |
C9 Lectures: Dr. Erik Meijer - Functional Programming Fundamentals Chapter 10 of 13
In Chapter 10, Declaring Types and Classes, Dr. Meijer teaches us about type declarations, data declarations, arithmetic expressions, etc. In Haskell, a new name for an existing type can be defined using a type declaration:
type String = [Char]
String is a synonym for the type [Char].
Like function definitions, type declarations can also have parameters, and they can be nested; they cannot, however, be recursive. In the examples below, Pos and Trans are legal, but the recursive Tree declaration is not permitted:
type Pos = (Int,Int)
type Trans = Pos -> Pos
type Tree = (Int,[Tree])
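As an example of a parameterised type declaration (an illustration in the spirit of the chapter; it may not match the slides exactly), a lookup table of keys and values can be declared and used like this:
type Assoc k v = [(k,v)]
-- Look up the first value associated with a given key.
find :: Eq k => k -> Assoc k v -> v
find k t = head [v | (k', v) <- t, k == k']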
A completely new type can be defined by specifying its values using a data declaration:
data Bool = False | True
Bool is a new type, with two new values False and True.
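Functions over a new data type are defined by pattern matching on its constructors. A small illustrative example (not necessarily the one used on the slides):
data Answer = Yes | No | Unknown
answers :: [Answer]
answers = [Yes, No, Unknown]
-- "Negate" an Answer by matching each constructor.
flipAnswer :: Answer -> Answer
flipAnswer Yes     = No
flipAnswer No      = Yes
flipAnswer Unknown = Unknown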
Get the presentation slides here |
This board game can help your students practise the indefinite articles a/an. The task: to name each picture using the correct article. I created this board game for 8-9-year-olds and used words we have studied before. Here are the words in the order they are listed in the board game: a table, a window, an orange, a bag, a cat, an apple, a pen, an umbrella, a dog, a camera, an eraser, a chair, a house, a desk, a pencil, an egg, a banana, an ice-cream, a pencil case, a book, a door. |
Scientific Name: Cygnus atratus
Habitat: lakes and rivers
Diet: aquatic vegetation, roots
Life Span: 10 years
Young: 5 – 6 eggs per clutch
Size: up to 9 kg
The Black Swan is a large waterbird that is native to the southern regions of Australia. The species was hunted to extinction in New Zealand but later reintroduced. It has been introduced as an ornamental waterbird in other regions of the world as well.
Black swans are mostly black-feathered, though they have white flight feathers that are visible only in flight. Their bills are bright red, and their legs and feet are grey. Black swans hatch with grey down and do not get their black feathers until the age of two. Swans have more than twenty neck vertebrae, making their necks very flexible (giraffes have only seven). They have a wingspan of 1.6 to 2 meters.
The Black Swan is almost exclusively a herbivore, with its diet consisting mainly of aquatic and marshland plants.
Black Swans pair for life and often reuse the same nest, which is essentially a large heap or mound of reeds, grasses and weeds, 1 to 1.5 meters across and up to 1 meter high. Both parents care for the young.
Find out more about Black Swans by visiting Beauty of Birds, or by stopping by the Edmonton Valley Zoo today! |
To start off Creative Leap 2018 we’ll be exploring the process of asemic writing. I've presented this technique in previous courses and believe that it's a fantastic exercise to play with individual handwriting, to loosen up and to explore mark making before expanding into the different mini-projects that'll incorporate handwriting.
A is for Art, Aesthete, Azure and Asemic
A stands for ART. Art is subjective, emotive and personal to each of us.
Art is something that makes you breathe with a different kind of happiness. -- Anni Albers
Aesthete - a person who has or who affects a highly developed appreciation of beauty, especially in poetry and visual arts.
Azure – a beautiful fresh sky blue colour
A is also for ASEMIC.
In essence, asemic means having no specific semantic content. Asemic writing looks like writing, but the content is generally illegible. It is almost like the marks you make when you try out a pen for the first time on a scrap of paper: intuitive scribbles or doodles that look a bit like our individual handwriting. Asemic writing as a concept is wide and applicable in many ways, ranging from abstract, symbolic and trans-linguistic work to poetry, calligraphy and almost-legible examples. In principle, it uses the shapes of intuitively formed letters as an art form.
The concept of asemic writing is wide, so here are links to explain in more detail, as well as some visual inspiration for your project.
To start off, a pen is better than a pencil, both to get going and to avoid the urge to erase marks. In the context of this project, making ‘mistakes’ is good, and is evidence that you are experimenting and learning.
Before trying the projects in earnest do the following:
On a regular copy paper sheet (as used in home printers) write the alphabet as one continuous line of letters filling the whole page in your normal handwriting.
Once you have done this, write the twenty-six individual letters spaced out, about four letters to a row. Use your normal handwriting; this is important. You can do the same with the capital letters as well.
Now, for the asemic writing project.
Having done the letter sheets in a regular pen or fineliner, try the asemic writing with a variety of utensils: pen, Sharpie or other felt-tip marker, or paintbrush. Have a go at writing ‘shapes’ on a few sheets of paper of different sizes in order to try different scales. Again, you can try this on white copy paper if you would like to experiment a bit.
Watercolour or dip pen would be better on a thicker paper than copy paper.
The objective is to create free yet controlled marks that you feel embody your handwriting. The action of ‘using letters’ to create marks is important; it is not just random scribbling.
Don’t overthink the process, start with names, the letters of the alphabet, or a short written piece like an inspirational quote and allow yourself to explore the lines and loops of writing, the rhythm, exaggerating and elongating shapes.
If the total abandonment of coherent text and content is hampering your process, try your hand at writing some poetry or a chapter from a magazine or book you are reading.
Looking for inspiration? To explore more, try writing using words, sentences, names or a favourite saying or expression. Do it a few times; also try doing it with your eyes closed. Trust your hands to do the writing. Move the paper in different directions, write sideways or in a grid. What appeals to you? Explore the possibilities: elongate, distort, and enjoy inventing shapes that will trick the eye into reading them as a form of handwriting as you adjust the letter shapes to play with the marks and lines created. Change the scale: go big or small, use thicker and thinner lines, vary the pressure as you write... mix it up.
Once you start to get the idea of the creative process try it with your name or particular letters you like the shape of... I love the letters F G and Z. Add a few vowels, round and spiked shapes and play with just a few letters that you like the look of. Observe the way you write, adapting the letter shapes, play. See how different kinds of paper respond to the marks and materials you use. What suits your style, what challenges the way you write? Do you like using a dip pen or paintbrush to write/paint letters? Try it on a variety of surfaces: smooth paper, watercolour paper, tissue paper etc.
If you have the high-flow white ink or another opaque white ink, use a dip pen and write on paper that has been covered with black, blue or another dark colour. Remember to allow the background to dry first!
The asemic writing mini-project's primary objective is to get you to review how you write versus what you write. Letters becoming marks and shapes are linked to expressive art. There are forty-six different alphabets and writing systems used in the world today all with their own shapes to convey meaning. Many of them are visually very beautiful, an art form in itself.
Remember this is an exercise: you are trying something new and it does not have to be pretty. Enjoy this mark making, play with the shapes that come easily, and use them to explore new directions for creating.
Words and phrases to keep in mind: nonsensical, aesthetics, flowing marks, mark making, visual appeal, repetition, pattern, overlapping, strokes, lines, curves, expression, sensitive, calligraphically, unique, scale, flowing. |
The earthquakes and tsunami in Central Sulawesi that killed more than 2,000 people in September 2018 did not only leave a deep sorrow. It made us rethink the relationship between humans, technology and nature in Indonesia.
Indonesia has seen many natural disasters: landslides, tidal waves, earthquakes and tsunamis. Yet the country often fails to prepare itself for catastrophes. Is it because people are unable to use technology to help them deal with disasters? Or is it because of a fraught relationship between humans and nature?
Our research in two areas of Indonesia, Semarang in Central Java and Aceh on the island of Sumatra, puts forward two important arguments for understanding relations between humans, technology and nature in responding to disasters. First, relations between humans and nature are highly determined by the economy and by policies. Second, the development and use of disaster mitigation technologies will not be optimal while the digital divide persists.
At its most basic, technology is anything that eases human work. In the context of reducing the impacts of disasters, we use technology to minimise loss and destruction by identifying disaster-prone areas, saving lives, reducing economic losses, and supporting mitigation and post-disaster rehabilitation.
Generally, the relationship between humans and technology can be explained through technological determinism. Meanwhile, relations between humans and nature are mostly seen as exploitation. Humans are deemed powerless in the face of technology, but at the same time are destructive towards the environment.
But when disaster strikes, humans and their technology cannot defend themselves against nature. The relationships between these three elements are complex because they are multidimensional, contextual and temporal.
For example, new settlements built in disaster-prone areas around Banda Aceh, Lembang, Bandung, and several places in Semarang reflect disharmony between humans and nature.
In Banda Aceh, coastal areas affected by the 2004 tsunami have become crowded by new settlement for a number of reasons. The picture below shows an area called Ulee Lheue taken in April 2018, 14 years after being severely damaged by the tsunami.
What is happening in Ulee Lheue shows how people are overly confident that they are secure from disasters. People are relying more on their intuition instead of acquiring scientific knowledge to mitigate disasters.
We could not, however, conclude that people are merely ignorant. Social backgrounds, culture, politics, education and economic reasoning might equally contribute to uncontrolled settlements.
Strict policies needed
People live in disaster-prone areas partly for economic reasons. In the case of Semarang, where tidal flooding submerges houses, people stay due to economic factors. With no other skills, fishermen in Tambak Lorok, Semarang, are forced to live with disasters.
In this situation, government policies can help shape the relationship between humans and nature. Governments should take a firm position to stop people living in disaster-prone areas. But, to be able to do that, they should offer good alternatives for the people. Suggested areas for relocation should not only be safe from disasters but also provide economic opportunities.
Government policies on disaster management, through regulation and various risk reduction programs, should help people become aware of disaster risks and of the technology and mitigation infrastructures that exist for them.
In Aceh, research by the Tsunami and Disaster Mitigation Research Center (TDMRC) at Syiah Kuala University shows that people are not aware of evacuation procedures and of the existing facilities meant to safeguard them during disasters. So, while the government builds evacuation spots and buildings in tsunami-prone coastal areas, it should also work hand in hand with communities to empower them. The government should educate people about the procedures and the function of evacuation buildings.
In the context of a digital society, people are heavily dependent on information and communication technology to mitigate and respond to disasters. One of the problems in the global south like Indonesia is the digital divide.
The problem is not merely limited access. The digital divide is influenced by lack of interest in learning to use new technology, low technical capability and inefficient tools. These contributing factors should be resolved so people can adapt and use technology better in disaster management.
The problem is that many people are not interested in using these technologies. In Semarang, CoREM publishes the Rob Calendar application to help people anticipate and react to tidal floods. But not many people download this app.
Additionally, the spread of hoaxes related to natural disasters is now becoming a serious problem in Indonesia because it has disrupted concerted efforts to respond to disasters.
Mass integrated technology
Individual access to technology, such as mobile phone ownership by people living in disaster-prone areas, does not guarantee effective disaster management if there’s no collective action to maximise its use. Mass integrated technology, such as early warning systems in public spaces, is necessary.
People who manage tsunami early warning systems should embrace new technologies and feature more integrated information on earthquakes, floods, volcanic eruptions and avalanches.
We should also develop alternative communication systems, such as amateur satellite or radio communication. As the Palu and Donggala tsunami showed, communication networks were cut off during the emergency.
At the end of the day, we need to continuously re-adjust the balance in using technology. We should not forget that humans might get entrapped by technological dependency, but we also do not want to neglect the importance of technologies in saving our lives. |
* Note: The Farm at Green Village no longer offers a warranty on any boxwoods sold due to the ease of spreading of this disease.*
What is Boxwood Blight?
Boxwood blight (also known as box blight), caused by the fungus Calonectria pseudonaviculata, is a serious fungal disease of boxwood that results in defoliation and decline of susceptible boxwood. Once introduced to a landscape, boxwood blight is very difficult and costly to control with fungicides. The major means of spread of this disease is by movement of contaminated plant material (e.g. container or field-grown boxwood, boxwood greenery used for holiday decoration), but boxwood blight spores can also be spread on pruning tools, clothing, equipment and anything that might have contacted infected plants. Home growers can best protect their boxwood by following the measures listed below to avoid introduction of the disease to their landscape.
Symptoms of Boxwood Blight
The most characteristic symptoms of boxwood blight on susceptible boxwood cultivars are brown leaf spots that lead to defoliation and black streaking on boxwood stem tissue. Some cultivars of boxwood can harbor the boxwood blight pathogen, yet show no symptoms; these cultivars are considered partially resistant (also referred to as “tolerant”) cultivars (see Table 1 in PDF). Fungicides can also mask symptoms of the disease on susceptible cultivars.
Other plant hosts
Pachysandra terminalis (Japanese spurge), Pachysandra procumbens (Allegheny spurge) and Sarcococca species (sweetbox), which are in the same family (Buxaceae) as boxwood, are also susceptible to boxwood blight and infected plants of these species could introduce the disease to a landscape. Symptoms of the disease on P. terminalis are brown leaf spots. New host plants may be identified as researchers learn more about this disease, but hosts will likely be limited to members of the Buxaceae family.
Avoiding introduction of boxwood blight to a landscape
Because the boxwood blight pathogen is not well adapted to long-distance spread by long-distance air currents, the most likely entry point for the disease in a home landscape is by accidental introduction of infected plant material and/or contaminated tools, equipment or other items. Home growers who have boxwood in the landscape should carefully adhere to the following recommendations to avoid inadvertent introduction of this devastating disease to their landscape:
- Prior to purchase, carefully inspect plants for symptoms of boxwood blight.
- Be aware that partially resistant cultivars of boxwood (Table 1 in PDF) could act as a “Trojan horse” in a landscape because partially resistant cultivars may harbor the boxwood blight pathogen, yet not show obvious symptoms.
- Be aware that fungicide treatment can suppress symptom development.
- Monitor established boxwood and newly planted boxwood on a regular basis for any symptoms of boxwood blight.
- Be aware that boxwood greenery used for holiday decoration could harbor the boxwood blight pathogen.
- To minimize risk of introducing the disease by this route, do not use boxwood greenery near landscape boxwood.
- When disposing of holiday greenery, double-bag it in sealed plastic bags and dispose of it in the landfill. Do not compost boxwood greenery.
The boxwood blight fungus can be spread from one property to another via contaminated spray hoses, pruning tools, wheelbarrows, tarps, vehicles, clothing, shoes, or anything to which the sticky spores of the boxwood blight fungus might adhere.
What to do if boxwood blight is diagnosed in the landscape
Since boxwood blight cannot be effectively controlled once the infection begins, prompt removal of any diseased boxwood is recommended to help prevent spread of disease to healthy plants. Associated leaf debris should also be removed. Be aware that removing diseased boxwood and leaf debris will not eradicate the boxwood blight pathogen from the location, since the pathogen produces long-lived survival structures that can persist in the soil for 5 to 6 years. These survival structures can infect susceptible replacement boxwood planted in locations where the disease has been diagnosed. Therefore, replanting susceptible boxwood cultivars or members of the Buxaceae family in a location where infected boxwood has been removed is not advisable. Partially resistant boxwood cultivars (Table 1 in PDF) could be used as replacement plants, but repeated fungicide applications will be necessary to protect any susceptible boxwood cultivars that remain in the landscape. Boxwood cultivars with a high level of resistance (termed “most resistant” in Table 1 in PDF) will not require fungicide treatment.
After removal of diseased plants and debris, different management approaches should be considered, depending on the particular landscape situation. The two different scenarios outlined below illustrate two recommended, but different, management approaches:
• SCENARIO 1
A landscape contains highly valued boxwood that are susceptible to boxwood blight, symptoms of boxwood blight were observed in the planting, and the disease was confirmed by a plant diagnostic lab.
Immediate Actions Recommended
Remove diseased boxwood and leaf litter promptly. Remove leaf litter from soil surface by vacuuming, raking, or sweeping. If leaf debris has been incorporated into the soil, removing soil to a depth of 8” to 12” may help eliminate fungal inoculum of the pathogen. Diseased boxwood, leaf debris, and soil should be bagged and removed to the landfill OR buried 2’ deep in soil away from boxwood plantings. Do not compost boxwood debris or plant material.
Because the fungal spores can stick to tools, equipment, etc., sanitize all tools, equipment, tarps, shoes, gloves, etc., used after removing plants to prevent spread of fungal inoculum to healthy boxwood (Table 3 in PDF).
Promptly begin a preventative fungicide spray program on any susceptible boxwood in the landscape to prevent further disease outbreaks.
Be aware that pets, children, and other animals can also potentially move the sticky spores of this fungus to new locations.
Long-term Actions Recommended
Repeat fungicide applications (7- to 14-day intervals, according to product label) to susceptible boxwood throughout the growing season for the life of the boxwood plants. If temperatures warm after the growing season has ended, additional fungicide application may be warranted. Warm temperatures plus leaf wetness are very favorable for boxwood blight infection and spread, so any time temperatures are over 60°F and rainfall is expected, a preventative fungicide spray program should be in place.
- Monitor boxwood weekly during the growing season for symptoms of boxwood blight.
- Remove any symptomatic plants/debris/soil as outlined above.
Boxwood debris should never be composted.
- Bag and dispose of in the landfill OR bury 2’ deep in a location away from boxwood plantings.
- When working in boxwood plantings, minimize the chance of spreading boxwood blight inoculum that could be present on shoes, gloves, clothing, equipment and tools by sanitizing between plants/plantings (Table 3 in PDF).
- Implement the suggested cultural practices in the section below: “Cultural Practices Recommended to Minimize Chance of Boxwood Blight.”
• SCENARIO 2
A landscape contains boxwood plants that developed symptoms of boxwood blight and the disease was confirmed by a plant diagnostic lab (e.g. Plant Disease Clinic); however, the boxwood in this landscape are not highly valued specimens. In this situation the simplest approach would be to replace boxwood blight-susceptible boxwood with boxwood cultivars that possess a high level of resistance (termed “most resistant” in Table 1 in PDF). This will allow the grower to enjoy the beauty of boxwood plantings without the significant burden of repeated fungicide sprays to susceptible boxwood over the lifetime of the planting.
All susceptible boxwood should be removed, including the roots. Infested plant debris should be removed by raking, sweeping, or vacuuming, then bagged and taken to the landfill. Alternatively, debris can be buried 2’ deep in soil away from landscape plantings. Do not compost diseased plant material. Be aware that removing diseased boxwood and leaf debris will not eradicate the boxwood blight fungal pathogen from the location, since the pathogen produces long-lived survival structures that can persist in the soil for 5 to 6 years.
If boxwood leaf debris has been incorporated into the soil, removing soil to a depth of 8” to 12” may help eliminate survival structures of the pathogen. Dispose of soil and leaf litter as recommended above for diseased plant material. Do not compost.
Sanitize all tools, equipment, tarps, shoes, gloves, clothing, etc., used when removing plants to prevent spread of fungal inoculum that can cause infection on healthy boxwood (Table 3 in PDF).
Fungicide management of boxwood blight in the home landscape
Important considerations for home growers when deciding whether to implement a preventative fungicide management program for boxwood in the home landscape are:
- Fungicides cannot eradicate the disease from infected plants.
- Once boxwood blight is present in the landscape, it is very difficult to control. Fungicide applications that are begun after the disease is already present do not provide acceptable disease control, according to the latest research results from North Carolina State University.
- Fungicides labeled for use by home growers are protectant fungicides and must be used preventatively.
- An effective preventative fungicide spray program will require repeated applications (at 7- to 14-day intervals, depending on product label and environmental conditions) of fungicides throughout the growing season.
- Post-growing season: Warm temperatures with leaf wetness results in high boxwood blight disease pressure, so if temperatures are over 60°F and a rain event is expected, a preventative fungicide spray should be in place post-season as well.
- Thorough fungicide coverage of boxwood foliage is difficult, yet necessary for protection from the disease.
Currently, effective fungicide options for home growers are limited; however, professional applicators in the home landscape have more product options. Future research may lead to development of effective control of boxwood blight for home growers with fewer fungicide applications. For a list of specific fungicides labeled for control of boxwood blight in the landscape for use by non-professional applicators, refer to Table 2 in the PDF.
Cultural Practices Recommended to Minimize Chance of Boxwood Blight
Minimize leaf wetness and promote good air circulation in boxwood plantings to minimize disease pressure. Examples include:
- Choose cultivars that have a more open-growth habit (e.g. Buxus microphylla cultivars as opposed to B. sempervirens ‘Suffruticosa’).
- Avoid overhead irrigation.
- Ensure good air circulation in plantings by providing adequate spacing between plants. In general, growers may want to avoid close spacing of boxwood and, therefore, hedges.
- Mulch boxwood plantings to reduce the spread of boxwood blight inoculum to foliage by splashing water.
- Avoid working in boxwood plantings when the foliage is wet and fungal inoculum is more likely to be spread.
- Practice good sanitation to avoid moving infested soil or plant material to landscape locations where boxwood are located.
- Sanitize pruning tools and other tools/equipment/ clothing/tarps between boxwood plantings and also between other members of the Buxaceae family.
- Bag and dispose of all boxwood debris (including holiday greenery) in the landfill or bury 2’ deep in soil away from boxwood plantings.
- Be aware that allowing boxwood tippers onto your property to collect greenery may increase the risk of introduction of boxwood blight if the tippers visit multiple boxwood plantings and do not follow good sanitation practices.
- If you hire landscape professionals to spray or otherwise maintain landscape boxwood, discuss your concern about boxwood blight with them to learn about management practices they may have in place to avoid movement of boxwood blight from one client’s landscape to another. Then you can decide if their approach is acceptable to you.
Best Management Practices for Boxwood Blight, Virginia Cooperative Extension, Virginia Department of Agriculture and Consumer Services
|
West Nile virus (WNV) has emerged in recent years in temperate regions of Europe and North America, presenting a threat to public and animal health. The most serious manifestation of WNV infection is fatal encephalitis (inflammation of the brain) in humans and horses, as well as mortality in certain domestic and wild birds. WNV has also been a significant cause of human illness in the United States in 2002 and 2003.
West Nile virus was first isolated from a febrile adult woman in the West Nile District of Uganda in 1937. The ecology was characterized in Egypt in the 1950s. The virus became recognized as a cause of severe human meningitis or encephalitis (inflammation of the spinal cord and brain) in elderly patients during an outbreak in Israel in 1957. Equine disease was first noted in Egypt and France in the early 1960s. WNV first appeared in North America in 1999, with encephalitis reported in humans and horses. The subsequent spread in the United States is an important milestone in the evolving history of this virus.
West Nile virus has been described in Africa, Europe, the Middle East, west and central Asia, Oceania (subtype Kunjin), and most recently, North America. Outbreaks of WNV encephalitis in humans have occurred in Algeria in 1994, Romania in 1996-1997, the Czech Republic in 1997, the Democratic Republic of the Congo in 1998, Russia in 1999, the United States in 1999-2003, and Israel in 2000. Epizootics of disease in horses occurred in Morocco in 1996, Italy in 1998, the United States in 1999-2001, and France in 2000, and in birds in Israel in 1997-2001 and in the United States in 1999-2002.
In the U.S. since 1999, WNV human, bird, veterinary or mosquito activity has been reported from all states except Hawaii, Alaska, and
Human Case and Virus Distribution Information
From 1999 through 2001, there were 149 cases of West Nile virus human illness in the United States reported to CDC and confirmed, including 18 deaths. |
In the war against climate change, carbon dioxide is the enemy. But when used to boost geothermal energy, it can be an ally.
Scientists have known for years that carbon dioxide can be captured from power plants and stored underground to reduce emissions, but the practice is cost-prohibitive without federal incentives. Researchers have now found a way to use carbon dioxide to enhance and expand geothermal energy, which can offset the costs of capturing and storing the gas.
“The goal is to simultaneously sequester carbon dioxide and use it to generate electricity,” says Jeffrey Bielicki, an energy and sustainability researcher at Ohio State University who is part of a team exploring the unusual combo’s power potential.
Underground heat is typically mined by pumping water into the earth, then bringing it back up hot to power turbines that generate electricity. But carbon dioxide can outperform water: The greenhouse gas extracts heat about two to four times more efficiently. In a computer simulation, researchers found that compared with an average geothermal plant, a carbon dioxide-fed geothermal plant could produce 10 times more power while also locking away the annual carbon dioxide emissions from as many as three midsize coal-fired power plants.
Preliminary tests of the technology, called carbon dioxide plume geothermal (CPG), are underway. Meanwhile, researchers are also looking for other ways to use carbon dioxide in tandem with renewable energy. “We need to reduce carbon dioxide emissions, and expand and develop renewable energy technology,” says Bielicki. “What I really like about my research is it combines the two.”
[This article originally appeared in print as "Geothermal's CO2 Boost."]
|
Botulism is an illness caused by a toxin produced by the bacterium Clostridium botulinum. Spores of C. botulinum are naturally present in the environment, often living in garden soil. Although botulism is a serious condition, fortunately it is very rare.
How can you get botulism?
You can get botulism in two different ways:
- Through food – usually honey, home preserves, canned food, meat, seafood and soft cheeses.
- Through a wound – this is more common in people who work on the land or who use injecting drugs.
There is a type of botulism called infant botulism. In this situation, a child gets the Clostridium botulinum bacteria in their gut, and it produces the toxin from there. Honey sometimes causes this – children under 12 months of age should not be given honey.
You can’t catch botulism from someone else.
The main symptom of botulism is extreme weakness – weakness so severe that it is hard to open your eyes, hard to speak and hard to find the strength to breathe. It is sometimes fatal.
Babies with botulism can’t tell you how they feel. But it makes it hard for them to cry, to move, to eat and to drink.
If you suspect that you or someone you know has botulism, see a doctor as soon as possible or go to your nearest emergency department.
If medical help is sought quickly, treatment can reduce the severity of the condition.
Botulism is very rare, but you can reduce the risk further by:
- not feeding honey to infants
- taking care when preserving food – the bacterial spores can survive at temperatures of 100 °C, so make sure food is well cooked and containers are thoroughly sterilised
- throwing away canned food that is past its use-by date, damaged or spoiled
- covering open wounds when gardening or in contact with soil
Last reviewed: February 2018 |
Most people are confused when they first hear the words “live rock”. Certainly rocks and sand aren’t alive, are they?
But aquarium owners know that live rock is essential for a healthy ecosystem within an aquarium, and without it, their other living aquatic pets and organisms won’t last for long.
“Live rock” is rock that has been harvested from the ocean, and is placed in a saltwater aquarium environment without being sterilized. These rocks are often not rocks at all, but skeletons of long-dead aquatic organisms. They are host and home to various corals, algae, sponges, invertebrates, and bacteria and microorganisms that are required to keep a closed-loop saltwater ecosystem healthy.
Primary Purpose of Live Rock
The primary purpose of using live rock in an aquarium is to create a healthy nitrogen cycle. The nitrogen cycle works like this:
- Fish and marine creatures produce waste in the form of ammonia. If allowed to build up in the water, this ammonia is toxic to life in the aquarium.
- Fortunately, bacteria called Nitrosomonas consume ammonia and convert it into nitrite. Unfortunately, nitrite is also toxic to fish if it accumulates in an aquarium.
- If another type of bacteria, Nitrobacter, is present, it will consume nitrite and convert it into nitrate. Nitrate is less toxic than the previous compounds, but it is still harmful to fish and will kill corals.
- But, finally, denitrifying bacteria consume nitrate and convert it into nitrogen gas. This harmless gas leaves the aquarium in the form of air bubbles.
This process, in which fish and aquarium pets produce toxic waste, which beneficial bacteria then convert into a harmless gas, is a cycle necessary to sustain life in the aquarium. In this way, due to the presence of a variety of helpful bacteria and microorganisms, live rock acts as a natural biological water filter, helping the aquarium stay clean and healthy.
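To make the chain of conversions concrete, here is a minimal Python sketch of the cycle. The rate constants and starting amounts are purely illustrative assumptions, not measured values; real conversion rates depend on the bacterial colonies, temperature and tank conditions.

```python
# Minimal sketch of the aquarium nitrogen cycle using hypothetical
# first-order conversion rates; real rates depend on bacteria colony
# size, temperature, and tank conditions.

def step(ammonia, nitrite, nitrate, dt=1.0,
         k_ammonia=0.3, k_nitrite=0.25, k_nitrate=0.1):
    """Advance the cycle by one time step (rates are illustrative only)."""
    to_nitrite = k_ammonia * ammonia * dt   # Nitrosomonas: ammonia -> nitrite
    to_nitrate = k_nitrite * nitrite * dt   # Nitrobacter: nitrite -> nitrate
    to_gas     = k_nitrate * nitrate * dt   # denitrifiers: nitrate -> N2 gas
    ammonia -= to_nitrite
    nitrite += to_nitrite - to_nitrate
    nitrate += to_nitrate - to_gas
    return ammonia, nitrite, nitrate

# Start with a pulse of fish waste (arbitrary units) and watch it decay.
a, ni, na = 10.0, 0.0, 0.0
for day in range(30):
    a, ni, na = step(a, ni, na)
print(round(a, 2), round(ni, 2), round(na, 2))
```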
Other Benefits Live Rock Brings
However, while supporting the nitrogen cycle in a saltwater aquarium is the primary role of live rock, that’s not all it can do. Live rock brings many other healthy and beneficial properties to an aquarium. Live rock also:
- Helps stabilize water chemistry. By slowly releasing calcium carbonate over time, live rocks help to maintain the pH level of the water.
- Adds biological diversity. Because the rock is naturally harvested from the ocean, it may come along with a diverse array of species that will surprise the aquarium owner. It may have algae and small plant life in surprising colors, and it will continue to live, grow, and change over time.
- Adds beauty. Live rock is essential for the natural look of a marine habitat. It adds beauty, texture, and interest to the look of an aquarium.
- Provides habitat. Just like in nature, where fish, crabs, snails, and other marine animals find shelter in and among rocks on the ocean floor, live rock will create small habitats within the home aquarium. Many fish require hiding places to feel safe and reduce stress, and make the aquarium feel more like “home”.
- Differentiates light and water flow. Many species of marine creatures and plants like different amounts of light and current available to them. Live rock creates micro-climates where different species can thrive.
For all these reasons and more, live rock is both necessary and desirable in a saltwater aquarium. Because it is alive and natural, no two specimens are exactly alike, and no two specimens have the exact same composition of micro- and macro-organisms. Each one is unique, and plays a special role in the chemical and visual composition of an aquarium. |
Many teens complain of feeling tired all of the time or having frequent headaches. For some teens, the cause of these symptoms could be anemia. Jasmine Reese, M.D., from Johns Hopkins All Children’s Hospital, is here to share with us some helpful information regarding anemia in adolescents and what usually causes it.
What is anemia and why is it important to understand?
In brief, anemia is when your body does not have enough red blood cells. Your blood is made up of thousands of red blood cells that carry oxygen throughout your body and to your organs. Each red blood cell has a protein called hemoglobin that carries the oxygen. In order for hemoglobin to carry oxygen, it needs iron, which comes from the food we eat.
What causes anemia?
Anemia can happen if your body has a chronic illness that doesn’t allow your body to make enough red blood cells, for example a bone marrow illness or infection. It can happen if your body is losing too many red blood cells, for example a bleeding disorder. Anemia also can happen in some autoimmune disorders where the body is mistakenly destroying red blood cells. However, it is important to know that the most common form of anemia in adolescents in the United States is iron deficiency anemia. This typically happens when a teen does not get enough iron in their diet. Other common reasons for teens to be iron deficient include rapid growth spurts and the onset of menstrual cycles for girls.
What are the symptoms of iron deficiency anemia?
Over time, teens can start complaining of feeling tired, weak, having frequent headaches and having low energy. Their skin may start to look pale, they may feel that they have shortness of breath or that they have a faster heart rate than normal.
What will your doctor do and how is iron deficiency anemia treated?
If you are worried about your teen having anemia, you should discuss this with your pediatrician or doctor. The evaluation will include a detailed history (including diet history), a physical exam and some blood work to check your teen’s hemoglobin levels. Your doctor might prescribe an iron supplement, or they may just recommend including iron-rich foods in your teen’s daily diet. Examples include lean meats, raisins, dried beans, tomato sauce, eggs, nuts, molasses, iron-rich cereals and bread.
On Call for All Kids is a weekly series featuring Johns Hopkins All Children’s Hospital medical experts. Visit HopkinsAllChildrens.org/Newsroom each Monday for the latest report. |
Action: Breed mammals in captivity
Key messages
- Three studies evaluated the effects of breeding mammals in captivity. One study was across Europe, one was in the USA and one was global.
COMMUNITY RESPONSE (0 STUDIES)
POPULATION RESPONSE (3 STUDIES)
- Abundance (1 study): A review of captive-breeding programmes across the world found that the majority of 118 captive-bred mammal populations increased.
- Reproductive success (2 studies): A review of a captive breeding programme across Europe found that the number of European otters born in captivity tended to increase over 15 years. A study in the USA found that wild-caught Allegheny woodrats bred in captivity.
- Survival (1 study): A review of a captive breeding programme across Europe found that the number of European otters born in captivity that survived tended to increase over 15 years.
BEHAVIOUR (0 STUDIES)
Captive breeding involves taking wild animals into captivity and establishing and maintaining breeding populations. It tends to be undertaken when wild populations become very small or fragmented or when they are declining rapidly. Captive populations can be maintained while threats in the wild are reduced or removed and can provide an insurance policy against catastrophe in the wild. Captive breeding also potentially provides a method of increasing reproductive output beyond what would be possible in the wild. However, captive breeding can result in problems associated with inbreeding depression, removal of natural selection and adaptation to captive conditions.
The aim is usually to release captive-bred animals back to natural habitats, either to original sites once conditions are suitable, to reintroduce species to sites that were occupied in the past or to introduce species to new sites. Some captive populations may also be used for research to benefit wild populations.
Studies that investigate the effectiveness of releasing captive-bred mammals are discussed elsewhere. Those studies are not included in this section, unless specific details about captive breeding were included.
Supporting evidence from individual studies
A review of a captive breeding programme in 1978-1992 across Europe (Vogt 1995) reported that the number of institutions successfully breeding European otters Lutra lutra, the number of otters born in captivity and the number surviving all tended to increase over 15 years. These results were not tested for statistical significance. The number of institutions keeping otters remained fairly stable (23-32) from 1978 to 1989, whilst the number of captive animals born and surviving tended to increase from 1978-1983 (born: 0-20; survived: 0-18) to 1984-1989 (born: 18-46; survived: 12-38). The authors reported that until 1990 breeding was only successful in about 10 collections, but that in 1991-1992, when the number of institutions participating in the programme increased to 55, the number that successfully bred otters almost doubled. In 1992 the total captive population was 196 individuals, of which 67% were captive born, and 43 out of 50 cubs survived. In 1990, 36 otter-keeping institutions (60% of those co-operating with the studbook) took part in the European breeding programme for self-sustaining captive populations of otters, and in 1992 fifty-five (91% of those included in the studbook) did so. These institutions provided information about their captive breeding populations from 1978-1992.
A study in 2009-2011 in a captive facility in Indiana, USA (Smyser & Swihart 2014) found that wild-caught Allegheny woodrats Neotoma magister bred in captivity. Over 26 months, 33 pairings resulted in copulation which produced 19 litters (58% pregnancy rate). Those litters comprised 43 pups (26 male, 17 female), of which 40 (24 male, 16 female) survived to weaning at 45 days. Overall, eight of 12 wild‐caught females produced offspring (1-5 litters) and four of six wild‐caught males sired litters (1-8 litters). In 2009 a captive breeding program was established using eight wild-caught individuals collected from the seven populations in Indiana and four caught from populations in Pennsylvania. The breeding population was maintained at 12-13 animals with a female bias (8:4). Seven new wild animals replaced five in 2010-2011. Individuals were housed in wire mesh enclosures (91 x 61 x 46 cm or 76 x 46 x 91 cm) with access to the opposite sex and an external nest box (23 x 23 x 23 or 36 cm). Enclosures were kept at 20°C with 13 hours of light per 24 hours. Captive‐reared juveniles were released into wild populations in April-July each year.
A review of captive-breeding programmes in 1970-2011 across the world (Alroy 2015) found that the majority of 118 captive-bred mammal populations increased in size. The average annual rate of population increase was 0.028, and only 17 populations (14%) declined (five ‘endangered’ or ‘critically endangered’ according to the IUCN Redlist). Authors reported that positive growth rates were maintained for a large majority of the populations in all IUCN categories except those of ‘least concern’. However, average growth rates declined from 1970-1991 (0.054) to 1992–2011 (0.021). Authors reported that there was a slight decrease in average death rate of populations over time and either no change in average birth rate, or lower birth rates after 1989. Population growth rates did not vary with body mass, but were reported to decrease as the ratio of individuals in programs to populations increased (see original paper for details). Counts of births, deaths and end-of-year totals of individuals in captive populations recorded in studbooks (excluding regional studbooks) were published in the International Zoo Yearbook. Those published from 1970 to 2011 were used to calculate rates of population growth for 118 captive-bred populations (81 species and 37 subspecies). Only populations for which the sum of end-of-year totals was at least 250 over the time period were included.
- Vogt P. (1995) The European Breeding Program (EEP) for Lutra lutra: its chances and problems. Hystrix, the Italian Journal of Mammalogy, 7, 247-253
- Smyser T.J. & Swihart R.K. (2014) Allegheny woodrat (Neotoma magister) captive propagation to promote recovery of declining populations. Zoo Biology, 33, 29-35
- Alroy J. (2015) Limits to captive breeding of mammals in zoos. Conservation Biology, 29, 926-931 |
Libya is a big country with large desert expanses. Its north coast is washed by the Mediterranean Sea. Even so, 2,400 years ago Herodotus journeyed through it and made notes which have survived. He referred to Libya as a … large peninsula! Moreover, on his travels there, presented briefly on another page, he reached as far as the Atlantians!
Regrettably, until now no one has given due consideration to his reports, not to mention that in many translations, through the translators’ inaccuracy, the word “peninsula” has been replaced by “coast” or “beach”, because they overlooked the fact that Herodotus would systematically term a large peninsula a “coast” (ακτή = coast in the post-Herodotus era!), whereas a small one would be termed a peninsula or cheronissos (chero+nissos = land+island). As explained elsewhere (MOM1), an island was a landmass that gave the impression of a duck (nissa) floating on water, while a cheronissos or peninsula was the same, except that at some point it must be connected to the mainland.
From Roman times onwards, Libya was commonly considered to be all the land from Egypt to the Atlantic Ocean. Earlier however, this was not the case at all, as documented by the great travellers and historians of the Pre-Roman times, namely, the ancient Greek scholar-scientists.
The starting cause of Libya forfeiting its standing as a peninsula was the climatic change of the Holocene Epoch, which began around 10,400 BC and is still continuing, with the steady rise in temperature bringing about various significant changes to the sea, to geomorphology and to much else.
Image 1. Asia today, to the right and above the red line, with Europe to the left.
Image 2. Map of Earth according to Ptolemy (Basle edition of 1545)
Image 3. Asia, according to Strabo, was between the red lines. There was no knowledge of the western/eastern boundary. The concept of the “Big Libya” was established from the time of Strabo.
Image 4. Going back in time (~450 years from Strabo), Herodotus precisely defines the borders of Asia of his time (Between the lower red lines).
Image 5. Herodotus goes on to delimit the lengths of Europe, Libya and Asia. He equates the length of Europe with that of the other two together (yellow lines). This is the same reasoning applied by Plato in Timaeus, where he gives the length of Atlantis as being greater than that of Asia and Libya together, i.e. longer than the Europe of that time, which has remained relatively unchanged to date! From this was derived the solid conclusion that Libya went only as far as the sea at the Gulf of Gabes. This further shows that exactly there, at Gabes, was where the Pillars of Heracles had always been. This personal journey of Herodotus leaves no disputable or undecided issues as to distances, since the overland route to Gibraltar is so long and so difficult, because of the uneven terrain and the Atlantic mountains, that it would be impossible to cover the same (referred) distance in the length of time it would take if one were travelling through desert. Therefore, the lengths of Europe, Asia and Libya are concurrently determined, as is the location of the Pillars of Heracles, which are validated by completely different testimonies as indeed being at Gabes.
Image 6. Strabo defines the boundaries of Libya as a right triangle whose altitude is the length of Libya. He is the first to establish Libya as extending to Gibraltar. Since then, these have been considered to be the “European” boundaries of Libya, mistakenly assuming that it reached as far as the Atlantic Ocean. A long time before Strabo, Homer was the first to report on Western Ethiopians. He defined their land as being in the blue area to the west of Strabo’s triangle, in today’s West Africa. In the book “The Apocalypse* of a Myth”, are presented detailed reports and analyses of the ancient texts of several ancient philosophers and writers.
Image 7. The voyage of the Samians (islanders of Samos, Aegean Sea) as described by Herodotus cannot be logically explained by contemporary commentators with the Pillars being regarded as at the oceanic side of today’s Spain. His account is, as always, concise, which makes it impossible (nautically) for the Samians to have made the route they have been attributed with, namely, all the way to the island of Gadir in SW Spain. Especially when taking into consideration the direction of the winds given by Herodotus.
Image 8. This journey of Herodotus has already been verified historically (and archaeologically, even though the archaeologists did not have Herodotus’ trip to Libya in mind) for 5 out of his 6 stations in the interior of Libya. The fifth station on the map (#27), at the Garamantes peoples, was verified by relatively recent archaeological finds. In that case, after that point, one of the two alternative routes (two points #29, both on the northern route) before meeting the Atlantians, as he writes, is correct, with slight deviations, the more likely being the southern of the two. The divergence of the proposed route from that declared by the archaeologists was approximately 60-80 km. The map shows the bodies of water along the length of the Atlas Mountains that define Atlantis as an island. Also shown is the hypothetical watercourse from Lake Siwa (top right) as it probably joins Lake Chad, goes on to join the River Niger and reaches the Atlantians (you may see it on the next map). The water flow then turns north and discharges into the Mediterranean, apart of course from the southern outlet. In this way, Libya is depicted as truly being a peninsula, as declared by Herodotus, even though it has been discounted as such, whether due to mistranslation, lack of interest or careless observation: a landmass surrounded by waters of any kind (as in the case of an island) but with only one narrow connection to the land.
Image 9. Hypothetical illustration of the Africa of that time, with Libya depicted as a peninsula, as stated by Herodotus. Its only overland means of access is a small strip of land between Lake Siwa and the Mediterranean Sea (top right), about 60-80 km wide. Recorded today are Lake Siwa and the rivers that supply it; Lake Chad, which has shrunk considerably, especially over the past few decades, and the river network that supplies it; the riverine area of the Niger that spreads as far as “Atlantis” in the regions P1; as well as a multitude of chotts, i.e. seasonal lakes, that cover region P, which, when water once covered that entire area, must have been the size of the Aegean Sea. In order to prove conclusively that this uninterrupted body of water surrounded Libya at that time, modern research needs to be conducted. Some such studies, begun for different reasons, were carried out in areas in the east of the illustration and have revealed many dried-up rivers and lakes containing 10,000-year-old fossils of fish, crocodiles and aquatic life that continues to survive in the Nile today. Dried-up lakes and rivers dated at around 10,000 years old have also been located in central and western parts of Africa. So it appears that all these now waterless areas once had a rich aquatic past and a very different environment at the very beginning of the Holocene Epoch. These and other findings that have surfaced in recent years add force to Herodotus’ claim. It is now considered almost certain that in the future those bygone watercourses, rivers and lakes, which once surrounded and delineated Libya and gave good reason for his designation of it as a peninsula, will all be traced.
Image 10. This Second World War map depicts the region between Libya and Tunisia. It shows much more water at that time (1940) than there is today, although even now, with a little winter rainfall, boats are used to get around which, in the summer, rest in the sand… Indicated hypothetically, albeit after chronological investigation, is the historically ‘mislaid’ Cape Soloes, as well as the probable site of the Pillars of Heracles, which are now historically proven as definitely having been in this region of the Gulf of Gabes. The distance from the cove of the gulf to the beginning of the island-continent of Atlantis is approximately as defined by Ephorus, namely equal to a 4-5 day voyage, something that, in any case, could never be confirmed at Gibraltar.
Image 11. A picture paints a thousand words. Aerial pictures of the still magnificent Lake Chad, photographed over four and a half decades. Its basin used to be 11 times that of the Aegean archipelago. It has shrunk by 95% in just 45 years! This shows, to startling effect, how rapidly and how extensively the Earth’s climate has changed. The future of the lake is murky. One can imagine its capacity 10,000 years ago and the dimensions of the rivers, or the volumes of monsoon rain, that supplied it. Sources: Birkett, C., and I. Mason. 1995. A new global lakes database for a remote sensing programme studying climatically sensitive large lakes. Journal of Great Lakes Research, 21 (3), 307-318; International Lake Environment Committee, United Nations Environment Programme and Environment Agency, Government of Japan. 1997. World Lakes Database.
Image 12. The region of WADI SORA (SW Egypt) is famous for its colourful and exceptional cave drawings also depicting swimmers diving and in various related poses. They are approximately 10,000 years old. Today, nowhere around and for long distances, is there any trace of water; only seemingly endless desert.
Looking just at today’s existing bodies of water, it seems that there is enough water around Libya to somewhat approach the concept of “peninsula”. Over certain wide areas, there is currently no data on rivers or lakes, big and small, to be added to the map of the “peninsula”. At any rate, for the time being, provisional but inconclusive research indicates that most probably, in this case too, Herodotus, as always, will be proven true. |
For life to be an ongoing process, there must be a process for creating new life. This process is called reproduction. Human beings reproduce in much the same way as other mammals, and both a male and a female need to be involved in the human reproductive process. The nervous system, on the other hand, is the most complex system in the body. This system is needed for the brain and the rest of the body to function properly.
The female reproductive system consists of the ovaries, fallopian tubes, uterus, cervix and vagina, along with the ova (eggs) the ovaries carry. A female also has breasts and goes through menstruation and pregnancy. In short, this is about how the female reproductive organs work, both inside and outside the body. The uterus is located in the lower abdomen between the urinary bladder and the rectum; its lower, narrow portion is the cervix. The two ovaries sit on either side of the uterus and are connected to it by the fallopian tubes, while the vagina leads from the cervix to the outside of the body; the clitoris lies in front of the urethral meatus. The ovaries contain hundreds of thousands of immature eggs. Usually one egg is released each month (ovulation); if it is not fertilised, the lining of the uterus is shed, which is called menstruation. The fallopian tubes are the passageways for the eggs. If sexual intercourse has taken place and the egg is fertilised, it can attach to the wall of the uterus. This is where pregnancy begins.
The nervous system is a very complex system in the body, it has several parts. The nervous system is
divided into two main systems, the central nervous system (CNS) and the peripheral nervous system.
The spinal cord and the brain make up the CNS. Its main job is to get the information from the body and
send out instructions. The peripheral nervous system is made up of all of the nerves and the wiring. This
system sends the messages from the brain to the rest of the body. The brain keeps the body in order. It
helps to control all of the body systems and organs, keeping them working like they should. The brain
also allows us to think, feel, remember and imagine. In general, the brain is what makes us behave as
human beings. The brain communicates with the rest of the body through the spinal cord and the
nerves. They tell the brain what is going on in the body at all times. This system also gives instructions to
all parts of the body about what to do and when to do it. Nerves divide many times as they leave the
spinal cord so that they may reach all parts of the body. The thickest nerve is 1 inch thick and the
thinnest is thinner than a human hair. Each nerve is a bundle of hundreds or thousands of neurons
(nerve cells). The spinal cord runs down a tunnel of holes in your backbone or spine. The bones protect
it from damage. The cord is a thick bundle of nerves, connecting your brain to the rest of your body.
Both the nervous system and the female |
Hi and welcome. This is Anthony Varela. And today, I'm going to introduce quadratic equations. Now, quadratic equation's a big topic. So I'm going to introduce a lot of stuff, but not in a lot of detail. So feel free to look up these concepts in greater detail by seeking out other videos.
So we're going to talk about quadratic relationships. Then we're going to look at different forms of quadratic equations. We'll look at what quadratic equations are like on graphs and, finally, some methods for solving quadratic equations.
So first, let's talk about quadratic relationships. Well, a quadratic is a second-degree polynomial with an x squared term as its highest degree term. So you're not going to see x cubed. You're not going to see x to the fourth or anything higher than x to the power of 2. That's what defines a quadratic. So here's an example of a quadratic, x squared plus 3x minus 1.
Now, quadratic relationships can be recognized from a table of values. So I'm going to look at consecutive x-values. Let's plug in 0, 1, 2, 3, and 4 into x.
So when x equals 0, we have 0 plus 0 minus 1. So that's a negative 1. When x equals 1, we have 1 plus 3 minus 1. So it'd give us a positive 3. When x equals 2, we have 4 plus 6 minus 1. So that gives us a positive 9. When x equals 3, we have 9 plus 9 minus 1. So that gives us 17. And finally, when x equals 4, we have 16 plus 12 minus 1. So that gives us a total of 27.
Now we're going to be finding the difference between these numbers here. So the difference between negative 1 and 3 is 4. The difference between 3 and 9 is 6. The difference between 9 and 17 is 8. And maybe you're already seeing the pattern. The difference between 17 and 27 is 10.
Now, in linear relationships, if you took this difference, you would get the same value, a common difference, as you went down the rows. But in our quadratic relationships, we have to find the second difference. So the difference between 4 and 6 is 2, between 6 and 8 is 2, and between 8 and 10 is 2. So we have the same second difference in our quadratic relationships.
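As a quick check of the second-difference idea, the following short Python sketch recomputes the table above for y equals x squared plus 3x minus 1 and confirms that the second differences are constant.

```python
# Check that y = x^2 + 3x - 1 has a constant second difference
# over consecutive integer x-values, as described above.

def f(x):
    return x**2 + 3*x - 1

ys = [f(x) for x in range(5)]                       # [-1, 3, 9, 17, 27]
first = [b - a for a, b in zip(ys, ys[1:])]         # [4, 6, 8, 10]
second = [b - a for a, b in zip(first, first[1:])]  # [2, 2, 2]

print(ys, first, second)
```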
So now let's talk about forms of quadratic equation. So the first form that we use is standard form. And this is y equals ax squared plus bx plus c. So we have an x squared term, an x-term, and our constant, c.
So a is the coefficient of the x squared term. B is the coefficient of the x-term. And our constant is c. And we use the standard form if we would like to solve a quadratic equation using the quadratic formula. And we'll get to that in a minute.
Another form is called vertex form. And this is y equals a times x minus h, quantity squared, plus k. And we use equations in vertex form if we'd like to easily identify the vertex of a parabola. And we'll talk about parabolas and vertices in a minute as well.
And lastly, we have the factored form of a quadratic equation, which is y equals a times x minus x1 times x minus x2. And we use equations in factored form if we'd like to easily identify x-intercepts. So there are our different forms of our quadratic equation-- standard form, vertex form, and factored form.
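All three forms describe the same curve. As an illustration, here is a small Python sketch that writes one quadratic (the x squared plus 3x plus 2 example solved later in this lesson) in all three forms; the vertex values h equals negative 1.5 and k equals negative 0.25 are worked out by completing the square just for this example.

```python
# The same quadratic, y = x^2 + 3x + 2, written in all three forms.
# Vertex values come from h = -b/(2a) and k = f(h); the factored
# roots x = -1 and x = -2 are found later in the lesson.

def standard(x):
    return 1*x**2 + 3*x + 2              # a=1, b=3, c=2

def vertex(x):
    return 1*(x - (-1.5))**2 + (-0.25)   # a=1, h=-1.5, k=-0.25

def factored(x):
    return 1*(x - (-1))*(x - (-2))       # a=1, x1=-1, x2=-2

for x in [-3, -1.5, 0, 2]:
    assert abs(standard(x) - vertex(x)) < 1e-9
    assert abs(standard(x) - factored(x)) < 1e-9
print("all three forms agree")
```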
Now I'd like to talk about parabolas, which are quadratic equations graphed on the coordinate plane. So a parabola is the shape of a quadratic equation on a graph. It is symmetric at the vertex. So we're going to talk about symmetry. And we'll talk about the vertex as well.
So the vertex of a parabola is the maximum or minimum point of a parabola. And it's located on the axis of symmetry. So here in the graph, I have marked the red dot. That's the vertex to this parabola. It happens to be a minimum point because this parabola opens upwards. It's a U-shaped parabola.
And we see that dotted vertical line. That's the axis of symmetry. And we can think about that as a line of reflection. So notice that our parabola is symmetrical. So what this means is you can take a point on the parabola and reflect it across that line. And you'll still be on the parabola.
Now, I said that the vertex represents a maximum or a minimum point. Here we see it's a minimum point. This is what the vertex looks like as a maximum point. So you notice here the parabola opens downward. And we still have that axis of symmetry that acts as a line of reflection.
So when do we have parabolas that are the U shape, opening upwards? And when do we have parabolas that have that upside down U shape, opening downward? Well, that depends on our variable a. If a is a positive number, our parabola opens upward. If a is a negative number, our parabola opens downward.
So lastly, I'd like to talk about solutions to quadratic equations and how we can solve quadratic equations. Well, first, there are a couple of different names for solutions. We can call them roots. And we can also call them zeros. And these represent x-intercepts on our graph of a parabola. So they're x-values that make y equal 0.
So when we're solving a quadratic equation, we have our quadratic expression set equal to 0. And when we solve for x with y equals 0, we've found our solution. So there are two common ways to solve quadratic equations. We could solve by factoring. And we can solve by using the quadratic formula.
So when we're factoring, we're taking our quadratic equation and we're writing it as factors. So one factor here is x plus 1. And another factor is x plus 2. And if you're interested, you can FOIL this out. And you'll get this expression.
But in general, you notice that x could have coefficients here. And the great thing about solving by factoring is that once you have it written in factors, you can set each factor equal to 0 and solve for x.
So we're going to do that over here on the left, just separating this into two equations, setting each factor equal to 0. So I can see that x equals negative 1 and x equals negative 2 are solutions to this equation.
Now, we could also solve by using the quadratic formula. And the quadratic formula is useful because you can use this for any quadratic equation. If you can't factor an equation or you don't know how, you can always use the quadratic formula. And the quadratic formula relies on our variables a, b, and c in our standard form set equal to 0.
So the quadratic formula is x equals negative b plus or minus the square root of b squared minus 4ac, all over 2a. So like I said, this could be used for any quadratic equation. The trade-off is that it's kind of messy algebraically. But if we plugged in-- let's see-- a would be 1, b would be 3, and c would be 2-- into our quadratic formula and did our simplification, we would get that x equals negative 3 plus or minus 1, all over 2.
So what you would do then is evaluate negative 3 plus 1 over 2 and negative 3 minus 1, all over 2. And we would get x-values of negative 2 and negative 1, same solutions as before.
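Here is a short Python sketch of the quadratic formula applied to the same a equals 1, b equals 3, c equals 2 example; it returns the real solutions, or an empty list when the discriminant is negative.

```python
import math

# Solve a*x^2 + b*x + c = 0 with the quadratic formula,
# mirroring the a=1, b=3, c=2 example worked above.
def solve_quadratic(a, b, c):
    disc = b**2 - 4*a*c
    if disc < 0:
        return []                        # no real solutions
    root = math.sqrt(disc)
    return sorted({(-b + root) / (2*a), (-b - root) / (2*a)})

print(solve_quadratic(1, 3, 2))          # [-2.0, -1.0]
```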
So let's review our introduction to quadratic equations. We talked about how a quadratic is a second-degree polynomial. We have a couple of different forms of quadratic equations-- standard form, vertex form, and factored form.
When graphed, quadratic equations are parabolas. And we can have parabolas that open upward or downward, depending on the value of a, if it's positive or negative. And we also talked about common ways to solve quadratic equations, either by factoring or using the quadratic formula.
So thanks for watching this introduction to quadratic equations-- hope to see you next time. |
Conflict is inherent in all societies and arises when two or more groups believe their interests are incompatible. ‘Conflict’ is not, however, interchangeable with ‘violence’. Non-violent resolution is possible when individuals and groups have trust in their governing structures, society and institutions to manage incompatible interests. Conflict becomes a problem when this trust and respective conflict management capacities are absent and conflicting parties choose instead to resort to the use of force to secure their goals.
Violent conflict is the subject of this topic guide. The guide provides an overview of key topics ranging from the causes, dynamics and impacts of conflict to options for interventions to prevent, manage and respond to conflict. It is divided into five main parts:
Chapter 1: Understanding violent conflict
Chapter 2: Living in conflict affected areas: focus on children and youth
Chapter 3: Preventing and managing violent conflict
Chapter 4: Recovering from violent conflict
Chapter 5: Intervening in conflict-affected areas |
(PhysOrg.com) -- A team of Yale University scientists has discovered a previously unknown type of molecular scissors that can tailor micro-RNAs, tiny snippets of genetic material that play a key role in regulating many of life's functions.
The team also found that the absence of these molecular scissors, or the micro-RNAs they create, could trigger anemia in mice and in zebrafish, the team reports online in the May 6 issue of Science Express.
“We are still just beginning to scratch the surface in our understanding of small RNAs,” said Antonio J. Giraldez, the Lois and Franklin H. Top, Jr. Yale Scholar in the genetics department of the Yale School of Medicine and senior author of the study. “This discovery really opens the door to finding new families of these RNAs that influence many forms of biological activity.”
In the last decade, scientists have come to realize the great importance of small RNAs in regulating gene activity. Micro-RNAs are the smallest genes known, with as few as 22 building blocks, or nucleotides; most genes average more than 1,000 nucleotides. Unlike most genes, which are encoded as DNA and produce proteins, these tiny genes act by controlling much larger messenger RNAs, which carry the protein-making instructions of the DNA. Although micro-RNAs account for only about four percent of all genes, each one can regulate hundreds of genes.
“Recent findings have told us that micro-RNAs have deep implications not only for how humans and animals are made, but also for the development of human diseases,” Giraldez said.
Up until now, it was thought that the creation of these microRNAs depended upon the presence of an enzyme called Dicer, which acts like a molecular scissors and helps to cut microRNAs to the right size and shape. Giraldez’s lab, in conjunction with researchers at Cold Spring Harbor, upended that dogma when they discovered important micro-RNA activity could take place without Dicer. The team created a micro-RNA that appears to be essential to the creation of red blood cells in both zebrafish and in mice by using a different enzyme called Argonaute 2.
Giraldez noted that the conservation of this mechanism in all vertebrates, maintained across 400 million years of evolution, suggests it plays an important role in survival.
“There is an immense, vast sea of small RNAs out there, and it is difficult to sort out what is junk from what is functional,” Giraldez said. “With this new molecular scissors, we have another tool to find small RNAs that are important to life, that activate genes in disease, and may be important in developing new therapeutics.”
Other Yale researchers contributing to the paper were Daniel Cifuentes, Huiling Xue, David W. Taylor, Heather Patnode and Shrikant Mane. Other authors on the paper are from Cold Spring Harbor Laboratory, Kobe University, Stony Brook University, the University of Massachusetts Medical School and the University of California, Berkeley.
Archived - The Canadian Naval Ensign
BG 13.014 - May 2, 2013
A naval ensign is a flag worn by a warship to indicate its nationality. Most Commonwealth nations wear a distinctive naval ensign on their warships that includes elements of their national flag. This is an internationally accepted practice that is also observed by many non-Commonwealth nations throughout the world such as Japan, China, and Russia. However, not all nations have a distinctive naval ensign, and some nations, such as the United States and France, instead choose to wear their national flag as the naval ensign on their warships.
Wearing a distinctive naval ensign that incorporates the National Flag, distinguishes Canadian warships from other Canadian flagged vessels and foreign navies. It also recognizes the special status of Canadian warships under international maritime law, which stipulates that warships on the high seas have complete immunity from the jurisdiction of all states other than their flag state. Because Canadian warships are units of the Canadian Armed Forces, crewed by military personnel who deploy throughout the world in furtherance of Canadian national policy, they are deemed to have special status under international maritime law. Additionally, the Canadian Naval Ensign promotes and strengthens our Canadian naval identity, and underscores the unique roles, responsibilities, liabilities, and powers of the crews who serve in Her Majesty’s Canadian Ships (HMCS) and other naval vessels.
There are now two distinct symbols that signal Canadian nationality onboard Canadian warships and other naval vessels. The first is the Canadian Naval Ensign, which is worn at the masthead while at sea, or at the stern when alongside, moored, or at anchor. The second is the National Flag, also known as the Maple Leaf Flag, which is worn as the Naval Jack at the bow when the ship is alongside, moored, or at anchor. Additionally, while not specifically required by law or maritime custom, Canadian warships have historically displayed a Maple Leaf badge on or near the main ship’s funnel.
Starting in 1870, the Canadian Marine Service used a Blue Ensign to designate the special government status of its vessels. When the Naval Service of Canada was established on May 4, 1910, this practice continued. At the Imperial Conference of 1911, there was a naval agreement whereby Canadian warships would fly the Royal Navy White (naval) Ensign at the stern and the flag of the Dominion (the Canadian Blue Ensign) at the jack-staff located at the bow. Canadian merchant vessels flew the familiar Red Ensign, indicating their non-governmental status. Later that same year, on August 16, King George V authorized that Canadian naval forces be designated as the Royal Canadian Navy (RCN). On December 16, 1911, the Canadian Government ordered the following:
All ships and vessels of the Royal Canadian Navy shall fly at the stern the White Ensign as the symbol of the authority of the Crown, and at the Jack Staff the distinctive flag of the Dominion of Canada, such distinctive flag being the Blue Ensign with the arms of the Dominion inset in the fly. The White Pendant will be flown at the Masthead. (Canadian Order-in-Council PC 2843 of December 16, 1911. Published in the Canadian Gazette on December 30, 1911.)
The authorization of the White Ensign and Blue Jack in 1911 included the statement that “The White Pendant will be flown at the Masthead.” The ship's pennant (to use the modern spelling) is the mark of a commissioned ship and also symbolizes the captain’s authority to command the ship. This pennant, also known as the captain’s pennant, the mast-head pennant or the commissioning pennant, is really the distinguishing flag of the captain. If the Sovereign or a more senior officer in the chain of command were aboard, their distinguishing flag would displace the captain’s pennant at the masthead. Together, the Ensign at the stern, the Jack at the bow, and a distinguishing flag at the masthead form a part of the ship's suit of colours.
While the White Ensign remained unchanged until its use was discontinued in 1965, the Blue Jack underwent a series of changes: the four-province badge was used on the fly until 1922; thereafter, the shield of the Canadian arms was used. The maple leaves on that shield changed from green to red shortly after 1957.
The RCN continued using the White Ensign and the Canadian Blue Jack up until the adoption of the Maple Leaf Flag as the new National Flag on February 15, 1965. The Maple Leaf Flag was also adopted as both the Ensign and the Jack, as it is a common Commonwealth practice to wear the National Flag as a jack. As part of post-1965 efforts to develop military ensigns and flags, a distinctive naval jack that incorporated the Maple Leaf Flag was created in 1968 and flown by commissioned warships when alongside or at anchor. Coincidentally, in 1968 the Canadian Armed Forces were re-organized into one service and the RCN ceased to exist as a separate service, with all naval forces being assigned to the Canadian Armed Forces Maritime Command. In 1985, an Order-in-Council authorized the Canadian Armed Forces Naval Jack to be flown ashore as the Maritime Command flag, in addition to flying it onboard commissioned warships. The National Flag remained as the Ensign and was flown by all Canadian naval vessels.
In the early 1990s, the British Royal Navy-style Commissioning Pennant was phased out in favour of a new Canadian-designed Commissioning Pennant, which featured a maple leaf instead of the Cross of St. George. Only commissioned warships fly the Commissioning Pennant.
On August 16, 2011, the historic name of the RCN was restored and Maritime Command became known as the “Royal Canadian Navy.” On May 5, 2013, the Government of Canada restored a standard Commonwealth naval practice by authorizing RCN vessels to fly a distinctive Canadian Naval Ensign and fly the National Flag as the Naval Jack. Essentially, the flag previously known as the Canadian Naval Jack became the Canadian Naval Ensign, whereas the National Flag became the Canadian Naval Jack.
Motivating kids to brush their teeth is not always a simple task. Learning how to maintain proper oral hygiene takes time and, to be honest, is not always inherently fun for children. It is critical, however, for kids to learn, practice and understand the importance of brushing their teeth properly and often, starting from a young age. In fact, the daily routine of brushing one’s teeth is essential to overall health and must be instilled in children early, even if this is difficult to achieve. If your child is having a difficult time learning or remembering to practice good oral hygiene skills, the following tips may help you teach him or her to take care of their teeth. Learn more about dental topics, tips and safety on Kids Dental Online.
Tips For Motivating Kids To Brush Their Teeth
Begin Early: Toddlers who are accustomed to having their teeth cleaned tend to be more responsive to learning how to brush their teeth themselves. Parents should clean their child’s mouth from infancy, beginning with wiping a new baby’s gums with a clean damp cloth or gauze pad after feedings.
Brush Your Teeth Together: Make teeth brushing a family activity. Young children love to imitate their parents, and an effective way to encourage children to practice oral hygiene is to make teeth brushing a whole-family activity. Model how to practice good oral hygiene by brushing and flossing in front of your child, and then let them try themselves. Essentially, the same impulse that compels small kids to dress up like their parents and pretend to do whatever their parents do will prompt them to emulate modeled oral hygiene as well. Completing this routine together also lets the parent make sure the child is brushing properly.
Let Your Child Pick A Toothbrush: Allowing a child to pick his or her own toothbrush helps make the process of learning how to take care of their teeth fun! There are all sorts of different themed toothbrushes to help motivate a child to practice brushing. When kids are excited about a toothbrush because it is modeled after their favorite Disney princess or cartoon character for example, they are more likely to become excited about brushing and want to practice more often.
Choose Child-Friendly Toothpaste: A multitude of kid-friendly toothpastes exist in a large assortment of colors and flavors. These fun, child-tailored flavors tend to make brushing a more enjoyable task. Rather than forcing a child to brush with mint or cinnamon adult flavors, which may be too harsh on a child’s palate, these flavors help kids look forward to completing the routine of brushing their teeth. Bring your child with you when shopping for their toothpaste and give them a choice of the color and flavor they find most appealing. Ultimately, a child who likes the flavor of his or her toothpaste is more inclined to become excited about brushing.
Buy Hand-Held Flossers In Fun Colors: Colored hand held flossers can encourage kids to go the extra step and floss their teeth at night after brushing. Anything to keep the routine of practicing proper oral hygiene fun can help kids establish those lifelong healthy habits that are so important to overall and dental health.
Utilize Educational Tools: Oftentimes, kids pay closer attention to messages from sources other than their parents. Educational tools can be key in the actual implementation of healthy habits. When kids read books about brushing their teeth, watch videos, or play mobile app or tablet games on the subject, for example, oral hygiene tasks may become more important to children and more ingrained as significant skills to acquire.
Use Music: Music can aid in passing the time it takes to complete oral hygiene tasks. Playing a child’s favorite song while they brush their teeth can help them learn how long to brush and make the routine more fun. Kids and adults should brush their teeth for about two minutes every time. Help kids pick one or multiple fun songs to play while brushing and flossing to help make the possibly ‘boring’ routine more exciting.
Create Motivational Rewards Charts: Encourage children to keep track of their daily brushing by setting up a motivational rewards chart for completing their oral hygiene tasks. There are many creative ways of doing this, so choose what works best for your family. Ultimately, keeping track of their brushing and flossing on a chart helps kids learn consistency with their habits.
Motivational Tooth Brushing Charts |
Rare diseases, by definition, affect few people, tending to fall off the health and social policy radar screen. However, policy makers looking to contain long-run healthcare costs are doing themselves a great disservice by ignoring this category of diseases, which affects some 30 million people in the EU, a figure equivalent to the combined populations of Belgium, Luxembourg and the Netherlands.
A rare disease is a disease that occurs infrequently or rarely in the general population. In Europe, for example, a rare disease is defined as one affecting fewer than 1 in 2,000 citizens, and in the US as one affecting fewer than 200,000 patients. Yet for a combined population of 800 million, a single rare disease could affect anywhere from a few hundred people to as many as 400,000 (1 in 2,000 of 800 million).
Despite this number, the rare disease patient is the orphan of health systems, often denied diagnosis, treatment and the benefits of research.
For patients, families and individuals affected by rare diseases, gaining access to services is often extremely difficult. Finding expert help is too frequently a matter of luck rather than a consequence of systematic planning by national health systems. Paradoxically, although any given rare condition may only affect a few hundred, there are between 5000 and 7000 distinct rare diseases identified to date, meaning that the number of families in need of health and social services is vast. And, because the diseases share a number of common characteristics, it is possible to develop public policy and actions to improve access to information, diagnosis, care, treatment as well as to promote biomedical research and R&D in medicines.
Rare diseases are often life-threatening. They are chronic, progressive, degenerative and disabling. People living with rare diseases face many common challenges, such as delayed or inaccurate diagnosis, difficulty accessing care and lack of knowledge or access to expertise. For the individual sufferer this is a disaster, and for an economy it represents a significant direct and indirect cost.
When a disease is diagnosed on time and managed well, people are often able to maintain a normal quality of life, meaning that timely diagnosis and correct treatment of rare disease patients is not only ethical but cost effective. Misdiagnosis and delays in diagnosis of rare disease patients often lead to increased expenses and waste for health care and social systems due to inadequate treatments. And the numbers can balloon quickly when a number of rare diseases are taken together.
Rare diseases not only affect the person diagnosed - they also impact families, friends, care takers and society as a whole.
For more information about rare diseases please watch the following documentary on rare diseases (english subtitles included). |
The brain is the black box of the body: full of secrets and often inaccessible. Recently, though, medical and technical advances have enabled scientists to explore a variety of new modeling techniques and allowed them to see the brain as never before.
New brain models may offer insight into the biology of brain cancer, enhance surgical planning, and help us understand better how the brain performs a variety of functions. They are also the first steps toward new interventions and improved quality of life for thousands of people.
One of the best and most advanced approaches for modeling any part of the body is through the use of cellular scaffolding: biocompatible materials that support the growth of organ-specific cells. In fact, such scaffolding-based approaches have allowed scientists to model entire organs in laboratory settings and are the best hope for eventually being able to grow transplantable organs.
With regard to brain tissue, the science is still in its infancy. But by using a silk and collagen scaffold and pluripotent stem cells, scientists have begun to culture clusters of brain tissue successfully in the lab. This includes modeling glioblastoma, the most common form of brain cancer in adults.
Scaffold-based cellular modeling is the result of complex partnerships across the sciences and engineering disciplines. Clinical biomedical researchers and biological engineers develop biocompatible materials and partner with neuroscientists and oncologists; the teams may eventually include a variety of other subspecialists, such as pharmaceutical researchers and immunologists.
For example, one of the most promising treatment modalities for brain cancer right now is CAR-T therapy, which uses the body’s own immune system to fight cancer cells, but requires a comprehensive understanding of both the brain and the immune system.
Seeing In 3D
Not surprisingly, based on the current popularity of 3D printing, scientists have been looking for ways to employ this technology to visualize the brain more fully. Using MRIs and CT scans, 3D printers can create realistic biological models.
Though these models don’t allow for the same degree of biological engagement as cellular models, 3D-printed brains offer the opportunity to model a specific brain, not just general cellular structure. In patients that have tumors or other brain abnormalities, these 3D models can help their doctors visualize and plan surgical strategies better, through the use of a tactile, interactive approach.
3D-printed brain models also pair well with tools such as the Virtual Brain, an approach to simulating brain tumors that uses fMRI. Surgeons can use the Virtual Brain to identify not just the structural makeup of the brain, but also the functions of those areas.
Given greater knowledge of the brain’s function in proximity to a tumor or other surgical target region, surgeons can plan a more appropriate approach to minimize damage and functional loss.
The Brain in Action
Finally, to move beyond our longstanding reliance on fMRI and certain EEG models as the only ways to see the brain performing tasks as well as for isolating functions, scientists recently published a proof-of-concept study using diffusion spectrum imaging to see how brain structure related to language-based tasks.
This approach can identify when brain areas synchronize and which areas are in an active state during tasks. It can also help researchers visualize how the brain coordinates various structures to perform a given task.
Though this research is only in the trial phase, it could help researchers identify differences in how individual brains perform a task, which is of particular interest in the study of abnormal or post-surgical brains.
The brain does not readily respond to modeling because its functions are so complex. Unlike the kidney or liver, in which the entire organ performs a singular task, the brain executes thousands of tasks and regulates dozens of functions simultaneously.
As modeling technology advances, however, the brain may slowly reveal its secrets. It’s an exciting time for the neurological and cognitive sciences. |
The movement of the sun along the ecliptic from its most northerly declination of +23.5 ° to its most southerly declination of -23.5 ° is called the Dakshinayana, or the southern movement of the sun. (summer solstice to winter solstice)
The movement of the sun along the ecliptic from its most southerly declination of -23.5 °to its most northerly declination of +23.5 ° is called Uttarayana, or the northern movement of the sun. (winter solstice to summer solstice)
Fig. 13.1 – Uttarayana and Dakshinayana
Uttarayana and Dakshinayana can be observed from the position of the sunrise on the horizon on different days of the year.
Fig. 13.2 – Sunrise on different days of the year
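A rough way to reproduce this yearly swing between +23.5° and -23.5° is the common cosine approximation for solar declination shown below; it ignores the ellipticity of the earth’s orbit, so the values are only accurate to about a degree.

```python
import math

# Approximate solar declination (degrees) for a given day of the year
# (1 = 1 January), using the common cosine approximation
#   declination ≈ -23.44° * cos(360°/365 * (N + 10)).
# It ignores orbital eccentricity, so it is only accurate to ~1°.
def solar_declination(day_of_year):
    return -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))

print(round(solar_declination(172), 1))  # ~ +23.4 (June solstice, end of Uttarayana)
print(round(solar_declination(355), 1))  # ~ -23.4 (December solstice, end of Dakshinayana)
print(round(solar_declination(80), 1))   # ~  0    (March equinox)
```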
Rashis and nakshatras
The Zodiac or Rashi chakra and the nakshatras are the projection of the distant stars, which appear fixed with respect to the earth, onto the ecliptic.
Fig. 14 – Rashis or Zodiac constellations projected onto the ecliptic
The rashis divide the ecliptic of 360° into 12 regions of 30° . The sun spends about a month in each rashi, completing 360° in one year. Hence, the rashis can also be looked at as solar constellations.
The 12 rashis are given below.
The nakshatras divide the ecliptic of 360° into 27 regions of 13° 20’. The moon spends one day in each nakshatra, completing 360° of the zodiac in 27 days, which is the lunar month (sidereal*). Hence, the nakshatras can also be looked at as lunar constellations.
Each nakshatra is further divided into 4 padas (quarter) 3° 20’ each. Thus there are 27 * 4 = 108 padas.
The 27 nakshatras are given below.
* There are two ways the lunar month is reckoned : sidereal and synodic. We will look at it in Part 2 of this article.
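The 30° and 13° 20′ divisions make it straightforward to locate any ecliptic longitude within a rashi, nakshatra and pada. The sketch below assumes a sidereal longitude measured from 0° Mesha; the name lists are the standard ones and are included only for illustration.

```python
# Convert a sidereal ecliptic longitude (degrees, 0-360) into rashi,
# nakshatra and pada using the 30° and 13°20' divisions described above.
# Name lists are the standard ones, included here only for illustration.

RASHIS = ["Mesha", "Vrishabha", "Mithuna", "Karka", "Simha", "Kanya",
          "Tula", "Vrishchika", "Dhanu", "Makara", "Kumbha", "Meena"]

NAKSHATRAS = ["Ashwini", "Bharani", "Krittika", "Rohini", "Mrigashira", "Ardra",
              "Punarvasu", "Pushya", "Ashlesha", "Magha", "Purva Phalguni",
              "Uttara Phalguni", "Hasta", "Chitra", "Swati", "Vishakha",
              "Anuradha", "Jyeshtha", "Moola", "Purva Ashadha", "Uttara Ashadha",
              "Shravana", "Dhanishta", "Shatabhisha", "Purva Bhadrapada",
              "Uttara Bhadrapada", "Revati"]

def position(longitude):
    longitude %= 360.0
    rashi = RASHIS[int(longitude // 30)]            # 12 regions of 30°
    nak_span = 360.0 / 27                           # 13°20' = 13.333...°
    nak_index = int(longitude // nak_span)
    nakshatra = NAKSHATRAS[nak_index]
    pada = int((longitude - nak_index * nak_span) // (nak_span / 4)) + 1  # 4 padas of 3°20'
    return rashi, nakshatra, pada

# A longitude about 8° into Dhanu (Sagittarius) falls within Moola,
# the nakshatra that contains the galactic centre.
print(position(248.0))   # ('Dhanu', 'Moola', 3)
```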
Fig. 15 – Rashis and nakshatras
Orientation to the Galactic center
In Vedic astronomy, the rashi chakra is oriented to the center of our galaxy, the Milky Way. The light from the galactic center comes to the earth through the fixed stars of the constellation Sagittarius, Dhanu.
In the rashi chakra, the galactic center is located in the early portion of Sagittarius. The nakshatra Moola, meaning “root” or “source”, suggests that it is the first of the series of nakshatras on a cosmic level. It marks the first 13° 20′ of Sagittarius, in the middle of which (6° 40′) is located the galactic center.
The previous nakshatra is called Jyeshta, meaning “the eldest”, and it marks the end of Scorpius. This shows that the ancients knew of the galactic center and named their constellations in such a way as to acknowledge it as the beginning.
Modern astronomy confirms this. Astronomers have discovered that the galactic center hosts an intense radio source known as Sagittarius A*. It lies in the direction of the Sagittarius constellation, near the border with Scorpius (Fig. 18). Sagittarius A* is the most plausible candidate for the location of the supermassive black hole at the center of our galaxy (Fig. 19).
The Milky Way has 2 major arms and a number of minor arms. One of the minor arms, known as the Orion Arm, contains the sun and the solar system. The Orion arm is located between two bigger arms, Perseus (major arm) and Sagittarius. The sun is at a distance of 26,000 light-years from the galactic center. (Fig. 15)
The Milky Way galaxy can be observed from the earth as a beautiful band stretching across the sky; the plane of the solar system (the ecliptic) is inclined at an angle of about 60° to the galactic plane (as shown in Fig. 16).
Fig. 15 – The location of our solar system in the Milky Way
Fig. 16 – Plane of the solar system (plane of ecliptic) and galactic plane
Fig. 17 – Milky Way galaxy as viewed from the Earth
Fig. 18 – Galactic center – between Sagittarius and Scorpio
Fig. 19 – Sagittarius A* at galactic center
The phases of the moon and thithi
Phases of the moon occur as a result of the position of the moon at different points in its orbit around the earth. What we refer to as phase is the part of the moon’s surface illumined by the sun, as seen from the earth. This is shown in Fig. 20.
Fig. 20 – Thithis of the moon
The waxing phase is called Shukla paksha and the waning phase is called Krishna paksha.
A thithi is a 12° movement of the moon with respect to the sun in the rashi chakra.
A thithi is a lunar day. There are 15 thithis in the Shukla paksha and 15 thithis in the Krishna paksha.
Hence, 30 × 12° = 360°: the moon traverses the rashi chakra, relative to the sun, in 30 lunar days or thithis.
8. Ashtami (Half moon)
15. Purnima (Full Moon in Shukla paksha) or Amavasya (New Moon in Krishna paksha)
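Since a thithi depends only on the moon's elongation from the sun, it can be computed from the two ecliptic longitudes. The Python sketch below is our own illustration (function name and example values are hypothetical); a real almanac calculation would of course use the true positions of the sun and moon for the moment in question.

# A minimal sketch: the thithi is the 12-degree step of the moon's
# elongation from the sun, measured along the rashi chakra.
def thithi(moon_longitude_deg: float, sun_longitude_deg: float):
    elongation = (moon_longitude_deg - sun_longitude_deg) % 360.0
    index = int(elongation // 12.0)           # 0..29 over one lunar month
    if index < 15:
        return ("Shukla paksha", index + 1)   # waxing half: 1..15 (15 = Purnima)
    return ("Krishna paksha", index - 14)     # waning half: 1..15 (15 = Amavasya)

# Example: the moon 90 deg ahead of the sun is the half moon, Ashtami.
print(thithi(124.0, 34.0))   # ('Shukla paksha', 8)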
Fig. 21 – Thithi – 12° movement of the moon with respect to the sun
Grahana – solar and lunar eclipse
The plane of the moon's orbit is not aligned with the plane of the earth's orbit around the sun; it is tilted by about 5°. The points where the moon's path crosses the sun's path (the ecliptic) are called nodes. As the earth moves in its orbit, these nodes line up with the sun twice a year.
If, at such an alignment, the moon is between the sun and the earth, we get a solar eclipse, surya grahana.
If, at such an alignment, the earth is between the sun and the moon, we get a lunar eclipse, chandra grahana.
Total eclipses are possible only because the sun and the moon appear to be of the same size from earth. While the sun is 400 times bigger than the moon, it is also 400 times farther away.
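This coincidence can be checked with rough average figures (the numbers below are approximate values assumed only for illustration): dividing each body's diameter by its distance gives nearly the same apparent size of about half a degree.

# Rough check of the size/distance coincidence, using approximate mean values.
import math

sun_diameter_km,  sun_distance_km  = 1_392_000, 149_600_000
moon_diameter_km, moon_distance_km = 3_474,     384_400

def angular_diameter_deg(diameter_km, distance_km):
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

print(round(angular_diameter_deg(sun_diameter_km, sun_distance_km), 2))    # ~0.53 deg
print(round(angular_diameter_deg(moon_diameter_km, moon_distance_km), 2))  # ~0.52 deg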
Fig. 22 – Lunar nodes – intersection of moon path and sun path
Fig. 23 – Solar eclipse
Fig. 23.1 – Solar eclipse as seen from the earth
Fig. 24 – Lunar eclipse
Fig. 24.1 – Lunar eclipse as seen from the earth
The primary celestial bodies in Vedic astronomy are the navagrahas – 7 planets and 2 lunar nodes.
Since the orbits of all planets are nearly on the same plane as the ecliptic, with only minor differences, the grahas also traverse the rashi chakra.
The 9 grahas (7 planets and 2 nodes):
1) Surya (Sun)
2) Chandra (Moon)
3) Budha (Mercury)
4) Shukra (Venus)
5) Mangala (Mars)
6) Guru (Jupiter)
7) Shani (Saturn)
8) Rahu (Ascending node) – the node where lunar eclipse occurs
9) Ketu (Descending node) – the node where solar eclipse occurs
Fig. 25 – Grahas
Each graha has its own speed and corresponding time period of traversing the rashi chakra.
For example, the sun covers about 1° in 1 day. Thus, it traverses a rashi (30°) in about 30 days, which is one solar month. The sun goes through the entire rashi chakra (360°) in about 360 days, which is one solar year.
The moon covers 1° in about 1¾ hours. It traverses a rashi (30°) in about 2¼ days, and it completes the entire rashi chakra in about 27⅓ days, which is also its period of revolution around the earth. This constitutes a (sidereal) lunar month.
Precession of the Equinoxes
The earth's axis wobbles slowly, like that of a spinning top, so the direction in which it points gradually shifts from the present North Star, Polaris, towards Vega. As a result, the equinoxes (the intersection points of the ecliptic and the celestial equator) precess slowly westwards relative to the fixed stars, completing one revolution in about 24,000 years (according to Sri Yukteshwar*).
*Sri Yukteshwar Giri was the Guru of Paramahamsa Yogananda. His Guru was Lahiri Mahasaya, whose Guru was Kriya Babaji.
Fig. 26 – Precession of the equinoxes
Due to precession, the equinoxes have moved around the ecliptic such that the background constellation for the vernal point is now Pisces, and for the autumnal point is now Virgo.
Fig. 27 – Current positions of the equinoxes (2007)
The basic concepts of astronomy have been explained. How astronomy is connected to timekeeping will be the subject of the following article.
Fig. 1 – By Tfr000 (talk) 20:06, 29 March 2012 (UTC) – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=18893066
Fig. 2 – Plane of the ecliptic tilted on the earth’s celestial sphere
By Brad Freese, published on Mar 29, 2009
Fig. 3 – By Tfr000 (talk) 15:34, 15 June 2012 (UTC) – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=19907447
Fig. 5 and Fig. 6 – PE Robinson, Published on Jan 14, 2013
Figs. 8, 9, 10, 11 – Kurdistan Planetarium, Published on Mar 15, 2011
Fig. 13.2 – P.E. Robinson, Published on Feb 3, 2013
Fig. 19 – https://www.vofoundation.org/
Fig. 21 – https://en.wikipedia.org/wiki/Tithi
Astrology of the Seers by David Frawley |
The island of Milos is located in the southwestern part of the Cyclades archipelago. Its volcanic rock formations are responsible for the impressive colorful landscape and the mineral wealth. This “geological peculiarity” of Milos was already recognized by Neolithic man.
As documented by the archaeological findings, the obsidian of Milos was used for manufacturing tools and weapons. The first mining facilities were located in Nichia, just outside the present-day town of Adamas. With the coming of the Bronze Age (3000 BC), more mining facilities were founded. At the same time, the first organized settlement was founded at Phylakopi. The archaeological site of Phylakopi is located in the northeastern part of the island. The area is characterized by dense building surrounded by massive walls. Today, a large part of the settlement lies submerged beneath the sea. The life of the city lasted almost 2,000 years, until its final abandonment in 1100 BC.
During the Geometric period (11th-8th century BC), a new tribe arrived on the island and settled at Klima. The ancient city of Milos flourished and became a center of pottery and miniature art. The rich archaeological finds in the area include the ancient theater with its impressive sculptures, imposing walls and various statues. The most important is the world-famous statue of Aphrodite of Milos, which is now preserved in the Louvre museum in Paris. The intense conflict between Athens and Sparta in the 5th century BC affected the island. The refusal of the residents of Milos to join the Athenians in the war led to their destruction.
In Hellenistic times, the area bloomed once again. In 27 BC the island was incorporated into a Roman province. Roman architecture is reflected in the remains of various structures (baths, port facilities, private buildings) at Klima. The archaeological journey through the island ends with the impressive catacombs of the 2nd century AD. The Catacombs of Milos are located a short distance from the ancient theater, to the south of the Tripitis settlement. They are a cluster of arched underground galleries that were used as a community cemetery in Early Christian times, until the 5th century AD.
An electronic signature is any author identification and verification mechanism used in an electronic system. This could be a scan of your real hand-written signature, or any kind of electronic authenticity stamp. It's a generic term that covers a lot of authenticity measures.
A digital signature is a type of electronic signature. It is a signature generated by a computer for a specific document, for the purposes of strong authenticity verification. For example, in asymmetric cryptography, a private key might be used to sign a hash of a document, which anyone in possession of the corresponding public key can verify but not forge. It also prevents modification of the document after the signature is generated. This allows one user to place a digital signature on a document, and many other users to verify that the signature is correct.
A digital signature scheme might work as follows:
- Alice generates an asymmetric key pair (e.g. RSA)
- Alice computes a cryptographic hash (e.g. SHA256) of the document.
- Alice encrypts the hash using her private key.
- Alice makes her public key available to anyone who wants it.
- Bob downloads the document and a copy of Alice's public key.
- Bob computes a cryptographic hash of the document.
- Bob decrypts the signature value stored in the document using Alice's public key.
- Bob compares the decrypted hash with the hash he computed. If they match, the document is authentic.
In the next scenario, Eve fails to subvert the process:
- Alice publishes her public key and the signed document.
- Eve downloads them, but wants to modify the document. Since Eve only has the public key, she cannot forge the signature.
- Eve modifies the document anyway, and gives it to Bob.
- Bob opens the document and checks that the hash matches the signature. It does not, so he knows that the document has been modified or the signature forged.
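To make these steps concrete, here is a minimal sketch in Python using the third-party cryptography package. It follows the same flow as the two scenarios above, but note one difference from the simplified description: the library hashes the document internally and uses RSA-PSS padding, rather than literally "encrypting the hash".

# A minimal sketch of sign/verify with the 'cryptography' package
# (pip install cryptography). RSA-PSS with SHA-256.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

# Alice generates a key pair and signs the document with her private key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

document = b"The quarterly report, exactly as Alice wrote it."
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(document, pss, hashes.SHA256())

# Bob verifies with Alice's public key; verify() raises if anything was altered.
try:
    public_key.verify(signature, document, pss, hashes.SHA256())
    print("Signature valid: document is authentic and unmodified.")
except InvalidSignature:
    print("Signature invalid: document modified or signature forged.")

# Eve's tampering is detected: the same check fails for a modified document.
try:
    public_key.verify(signature, document + b" (edited by Eve)", pss, hashes.SHA256())
except InvalidSignature:
    print("Tampered copy rejected.")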
Disclaimer: IANAL - In terms of legal standing (at least in the UK, pretty sure the US too) an electronic signature, in the form of a scanned image of the signer's hand-written signature, is considered to be legally binding. However, it is often trivial to extract the signature and use it on other documents without the author's permission. In the case of a dispute, most courts require some sort of digital signature of authenticity to prove that the electronic copy of the physical signature is authentic. |
Sigismund (1361-1437). Holy Roman Emperor. Second son of Emperor Charles IV, Sigismund inherited the mark of Brandenburg on his father's death in 1378. For six years thereafter he studied at the Hungarian court. In 1385 he married Maria, daughter of King Louis of Hungary and Poland, and in 1387 he succeeded his father-in-law as king of Hungary. Domestic Hungarian problems, Turkish attacks, and intrigues in his bid for succession in Germany and Bohemia weakened his rule. In 1410 he was elected German king, or "king of the Romans." Solution of the Great Schism being in the best imperial interest, he pressured John XXIII* to convoke the Council of Constance* (1414-18), and his international travels and appeals during the sessions were instrumental in restoring a unified papacy. He guaranteed John Hus* safe passage to the council, where the Reformer was martyred. The Hussite Wars in Bohemia (c.1420-36) erupted after Sigismund succeeded Wenceslas as king of Bohemia in 1419 and pledged to prosecute heresy. Vexed by yet another Ottoman advance on Hungary, Sigismund was unable to consolidate his power in Germany. Although Pope Eugene IV crowned him Holy Roman Emperor in 1433, Sigismund died without having achieved his goal of unifying Christendom against Islamic advance.
Types of castles
Early Saxon and Norman earthwork castles can be divided into two main types: ringwork castles and motte and bailey castles. The main difference between the two types is the existence of the motte (or mound) on which the keep is located.
The key characteristic parts of late Saxon and early Norman castles were:
The primary defences of a ringwork type of castle were the ditch, bank and palisade. One ditch or a series of ditches were dug around the edge of the bailey (the open area within the castle's structure) and the earth taken out was piled up inside to form banks. Where possible the ditches were allowed to fill with water from a nearby river or stream to provide extra defence.
On top of the banks a palisade of wooden planks or logs was constructed to add extra height. A wall-walk was usually built behind the palisade to allow the defenders to see over the top and fire missiles down on attackers below who were attempting the climb up. The palisade continued all the way around the edge of the bailey and its only real threat was from fire. This threat eventually led to the introduction of stone as a building material.
The area within the palisade is called a bailey, and baileys were mostly circular or kidney-shaped. If attackers did manage to cross the ditch and get over the palisade, the people inside the castle needed a last line of defence. The keep was designed as this last line of defence. Usually made of wood, the keep needed to be large enough to hold the baron's family and household. Space could also be required to hold soldiers and local villagers at times of attack.
Types of Keep
In some early castles the strongest part of the construction could have been the gatehouse, in which case this building would have served as the keep.
A keep could have been made in the shape of a tower with a couple of floors or could just have been a wooden hall. Most of the early keeps were constructed from wood and were always under threat from fire. A keep built from stone was more secure but was expensive to build and only the richest barons could afford a stone keep.
Hall keeps were very common and most Norman barons and Saxon thegns depended on the protection they gave. These hall keeps needed to be large enough to house not only the baron's family, but his supporters and their animals. Inside, the hall keeps looked like large barns with huge posts supporting the roof.
A large fire was situated at the centre of the hall away from any wood that could catch alight. The smoke would rise into the rafters and exit through a small hole in the roof above or through a gap at the end of the hall.
University Degree: Macroeconomics
This increased the power both economically and politically for the European Union, as there are gains in prestige and political power associated with a common monetary policy.2(Tsoukalis) The European Union is now compared relatively with America as a large economic power with a single currency; this makes trading with the rest of the world easier as it will "increase the relative political weight of the countries involved with EMU"3(p12). The EU can use this new power to become more confrontational in world affairs.
- Word count: 1881
By plotting these two curves, we can calculate the equilibrium rate of interest, and would be able to see it move in relation to changes in the economic environment, such as a rise in the demand for capital equipment due to an improvement in technology. If this were the case then there would be an increase in the demand for loanable funds and the demand curve would shift to the right, increasing the interest rate and therefore encouraging a higher level of savings.
- Word count: 1453
An argument against optimum currency areas (OCA) is that the single currency and the common interest rate used in an OCA would mean one-size-fits-all monetary policy. Explain this argument, and consider whether and under what condit
For example, a two input production function involving capital and labour could take the form: Function 2 Where (K) represents capital and (L) labour. Given a production function like this, a firm could always increase the quantity it produces by increasing one or all inputs and the firm would simply employ an infinite number of inputs in order to achieve infinite output. However, since in reality inputs are scarce and thus have prices, the firm can only employ the combinations of inputs that its budget will allow. For example, if a firm with production Function 2 faced equal prices, say wages for one unit of labour (w)
- Word count: 1756
Both supply and demand shocks can cause inflation, but, without money growth the inflation would be short lived. Discuss.
However when it comes to inflation in the short run there is a debate between Keynesian and monetarist economists. Monetarist economists believe that the most significant factor influencing inflation is money supply. They believe that for sustained inflation money supply must rise faster than the rate of growth of national income. However the Keynesian view is that changes in money supply do not directly affect prices and that inflation is caused by pressures in the economy such as supply and demand shocks. According to Robert J Gordon's "triangle model", there are three main types of inflation: Demand pull inflation, cost push inflation, and built in inflation.
- Word count: 1693
Evaluating the Health of the US economy. Analyse the macroeconomic policies of the Bush and Obama administrations.
Adopted from www.tradingeconomics.com If we look at the Real GDP however, it shows a growing trend. Using the year 1980 as the base year, United States real GDP was 3.10% change in 2010 according to the International Monetary Fund (IMF). It is forecasted that US Real GDP for 2011-2015 will be around 2.39% change. Figure 3 Real GDP Rate. Adopted from www.tradingeconomics.com After that, we will also look into the inflation of the economy. Inflation refers to the rise in prices of goods and services compared with the standard of living. The most well known measures of Inflation are the CPI which measures consumer prices, and the GDP deflator, which measures inflation in the whole of the domestic economy.
- Word count: 4042
(Stanton et.al,,2009) Besides these 2 main players, there are other smaller players as larger regional bakeries such as Daily bakery at Johor, Federal Bakery and Angel Bakery under Kuala Lumpur and Klang Valley region, and Family Bakery in Central Peninsula Malaysia. Others such as Kilang Roti Florida Sdn. Bhd., Lee brothers Bakery, Economic Bakery factory.(Malaysia Business Listings,undated) While for retail bakery chains such as Kings Confectionery, Bee's confectionery and bread boutiques like Bread Story.(Stanton et.al,2009) Bread Industry in Malaysia is categorized as oligopoly which has less number of large bread makers control in the market.
- Word count: 5042
The purpose of this report is to explore Australias current economic conditions and how the government and central bank utilizes its macroeconomic instruments Fiscal and Monetary policies to accommodate the financial crisis. Moreover, th
This report portrays the current Australian economic situation supported by current economic indicators and variables. Next, the report outlines the macroeconomic policies (Fiscal and Monetary Policies) taken by the Australian government and the central bank to accommodate the Global Financial Crisis. Specifically, the report provides an overview of the government's Economic stimulus plan which includes government spending over a specific horizon. Lastly, the report justifies the reasons for the slowdown of government's spending given the current economic situation and prospects of Australian economy.
- Word count: 2806
Who and what determines the interest rate? Summaries of the processes by which monetary policy is conducted are readily accessible in Budd (1998) and, in retrospect, King (1997, 2002). The responsibility for setting interest rates is currently held by the Monetary Policy Committee (MPC) of the Bank of England. The MPC has nine members, including the Governor, two Executive Directors, and two Deputy Governors responsible for monetary policy analysis and monetary policy operations. The other four 'external' members are appointed by the Chancellor of the Exchequer with 'experience and knowledge, which is likely to be relevant to the committee's functions'.
- Word count: 1781
International migration of capital is generated not only by the absolute need of capital mentioned above, but also by the possibility of a more favorable exploitation of the available capital in a country other than that in which it was formed. This way arise the actual flows of financial linkages among countries. A lot of these flows take the form of foreign loans granted or received. 2. In what does the external debt consist of? It includes all loans solicited by the government or by corporations or private households, resident to the country.
- Word count: 3649
Explain the concept of Price Elasticity of Demand and discuss its relevance for Business and Government.
Demand can be judged to be relatively elastic, relatively inelastic or unitary elastic and can be represented as a figure. It is common for all goods to experience an increase in demand from a decrease in price, however if the reduction in price leads to a more than proportionate change in quantity demanded, it is said that demand very sensitive to price changes. Therefore, the demand is relatively elastic and the price elasticity would be greater than one. On the other hand, if there has been a less than proportionate change in quantity demanded, it indicates that it is not very responsive to price changes; thus demand is relatively inelastic and the price elasticity would be between zero and one.
- Word count: 1811
The loss of a job would mean lower living standards and often a fall in their self-esteem. Moreover, involuntary unemployment also harms the whole economy, as they are not contributing to the economy's production and potential output is wasted; they are viewed as the burden of the economy and adversely affect the growth of the country (Lipsey et al, 2007). As a result, unemployment is a recurring debated matter of governments. There are two main types of unemployment, equilibrium and disequilibrium unemployment. Keynesian unemployment is often referred as demand deficient or cyclical unemployment and is categorized to being in disequilibrium; this is where "real wage rates in the economy are above the equilibrium level" (Sloman et al, 2010).
- Word count: 1308
Macroeconomic analysis of Brazil. In the following sections, we give a brief overview of the economys past and then present a detailed analysis of its current macroeconomic policies and the challenges faced.
Brazil's major export partners are China, US, Argentina and Netherlands while its major import partners include China, US, Argentina and Germany. The large inflow of FDI into Brazil in the last decade has played a significant role in the country's industrialization process. The stock of direct foreign investment was $319.9 billion in 2009. In the following sections, we give a brief overview of the economy's past and then present a detailed analysis of its current macroeconomic policies and the challenges faced.
- Word count: 3994
Welfare Economics. In this essay I will be examine the arguments for and against some of the key concepts (such as the fundamental theorems and Market failures) of this branch of economics.
relative to their budget constraint. The further out the indifference curve the more utility the individual gets. This would mean that U3 is the most desired indifference as it is furthest out however due to the budget constraint U3 is unattainable and U1 gives lower utility. Therefore the highest attainable utility subject to the budget constraint would be on U2 because here the individual's indifference curve is tangential to the budget line. At this point the slope of the indifference curve is equal to the slope of the budget line. The slope budget line is determined by the relative prices of good 1 and 2 and the slope of the indifference curve is simply the marginal rate of substitution2 (MRS)
- Word count: 2999
The Global Economic Crisis. The present project analyses different approaches of the crisis management in some of the most affected industries and countries. Moreover, the strategies used by some big companies and the measures some governments took durin
ECATERINA MUSTEA, 19 years old "I like interacting with people and because of that I wanted to ask them directly how the Romanian people are affected by the economic crisis." THE ECONOMIC CRISIS Nicoleta Rosu; Andreea Pirvu; Magda Ungureanu; Aurelia Scarlat; Ana Maria Tudose; Alexandra Prunel; Ecaterina Mustea; Adrian Salcu Academia de Studii Economice, Facultatea de Administrare a Afacerilor in limba engleza, anul I ABSTRACT The current crisis is the most global one since the Great Depression of the 1930s.
- Word count: 7713
What are the costs and benefits of using fiscal policy to manage an economys short-term and long term growth rates? Discuss.
CAD can be financed in two ways: foreign borrowings and equity investments by foreigners in Australia. There is a gap between theory and practice. In theory, CAD can be paid entirely by foreign equity investments and no need to borrow overseas funds at all. But in practice, CADs are financed by a combination of borrowing and equity investments. Consider a situation in which Australia has CAD and borrows foreign funds to cover the CAD. Valuation effect causes the Australian dollar value of our debt to fall. Australian dollar appreciates in value against other currencies. If the value of our foreign debt falls far enough, it may offset the additional funds borrowed during the quarter so that net foreign debt is lower at the end of the quarter than it was at the start.
- Word count: 2185
Macroeconomics questions - Supply and demand of labour, effects of a minimum wage, labour force participation in Australia.
Minimum wage laws prescribe the lowest hourly wage that employers must pay to workers. The demand and supply model shows that this law must raise the unemployment rate. The real wage is where the quantity of labour demanded equals the quantity of labour supplied (X). If a minimum wage is imposed (Wm) that exceeds the market clearing wage, then the number of people who want jobs exceeds the number of people who are willing to hire, thus creating unemployment. This law benefits especially low skilled workers, who would have not otherwise earned more.
- Word count: 1032
Macroeconomics- Demand and Supply of Money, Monetary Policy and Whether the Australian Government should tighten monetary policy.
Thus, a higher price level is associated with a higher demand for money. c) Real output Rising real incomes and increasing numbers of people employed will increase the demand for money at each rate of interest. An increase of real output raises the quantity of goods and services that people and businesses want to buy and sell. To accommodate the increase in transactions, both individuals and businesses tend to hold more money. (ii) Define monetary policy. Discuss the possible channels by which monetary policy might affect the economy?
- Word count: 1140
The Policy Implications of the Relationship between Inflation and Unemployment in Canada (1967 2006)
Friedman believed low unemployment and low inflation were not mutually exclusive policy objectives. There was a trade-off between inflation and unemployment in the short term but not in the long term. This study aims to establish the relationship between unemployment and inflation in Canada for both the short and long term and recommends policy implications for the Canadian Government and the Bank of Canada. II. Economic Theory A negative relationship between wage inflation and unemployment was first hypothesised by Alban Williams Phillips in 1958.
- Word count: 3478
Economics in theory. The main purpose of this report is to explain a couple of economic concepts to the business men and women attending the conference held by the investment bank of Bluefoot Securities and to further help them understand how to apply ec
2.0 Explanation of Some Key Economic Terms Economics is always related to money according to the general public, which is indeed a part of the research of economics yet not the sole and only part. In effect, economics deals with a lot of researches other than money and this report will begin with some fundamental economic problems. 2.1 Scarcity and Choice Scarcity is an eternal economic topic for human being and it is the result of the bridge between people's infinite desire and finite resources that can be utilised to satisfy people's desire.
- Word count: 3661
Describe types of unemployment and their causes. Explain Keynesian and classical assumptions in relation to the types of unemployment
People who are considered unemployed are those who are seeking work or lay off for more than a week. There are many different reasons why a person could be unemployed. The government tries to find solutions in order to reduce unemployment by making up policies. The unemployment problem began in the 1990s. First, it resulted from the restructuring of economy. In the period of planned economy, the large-scale corporation is the most common production organization. But to the market economy, the most common one is the individual or small-scale corporation. The workers from the large-scale firms cannot adapt themselves to the production form of the individual or small-scale ones.
- Word count: 1790
Describe the macroeconomic performance of the UK economy over the past 40 years. How does this performance compare with other developed industrial economies?
Unemployment rates in the UK were as low as 2.2% but rose to an average of 4.5% over 1970 to 1979. The 1970's was a huge era during this time things started to change a lot economically the way of Keynesian economics was being scrapped and new ways were being brought in, in Britain this change was signified after conservatives won the vote in 1979 under the power of Margaret Thatcher which although signaled change it wasn't for the better.
- Word count: 1870
The cost of production would include things such as land labour and capital all the initial costs the producer faces in making the good. The cost of living is the minimum or the fair cost of how much a person or a worker needs in surviving day to day. The cost of complying with fair trade standards is a key cost as producers are often monitored to ensure they are agreeing to their share of the deal. With the help of the Fair trade Labelling Organisation (FLO), an organisation, which ensures the fair trade name, is not being misused and people are complying with the standards of fair trade.
- Word count: 1757
It was mainly in charge of money supply and foreign exchange policies. Before it was nationalised during the 1600's ( http://www.bankofengland.co.uk/about/history/index.htm ) The Bank of England was appointed as the English government's bank. At this time, the Bank of England is in charge of monetary stability and financial stability. The instiution works along side the HM Treasury and many other international banks to make sure the economy remains stable and there is continuous growth. Just as the PBC works with the ministry of finance.
- Word count: 1845
When in a time sensitive situation, a leader will need to put their stamp of approval on the work being done to be assured that the job is done correctly. This leadership seemed most ineffective when trying to develop a strong sense of teamwork, when the team wants a more natural feel to the environment, or when the members of the group have some knowledge or skill for what they need to accomplish a job or project. While working at American Home Shield, this leadership style is definitely used.
- Word count: 1524
More so, Crowther, defines inflation as a state in which the value of money is falling. On the other hand, Deflation is the opposite of inflation. Here the level of prices is going down and consequently the value of money drops. Professor Paul Einzig in his book monetary policy defines deflation as "a state of disequilibrium in which a contraction of purchasing power tends to cause, or is the effect of a declining in the price level". An economy is experiencing deflation when it is in a period of falling prices and the output of work by productive agents increases relatively to money.
- Word count: 1658 |
MADISON - By changing the composition of fish populations in a lake, scientists have found a switch by which the flow of carbon between lakes and the atmosphere can be turned on, off, or reversed.
The finding, reported by researchers from the University of Wisconsin-Madison in this week's (July 11) edition of the journal Science, is the first to show that only slight rearrangement of an intact ecosystem's food web can directly influence the atmosphere.
The discovery is important because it demonstrates that single, seemingly subtle changes in ecosystems can have far-reaching consequences, and are capable of disrupting the fundamental biogeochemical processes of the Earth.
"Linkages in ecosystems are both stronger and stranger than we imagined," said Stephen R. Carpenter, a UW-Madison limnologist who, with fellow limnologists Daniel E. Schindler and James F. Kitchell, authored the report. "Biological processes have powerful feedbacks to processes that are normally thought to be purely physical or chemical in nature."
While lakes occupy a very small area of the planet's surface, the discovery that simple biotic change is capable of altering the exchange of carbon between the atmosphere and the Earth's surface raises questions of global significance, said Carpenter.
"To what extent could fertilization of the oceans and alteration of oceanic food webs affect global carbon cycles? In fact, runoff from land is now enriching coastal oceans to unprecedented levels, and industrial fishing is causing massive changes in marine food webs. So the global experiment is underway," said Carpenter.
Carbon, an essential nutrient in lakes, typically flows from the land in the form of dead leaves and other organic matter that accumulates and decays underwater. Usually, these processes lead to a surplus of carbon dioxide in lakes. Excess carbon in a lake is released as a gas, carbon dioxide, to the atmosphere.
When there is a deficit of carbon dioxide, however, lakes draw the gas directly from the atmosphere.
Working on an isolated, undeveloped suite of lakes in Michigan's Upper Peninsula, the Wisconsin scientists were able to manipulate the flow of carbon between an entire, intact ecosystem and the atmosphere by placing either minnows or bass at the apex of the lake food web.
Bass, by preying on the minnows that consume algae-grazing zooplankton, effectively increased the flow of carbon to the atmosphere by freeing zooplankton from their predators. The booming zooplankton populations grazed the algae down to the point where the algae could no longer take up the lake's excess carbon. The lakes, in effect, became pumps, expelling unused carbon to the atmosphere.
In lakes dominated by minnows, whose menus include algae-eating zooplankton, burgeoning algae populations and their photosynthetic requirements resulted in a carbon deficit, and the lakes became carbon sinks, drawing carbon directly from the atmosphere.
"This effect of fishes on gas exchange results from the changes in aquatic food webs that are regulated by the species of fish present in a particular lake," said Schindler.
The changes in lakes, Schindler emphasized, will not have implications for global climate. However, the new understanding of the processes that alter the exchange of carbon dioxide between lakes and the atmosphere can be generalized to other ecosystems such as oceans.
"Although the consequences ... are much less known for marine systems than for lakes, we should expect that the ecological responses to exploitation are similar in many ways," Schindler said.
The work done by the Wisconsin scientists was funded by the National Science Foundation and conducted under the auspices of the UW-Madison Center for Limnology.
###- Terry Devitt (608) 262-8282, [email protected]
(Editor's note: Limnologist Daniel E. Schindler is in transition from the University of Wisconsin-Madison to the University of Washington in Seattle. He can best be reached through the University of Washington's Office of News and Information at (206) 543-2580.)
Learning to 'talk things through in your head' may help people with autism
Press release issued: 25 January 2012
Teaching children with autism to 'talk things through in their head' may help them to solve complex day-to-day tasks, which could increase the chances of independent, flexible living later in life, according to new research from Durham University, the University of Bristol and City University London.
The study, co-authored by Professor Chris Jarrold of Bristol's School of Experimental Psychology and published in Development and Psychopathology, found that the mechanism for using 'inner speech' or 'talking things through in their head' is intact in children with autism but they do not always use it in the same way as typically developing children do.
The psychologists found that the use, or lack of, thinking in words is strongly linked to the extent of someone's communication impairments which are rooted in early childhood.
However, the researchers suggest teaching and intervention strategies for children targeted at encouraging inner speech may make a difference. These strategies, which include encouraging children to describe their actions out loud, have already proven useful for increasing mental flexibility among typically developing children.
It is also suggested that children with autism spectrum disorder (ASD) could, for example, benefit from verbal learning of their daily schedule at school rather than using visual timetables – currently a common approach.
Lead author, Dr David Williams, lecturer in the Department of Psychology at Durham University, said: "Most people will 'think in words' when trying to solve problems, which helps with planning or particularly complicated tasks. Young, typically developing children tend to talk out loud to guide themselves when they face challenging tasks.
"However, only from about the age of seven do they talk to themselves in their head and, thus, think in words for problem-solving. How good people are at this skill is in part determined by their communication experiences as a young child."
One out of every 100 people in the UK has ASD, which is diagnosed on the basis of a set of core impairments in social engagement, communication and behavioural flexibility. Children with autism often miss out on the early communicative exchanges when they are young which may explain their tendency not to use inner speech when they are older. This relative lack of inner speech use might contribute to some of the repetitive behaviours which are common in people with autism.
In the study, those individuals with more profound communication impairments also struggled most with the use of inner speech for complex tasks. People with ASD did, however, use inner speech to recall things from their short-term memory.
Dr Williams said: "These results show that inner speech has its roots in interpersonal communication with others early in life, and it demonstrates that people who are poor at communicating with others will generally be poor at communicating with themselves.
"It also shows that there is a critical distinction between being able to express yourself verbally and actually using silent language for problem-solving. For example, the participants with ASD in our study were verbally able, yet did not use inner speech to support their planning."
Caroline Hattersley, Head of Information, Advice and Advocacy at the National Autistic Society, said: "This study presents some interesting results and could further our understanding of autism. If the findings are replicated on a wider scale they could have a significant impact on how we develop strategies to support children with the disability."
In the study, 15 high-functioning adults with ASD and 16 comparison participants were asked to complete a commonly used task which measures planning ability, called the Tower of London task. This task consists of five coloured disks that can be arranged on three individual pegs. The aim of the task is to transform one arrangement of disks into another by moving the disks between the pegs, one disk at a time, in as few moves as possible. This type of complex planning task is helped by 'talking to yourself in your head'.
The participants did the task under normal conditions as well as under an 'articulatory suppression' condition whereby they had to repeat out loud a certain word throughout the task, in this case, either the word 'Tuesday' or 'Thursday'. If someone uses inner speech to help them plan, articulatory suppression prevents them from doing so and will detrimentally affect their planning performance, whereas it will have little impact on the planning performance of someone who doesn't use inner speech.
The results showed that whilst almost 90 per cent of normally developing adults did significantly worse on the Tower of London task when asked to repeat the word, only a third of people with autism were in any way negatively affected by articulatory suppression during the task. This suggests that, unlike neurotypical adults, participants with autism do not normally use inner speech to help themselves plan.
The participants also completed a short-term memory task to assess the use of inner speech in short-term recall.
The research was funded by a City University London Research Fellowship to the lead researcher.
'Inner speech is used to mediate short-term memory, but not planning, among intellectually high-functioning adults with autism spectrum disorder', Williams, Bowler and Jarrold, published by Cambridge University Press in Development and Psychopathology, January 2012. |
Renewable energy generation from sources such as wind is largely unpredictable. What if you could smooth out the volatility and make a wind turbine act like a conventional generator? This is exactly what we are trying to achieve in a wind-battery project we’ve developed alongside Cowessess First Nation.
The project consists of an 800 kW Enercon wind turbine, along with a Saft battery capable of charging or discharging up to 400 kW for 90 minutes. The battery monitors the instantaneous output of the wind turbine, and automatically charges or discharges in an effort to smooth or steady the output of the wind turbine, thereby improving the turbine’s predictability.
Wind turbine power production is inherently volatile, and can be very difficult to predict, even five to ten minutes in advance, let alone one day in advance. We’ve recorded many instances where the turbine has both ‘ramped’ up by 600 kW of output and decreased by 600 kW within a five-minute window. This is equivalent to 150 homes coming online almost simultaneously, and then going back offline – and this is just one turbine! By using energy storage, SRC is able to reduce or ‘smooth out’ these ramps by up to 78 per cent, meaning that more wind turbines could be put on to the grid without negatively affecting power quality or reliability.
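To make the idea concrete, here is a deliberately simplified sketch of ramp-rate smoothing. It is not the project's actual control logic: the moving-average target, the one-minute time step and the state-of-charge bookkeeping are our own assumptions, with only the battery limits taken from the 400 kW / 90 minute rating described above.

# A simplified ramp-smoothing sketch (illustrative only, not the real controller).
from collections import deque

BATTERY_POWER_KW = 400.0                      # max charge/discharge rate
BATTERY_ENERGY_KWH = 400.0 * 1.5              # 90 minutes at full power
STEP_HOURS = 1.0 / 60.0                       # assume one-minute control steps

def smooth(turbine_kw_series):
    """Steer combined (turbine + battery) output toward a 10-minute moving average."""
    window = deque(maxlen=10)
    soc_kwh = BATTERY_ENERGY_KWH / 2          # start half charged
    combined = []
    for turbine_kw in turbine_kw_series:
        window.append(turbine_kw)
        target_kw = sum(window) / len(window)
        # Positive = discharge to fill a lull, negative = charge to absorb a gust,
        # limited by the power rating and by the energy (or headroom) left in the battery.
        max_discharge = min(BATTERY_POWER_KW, soc_kwh / STEP_HOURS)
        max_charge = min(BATTERY_POWER_KW, (BATTERY_ENERGY_KWH - soc_kwh) / STEP_HOURS)
        battery_kw = max(-max_charge, min(max_discharge, target_kw - turbine_kw))
        soc_kwh -= battery_kw * STEP_HOURS
        combined.append(turbine_kw + battery_kw)
    return combined

# A gusty five minutes: input swings of 550-600 kW are cut to 300 kW or less.
print(smooth([800, 200, 750, 150, 700]))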
The battery even has the capability to provide power during times when the turbine is not producing any power at all. This is important because the grid will often ‘see’ two peaks of energy usage every day: one in the morning when people wake and turn on their lights, stoves and showers, and another similar peak in the evening as people get home from work. There is no guarantee that there is sufficient wind to provide energy during these peaks. Energy storage, such as the system we’re demonstrating, can provide an immediate response to these energy needs, improving the reliability of wind turbine generation.
Energy Storage is Good for the Environment and the Grid
When large amounts of variable generation such as wind turbines are put onto the grid, the electrical grid can begin to become unstable. By improving the predictability of wind power generation through the use of energy storage, even more wind turbines can be installed on the grid without negatively affecting reliability. In the end, this means more environmentally-friendly energy produced, less reliance on fossil fuels and fewer greenhouse gas emissions. |
A goat is a member of the genus Capra of the bovid (Bovidae) family of even-toed ungulates, or hoofed mammals. There are several species of goats, all of them native to Asia, Europe, or northern Africa.
The domestic goat is descended from the wild goat, Capra aegagrus, and is sometimes considered a subspecies, C. aegagrus hircus, and sometimes a distinct species, C. hircus. It was one of the first animals domesticated by humans and remains an important domesticated animal today.
Goats provide numerous benefits to humans, including food (milk, meat, cheese), fiber and skin for clothing, brush and weed control, and as symbols in religion, folklore, and mythology. While the domestication of goats has been a tremendous benefit to humanity, poor management of goats has led to overgrazing of land and desertification in various regions. Properly managed goat herds can serve a valuable purpose in controlling weeds and in reducing excess undergrowth in forested areas vulnerable to fires.
A male goat is called a buck or billy, and a female is called a doe or nanny. Young goats are called kids.
The Rocky Mountain goat, Oreamnos americanus, of North America is not a true goat, although it, like sheep, the musk ox, and the chamois, belongs to the goat-antelope subfamily (Caprinae) and is therefore closely related to the goats.
Goats naturally live in rugged mountain or desert habitats. They are strong and skillful climbers and jumpers.
Goats are small for ungulates. Depending on the species, adults stand from 65 to 105 cm (2 to 3.5 feet) at the shoulder and weigh from 18 to 150 kg (40 to 330 lbs). Their bodies are covered with thick hair that protects them from the cold.
Both male and female goats have horns with the male's being larger. The horns are either curved or spiral shaped and can be as long as 165 cm (5.4 feet). Both male and female goats use their horns to fight among themselves and to fight off predators (Nowak 1983).
Goats mostly live in groups ranging in size from 5 to 100 or so animals. Sometimes adult males live alone. The groups tend to keep moving, which helps them find food in their sparse habitats.
Goats are thought to be more intelligent than most other hoofed animals and seem to have a natural curiosity. They sometimes climb up into trees to feed on the leaves (Nowak 1983; Voelker 1986).
Goats give birth to one or two young after a gestation period of between 150 and 180 days, depending on the species. Like the young of most other bovids, newborn goats can stand and follow their mothers almost as soon as they are born. The milk of goats is very rich and young goats grow rapidly. Mother goats are very protective of their young and will fight to defend them (Nowak 1983).
Each of these goat species has several subspecies (Nowak 1983; IUCN 2007).
Goats were one of the first animals domesticated by humans. This seems to have taken place first in the Middle East, perhaps as long as 10,000 years ago (the same time that sheep were also being domesticated). It has been suggested that the goats' natural curiosity and search for new food sources led them to associate with human settlements (Budlansky 1992; Clutton-Brock 1999).
Keeping goats proved to be a valuable resource for early communities. They provided meat and milk, and their hair was used as fiber for clothing. The skin and the bones were also used. Historically, goat hide has been used for water and wine bottles, in both traveling and transporting wine for sale. It has also been used to produce parchment, which was the most common material used for writing in Europe until the invention of the printing press.
Domestic goats were generally kept in herds that wandered on hills or other grazing areas, often tended by goatherds who were frequently children or adolescents. These methods of herding are still utilized today. Goats can survive in difficult conditions. They also prefer different food than sheep and cattle, which are primarily grazers while goats are browsers, like deer, eating mostly leaves and leafy plants. Goats are better at fighting off predators than sheep and historically were kept sometimes with flocks of sheep to help defend the sheep.
Over time, goat keeping spread over most of Asia, Europe, and Africa. In parts of Africa and Asia, large herds of goats were maintained and land was often overgrazed. This has contributed to the expansion of deserts over large areas of these continents.
The Spanish and Portuguese brought goats to North and South America, and the English brought goats to Australia and New Zealand. Goats were also kept aboard ships to provide milk and meat on long voyages. Some of them were released by sailors on islands so that they could be hunted when the sailors returned. This has given rise to feral goat populations, which have caused much environmental damage on many islands around the world. Feral goats also exist on continents, but are not such an environmental problem there since their numbers are controlled by predators (ISSG 2007; OSU 1996).
Goats have continued to be an important domestic animal to the present day. The total number of domestic goats in the world is hard to estimate. China and India have the largest goat populations, over 100 million each, with most of them being raised for meat (Miller 1998).
Many farmers use inexpensive (i.e. not purebred) goats for brush control, leading to the use of the term "brush goats." (Brush goats are not a variety of goat, but rather a function they perform.) Because they prefer weeds (e.g. multiflora rose, thorns, small trees) to clover and grass, they are often used to keep fields clear for other animals. Their plant diet is extremely varied and includes some species that are toxic or detrimental to cattle and sheep. This makes them valuable for controlling noxious weeds and clearing brush and undergrowth. They will seldom eat soiled food or water unless facing starvation.
In efforts to reduce the environmental impact of human land use, some institutions, such as the NASA Ames Research Center in the heart of California's Silicon Valley, are turning to goats to reduce use of herbicides and mowing machines.
The taste of goat meat, called chevon, is said to be similar to veal or venison, depending on the age of the goat. It can be prepared in a variety of ways including stewed, baked, grilled, barbecued, minced, canned, or made into sausage. It is also healthier than mutton as it is lower in fat and cholesterol, comparable to chicken. It is popular in China, the Middle East, south Asia, Africa, Mexico, and northeastern Brazil. Saudi Arabia is the largest importer of goat meat (Miller 1998). It is not currently popular in Europe and the United States.
Goats' milk is more easily digested than cows' milk and is recommended for infants and people who have difficulty with cows' milk. The curd is much smaller and more digestible. Moreover it is naturally homogenized since it lacks the protein agglutinin. Furthermore, goats' milk contains less lactose, which means it will usually not trigger lactose intolerance in humans.
Goats' milk is also used to make popular cheeses such as Rocamadour and feta.
Goat skin is still used today to make gloves, boots, and other products that require a soft hide. Kid gloves, popular in Victorian times, are still made today. The Black Bengal breed, native to Bangladesh, provides high-quality skin.
Cashmere goats produce a fiber, "Cashmere wool," which is one of the best in the world. Cashmere fiber is very fine and soft, and grows beneath the guard hairs. Ideally there is a proportionally smaller amount of guard hair (which is undesirable and cannot be spun or dyed) to the cashmere fiber. Most goats produce cashmere fiber to some degree; however, the Cashmere goat has been specially bred to produce a much higher amount of it with fewer guard hairs.
In south Asia, cashmere is called pashmina (Persian pashmina, meaning fine wool) and these goats are called pashmina goats (often mistaken as sheep). Since these goats actually belong to the upper Kashmir and Laddakh region, their wool came to be known as cashmere in the West. The pashmina shawls of Kashmir with their intricate embroidery are very famous.
The Angora breed produces long, curling, lustrous locks of mohair. The entire body of the goat is covered with mohair and there are no guard hairs. The locks can be six inches or more in length.
Goats do not have to be slaughtered to harvest the wool, which is instead sheared (cut from the body) in the case of Angora goats, or combed, in the case of Cashmere goats. The fiber is made into products such as sweaters. Both cashmere and mohair are warmer per ounce than sheep's wool and are not scratchy or itchy or as allergenic as wool sometimes is. Both fibers command a higher price than wool, compensating for the fact that there is less fiber per goat than there would be wool per sheep.
Goats are mentioned many times in the Bible. A goat was a considered a clean animal by Jewish dietary laws and was slaughtered for an honored guest. It was also acceptable for some kinds of sacrifices.
On Yom Kippur, the festival of the Day of Atonement, two goats were chosen and lots were drawn for them. One was sacrificed and the other allowed to escape into the wilderness, symbolically carrying with it the sins of the community. From this comes the word "scapegoat" (Moller 2007).
Since its inception, Christianity has associated Satan with imagery of goats. The common medieval depiction of the devil was that of a goat-like face with horns and a small beard (a goatee). A common superstition in the Middle Ages was that goats whispered lewd sentences in the ears of the saints. The origin of this belief was probably the behavior of the buck in rut, the very epitome of lust.
The goat has had a lingering connection with Satanism and pagan religions, even into modern times. The pentagram, a symbol used by both Satanism and Wicca, is said to be shaped like a goat's head. The "Baphomet of Mendes" refers to a satanic goat-like figure from nineteenth-century occultism.
According to Norse mythology, the god of thunder, Thor, has a chariot that is pulled by several goats. At night when he sets up camp, Thor will eat the meat of the goats, but take care that all bones remain whole. Then he wraps the remains up, and in the morning, the goats will always come back to life to pull the chariot. When a mortal who is invited to share the meal breaks one of the goats' legs to suck the marrow, however, the animal's leg remains broken in the morning, and the mortal is forced to serve Thor as a servant to compensate for the damage.
The goat is one of the twelve-year cycle of animals that appear in the Chinese zodiac related to the Chinese calendar. Each animal is associated with certain personality traits; those born in a year of the goat are predicted to be shy, introverted, creative, and perfectionist. The Capricorn sign in the Western zodiac is usually depicted as a goat with a fish's tail.
Several mythological hybrid creatures are part goat; including the Chimera which was part goat, part snake, and part lion. Fauns and satyrs are mythological creatures that are part goat and part human.
What Do They Look Like?
Cutworms are the immature stage of a group of moths. Cutworms are fat, smooth, relatively large caterpillars, about 2-5 cm long. They curl up when disturbed. Some species are a dull, greasy grey, while others are green or tan with patterns of lines or dark markings.
Large yellow underwing moth, cutworms and typical leaf damage
The large yellow underwing moth is the latest cutworm to become a problem on the BC coast. The adult moth has dark brown forewings and apricot orange hindwings.
What Does Cutworm Damage Look Like?
All cutworms feed on plants at night. They hide in the soil or leaf litter near the base of plants during the day. There are two groups of cutworms: One group feeds on plant stems, just at or below the soil surface. Toppled seedlings and transplants snipped off at the soil line show where they are feeding. The most common cutworms in local gardens are the ‘climbing cutworms’, which includes the large yellow underwing moth. At night they chew large, ragged holes in leaves and new shoots. They can be particularly damaging to vegetable transplants, early shoots of potatoes and perennials.
Cutworm Life Cycles
In the mild coastal winter climbing cutworms often over-winter as caterpillars; some species over-winter as eggs. All species feed on plants in the spring for 3 weeks to 2 months. When full grown, cutworms change into an immobile stage, called a pupa, in the soil. Inside the pupa, the caterpillar transforms into a moth. Moths emerge later in the summer. Some lay eggs in the soil, while climbing cutworms lay eggs on branches, fence posts and other objects.
When Are Cutworms A Problem?
Cutworm numbers vary from year to year. When climbing cutworm numbers are high, they are particularly damaging to small plants between early May and late June. Large yellow underwing cutworms also feed on leaves of winter vegetables and other plants during warm spells in the winter.
How Can I Prevent Damage?
Turn soil several weeks before planting to allow birds to feed on cutworms. Set out transplants as late in the season as possible to avoid the main feeding period. Plant extra seeds or seedlings to ensure a normal crop after some losses from cutworms. Protect transplants from non-climbing cutworms by planting them with a protective “collar” around the stem. Make cylinders about 7-10 centimetres high and 5 centimetres across. You can make them from stiff plastic, light cardboard, toilet paper rolls or small metal cans with both ends removed. Push the cylinder at least 2 centimetres into the soil, with the rest extending above the soil. Protect the natural enemies of cutworms, such as birds, predatory wasps, parasitic wasps, ground beetles and other predators. Avoid using insecticides and attract beneficial insects by planting flowers that supply pollen and nectar (see the Beneficial Insects Info Sheets in the series).
What Can I Do To Control Cutworms?
Pick cutworms from plants at night or unearth them from around the base of damaged plants in the morning. Squash them or drown them in soapy water. This is sufficient to control cutworms in most gardens as there are usually only a few present. Microscopic worms, called nematodes, that attack insects are sold at some garden centres. They are expensive, but might be useful where cutworm numbers are unusually high or there is a large area to treat. Not all nematodes control cutworms, therefore before trying this approach, read product labels and talk to garden centre staff.
Tips For A Healthy Garden
- Enrich the soil once or twice a year with compost or other organic fertilizers.
- Choose plants adapted to the conditions of sun or shade, moisture and soil acidity. If necessary, correct the drainage and acidity to suit the plants.
- Plant native plants, which are adapted to the local climate. Most are easy to care for and have few pest problems.
- Before buying plants, make sure they are healthy and free of diseases and insect pests.
- Water deeply, but infrequently, to encourage deep rooting.
- Cover the soil between plants and under shrubs with organic mulches. This insulates the soil, keeps in moisture and suppresses weeds.
- Protect and attract native beneficial insects, birds and other animals.
This material is for information and support; not a substitute for professional advice.
Essential nutrients for your body
- Vitamins and minerals are essential nutrients because they perform hundreds of roles in the body.
- There is a fine line between getting enough of these nutrients (which is healthy) and getting too much (which can end up harming you).
- Eating a healthy diet remains the best way to get sufficient amounts of the vitamins and minerals you need.
Every day, your body produces skin, muscle, and bone. It churns out rich red blood that carries nutrients and oxygen to remote outposts, and it sends nerve signals skipping along thousands of miles of brain and body pathways. It also formulates chemical messengers that shuttle from one organ to another, issuing the instructions that help sustain your life.
But to do all this, your body requires some raw materials. These include at least 30 vitamins, minerals, and dietary components that your body needs but cannot manufacture on its own in sufficient amounts.
Vitamins and minerals are considered essential nutrients—because acting in concert, they perform hundreds of roles in the body. They help shore up bones, heal wounds, and bolster your immune system. They also convert food into energy, and repair cellular damage.
But trying to keep track of what all these vitamins and minerals do can be confusing. Read enough articles on the topic, and your eyes may swim with the alphabet-soup references to these nutrients, which are known mainly by their initials (such as vitamins A, B, C, D, E, and K—to name just a few).
In this article, you’ll gain a better understanding of what these vitamins and minerals actually do in the body and why you want to make sure you’re getting enough of them.
Micronutrients with a big role in the body
Vitamins and minerals are often called micronutrients because your body needs only tiny amounts of them. Yet failing to get even those small quantities virtually guarantees disease. Here are a few examples of diseases that can result from vitamin deficiencies:
- Scurvy. Old-time sailors learned that living for months without fresh fruits or vegetables—the main sources of vitamin C—causes the bleeding gums and listlessness of scurvy.
- Blindness. In some developing countries, people still become blind from vitamin A deficiency.
- Rickets. A deficiency in vitamin D can cause rickets, a condition marked by soft, weak bones that can lead to skeletal deformities such as bowed legs. Partly to combat rickets, the U.S. has fortified milk with vitamin D since the 1930s.
Just as a lack of key micronutrients can cause substantial harm to your body, getting sufficient quantities can provide a substantial benefit. Some examples of these benefits:
- Strong bones. A combination of calcium, vitamin D, vitamin K, magnesium, and phosphorus protects your bones against fractures.
- Prevents birth defects. Taking folic acid supplements early in pregnancy helps prevent brain and spinal birth defects in offspring.
- Healthy teeth. The mineral fluoride not only helps bone formation but also keeps dental cavities from starting or worsening.
The difference between vitamins and minerals
Although they are all considered micronutrients, vitamins and minerals differ in basic ways. Vitamins are organic and can be broken down by heat, air, or acid. Minerals are inorganic and hold on to their chemical structure.
So why does this matter? It means the minerals in soil and water easily find their way into your body through the plants, fish, animals, and fluids you consume. But it’s tougher to shuttle vitamins from food and other sources into your body because cooking, storage, and simple exposure to air can inactivate these more fragile compounds.
Interacting—in good ways and bad
Many micronutrients interact. Vitamin D enables your body to pluck calcium from food sources passing through your digestive tract rather than harvesting it from your bones. Vitamin C helps you absorb iron.
The interplay of micronutrients isn’t always cooperative, however. For example, vitamin C blocks your body’s ability to assimilate the essential mineral copper. And even a minor overload of the mineral manganese can worsen iron deficiency.
A closer look at water-soluble vitamins
Water-soluble vitamins are packed into the watery portions of the foods you eat. They are absorbed directly into the bloodstream as food is broken down during digestion or as a supplement dissolves.
Because much of your body consists of water, many of the water-soluble vitamins circulate easily in your body. Your kidneys continuously regulate levels of water-soluble vitamins, shunting excesses out of the body in your urine.
What they do
Although water-soluble vitamins have many tasks in the body, one of the most important is helping to free the energy found in the food you eat. Others help keep tissues healthy. Here are some examples of how different vitamins help you maintain health:
- Release energy. Several B vitamins are key components of certain coenzymes (molecules that aid enzymes) that help release energy from food.
- Produce energy. Thiamin, riboflavin, niacin, pantothenic acid, and biotin engage in energy production.
- Build proteins and cells. Vitamins B6, B12, and folic acid metabolize amino acids (the building blocks of proteins) and help cells multiply.
- Make collagen. One of many roles played by vitamin C is to help make collagen, which knits together wounds, supports blood vessel walls, and forms a base for teeth and bones.
Words to the wise
Contrary to popular belief, some water-soluble vitamins can stay in the body for long periods of time. You probably have several years’ supply of vitamin B12 in your liver. And even folic acid and vitamin C stores can last more than a couple of days.
Generally, though, water-soluble vitamins should be replenished every few days.
Just be aware that there is a small risk that consuming large amounts of some of these micronutrients through supplements may be quite harmful. For example, very high doses of B6—many times the recommended amount of 1.3 milligrams (mg) per day for adults—can damage nerves, causing numbness and muscle weakness.
A closer look at fat-soluble vitamins
Rather than slipping easily into the bloodstream like most water-soluble vitamins, fat-soluble vitamins gain entry to the blood via lymph channels in the intestinal wall (see illustration). Many fat-soluble vitamins travel through the body only under escort by proteins that act as carriers.
Absorption of fat-soluble vitamins
Fatty foods and oils are reservoirs for the four fat-soluble vitamins. Within your body, fat tissues and the liver act as the main holding pens for these vitamins and release them as needed.
To some extent, you can think of these vitamins as time-release micronutrients. It’s possible to consume them every now and again, perhaps in doses weeks or months apart rather than daily, and still get your fill. Your body squirrels away the excess and doles it out gradually to meet your needs.
What they do
Together this vitamin quartet helps keep your eyes, skin, lungs, gastrointestinal tract, and nervous system in good repair. Here are some of the other essential roles these vitamins play:
- Build bones. Bone formation would be impossible without vitamins A, D, and K.
- Protect vision. Vitamin A also helps keep cells healthy and protects your vision.
- Interact favorably. Without vitamin E, your body would have difficulty absorbing and storing vitamin A.
- Protect the body. Vitamin E also acts as an antioxidant (a compound that helps protect the body against damage from unstable molecules).
Words to the wise
Because fat-soluble vitamins are stored in your body for long periods, toxic levels can build up. This is most likely to happen if you take supplements. It’s very rare to get too much of a vitamin just from food.
A closer look at major minerals
The body needs, and stores, fairly large amounts of the major minerals. These minerals are no more important to your health than the trace minerals; they’re just present in your body in greater amounts.
Major minerals travel through the body in various ways. Potassium, for example, is quickly absorbed into the bloodstream, where it circulates freely and is excreted by the kidneys, much like a water-soluble vitamin. Calcium is more like a fat-soluble vitamin because it requires a carrier for absorption and transport.
What they do
One of the key tasks of major minerals is to maintain the proper balance of water in the body. Sodium, chloride, and potassium take the lead in doing this. Three other major minerals—calcium, phosphorus, and magnesium—are important for healthy bones. Sulfur helps stabilize protein structures, including some of those that make up hair, skin, and nails.
Words to the wise
Having too much of one major mineral can result in a deficiency of another. These sorts of imbalances are usually caused by overloads from supplements, not food sources. Here are two examples:
- Salt overload. Calcium binds with excess sodium in the body and is excreted when the body senses that sodium levels must be lowered. That means that if you ingest too much sodium through table salt or processed foods, you could end up losing needed calcium as your body rids itself of the surplus sodium.
- Excess phosphorus. Likewise, too much phosphorus can hamper your ability to absorb magnesium.
A closer look at trace minerals
A thimble could easily contain the distillation of all the trace minerals normally found in your body. Yet their contributions are just as essential as those of major minerals such as calcium and phosphorus, which each account for more than a pound of your body weight.
What they do
Trace minerals carry out a diverse set of tasks. Here are a few examples:
- Iron is best known for ferrying oxygen throughout the body.
- Fluoride strengthens bones and wards off tooth decay.
- Zinc helps blood clot, is essential for taste and smell, and bolsters the immune response.
- Copper helps form several enzymes, one of which assists with iron metabolism and the creation of hemoglobin, which carries oxygen in the blood.
The other trace minerals perform equally vital jobs, such as helping to block damage to body cells and forming parts of key enzymes or enhancing their activity.
Words to the wise
Trace minerals interact with one another, sometimes in ways that can trigger imbalances. Too much of one can cause or contribute to a deficiency of another. Here are some examples:
- A minor overload of manganese can exacerbate iron deficiency. Having too little can also cause problems.
- When the body has too little iodine, thyroid hormone production slows, causing sluggishness and weight gain as well as other health concerns. The problem worsens if the body also has too little selenium.
The difference between “just enough” and “too much” of the trace minerals is often tiny. Generally, food is a safe source of trace minerals, but if you take supplements, it’s important to make sure you’re not exceeding safe levels.
A closer look at antioxidants
Antioxidant is a catchall term for any compound that can counteract unstable molecules such as free radicals that damage DNA, cell membranes, and other parts of cells.
Your body cells naturally produce plenty of antioxidants to put on patrol. The foods you eat—and, perhaps, some of the supplements you take—are another source of antioxidant compounds. Carotenoids (such as lycopene in tomatoes and lutein in kale) and flavonoids (such as anthocyanins in blueberries, quercetin in apples and onions, and catechins in green tea) are antioxidants. The vitamins C and E and the mineral selenium also have antioxidant properties.
Why free radicals may be harmful
Free radicals are a natural byproduct of energy metabolism and are also generated by ultraviolet rays, tobacco smoke, and air pollution. They lack a full complement of electrons, which makes them unstable, so they steal electrons from other molecules, damaging those molecules in the process.
Free radicals have a well-deserved reputation for causing cellular damage. But they can be helpful, too. When immune system cells muster to fight intruders, the oxygen they use spins off an army of free radicals that destroys viruses, bacteria, and damaged body cells in an oxidative burst. Vitamin C can then disarm the free radicals.
How antioxidants may help
Antioxidants are able to neutralize marauders such as free radicals by giving up some of their own electrons. When a vitamin C or E molecule makes this sacrifice, it may allow a crucial protein, gene, or cell membrane to escape damage. This helps break a chain reaction that can affect many other cells.
It is important to recognize that the term “antioxidant” reflects a chemical property rather than a specific nutritional property. Each of the nutrients that has antioxidant properties also has numerous other aspects and should be considered individually. The context is also important—in some settings, for example, vitamin C is an antioxidant, and in others it can be a pro-oxidant.
Words to the wise
Articles and advertisements have touted antioxidants as a way to help slow aging, fend off heart disease, improve flagging vision, and curb cancer. And laboratory studies and many large-scale observational trials (the type that query people about their eating habits and supplement use and then track their disease patterns) have noted benefits from diets rich in certain antioxidants and, in some cases, from antioxidant supplements.
But results from randomized controlled trials (in which people are assigned to take specific nutrients or a placebo) have failed to back up many of these claims. One study that pooled results from 68 randomized trials with over 230,000 participants found that people who were given vitamin E, beta carotene, and vitamin A had a higher risk of death than those who took a placebo. There appeared to be no effect from vitamin C pills and a small reduction in mortality from selenium, but further research on these nutrients is needed.
These findings suggest little overall benefit of the antioxidants in pill form. On the other hand, many studies show that people who consume higher levels of these antioxidants in food have a lower risk of many diseases.
The bottom line? Eating a healthy diet is the best way to get your antioxidants.
Adapted with permission from The Truth About Vitamins and Minerals: Choosing the Nutrients You Need to Stay Healthy, a special health report published by Harvard Health Publications. |
Course: “Rise and Fall of the Slave South,” University of Virginia
Temperance reforms in the nineteenth century were not widely known for their success in the South. In fact, Delaware was the only slaveholding state to enact prohibition laws in the 1850s. However, temperance reform victories can be seen on a smaller, yet equally effective, scale throughout the southern United States, especially among young men. The Southern temperance movement was driven by fraternal organizations of men, and eventually women, holding each other to the standard of temperance and encouraging themselves and others to lead a temperate lifestyle.
The Sons of Temperance was the most widely known fraternal temperance organization. The Order was founded by temperance advocates in New York City in 1842 but quickly became a nationally recognized brotherhood. Its three goals were clearly laid out in the constitution of the organization, which "proposed then, as it does now, three distinct objects-To shield its members from the evils of intemperance; afford mutual assistance in case of sickness, and elevate their character as men."
Fraternal temperance orders became exceedingly popular in the south as the nineteenth century progressed. Members from slave states comprised 44 percent of the national membership of the Sons of Temperance in 1850. This is especially important because the Sons did not admit African Americans, and the South only represented 32 percent of the nation's total white population. The Independent Order of Good Templars was another prominent fraternal organization dedicated to temperance, and also boasted a large southern participation rate.
These orders tended to function like secret societies and rarely published their membership lists. This was largely done in order to limit bad press. It would have been extremely embarrassing for a brother to be seen violating the codes of the Order, and secrecy eliminated the possibility of bad exposure. Another reason for secrecy became important when sectional differences began to get out of hand. Since the society had Northern roots and linkages, many Southern chapters kept their activity under wraps so as not to anger the political powers at hand. |
Thanks to a genetically engineered enzyme, a bug that eats plastic bottles developed a much bigger appetite for our rubbish. It is a hopeful sign
Evolution never sleeps. Before 1970 there can have been no significant bacteria that ate plastic, because there was not enough of that plastic in the world to sustain a population. But in 2016 a group of Japanese scientists sifting samples from a bottle-recycling plant discovered a new species, Ideonella sakaiensis, that was able to attack and eat PET, the plastic used in most bottles, almost all of which ends up in landfill or dumped at sea, where it may last for centuries. Everything that rots in nature does so because it is being eaten by bacteria. Most plastics – among them PET – were considered totally impervious to bacterial attack, making them almost indestructible unless burned or crushed. So a bacterium that can consume even one kind of plastic could become a desperately needed ally in the struggle to stop the oceans being choked with plastic waste.
What has captured the imagination of the world is that a subsequent group of scientists, who were trying to understand on a molecular level how I. sakaiensis breaks down and digests plastic bottles, found the enzymes that it uses and made a slightly different version of one to see what would happen. The new enzyme is much more efficient than the version found in nature, and works on more kinds of plastic. This kind of molecular tweaking of substances already found in nature is at the root of another recent scientific breakthrough, the Crispr-Cas9 technique for genetic engineering. It offers some hope that we can use technology to moderate and even to some extent to reverse the impacts that earlier technologies, such as those that make it easy to manufacture billions of tons of plastic, have had on the world around us.
Multiplication table worksheets
Would you like to practice your tables at your leisure? Below you will find tables practice worksheets. Click on one of the worksheets to view and print the table practice worksheets, then of course you can choose another worksheet. You can choose between three different sorts of exercises per worksheet. In the first exercise you have to draw a line from the sum to the correct answer. In the second exercise you have to enter the missing number to complete the sum correctly. In the third exercise you have to answer the sums which are shown in random order. All in all, three fun ways of practicing the tables in your own time, giving you a good foundation for ultimately mastering all of the tables. Choose a table to view the worksheet.
Practice your tables worksheets
A great addition to practicing your tables online is learning them with the assistance of worksheets. Here you can find the worksheets for the 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 and 12 times tables. You can also use the worksheet generator to create your own multiplication facts worksheets, which you can then print or forward. The tables worksheets are ideal for the 3rd grade.
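If you are curious how such a worksheet generator could work, here is a minimal Python sketch of the third exercise type (answering the sums shown in random order). It is only an illustration, not the generator used on this site, and the table number, the 1–12 question range, and the function names are assumptions chosen for the example.

```python
import random

def random_order_exercise(table, count=12, seed=None):
    """Return multiplication questions for one table, in random order."""
    rng = random.Random(seed)
    multiplicands = list(range(1, count + 1))   # 1 x table ... 12 x table
    rng.shuffle(multiplicands)                  # third exercise type: random order
    return [(m, table, m * table) for m in multiplicands]

def print_worksheet(table):
    """Print the questions with blanks, then an answer key."""
    questions = random_order_exercise(table)
    print(f"Times table practice: the {table} times table")
    for m, t, _ in questions:
        print(f"{m} x {t} = ____")
    print("\nAnswer key:")
    for m, t, answer in questions:
        print(f"{m} x {t} = {answer}")

if __name__ == "__main__":
    print_worksheet(7)   # for example, practice the 7 times table
```

Running it for the 7 times table prints twelve shuffled questions followed by an answer key, which you could print out in the same way as the ready-made worksheets.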
Digital rights are the rights of people regarding what they can do with their computer. There are many cases that need special thought related to digital rights, such as the right to have data accessible to a few people (privacy) and freedom of expression. Very often the word is used when referring to computer networks such as the Internet.
In 2003, the United Nations held a special talk, called the World Summit on the Information Society. One of the tasks of this talk was to make the digital divide smaller. The digital divide separates countries into "rich" ones and "poor" ones. For "poor" countries, accessing the internet is much harder. After long talks, all people in the talk signed a closing statement.
The statement pointed out that human rights were universal, and that they could not be divided. They were also related to the basic freedoms that the Vienna Declaration had defined, and could not be separated from them. In addition, democracy as well as sustainable development should be respected and the rule of law should be made stronger.
This declaration also makes specific reference to the importance of the right to freedom of expression in the "Information Society". It says:
"We reaffirm, as an essential foundation of the Information Society, and as outlined in Article 19 of the Universal Declaration of Human Rights, that everyone has the right to freedom of opinion and expression; that this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers. Communication is a fundamental social process, a basic human need and the foundation of all social organization. It is central to the Information Society. Everyone, everywhere should have the opportunity to participate and no one should be excluded from the benefits of the Information Society offers."
Digital rights management is in this area, as an example.
References
- Klang, Mathias; Murray, Andrew. Human Rights in the Digital Age. Routledge. p. 1. http://books.google.co.uk/books?id=USksfqPjwhUC&dq=%22digital+rights%22+human+rights&source=gbs_summary_s&cad=0.
- "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.", see http://www.un.org/en/documents/udhr/index.shtml#a19
Other websites
- The Free Software Foundation encourages freedom in the area of digital rights |
for National Geographic News
Solving an 86-year-old medical mystery, British scientists have determined the structure of the so-called "Spanish flu" virus that jumped from birds to humans in 1918, killing more than 20 million people worldwide.
In two separate studies, researchers from the Medical Research Council in London showed that the virus likely derived from an avian virus and retained some key characteristics of its avian precursor that caught the human immune system off-guard.
Although the discovery will probably not have an immediate impact on the current outbreak of chicken flu in Asia, the work will help scientists better understand flu viruses and their transmission from birds to humans.
The new evidence suggests that "receptor binding," the initial event in virus infection in which the virus attaches to receptors on human cells, is perhaps more important than the virulence of the virus in determining risk of transmission.
"This paper is important because of the knowledge it brings about how these viruses, which originate in birds, can jump to humans," Sir John Skehel, the Medical Research Council's lead scientist on the project, said in a prepared statement. "This allows us to track and monitor the changes in the virus for public health purposes, even though it does not allow us to predict or prevent future forms of flu."
The research is published in tomorrow's issue of the journal Science.
High Mortality Rates
The influenza pandemic in 1918 was named "Spanish flu" because it was first widely reported in Spanish newspapers. News reports of the outbreak were suppressed by wartime censorship in many countries fighting in World War I.
The pandemic was exceptional in both breadth and depth. Unlike most subsequent influenza strains, which first appeared in Asia, the initial wave of the Spanish flu seemingly arose in the United States. In September through November of 1918, it killed more than 10,000 people per week in some U.S. cities.
The pandemic swept not only North America and Europe, but also spread as far as the Alaskan wilderness and remote Pacific islands. The disease was exceptionally severe, with mortality rates of 2.5 percent among those infected, compared to less than 0.1 percent in other influenza epidemics.
Studying what made the Spanish flu so lethal is important because influenza viruses continually evolve. An understanding of the genetic makeup of the most virulent influenza strain ever seen could help health officials manage possible pandemics in the future.
For their new study, the British scientists sequenced sections of the 1918 influenza virus's genome, using samples from flu victims preserved in the Alaskan permafrost.
1. The problem statement, all variables and given/known data

A particle of mass m is dropped into a hole drilled straight through the center of the Earth. Neglecting rotational effects and friction, show that the particle's motion is simple harmonic if it is assumed that the Earth has a uniform mass density. Obtain an expression for the period of oscillation.

2. Relevant equations

The answer is that K = (4/3)πGmρ and so T = sqrt(3π/(Gρ)), right?

K = force constant
G = gravitational constant
ρ = density of the Earth
m = mass of the dropped object
T = period

3. The attempt at a solution

I want to know why the constant K is not actually GmM/r³ and the force is not GmMx/r³, where x is the displacement of the object from equilibrium at any point within the Earth (I defined equilibrium to be at the center of the Earth). I think my issue is not understanding what it means to solve a problem using uniform density. I thought that uniform density meant that the mass per unit volume was the same everywhere in the Earth, so the constant M is okay to leave in the equation since its density is not changing. I want to understand this because there is a problem in my homework set which asks to find the electrostatic potential energy of a sphere of uniform density with charge Q and radius R. My mind tells me this is simply Q²/4πεR... but that can't be correct because it's too easy and that is the same as a point charge. What am I not getting?
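For a quick sanity check of the stated result T = sqrt(3π/(Gρ)), the short Python sketch below plugs in standard values of G and an assumed mean Earth density of about 5,515 kg/m³; neither constant is given in the problem, and they are used here only for illustration.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
rho = 5515.0         # assumed mean density of the Earth, kg/m^3

# Period of simple harmonic motion through a uniform-density Earth:
# T = sqrt(3*pi / (G*rho))
T = math.sqrt(3 * math.pi / (G * rho))

print(f"T = {T:.0f} s  (about {T / 60:.1f} minutes)")
# With these values, T comes out to roughly 5.1e3 s, i.e. about 84 minutes.
```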
Paleontologists in Japan have unearthed the jaw of a primitive mammal from the early Cretaceous period.
The pint-size creature, named Sasayamamylos kawaii for the geologic formation in Japan where it was found, is about 112 million years old and belongs to an ancient clade known as Eutherian mammals, which gave rise to all placental mammals. (A clade is a group of animals that share uniquely evolved features and therefore a common ancestry.)
The jaw sports pointy, sharp teeth and molars in a proportion similar to that found in modern mammals, said paleontologist Brian Davis of Missouri Southern State University, who was not involved in the study.
"This little critter, Sasayamamylos, is the oldest Eutherian mammal to demonstrate what paleontologists consider the modern dental formula in placental mammals," Davis told LiveScience. [In Photos: Mammals Through Time]
The new mammal fossil, described today (March 26) in the journal Proceedings of the Royal Society B, suggests that these primitive creatures were already evolving quickly, with diverse traits emerging, at this point in the Cretaceous period, he added.
Between 145 million and 66 million years ago, most mammals were tiny creatures that scampered underfoot as giant dinosaurs roamed the Earth. Scientists recently proposed that the first mammalian Eve, the mother to all placental mammals, lived about 65 million years ago, when dinosaurs went extinct. The first true mammal likely emerged at least 100 million years before that.
But because the fossil record is spotty, determining exactly when mammals evolved their specific traits has been murkier.
Amateur fossil-hunters were searching through sediments in Hyogo, Japan, in 2007 when they unearthed the skeletal fragments of an ancient mammalian jaw. They turned it over to a local museum, said study co-author Nao Kusuhashi, a paleontologist at Ehime University in Japan.
The jaw contained four sharp, pointy teeth known as pre-molars and three molars with complex ridges. That same pattern in the number of each type of tooth is found in placental mammals to this day, whereas earlier mammals have more of the sharp, pointy teeth.
The teeth probably allowed Sasayamamylos to poke through the hard exoskeletons of beetles or other insects, Davis told LiveScience.
In general, molars probably allowed these primitive mammals to chew their food well, extracting as much energy as possible from it, Davis said.
"Especially these little bitty guys, they're burning energy like crazy," Davis said.
Copyright 2013 LiveScience, a TechMediaNetwork company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed. |
The Many Facets of Grammar
Grammar is difficult to learn. Distinguishing between nouns and adjectives is challenging for most students, so attempting to explain the difference between verbals, participles, gerunds, and infinitives can be horrific.
Students are much more capable, however, than they pretend to be. They think that if they act as if they do not understand, then maybe the material will disappear. They soon discover the material is not going to disappear until they demonstrate comprehension.
Lessons on teaching infinitives, teaching gerunds and teaching participles can last all year.
Gerunds
Gerunds always end with the suffix -ing, function as nouns in the sentence, and answer the question "what." Tell students to find the verb and ask themselves "what."
For example: Surfing is fun.
- First, find the verb: is
- Next, ask yourself--what is?
- Finally, answer your question--surfing. Surfing is the gerund.
Finding gerunds is as easy as 1-2-3.
Participles
Participles always have verb endings: -ing, -ed, -n, -t, etc. More importantly, participles function as adjectives, which sets them apart from gerunds.
Example of a participle:
Jack, walking down the road, was looking at the lake.
The participle is walking, as it ends in ing and is describing Jack.
Infinitives
Infinitives are verb phrases that begin with the word "to" and will be followed by an action verb.
For example: To understand the material, one must study.
The Infinitive verb phrase is "to understand the material" with the infinitive being "to understand."
Assessing Verbals
The best way to assess knowledge of verbals is to create a multiple-choice worksheet with examples of a gerund, participle, or infinitive. Students must select what type of verbal each one is. Teach this once; go over examples, then give students a multiple-choice worksheet.
If grades are low, review the assignment, reteach verbals, and assign another worksheet on verbals.
Before long, students will master verbals. |
Haploid vs Diploid
What’s the difference between haploid vs diploid, and how do I remember which is which?
Haploid is when a cell has one complete set of chromosomes.
Diploid is when a cell has two complete sets of chromosomes.
One way to remember the difference is to associate the beginnings of the words with a corresponding attribute.
So, haploid refers to half the usual number of chromosome sets (one set instead of two), and the di in diploid means two sets of chromosomes.
Another way to remember the difference is to look at the Greek origins of the words.
The word haploid stems from the Greek haplos which means single. The word diploid stems from the Greek word diploos which means double.
That’s the main difference, but there is much more to haploid vs diploid than the number of chromosome sets so let’s delve a bit deeper to understand them better.
It can help to have a little background information to better understand the differences between haploid and diploid. So, we’ll look at cells, chromosomes, homologous chromosomes, and chromosome denotation. And we’ll use them as a lens to help understand the differences between haploid and diploid.
All living organisms (except for viruses) are made up of cells. They are the basis for all life and are considered “the building blocks of life.”
Cells are also the smallest self-replicating unit of life. There are two major types of cells: prokaryotes and eukaryotes.
Prokaryotes do not have a nucleus or any membrane-bound organelles. Bacteria would be an example of a prokaryote. Most prokaryotes are haploid (half the number of chromosomes).
Eukaryotes do have a nucleus and membrane-bound organelles. Humans, animals, and plants are examples of eukaryotes. Most eukaryotes are diploid (two sets of chromosomes).
Cells can also be classified according to their function. There are two types of functional cells: gametes and somatic cells.
Gametes are sexual reproduction cells. An example would be sperm (for males) and ova/eggs (for females). Gamete cells are haploid.
Somatic cells are any other cells in a living organism that are not used for sexual reproduction. An example would be the cells that make up your skin, bones, blood, muscle, and internal organs. Basically, everything but the reproductive cells. Somatic cells are diploid.
Most haploid organisms have haploid cells, and most diploid organisms have a majority of diploid cells with only their sexual cells being haploid.
However, there are some exceptions to this such as the gender differences in certain insects. Female wasps, bees, and ants are diploid, but male wasps, bees, and ants are haploid.
Chromosomes are DNA molecules that store an organism’s genetic makeup. Different organisms and cell types have differing numbers of chromosomes.
The number of sets of chromosomes is called ploidy. This can also be referred to as the ploidy level. A cell can be monoploid (one set), diploid (two sets), triploid (three sets), tetraploid (four sets), and so forth.
A key point to remember is not to confuse monoploid with haploid.
Monoploid refers to the total number of chromosomes in a single set of chromosomes, while haploid refers to the total number of chromosomes in a cell.
This means a cell is monoploid if it has one set of unique chromosomes, and a cell is haploid if it only has half the usual set of chromosomes.
An example of this difference would be wheat. Wheat gametes are haploid because they contain half the genetic material of the somatic cells. However, they still contain three sets of chromosomes, so they are triploid cells, not monoploid.
Homologous chromosomes are a pair of chromosomes that have the same genes or characteristics, but they are not identical. So, the chromosomes have the same length and code for the same characteristics, but they have different DNA sequences.
These two chromosomes come from the parents (one chromosome from each parent). So, for example, humans have 23 homologous chromosomes (that contain the genetic material from both parents) and a pair of sex chromosomes that determine our gender.
The genes on the homologous chromosomes are in the same order, but they might not have the same alleles. Alleles are variations of genes that are obtained from the parent organism.
An example of this would be eye color. The gene for eye color is stored on homologous chromosomes. The chromosomes are considered homologous because you received one from each parent, and both chromosomes contain your eye color gene.
However, you could have received a brown allele from your mother and a blue allele from your father. So, it’s the same gene (the eye color gene), but the alleles are different (brown and blue).
The number of chromosomes in a cell is denoted using the letter n in scientific writing; 2n is referred to as the diploid number. The letter n stands for the number of chromosomes in one set, and the number before it refers to how many sets of chromosomes there are.
So, haploid would be denoted by n since it has half the number of chromosome sets (one set). Diploid would be 2n since it has two sets.
The diploid number can also be represented by an equation that shows the total number of chromosomes as well. An example would be the equation for the diploid number for humans which is 2n=46. We have two (2) sets of 23 chromosomes (n) which totals to (=) 46 chromosomes.
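As a tiny illustration of this notation, the sketch below simply multiplies the number of sets by the chromosomes per set; the function name and the human values (2 sets of 23) are used only as an example.

```python
def total_chromosomes(sets, per_set):
    """Total chromosomes = (number of sets) x (chromosomes per set, n)."""
    return sets * per_set

# Human somatic (diploid) cells: 2n = 2 x 23 = 46
print(total_chromosomes(sets=2, per_set=23))   # 46
# Human gametes (haploid): n = 1 x 23 = 23
print(total_chromosomes(sets=1, per_set=23))   # 23
```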
Haploid vs Diploid Cellular Reproduction
Cellular reproduction is another defining difference between haploid vs diploid cells.
Haploid cells are typically a result of meiosis, and diploid cells are typically a result of mitosis.
Meiosis is where a diploid cell divides twice to produce four haploid cells. During fertilization, two of these haploid cells come together and the resulting cell becomes diploid again.
Mitosis is when a single cell divides into two genetically identical cells.
Keep in mind that this is only referring to cellular reproduction.
Haploid vs Diploid Sexual Reproduction
When it comes to organisms instead of cells, reproduction is a different story.
Haploid organisms (such as bacteria) typically reproduce asexually (by themselves) normally by the process of binary fission. Binary fission is where a single organism breaks into two, and the two resulting organisms have the same genetic material.
Diploid organisms (such as humans) typically reproduce sexually (two organisms come together to produce offspring that have genetic material from both parents), normally by the process of meiosis.
Final Thoughts on Haploid vs Diploid
It can initially be difficult to remember the differences between haploid and diploid, but keep in mind the beginnings of the words. Ha refers to a cell only having half the usual amounts of chromosomes (one instead of two), and di refers to a cell having two sets of chromosomes.
That is the main difference between them: haploid cells have one complete set of chromosomes, and diploid cells have two complete sets of chromosomes.
Most haploid organisms are prokaryotes (such as bacteria), and most diploid organism are eukaryotes (such as humans).
Haploid cells are typically the gametes (sexual cells) and are the result of meiosis.
Diploid cells are typically the somatic cells (non-sexual) and are the result of mitosis.
Haploid organisms typically reproduce asexually by the process of binary fission. Diploid organisms typically reproduce sexually by the process of meiosis.
Being massive, Jupiter stands as the largest planet in the solar system. It is also the fifth planet from the sun. The planet Jupiter is classified as a Gas Giant together with other similar planets including Neptune, Uranus, and Saturn. These four planets are collectively called the Jovian planets. The planet has been known since prehistoric times, so no single discoverer is known. Jupiter is predominantly composed of hydrogen with some portion of helium. It does not have a real solid surface. Robotic spacecraft have explored Jupiter on a number of occasions. It is believed to be the fastest-spinning planet in our solar system, in that it takes only 10 hours to complete a full rotation on its axis. Be ready for the most amazing Jupiter facts for kids, including Jupiter's mass, atmosphere, temperature, moons, gravity, and characteristics.
Jupiter Facts For Kids
The gigantic Jupiter is believed to give out more energy than it receives from the sun. You might be wondering where the extra energy comes from! Actually, Jupiter generates its own heat from within the planet as a result of slow gravitational contraction, which is why it is sometimes compared to a star. According to scientists, earthlings should be thankful to Jupiter because its strong gravity captures many incoming asteroids and comets that would otherwise have crashed into Earth. We can say that there would possibly be no life on Earth without the planet Jupiter.
- The Jupiter’s clouds are no more than 50 km in thickness.
- One of the most fascinating features of Jupiter is its Great Red Spot, which seems to have existed for almost 350 years. This spot appears to be shrinking.
- Jupiter has planetary rings which are fairly dimmer as compared to that of Saturn’s.
- The planet has the strongest magnetic field of all the planets, which is why compasses would work on Jupiter.
- Jupiter has a total of 63 known moons; many of them are less than 10 km in diameter.
- Spacecraft from Earth have visited Jupiter 7 times.
- After the Moon and Venus, Jupiter is usually the brightest object in the night sky.
- Jupiter can hold more than 1,300 Earths inside it (see the quick check after this list).
- Voyager 1 observed at least eight active volcanoes on Io, with plumes spreading up to 250 km above the surface.
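As a rough check on the "more than 1,300 Earths" fact above, the following Python sketch compares the volumes of two spheres. Jupiter's volumetric mean radius of about 69,911 km is taken from the features list below; Earth's mean radius of about 6,371 km is an outside value assumed for the comparison, not a figure from this article.

```python
import math

def sphere_volume(radius_km):
    """Volume of a sphere in cubic kilometres."""
    return 4.0 / 3.0 * math.pi * radius_km ** 3

jupiter_radius_km = 69_911   # volumetric mean radius listed below
earth_radius_km = 6_371      # assumed mean radius of Earth (not from the article)

ratio = sphere_volume(jupiter_radius_km) / sphere_volume(earth_radius_km)
print(f"Jupiter holds roughly {ratio:.0f} Earths by volume")  # about 1,300
```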
Features of Jupiter
- The mass of Jupiter is 1,898.6 × 10²⁴ kg
- Jupiter has a volume of 143,128 × 10¹⁰ km³
- The equatorial radius of Jupiter is calculated as 71,492 km
- The polar radius of this planet is 66,854 km
- The volumetric mean radius is 69,911 km
- The ellipticity is calculated as 0.06487
- Jupiter has a mean density of 1,326 kg/m³
- It has a gravity of 24.79 m/s² (see the consistency check after this list)
- The acceleration of Jupiter is 23.12 m/s²
- It has an escape velocity of 59.5 km/s
- The visual magnitude of Jupiter is minus 9.40 V
- There are a total of 67 satellites
- The black-body temperature of this planet is 110.0 K
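The mass and equatorial radius listed above can be used to check the quoted gravity and escape velocity. This sketch applies the standard formulas g = GM/R² and v = sqrt(2GM/R); the gravitational constant G is an outside value, not taken from this article.

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2 (outside value)
M = 1898.6e24     # mass of Jupiter from the list above, kg
R = 71_492e3      # equatorial radius from the list above, m

g = G * M / R**2                   # surface gravity
v_esc = math.sqrt(2 * G * M / R)   # escape velocity

print(f"gravity ~ {g:.2f} m/s^2")                   # about 24.8 m/s^2, matching the list
print(f"escape velocity ~ {v_esc / 1000:.1f} km/s") # about 59.5 km/s, matching the list
```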
Orbital Parameters of Jupiter
- The semimajor axis of Jupiter is 778.57 × 10⁶ km
- There are 4,332.589 days in a sidereal orbit period
- There are 4,330.595 days in a sidereal tropical orbit period
- There are 398.88 days in a synodic period
- The maximum orbital velocity is calculated as 13.72 km/s
- The minimum orbital velocity is calculated as 12.44 km/s
- It has a mean orbital velocity of 13.07 km/s
- The orbit eccentricity is 0.0489
- There are 9.9250 hours in a sidereal rotation
- There are 9.9259 hours in a single day
Observational Parameters of Jupiter
- The discoverer of Jupiter is not known; the planet has been known since prehistoric times
- It has a minimum distance of 588.5 × 10⁶ km from the earth
- It has a maximum distance of 968.1 × 10⁶ km from the earth
- The maximum apparent diameter of Jupiter from the earth is 50.1 seconds of arc
- The minimum apparent diameter of Jupiter from the earth is 29.8 seconds of arc
Jovian Atmosphere of Jupiter
- The surface pressure of Jupiter is 1,000 bars
- Wind speeds are up to 150 m/s
- It has a scale height of 27 km
- The mean molecular weight of this planet is 2.22 g/mole |
ITHACA, N.Y. – Researchers at Cornell University published a study last week in which they claim that clay helped life spontaneously arise from non-life millions of years ago.
On Thursday, scientists affiliated with Cornell University released a statement detailing new research findings regarding the initial development of life—also known as abiogenesis. In the statement, the researchers suggest clay was a key ingredient when—according to the university—life spontaneously emerged from non-life in earth’s early years.
“We propose that in early geological history clay hydrogel provided a confinement function for biomolecules and biochemical reactions,” said Dan Luo, a professor at Cornell.
The statement from Cornell further suggests that, “over billions of years,” clay could have “confined and protected” certain chemical processes, much like cell membranes do today. Then, the protected chemicals “could have carried out the complex reactions that formed proteins, DNA and eventually all the machinery that makes a living cell work.”
A 14-page scientific report explains the Cornell researchers’ findings in more technical terms:
“Here we mimic the confinement function of cells by creating a hydrogel made from geological clay minerals, which provides an efficient confinement environment for biomolecules,” the report explains. “[O]ur results support the importance of localized concentration and protection of biomolecules in early life evolution, and also implicate a clay hydrogel environment for biochemical reactions during early life evolution.”
According to the report, clay may have protected the very first life forms as they formed and developed.
For evolutionists, life’s origin is a difficult topic, since—despite countless attempts—abiogenesis has never been replicated; nor has it been observed in nature. Thus, many scientists speculate that life somehow arose in a primordial ocean, perhaps due to input from lightning or a volcanic vent.
However, other scientists reject the theory of naturalistic abiogenesis. Dr. Kevin Anderson is a microbiologist with a Ph.D. in microbiology and many years of research experience. He told Christian News Network that last week’s Cornell report does not realistically portray any type of leap from non-life to life. Rather, it vaguely suggests that abiogenesis is possible and proven, without citing tangible evidence.
“This type of ‘hand wave’ is common—act like everything is all figured out—and is frequently done so as to avoid having to actually acknowledge that abiogenesis has no evidence,” Anderson stated. “Thus, with a ‘hand wave’ [evolutionists] can pretend there are just a few minor points to address, and rationalize that there is no need to answer creationists’ challenges.”
“However,” Anderson continued, “creationists have long scoffed at such a hand wave, pointing out that even under almost pristine conditions, evolutionists’ experiments rarely achieve anything but a D & L mixture of a few amino acids or a few bits of other organic molecules (and often ignore many other very toxic molecules, such as formate, that are also formed during the process).”
In terms of the Cornell scientists’ findings, Anderson says they concluded that clay (or some type of gel) would be necessary to protect early biomolecules, but never explain how those living molecules formed in the first place. Ultimately, Anderson told Christian News Network, the idea that life spontaneously appeared without a Creator takes an enormous leap of faith.
“The immense speculation, and lack of any significant evidence or mechanisms for abiogenesis strongly support the creationists’ claims that abiogenesis is really nothing more than ‘wishful thinking’ on the part of the materialists,” he concluded. “In fact, the more the problem is studied, the more difficulties arise. Adding to the problem for materialists, the more we understand about cells and living systems the greater the gulf becomes between life and non-life. Thus, the final conclusion is that there is not a shred of evidence that life can form spontaneously under any conditions.” |
8.7 Curved Mirrors
WARNING: OBJECTS IN MIRROR MAY BE CLOSER THAN THEY APPEAR.
If you've ever sat in the driver's seat of a car, or indeed the one next to it, you are probably familiar with this sign, which is often printed on the rear-view mirrors. And if the car ride was long enough, perhaps you had time to wonder why this was the case. The driver, and any bored passengers, are subject to the powers of a curved mirror.
Curved mirrors are used by many of us everyday, from shaving and makeup mirrors, to the curved mirrors drivers use for seeing around a blind corner. Though they might not appear to be, they are in fact very similar to lenses. However, for the mirrors to follow these rules, they have to be evenly shaped, and therefore are usually made of the surface of a sphere shape, though they can also be parabolic. The mirror is given a name depending on whether it is the inside surface or the outside surface of the sphere that reflects.
Convex mirrors, opposite to lenses, are known as diverging mirrors, while concave mirrors cause parallel light rays to converge. The name of the mirror indicates the side of the sphere that is reflective.
In a concave mirror, the point at which parallel light rays converge is once again called the focal point, sometimes known as the principal focus. From the law of reflection, where the angle of incidence must equal the angle of reflection, and the above idea, we know that:
1. A ray that is parallel to the principal axis will be reflected so that it passes through the focal point, and conversely,
2. A ray that passes through the focal point will be reflected so that it is parallel to the principal axis
The focal point and the center of the would-be sphere of which the mirror was a part lie on a line that is also called the principal axis, as in lens diagrams.
In this diagram, the principal axis is pink, and the focal point is green. Notice that the focal point is not very close to the center of the sphere (shown here in two dimensions).
Ray diagrams can once again be drawn, to show how curved mirrors produce images. To construct the image of a point, only two rays are needed. If the focal point is known, we can construct two rays and their reflections without having to measure any angles. These are a ray that is parallel to the principal axis, and a ray that goes through the focal point.
In the figure above, a concave mirror is used to make a real, inverted image of the light purple arrow (symbolizing the object). Notice that in the case of mirrors, real images are located on the same side of the mirror as the object. A concave mirror can be used to make an enlarged or diminished real image of an object just as a convex lens can. This is because, after all, the rays in a mirror diagram can go both ways, and for real images, the object and the image can be interchanged.
In this second image, a concave mirror is used to make an upright, magnified, virtual image. Once again it acts in the same way as a convex lens. In this case, however, it was necessary to measure one angle, because drawing a ray through the focal point would not have been of much use. Also, because the reflected rays diverge, they are followed back (in red) to their apparent origin, the image of the tip of the purple arrow. This is another indication that the image produced is virtual. This concave mirror setup is often used in telescopes, and is the one you would use for shaving or doing your makeup.
The final ray diagram for curved mirrors shows a convex mirror creating an upright, diminished, virtual image, just as a concave lens can do. In fact, you might notice that this diagram is similar to the previous one, with only the image and the object switched. This diagram however, does not require that the focal point be known, though it does require two angles to be measured.
Because they are so similar to lenses, it is easy to expect that spherical mirrors also follow the lens equation. In fact, they do. However, it is important to keep in mind the differences between what is considered by convention to be real or virtual. A simple guideline for this is that if the item (e.g. focal point, image distance) in question is on the same side of the mirror as the object, it is considered to be real. Otherwise, it is virtual, and its value in the lens equation is said to be negative.
To get back to the original question, why are objects in the rear-view mirror closer than they appear? Because they appear to be smaller than they actually are. The kind of mirror used in the rear-view mirror must therefore be a convex mirror, because it creates smaller, yet still upright images - after all, the cars you see are not upside down. This type of mirror is useful in a car because by making everything smaller, it allows the driver to see a greater range of things behind him.
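To make the sign conventions concrete, here is a small Python sketch that applies the mirror equation, 1/f = 1/do + 1/di, to a convex rear-view mirror. The focal length of 0.5 m and the object distance of 10 m are made-up numbers chosen only for illustration, and the magnification formula m = -di/do is the standard one rather than something derived in the text. With the convention described above, the convex mirror's focal length and the virtual image distance come out negative.

```python
def mirror_image(focal_length_m, object_distance_m):
    """Solve 1/f = 1/do + 1/di for the image distance, then the magnification."""
    image_distance_m = 1.0 / (1.0 / focal_length_m - 1.0 / object_distance_m)
    magnification = -image_distance_m / object_distance_m
    return image_distance_m, magnification

# Convex (diverging) mirror: negative focal length by convention.
f = -0.5      # assumed focal length in metres
d_o = 10.0    # assumed distance of the car behind, in metres

d_i, m = mirror_image(f, d_o)
print(f"image distance = {d_i:.3f} m (negative, so a virtual image behind the mirror)")
print(f"magnification  = {m:.3f} (upright and much smaller than the object)")
```

With these assumed numbers the image forms about half a metre behind the mirror and is only about one twentieth the size of the car, which is exactly why the car looks farther away than it really is.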
…We begin with a brief account of the history and origins of the writ. Our account proceeds from two propositions. First, protection for the privilege of habeas corpus was one of the few safeguards of liberty specified in a Constitution that, at the outset, had no Bill of Rights. In the system conceived by the Framers the writ had a centrality that must inform proper interpretation of the Suspension Clause. Second, to the extent there were settled precedents or legal commentaries in 1789 regarding the extraterritorial scope of the writ or its application to enemy aliens, those authorities can be instructive for the present cases.
The Framers viewed freedom from unlawful restraint as a fundamental precept of liberty, and they understood the writ of habeas corpus as a vital instrument to secure that freedom. Experience taught, however, that the common-law writ all too often had been insufficient to guard against the abuse of monarchial power. That history counseled the necessity for specific language in the Constitution to secure the writ and ensure its place in our legal system.
Magna Carta decreed that no man would be imprisoned contrary to the law of the land. (“No free man shall be taken or imprisoned or dispossessed, or outlawed, or banished, or in any way destroyed, nor will we go upon him, nor send upon him, except by the legal judgment of his peers or by the law of the land.”) Important as the principle was, the Barons at Runnymede prescribed no specific legal process to enforce it. Holdsworth tells us, however, that gradually the writ of habeas corpus became the means by which the promise of Magna Carta was fulfilled.
The development was painstaking, even by the centuries-long measures of English constitutional history. The writ was known and used in some form at least as early as the reign of Edward I. Yet at the outset it was used to protect not the rights of citizens but those of the King and his courts. The early courts were considered agents of the Crown, designed to assist the King in the exercise of his power. Thus the writ, while it would become part of the foundation of liberty for the King’s subjects, was in its earliest use a mechanism for securing compliance with the King’s laws. Over time it became clear that by issuing the writ of habeas corpus common-law courts sought to enforce the King’s prerogative to inquire into the authority of a jailer to hold a prisoner.
Even so, from an early date it was understood that the King, too, was subject to the law. As the writers said of Magna Carta, “it means this, that the king is and shall be below the law.” (“The king must not be under man but under God and under the law, because law makes the king.”) And, by the 1600’s, the writ was deemed less an instrument of the King’s power and more a restraint upon it.
Still, the writ proved to be an imperfect check. Even when the importance of the writ was well understood in England, habeas relief often was denied by the courts or suspended by Parliament. Denial or suspension occurred in times of political unrest, to the anguish of the imprisoned and the outrage of those in sympathy with them.
A notable example from this period was Darnel’s Case. The events giving rise to the case began when, in a display of the Stuart penchant for authoritarian excess, Charles I demanded that Darnel and at least four others lend him money. Upon their refusal, they were imprisoned. The prisoners sought a writ of habeas corpus; and the King filed a return in the form of a warrant signed by the Attorney General. The court held this was a sufficient answer and justified the subjects’ continued imprisonment.
There was an immediate outcry of protest. The House of Commons promptly passed the Petition of Right, which condemned executive “imprison[ment] without any cause” shown, and declared that “no freeman in any such manner as is before mencioned [shall] be imprisoned or deteined.” Yet a full legislative response was long delayed. The King soon began to abuse his authority again, and Parliament was dissolved. When Parliament reconvened in 1640, it sought to secure access to the writ by statute. The Act of 1640 expressly authorized use of the writ to test the legality of commitment by command or warrant of the King or the Privy Council. Civil strife and the Interregnum soon followed, and not until 1679 did Parliament try once more to secure the writ, this time through the Habeas Corpus Act of 1679. The Act, which later would be described by Blackstone as the “stable bulwark of our liberties,” established procedures for issuing the writ; and it was the model upon which the habeas statutes of the 13 American Colonies were based.
This history was known to the Framers. It no doubt confirmed their view that pendular swings to and away from individual liberty were endemic to undivided, uncontrolled power. The Framers’ inherent distrust of governmental power was the driving force behind the constitutional plan that allocated powers among three independent branches. This design serves not only to make Government accountable but also to secure individual liberty. Because the Constitution’s separation-of-powers structure, like the substantive guarantees of the Fifth and Fourteenth Amendments, protects persons as well as citizens, foreign nationals who have the privilege of litigating in our courts can seek to enforce separation-of-powers principles.
That the Framers considered the writ a vital instrument for the protection of individual liberty is evident from the care taken to specify the limited grounds for its suspension: “The Privilege of the Writ of Habeas Corpus shall not be suspended, unless when in Cases of Rebellion or Invasion the public Safety may require it.” The word “privilege” was used, perhaps, to avoid mentioning some rights to the exclusion of others. (Indeed, the only mention of the term “right” in the Constitution, as ratified, is in its clause giving Congress the power to protect the rights of authors and inventors.)
Surviving accounts of the ratification debates provide additional evidence that the Framers deemed the writ to be an essential mechanism in the separation-of-powers scheme. In a critical exchange with Patrick Henry at the Virginia ratifying convention Edmund Randolph referred to the Suspension Clause as an “exception” to the “power given to Congress to regulate courts.” A resolution passed by the New York ratifying convention made clear its understanding that the Clause not only protects against arbitrary suspensions of the writ but also guarantees an affirmative right to judicial inquiry into the causes of detention.
Alexander Hamilton likewise explained that by providing the detainee a judicial forum to challenge detention, the writ preserves limited government. As he explained in The Federalist No. 84:
[T]he practice of arbitrary imprisonments, have been, in all ages, the favorite and most formidable instruments of tyranny. The observations of the judicious Blackstone … are well worthy of recital: “To bereave a man of life … or by violence to confiscate his estate, without accusation or trial, would be so gross and notorious an act of despotism as must at once convey the alarm of tyranny throughout the whole nation; but confinement of the person, by secretly hurrying him to jail, where his sufferings are unknown or forgotten, is a less public, a less striking, and therefore a more dangerous engine of arbitrary government.” And as a remedy for this fatal evil he is everywhere peculiarly emphatical in his encomiums on the habeas corpus act, which in one place he calls “the bulwark of the British Constitution”….
This is an excerpt from the majority opinion in Boumediene v. Bush, which was decided this past June. Citations to books and articles have been removed to facilitate reading. For the complete opinion of the Court, including citations, see: http://caselaw.lp.findlaw.com/scripts/getcase.pl?court=US&vol=000&invol=06-1195.
This article originally appeared in the September 2008 edition of Freedom Daily. Subscribe to the print or email version of Freedom Daily. |
1896: Neoslavery Defeated (Reconstruction II)
In STEEPS categorization, this is a Political, Economic and Social counterfactual. There are many examples of regressive Supreme Court rulings, from Citizens United in 2010 right back to the start of our Republic. But some stand out as particularly poor, and we can only imagine how much better things might have eventually been if we’d gone a different direction. Let’s look at one now.
As the Pulitzer prize-winning author Douglas Blackmon explains in Slavery By Another Name: The Re-Enslavement of Black Americans from the Civil War to World War II (2009), an “Age of Neoslavery” emerged in America, in the aftermath of the civil war right up to the start of World War II, and discrimination of course continued after the war as well. See also Sam Pollard’s lovely PBS documentary, Slavery By Another Name (2012), which brings to life this shameful era of US history.
The North had outlawed slavery in all of its states by 1818. It took a Civil War, 1861-1865, the most terrible war we’ve ever had to fight by far, to get the South to understand it was subject to federal laws on this issue. Early after the Civil War, in a decade known as Radical Reconstruction (1867-1877), blacks were granted the right to vote; Hiram Revels became the first black member of Congress in 1870; fourteen black men served in the House of Representatives and more than 600 served in Southern state legislatures; and blacks began attending state constitutional conventions. The end of Reconstruction in 1877 removed the federal protections that had secured civil rights in the South. White southerners organized paramilitary groups to push blacks out of government, and when federal troops finally withdrew from the South in 1877, violence against blacks escalated. It took a while for the bigots to kill off a decade of black advances. In the 1880s, a few blacks were still being elected to local offices in the South, but between 1890 and 1910, ten of the eleven former Confederate states passed new constitutions and a series of Jim Crow laws, including poll taxes, grandfather clauses, and literacy tests, that progressively disenfranchised black voters: laws and actions designed to reverse every advance of the emerging black middle class.
The injustices continued to grow, and a key turning point was an 1896 Supreme Court challenge, Plessy v. Ferguson, to policies of racial segregation in public facilities. This was a decision where the court could have turned the tide: it could have noted the growing unfairness of the treatment of blacks and the always inferior condition of black facilities versus white ones, which journalists lampooned in cartoons, and drawn a line in the sand. Instead, Plessy upheld the fiction that racial segregation could be “separate but equal”. It was approved in a 7 to 1 vote. In hindsight, two things are obvious. First, the Supreme Court was too conservative to make any other decision at that time, and second, the decision was obviously unfair and would clearly legitimize all sorts of future discrimination. We human beings are incapable of creating separate but equal facilities. The very act of separating people, by force, into two communities of unequal resources and upbringing automatically creates and reinforces social inequalities.
School segregation was finally banned in another Supreme Court decision almost sixty years later, Brown v. Board of Education in 1954. Even then, it took another decade of civil rights activism, and more lives lost, before we finally started to see segregation ending in the South. Meanwhile restrictive housing covenants, real estate agent discrimination, educational discrimination, job discrimination, and many other forms of racist discrimination against blacks and other minorities emerged in the Age of Neoslavery. Milder forms of such discrimination persist and continue to be a stain on our society today. As recent news and the Black Lives Matter movement remind us, American blacks continue to be treated far worse by law enforcement and our criminal justice system, and by several indicators, are doing worse, not better, over time. For just one example, the wage gap between black and white workers is worse today than in 1979.
In our counterfactual, it would have taken a presidential order, and the reintroduction of troops into the South, as a reaction to the Plessy decision, to stop the continued rise of institutionalized racism and other forms of Neoslavery. Our congress would have had to initiate a smaller, second phase of Reconstruction, to acknowledge the backsliding that had occurred since the first, and to resolve to do better. This kind of action would have had precedent. An earlier heartless and bigoted Supreme Court decision, Dred Scott v. Sandford in 1857, had argued that all blacks who were descendants of slaves were in fact “property”, and so could not be American citizens, and furthermore that it was “unconstitutional” for the federal government to try to regulate slavery in the states. Dred Scott was destined to be radical no matter the decision, and it was only the second time the Supreme Court had ruled an Act of Congress to be unconstitutional. The ruling was widely reviled at the time, as Northerners had expected the court to finally settle the slavery question against slaveholding, rather than ducking it as they did. Dred Scott is widely regarded as the worst Supreme Court decision ever, and became one of the major catalysts of the Civil War.
The presidential election of 1896 was a complex and hard-fought race, in which the business-oriented William McKinley defeated the populist William Jennings Bryan. Bryan would have been more likely to stand up to Plessy, but in reality, either could have decided at that point to care about black Americans. The recognition of injustice has no party affiliation. In our counterfactual, the newly elected president decided to display moral leadership after Plessy, by reminding Americans of the hateful Dred Scott decision, and by arguing that the court had again let down a large number of the nation's citizens, after so terrible a price had been paid to secure their liberty. He could then have immediately moved troops back into the South as a force to counter lynchers and the Klan, arguing that the violence and discrimination were again becoming a threat to American security, and he could have worked with congress on a more modest and targeted second Reconstruction, with more handouts to cooperating Southern pacifists, and strategies to address the violence and discrimination against black Americans, as well as the disenfranchisement of the poor of all races across America.
When unaccountable committees, like our Supreme Court justices, occasionally don’t do what they should do, the people and the leaders can still step up and fix the problem. This may be a very low-probability counterfactual, but we relay it nonetheless, as an example of a better past, and a much better present, that might have been. Foresight matters! |
The following suggestions for course planning have been adapted from chapter 3 of the book Effective Grading: A Tool for Learning and Assessment by Barbara E. Walvoord and Virginia Johnson Anderson. When you've completed your planning, visit the syllabus tutorial for help in communicating your course plan to your students.
In assignment-centered course planning, the teacher begins by asking, "What should my students learn to do?" rather than "What should I cover in this course?" Coverage does not disappear under the assignment-centered model: basic facts, concepts, and procedures are still important; lectures may be used as a pedagogical device; textbooks may be assigned and read. However, the course planning process begins by focusing on the assignments, tests, and exams that will both teach and test what the teacher most wants students to know. The rest of the course is then structured to help students learn what they need to know if they are to do well on the tests and assignments. Research suggests that the assignment centered course enhances students' higher order reasoning and critical thinking more effectively than courses centered around text, lecture, and coverage (Kurfiss, 1988).
Effective course planning begins when the teacher says to herself, "By the end of the course, I want my students to be able to..."
Concrete verbs such as define, argue, solve, and create are more helpful for course planning than vague verbs such as know or understand or passive verbs such as be exposed to. If you write, "I want students to think like economists," elaborate on what that means. How does an economist think? Which aspects of that thinking do you want to cultivate in students?
Here are some examples:
At the end of Western Civilization 101, I want my students to be able to:
At the end of Math 101, I want my students to be able to:
At the end of this course (in dental hygiene), I want my students to:
List the major assignments and tests and describe their salient characteristics. For example, you might start by listing "An argumentative essay on the French Revolution" or "A mid-term exam with multiple-choice questions and problem-solving." Describe the relationship of each assignment or test to your objectives ("Students will be evaluated on their ability to use historical data as evidence and their ability to raise and counter arguments," "Multiple-choice questions will test their basic knowledge of…," "Problem-solving questions will provide evidence that students can solve problems of types A, B, and C").
Try to ensure that any assignments, tests, and exams that you give and grade will teach and test the knowledge and skills you most want students to learn. Some research indicates that many faculty do not achieve a good fit between the learning they say they want and the tests and assignments they actually give:
"Faculty often state that they are seeking to develop students' abilities to analyze, synthesize, and think critically. However, research indicates that faculty do not follow their good intentions when they develop their courses. A formal review and analysis of course syllabi and exams revealed that college faculty do not in reality focus on these advanced skills but instead are far more concerned with students' abilities to acquire knowledge, comprehend basic concepts or ideas and terms, and apply this basic knowledge [National Center for Education Statistics, 1995, p. 167]."
A combination of careful forethought, knowledge of your students, and analysis of your students' work are the keys here. For example, the mathematician who wanted his students to solve problems and explain the process realized that his existing testing and grading were putting too much emphasis on merely getting the right answers. So he added a requirement to some of his assignments and exams: students had to draw a vertical line down the center of a page, dividing it into two columns. In one column they solved the problem. In the opposite column they wrote sentences for each step describing what they did and why they did it.
Combine all your tests and assignments into a bare-bones course outline so that you can see a broad profile of the course and can ask some important questions. For an example, see Breihan's Western Civilization course skeleton below.
First, notice that there is no term paper. Instead, Breihan concentrated on three argumentative essays. He gave students the essay questions ahead of time so they could prepare, rather than write hastily to answer a question they had not seen before. He fashioned questions that would require them to synthesize what they had studied.
To keep them from merely copying sources, he asked them to draft an essay in class without notes. Then he responded to the drafts, and students revised their essays out of class and resubmitted them. For the first essay, revision was mandatory. For the second, it was optional. For the third (the final exam), it was not possible.
In his assignment-centered course skeleton, Breihan focused on a type of assignment that he believed had the best chance of eliciting from his students the careful arguments he most valued. He kept the paper load manageable. He structured the writing experiences so that students had the time and conditions necessary to produce coherent arguments. (The skeleton does not include minor assignments such as response to reading, map quizzes, and the like.)
We suggest that you begin your course planning in this same way. Your discipline may be quite different from history; you may have labs or clinics in addition to class. But the same principle applies: state what you want your students to learn, then list the major assignments and tests that will both teach and test that learning.
Fit: Do my tests and assignments fit the kind of learning I most want?
Feasibility: Is the workload I am planning for myself and my students reasonable, strategically placed, and sustainable?
Students: Primarily non-majors fulfilling general education requirements.
I want my students to be able to apply sociological analysis to what they see around them.
Laying out his course in this skeletal way helped this sociology professor realize that his tests and exams did not fit the learning he most wanted. Students were likely to study all night before the exams, using their texts and class notes - a procedure not likely to elicit thoughtful application of sociological perspectives to what they saw around them. The term paper he assigned was likely to appear to them as a library exercise, also unrelated.
The professor decided to change his assignments to fit more closely with what he wanted students to learn. He abandoned the term paper and exams and instead asked his students every other week to write a "sociological analysis" where they analyzed some event or situation they observed in light of the sociological viewpoints they had been studying.
Before fleshing out your outline, ask yourself:
The classes we've experienced as students were usually structured like this: the topic is introduced during an in-class lecture. Students are given basic information on the topic, and concepts and terms are introduced and exemplified. Then students are asked to go home and apply the concepts, solve problems, and analyze and synthesize the information. In other words, we have them do the relatively “easy work” of comprehension in class and then ask them to do the difficult work on their own.
Consider shifting this balance so that students get much of the introductory information outside of class and spend time in class doing tasks where they can benefit from the feedback of the instructor and peers like using historical data as evidence for a position on a debatable historical issue. Research strongly indicates that student involvement is the key to learning and that for higher-order learning such as analysis, argument, and problem-solving, the most effective teaching methods involve having students actually practice the activities of the discipline, interact frequently with one another and with the teacher, and receive frequent feedback.
As you consider the question of what students should do “outside” of the class hour versus during the class, also keep in mind that technology is changing the definition of "class hour." Resources such as Web, e-mail, chat, and simulation make students' study time more like the class or lab, as students, from their desks at home, interact with teachers, classmates, and interactive software that, for example, teaches and tests them on basic information. Thus today a teacher has a richer but also more complex array of times, spaces, and technologies to arrange into a sequence of activities that will lead to maximum learning.
When you've completed your planning, visit the syllabus tutorial for help in communicating your course plan to your students. |
How To Teach Your Children Social Skills
As our children grow, they will be going to school and interacting with lots of different people, such as friends and teachers. Hence it is necessary to teach them the social skills that enable them to get along with others, work as part of a group, follow rules, make and keep friends and act with confidence. These abilities also help our children to build good character.
Families have a profound influence on the early development of our children's social abilities and skills. If they enjoy loving, warm relationships with parents, siblings, grandparents and other relatives, they will have a strong foundation for forming good relationships with other people. They will be more understanding about how other people feel and able to treat others the way they themselves want to be treated.
To help children acquire basic social behavior, parents must set the proper expectations, rules, and the rewards and punishments associated with those rules, and, more importantly, set themselves up as good examples for their children. Your children learn by observing what you as their parents do and how you behave in your daily life, e.g. how you treat and interact with your spouse, elders and friends. As they begin to interact with others, your kids will model their behavior on the actions they have witnessed at home.
Following are some of the important social skills that you will want to work on with your children:
Learning that Others Have Their Own Views and Feelings
I have seen adults hold very strong views about certain things and try to impose their views on others. This often results in tension and uneasiness in the relationship. It is not healthy.
It is important for parents to teach their children from a young age that others have their own opinions and feelings. They need to learn to respect them and know that it is perfectly okay for people to have different views. With this understanding, children can then begin to develop empathy: the ability to discern and share another's feelings or ideas. It is the ability to put themselves into someone else's shoes that makes them willing to share, take turns, cooperate and treat their friends with kindness and respect.
Preschoolers usually do not have a clear sense of empathy. However, you can help them begin to understand by talking about other people's thoughts and feelings. At home, I teach my preschool daughter empathy by asking her questions such as:
"How do you think Sarah will feel if someone takes her toys without asking her permission?" "How will mummy and daddy feel if you hurt yourself?" "How would you feel if your friends didn't ask you to join them when they were playing?"
Often she will provide a sensible answer, followed by the proper action. When parents practice this often and long enough with their children, they will form the habit of being empathetic and sensible children who are welcomed and loved by their friends.
We need to help our children know that there are certain rules of proper social behavior. For example: no hitting others, no cutting the queue, waiting for others to finish talking before they talk, asking for permission if they want to take something that doesn't belong to them, etc.
Sharing does not come automatically to most young children. Often they learn this skill by observing their parents.
I know of some parents who in general are not very generous with their things, and their young children demonstrate this selfish characteristic very clearly when they interact with their playmates. For example, I have observed some children refusing to share their toys when playing with their friends, or quickly and quietly keeping all the good things for themselves and leaving the not-so-good ones for their friends; they all have not-so-generous parents.
If we want to make friends and build good relationships with others, we need to be generous. Generosity does not have to be related to material things; it can be the sharing of love and care, ideas, knowledge, etc. At home, I often share this teaching with my loved ones, including our young children:
"The more we share, the more we get"
Taking turns is one form of sharing that requires little children to do something hard: wait. It is important to practice this because there is plenty of turn-taking in school: waiting to answer until the teacher calls on them, waiting for their turn to touch the rabbit in the science corner, waiting for their turn to play with an interesting gadget, etc.
Respecting Others' Property
In school, your kids will be surrounded by many children with their own things such as books, stationery, toys, food, etc. They need to learn how to treat their friends' things and to handle them with care when their friends lend anything to them. And parents must teach their children the proper way of making a request if they want to borrow something from others, and how to show their appreciation if their wishes are granted. Teach them the proper use of words like "May I...", "please" and "thank you".
Working With Others
Help your children learn to cooperate with and help out their friends at school or when they are in a project team. The best way to teach them at home is to get them to share in family chores and housework. Get your children to help you tidy up the rooms, clear the table after meals, etc. Tell them that they belong to the family and it is important for them to help keep the house clean so that everyone can enjoy a good environment. And when they help out, they will have more time with mummy and daddy reading and playing with them; this method works very well in our home.
Children are more apt to get off to a good start in school and be more confident of their own social skills if they have learned to treat others with courtesy. Teach your children to say words like "please", "thank you", "yes Sir/Madam", etc.
Social skills emerge slowly in children. Parents need to persevere in teaching them. Often you'll have to go over rules again and again and talk to your children many times about the right and proper way to behave and treat others. Children need to be guided, reminded and corrected, no matter how well disposed they are.
About the Author
Article by Alvin Poh, founder of Learning Champ, a parenting website that provides information and resources to parents who want to help their children develop the important skills and mindset for a brighter future -> http://www.learningchamp.com
Trees and other plants help keep the planet cool, but rising levels of carbon dioxide in the atmosphere are turning down this global air conditioner.
According to a new study by researchers at the Carnegie Institution for Science, in some regions more than a quarter of the warming from increased carbon dioxide is due to its direct impact on vegetation.
This warming is in addition to carbon dioxide's better-known effect as a heat-trapping greenhouse gas. For scientists trying to predict global climate change in the coming century, the study underscores the importance of including plants in their climate models.
"Plants have a very complex and diverse influence on the climate system," says study co-author Ken Caldeira of Carnegie's Department of Global Ecology. "Plants take carbon dioxide out of the atmosphere, but they also have other effects, such as changing the amount of evaporation from the land surface. It's impossible to make good climate predictions without taking all of these factors into account."
Plants give off water through tiny pores in their leaves, a process called evapotranspiration that cools the plant, just as perspiration cools our bodies. On a hot day, a tree can release tens of gallons of water into the air, acting as a natural air conditioner for its surroundings. The plants absorb carbon dioxide for photosynthesis through the same pores (called stomata). But when carbon dioxide levels are high, the leaf pores shrink. This causes less water to be released, diminishing the tree's cooling power.
Silence is often seen as a negative in the typical classroom and the silent student viewed as timid, fearful or disengaged, writes a British researcher in a recent issue of the Cambridge Journal of Education. But use of silence is a neglected aspect of teaching, one that can serve productive purposes in the learning process and that also should be taken into greater account in classroom observations and teacher evaluations, writes Ros Ollin.
“Classroom observations, currently geared towards overt teacher behaviors such as initiating learning activities and intervening to maintain control, would be enriched by an awareness of different types and uses of silence,” the researcher writes. “This could lead to closer attention to the more subtle skills of good teaching–the often complex decisions on abstaining from talking, moving or intervening–and could provide a fruitful basis for a deeper understanding of classroom practice and an aid to the professional development of teachers.”
In this qualitative study on uses of silence in the classroom, the researcher interviewed 25 teachers, many of them teachers of adult students in a range of subject areas such as performing and visual arts, photography, music, science, sport, occupational therapy, yoga, English as a second language, fire safety training, etc. A few participants who expressed a strong dislike of silence were included in the group.
Teachers in the study reported using silence for the following purposes:
- Dramatic impact;
- relaxation, slowing down at the beginning or end of class;
- focus, discipline and control;
- inner reflection;
- creative space–to give students time to form own thoughts and dreams; and
- freedom from intrusion, from classmates, teachers, etc.
Communal silence in class, due to concentration and absorption, was seen as a positive indicator of comfort and security, Ollin reports.
One art teacher remarked, “Silence can very naturally occur when people start getting really into their painting or drawing or making something and I think it’s because they’ve got so absorbed in it.” Teachers sometimes make the mistake of intruding on that type of positive communal silence by comments like “Isn’t everyone being quiet?”
If there is a child in class who does not talk, typically, teachers feel unease and they make great efforts to get the student to talk using a variety of techniques, Ollin notes. But, there are three possible reasons why students are silent, the researcher writes:
- They may be shy.
- They may be resistant to the dominant discourse.
- They may be involved in a reflective and engaged silence.
When learners are silent they might be engaged in a variety of activities–listening, thinking, feeling, withdrawing. They may be actively participating, even though they are quiet. The research article has a list of 21 questions to consider in evaluating teachers in their use of silence in classroom observations. They include the following:
- What instances are there of non-verbal communication used productively so as not to interrupt learners' thinking?
- How often does the teacher demonstrate productive abstention from talking?
- How effectively does the teacher use changes in position or changes in physical activity to give learners an opportunity to absorb what has gone before, or to enable a shift in perspective?
- How does the teacher deal with the learner who does not talk?
Other evaluation questions addressed the use of writing and visuals as silent activities for students. Writing allows for more measured thought and a more permanent record of thinking, the researcher writes. It is a “slow time” rather than “fast-time” activity. Visuals can often be allowed to speak for themselves. Teachers reported not speaking at all during a PowerPoint presentation or showing images without text so that students could think for themselves rather than being told or guided by words.
Movement and activity can also be described in terms of silence. Activities such as sculpting in clay or drawing can be a means of developing a fresh perspective. Drawing your problems or ideas rather than talking about them might lead to a less conventional solution, Ollin writes. One teacher mentioned that students and teachers could get a fresh perspective simply by moving from one space to another.
Talking and silence can be viewed as fast time or slow time. During silence or “slow time”, students have time to think at their own pace rather than on the teacher’s time or at the pace of the rest of the class. Sometimes silence can be a negative. It can signify a breakdown in communication or a behavioral situation. A skilled teacher should be able to differentiate between productive and unproductive silences.
Educators who conduct classroom observations should rethink some of their attitudes towards silence in the classroom, the researcher notes. “Silence, as an absence of speech, is often problematised in a classroom situation, with the underlying implication that classrooms are for talking–as long as the talking is under the control of the teacher.”
“Silent pedagogy and rethinking classroom practice; structuring teaching through silence rather than talk,” by Ros Ollin, Cambridge Journal of Education, Volume 38, Number 2, June 2008, pp. 265-280. |
This will not necessarily mean treating all children ‘equally’ or every child achieving ‘the same’. Some will need special, or different, levels of support or challenge. For teachers, this means planning for effective learning for all pupils - irrespective of disability, heritage, special educational needs, social group, gender, physical or emotional needs, race or culture.
The national curriculum statutory inclusion statement makes this very clear. It is the responsibility of the school to provide a broad and balanced curriculum for all pupils, based on the programmes of study for each key stage in the national curriculum. The teacher’s responsibility is to minimise any obstacles to effective learning and plan for all children to participate in the curriculum and achieve the best that they can. This will help to ensure an inclusive classroom.
The national curriculum sets out three key principles that are essential for developing an inclusive curriculum, and ensuring that equal opportunities are met:
Setting suitable learning challenges
This involves teachers planning lessons and teaching in a way that takes into consideration the abilities and needs of the class, and enables children to achieve the learning objectives through a variety of approaches. High expectations of all children’s learning, differentiation and targeted work for individual children will be a feature of this approach.
Responding to pupils' diverse needs
The key to maintaining high expectations of children’s learning is to get to know the children well, and focus upon what it is that they can do. Some children will need extra support if they are struggling with their learning, and others might need to have extension activities. Differentiation will be essential to support children’s learning. This might take the form of differentiated input from the teacher, differentiated tasks set for the children, use of a variety of resources to support children’s needs, support from others in the class – including other children or different expectations in terms of outcome.
The national curriculum clearly states that teachers should respond to pupils' diverse needs through carefully considering the role that the following play:
Overcoming potential barriers to learning and assessment for individuals and groups of pupils
To overcome potential barriers teachers will, for example, have to take into consideration the following specific needs of children, and how these might affect children’s approaches to learning:
Teachers will also need to be aware of what children bring to their learning, from home and their prior experiences. They need to ensure that children from different cultures, with different religions and worldviews, have full access to the curriculum. They need to ensure that their cultures are reflected in the classroom environment, and that no child is inhibited in their learning because of gender.
Consideration of the following issues might assist the teacher in planning for an inclusive curriculum, and ensuring equal opportunities for all.
In conclusion, equal opportunities, and inclusive practice in the classroom involves careful planning, by all professionals concerned, to ensure effective learning opportunities for all children.
Relevant Acts and documentation concerning equal opportunities |
In telecommunications, squelch is a circuit function that acts to suppress the audio (or video) output of a receiver in the absence of a sufficiently strong desired input signal. Squelch is widely used in two-way radios and radio scanners to suppress the sound of channel noise when the radio is not receiving a transmission. Squelch can be 'opened', which allows all signals entering the receiver's discriminator tap to be heard. This can be useful when trying to hear distant, or otherwise weak signals (also known as DXing).
A carrier squelch or noise squelch is the simplest variant of all. It operates strictly on signal strength, such as when a television mutes the audio or blanks the video on "empty" channels, or when a walkie-talkie mutes the audio when no signal is present. In some designs, the squelch threshold is preset. For example, television squelch settings are usually preset. Receivers in base stations or repeaters at remote mountaintop sites are usually not adjustable remotely from the control point.
In devices such as two-way radios (also known as radiotelephones), the squelch on a local receiver can be adjusted with a knob; others have push buttons or a sequence of button presses. This setting adjusts the threshold at which signals will open (un-mute) the audio channel. Backing off the control will turn on the audio, and the operator will hear white noise (also called "static" or squelch noise) when there is no signal present. The usual operation is to adjust the control until the channel just shuts off - then only a small threshold signal is needed to turn on the speaker. However, if a weak signal is annoying, the operator can set the control a little higher, thereby adjusting the squelch to open only when stronger signals are received.
A typical FM two-way radio carrier squelch circuit is noise operated. It takes out the voice components of the receive audio by passing the detected audio through a high-pass filter. A typical filter might pass frequencies over 4,000 Hz (4 kHz). The squelch control adjusts the gain of an amplifier which varies the level of noise coming out of the filter. The audio output of the filter and amplifier is rectified and produces a DC voltage when noise is present. The presence of continuous noise on an idle channel creates a DC voltage which turns the receiver audio off. When a signal with little or no noise is received, the noise-derived voltage goes away and the receiver audio is unmuted. Some applications have the receiver tied to other equipment that uses the audio muting control voltage as a "signal present" indication, for example in a repeater the act of the receiver unmuting will switch on the transmitter.
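A very rough C# sketch of that noise-operated decision is shown below. It is not any particular radio's circuit; the filter constant, block size, and threshold are arbitrary placeholders, and the one-pole high-pass filter only stands in for the ~4 kHz filter described above.

```csharp
using System;
using System.Linq;

class NoiseSquelchSketch
{
    // Crude one-pole high-pass filter. `alpha` sets the cutoff; the value is a
    // placeholder and would be chosen to suit the sample rate actually in use.
    static double[] HighPass(double[] audio, double alpha = 0.9)
    {
        var output = new double[audio.Length];
        double prevIn = 0.0, prevOut = 0.0;
        for (int i = 0; i < audio.Length; i++)
        {
            output[i] = alpha * (prevOut + audio[i] - prevIn);
            prevIn = audio[i];
            prevOut = output[i];
        }
        return output;
    }

    // Returns true when the audio should be muted: a high level of
    // high-frequency hiss means no (or only a very weak) signal is being received.
    static bool ShouldMute(double[] detectedAudio, double threshold)
    {
        // "Rectify" the filtered audio, then average it to get a noise level,
        // mimicking the DC voltage derived from the noise in the analog circuit.
        double noiseLevel = HighPass(detectedAudio).Select(x => Math.Abs(x)).Average();
        return noiseLevel > threshold;
    }
}
```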
Tone squelch and selective calling
Tone squelch, or other forms of selective calling, is sometimes used to solve interference problems. Where more than one user is on the same channel (co-channel users), selective calling addresses a subset of all receivers. Instead of turning on the receive audio for any signal, the audio turns on only in the presence of the correct selective calling code. This is akin to the use of a lock on a door. A carrier squelch is unlocked and will let any signal in. Selective calling locks out all signals except ones with the correct key to the lock (the correct code).
In non-critical uses, selective calling can also be used to hide the presence of interfering signals such as receiver-produced intermodulation. Receivers with poor specifications—such as inexpensive police scanners or low-cost mobile radios—cannot reject the strong signals present in urban environments. The interference will still be present. It will still degrade system performance but by using selective calling the user will not have to hear the noises produced by receiving the interference.
Four different techniques are commonly used. Selective calling can be regarded as a form of in-band signaling.
CTCSS (Continuous Tone-Coded Squelch System) continuously superimposes any one of about 50 low-pitch audio tones on the transmitted signal, ranging from 67 to 254 Hz. The original tone set was 10, then 32 tones, and has been expanded even further over the years. CTCSS is often called PL tone (for Private Line, a trademark of Motorola), or simply tone squelch. General Electric's implementation of CTCSS is called Channel Guard (or CG). RCA Corporation used the name Quiet Channel, or QC. There are many other company-specific names used by radio vendors to describe compatible options. Any CTCSS system that has compatible tones is interchangeable. Old and new radios with CTCSS and radios across manufacturers are compatible.
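For readers curious how a decoder might check for one of these sub-audible tones in software, the C# sketch below uses the Goertzel algorithm to measure the energy at a single frequency. It is a generic illustration, not any vendor's implementation; the sample rate, block length, and decision threshold are assumptions.

```csharp
using System;

class CtcssDetectSketch
{
    // Goertzel algorithm: energy of `samples` at `toneHz` for the given sample rate.
    static double ToneEnergy(double[] samples, double toneHz, double sampleRate)
    {
        double omega = 2.0 * Math.PI * toneHz / sampleRate;
        double coeff = 2.0 * Math.Cos(omega);
        double sPrev = 0.0, sPrev2 = 0.0;
        foreach (double x in samples)
        {
            double s = x + coeff * sPrev - sPrev2;
            sPrev2 = sPrev;
            sPrev = s;
        }
        return sPrev * sPrev + sPrev2 * sPrev2 - coeff * sPrev * sPrev2;
    }

    // Unmute only if the expected tone (e.g. 103.5 Hz) is clearly present in the block.
    static bool ToneDetected(double[] audioBlock, double expectedToneHz,
                             double sampleRate = 8000.0, double threshold = 1e-3)
    {
        return ToneEnergy(audioBlock, expectedToneHz, sampleRate) > threshold;
    }
}
```

In practice a decoder would scan the handful of tones in use on a system rather than a single frequency, and would pick a threshold relative to the total audio energy in the block.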
Selcall (Selective Calling) transmits a burst of up to five inband audio tones at the beginning of each transmission. This feature (sometimes called "tone burst") is common in European systems. Early systems used one tone (commonly called "Tone Burst"). Several tones were used, the most common being 1,750 Hz, which is still used in European amateur radio repeater systems. The addressing scheme provided by one tone was not enough, so a two tone system was devised - one tone followed by a second tone (sometimes called a "1+1" system). Later on, Motorola marketed a system called "Quik-Call" that used two simultaneous tones followed by two more simultaneous tones (sometimes called a "2+2" system) that was heavily used by fire department dispatch systems in the USA. Later selective call systems used paging system technology that made use of five sequential tones. In the same way that a single CTCSS tone would be used on an entire group of radios, a single five-tone sequence is used in a group of radios.
DCS (Digital-Coded Squelch), generically known as CDCSS (Continuous Digital-Coded Squelch System), was designed as the digital replacement for CTCSS. In the same way that a single CTCSS tone would be used on an entire group of radios, the same DCS code is used in a group of radios. DCS is also referred to as Digital Private Line (or DPL), another trademark of Motorola, and likewise, General Electric's implementation of DCS is referred to as Digital Channel Guard (or DCG). DCS is also called DTCS (Digital Tone Code Squelch) by Icom, and other names by other manufacturers. Radios with DCS options are generally compatible provided the radio's encoder-decoder will use the same code as radios in the existing system.
DCS adds a 134.4 bps (sub-audible) bitstream to the transmitted audio. The code word is a 23-bit Golay (23,12) code which has the ability to detect and correct errors of 3 or fewer bits. The word consists of 12 data bits followed by 11 check bits. The last 3 data bits are a fixed '001', this leaves 9 code bits (512 possibilities) which are conventionally represented as a 3-digit octal number. Note that the first bit transmitted is the LSB, so the code is "backwards" from the transmitted bit order. Only 83 of the 512 possible codes are available, to prevent falsing due to alignment collisions.
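The C# sketch below only illustrates those word sizes. The generator polynomial is one of the standard (23,12) Golay generators, but treat it, along with the placement of the fixed '001' bits, as an assumption; real DCS encoders also apply transmission-order and inversion conventions that are not modelled here.

```csharp
using System;

class DcsWordSketch
{
    // Assumed generator polynomial for the (23,12) Golay code:
    // x^11 + x^9 + x^7 + x^6 + x^5 + x + 1  (0xAE3).
    const int Generator = 0xAE3;

    // Systematic cyclic encoding: the 11 check bits are the remainder of
    // data * x^11 divided by the generator polynomial (arithmetic over GF(2)).
    static int CheckBits(int data12)
    {
        int reg = data12 << 11;
        for (int bit = 22; bit >= 11; bit--)
        {
            if ((reg & (1 << bit)) != 0)
                reg ^= Generator << (bit - 11);
        }
        return reg & 0x7FF; // low 11 bits hold the remainder
    }

    static void Main()
    {
        int octalCode = Convert.ToInt32("023", 8); // DCS code 023 -> 9 code bits
        int data12 = octalCode | (0b001 << 9);     // plus the fixed '001' data bits (placement assumed)
        int word23 = (data12 << 11) | CheckBits(data12);

        Console.WriteLine(Convert.ToString(word23, 2).PadLeft(23, '0'));
    }
}
```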
XTCSS is the newest signalling technique and it provides 99 codes with the added advantage of "silent operation". XTCSS fitted radios are purposed to enjoy more privacy and flexibility of operation. XTCSS is implemented as a combination of CTCSS and in-band signalling.
Squelch was invented first and is still in wide use in two-way radio, especially in the amateur radio world. Squelch of any kind is used to indicate loss of signal, which is used to keep commercial and amateur radio repeaters from transmitting continually. Since a carrier squelch receiver cannot tell a valid carrier from a spurious signal (noise, etc.), CTCSS is often used as well, as it avoids false keyups. Use of CTCSS is especially helpful on congested frequencies or on frequency bands prone to skip and during band openings.
It is a bad idea to use any coded squelch system to hide interference issues in systems with life-safety or public-safety uses such as police, fire, search and rescue or ambulance company dispatching. Adding tone or digital squelch to a radio system does not solve interference issues, it just covers them up. The presence of interfering signals should be corrected rather than masked. Interfering signals masked by tone squelch will produce apparently random missed messages. The intermittent nature of interfering signals will make the problem difficult to reproduce and troubleshoot. Users will not understand why they cannot hear a call, and will lose confidence in their radio system.
Professional wireless microphones use squelch to avoid reproducing noise when the receiver does not receive enough signal from the microphone. Most professional models have adjustable squelch, usually set with a screwdriver adjustment on the receiver. |
An endoscopy is a minimally invasive procedure that allows physicians to identify and evaluate the function of vital organs as well as locate the presence of any type of abnormalities. The procedure is conducted using a device known as an endoscope. Under certain conditions, an endoscopy will sometimes make use of a similar device that is called a borescope.
An endoscope usually is composed of a tube that is either flexible or rigid, depending on the type of endoscopic procedure to be performed. The device includes a light source to illuminate the interior area that the physician wishes to observe, as well as a lens to help focus the view and to take photographs if necessary. The presence of the tube also makes it possible to utilize various types of medical instruments to gently move organs to one side or to harvest a tissue sample of some kind.
The main purpose of an endoscopy is to allow the physician to observe what is happening within the body. The procedure can help the physician to identify signs that an organ is not functioning as it should, is enlarged, or in some other manner is not as it should be. At the same time, an endoscopy can be used to visually evaluate any type of abnormal growths present in or around an organ, such as a tumor.
Along with providing a real-time visual image to the physician, an endoscopy sometimes includes the extraction of a small sample of tissue. This is especially helpful when the physician feels there is a need to perform a biopsy or other testing on some other type of tissue sample. Along with harvesting tissue samples, an endoscopy procedure will normally include taking snapshots of the body’s interior. The attending physician can use these photographs in the ongoing process of diagnosis and treatment.
This type of procedure can be used to observe activity in a number of systems throughout the body. For example, a gastrointestinal endoscopy would provide access to the entire GI tract, including the small intestine, bile duct, and colon. The duodenum, stomach, and esophagus may also be observed during this procedure. Depending on the particular set of organs the physician wishes to view, the GI procedure may be referred to as a stomach endoscopy or an upper endoscopy.
A capsule endoscopy is a common designation when the procedure includes the use of a small camera. Usually classified as a noninvasive form of endoscopy, the encapsulated camera is ingested by the patient, and records images as the capsule moves through the digestive tract. This procedure provides the attending physician with a wealth of information without the need to schedule any type of exploratory surgical procedure.
Endoscopies are also used to observe activity and conditions in the urinary and respiratory tracts. The procedure can also be used in diagnosing health issues with the female reproductive system as well as observe the activity of the heart and other organs found in the chest. There are even specialized forms of the procedure that allow the physician to monitor the condition of the fetus and the amnion during pregnancy.
In years past, the information that can be gathered during an endoscopic procedure would have been available only through a far more invasive operation. Because an endoscopy is minimally invasive, the recovery time for the patient is minimal, and it is possible to access and utilize the collected data immediately rather than later.
This chapter is about working with text files.
Sometimes you will want to access data stored in text files.
Text files used to store data are often called flat files.
Common flat file formats are .txt, .xml, and .csv (comma-separated values).
In this chapter you will learn:
In the example to follow, you will need a text file to work with.
On your web site, if you don't have an App_Data folder, create one.
In the App_Data folder, create a new file named Persons.txt.
Add the following content to the file:
The example below shows how to display data from a text file:
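(The tutorial's original listing is not reproduced here; the following is a minimal Razor C# sketch along the lines it describes, so the exact markup may differ from the original example.)

```cshtml
@using System.IO
@{
    // Build the full physical path to the data file and read every line into an array.
    var dataFile = Server.MapPath("~/App_Data/Persons.txt");
    string[] userData = File.ReadAllLines(dataFile);
}
<ul>
@foreach (string dataLine in userData)
{
    // Each line holds comma-separated values; display every item in the line.
    foreach (string dataItem in dataLine.Split(','))
    {
        <li>@dataItem</li>
    }
}
</ul>
```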
Server.MapPath finds the exact text file path.
File.ReadAllLines opens the text file and reads all lines from the file into an array.
For each dataLine in the array, each dataItem is displayed.
With Microsoft Excel, you can save a spreadsheet as a comma separated text file (.csv file). When you do so, each row in the spreadsheet is saved as a text line, and each data column is separated by a comma.
You can use the example above to read an Excel .csv file (just change the file name to the name of the Excel file). |
Dragon or Damsel?
Dragonflies and damselflies belong to a group (or “order”) of insects known as Odonata. Within this order, there are two main types (or “sub-orders”): damselflies (Zygoptera) and the true dragonflies (Anisoptera).
Damselflies are generally small, delicate insects with a weak flight. Their wings all have the same size and shape. Their eyes are widely separated and positioned on either side of the head. When at rest, they hold their wings closed along their abdomen – with the exception of Emerald damselflies (Lestes species) that hold them partly open.
True dragonflies are usually larger, more robust insects and are fast, powerful fliers. Their hindwings are broader than their forewings. They also have multi-facetted eyes, but these are much larger, occupy most of the head and are very close to one another, often touching. When perched, they hold their wings opened out flat.
Whether you are making a jack-o’-lantern for Halloween, a pumpkin pie for the holidays, or pumpkin soup for a cold, rainy day, here is a fun way to engage your child in estimation, measurement and simple graphing activities.
- Before you tackle the task of cooking or carving, invite your child to help you decide if this particular pumpkin is going to meet your needs. If you are cooking, for example, decide together: Is the pumpkin going to be big enough? How could you figure this out? (One idea is to weigh it: a 3-pound pumpkin usually gives you 4 to 5 cups of pulp.) Or, if you are making a jack-o’-lantern, talk about any problems the pumpkin's shape might create as you make the face. What could you do to overcome those problems?
- If you are using more than one pumpkin, talk about ways they are the same and different (height, circumference, color, overall shape). Invite your child to help you cut a length of string for each pumpkin that shows how tall it is. (To keep track of which string goes with which pumpkin, use a colored marker to mark both the pumpkin and the string.) Then compare their heights using a “bar graph.” To make the “graph,” lay the strings parallel to one another on a table or countertop, with the bottom ends lined up. Similarly, you can ask how the height of a pumpkin compares to the distance around its middle — the circumference. Is the circumference greater than the height? How could you find out? Invite your child to help you cut a length of string that measures the distance around the pumpkin's widest part. (Use the same markers to keep track, as explained above.) Now pair the height of each pumpkin with its circumference. What do you notice? (That the circumference is greater than the height.)
- If you like to bake the seeds for snacks, before scooping them out, invite your child to estimate how many seeds are inside the pumpkin. To do this, use a sharp knife to open up the pumpkin and look inside. What strategies could you use to estimate how many seeds you see? (One way is to mentally divide the inside of the pumpkin into equal sections and count how many seeds are in that section. Then, using repeated addition, add that number to itself for all the sections.) Using your estimate, do you think you'll have enough seeds to give snacks to 10 friends?
- If you are making a jack-o’-lantern, invite your child to suggest geometric shapes for the eyes, nose, and mouth, and then draw the shapes where they should go. (You can either cut out the shapes or have your child color them in.) Finish off the pumpkin by adding fun and wacky details, like string for hair, costume jewelry, etc.
- If you are cooking with your pumpkin, invite your child to help you measure the pulp and other ingredients for your recipe.
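To make the arithmetic behind these activities concrete, here is a minimal sketch in Python; every measurement and count in it is invented for illustration, so substitute your own pumpkin's numbers:

```python
# Illustrative numbers only -- measure and count your own pumpkin.
height_cm = 24           # length of the string laid from base to stem
circumference_cm = 79    # length of the string wrapped around the widest part

print("Circumference greater than height?", circumference_cm > height_cm)
# The distance around usually wins, since it is roughly 3.14 times the width.

# Seed estimate: imagine the inside divided into 6 roughly equal sections
sections = 6
seeds_in_one_section = 85   # counted (or estimated) in a single section

estimate = 0
for _ in range(sections):   # repeated addition, one section at a time
    estimate += seeds_in_one_section

print("Estimated seeds in the whole pumpkin:", estimate)         # 510
print("Enough for 10 friends, 20 seeds each?", estimate >= 200)  # True
```

Counting the seeds while scooping them out shows how close the estimate came. |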
Tooth decay is a serious problem for some children. Decay in the upper and lower front teeth is the most common problem.
Keeping Teeth Healthy
Your child needs strong, healthy baby teeth to chew food and to talk. Baby teeth also make space in children's jaws for their adult teeth to grow in straight.
Foods and drinks with sugar that sit in your child's mouth cause tooth decay. Milk, formula, and juice all have sugar in them. A lot of snacks children eat also have sugar in them.
- When children drink or eat sugary things, sugar coats their teeth.
- Sleeping or walking around with a bottle or sippy cup with milk or juice keeps sugar in your child's mouth.
- Sugar feeds the naturally occurring bacteria in your child's mouth.
- Bacteria produce acid.
- Acid contributes to tooth decay.
Preventing Tooth Decay
To prevent tooth decay, consider breastfeeding your baby. Breast milk by itself is the best food for your baby. It keeps the inside of your baby's mouth healthy and prevents tooth decay.
If you are bottle-feeding your baby:
- Give babies, ages newborn to 12 months, only formula to drink in bottles.
- Remove the bottle from your child's mouth or hands when your child falls asleep.
- Put your child to bed with a bottle of water only. DO NOT put your baby to bed with a bottle of juice, milk, or other sweet drinks.
- Teach your baby to drink from a cup at 6 months of age. Stop using a bottle when your baby is 12 to 14 months old.
- DO NOT fill your child's bottle with drinks that are high in sugar, such as punch or soft drinks.
- DO NOT let your child walk around with a bottle of juice or milk.
- DO NOT let your baby suck on a pacifier all the time. DO NOT dip your child's pacifier in honey, sugar, or syrup.
Caring for Your Child's Teeth
Check your child's teeth regularly.
- After each feeding, gently wipe your baby's teeth and gums with a clean washcloth or gauze to remove plaque.
- Begin brushing as soon as your child has teeth.
- Create a routine. For instance, brush your teeth together at bedtime.
If you have infants or toddlers, use a pea-size amount of non-fluoridated toothpaste on a washcloth to gently rub their teeth. When your children are older and can spit out all of the toothpaste after brushing, use a pea-size amount of fluoridated toothpaste on their toothbrushes with soft, nylon bristles to clean their teeth.
Floss your child's teeth when all of their baby teeth come in. This is usually by the time they are 2 ½ years old.
If your baby is 6 months or older, they need fluoride to keep their teeth healthy.
- Use fluoridated water from the tap.
- Give your baby a fluoride supplement if you drink well water or water without fluoride.
- Make sure any bottled water you use has fluoride.
Feed your children foods that contain minerals to strengthen their teeth.
Take your children to the dentist when all their baby teeth have come in or at age 2 or 3, whichever comes first.
Alternative names: Bottle mouth; Bottle caries; Baby bottle tooth decay; Early childhood caries (ECC) |
Bacterial hair-like extensions appear to be capable of conducting electricity down their length, possibly playing a key role in respiration by allowing the cells to dump electrons at distances far outside the cell.
[Image credit: Wikimedia Commons; Gross L, PLoS Biology 4(8): e282, 2006]
The results, reported online today (11th October) in Proceedings of the National Academy of Sciences, add to a controversial body of literature about the function of these conductive pili, or "nanowires." "It is the first time in which [researchers] actually measure electron transport along the wires at micrometer distances, [which] make it a biologically relevant process," said microbiologist Gemma Reguera of Michigan State University, who was not involved in the research. "This suggests they could be [a] relevant mode of respiration for bacteria."
"It's an incredibly important finding," agreed microbiologist Derek Lovley of the University of Massachusetts, who also did not participate in the study. "It's fascinating that these microorganisms can make electricity and can get electrons outside the cell."
Shewanella oneidensis MR-1 belongs to a class of bacteria that can generate energy using solids, such as metal oxides, as electron acceptors. Unlike oxygen, for example, which diffuses into cells to accept the electrons produced during respiration, these solids are found outside cells. These bacteria must therefore find a way to move electrons across the cell membrane and out to the solid surface. A number of strategies have been proposed for how bacteria accomplish this. If the cells are in direct contact with the solids, electron transfer proteins on the cell membrane can transfer the electrons. Alternatively, small soluble molecules may act as chauffeurs, shuttling the electrons to their final destination.
Recently, a third mechanism of electron dumping has been proposed: Bacteria use nanowires to conduct the electrons to areas where the metal electron acceptors may be more abundant. Evidence that nanowires actually conduct electrons, or electricity, down their length has been lacking, however. To resolve this lingering question, biophysicist Moh El-Naggar of the University of Southern California and his colleagues grew S. oneidensis under conditions that promote the production of lots of nanowires, namely by limiting the number of available electron acceptors. They then rested platinum rods at each end of a nanowire and applied a voltage. Sure enough, the nanowire conducted the current. When the nanowires were snipped, the current stopped.
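To get a feel for what such a two-probe measurement yields numerically, here is a minimal Python sketch with invented voltage and current values rather than data from the study: for an ohmic wire the measured current grows in proportion to the applied voltage, and the slope of that line is the wire's conductance.

```python
# Hypothetical two-probe I-V sweep across a single nanowire (illustrative values only).
voltages = [-0.2, -0.1, 0.0, 0.1, 0.2]                  # volts applied across the wire
currents = [-2.0e-9, -1.0e-9, 0.0, 1.0e-9, 2.0e-9]      # amperes measured at each voltage

# Least-squares slope through the origin: G = sum(V * I) / sum(V * V)
conductance = sum(v * i for v, i in zip(voltages, currents)) / sum(v * v for v in voltages)
resistance = 1.0 / conductance

print(f"Conductance: {conductance:.2e} S")    # about 1e-08 siemens for these numbers
print(f"Resistance:  {resistance:.2e} ohm")   # about 1e+08 ohm
# A snipped wire would give currents near zero at every voltage, i.e. a negligible slope.
```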
"It's the first demonstration that these bacterial nanowires are actually conductive," El-Naggar said. "The question is now, what are the implications for these bacterial nanowires in entire microbial communities?"
Until in vivo measurements can be made, it is impossible to know if the bacteria are using the nanowires as a mechanism for transporting electrons for respiration, El-Naggar cautioned. Unfortunately, the techniques available today are adapted from research on inorganic wires, which may affect the findings, he said. But when the group repeated the experiment using a different technique, they got the same results. "Our research indicates that bacteria produce nanowires that are capable of mediating electron transport over long distances."
The team also repeated the experiment using mutant bacteria that lacked two electron transfer proteins known as cytochromes, suspected to be important for conducting electricity. These mutants did not conduct a current. If it turns out these bacteria are indeed linking up into complex biological circuits, "the implications are huge," El-Naggar said. "If [the nanowires are] central to the functioning and to the survival of the community, it enables us to either try to optimize it or even disrupt it."
In the case of microbial fuel cells, for example, which produce electricity by oxidizing biofuels, understanding how these nanowires work could allow researchers to increase the efficiency of the process. Conversely, in the case of pathogenic biofilms, it could provide a target to try to disrupt bacterial function.
Another potential application of these nanowires is in bioremediation of toxic heavy metals, said chemical engineer Plamen Atanassov of the University of New Mexico, who was not involved in the research. "The hope is that bacteria like Shewanella with its ability to reduce metal oxides will be successfully deployed as a bioremediation agent."
But more research is needed before these applications are realized, said Reguera. "We first have to do these baby steps of characterizing the physical and biological properties of the wires themselves," she said, such as what they are made of. "And perhaps, once they know that, they may be able to mass produce them and explore applications in nanotechnology."
M.Y. El-Naggar, et al., "Electrical transport along bacterial nanowires from Shewanella oneidensis MR-1," PNAS, www.pnas.org/cgi/doi/10.1073/pnas.1004880107, 2010. |