What is GUI?

GUI stands for Graphical User Interface: a visual layer of communication that lets a user interact with a machine easily. Instead of the usual text-based or command-based communication, the interface presents graphical elements such as buttons and icons, and the user communicates by interacting with them.

- A common example of a GUI is a Microsoft operating system.
- Compare MS-DOS with Windows 7.
- The most important difference is the ease of use that Windows 7 brings to the table.
- For a common user, Windows 7 is the obvious choice, because communicating with the machine through typed commands, as in MS-DOS, is hard.
- Why is Windows 7 easier for a common user? The answer is the GUI.
- The GUI helps the user understand the functionality of the computer through graphical icons, and a click on an icon initiates the corresponding action.
- Thus the GUI abstracts away the hard-to-understand technical details of each component or module and provides hassle-free usage of the system.

How Does GUI Work?

1. A pointer serves as the means of navigation for interacting with the different graphical icons.
2. Abstraction is a major concept behind a GUI operating system.
3. The user clicks an icon with the pointer, which initiates a series of actions.
4. Normally an application or piece of functionality starts.
5. The user then provides input or tasks to generate the desired response from the machine.
6. The GUI translates the user's language, which comprises simple one-line commands, single clicks and double clicks, into machine or assembly language.
7. The machine understands this language, responds to the task, and the result is translated back into user language and communicated to the user via the GUI.

- Consider a typical desktop screen. If you want to start an application, say a video player, all you need to do is click the VLC Media Player icon using the pointer.
- Double-click the icon to open the application.
- This lets the user open a video player like VLC with just a couple of clicks.
- Now, what if there were no GUI?
- Without a GUI, we would have to open a command prompt and feed instructions through a command-line interface to start the application, play the video, and so on.
- This is particularly inconvenient, because you literally have to type commands for each and every action.
- With a GUI, we want to open the VLC Media Player, we see the icon, we double-click it, the application opens, we select the file we want, click Open, and the video starts playing.
- This is how the GUI made computers simpler to use for people who are not experts in working with them.
- This is the main reason the GUI helped computers reach the masses and made working with them so much fun.

The advantages of GUI are:

- It is visually appealing and makes anyone want to get involved in working with the machine.
- Even a person with no computer knowledge can use the computer and perform basic functions; the GUI is responsible for that.
- Searching becomes very easy, as the GUI provides a visual representation of the files present and details about them.
- Each and every response from the computer is communicated visually through the GUI.
- A user with no computer knowledge can start learning about the machine because the GUI gives them room to explore and provides discoverability.
- If, for example, a user starts using a computer with no interface, then he or she has to type commands for the machine to execute each task; in a way, the user must have some kind of programming knowledge.

The disadvantages of GUI are:

- One can only do what has already been pre-programmed by some other developer.
- You cannot change the basic functionality of the system.
- It takes more processing power for the system to function.
- It is slow compared to simple command-based interfaces.
- It consumes more memory space.
- A GUI may be simple for a consumer, but not for the programmers who have to design and implement each and every function and also apply abstraction so that users feel the advantages of the GUI.
- If the functionality the user needs is not present, then the user must know the commands necessary to proceed with the flow, or else they are simply stuck at that point.

How Does the User Interact with GUI?

1. A user interacts with the GUI through simple actions like a click, which the GUI interprets and promptly translates into assembly language.
2. Apart from the translation to machine language, the GUI helps display the actual process being carried out, the response from the machine, the amount of memory being used, the size of the file, the speed of the processor, the power being used by the computer, the tasks being performed, and many more details.
3. The user uses a single click to select a particular item.
4. The user can double-click to start an application.
5. The user can right-click to see the properties and other details of the application.
6. The user can use the pointer to get information and continue multitasking across desired operations.

Why Should We Use GUI?

There are some standards for how a Graphical User Interface should be designed.

- The visibility and abstraction must be uniform, at least across GUIs developed by a single company.
- Each GUI has its own features and functions, but the graphic elements and terminology of the system and its architecture must be well maintained.
- A good GUI gives the user plenty of freedom, such as backtracking to the last step; undo features must be present.
- And many more. As noted above, there are many standards and GUI guidelines for a programmer to follow when designing and developing a GUI.
- The whole effort they put into developing a GUI lets a user perform a task like playing a video with just a few clicks.
- Simplicity is why we should definitely use it.

Why Do We Need GUI?

1. One can practically start using a computer thanks to the GUI.
2. One can also begin to learn and unravel the many options present in the computer.
3. One might even start understanding the computer and its language, become interested enough to learn a programming language, or even create one that makes computers and their products simpler to work with in the future.

How Will this Technology Help You in Career Growth?

- GUI, or the Graphical User Interface, will definitely help you in your career, irrespective of what job you do.
- Anyone whose job requires a computer will require a GUI.
- Developing GUIs will always be a bright prospect for developers at every stage of their career.
- One can learn a programming language like Python, Ruby, Java, or .NET to develop different types of applications (a minimal Python sketch of the click-to-action idea follows at the end of this guide).

1. Before GUIs, there was the CLI (Command Line Interface).
2. At that time, no one thought ordinary people would ever use a computer.
3. But now everyone owns a computer and has a basic knowledge of how to use it.
4. That is what the GUI achieved: it did not ask more of the user; instead, it offered more, so the user could actually start using the computer.
5. The information technology industry boomed, with many jobs offered to people for designing and developing GUIs.
6. Newer languages have adapted and are being used to develop GUIs.
7. GUI development will always have scope in the job market, and GUIs will continue to improve into more usable and simpler user interfaces, changing the world as they already have in the past.

This has been a guide to What is GUI? Here we discussed how it works, why it is needed, its advantages, uses, and career prospects, with examples. You can also go through our other suggested articles to learn more.
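As mentioned above, here is a minimal sketch of the click-to-action flow a GUI provides, written in Python with the standard-library tkinter toolkit. The window title, button label, and the play_video placeholder function are illustrative assumptions, not part of the original guide; a real desktop GUI would launch an actual media player instead.

```python
# Minimal sketch of the GUI click-to-action flow described above.
# Assumes Python 3 with the standard-library tkinter toolkit; play_video()
# is a placeholder that stands in for launching a real media player.
import tkinter as tk

def play_video():
    # In a real desktop, double-clicking the VLC icon would launch the player;
    # here we just update a label to show that the click was translated
    # into an action by the toolkit's event loop.
    status.config(text="Playing video...")

root = tk.Tk()
root.title("Mini GUI sketch")

# A clickable "icon": the user's click is routed to play_video()
tk.Button(root, text="Play video", command=play_video).pack(padx=20, pady=10)

status = tk.Label(root, text="Idle")
status.pack(pady=(0, 10))

root.mainloop()  # the event loop that waits for clicks and dispatches them
```

Running the script opens a small window; clicking the button routes the event through tkinter's event loop to the callback, which is the same translation step described in "How Does GUI Work?" above.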
The Lives of the Caesars

Julius Caesar was a strong leader for the Romans who changed the course of history. With his courage and strength he created a strong empire. Caesar was a major part of the Roman Empire because of his strength and his strong war strategies. Julius Caesar was a Roman general and statesman whose dictatorship was pivotal in Rome's transition from republic to empire. Suetonius seems to be attacking Caesar's reputation but still shows his good side. He notes Caesar's lust for a large military command. Caesar treated his men with respect and fairness. Suetonius says, "He judged his men by their fighting record, not by their moral or social positions, treating them all with equal severity and equal indulgence." He showed his soldiers that he really cared for them by spoiling them with banquets and entertainment. Julius Caesar secured the office of Dictator. Caesar used his dictatorship to increase his power. With all of his powers he was practically the king of Rome. It was here that Caesar found his power to preside over others, and where he became passionately hated by the Roman ruling class. As dictator, Caesar had secured the power of an absolute ruler. He had gained many rights as a dictator, which also allowed him to control the magistrates and their elections. During Caesar's rule, elections into office proceeded as normal. However, he had passed legislation which allowed him to control the elections, whereby his suggestions were always acknowledged. In this manner, many of his colleagues were rewarded with posts as magistrates. Suetonius shows how Caesar won over the senate: he bribed senators by lending them money in order to get his way. This was how he got the people to love him so much. Caesar was very clever in using money to win the people over. Suetonius portrays Caesar as unscrupulous in amassing wealth from his offices, civilian and military. His conquests in Gaul, Italy and Britain are shown as catapulting him into a stratum of high power, prestige, and fortune. Caesar also obtained honors to increase his prestige. He wore the robe, crown, and scepter of a triumphant general. A general theme running through the life of Julius Caesar is that the lust for power is a corrupting influence. Caesar loved to throw away money as if it grew on trees. He built a house, but did not like the way it was built, so he simply tore it down. The wasting of money soon caught up with him. When Caesar was running out of money or jewelry, he would go to another region, such as Britain, to plunder it. There he would sell off the pearls so that he could get out of debt. Caesar is shown to be a ladies' man. He would go out and take any woman he pleased. Suetonius acknowledges this by using Curio's speech: Caesar was every woman's man and every man's woman. He would often give lavish presents to the women. Caesar would even take many of his friends' wives. He wanted to make a law that would allow him to marry whatever wives he pleased, for the sake of begetting children. Suetonius shows Caesar's many characteristics; they might sometimes have been good, but most often were bad. A general theme that plagued the life of Julius Caesar is that the lust for power is a corrupting influence. He had no remorse for anybody except his soldiers and his women. Caesar knew that with money he could move mountains. As long as he had a military to back him up, there was nothing to stop him.
He was a very good strategist, which made him very popular with the people. He won many battles and never looked back until his death.
The Moon does not shine by its own light; it is visible to us on Earth because it reflects light from the Sun. The fraction of sunlight a surface reflects is known as its albedo. Albedo is basically the reflecting power of a surface, and it is measured on a scale of 0 to 1: 0 means low reflection and 1 means high reflection. The Moon's average visual albedo is 0.12; by comparison, Earth has an average albedo of 0.37. The Moon's albedo varies from about 0.07 at its darkest points to 0.24 at its lightest. The Moon goes through its cycle of phases about every 29.5 Earth days as it orbits the Earth, and at all times half of the Moon is lit by the Sun. The other half of the Moon is in darkness, as it faces away from the Sun. We see more and more of the lit side of the Moon as it orbits the Earth. This change happens slowly, and the stages are known as the phases of the Moon. When the Moon is between the Sun and the Earth, it is called the new moon; it rises and sets at the same time as the Sun, but it is not visible to us because the side being lit by the Sun cannot be seen from Earth. The Moon is not totally invisible to us at this point, because there is other light in the universe that it reflects, including the light the Earth itself reflects from the Sun to the Moon, but generally a new moon is very difficult for the naked eye to see. One week after the new moon, first we see a slim crescent, also known as the horned moon, and then the Moon looks like a half circle. This stage is known as the first quarter, because the Moon has completed one quarter of its orbit around the Earth. Half of the Moon's sunlit side is visible from Earth at this stage. This first-quarter moon rises at noon and sets at midnight. The growing of the lit portion after the new moon is called waxing. A week after the first quarter, the Moon has moved to a point where the Earth is between the Moon and the Sun. We can now see the entire side of the Moon lit by the Sun. This full moon rises as the Sun sets and sets as the Sun rises. The Moon is said to be "gibbous" when its lit portion is larger than a semicircle but not yet a full circle. One week after the full moon, the Moon again looks like a half circle, and it is called the last quarter because the Moon has completed the last quarter of its orbit around the Earth. Now half of the Moon's sunlit side is again visible from Earth.
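As a rough illustration of how the lit fraction changes over the roughly 29.5-day cycle described above, here is a small Python sketch. It assumes the simple approximation that the illuminated fraction is (1 - cos(2*pi*t/P)) / 2, where t is days since new moon and P is the 29.53-day cycle; this approximation is an assumption of the sketch, not something stated in the article, and it ignores orbital eccentricity.

```python
# Rough sketch: illuminated fraction of the Moon versus days since new moon.
# Assumes a simple circular-motion approximation: fraction = (1 - cos(2*pi*t/P)) / 2,
# with cycle length P ~ 29.53 days. Real values differ slightly.
import math

SYNODIC_PERIOD_DAYS = 29.53

def illuminated_fraction(days_since_new_moon: float) -> float:
    phase = 2 * math.pi * (days_since_new_moon % SYNODIC_PERIOD_DAYS) / SYNODIC_PERIOD_DAYS
    return (1 - math.cos(phase)) / 2

for label, t in [("New moon", 0.0),
                 ("First quarter", SYNODIC_PERIOD_DAYS / 4),
                 ("Full moon", SYNODIC_PERIOD_DAYS / 2),
                 ("Last quarter", 3 * SYNODIC_PERIOD_DAYS / 4)]:
    print(f"{label:13s} (~day {t:4.1f}): {illuminated_fraction(t):.0%} lit")
```

The printed values (0% at new moon, 50% at the quarters, 100% at full moon) match the phase descriptions in the paragraph above.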
Dahlias are all about adventurous journeys, explorations and exotic places. Originally from Mexico, this fascinating plant was already appreciated as a food crop by the Aztecs in pre-Columbian times: they called it acocotli or cocoxochitl and ate its fleshy tubers. In the 16th century, the Spanish conquistadors who arrived in Central America came across wild Dahlia species with brightly coloured flowers. It was only in the late 18th century, however, that this marvellous plant was shipped across the ocean and reached Europe, where it was renamed "Dahlia" as a tribute to the Swedish botanist Anders Dahl, a pupil of Linnaeus. Thanks partly to the plant's marked tendency to hybridise, the botanists of the time shaped Dahlias into different forms and colours: not only single-flowered specimens but also double-flowered ones became widespread, then pompon ones, followed by anemone-flowered specimens and, finally, cactus-flowered ones. It was a resounding success – the hybrids were highly sought after and the tubers very expensive – which reached its peak at the 1851 Universal Exhibition in London, where visitors from all walks of life had the chance to admire several specimens.
Fungi are part of a great family of organisms that includes mushrooms, mold, and yeast. If you think fungi are plants, they are not. They are not animals either, but they are closer to animals than to plants. Their morphology is actually quite complicated. Their structure is composed of a network of fine strands which spread out through the ground, fallen tree stumps, or anything that has nutrients the fungi can feed on. Fungal reproduction is intricate because of the diversity of the family. Primarily, fungi reproduce by scattering spores. Many of you might have seen mushrooms and thought of them as simple organisms, but in reality mushrooms are the structures that help fungi scatter their spores. To survive, fungi need food. Nitrogen is one of the elements that help sustain life; therefore, a fungus needs it to thrive. There is one downside to this: nitrogen is scarce. Thus, some fungi make use of organisms that resemble worms and are no bigger than one millimeter.

Fungi Are One of the Most Bizarre Hunters in the Forests

After the fungi ensnare one of these organisms, they start eating the nematode, the worm-like organism. Biologist Paul Keddy says of nematodes that they "are actually one of the most abundant groups of invertebrates in the biosphere, and are present in a wide array of natural habitats, particularly soil, where they live in thin water films and feed mainly on fungi and bacteria." Fungi thus play an essential part in keeping the community of nematodes under control. They are also indispensable to the well-being of trees, which cannot develop properly without them. The microorganisms help trees take in minerals and water and also protect them against pests. Trees, in turn, supply fungi with energy. Between the two is a special relationship; neither can thrive without the other. As a result, a forest is no longer seen as a collection of individual organisms, as was previously believed, but as a single organism that stretches over broad areas with the help of fungi.
Some rocks are formed in layers (or strata). This type of rock is called sedimentary rock and forms from changes in other rocks. Wind, rain, river water, sea waves: all of these gradually break rocks down into mineral grains. Little by little, over thousands of years, even the most solid granite turns into small fragments. This process is called weathering. Rock fragments are carried by winds or rainwater to rivers, which in turn carry them to the bottom of lakes and oceans. There the fragments are deposited in layers. This is how, for example, sand-covered terrain such as beaches is formed. These fragments, or sediments, accumulate over time. The top layers exert pressure on the bottom layers, compressing them. This pressure ends up binding and cementing the fragments and hardens the mass that forms. This is how sedimentary rocks arise. All this, don't forget, takes thousands of years. In this way, the sand of the beach slowly turns into a sedimentary rock called sandstone. Clay sediment turns into claystone. The layers also cover the remains of plants and animals, so it is very common to find animal or plant remains or marks in sedimentary rocks: the animal or plant dies and is covered by thousands of mineral grains. The remains or marks of ancient organisms are called fossils. By analyzing fossils, scientists can study what life was like in the past on our planet.

Sedimentary Rock Formation

The origin of sandstone

Sandstone forms when rocks such as granite gradually disintegrate due to wind and rain. The quartz grains of these rocks form sand. Sand forms beaches and sand dunes, but these are not rocks: they are fragments of rocks. The sand may settle on the sea floor or in depressions and be subjected to increased pressure or temperature. Thus cemented and hardened, it forms sandstone, a type of sedimentary rock. Sandstone is used in floors.

(Figure: sand dunes in Death Valley, California.)

The accumulation of skeletons, shells and carapaces of aquatic animals rich in calcium carbonate can form another variety of sedimentary rock: limestone. Limestone also forms from deposits of calcium salts in water. Limestone is used in the manufacture of cement and lime. Lime is used for wall painting or paint manufacturing. Lime, or limestone itself, can be used to counteract the acidity of soils.

(Figure: limestone waterfalls in the Aegean region of Turkey.)
Ultraviolet–visible spectroscopy or ultraviolet-visible spectrophotometry (UV-Vis or UV/Vis) refers to absorption spectroscopy or reflectance spectroscopy in the ultraviolet-visible spectral region. This means it uses light in the visible and adjacent (near-UV and near-infrared [NIR]) ranges. The absorption or reflectance in the visible range directly affects the perceived color of the chemicals involved. In this region of the electromagnetic spectrum, molecules undergo electronic transitions. This technique is complementary to fluorescence spectroscopy, in that fluorescence deals with transitions from the excited state to the ground state, while absorption measures transitions from the ground state to the excited state.

Principle of ultraviolet-visible absorption

Molecules containing π-electrons or non-bonding electrons (n-electrons) can absorb energy in the form of ultraviolet or visible light to excite these electrons to higher anti-bonding molecular orbitals. The more easily excited the electrons (i.e. the lower the energy gap between the HOMO and the LUMO), the longer the wavelength of light they can absorb.

UV/Vis spectroscopy is routinely used in analytical chemistry for the quantitative determination of different analytes, such as transition metal ions, highly conjugated organic compounds, and biological macromolecules. Spectroscopic analysis is commonly carried out in solutions, but solids and gases may also be studied.

- Solutions of transition metal ions can be colored (i.e., absorb visible light) because d electrons within the metal atoms can be excited from one electronic state to another. The colour of metal ion solutions is strongly affected by the presence of other species, such as certain anions or ligands. For instance, the colour of a dilute solution of copper sulfate is a very light blue; adding ammonia intensifies the colour and changes the wavelength of maximum absorption (λmax).
- Organic compounds, especially those with a high degree of conjugation, also absorb light in the UV or visible regions of the electromagnetic spectrum. The solvents for these determinations are often water for water-soluble compounds, or ethanol for organic-soluble compounds. (Organic solvents may have significant UV absorption; not all solvents are suitable for use in UV spectroscopy. Ethanol absorbs very weakly at most wavelengths.) Solvent polarity and pH can affect the absorption spectrum of an organic compound. Tyrosine, for example, increases in absorption maximum and molar extinction coefficient when pH increases from 6 to 13 or when solvent polarity decreases.
- While charge transfer complexes also give rise to colours, the colours are often too intense to be used for quantitative measurement.

The Beer-Lambert law states that the absorbance of a solution is directly proportional to the concentration of the absorbing species in the solution and the path length. Thus, for a fixed path length, UV/Vis spectroscopy can be used to determine the concentration of the absorber in a solution. It is necessary to know how quickly the absorbance changes with concentration. This can be taken from references (tables of molar extinction coefficients), or, more accurately, determined from a calibration curve. A UV/Vis spectrophotometer may be used as a detector for HPLC.
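To make the calibration-curve idea mentioned above concrete, here is a minimal sketch, not part of the original article, that fits a straight line A = εLc to the absorbances of standard solutions and then estimates an unknown concentration. All numerical values are made-up illustrative data.

```python
# Minimal sketch: determining concentration from a Beer-Lambert calibration curve.
# The standard concentrations and absorbances below are illustrative, made-up data.
import numpy as np

# Known standards: concentration (mol/L) and measured absorbance (AU)
conc_standards = np.array([0.0, 1e-5, 2e-5, 4e-5, 8e-5])
abs_standards  = np.array([0.00, 0.11, 0.22, 0.45, 0.89])

# Fit A = slope * c + intercept; the slope approximates epsilon * L
slope, intercept = np.polyfit(conc_standards, abs_standards, 1)

# Estimate the concentration of an unknown sample from its measured absorbance
a_unknown = 0.33
c_unknown = (a_unknown - intercept) / slope
print(f"slope (epsilon*L) ~ {slope:.3e} AU*L/mol, unknown conc ~ {c_unknown:.2e} mol/L")
```

In practice the standards would bracket the expected unknown concentration and lie within the instrument's linear range, as discussed in the sections that follow.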
In HPLC detection, the presence of an analyte gives a response assumed to be proportional to the concentration. For accurate results, the instrument's response to the analyte in the unknown should be compared with the response to a standard; this is very similar to the use of calibration curves. The response (e.g., peak height) for a particular concentration is known as the response factor.

The wavelengths of absorption peaks can be correlated with the types of bonds in a given molecule and are valuable in determining the functional groups within a molecule. The Woodward-Fieser rules, for instance, are a set of empirical observations used to predict λmax, the wavelength of the most intense UV/Vis absorption, for conjugated organic compounds such as dienes and ketones. The spectrum alone is not, however, a specific test for any given sample. The nature of the solvent, the pH of the solution, temperature, high electrolyte concentrations, and the presence of interfering substances can influence the absorption spectrum. Experimental variations such as the slit width (effective bandwidth) of the spectrophotometer will also alter the spectrum. To apply UV/Vis spectroscopy to analysis, these variables must be controlled or accounted for in order to identify the substances present.

UV-Vis spectroscopy is also used in the semiconductor industry to measure the thickness and optical properties of thin films on a wafer. UV-Vis spectrometers are used to measure the reflectance of light, which can be analyzed via the Forouhi-Bloomer dispersion equations to determine the index of refraction (n) and the extinction coefficient (k) of a given film across the measured spectral range.

The method is most often used in a quantitative way to determine concentrations of an absorbing species in solution, using the Beer-Lambert law:

A = log10(I0 / I) = ε c L

where A is the measured absorbance, in Absorbance Units (AU), I0 is the intensity of the incident light at a given wavelength, I is the transmitted intensity, L the path length through the sample, and c the concentration of the absorbing species. For each species and wavelength, ε is a constant known as the molar absorptivity or extinction coefficient. This constant is a fundamental molecular property in a given solvent, at a particular temperature and pressure, and has units of L mol^-1 cm^-1 (or equivalently M^-1 cm^-1). The absorbance and extinction ε are sometimes defined in terms of the natural logarithm instead of the base-10 logarithm.

The Beer-Lambert law is useful for characterizing many compounds but does not hold as a universal relationship for the concentration and absorption of all substances. A second-order polynomial relationship between absorption and concentration is sometimes encountered for very large, complex molecules such as organic dyes (Xylenol Orange or Neutral Red, for example).

The Beer-Lambert law has implicit assumptions that must be met experimentally for it to apply; otherwise there is a possibility of deviations from the law. For instance, the chemical makeup and physical environment of the sample can alter its extinction coefficient. The chemical and physical conditions of a test sample therefore must match reference measurements for conclusions to be valid. Another assumption concerns the monochromaticity of the light incident on the sample cell. A given spectrometer has a spectral bandwidth, often quoted as the width of its roughly triangular transmission profile at one half of the peak intensity, that characterizes how monochromatic the light is. It is important to have a monochromatic source of radiation for analysis of the sample.
If this bandwidth is comparable to the width of the absorption features, then the measured extinction coefficient will be altered. In most reference measurements, the instrument bandwidth is kept below the width of the spectral lines. When a new material is being measured, it may be necessary to test and verify if the bandwidth is sufficiently narrow. Reducing the spectral bandwidth will reduce the energy passed to the detector and will, therefore, require a longer measurement time to achieve the same signal-to-noise ratio.

In liquids, the extinction coefficient usually changes slowly with wavelength. A peak of the absorbance curve (a wavelength where the absorbance reaches a maximum) is where the rate of change in absorbance with wavelength is smallest. Measurements are usually made at a peak to minimize errors produced by errors in wavelength in the instrument, that is, errors due to having a different extinction coefficient than assumed.

The detector used is broadband; it responds to all the light that reaches it. If a significant amount of the light passed through the sample contains wavelengths that have much lower extinction coefficients than the nominal one, the instrument will report an incorrectly low absorbance. Any instrument will reach a point where an increase in sample concentration will not result in an increase in the reported absorbance, because the detector is simply responding to the stray light. In practice the concentration of the sample or the optical path length must be adjusted to place the unknown absorbance within a range that is valid for the instrument. Sometimes an empirical calibration function is developed, using known concentrations of the sample, to allow measurements into the region where the instrument is becoming non-linear. As a rough guide, an instrument with a single monochromator would typically have a stray light level corresponding to about 3 Absorbance Units (AU), which would make measurements above about 2 AU problematic. A more complex instrument with a double monochromator would have a stray light level corresponding to about 6 AU, which would therefore allow measuring a much wider absorbance range.

Deviations from the Beer–Lambert law

At sufficiently high concentrations, the absorption bands will saturate and show absorption flattening. The absorption peak appears to flatten because close to 100% of the light is already being absorbed. The concentration at which this occurs depends on the particular compound being measured. One test that can be used to check for this effect is to vary the path length of the measurement. In the Beer-Lambert law, varying concentration and path length has an equivalent effect: diluting a solution by a factor of 10 has the same effect as shortening the path length by a factor of 10. If cells of different path lengths are available, testing whether this relationship holds true is one way to judge if absorption flattening is occurring.

Solutions that are not homogeneous can show deviations from the Beer-Lambert law because of the phenomenon of absorption flattening. This can happen, for instance, where the absorbing substance is located within suspended particles (see Beer's law revisited, Berberan-Santos, J. Chem. Educ. 67 (1990) 757, and Absorption flattening in the optical spectra of liposome-entrapped substances, Wittung, Kajanus, Kubista, Malmström, FEBS Lett. 352 (1994) 37). The deviations will be most noticeable under conditions of low concentration and high absorbance.
The last of these references describes a way to correct for this deviation. Some solutions, like copper(II) chloride in water, change colour at a certain concentration because of changed conditions around the coloured ion (the divalent copper ion). For copper(II) chloride this means a shift from blue to green, so monochromatic measurements would deviate from the Beer-Lambert law.

Measurement uncertainty sources

The above factors contribute to the measurement uncertainty of the results obtained with UV/Vis spectrophotometry. If UV/Vis spectrophotometry is used in quantitative chemical analysis, then the results are additionally affected by uncertainty sources arising from the nature of the compounds and/or solutions that are measured. These include spectral interferences caused by absorption band overlap, fading of the colour of the absorbing species (caused by decomposition or reaction) and possible composition mismatch between the sample and the calibration solution.

Ultraviolet-visible spectrophotometer

The instrument used in ultraviolet-visible spectroscopy is called a UV/Vis spectrophotometer. It measures the intensity of light passing through a sample (I), and compares it to the intensity of light before it passes through the sample (I0). The ratio I/I0 is called the transmittance, and is usually expressed as a percentage (%T). The absorbance, A, is based on the transmittance:

A = -log10(%T / 100%)

The UV-visible spectrophotometer can also be configured to measure reflectance. In this case, the spectrophotometer measures the intensity of light reflected from a sample (I), and compares it to the intensity of light reflected from a reference material (I0), such as a white tile. The ratio I/I0 is called the reflectance, and is usually expressed as a percentage (%R).

The basic parts of a spectrophotometer are a light source, a holder for the sample, a diffraction grating in a monochromator or a prism to separate the different wavelengths of light, and a detector. The radiation source is often a tungsten filament (300-2500 nm), a deuterium arc lamp, which is continuous over the ultraviolet region (190-400 nm), a xenon arc lamp, which is continuous from 160-2,000 nm, or, more recently, light-emitting diodes (LEDs) for the visible wavelengths. The detector is typically a photomultiplier tube, a photodiode, a photodiode array or a charge-coupled device (CCD). Single photodiode detectors and photomultiplier tubes are used with scanning monochromators, which filter the light so that only light of a single wavelength reaches the detector at one time. The scanning monochromator moves the diffraction grating to "step through" each wavelength so that its intensity may be measured as a function of wavelength. Fixed monochromators are used with CCDs and photodiode arrays. As both of these devices consist of many detectors grouped into one- or two-dimensional arrays, they are able to collect light of different wavelengths on different pixels or groups of pixels simultaneously.

A spectrophotometer can be either single beam or double beam. In a single-beam instrument (such as the Spectronic 20), all of the light passes through the sample cell. I0 must be measured by removing the sample. This was the earliest design and is still in common use in both teaching and industrial labs. In a double-beam instrument, the light is split into two beams before it reaches the sample. One beam is used as the reference; the other beam passes through the sample.
The reference beam intensity is taken as 100% transmission (or 0 absorbance), and the measurement displayed is the ratio of the two beam intensities. Some double-beam instruments have two detectors (photodiodes), and the sample and reference beam are measured at the same time. In other instruments, the two beams pass through a beam chopper, which blocks one beam at a time. The detector alternates between measuring the sample beam and the reference beam in synchronism with the chopper. There may also be one or more dark intervals in the chopper cycle. In this case, the measured beam intensities may be corrected by subtracting the intensity measured in the dark interval before the ratio is taken.

Samples for UV/Vis spectrophotometry are most often liquids, although the absorbance of gases and even of solids can also be measured. Samples are typically placed in a transparent cell, known as a cuvette. Cuvettes are typically rectangular in shape, commonly with an internal width of 1 cm. (This width becomes the path length, L, in the Beer-Lambert law.) Test tubes can also be used as cuvettes in some instruments. The type of sample container used must allow radiation to pass over the spectral region of interest. The most widely applicable cuvettes are made of high-quality fused silica or quartz glass because these are transparent throughout the UV, visible and near-infrared regions. Glass and plastic cuvettes are also common, although glass and most plastics absorb in the UV, which limits their usefulness to visible wavelengths.

Specialized instruments have also been made. These include attaching spectrophotometers to telescopes to measure the spectra of astronomical features. UV-visible microspectrophotometers consist of a UV-visible microscope integrated with a UV-visible spectrophotometer.

A complete spectrum of the absorption at all wavelengths of interest can often be produced directly by a more sophisticated spectrophotometer. In simpler instruments the absorption is determined one wavelength at a time and then compiled into a spectrum by the operator. By removing the concentration dependence, the extinction coefficient (ε) can be determined as a function of wavelength.

UV-visible spectroscopy of microscopic samples is done by integrating an optical microscope with UV-visible optics, white light sources, a monochromator, and a sensitive detector such as a charge-coupled device (CCD) or photomultiplier tube (PMT). As only a single optical path is available, these are single-beam instruments. Modern instruments are capable of measuring UV-visible spectra in both reflectance and transmission of micron-scale sampling areas. The advantages of using such instruments are that they can measure microscopic samples but can also measure the spectra of larger samples with high spatial resolution. As such, they are used in the forensic laboratory to analyze the dyes and pigments in individual textile fibers, microscopic paint chips and the color of glass fragments. They are also used in materials science and biological research and for determining the energy content of coal and petroleum source rock by measuring the vitrinite reflectance. Microspectrophotometers are used in the semiconductor and micro-optics industries for monitoring the thickness of thin films after they have been deposited. In the semiconductor industry, they are used because the critical dimensions of circuitry are microscopic.
A typical test of a semiconductor wafer would entail the acquisition of spectra from many points on a patterned or unpatterned wafer. The thickness of the deposited films may be calculated from the interference pattern of the spectra. In addition, ultraviolet-visible spectrophotometry can be used to determine the thickness, along with the refractive index and extinction coefficient of thin films, as described in Refractive index and extinction coefficient of thin film materials. A map of the film thickness across the entire wafer can then be generated and used for quality control purposes.

UV/Vis can be applied to determine the kinetics or rate constant of a chemical reaction. The reaction, occurring in solution, must present color or brightness shifts from reactants to products in order to use UV/Vis for this application. For example, the molecule mercury dithizonate is a yellow-orange color in dilute solution (1*10^-5 M) and turns blue when subjected to particular wavelengths of visible light (and UV) via a conformational change, but this reaction is reversible back into the yellow "ground state". The rate constant of a particular reaction can be determined by measuring the UV/Vis absorbance spectrum at specific time intervals. Using mercury dithizonate again as an example, one can shine light on the sample to turn the solution blue, then run a UV/Vis test every 10 seconds (variable) to see the levels of absorbed and reflected wavelengths change over time as the solution turns back to yellow from the excited blue state. From these measurements, the concentration of the two species can be calculated. The mercury dithizonate reaction from one conformation to another is first order and would have the integrated first-order rate law ln[A]_t = -kt + ln[A]_0. Therefore, graphing the natural log (ln) of the concentration [A] versus time gives a straight line with slope -k, the negative of the rate constant. Different rate orders have different integrated rate laws depending on the mechanism of the reaction (a minimal worked sketch follows the reference list below).

An equilibrium constant can also be calculated with UV/Vis spectroscopy. After determining optimal wavelengths for all species involved in equilibria, a reaction can be run to equilibrium, and the concentration of species determined from spectroscopy at various known wavelengths. The equilibrium constant can be calculated as K(eq) = [Products] / [Reactants].

See also

- Ultraviolet-visible spectroscopy of stereoisomers
- Infrared spectroscopy and Raman spectroscopy are other common spectroscopic techniques, usually used to obtain information about the structure of compounds or to identify compounds. Both are forms of vibrational spectroscopy.
- Fourier transform spectroscopy
- Near-infrared spectroscopy
- Vibrational spectroscopy
- Rotational spectroscopy
- Applied spectroscopy
- Slope spectroscopy
- Benesi-Hildebrand method

References

- Skoog et al. (2007). Principles of Instrumental Analysis (6th ed.). Belmont, CA: Thomson Brooks/Cole. pp. 169–173. ISBN 9780495012016.
- Principle of Ultraviolet-Visible Spectroscopy
- Mehta, A. Derivation of Beer Lambert Law.
- Misra, Prabhakar; Dubinskii, Mark, eds. (2002). Ultraviolet Spectroscopy and UV Lasers. New York: Marcel Dekker. ISBN 0-8247-0668-4.
- Mehta, A. Deviations of Beer Lambert Law. http://pharmaxchange.info/press/2012/05/ultraviolet-visible-uv-vis-spectroscopy-%e2%80%93-limitations-and-deviations-of-beer-lambert-law/
"The solute and aquaion structure in a concentrated aqueous solution of copper(II) chloride". J. Phys.: Condens. Matter 7 (8): 1513. doi:10.1088/0953-8984/7/8/002. - Sooväli, L.; Rõõm, E.-I.; Kütt, A.; Kaljurand, I.; Leito, I. (2006). "Uncertainty sources in UV-Vis spectrophotometric measurement". Accred. Qual. Assur 11: 246–255. doi:10.1007/s00769-006-0124-x. - Skoog, et al. Principles of Instrumental Analysis. 6th ed. Thomson Brooks/Cole. 2007, 349-351. - Skoog, et al. Principles of Instrumental Analysis. 6th ed. Thomson Brooks/Cole. 2007, 351. - Forensic Fiber Examination Guidelines, Scientific Working Group-Materials, 1999, http://www.swgmat.org/fiber.htm - Standard Guide for Microspectrophotometry and Color Measurement in Forensic Paint Analysis, Scientific Working Group-Materials, 1999, http://www.swgmat.org/paint.htm - "Spectroscopic thin film thickness measurement system for semiconductor industries", Horie, M.; Fujiwara, N.; Kokubo, M.; Kondo, N., Proceedings of Instrumentation and Measurement Technology Conference, Hamamatsu, Japan, 1994,(ISBN 0-7803-1880-3). - pharmax change http://pharmaxchange.info/press/2011/12/ultraviolet-visible-uv-vis-spectroscopy-principle/. Retrieved 2014-11-11. Missing or empty - Sertova (June 2000). "Photochromism of mercury(II) dithizonate in solution". Journal of Photochemistry and Photobiology A: Chemistry 134 (3): 163–168. doi:10.1016/s1010-6030(00)00267-7. Retrieved 2014-11-11. - "The Rate Law". ChemWiki. Retrieved 2014-11-11. - "Chemical equilibrium".
Storytelling is the art of passing on, in oral prose, the feelings, observations, and experiences of living beings. History is defined as "the systematic narrative of past events as relating to a particular people, country, etc." 1 Though one is considered an art and the other a science, the two obviously have a great deal in common. I propose that in creating History Stories,2 we can take the best of both and, with honesty and intelligence, do what historians have neglected to do for centuries: acknowledge multiple truths about any time, place, or event, illuminate them, and present them in a way that enables people to identify with the lives and struggles of all peoples. There might be a single set of cold facts that illustrate an event or movement, but the truth of what happened depends upon who is telling the story. As storytellers equipped with historic fact, we have the opportunity to tell about the past from a multiplicity of vantage points, thus illuminating history in a broader and more inclusive palette than any text has ever attempted. By presenting a broad cross-section of experiences, and thus "truths", from any single time and place in history, our listeners are not told what to believe, but called upon to analyze the many lessons and themes that emerge. This cognitive dissonance empowers our students to be thinkers rather than rote learners of facts. Further, by being drawn into and identifying with a new vantage point, you come closer to understanding what informs and motivates those outside of your comfort and knowledge zone. A global audience's discovery of new perspectives on history can only lead to greater cross-cultural tolerance and understanding, bringing us a hair's breadth closer to a peaceful planet.

HISTORY AS VANTAGE POINT

Events have occurred, but how we perceive and interpret them is a very personal process. When I studied American History in high school, "The Westward Movement" was always featured. Covered wagons, cowboys, and Manifest Destiny were rolled into an exciting picture of a nation's growth. Talk with a Cherokee and this same episode from our national past is depicted as genocide. It was the new Americans' complete lack of knowledge of or empathy towards the indigenous cultures, brewed with a greed for the land and fear of the unknown, that catapulted them toward the destruction of native cultures. Now, as an "old broad", I realize that each story was true for the people who told it. The European immigrant saw only new hope, new land, new beginnings. The Apache, Navajo, and Sioux saw their land, religion, and way of life brutally destroyed. Only after hearing both accounts told can we learn about the forces that shaped our past. Only then can we make intelligent, informed, heartfelt decisions about our future.

I was commissioned to create a story by the U.S. Department of the Interior. That tale, From Her Arms to His, is about the women who manufactured the M1 rifles during W.W.II at the Springfield Armory in Massachusetts. During the researching and writing process, many facts, documents, and interviews of employees were made available to me. It would have been a simple task to create a drama based on this existing information, but I wanted to offer other perspectives of this era, its women, and their work. The world knows that these women constituted 55% of the work force, sacrificed on many levels, and maintained the most remarkable production rates this nation has ever seen.
These facts would all be included, but what was it really like for a woman to enter a man's world? How did the extensive sexism and racism that were the fabric of this nation affect the armory workers? Did the women who worked at the Armory really give up the work readily to homecoming GIs? One of the Armory's historians warned me to be careful about "revisionist" interpretations. "These were the forties: women and blacks didn't expect to be treated equally." He was telling me, "Don't try and skew history to meet your own biases." I knew the facts. Women were not treated equally. African Americans were treated abominably. My historian friend was correct in saying that official structures did nothing to address these issues. But is that the end of the story? In interview after interview, I did not find a single woman who joyously or even passively accepted a lower pay scale. I never spoke to an African American employee who believed that his or her lack of promotion was "acceptable." Both groups spoke in loud voices that were never officially recorded. Creating characters that expressed their views, within the appropriate historic context, became my job. If we can't get a full, honest picture of our past, how can we make informed, intelligent decisions about our future?

We learn through windows and mirrors. A good story allows the listener to identify in some way with the main character. That is the mirror we enter through. The window is then the new way of experiencing and seeing the world. When we enter that reality outside of our own, our intellectual and emotional understanding of "the other" is broadened. This ability to identify and empathize with someone else's experience is an essential variable for living peacefully in a multicultural world.

My friend Anna is the child of Holocaust survivors. Her parents both survived Auschwitz. To say that this experience marked her parents and their family would be an understatement. There was never a day in her life that my friend was not reminded of the horror her parents suffered. While Anna could have chosen to accept the cloud of death that surrounded her life, curse all Germans and Poles as her parents did, and live with a pall of fear around her, this was not the life she wanted to live. She no longer wanted to be a prisoner to the one story she knew. So Anna decided to broaden her world with other stories and points of view. She has become the central figure of an organization that brings together the children of Holocaust victims and the children of perpetrators. They tell each other their stories. The process is painful, but ultimately, in being privy to one another's lives, hopes, desires, fears, and idiosyncrasies, all the people involved start to see the others as individuals. They discover that both generations of children are laden with unspoken guilt and fears. They emerge from the experience capable of seeing each other's lives as being as detailed, conflicted, and hopeful as their own.

When in conflict, if we can understand and empathize with the conditions that created our adversaries, we are more able to shape a compromise that will best meet the needs of all involved. While I can't claim that Israelis and Palestinians, upon hearing one another's stories, will suddenly unite into a single peaceful democracy, or that the horrible scars of ethnic cleansing will be washed away, chances are we'll all lean further in one another's direction after knowing each other's stories.
This is a detailed summary of the history of Potosí. You can learn more about Potosí, Bolivia, by looking up information on the points that are of special interest to you. In the extreme southwest of Bolivia there is a city that was once the pearl of the Spanish Crown, the center of legendary riches. Its name, in the Spanish lexicon, is synonymous with wealth too vast to describe: Potosí. In pre-Hispanic times this region was inhabited by Charcas and Chullpas natives, along with smaller groups of Quechua and Aymara. They were peaceful peoples, able artisans who worked with pottery and silver, as did other western Bolivian ethnic groups, and they were colonized by the Inca. Aware that the mountains in this area contained valuable minerals, the Quechua established a system to exploit the silver mines of Porco, for which they created a labor system called the "mita": they enslaved other peoples to work in the mines. They became very wealthy from the riches found in the mines, and over time these riches also contributed to paying the ransom to rescue Atahualpa, the last emperor of the Tawantisuyo, when he was held hostage by the Spaniards. The mines were already famous when the Spanish arrived. The latter destroyed the Incan Empire and soon after arrived in Potosí in search of gold and silver. The mines of Porco were the first to fall into their hands, as the riches of Sumaj Orco (the well-known "Cerro Rico" so often visited by tourists today) had not yet been extracted. Legend has it that the Incan emperor Huayna Kapac, a descendant of Pachacutec, intended to exploit the mountain's silver and sent in his miners to do so. However, when they began to dig they heard a supernatural voice that came from within the bowels of the mountain, ordering them to leave the mountain intact. Those who heard the voice said it told them the silver in the mountain was not destined for them, but for others. These "others", who would be hairy and very light-skinned, arrived eight decades later, in 1539. By then the region had already been given the name it has today: some say due to the news given them by the voice that sprang from within the mountain, and others say it is named after the waters that spring out all along the foothills. Regardless of which version you choose to believe, the word "Ptojsi" or "Ptoj" means to "spring forth". The Spanish, with their characteristically awful pronunciation of indigenous words, hispanicized the word and it became "Potosí". The honor of being the founder of this city would have gone to Gonzalo Pizarro, the ambitious younger brother of Francisco Pizarro (the rotund Spaniard who became a marquis and conquistador of the empire), if his intuition hadn't failed him. The younger Pizarro, bored of his job as the Corregidor of the Charcas region, set out to explore the Sumaj Orco (the Cerro Rico) in 1541. However, either because he was too impatient or simply in too much of a hurry, he decided there was no silver to be found. He didn't find a single vein of the metal. All he saw were stone altars which had been set up in the area as offerings, and being the Catholic that he was, he promptly declared them pagan and left. It was a native named Diego Huallpa who found the metal three years later.
And this is where the legend splits off into two versions. According to the first version, Huallpa was looking for some llamas that had strayed from his flock near the top of the mountain when, as he pulled out some plants by their roots, he found a vein of silver. The second version affirms that Diego was feeling very cold and lit a fire to warm himself. The fire melted a vein of silver and, upon seeing the precious mineral in its liquid form, Huallpa decided to exploit the metal in secret. He told only one other person, his friend Chalco, who, it is believed, told one of the conquistadors. The Spaniard Juan de Villarroel was one of the conquistadors busy exploiting the mines at Porco. Upon hearing of the discovery at Sumaj Orco he decided to go see for himself, despite the fact that the mountain was very high and extremely windy. Along with some of his companions, Diego de Centeno, Juan de Cotamito and others who were working with him, he arrived in Potosí in April of 1545 and claimed the mountain for himself and on behalf of King Charles the 1st of Spain and 5th of Germany. The wealth they found in the mountain appeared to be unending. They named the first mine they opened La Descubridora. From it they extracted so many bars of silver and sent them to the Spanish crown that by 1553 Charles the 5th had given the city a coat of arms with a slogan praising its wealth and named it the "Villa Imperial". The name became famous when Miguel de Cervantes, who authored "Don Quijote de la Mancha", used the phrase "vale un Potosí" (it's worth a Potosí) to describe anything that is extremely costly. More interested in mining than in establishing cities, the pioneers settled haphazardly in the region, occupying native homes and improvising huts in the driest areas of Potosí. They spent years in "urban" chaos until finally Francisco de Toledo, the Viceroy of Peru, decided to organize the colony. He officially declared the founding of the Villa Imperial de Potosí in 1572, because in their excitement at having found so much silver, the first colonists had never taken it upon themselves to carry out the official city foundation ritual. He organized the town as best he could, not following the customary Spanish design. Instead he drained the swamp that covered much of the area to make the city more inhabitable and instituted the "mita" system, which he copied from the Incas, introducing the use of mercury (a toxic element) to purify the mineral in its raw form. This cost thousands of "mitayos" (enslaved indigenous miners) their lives. In 1575 Viceroy Toledo also ordered the construction of artificial lagoons to provide water for the city. Potosí needed a lot of water, both for consumption by its growing population and for work in the mines, but water was scarce. Therefore, he decided to take advantage of the springs that ran from the Qari-Qari mountain range that surrounds the city and built enormous dikes that carried the spring water, and rainwater, toward five artificial lagoons. Over time he built a total of 32 dikes. Some of these still exist and are known collectively as the Qari-Qari Lagoons. By this time the city was already enormous in comparison to others of the same era, and was larger than most European capitals. The population was a great mix of all types of peoples: adventurers, soldiers, fugitives, noblemen, friars and priests, artists, academics, gamblers, swordsmen, artisans, miners, traders, and women from all walks of life.
Those who didn't dedicate their time to seeking their fortunes in the mines earned their livings by providing goods and services to those who did. The Church also received its portion of the bonanza: just two years after the Spanish settled the area, two churches were constructed (La Anunciación and Santa Bárbara), followed by several others until finally there was a total of 36 sumptuously adorned temples with altars of pure gold and silver, some in the simple Neoclassic style and others in the more ornate Mestizo Baroque style. It is interesting to note that the division of social classes could also be seen in the churches, which were divided into "churches for indigenous people" and "churches for Spaniards and creoles". Many of these churches are still standing and their façades remain as testimony of their splendor. Convents and seminaries were also built, as were great mansions for noblemen and their families, gaming houses, and dance halls for the entertainment of the Spaniards and creoles (Spaniards born in the Americas). No one else was allowed to enter them. The most notable building of this period is the Casa de la Moneda (the Spanish Mint), which is one of only three that were built in the Americas during this period. Viceroy Francisco de Toledo ordered it built in 1572 so that the silver found could be processed into coins or bars on site and sent in this manner to Spain, where the royal seal would be imprinted upon them, if, that is, they were not stolen by the English and Dutch pirates who had taken to raiding the Spanish galleons. It was designed by the architect Salvador de Vila, who also designed the Spanish Mints of Lima and Mexico. So much silver was being produced that the building soon became too small for its purpose, and the King of Spain then ordered another to be built, using the taxes contributed by the miners to do so. The new Casa de la Moneda was begun in 1751 and completed in 1773, under the supervision of two architects named José de Rivero and Tomás Camberos, who designed an enormous complex in the Mestizo Baroque style. It covered over 15,000 square meters and had 200 rooms. This building certainly was "worth a Potosí": it cost over 10 million dollars (in today's terms) to build, and it operated as the Spanish Mint for over two hundred years until finally, in 1953, it was converted into a museum. The city was at the height of its splendor during the 16th century and grew to become one of the most populated cities in the world, surpassing even the largest cities in the Old World. It was culturally and architecturally enviable, and its inhabitants were known to be extremely ostentatious. Among other eccentricities, for example, during the procession of Corpus Christi in 1658 the city center cobblestones were dug up and replaced with bars of silver all the way to the Recoleto Church. This display of luxury gave rise to legends about the American city whose streets were paved with silver rather than cobblestones, and that so much silver was used to pave its streets that it would have been enough to build a solid silver bridge from Potosí to Madrid. Another curious story tells of a man named Juan Fernández who declared himself the King of Potosí. This cost him his head when the Spanish crown charged him with treason. Although there was enough silver to keep everyone happy, disputes arose between Spaniards of different origins. Thus, for example, neither the Spaniards nor the creoles got along with the Basques, who, in their opinion, had acquired too much power.
This led to a battle known as the Guerra de los Vicuñas y los Vascogados, in 1617. The Vicuñas, as the creoles (American born Spaniards) were disrespectfully called, rebelled against the Vascongado (the Basques) and the restrictions the latter imposed upon them because they were not born on the Spanish peninsula. The Vicuñas won, because they were more numerous. Near the mid 18th Century Potosí underwent a second silver boom and it was during this time that the second Spanish Mint (Casa de la Moneda) was built. However, after this brief period, silver mining entered a period of decadence which was aggravated by the wars for independence during which Potosí was one of the most disputed cities. It changed hands several times as Spain had no intention of letting go of its grip on the hen that laid golden eggs. A fifteen-year war began on 10 November 1810 when the patriots rebelled and expelled governor Francisco de Paula Sanz, who was said to have been an illegitimate son of Spain’s King Charles the 3rd. Encouraged by the victories of patriots in neighboring Argentina, guerrilla warriors from the town of Tupiza also entered into combat against Sanz’ royalist army. The latter found it easy to defeat them and then marched into Charcas where he squelched an uprising there as well. The war was finally won and Bolivia declared its independence from Spain in 1825 when Venezuelans Simón Bolivar and Mariscal Sucre intervened. Sucre decreed the creation of the Department of Potosí on 23 January 1826. Years later Andrés de Santa Cruz added several provinces that belonged to Tarija to the jurisdiction of Potosí, making Potosí one of the largest of Bolivia’s nine departments (states). The war for independence left Potosí in ruins. Its population dwindled from over 100,000 to under 9000, it was despoiled of its riches which were looted or transported to Spain and other places, and its mining industry was paralyzed. It wasn’t until decades later that it recovered slightly thanks to the international need for tin, which until that time had not been a greatly appreciated metal. In 1850 the mines were reactivated and preference was given to the extraction of tin. With the high prices being paid for this metal worldwide, Potosí became Bolivia’s economic center until the end of the Second World War during which the United States purchased Bolivia’s tin at bargain prices. Of all the wars Bolivia fought against its various neighbors, the one that most affected Potosí was the War of the Pacific (against Chile) in 1879. A new department was created from part of the coast that belonged to Potosí. It was named Mejillones (or Litoral as it is more commonly known). This department was lost during the war and with it, Bolivia lost its access to the sea. This obviously made exports from Potosí’s mines less competitive as the metals now had to be exported through foreign ports and taxes had to be paid. In addition, the prices of these precious metals fell steeply and the costs of extracting, purifying and exporting them increased. Mining decreased greatly in the years after the World Wars, leaving Potosí impoverished. Due to several successive droughts, the lack of a means for living, and the fall of the tin mines, there was a period of mass migration from Potosí to Argentina and other departments, above all La Paz, where people primarily settled in what is now the city of El Alto. Others headed to Eastern Bolivia. During the Chaco War (1932-1935 against Paraguay) miners were recruited as soldiers with disastrous results. 
Bolivia lost more territory and production fell steeply. Beginning with the reforms that took place in the 1950s, and through to the more recent autonomy movements that have taken place in the country, attempts have been made to solve some of the socioeconomic problems of this region, but with poor results. Potosí, which centuries ago paved its streets with silver, is now one of the three poorest departments of Bolivia.
Culturally, however, the region is very rich. It possesses relics and intangible wealth of great historic value, and attempts are being made to preserve them. UNESCO declared Potosí Cultural Heritage of Humanity on 7 December 1987. The city has preserved a great deal of its colonial architecture, its narrow streets, many of its sumptuous temples, museums with unique and priceless objects, the Casa de la Moneda, the machinery from the Spanish Mint (which is in excellent condition) and even the comical mask at its entrance. And, of course, the Cerro Rico, after five centuries of exploitation and the drilling of thousands of tunnels into its bowels, still continues to produce silver. Tourists can visit some of the mining tunnels with a guide. One of the most outstanding festivals is the Festival of Ch'utillos, which rivals the Carnaval of Oruro and the Entrada del Gran Poder of La Paz. Its natural attractions are also important, especially the beautiful Salar de Uyuni (currently one of Bolivia's top tourist attractions), the Lagoons, the Tarapaya hot springs, Laguna Colorada with its red waters, the Llallagua and Porco mining complexes, the colonial town of Tupiza, and several summits that are popular among mountain climbers.
Students will begin the year by reviewing the Age of Absolute Monarchs and the Enlightenment, the first struggle for Empire, and by considering the importance of the American Revolution in the larger context of World History. This will set the stage for the study of the rise of the nation-state in Europe, the French Revolution, and the economic and political roots of the modern world. They will study the origins and consequences of the Industrial Revolution, 19th century political reform in Western Europe, and Imperialism in Africa, Asia, and South America. They will explain the causes and consequences of the great military and economic events of the past century, including World War I, the Great Depression, World War II, the Cold War, and the Russian and Chinese revolutions. Finally, students will study the rise of nationalism and the continuing persistence of political, ethnic, and religious conflict in many parts of the world. This course will also attempt to examine American influence on world events and the influence that world events have had on United States history. TERM ONE Opening activities and expectations, review of Renaissance, Scientific Revolution, Absolute Monarchs, and the Enlightenment. Impact of French and American Revolutions, The Industrial Revolution; Democracy and Reform in Europe. TERM TWO Rising European Nationalism and subsequent Imperialism, World War I: causes, course, and consequences; the Russian Revolution of 1917; Paris Peace Conference: The Treaty of Versailles; Global Communism. TERM THREE The Great Depression: causes and global consequences; Fascism and Totalitarianism; the Second World War: geography, leaders, factors and turning points; the Holocaust TERM FOUR The Cold War; The Collapse of the Soviet Union: A New World Order? Democracy and Human Rights; Decolonization, and Modern Middle East, War on Terrorism. - Teacher: Zachary Ritland
Drawing Conclusions Lesson Plans Teachers can use drawing conclusions lesson plans to help students learn how to connect their background knowledge to text. By Lesley Roberts In teaching the comprehension concept of drawing conclusions, most teachers know that a conclusion is the decision you make using the information you already know, and the information you gather as you read a text. For example, it is common knowledge that wolves are considered carnivores or wild animals that eat meat. So when you read "Little Red Riding Hood" and you read that the wolf is disguised and waiting for the little girl, you decide that the wolf could only be there to eat "Little Red Riding Hood". How did you come to this conclusion? You used the information about wolves that you already possessed, and the knowledge you gathered as you read the story. Students need to know that, in order to draw conclusions or make decisions, they will need to do two things. First, they will need to ask themselves "What do I know about this subject?" and second, they will need to ask, "What information am I getting from the story?" An additional skill students need to know in order to draw conclusions is how to identify the important information in the story. They will then be able to easily make connections between what they know and what they are reading. Teachers can begin instruction by using classic fairy tales, such as "Little Red Riding Hood", "Cinderella", "Goldilocks and the Three Bears", "The Three Little Pigs", and others. These stories are simple and the themes are clear to most students. As the story progresses, students can make connections and begin to draw conclusions. This makes teaching this skill easier for the teacher as, with these types of text, students usually require little prompting to make connections. Teachers can ask students to write down what they already know about the story they are going to read. Before the end of the story is read, teachers can have students discuss the decisions they have made about the text and the flow of the action. This can be done by asking, "How do you think the story will end?" or "What do you think will happen next?" After the story is finished, students can then write what they have learned. As the class reads and processes each text, and students gain more experience with drawing conclusions, they can begin practicing with other simple texts that may not be as familiar to them. The following drawing conclusions lesson plans can help students develop their reading comprehensions skills. Drawing Conclusions Lesson Plans: Students make cut-out gingerbread cookies. After reading "The Gingerbread Boy", their cookies "disappear" and students must make predictions and draw conclusions about what happened to their cookies Students learn about drawing conclusions using a nonfiction selection. Students also identify main ideas and respond to cause and effect questions. Students make several inferences based on the reading of Shel Silverstein poems. They write their own poetry and complete an assessment in which they differentiate between sentences that are stated or inferred. Students use a story by Valerie Fournoy, "The Patchwork Quilt", to learn about drawing conclusions. They then design their own classroom quilt. Students design a poster about a character in a fiction book they have been reading. They have to draw conclusions about the main character. Their poster has to include a description of their character, an illustration, and inferences about their character.
There are a number of methods for creating typographical contrast. Lynch & Horton (2001) outline the benefits and pitfalls of using the following tags:
- Italics: The tag <I> slants text to create contrast. Large blocks of italicized text are hard to read.
- Bold: The tag <B> produces weighted text that is thicker than normal text. Large blocks of bold text are also hard to read.
- Underline: The tag <U> underlines text. Underlining can confuse readers because underlined text usually indicates a link to a different page.
- Colored Text: Using colored text attributes within tags is a great way to create contrast on webpages. However, great care must be taken when choosing background and text colors. Color combinations should be pleasing, and body text should be a different color than text links to avoid confusion.
First Tunguska Meteorite Fragments Discovered
Nobody knows what exploded over Siberia in 1908, but the discovery of the first fragments could finally solve the mystery.
The Tunguska impact event is one of the great mysteries of modern history. The basic facts are well known. On 30 June 1908, a vast and powerful explosion engulfed an isolated region of Siberia near the Podkamennaya Tunguska River. The blast was 1000 times more powerful than the bomb dropped on Hiroshima, registered 5 on the Richter scale and is thought to have knocked down some 80 million trees over an area of 2000 square kilometres. The region is so isolated, however, that historians recorded only one death and just a handful of eyewitness reports from nearby.
But the most mysterious aspect of this explosion is that it left no crater, and scientists have long argued over what could have caused it. The generally accepted theory is that the explosion was the result of a meteorite or comet exploding in the Earth's atmosphere. That could have caused an explosion of this magnitude without leaving a crater. Such an event would almost certainly have showered the region in fragments of the parent body, but no convincing evidence has ever emerged.
In the 1930s, an expedition to the region led by the Russian mineralogist Leonid Kulik returned with a sample of melted glassy rock containing bubbles. Kulik considered this evidence of an impact event. But the sample was somehow lost and has never undergone modern analysis. As such, there is no current evidence of an impact in the form of meteorites.
That changes today with the extraordinary announcement by Andrei Zlobin from the Russian Academy of Sciences that he has found three rocks from the Tunguska region with the telltale characteristics of meteorites. If he is right, these rocks could finally help solve once and for all what kind of object struck Earth all those years ago.
Zlobin's story is remarkable in a number of ways. The area of greatest interest for meteor scientists is called the Suslov depression, which lies directly beneath the location of the air blast and is the place where meteorite debris was most likely to fall. Dig into the peat bogs here and you can easily find layers that show clear evidence of the explosion. Zlobin said he dug more than ten prospect holes in the hope of finding meteorite fragments, but without success. However, he had more luck exploring the bed of the local Khushmo River, where stones are likely to collect over a long period of time. He collected around 100 interesting specimens and returned to Moscow with them.
This expedition took place in 1988, and for some unexplained reason Zlobin waited 20 years to examine his haul in detail. But in 2008, he sorted the collection and found three stones with clear evidence of melting and regmaglypts, thumblike impressions found on the surface of meteorites which are caused by ablation as the hot rock falls through the atmosphere at high speed.
Zlobin and others have used tree ring evidence to estimate the temperatures that the blast created on the ground and say that these were not high enough to melt rocks on the surface. However, the fireball in the Earth's atmosphere would have been hot enough for this. So Zlobin concludes that the rocks must be fragments of whatever body collided with Earth that day. Zlobin has not yet carried out a detailed chemical analysis of the rocks that would reveal their chemical and isotopic composition.
So the world will have to wait for this to get a better idea of the nature of the body. However, the stony fragments do not rule out a comet, since the nucleus could easily contain rock fragments, says Zlobin. Indeed he has calculated that the density of the impactor must have been about 0.6 grams per cubic centimetre, which is about the same as the nucleus of Halley's comet. Zlobin says that together the evidence seems "excellent confirmation of cometary origin of the Tunguska impact."
Clearly there is more work to be done here, particularly the chemical analysis, perhaps with international cooperation and corroboration. Then there is also the puzzle of why Zlobin has waited so long to analyse his samples. It's not hard to imagine that the political changes that engulfed the Soviet Union in the years after his expedition may have played a role in this, but it still requires some explaining. Nevertheless, this has the potential to help clear up one of the outstanding mysteries of the 20th century and finally determine the origin of the largest Earth impact in recorded history.
Ref: arxiv.org/abs/1304.8070: Discovery of Probably Tunguska Meteorites at the Bottom of Khushmo River's Shoal
What is a MOSFET: Basics & Tutorial - the MOSFET, or Metal Oxide Semiconductor Field Effect Transistor, offers many advantages, particularly in terms of high input impedance and overall performance.
FETs of all types are widely used electronic components today. Of all the types of FET, the MOSFET is possibly the most widely used. Even though MOSFETs have been in use for many years, these electronic components are still a very important element in today's electronics scene. Not only are MOSFETs found in many circuits as discrete components, but they also form the basis of most of today's integrated circuits.
MOSFETs provide many advantages. In particular they offer a very high input impedance and they can be used in very low current circuits. This is particularly important for integrated circuit technology, where power limitations are a major consideration.
The term MOSFET stands for Metal Oxide Semiconductor Field Effect Transistor, and the name gives a clue to its construction. The devices had been known about for several years but only became important in the mid and late 1960s. Initially semiconductor research had focused on developing the bipolar transistor, and problems had been experienced in fabricating MOSFETs because of process problems, particularly with the insulating oxide layers. Now the technology is one of the most widely used semiconductor techniques, having become one of the principal elements in integrated circuit technology today. Their performance has enabled power consumption in ICs to be reduced. This has reduced the amount of heat being dissipated and enabled the large ICs we take for granted today to become a reality. As a result, the MOSFET is the most widely used form of transistor in existence today.
There are several MOSFET circuit symbols that are used. Some MOSFET symbols are equivalents of each other, while others indicate more detail about the MOSFET itself. As there are several varieties of MOSFET, the symbols used to indicate them need to be different.
MOSFET symbols for N-channel and P-channel types (enhancement mode)
The MOSFET symbol used above generally indicates that the device has a bulk substrate; this is indicated by the arrow on the central area of the substrate.
MOSFET symbols for N-channel & P-channel types (no bulk substrate)
It can be seen from the MOSFET circuit symbols above that there are two common circuit symbols for a MOSFET with no bulk substrate. Both are widely used.
MOSFET symbols for N-channel & P-channel types (depletion mode)
The MOSFET provides some key features for circuit designers in terms of overall performance.
|Key MOSFET Features|
|Gate construction||The gate is physically insulated from the channel by an oxide layer. Voltages applied to the gate control the conductivity of the channel as a result of the electric field induced capacitively across the insulating dielectric layer.|
|N / P channel||Both N-channel and P-channel variants are available.|
|Enhancement / depletion||Both enhancement and depletion types are available. As the name suggests, the depletion mode MOSFET acts by depleting or removing the current carriers from the channel, whereas the enhancement type increases the number of carriers according to the gate voltage.|
The two main types of MOSFET are N-channel and P-channel.
Each has different features:
|Comparison of the key features of N-channel and P-channel MOSFETs|
|Parameter||N-channel||P-channel|
|Source / drain material||N-Type||P-Type|
|Threshold voltage Vth||negative||doping dependent|
|Inversion layer carriers||Electrons||Holes|
In view of the structure of the MOSFET, where the gate is insulated from the channel by a thin oxide layer, the device can be damaged by static if it is not handled in the correct way or if the circuit does not protect it adequately.
Like other forms of FET, the current flowing in the channel of the MOSFET is controlled by the voltage present on the gate. As such, MOSFETs are widely used in applications such as switches and also amplifiers. They are also able to consume very low levels of current and as a result they are widely used in microprocessors, logic integrated circuits and the like. CMOS integrated circuits use MOSFET technology.
Also like other forms of FET, the MOSFET is available in depletion mode and enhancement mode variants. An enhancement mode device is what may be termed normally OFF, i.e. it does not conduct when the gate source voltage VGS is zero and requires a gate voltage to turn it on, whereas depletion mode devices are normally ON when VGS is zero.
There are basically three regions in which MOSFETs can operate:
- Cut-off region: In this region the MOSFET is in a non-conducting state, i.e. turned OFF - channel current IDS = 0. The gate voltage VGS is less than the threshold voltage required for conduction.
- Linear region: In this region the channel is conducting and the current is controlled by the gate voltage. For the MOSFET to be in this state, VGS must be greater than the threshold voltage, and the voltage across the channel, VDS, must be less than VGS minus the threshold voltage (VDS < VGS - Vth).
- Saturation region: In this region VDS exceeds VGS - Vth and the drain current is set by the gate voltage, changing little with further increases in VDS. The on-state voltage drop for a MOSFET is typically lower than that of a bipolar transistor, and as a result power MOSFETs are widely used for switching large currents.
|Switching for Different Types of MOSFET|
|MOSFET type||VGS +ve||VGS 0||VGS -ve|
|N-Channel Enhancement||ON||OFF||OFF|
|N-Channel Depletion||ON||ON||OFF|
|P-Channel Enhancement||OFF||OFF||ON|
|P-Channel Depletion||OFF||ON||ON|
As already implied, the key feature of the MOSFET is the fact that the gate is insulated from the channel by a thin oxide layer. This forms one of the key elements of its structure. For an N-channel device the current flow is carried by electrons, and in the diagram below it can be seen that the drain and source are formed using N+ regions, which provide good conductivity for these regions. In some structures the N+ regions are formed using ion implantation after the gate area has been formed. In this way, they are self-aligned to the gate. The gate to source and gate to drain overlaps are required to ensure there is a continuous channel. Also the device is often symmetrical and therefore source and drain can be interchanged. On some higher power designs this may not always be the case.
N channel enhancement mode MOSFET structure
It can be seen from the diagram that the substrate is the opposite type to the channel, i.e. P-type rather than N-type, etc. This is done to achieve source and drain isolation. The oxide over the channel is normally grown thermally as this ensures good interfacing with the substrate, and the most common gate material is polysilicon, although some metals and silicides can be used. The depletion mode has a slightly different structure. For this a separate N-type channel is set up within the substrate.
N channel depletion mode MOSFET structure
P-channel FETs are not as widely used. The main reason for this is that holes do not have as high a level of mobility as electrons, and therefore the performance is not as high. However, they are often required for use in complementary circuits, and it is mainly for this reason that they are manufactured or incorporated into ICs.
MOSFET circuit design
The MOSFET follows the same basic circuit design principles that are used for all forms of FET. They are essentially high impedance voltage devices, and as such they are treated in a slightly different way to bipolar transistors, which are current devices.
Note on FET circuit design: FETs can be used in a whole variety of circuits. Like the bipolar transistor, there are basic circuits. These include the common source, common drain and common gate. These form the basis of FET circuits.
There are many different circuits in which MOSFETs can be used, from low power amplifiers to high power switching applications. In all these circuit areas FETs can be used and they offer high levels of performance.
By Ian Poole
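To make the three operating regions described above a little more concrete, here is a minimal Python sketch of the classic square-law (level-1) model for an N-channel enhancement MOSFET. It is an illustration only, not something taken from this article: the threshold voltage and transconductance parameter below are assumed example values.

```python
# Minimal square-law (level-1) model of an N-channel enhancement MOSFET.
# Illustrative only: vth and k are assumed example parameters.

def drain_current(vgs, vds, vth=2.0, k=0.5e-3):
    """Return the drain current in amps for the given gate and drain voltages.

    Cut-off:    vgs <= vth        -> Id = 0
    Linear:     vds <  vgs - vth  -> Id = k * ((vgs - vth) * vds - vds**2 / 2)
    Saturation: vds >= vgs - vth  -> Id = (k / 2) * (vgs - vth)**2
    """
    vov = vgs - vth                            # overdrive voltage
    if vov <= 0:
        return 0.0                             # cut-off: no conducting channel
    if vds < vov:
        return k * (vov * vds - vds**2 / 2)    # linear (triode) region
    return 0.5 * k * vov**2                    # saturation: set by vgs, almost flat in vds

if __name__ == "__main__":
    for vds in (0.5, 2.0, 6.0):
        print(f"Vgs = 5 V, Vds = {vds} V -> Id = {drain_current(5.0, vds) * 1e3:.3f} mA")
```

Sweeping VDS at a fixed VGS with this function reproduces the familiar output characteristic: a roughly linear rise at low VDS followed by a flat saturation plateau.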
Univ. of Wisconsin J.D. Univ. of Wisconsin Law school Brian was a geometry teacher through the Teach for America program and started the geometry program at his school A reflection is an isometry, which means the original and image are congruent, that can be described as a "flip". To perform a geometry reflection, a line of reflection is needed; the resulting orientation of the two figures are opposite. Corresponding parts of the figures are the same distance from the line of reflection. Ordered pair rules reflect over the x-axis: (x, -y), y-axis: (-x, y), line y=x: (y, x). When we're talking about transformations, there are 4 different types one of which is a reflection. Now, what is a reflection? Well, reflection is an isometry which means it's rigid transformation which means its shapes are congruent after the reflection. Secondly its image has opposite orientations. Well that's a little difficult to understand so let's take a look at two different pictures. Here I have an original and I reflected this quadrilateral over this dotted little line onto the other side, so if we were to go in clockwise direction, after a I would have d after d I would have vertex c after vertex c I would have vertex b and after vertex b I would have vertex a. Now, if we go over to our image, if I start with a the next vertex is b not d, so that's what we mean when we say the image has opposite orientations because when you reflect it you're going to have a different order of vertices that's going to be important when you get to Chemistry. Next, a point and its image are equidistant from the line of reflection so again let's go back to our little example here and if I picked this vertex c and if I drew a perpendicular from vertex c to the line of reflection, that's going to be the same distance as its image which is c prime, so the distance from c prime to that line of reflection will be congruent, so that's how you know how far to reflect and in what direction over the line that is given to you. And the last thing is you can kind of consider this a flip that will be the non mathematical way of describing a reflection. There are 3 types of reflections that you need to know about in regards to the order pair rules. The first one is xy is mapped onto -x, y. Well to figure out what type of reflection this is let's write a little, draw a little sketch here, so I'm going to make an x axis and a y axis and I'm just going to pick some random point in the first quadrant here and we're going to call this point a and let's say point a has x coordinate 3 and y coordinate 5. Now according to this order pair rule I'm going to take my x and takes its opposite and I'm going to keep my y exactly the same so the opposite of 3 is going to be -3 so I'm going to have to write my new image a prime over on this side of the y axis so what is our line of reflection? Well it's pretty clear that a kept the same y coordinate but it's x coordinate was taken the opposite of so we're going to categorize this as reflection over the x axis excuse me y axis. I was thinking that the x coordinates changed but since the x coordinates are changing that means that the y axis is our line of reflection. 
Secondly, we have (x, y) is mapped onto (x, -y), which means if we go back to our original point, x is going to stay the same and our y is going to be taken the opposite of, so the opposite of 5 would be -5. So now our point a double prime is going to be at (3, -5), and it looks like our line of reflection is the x-axis, so what we're saying is this is reflection over the x-axis. The final one that you should know by heart is (x, y) is mapped onto (y, x). What this one does is take every x coordinate and make it into a y coordinate, and it does the same for your y coordinates; every y coordinate becomes an x coordinate. So since we already have a prime and a double prime, our a triple prime is going to be at (5, 3); what I'm going to do is write 3, 5 right here. Now (5, 3) is going to be lower and more to the right, so I'm going to write a triple prime at (5, 3). Now this one is a little tricky; what exactly is the line of reflection? I'll give you a hint: it's a diagonal line with a slope of 1 and a y-intercept of 0. So if we were to draw this in, that line right there is the line y=x, and if you have a point on this side of the line y=x, when you switch your x and y coordinates it's going to flip over to the other side. So we'll say this is reflection over the line y=x. That way on your quiz, when your teacher asks what the mapping (x, y) onto (-x, y) does, you can say it's going to be reflected over the y-axis. When you keep x the same and take the opposite of y, you know that's a reflection over the x-axis, and last, when you switch the x's and y's, that will reflect it over the line y=x.
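The three ordered-pair rules are easy to check numerically. The short Python sketch below is an illustrative example rather than part of the lesson; it applies each mapping to the point (3, 5) used above, and the function names are simply descriptive.

```python
# Apply the three reflection rules from the lesson to a point (x, y).

def reflect_over_y_axis(point):
    x, y = point
    return (-x, y)               # (x, y) -> (-x, y)

def reflect_over_x_axis(point):
    x, y = point
    return (x, -y)               # (x, y) -> (x, -y)

def reflect_over_line_y_equals_x(point):
    x, y = point
    return (y, x)                # (x, y) -> (y, x)

a = (3, 5)
print(reflect_over_y_axis(a))            # (-3, 5): reflected over the y-axis
print(reflect_over_x_axis(a))            # (3, -5): reflected over the x-axis
print(reflect_over_line_y_equals_x(a))   # (5, 3):  reflected over the line y = x
```

In each case the image is the same distance from the line of reflection as the original point, which is exactly the equidistance property described earlier.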
Although there had been hostilities between the two countries during 1919, the conflict began when the Polish head of state Józef Piłsudski formed an alliance with the Ukrainian nationalist leader Symon Petlyura (April 21, 1920) and their combined forces began to overrun Ukraine, occupying Kiev on May 7. In June the Soviet Red Army launched a counteroffensive, reaching the former Polish border by the end of July. In a wave of revolutionary enthusiasm, Soviet forces advanced through Poland to the outskirts of Warsaw (early August). The western European powers, fearing that the Russians might succeed in establishing a Soviet government in Poland and perhaps proceed to Germany, sent a military mission, headed by the French general Maxime Weygand, to advise the Polish army. Piłsudski devised a strategy of counterattack, and in mid-August the Poles forced the Russians to retreat. An armistice was signed in October 1920. The Treaty of Riga, concluded on March 18, 1921, provided for the bulk of Ukraine to remain a Soviet republic, although substantial portions of Belorussia (Belarus) and Ukraine were ceded to Poland.
By Anne Holden Most of us know that global warming wreaks havoc on our planet. Climate scientists make dire predictions of what is in store for us, but some aren’t ready to throw in the towel just yet. In a study published in last week’s Science, an international team of climatologists predicts that not all our planet’s environments will suffer from the effects of global warming. In fact, some might be doing just fine. Carlos Jaramillo, climatologist for the Smithsonian Tropical Research Institute (STRI), and his colleagues analyzed fossilized pollen dating to more than 50 million years ago. What makes this time frame so important? In one of the most dramatic warming events in Earth’s history, annual temperatures increased by three to five degrees Celsius in only 10,000 years, an unprecedented shift. Jaramillo and his team wanted to see if rainforests survived the heat wave during this period, known as the Paleocene-Eocene Thermal Maximum. Upon analyzing the preserved pollen, they had expected to see reduced numbers and lower diversity of plant life. After all, how could the delicate plants and flowers that make up the rainforest endure such a remarkable rise in temperatures? But the team’s observations showed that plants endured—and thrived. Their analysis actually revealed increased diversity of tropical plant life. New species of plants seemed to crop up everywhere. For example, the earliest ancestors of the passionflower and cacao plant appeared for the first time in history. If Earth really was hotter 50 million years ago, the rainforests seemed to relish in it. Does this mean we should stop worrying about global warming? Hardly. Human impacts on our environment, including deforestation and pollution, combined with global warming, will still have unfortunate effects on most of Earth’s ecosystems. But we have learned that, even when faced with extreme temperatures, some of our most fragile environments will find a way to survive. Anne Holden, a docent at the California Academy of Sciences, is a PhD trained genetic anthropologist and science writer living in San Francisco. Creative Commons image by ggallice
Did you know that:
- before someone gets type 2 diabetes they may have prediabetes?
- most people who have prediabetes don't even know it?
- there are things one can do to help prevent prediabetes from turning into diabetes?
When it comes to prediabetes, it pays to be informed.
What is prediabetes?
Prediabetes is when your blood glucose level is higher than normal, but not high enough to be considered diabetes. Having prediabetes means that you do not yet have diabetes, but you are at risk of developing type 2 diabetes. Prediabetes can also cause damage to your heart and other organs such as your eyes, kidneys and nervous system. In 2010, the Centers for Disease Control estimated that about 79 million Americans had prediabetes that year.
What causes prediabetes?
Insulin is what helps to move glucose from your blood into your body's cells. The cells use the glucose as fuel for energy. When you have prediabetes, your body may keep the insulin from working properly. This is called insulin resistance. If you carry extra weight, especially in your belly, you may be more likely to be insulin resistant. Lack of physical activity can also cause insulin resistance.
Beta cells play a part in prediabetes. These are the cells in the pancreas that produce and distribute insulin. When you are insulin resistant, the beta cells have to work a little harder to produce insulin. As time goes on, some of the beta cells may stop working altogether and therefore less insulin is made. Not enough insulin can put you on the road to type 2 diabetes.
High risk groups
You are at high risk for prediabetes if you:
- are overweight
- have a family member with diabetes
- have high blood pressure
- do not get regular physical activity
- had diabetes while pregnant
- are African American, Hispanic/Latino, Native American, Asian American or Pacific Islander
- are 45 years of age or older
Your healthcare provider can find out if you have prediabetes or diabetes with one of these blood tests:
- A1C: This test measures your average blood glucose over the past 2-3 months. You don't need to fast for this test.
- Fasting Glucose: This test measures your blood glucose after you have not had food or drink, other than water, for at least 8 hours. It is usually done first thing in the morning.
- Glucose Tolerance Test (GTT): This two-hour test checks your blood glucose after you have drunk a sugary liquid given to you by your healthcare provider. You can also take the GTT test at a lab.
Here are the results the tests may show:
|Test||Normal||Prediabetes||Diabetes|
|A1C||Less than 5.7%||5.7-6.4%||6.5% or higher|
|Fasting Glucose||Less than 100 mg/dl*||100-125 mg/dl||126 mg/dl or higher|
|GTT||Less than 140 mg/dl||140-199 mg/dl||200 mg/dl or higher|
|*mg/dl = milligrams per deciliter|
If you find out you have prediabetes, you should feel empowered, not upset or angry at yourself. There are many things you can do and steps you can take to prevent prediabetes from turning into diabetes. The Diabetes Prevention Program study showed that lifestyle changes can reduce your risk for diabetes by more than half over the course of 3 years. These lifestyle changes include:
- A little weight loss: If you lose 5% to 10% of your weight it can make a difference. For example, if you weigh 200 pounds, losing just 10 to 20 pounds will help.
- Physical activity: Moderate exercise such as brisk walking, riding a bike or swimming are ways to be active. A suggested amount is 30 minutes of exercise, 5 days a week.
- A healthy eating plan: Your food plan should include fruits, vegetables, whole grains and lean protein. Changing what you eat and your exercise habits can be tough. But you don’t have to do it alone. Here are some places where you can find support. - Many health care systems offer prediabetes or diabetes prevention classes. Check with your local hospital or look online for programs in your area. - You could meet with a dietitian in order to set up a meal plan that will help you lose weight. A dietitian can give you tips on how to make healthier food choices. - Join a fitness center and work with a personal trainer. If you don’t like gyms or fitness centers, try walking around your local mall to get your exercise. - You can get support from family, friends or coworkers who may also have prediabetes. Share ideas, tips and success stories. Knowledge is power If you do fit into one or more of the high risk groups, talk with your healthcare provider. Ask him or her to order a blood glucose test. Being aware that you have prediabetes will give you the chance to take action. The sooner you can take steps to improve your eating habits, exercise and lose a few pounds, the better. If your blood test shows you may already have type 2 diabetes, don’t let it get you down. The lifestyle changes suggested can also help to keep type 2 diabetes under control. Your heart, eyes, kidneys and nervous system will thank you. Now that you know what to do, it’s all up to you. Centers for Disease Control and Prevention. National diabetes fact sheet: national estimates and general information on diabetes and prediabetes in the United States, 2011. Atlanta, GA: U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, 2011.
Carbohydrates are sugars. Sugars are carbohydrates. Saccharide means sugar. We commonly divide carbs into three classes:
1. Monosaccharides (means one sugar)
2. Disaccharides (made of two single sugars, snapped together)
3. Polysaccharides (many sugars snapped together)
There are hundreds of types of these, but we're going to go over just a few examples of each.
Five examples of Monosaccharides
The first two examples are 5 carbon atoms long: Ribose and Deoxyribose. Ribose is found in RNA. Deoxyribose is found in DNA. A deoxyribose is missing one oxygen, which is why it's called "de-oxy."
The next three examples are 6 carbon atoms long: Glucose, Fructose and Galactose. What's interesting about these is that all three of them have 6 carbon atoms, 12 hydrogen atoms and 6 oxygen atoms. The molecular formula for all three of them is C6H12O6.
So wait a second… If all three of them are C6H12O6, why are they different? They are isomers. The term we use when molecules have the same molecular formula but their atoms are arranged differently is chemical isomers. Isomer literally means "same parts," but isomers are not identical. These three monosaccharides are mostly used as sources of energy by living cells, including ours.
What's also shown above is that these sugars are not a straight chain of carbon atoms. They are actually looped together in a ring shape. This is known as a cyclic shape, which refers to the ring shape. When you see pictures that look like this, recognize them as sugars.
Three examples of Disaccharides ("double" sugars)
1. Sucrose, aka cane sugar, is made of glucose + fructose snapping together to form a disaccharide. The name for this type of reaction is a dehydration synthesis reaction. This is how most organic molecules are snapped together. Synthesis means to join together. Note also that "syn-" and "sym-" both mean together. For example, in anatomy, you learned that the part where the two pubic bones join is called the pubic symphysis. This reaction is reversible. When water is added to break them apart, it's called hydrolysis. "Hydro-" means water and "-lysis" means to break apart. We're going to see these two reactions all over the place.
2. Lactose. "Lacto-" means milk and it's found in all milk. Milk doesn't taste sweet, but that's because not all sugars taste sweet. Lactose is actually made of glucose + galactose. People who are said to be lactose-intolerant can't digest lactose apart into glucose and galactose, and if they drink it, they get diarrhea and cramps. Lactaid milk has the lactase enzyme added to it, which has already broken down the lactose for you; that's why it tastes slightly sweeter.
3. Maltose is found in all grains and commonly called grain sugar. It's made up of two glucoses joined together. One way to remember that is to remember the food, Malt-O-Meal, which has maltose in it.
Three examples of Polysaccharides ("complex carbohydrates")
Most polysaccharides are really made up of a lot of glucoses joined together. They are known as polymers of glucose that form long chains or coils. There are many polysaccharides, but we're going to go over just three of them.
1. Amylose, commonly known as "starch," is the way many plants store sugars. Plants and other photosynthetic organisms join glucose sugars made from photosynthesis into a big chain called amylose. A single chain can contain hundreds of glucose sugars joined together. This is found in rice, potatoes, corn, bread & pasta (from wheat), beans and so forth. Notice these are foods that are plants or come from them.
There are no starches in meat, fish, or eggs, and none in your own body. So what happens if you eat starch? Starch gets digested and broken down into its individual glucose molecules, and that's what you absorb.
2. Cellulose. Cellulose is made up of a bunch of glucoses, just like starch; however, the glucose units are linked together in a different way than in starch. We as humans cannot digest or break apart these linkages, which is why cellulose is also known as "indigestible fiber," "roughage," or "insoluble fiber." Whether you're eating grains, celery, carrots, or anything made of plants, the outer plant cell walls are crushed by our teeth and the contents of the cells are digested. The cellulose, though, will remain unchanged and exit in our stool. This is especially noticeable in kids. When kids eat corn or raisins, you'll see the outer skin of a corn kernel or raisin in the stool. It seems that our digestive tract needs a certain amount of this indigestible fiber to keep it healthy. When we don't have enough of it, our digestive tract becomes thin and weak. Remember that cellulose (just like starch) exists only in plants.
3. Glycogen is sometimes called "animal starch" because what starch is to plants, glycogen is to animals. In other words, the same way plants store sugar by creating starch, animals store sugar by creating glycogen. This is primarily stored in our liver and muscles. Any athlete whose sport requires endurance, such as a long-distance runner or cyclist, will do something called carb-loading. Before they run a marathon race, they eat lots and lots of carbs. They absorb these simple sugars and store them as glycogen in their liver and muscles. Glycogen is just a long chain of glucoses. These stored sugars can then be broken down easily during the race. If you keep carb-loading, however, and don't expend the energy within the next couple of days, it will be turned into fat.
So to recap, we have:
- Ribose, Deoxyribose, Fructose, Glucose, Galactose
- Sucrose (glucose+fructose), Lactose (glucose+galactose), Maltose (glucose+glucose)
- Amylose (starch), Cellulose (indigestible fiber), Glycogen (animal starch)
Hopefully by now you have noticed the naming pattern: most sugars end with "-ose." We have reviewed our first of the four organic compounds. Let's move on to lipids!
"The big chill following the impact of the asteroid that formed the Chicxulub crater in Mexico is a turning point in Earth history," says Julia Brugger from the Potsdam Institute for Climate Impact Research (PIK), lead author of the study to be published in Geophysical Research Letters. "We can now contribute new insights for understanding the much debated ultimate cause for the demise of the dinosaurs at the end of the Cretaceous era."
To investigate the phenomenon, the scientists for the first time used a specific kind of computer simulation normally applied in different contexts, a climate model coupling atmosphere, ocean and sea ice. They built on research showing that sulfur-bearing gases that evaporated from the violent asteroid impact on our planet's surface were the main factor for blocking the sunlight and cooling down Earth.
In the tropics, annual mean temperature fell from 27 to 5 degrees Celsius
"It became cold, I mean, really cold," says Brugger. Global annual mean surface air temperature dropped by at least 26 degrees Celsius. The dinosaurs were used to living in a lush climate. After the asteroid's impact, the annual average temperature was below freezing point for about 3 years. Evidently, the ice caps expanded. Even in the tropics, annual mean temperatures went from 27 degrees to a mere 5 degrees. "The long-term cooling caused by the sulfate aerosols was much more important for the mass extinction than the dust that stays in the atmosphere for only a relatively short time. It was also more important than local events like the extreme heat close to the impact, wildfires or tsunamis," says co-author Georg Feulner, who leads the research team at PIK. It took the climate about 30 years to recover, the scientists found.
In addition to this, ocean circulation became disturbed. Surface waters cooled down, thereby becoming denser and hence heavier. While these cooler water masses sank into the depths, warmer water from deeper ocean layers rose to the surface, carrying nutrients that likely led to massive blooms of algae, the scientists argue. It is conceivable that these algal blooms produced toxic substances, further affecting life at the coasts. Yet in any case, marine ecosystems were severely shaken up, and this likely contributed to the extinction of species in the oceans, like the ammonites.
"It illustrates how important the climate is for all lifeforms on our planet"
The dinosaurs, until then the masters of Earth, made space for the rise of the mammals, and eventually humankind. The study of Earth's past also shows that efforts to study future threats by asteroids have more than just academic interest. "It is fascinating to see how evolution is partly driven by an accident like an asteroid's impact -- mass extinctions show that life on Earth is vulnerable," says Feulner. "It also illustrates how important the climate is for all lifeforms on our planet. Ironically today, the most immediate threat is not from natural cooling but from human-made global warming."
The above post is reprinted from materials provided by the Potsdam Institute for Climate Impact Research (PIK).
Today, we'll start working with the contents of unit 6. To start with, we'll work with 'will' and 'going to'. You can practise them better with these exercises and these ones. Copy in your notebook the different uses and one example of each use for you to remember. Now, you can practise vocabulary based on nature and read about the climate (copy in your notebook the main problems that the Earth has which are mentioned there). Now, go to the 'links folder' and practise the vocabulary about the environment. Choose both textbooks of 4th ESO. Copy the words in your notebook and translate them into Catalan or Spanish. If you have any time left, you can do the vocabulary game there.
Each year in the U.S., there are several poisonous snakebites. All snakes will bite when threatened or surprised, but most will usually avoid an encounter if possible and only bite as a last resort. Snakes found in and near water are frequently mistaken as being poisonous. Most snakes are harmless and many bites will not be life-threatening, but unless you are absolutely sure that you know the species, treat it seriously. Poisonous snake bites include bites by any of the following: - cottonmouth (water moccasin) - coral snake - bloody discharge from wound - blurred vision - excessive sweating - nausea and vomiting Snake Venom Properties - Venom is modified saliva. Its primary function is to capture/kill prey and then it also helps digest the prey. - Some venom is referred to as hematoxin which means that they primarily affect the blood. Hematoxin venom destroys tissue and is very painful. - Neurotoxin venom attacks the nervous system and brain. They may cause almost no pain, but shut down the respiratory system and interfere with heart functions. The Coral snakes and the Mojave rattler have this venom. - Snake venom is made up of many different enzymes. These enzymes determine the toxicity of the snake and whether it is hematoxin or neurotoxin. How to Treat a Snakebite - Avoid habitat areas. - If bitten, note time of the bite, remove jewelry or other items that might constrict swelling, remain calm. - Do not try to capture the snake. - Do not cut the wound and try to extract the venom by mouth. - Do not use ice or a tourniquet. - Do not take pain relievers or other medications without medical advice. Do not drink alcohol. - Call the Poison Center at 1-800-222-1222 for instructions on all snake bites. Western Diamondback Rattlesnake Adults are typically 2.5 to 4.5 feet. Males average 10% larger than females. Record length is 7 feet 8 inches. Virtually every dry land area within its range, though it prefers the area be neither too closed nor too open. It is uncommon in dense woodland or short-grass prairie, but abundant in rocky canyons and deserts. They are active all year round in southern Texas. In northern climates they may be active around their dens on warm winter days and active during the summer at night. As the days shorten, they tend to be more active during daylight hours. When disturbed, these snakes are quick to adopt a defensive position, body curled from which they can quickly strike. The rattle is shaken vigorously and the tongue darts in and out of the mouth. This is purely a defensive posture and if the threat does not escalate, the snake will usually move away. Adult males are typically 1.5 - 2 feet long. Females tend to be somewhat smaller. The record length is 4 feet 4 inches. Wooded habitats from hardwood bottomlands in east Texas to isolated woody patches in the Trans Pecos. It can be found in open areas that are not too remote from wooded habitats. In drier habitats they will be limited to watercourses and similar corridors. They are relatively shy and inoffensive, unless provoked or otherwise disturbed. They do occasionally climb but they spend most of their time on the ground. They can swim, but are rarely found in the water. May be active throughout the year under the right conditions. In the spring and fall they are generally active during the day and switch to night activity during the summer. Cottonmouth Water Moccasin The adult male is usually 20 to 30 inches and females are typically smaller. Record length is 5 feet 2 inches. 
They are found in a wide variety of aquatic and semi aquatic habitat being most abundant where prey is plentiful and temperatures not too extreme. They require abundant basking sites and do well in urban areas. They often travel far from water to hibernate. They are reputed to be aggressive but its disposition is often shy unless provoked. Tend to be sluggish on land and good swimmers. They spend a lot of time coiled at the edge of the bodies of water or draped loosely in overhanging vegetation. When found away from their escape route they will often coil their body and open their mouth in a wide gape displaying their white lining of the mouth, thus the common name. The average length is 15 to 25 inches and record is 3 feet 11.75 inches. This snake is best remembered by the rhyme "Red touches Yellow will kill a fellow". "Red touches Black venom lack". A wide variety of urban, suburban, and other habitats from theTexas pine forests to the oak juniper canyons on the Pecos River provide it sufficient rock crevice cover or plant cover. These snakes will attempt to escape and if discovered will engage in complex defensive displays if prevented from escaping. They may coil, hiding their heads beneath the coils and mimicking the head with their tails. They may also engage in erratic body movements and fake death. Bees and Wasps Honey bee venom contains almost 20 active substances. Melittin, the most common ingredient is one of the most potent anti-inflammatory agents known. Adolapin is another strong anti-inflammatory ingredient, which is also an analgesic. The wasp venom is different depending upon the type of wasp. Most have similar ingredients to the bee, but the makeup is different in the percentage of each ingredient. One of the main differences between the wasp stings vs. the bee sting is the way the two inject their venom. Both Bees and Wasps sting their victims using a similar process. Bee lancets have larger barbs than wasps. The bee is unable to rip the shaft back out due to the barbs' resistance against the victims flesh. The poor old bee ends up having his entire stinging device, poison sac and all, wrenched out of its abdomen. The bee will later die due to the damage caused. The wasp thrusts his shaft into the victim and the lancets move rapidly backwards and forwards in a sawing action. The lancets are barbed which allow anchorage against the victims flesh. A wasp can extract the shaft from the victims flesh and fly off. How to Treat a Bee or Wasp Sting - Remove the stinger by scraping across the skin with a credit card. - Use ice or cool water for 10 to 30 minutes after the sting. Scorpions are venomous arthropods of the class Arachnidan and are considered relatives of spiders, ticks and mites. Scorpions have four pairs of legs, two larger pincer arms in front, a long slender body, and tail with a bulb-shaped stinger that can be arched over the body. The venom of scorpions is used for both prey capture, defense and possibly to subdue mates. All scorpions do possess venom and can sting, but their natural tendencies are to hide and escape. Scorpions can control the venom flow, so some sting incidents are bee-like which may produce a local reaction. Scorpion venoms are complex mixtures of neurotoxins (toxins which affect the victim's nervous system) and other substances; each species has a unique mixture.How to Treat a Scorpion Sting - Recognize scorpion sting symptoms: immediate pain or burning, very little swelling, sensitivity to touch, and numbness/tingling sensation. 
- If you are a victim of a scorpion sting, wash the area with soap and water. - Apply a cool compress on the area of the scorpion sting. - If you experience difficulty breathing or a rash, contact 911 or go to the emergency room. Black Widow Spider Females are 1-2 inches in diameter. Males are much smaller. The female black widow is shiny black with a red hourglass on abdomen; however this does not always have to be the case. The red hourglass could take the form of a red dot or many variations of shapes. The black widow is common in fields, woodpiles, and unoccupied dwellings. Black widows are found in every state except Alaska. The initial bite is not very painful, but can go unnoticed. The surface of the skin may display two red bite wounds, one, or none. The worst pain is in the first 8-12 hours, symptoms may continue for several days. Black widow venom is primarily a neurotoxin causing widespread muscle spasm and often mimics that of a severe abdominal problem i.e. acute appendicitis. Brown Recluse Spider The top of the front segment often bears a violin-shaped marking on the exterior thus giving it the name "fiddle back spider" or "violin spider". The body size is ¼ to ¾ inch with a leg span about the size of a half dollar (fifty cent piece). The female can be twice the length of the male. Recluses come in various shades of tan and take five years to reach full size. They are nocturnal hunters occupying dry, dark areas, such as attics, closets, and woodpiles. Bites occur when the spiders are forced into contact, such as when a person is cleaning out the attic or rolls over on it in bed. Most people do not feel the bite and never see the spider. Their bite contains a powerful, refined digestive fluid. Bite marks are very tiny and are viewed through magnification. There is increasing tissue damage progressing to a small blister containing dark serum, and associated skin discoloration, usually within one to two days. Tissue damage may continue over two to several weeks and may finally result in slough and tissue loss. The healing process is very slow and local symptoms may persist for months. Most stinging caterpillars belong to the insect family known as flannel moths. Flannel moths get their name from the flannel-like appearance of the wings of the adult, which are clothed with loose scales mixed with long hairs. The immature stages of flannel moths are caterpillars which are clothed with fine hairs and venomous spines. The spines, when brushed against the skin, produce a painful rash or sting. The best known flannel moth and stinging caterpillar in Texas is the puss moth caterpillar, Megalopyge opercularis, commonly called an "Asp." This caterpillar is often abundant and may infest shade trees and shrubbery around homes, schools, and in parks. They are of little importance as enemies of shade trees, but they can cause a severe sting. When a puss moth caterpillar rubs or is pressed against skin, venomous spines stick into the skin causing a severe burning sensation and rash. Puss moth caterpillars are teardrop-shaped, and, with their long, silky hairs, resemble a tuft of cotton or fur. Their color varies from yellow or gray to reddish-brown, or a mixture of colors. They have no bones or brains, but what a sting! Jellyfish are not fish at all. They are invertebrates, relatives of corals and sea anemones (uh-NEH-muh-neez). A jelly has no head, brain, heart, eyes, nor ears. It has no bones, either. But that's no problem! 
To capture prey for food, jellies have a net of tentacles that contain poisonous, stinging cells. When the tentacles brush against prey (or, say, a person's leg), thousands of tiny stinging cells explode, launching barbed stingers and poison into the victim. Be careful around jellies washed up on the sand. Some still sting if their tentacles are wet. Tentacles torn off a jelly can sting, too. If you are stung, wash the wound with vinegar, or sprinkle meat tenderizer or put a baking soda and water paste on the sting. Don't rinse with water, which could release more poison.
The terminology may be confusing, but it's important to understand pneumonia because it kills more than 50,000 Americans every year. Here are 10 pneumonia terms you should know: Double pneumonia: Double pneumonia is just a descriptive term for any type or cause of pneumonia that affects both lungs. Walking pneumonia: This term for pneumonia simply means that a person with a mild case of pneumonia is well enough to "walk around." About 50 percent of pneumonia cases are caused by viruses, and they tend to be less serious than bacterial pneumonias. Walking pneumonia may also describe atypical pneumonia and mycoplasma pneumonia. Atypical pneumonia: These pneumonias cause less fever, less cough, and less mucus production than bacterial pneumonias. Atypical pneumonias include the pneumonia that causes Legionnaire's disease, which can be caught by inhaling infected droplets from air conditioning systems, spas, or fountains. Chlamydophila pneumonia is a mild, atypical pneumonia seen in older people. Mycoplasma pneumonia: This pneumonia may be described as both atypical and walking pneumonia. It's caused by a tiny organism that is related to bacteria. Mycoplasma infections are more common in young people and spread like a common cold in tight living conditions. Symptoms, which are similar to the flu, can usually be treated with antibiotics. This type of pneumonia usually doesn't require a hospital stay, which is why a mycoplasma infection is sometimes called walking pneumonia. Opportunistic pneumonia: This term describes all pneumonias that attack anyone with a weakened immune system. The germs that cause these pneumonias usually do not make healthy people sick. An example is Pneumocystis pneumonia, which was once considered a parasitic pneumonia but is now classified as a fungus. Opportunistic pneumonias are most common in people who have HIV/AIDS, are undergoing cancer treatment, or who had an organ transplant. Bronchial pneumonia: Any pneumonia can affect your lung in two basic ways, bronchial and lobar. Bronchial pneumonia occurs in patches throughout both lungs. The term ‘bronchial' means that the airways throughout the lungs are also involved in pneumonia. Lobar pneumonia: This term describes a pneumonia that settles in a section of your lung called a lobe. Lobar pneumonia is usually caused by pneumococcus bacteria and tends to be more serious and extensive. Aspiration pneumonia: This is a type of pneumonia caused by breathing food or liquid into your lungs. These substances cause irritation in your lungs, and an infection may follow. You could be at risk for this type of pneumonia if you vomit while you're drunk, if you have a neurological disease that interferes with your ability to swallow, or if stomach acid seeps up into your throat at night (gastroesophageal reflux disease, or GERD). Last Updated: 12/4/2013
When we consider temperatures on Mercury, we can see that we truly have the better deal on Earth. On Earth we complain if the weather hits 90 to 100 degrees Fahrenheit or dips below 30 degrees Fahrenheit. Mercury, however, experiences far more extreme changes in temperature. First, the planet is much closer to the sun, both because of its position and because its more elliptical orbit carries it even closer to the sun at certain points. Second, Mercury's atmosphere is so thin that it cannot retain solar radiation the way Earth's can. The result of these two conditions is very extreme temperature changes on Mercury. During the day Mercury receives up to ten times more solar energy than Earth, so temperatures can climb as high as 430 degrees Celsius, which is 806 degrees Fahrenheit. A human being exposed to such high temperatures would burst into flames and burn to a crisp; in fact, it is so hot that lead and zinc would melt. During the night Mercury cannot retain heat, so its temperature plummets to a freezing -183 degrees Celsius, or -297.4 degrees Fahrenheit. When it comes to seasons, Mercury's southern hemisphere is tilted away from the sun, so it gets hotter summers and cooler winters. The orbit also affects the intensity: the northern hemisphere tends to tilt towards the sun on the longer end of Mercury's orbit, so its winters and summers are not as intense. We've also recorded an entire episode of Astronomy Cast all about Mercury. Listen here, Episode 49: Mercury.
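The Celsius-to-Fahrenheit figures quoted above are easy to double-check with the standard conversion formula. The short Python sketch below simply reproduces the two conversions; the function name is illustrative, not taken from any source.

```python
# Quick check of the temperature conversions quoted above (illustrative only).
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit: F = C * 9/5 + 32."""
    return celsius * 9 / 5 + 32

print(c_to_f(430))   # daytime high on Mercury: 806.0 degrees F
print(c_to_f(-183))  # nighttime low on Mercury: -297.4 degrees F
```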
Chapter 4 Physical Activity Americans tend to be relatively inactive. In 2002, 25 percent of adult Americans did not participate in any leisure time physical activities in the past month,9 and in 2003, 38 percent of students in grades 9 to 12 viewed television 3 or more hours per day.10 Regular physical activity and physical fitness make important contributions to one's health, sense of well-being, and maintenance of a healthy body weight. Physical activity is defined as any bodily movement produced by skeletal muscles resulting in energy expenditure (http://www.cdc.gov/nccdphp/dnpa/physical/terms/index.htm). In contrast, physical fitness is a multi-component trait related to the ability to perform physical activity. Maintenance of good physical fitness enables one to meet the physical demands of work and leisure comfortably. People with higher levels of physical fitness are also at lower risk of developing chronic disease. Conversely, a sedentary lifestyle increases risk for overweight and obesity and many chronic diseases, including coronary artery disease, hypertension, type 2 diabetes, osteoporosis, and certain types of cancer. Overall, mortality rates from all causes of death are lower in physically active people than in sedentary people. Also, physical activity can aid in managing mild to moderate depression and anxiety. Regular physical activity has been shown to reduce the risk of certain chronic diseases, including high blood pressure, stroke, coronary artery disease, type 2 diabetes, colon cancer and osteoporosis. Therefore, to reduce the risk of chronic disease, it is recommended that adults engage in at least 30 minutes of moderate-intensity physical activity on most, preferably all, days of the week. For most people, greater health benefits can be obtained by engaging in physical activity of more vigorous intensity or of longer duration. In addition, physical activity appears to promote psychological well-being and reduce feelings of mild to moderate depression and anxiety. Regular physical activity is also a key factor in achieving and maintaining a healthy body weight for adults and children. To prevent the gradual accumulation of excess weight in adulthood, up to 30 additional minutes per day may be required over the 30 minutes for reduction of chronic disease risk and other health benefits. That is, approximately 60 minutes of moderate- to vigorous-intensity physical activity on most days of the week may be needed to prevent unhealthy weight gain (see table 4 for some examples of moderate- and vigorous-intensity physical activities). While moderate-intensity physical activity can achieve the desired goal, vigorous-intensity physical activity generally provides more benefits than moderate-intensity physical activity. Control of caloric intake is also advisable. However, to sustain weight loss for previously overweight/obese people, about 60 to 90 minutes of moderate-intensity physical activity per day is recommended. Most adults do not need to see their healthcare provider before starting a moderate-intensity physical activity program. However, men older than 40 years and women older than 50 years who plan a vigorous program or who have either chronic disease or risk factors for chronic disease should consult their physician to design a safe, effective program. It is also important during leisure time to limit sedentary behaviors, such as television watching and video viewing, and replace them with activities requiring more movement. 
Reducing these sedentary activities appears to be helpful in treating and preventing overweight among children and adolescents. Different intensities and types of exercise confer different benefits. Vigorous physical activity (e.g., jogging or other aerobic exercise) provides greater benefits for physical fitness than does moderate physical activity and burns more calories per unit of time. Resistance exercise (such as weight training, using weight machines, and resistance band workouts) increases muscular strength and endurance and maintains or increases muscle mass. These benefits are seen in adolescents, adults, and older adults who perform resistance exercises on 2 or more days per week. Also, weight-bearing exercise has the potential to reduce the risk of osteoporosis by increasing peak bone mass during growth, maintaining peak bone mass during adulthood, and reducing the rate of bone loss during aging. In addition, regular exercise can help prevent falls, which is of particular importance for older adults. The barrier often given for a failure to be physically active is lack of time. Setting aside 30 to 60 consecutive minutes each day for planned exercise is one way to obtain physical activity, but it is not the only way. Physical activity may include short bouts (e.g., 10-minute bouts) of moderate-intensity activity. The accumulated total is what is important, both for health and for burning calories. Physical activity can be accumulated through three to six 10-minute bouts over the course of a day. Elevating the level of daily physical activity may also provide indirect nutritional benefits. A sedentary lifestyle limits the number of calories that can be consumed without gaining weight. The higher a person's physical activity level, the higher his or her energy requirement and the easier it is to plan a daily food intake pattern that meets recommended nutrient requirements. Proper hydration is important when participating in physical activity. Two steps that help avoid dehydration during prolonged physical activity or when it is hot include: (1) consuming fluid regularly during the activity and (2) drinking several glasses of water or other fluid after the physical activity is completed (see chs. 2 and 8).

9 Behavioral Risk Factor Surveillance System, Surveillance for Certain Health Behaviors Among Selected Local Areas - United States, Behavioral Risk Factor Surveillance System, 2002, Morbidity and Mortality Weekly Report (MMWR), 53, No. SS-05. http://www.cdc.gov/brfss/.
When somebody opens their front door to pick up the morning newspaper and sees a dead bird below their hedge, they get curious for answers. As soon as they stoop down for a closer look, an Indiana Jones adventure unfolds within the confines of their backyard. Was it poison, disease, predation, starvation, old age? Is this a fluke or widespread plague? Perhaps dead birds like this one are widely scattered across a country. But, if so, what sort of scientific method could find answers to what happened to them all? The stone soup method. In my favorite version of the folk story "Stone Soup," a group of monks traveling through the war-torn countryside sit in the center of a quiet village and boil a stone in a large pot of water. Soon curiosity wins over the initial distrust and skepticism of impoverished villagers as each, in turn, are enticed to add a vegetable or spice. Through cooperation and sharing, the entire village feasts on delicious, nutritious soup. When my colleagues and I carry out research using citizen science methods, we are like the monks boiling stone soup. Instead of a pot, we have a big blank spreadsheet and curious folk are enticed to each add their observations, ultimately creating a robust database with observations from across a continent. Through citizen science I study healthy birds, but several of my colleagues focus on the sick and dying ones. This week in PLOS ONE, a research team led by Becki Lawson, a veterinarian and ecologist, reported a new strain of avian pox spreading in a common backyard bird in Great Britain. Citizen science participation was pivotal to tracking the outbreak, unraveling its mysteries, and informing localized studies. The new strain of avian pox entered Great Britain and spread in one family of birds, the Paridae. The Paridae include chickadees in North America, their European counterparts are various types of tits, most notably the Great Tit. By piecing together reports from citizen science participants, the team was able to track the spread of pox, starting in southeast England, moving to central England, and then into Wales in less than five years. Avian pox is not for the squeamish, so this study is a testament to what citizen scientists are willing to do. Birds with avian pox grow red, yellow, or gray wart-like lesions, particularly around the eyes, beak, and legs. The new strain makes really large lesions, so severe that they leave the bird unable to feed itself or look out for predators. The pox spreads from individual to individual through direct contact, indirect contact (like touching the same bird feeder), or through a vector that bites, like mosquitoes. There is no way to treat wild birds medically. When an outbreak occurs, people are advised to remove bird feeders to prevent birds from congregating. Also, the study is a reminder for people to periodically clean and sanitize wild bird feeders, just as you would with pets. There are numerous causes of bird deaths in Great Britain. I get the shivers from the names, such as the bacteria like salmonellosis, colibacillosis, Suttonella ornithocola, and Chlamydia psittaci, viruses like pox and fringilla papilloma, and parasites, like trichomonosis, cnemidocoptiasis, and syngamiasis. People have found birds with all of these infectious diseases in over 60 species since 2005 because thousands of individuals have followed hygienic protocols to pick up, package, and submit over 2,500 dead birds to designated veterinary labs for post mortem exams. 
The veterinary labs participate in the Garden Bird Health initiative (GBHi), a highly collaborative research project to investigate causes of sickness and death in British garden birds. Researchers at the Zoological Society of London collate information from two citizen science projects. First, they receive ad hoc reports, typically through the Royal Society for the Protection of Birds. Second, Garden BirdWatch, run by the British Trust for Ornithology, formed a systematic surveillance system in which participants provided information every week throughout the year (not just when sick or dead birds are found). Over the past few years Brits have been alert, tracking the spread of this pox virus. Two years ago they also followed an epidemic of parasitic finch trichomonosis that caused a significant decline in British greenfinch populations, in research also led by Becki Lawson. The parasitic epidemic spread from the UK to the rest of Europe. The current viral pox epidemic turned the tables: this epidemic is likely invading the UK from Europe. Great Tits don't migrate, so the new strain of pox had to arrive some other way. Working in coordination with the national efforts, ornithologists from the University of Oxford confirmed that the Great Tit was more susceptible than other species. Although the avian pox has severe effects on individual birds, in particular lowering the odds of survival for chicks and juvenile birds, researchers do not anticipate population declines as occurred with the greenfinch. In the US, citizen scientists are helping study disease and death in birds, too. The House Finch Disease Survey, a project by André Dhondt, my colleague (and supervisor) at the Cornell Lab of Ornithology, has tracked an epidemic of conjunctivitis, spread by bacteria. Like pox, the symptoms of conjunctivitis in house finches are typically visible, mainly red, swollen and crusty eyes, like pink eye in our children. In the Pacific Northwest, hundreds of people help monitor marine health as they take long walks on the beach. They have counted thousands of dead (beached) sea birds each year and submitted their observations to my colleague Julia Parrish through the Coastal Observation and Seabird Survey Team (COASST). These baseline numbers are important. Unless people are paying attention, we won't notice if there is a sudden uptick in deaths, or be able to properly estimate the impact of a catastrophe, such as an oil spill. There are plenty of misconceptions about citizen science, largely attributed to its dual achievements: public engagement and academic research. Is the purpose of making stone soup to teach people about cooperation or to produce a good meal? The intent doesn't matter because the stone soup method achieves both. Likewise, citizen science can woo everyday people into falling in love with science AND co-create knowledge that an individual scientist could not acquire alone.

Lawson, B., Lachish, S., Colvile, K.M., Durrant, C., Peck, K.M., Toms, M.P., Sheldon, B.C., Cunningham, A.A. Emergence of a novel avian pox disease in British tit species. PLoS ONE.
Lachish, S., Bonsall, M.B., Lawson, B., Cunningham, A.A., Sheldon, B.C. Individual and population-level impacts of an emerging poxvirus disease in a wild population of great tits. PLoS ONE.
Lachish, S., Lawson, B., Cunningham, A.A., Sheldon, B.C. Epidemiology of the emergent disease Paridae pox in an intensively studied wild bird population. PLoS ONE.
Looking at Cosmic Muons: Einstein's Special Relativity
Cornell College, PHY 312 – Prof. Derin Sherman

In this paper we will build a modified Geiger counter to detect cosmic muons and directly measure the average muon density, or flux, at the earth's surface and at varying altitudes. This will give us a relationship between the two, which in turn can be used to verify the time dilation concept of special relativity. The very high speed of the muons (essentially c, the speed of light) causes them to live longer, and this enables them to travel a longer distance and reach the surface of the earth.

Cosmic showers from high-energy events in outer space reach earth's atmosphere and create a whole family of particles which rain down upon the surface at a very high rate. Of these particles, muons are among the most energetic and very interesting to study. Since their discovery in 1936 by Carl Anderson, muons have fascinated scientists and the scientific community alike, so much so that they prompted the very famous remark from Nobel Laureate I. I. Rabi: "Who ordered that?"

There are many ways of looking at muons, and one of them is to have a modified Geiger counter consisting of two detectors and to look for a coincidence between the two. Natural terrestrial radiation will also trigger a detector but will be stopped because of its low energy. Muons, on the other hand, are highly energetic and pass through both detectors quite easily without losing much energy, giving us a very accurate count. Given below is the basic design of the experiment, in which we consider several different types of detectors: a fluorescent light bulb detector, a flat vacuum chamber, and a neon bulb detector.

Design – I (Figure 1 – Cosmicrays.org)

This design is a very low-cost cosmic ray detector using common fluorescent tubes. It is based on a variation of an experiment performed in 2000 at the CERN (European Organization for Nuclear Research) laboratories by Dr. Schmeling, which uses a simple method for detecting and visualizing cosmic rays using everyday fluorescent tubes inside a wire mesh fed with a high voltage. In our variation we have used copper tape on both sides of the tube to give it the high electric field. (Figure 2 – Cosmicrays.org)

Problems with the detector:
1) Power supply requires good filtering and regulation
2) Tubes vary in voltage requirements from one tube to another, even between the same make, model and age
3) Oscillation is a problem as the supply voltage and/or coupling plate surface area increase
4) Internal filament electrodes must be insulated; even loose coupling increases oscillation and spurious pulses
5) Oscillation occurs as the circuit forms a basic relaxation oscillator

Design – II

I will be following the procedure given in the Scientific American (Feb 2001) article by Shawn Carlson. The basic design of the detector consists of four sheets of Plexiglas, stacked one below the other and sealed to hold a low pressure of about a tenth of an atmosphere. Figure 4 (Sci-Am Feb 2001 article) shows in layers how they are constructed. The aluminum foils on the inside of both outer layers act as the ground, and the assembly of thin wires in the middle carries the high voltage. In the Sci-Am article they use four sheets of Plexiglas; I will be using five, repeating the second layer at the bottom as well, to give it plenty of room in case the wires make contact with the aluminum foil and cause a short circuit.
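Before turning to the electronics, the coincidence idea described above, counting only events that fire both detectors within a short time window, can be illustrated in software. This is only a rough sketch with made-up timestamps and an assumed 1 µs resolving time, not the actual data-acquisition logic used in the project.

```python
# Minimal sketch of two-detector coincidence counting (hypothetical timestamps).
# A muon passing through both tubes should trigger them within a very short window;
# background radiation usually fires only one tube at a time.

COINCIDENCE_WINDOW = 1e-6  # seconds; assumed resolving time of the coincidence logic

def count_coincidences(hits_a, hits_b, window=COINCIDENCE_WINDOW):
    """Count hit pairs from detectors A and B that occur within `window` seconds."""
    hits_b = sorted(hits_b)
    count, j = 0, 0
    for t in sorted(hits_a):
        # skip hits in B that happened too early to match this hit in A
        while j < len(hits_b) and hits_b[j] < t - window:
            j += 1
        if j < len(hits_b) and abs(hits_b[j] - t) <= window:
            count += 1
            j += 1  # consume the matched hit so it is not counted twice
    return count

# Example with made-up event times (seconds):
a = [0.10, 0.45, 0.4500004, 1.20]
b = [0.4500002, 0.80, 1.2000008]
print(count_coincidences(a, b))  # -> 2 candidate muon events
```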
High voltage power supply: this detector requires a regulated high voltage supply which can be adjusted over a range of about 600V to 1100V. Such supplies are easily available in the market, but we needed one that can run off a battery and is portable and cheap at the same time. Hence we decided to build it from scratch, which was quite easy as the circuit was provided by the cosmicrays.org website. This regulated high voltage power supply is designed to be powered off a 6 volt battery to give up to about 900V, and it can also be varied using the 5K pot seen in the picture.

Design – III

The neon bulb detector works on very much the same principle, the only difference being that the breakdown voltage is low (about 65.3V) compared to the other detectors. The circuit was provided by Peter Lay in his article in Electronic Design; we made a few modifications to it and ended up with the circuit shown below. All resistors are in ohms:
Vac = 120V, R1 = 25K, C1 = 1900 MF and 47 MF in parallel, R2 = 80K, R3 = 25K pot (offers the control on the voltage), R4 = 75K, R5 = 3M, R6 = 100K, B1 = neon bulb, D1 = diode, D2 = 100V Zener diode.
This circuit provides a stable and variable power supply to the bulb, and the voltage is adjusted so that its value is very close to the breakdown voltage of the bulb. The AC input is rectified by the diode, and the capacitor is present just to reduce the ripples and provide a steady source of voltage. R1 is present to make the voltage drop across the Zener diode around the sum of the resistors R2 to R4. Thus, using the pot R3, we can change the voltage on the bulb.

Physics of the detector

For all the designs, the physics is really the same. One end of the detector carries high voltage while the other end is grounded. This potential difference creates an enormous electric field near each wire and in the gap between the plates. When a cosmic ray/muon enters the gap, it strips some electrons off the atoms present there, and these electrons get accelerated towards the positive electrode. As they move towards the positive electrode, they in turn strip more electrons off other atoms and start a chain event. There are two ways of detecting the current flowing between the two electrodes. One way is to make the high voltage really close to the breakdown value (when the system conducts) so that this chain of electrons continues and we get an avalanche of electrons which we can detect easily. Another way of detecting the small current is to send it through a trans-impedance amplifier followed by an op-amp with minimal gain (preferably unity) and look at the voltage coming out of the follower amp.

(Figure 10 – Trans-impedance amplifier)

The basic principle of this trans-impedance amplifier is that, since the inverting input and the non-inverting input are both kept at zero potential, no current flows into the amp. Thus any current pulse coming from the detector (IS in this case) is forced to go through the resistor RF, and that voltage drop can be measured using a scope.

Connection to Special Relativity

The cosmic rays which strike the atmosphere create the muons, and these muons have a typical lifetime of about 2.2 microseconds. If they are created at approximately 15 km up in the atmosphere and travel at essentially the speed of light, we get that they should not travel more than D = speed × time (1), so D = (3 × 10⁸ m/s) × (2.2 × 10⁻⁶ s) = 660 m. This is clearly not the whole story, since the muon flux measured on the surface of the earth is about 1 per minute per sq. cm.
Here is where special relativity comes into the picture: for particles traveling close to the velocity of light, time slows down by a factor called the Lorentz factor. We assume that muons are produced at a typical height of about 15 km above ground level. If they travel at the speed of light, then the time taken to travel the 15 km would be T = x/c = (15 × 10³ m)/(3 × 10⁸ m/s) = 5 × 10⁻⁵ s. If the mean lifetime of the particles is τ = 2.2 × 10⁻⁶ s, then the fraction of muons generated at 15 km surviving to reach ground level should be N/N₀ = exp(−T/τ), where N is the number of muons reaching the surface and N₀ is the number of muons created in the atmosphere. If we consider 20 GeV muons, then we can get k from the equation E = mc². This equation can be written in terms of the rest mass of the particle as E = k m₀c² (6), so k = E/(m₀c²) is the Lorentz factor. In energy terms, the rest mass of the muon is 106 MeV, so k = 20 GeV / 106 MeV ≈ 189. The dilated mean lifetime now becomes 189 × 2.2 × 10⁻⁶ s, and so the fraction of muons capable of reaching ground level becomes N/N₀ = exp(−T/(k·τ)). This ratio tells us that there is close to a 90% chance that a muon created by a cosmic ray high in the atmosphere (15 km) would reach the earth's surface. For our experiment, we would be testing this at varying altitudes and looking at the same ratio.

Results and Further Work

Even though the concept behind this detector is simple, calibration is very important. So far I have been able to calibrate only the neon bulb detector to a reasonably good extent, where a distinguishable difference between a radiation source kept by us and the ambient terrestrial radiation was observed. One of the problems we encountered was that the AC line-in had noise which was transferred to the bulb, making it flicker and sending a lot of spikes to the scope. This can be stopped by having a better regulated power supply. The Plexiglas detector is the next closest to being completed, but a lot of calibration is still required in the detector circuit. My next goal is to complete this detector and set up a coincidence logic circuit. This is also given in the Sci-Am article, and I hope to get it done within the coming month.

Main detector designs - www.cosmicrays.org
High voltage power supply - www.hardhack.org.au/hv_reg_power
Scientific American - Shawn Carlson, “Counting Particle from Space”, February 2001
Electronic Design - Peter Lay, “Simple Geiger Detector used Neon Glow lamp”, March 2002
Trans-impedance Amplifier - www.ecircuitcenter.com/Circuits/opitov/opitov.htm
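The survival-fraction estimate worked through above is easy to reproduce numerically. The short Python sketch below uses the same assumed inputs as the text (15 km production height, 2.2 µs mean lifetime, 20 GeV muons, 106 MeV rest energy) and shows how dramatically time dilation changes the answer; it is an illustrative check, not part of the original analysis.

```python
import math

c      = 3.0e8    # speed of light, m/s
height = 15e3     # assumed muon production height, m
tau    = 2.2e-6   # muon mean lifetime at rest, s
E      = 20e9     # assumed muon energy, eV
m0c2   = 106e6    # muon rest energy, eV

T = height / c           # lab-frame travel time to the ground
gamma = E / m0c2         # Lorentz factor k (~189 for 20 GeV)

naive   = math.exp(-T / tau)            # survival fraction ignoring time dilation
relativ = math.exp(-T / (gamma * tau))  # survival fraction with time dilation

print(f"Lorentz factor k          = {gamma:.0f}")
print(f"survival without dilation = {naive:.2e}")   # ~1e-10: essentially no muons
print(f"survival with dilation    = {relativ:.2f}") # ~0.89, i.e. close to 90%
```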
SciPack
Digital resources are stored online in your NSTA Library. SciPacks are self-directed online learning experiences for teachers to enhance their understanding of a particular scientific concept and its related pedagogical implications for student learning. Unlimited expert content help via email and a final assessment both facilitate and document teacher learning. The Atomic Structure SciPack uses investigative evidence to explore the structure of atoms and the parts that make up atoms. Additionally, this SciPack looks at the forces that hold an atom and an atom’s nucleus together, isotopes, radioactivity, nuclear fusion and fission, and the energy contained in atoms.
Hopp into Reading!
Growing Independence and Fluency
By: CaSandra Yarbrough

Rationale: Children who are fluent readers have the ability to read text quickly and smoothly. Reading can be frustrating for children who cannot read fluently. Children will be more likely to find a love for reading once they are able to read fluently. Repeated readings of text have been shown to produce improvements in children's fluency, along with their comprehension of the text and word recognition. This lesson strives to increase children's fluent reading by providing them with passages for repeated readings. The best way to learn fluency is to read and reread decodable text. In this lesson we will read and reread "Lee and the Team" to become fluent readers.

- "Lee and the Team" book, one for every student to do repeated readings with
- A stopwatch for every group of students
- A chalkboard and chalk for each group to write practice words on the board
- A worksheet with a field on it for the children to advance their rabbit on as they improve their fluent reading
- A cut-out rabbit for each child to move across the field

Introduce the lesson to all of the students; explain how fun it will be once we all become fluent readers.

1. Say: "Children, who knows what the best letter is to start with when trying to decode a word?" Say: "We should start with the vowel sound and then add the first letter and then add the last letter. An example of this is sat. When trying to decode this word, I would start with the vowel sound a = /a/. Then I would add the s sound. Finally, I will add the t sound. Then we would say s-a-t. Sat. See, we have sounded out the word sat. Let's try this on the board with some other words: bed, map, hit, doc."

2. Introduce the term blending to the children. When we put together all of the sounds s-a-t, this is called blending. Let's blend some words together. I will say the sounds of some words, and I want us to blend them together. Here are the words: b-a-d, s-e-e, s-a-d, m-e-t. Let's blend these sounds together to come up with the words. "As you probably noticed, it is so much easier to understand words when we say them smoothly. It is hard to understand them when we say them in a choppy way." Once we all learn to blend and decode, we will be on the road to fluent reading.

3. Now, we need to practice. This will help us all be fluent readers. I want everyone to find a partner. Each group gets a copy of "Lee and the Team" and a stopwatch. We are going to read this book to our reading buddy. Each child will get a turn to read, and after we both read, we will do it again. The second time we are going to use the stopwatch so we can see how long it takes us to read this story. Remember to use all of the strategies we have talked about. Take your field worksheet and move the rabbit from one side of the field to the other as your reading time improves. We will do repeated readings several times.

I will call each child up during center time and have them read "Lee and the Team" to me. I will record their time and their improvements in their reading folder. I will be using a rubric in which I will look for decoding and blending strategies. I will also use a stopwatch to accurately time the reading.

Lee and the Team
Denamur, Whitni: Reading Genie
Infectious diseases are caused by bacteria, fungi, parasites and viruses. These diseases can be passed directly from one person to another, through animal bites, and through contaminated water, food or other substances. When the body’s immune system weakens (due to illnesses or medications), it is less able to fight these foreign invaders, allowing infectious diseases to overcome the body’s defenses. Common symptoms of infectious diseases may include fever and chills, among many others. While some infectious diseases can be treated with simple home remedies, others may require hospitalization. Vaccines can prevent a number of infectious diseases. Routine hand-washing also helps to prevent the spread of diseases. The symptoms of infectious diseases vary depending on the specific infection. However, symptoms often include fatigue, fever, muscle pain and appetite loss. You should see your doctor if you have difficulty breathing, swelling or sudden fever. Also see your doctor if you’ve been bitten by an animal. Other signs of infectious disease may include a bad headache, seizures with fever, or a cough that lasts longer than a week. There are many types of infectious diseases, all of which have different combinations of symptoms. To make a proper diagnosis, your doctor may suggest certain tests depending on the symptoms you are showing. Such tests may include lab tests, imaging scans or biopsies (surgical removal of tissue). Lab work may include:
- Blood tests, in which a blood sample is taken by inserting a needle into a vein or through a finger prick.
- Urine tests, in which you urinate into a container so that the urine sample can be examined for signs of disease.
- Lumbar puncture, also known as a spinal tap, in which cerebrospinal fluid (the fluid surrounding the brain and spinal cord) is drawn from the lower spine.
- Throat culture, in which the back of your throat is scraped with a cotton swab to locate any germs.
Imaging scans that may be used to help diagnose infectious diseases include:
- X-rays, which use electromagnetic waves of radiation to show general internal structures of the body.
- Computed tomography (CT) scans, which use x-ray methods to show a more detailed cross section of the bones, organs and other tissues within the body.
- Magnetic resonance imaging (MRI), which uses magnetic scanning to produce high-resolution pictures of your bones and soft tissues. Unlike x-rays and CT scans, MRI doesn’t use radiation.
A biopsy is done by sampling a small piece of living tissue from an internal organ to locate any damage or disease. Common infectious diseases, such as the common cold, can be treated with simple home remedies and over-the-counter medications. For these types of infections, getting enough rest and drinking ample fluids may help speed up recovery. More complex diseases may require medications. Bacterial infections: Bacteria are organized into groups of similar kinds. The antibiotics used to treat these infections are also grouped by similar types. Your doctor will decide which antibiotic is best suited to treat your bacterial infection. For example, penicillin is often used to treat urinary tract infections. Viral infections: Antibiotics are not effective in treating viral infections. However, medications called antivirals have been designed to treat some viral infections. Certain antivirals are used to treat HIV and the flu. Fungal infections: Some fungi can reproduce by spreading small spores through the air.
These spores can be inhaled or land on the skin, which is why fungal infections are often found in the lungs or on the skin. Antifungals are used to treat fungal infections. Tolnaftate is an antifungal usually used to treat skin infections like athlete’s foot. The causes of infectious diseases are grouped into four main categories: - Bacteria: Bacteria are organisms with just one cell that can quickly multiply in the body. They often release chemicals that make people ill. An example of bacterial infection is tuberculosis. - Viruses: These foreign invaders spread by using people’s own cells. Both the common cold and AIDS are viral infections. - Fungi: Fungi are spread mostly through the air. They get into the body through the lungs or the skin. Athlete’s foot is just one example of infection caused by fungi. - Parasites: These invaders use the human body both for food and as a place to live. An example of parasitic infection is malaria. Infectious diseases can be spread in a number of ways. One infected person can pass an infection on to another. Infected animals also may spread infectious diseases to humans. People may even catch an infectious disease through the food they eat. Here are the primary ways that infectious diseases are spread: - Direct contact: The quickest way is through a direct contact with another person or animal with the infection. It can be from one person to another (touch, cough, kisses, sexual contact, or blood transfusion), animal to person, or mother to unborn baby. A common example is influenza. - Indirect contact: Infection can also be spread through an indirect contact with items that have germs. Some examples are doorknob and faucet handle. - Animal and insect bites: Some germs can be passed on through animal or insect hosts. For example, mosquitoes may carry Malaria parasite or West Nile virus. - Food contamination: Germs can also infect you through contaminated food and water, which results in food poisoning. For example, Salmonella is often found in uncooked meat or unpasteurized milk. If you think you have an infection, you may call your family doctor for treatment. Depending on your infection, your doctor may refer you to a specialist. In order to get a proper diagnosis and treatment, you may want to prepare some information for your doctor. Make a detailed list of your symptoms, your medical history, your family’s medical history, any medications and supplements that you take and questions you may have for your doctor. Infectious diseases are the main cause of death of children and teens and one of the leading causes of death among adults around the world. Most deaths from infectious diseases happen in low- and middle-income countries. Many of these deaths are caused by preventable or treatable diseases like diarrhea, lower respiratory infections, HIV/AIDS, tuberculosis and malaria. Even with major improvements in medicine, infectious diseases continue to spread. Many of the interventions to prevent and treat infectious diseases are not available to the populations that need them the most. However, through collective efforts, the public health community has had some successes in reducing or eliminating some infectious diseases. Many of the most common infectious diseases, such as the cold, will go away on their own. Remember to drink lots of water and other fluids and to get plenty of rest. 
If you develop an infection, here are some tips to prevent further spread of the disease:
- Wash your hands regularly, especially before eating and after using the restroom.
- Stay home when you have diarrhea, are vomiting or are running a fever.
- Do not share personal items.
- Practice safe sex.
- Avoid air travel when you’re sick.
- Discuss vaccinations with your doctor.
Metacognition can be defined as "thinking about thinking." Good readers use metacognitive strategies to think about and have control over their reading. Before reading, they might clarify their purpose for reading and preview the text. During reading, they might monitor their understanding, adjusting their reading speed to fit the difficulty of the text and "fixing" any comprehension problems they have. After reading, they check their understanding of what they read. Students may use several comprehension monitoring strategies:
- Identify where the difficulty occurs: "I don't understand the second paragraph on page 76."
- Identify what the difficulty is: "I don't get what the author means when she says, 'Arriving in America was a milestone in my grandmother's life.'"
- Restate the difficult sentence or passage in their own words: "Oh, so the author means that coming to America was a very important event in her grandmother's life."
- Look back through the text: "The author talked about Mr. McBride in Chapter 2, but I don't remember much about him. Maybe if I reread that chapter, I can figure out why he's acting this way now."
- Look forward in the text for information that might help them to resolve the difficulty: "The text says, 'The groundwater may form a stream or pond or create a wetland. People can also bring groundwater to the surface.' Hmm, I don't understand how people can do that… Oh, the next section is called 'Wells.' I'll read this section to see if it tells how they do it."

Graphic and semantic organizers

Graphic organizers illustrate concepts and relationships between concepts in a text using diagrams. Graphic organizers are known by different names, such as maps, webs, graphs, charts, frames, or clusters. Regardless of the label, graphic organizers can help readers focus on concepts and how they are related to other concepts. Graphic organizers help students read and understand textbooks and picture books. Graphic organizers can:
- Help students focus on text structure (differences between fiction and nonfiction) as they read
- Provide students with tools they can use to examine and show relationships in a text
- Help students write well-organized summaries of a text

Here are some examples of graphic organizers:
- Venn Diagrams: Used to compare or contrast information from two sources. For example, comparing two Dr. Seuss books.
- Storyboard/Chain of Events: Used to order or sequence events within a text. For example, listing the steps for brushing your teeth.
- Story Map: Used to chart the story structure. These can be organized into fiction and nonfiction text structures. For example, defining characters, setting, events, problem, and resolution in a fiction story; in a nonfiction story, main idea and details would be identified.
- Cause/Effect: Used to illustrate the causes and effects told within a text. For example, staying in the sun too long may lead to a painful sunburn.

Questions can be effective because they:
- Give students a purpose for reading
- Focus students' attention on what they are to learn
- Help students to think actively as they read
- Encourage students to monitor their comprehension
- Help students to review content and relate what they have learned to what they already know

The Question-Answer Relationship strategy (QAR) encourages students to learn how to answer questions better.
Students are asked to indicate whether the information they used to answer questions about the text was textually explicit information (information that was directly stated in the text), textually implicit information (information that was implied in the text), or information entirely from the student's own background knowledge. There are four different types of questions:

"Right There" Questions are found right in the text and ask students to find the one right answer located in one place, as a word or a sentence, in the passage. Example: Who is Frog's friend?

"Think and Search" Questions are based on the recall of facts that can be found directly in the text. Answers are typically found in more than one place, thus requiring students to "think" and "search" through the passage to find the answer. Example: Why was Frog sad? Answer: His friend was leaving.

"Author and You" Questions require students to use what they already know along with what they have learned from reading the text. Students must understand the text and relate it to their prior knowledge before answering the question. Example: How do you think Frog felt when he found Toad? Answer: I think that Frog felt happy because he had not seen Toad in a long time. I feel happy when I get to see my friend who lives far away.

"On Your Own" Questions are answered based on a student's prior knowledge and experiences. Reading the text may not be helpful to them when answering this type of question. Example: How would you feel if your best friend moved away? Answer: I would feel very sad if my best friend moved away because I would…

Generating questions

By generating questions, students become aware of whether they can answer the questions and if they understand what they are reading. Students learn to ask themselves questions that require them to combine information from different segments of text. For example, students can be taught to ask main idea questions that relate to important information in a text.

Recognizing story structure

In story structure instruction, students learn to identify the categories of content (characters, setting, events, problem, resolution). Often, students learn to recognize story structure through the use of story maps. Instruction in story structure improves students' comprehension.

Summarizing

Summarizing requires students to determine what is important in what they are reading and to put it into their own words. Instruction in summarizing helps students:
- Identify or generate main ideas
- Connect the main or central ideas
- Eliminate unnecessary information
- Remember what they read

7. Read a Variety of Genres
Broaden children's background knowledge by encouraging them to read newspapers, magazines, the internet, and different genres of books. (Bingo is very motivating to students to get them to read different genres of books.)

8. Anticipate and predict
Really smart readers try to anticipate the author and predict future ideas and questions. If you're right, this reinforces your understanding. If you're wrong, you make adjustments quicker.

9. Pay attention to supporting cues
Study pictures, graphs and headings. Read the first and last paragraph in a chapter, or the first sentence in each section.

10. Highlight, summarize and review
Just reading a book once is not enough. To develop a deeper understanding, you have to highlight, summarize, and review important ideas.

11. Build a good vocabulary
For most educated people, this is a lifetime project. The best way to improve your vocabulary is to use a dictionary regularly. You might carry around a pocket dictionary and use it to look up new words. Or, you can keep a list of words to look up at the end of the day.
Concentrate on roots, prefixes and endings. When it comes to reading, don't allow your children to skip over unknown words. Have them predict what the words might mean while reading, and then look them up to find the meaning.

12. Visualize the text
Create pictures in your head of the vocabulary and descriptions from the story.
Physicists have experimentally determined the melting point of iron in Earth's core. New data from the highest-pressure experiments to date show that iron begins to melt at a lower pressure than previously thought, and that a second crystalline form of solid iron, theorized for the past 20 years to occur in the core, does not exist.

Physicists are using a giant gas gun at Lawrence Livermore National Laboratory in Livermore, Calif., to smash an iron bullet into an iron target at high velocity, creating a shockwave and producing extreme pressures for studying conditions at Earth's core. Photo courtesy of Lawrence Livermore National Laboratory.

What we have determined is that the solid-solid phase transition doesn't exist, says Jeffrey Nguyen, a physicist from Lawrence Livermore National Laboratory (LLNL) in Livermore, Calif., and that the pressure at which iron melts is lower than previously reported. The findings are a step forward in the ongoing effort to accurately describe how iron behaves at high pressures and temperatures. Working out the extremes of iron's melting curve is key to understanding the thermodynamics of the planet, especially in the planet's core, where liquid iron swirling in the outer core generates Earth's geomagnetic field.

As reported in the Jan. 22 Nature, Nguyen and LLNL colleague Neil Holmes found that iron under core conditions starts to melt at 225 gigapascals (32 million pounds per square inch) and a temperature of 5,100 degrees Kelvin (about 4,800 degrees Celsius), and finishes melting by 260 gigapascals at a temperature of 6,100 degrees Kelvin. Those values fall between the pressures that are known from seismological surveys to exist at the liquid outer core's boundary. The experimental values were also very close to theoretical values calculated by Dario Alfè and colleagues at University College London. We believe that this is good news, Alfè says.

For years, scientists have been attempting, both theoretically and experimentally, to plot and connect new points on the high-pressure, high-temperature end of iron's phase diagram in order to reveal iron's melt line. They have not been able to reach a consensus, however, and estimates of iron's melting point at pressures found in the core have differed by 2,000 degrees Celsius or more. Trying to work out what happens at pressures of millions of atmospheres, the pressures inside Earth's core, is by no means an easy task, and experiments are extremely challenging, Alfè says.

The only way to create and study high-pressure core conditions, Alfè says, is through experiments in which an iron bullet is smashed into an iron target at high velocity to create a shockwave. As with the core, the temperature of iron in the shockwave experiments cannot be measured directly. Instead, researchers record the changes in pressure, volume and sound velocity that occur on impact. What we do is look at how sound velocity changes with pressure, Nguyen says. To create a shockwave in the target, the researchers used a 20-meter-long two-stage gas gun that can accelerate a bullet up to 8 kilometers per second and create impact pressures exceeding 400 gigapascals (58 million pounds per square inch). This is the highest pressure on the iron melt curve anyone has ever gotten to, Nguyen says. The pressure at the center of Earth is 360 gigapascals. Although the conditions exist for only a millionth of a second, that is long enough for researchers to measure the shockwaves traveling through the target.
The experiment is repeated at different pressures until a drop in sound velocity indicates a phase change. Researchers then face the additional challenge of calculating the melting temperature using a thermodynamic equation that involves two variables whose values are not known exactly. If you get these two parameters wrong, you also get the melting temperature wrong, Alfè says. For this reason, experiments based on this technique have been criticized for a long time. The Livermore researchers, however, used the same values for these parameters as earlier experiments, including the study that found two solid phases of iron 20 years ago. Alfè notes the variables are also close to theoretical values, as are the resulting pressures and calculated melting temperature. Our calculated values are very close to those used by Nguyen and Holmes, therefore supporting their results, Alfè says. But still more work is needed to confirm and extend these recent findings.
Some 2.7 billion years ago in what is now Omdraaisvlei farm near Prieska, South Africa, a brief storm dropped mild rain on a new layer of ash laid down by a recent volcanic eruption (not unlike ash from the 2010 Eyjafjallajökull eruption in Iceland) forming tiny craters. Additional ash subsequently buried the craters and, over eons, hardened to become rock known as tuff. Closer to the present, other rainstorms eroded the overlying tuff, exposing a fossil record of raindrops from the Archean eon, and may now have revealed the density of early Earth's atmosphere. By scanning with lasers the craters created by ancient raindrops—and comparing the indentations with those made by water drops sprinkled onto a layer of similar ash today—physicist Sanjoy Som of the University of Washington in Seattle and his colleagues have derived a measurement of the pressure exerted by the primitive atmosphere. The scientists report in Nature on March 29 that the ancient air could not have been much denser than the present atmosphere—and, in fact, may have been much less so. (Scientific American is part of Nature Publishing Group.) "Air pressure 2.7 billion years ago was at most twice present levels, and more likely no higher than at present," Som explains. The key to that determination is raindrop size. Back in 1851 pioneering geologist Charles Lyell suggested that measuring the fossilized indentations of raindrops might reveal details about the ancient atmosphere. These mini-craters are formed based on the size and speed of ancient raindrops. Because the atmosphere drags on each drop, constraining the speed of its descent based on its size, if one could determine an ancient raindrop's size, one could determine how thick the atmosphere likely was. The largest raindrop ever measured in modern times was 6.8 millimeters around, Som notes, which is also the theoretical limit; larger raindrops break apart. Because the laws of physics were likely the same in the distant past, this suggests that raindrops were no bigger in the Archean and puts an upper limit how big the ancient drops could have been. Plus, such raindrops are exceedingly rare in modern storms—and tend to fall in powerful downpours, which in the Archean would have been more likely to have washed ash away rather than form craters that could be fossilized. To determine the size of the ancient droplets, Som and his colleagues compared the fossilized imprints with the craters that formed when they released various-size droplets from 27 meters above similar ash taken from the 2010 Eyjafjallajökull eruption in Iceland as well as from Hawaii. They then turned these modern craters to "rock" "using hair spray and low-viscosity liquid urethane plastic." Based on the comparisons, they concluded that the size of ancient droplets fell in the range of 3.8 to 5.3 millimeters. Plugging those numbers into the mathematical relationship between raindrop size, speed and atmospheric density suggests that the early Earth's atmosphere exerted at most twice as much pressure as the present day atmosphere—assuming raindrops of the maximum size and speed created the craters—and more likely was roughly the same or as little as half present pressure. A better understanding of the properties of Archean Earth's atmosphere may help explain what's known as the "faint young sun" paradox. Billions of years ago, the sun emitted less radiation, roughly 85 percent of its present output, and therefore heated the planet less. 
Yet, the fossil records suggest abundant liquid water and other signs of a warm, "clement" climate, as Som and colleagues noted in the analysis. The simplest explanation for this is that Earth simply boasted an atmosphere thick with greenhouse gases. "The sky was probably hazy," from the gases, Som says, in addition to being ruled by a fainter sun that passed across the sky more quickly because Earth rotated faster then. Plus, the atmosphere lacked a significant quantity of oxygen (because there were no plants), potentially lightening the atmospheric pressure. "Earth back then looked nothing like it does today." Consistent with the scenario suggested by this new calculation, research published in Nature Geoscience on March 18 suggests that the early atmosphere cycled through periods of a "hydrocarbon haze" that included greenhouse gases such as methane, better known today as natural gas. Such a hydrocarbon haze—potentially being re-created today—helped trap the heat of the faint, young sun, warming the Earth. That explains the clement Earth, according to Som—high levels of stronger greenhouse gases, such as methane. "Our work suggests that it was indeed greenhouse gases that kept the planet warm," Som says, a process ongoing in modern times. Of course, this judgment relies on an assumption—that average temperatures 2.7 billion years ago were roughly 20 degrees Celsius, based on the lack of evidence for ice in the geologic record of the time. "This may be a preservation bias," Som admits. What was clearly preserved, however, are the fossil imprints of ancient rain. And that record reinforces the fact that early Earth was essentially an alien world compared with today's planet—one devoid of plant life; with a moon that orbited more closely, driving stronger tides; and a very different atmosphere. "Yet it was very much alive," Som notes, boasting a rich array of microbial life, including photosynthetic bacteria, the ancestors of modern plant life just a scant few hundred million years from loading the atmosphere with oxygen. A better understanding of this planet's proto-atmosphere may help scientists identify life on other planets—as well as better understand just how influential greenhouse gases can be.
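The size-speed-density relationship the raindrop study relies on can be illustrated with a back-of-the-envelope terminal-velocity estimate: at terminal speed, air drag balances the drop's weight. The Python sketch below assumes a rigid spherical drop and a fixed drag coefficient of 0.6, which is a simplification of the actual analysis in the paper, and the air densities used are illustrative values, not results from the study.

```python
import math

def terminal_velocity(diameter_m, air_density, drag_coeff=0.6,
                      water_density=1000.0, g=9.81):
    """Rough terminal velocity of a falling drop: weight = drag at terminal speed.
    Assumes a rigid sphere and a constant drag coefficient (real raindrops deform)."""
    r = diameter_m / 2
    mass = water_density * (4 / 3) * math.pi * r**3   # kg
    area = math.pi * r**2                             # frontal area, m^2
    # m*g = 0.5 * Cd * rho_air * A * v^2  ->  solve for v
    return math.sqrt(2 * mass * g / (drag_coeff * air_density * area))

d = 5.3e-3  # upper end of the inferred Archean drop size, m
for rho in (0.6, 1.2, 2.4):  # roughly half, equal to, and twice modern sea-level air density, kg/m^3
    print(f"air density {rho:4.1f} kg/m^3 -> impact speed ~ {terminal_velocity(d, rho):4.1f} m/s")
```

Denser air slows a drop of a given size, so a slower impact leaves a smaller crater; that inverse relationship is what lets crater size constrain ancient air density.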
Imperial German Marines - Landungskorps & Seebataillon

Roy Jones notes that "There’s confusion on the part of many, however, on who the Marines were. Partly it’s due to language. Since “Marine” in German means “navy” or “naval”, some people mistakenly think that there WERE no German Marines. They think that the word “Marine” must be referring to sailors and naval landing parties. Not true! There were sailors who would form a naval landing party or landing corps (Landungskorps) during a colonial war or expedition." Up into at least the Great War, major warships carried one or two small cannon or howitzers for use by the landing parties.

The United States was the first Power to make a treaty with Samoa. The treaties of Germany and Great Britain with Samoa were concluded in the following year; but the Germans outstripped the other Powers in trade and in planting. The increase of their commercial interests led to friction with the natives. Hostilities ensued, and a party of German marines, who had been sent ashore to protect German property, were ambushed by Mataafa's forces and many of them killed and wounded. A state of war with Samoa was then announced by Prince Bismarck, and the German minister at Washington complained that the force by which the German marines were attacked was commanded by an American named Klein. This allegation has often been repeated by writers, who have inferred from it that the attack was due to American inspiration. It was shown, however, by subsequent investigation that Klein, who was in no way connected with the public service, was a correspondent of the American press who had visited Samoa merely in the pursuit of his profession. He swore that he advised the natives not to fire, and hailed the German boats to warn them of their danger; that the German marines fired first; and that he did not advise the Samoans to return the fire. On December 2, 1899, the Samoa Island group was divided, the United States receiving the island of Tutuila and its dependencies, while Germany took the rest.

Between 30 May 1900 and 29 June 1901 German marines and civilians took part in the defence of the Foreign Legations in Peking (Beijing) during the two-and-a-half-month siege by the ‘Boxers’. The alliance fielded a force numbering 54,000, of whom 300 were German soldiers and 600 German marines, with five German warships giving support.

Roy Jones notes that, in addition to the Landungskorps temporarily serving as infantry, the Imperial German Navy had a force specifically dedicated to fighting on land: "These men were specially trained as infantry." At the beginning of the Franco-Prussian War the North German Navy had one battalion of Marines stationed at Kiel. The Seebataillon (“Sea Battalion”) had been established on 13 May 1852, and by 1870 its strength had increased to 22 officers and 680 men in five marine infantry companies. When the German Empire was established in 1871, the Navy of the North German Federation became the Imperial German Navy. The strength of the Seebataillon was increased to six companies. By 1873 the German Marine Corps numbered 2,000 men, commanded by a Colonel Commandant; their brigade consisted of two regiments, armed with the needle gun. The uniform is of dark blue, greatly resembling that of the US Marines. On 1 October 1886 the Seebataillon was split into two half-battalions, stationed at Kiel and Wilhelmshaven. Battalion Headquarters remained with the I. Halbbataillon at Kiel.
Halbbataillon at Wilhelmshaven was commanded by the most senior company commander there. By Imperial Order of 12 March 1889 both half-battalions were increased to marine battalions with four companies each. Roy Jones notes that by 1895 they were no longer even based on warships. The Schutztruppen, the Expedition Korps, and the III. Seebataillon of the Imperial German Marines were the forces assigned to protect Germany's far-flung colonial possessions in Africa and China. According to Roy Jones, "The troops were called Seesoldaten ("Sea soldiers") and fought organized in Marine-Infanterie-Kompagnien ["Naval Infantry Companies"]." The III. Seebataillon was raised on 03 December 1897 from detachments of the I. Seebataillon (1. and 2. Kompanie) and the II. Seebataillon (3. and 4. Kompanie), and it was deployed to the German Protectorate of Kiautschou in China. The Imperial German Marines prior to World War I were organised into two Sea Battalions based in Germany [I. Seebataillon – Kiel and II. Seebataillon – Wilhelmshaven], one Sea Battalion based at the great German naval base at Tsingtau [Kiautschou] (a German Trust Territory in China, which had a Depot Battalion in Germany), plus smaller detachments in Skutari and Peking. The 1st and 2nd Matrosen (Sailors) Regiments, along with naval artillery batteries and naval air squadrons, were stationed in Flanders along the Belgian coast from 1914-18. The regiments were quickly expanded to divisional strength and, along with the Naval Infantry Division (formed from the Seebataillone), became the Marinekorps Flandern.
The crack willow is one of the most common trees growing along the bank. It often used to be pollarded, i.e. the branches were cut back regularly to produce a supply of poles for rural industries. The willow's roots help to stabilize the banks. Many emergent plants may be found, i.e. those rooted underwater but with much of their growth above the water surface. The arrowhead is a small emergent with some leaves underwater, and the reedmace, a large example, can form a dense reedbed. Other bankside plants grow well in marshy ground, such as the yellow marsh marigold and the pink great willowherb. Plants such as curled pondweed are rooted in the bottom and either remain submerged in deep water or send their leaves up to the surface. In what ways are these plants, and other river-dwelling species, useful to the animals of a river?
Please answer 7 questions from each module: any 6 questions in black and one question in blue. All questions are based on the module resources. An excellent answer should be 2-3 paragraphs long and as detailed as possible. 1. Explain what is meant by 'the hockey stick of prosperity'. List as many early drivers of prosperity as you can and describe one in detail. 2. Think of one important invention from each of the following three periods of time: before the 18th century, the 18th-19th centuries, and the 20th century before 1990. What groups of people did this invention empower and how? 3. How did farming lead to the invention of the internet? How did the share of farmers in the US labor force change throughout history? What forces were responsible for the drop in the agricultural population? 4. What is meant by the "market revolution"? In what ways did it transform the work lives of people in the US and all over the world? 5. Unpaid work around the house was a full-time job that required non-trivial amounts of time and effort on the part of our ancestors for thousands of years. Thanks to technology, we now spend less time on cooking and cleaning and more time working for pay or enjoying leisure. Share your thoughts on which household innovations may have had the most significant impact on our quality of life. Speculate about how the availability of time-saving innovations has influenced men's and women's decisions to work, study, marry and have more/fewer children. 6. Globalization was made possible by technology. What is globalization, what kind of technologies made it possible in the past, and what is making it possible now? 7. Describe what is meant by The 4th Industrial Revolution. Identify some of its impacts. 8. The technological you. Describe your own relationship with technology: What is your college major? What kind of jobs will it lead to? What technology skills are needed in order to succeed in those jobs? What is your superpower (your strongest skills; it does not need to be in technology)? What technology do you use regularly and why: what does it help you achieve? 1. How did the internet transform the newspaper industry? How did newspapers survive? What interesting changes do you observe in the media industry these days? 2. Think about shopping today and in the future. 20 years ago, during the first internet boom of the late 1990s, many people thought that shopping malls, bookstores and grocery stores were about to disappear because online shopping and home food deliveries threatened to replace them. Yet the brick-and-mortar shops are still alive, even though we have eBay, Amazon and other online options. How did the digital age change our shopping experience, and why did the old-fashioned shops survive? The new 3D printing technology may soon allow us to print things at home. Will that put traditional shops out of business? 3. Education is changing: online classes and massive open online courses (MOOCs) are on the rise, and college-level courses are available to anyone with a good internet connection. What are the pros and cons of online education? Will traditional education survive? 4. Think about the future of personal transportation. Using study resources and your own knowledge, explain how different the cars of the future will be compared to cars today. What features would you like to see in the car of the future? Notice that certain features may create ethical dilemmas, like the question of who is responsible for a driverless vehicle's mistake. What is your thought on how it might be resolved? 5.
What are some of the cutting-edge technologies identified as the most influential by the World Economic Forum and the Singularity Hub? Describe one that caught your attention and explain why you chose it. 6. Technology makes the marriage market more efficient. How does matching work? Do matched marriages fare worse, just as well, or better compared to arranged marriages or random 'love' marriages? Do you observe any other impacts of technology on the modern family – dating, marriage stability, divorce, the decision to have children or not? 7. Science fiction. Which of the predictions from Isaac Asimov's science fiction novels came true? What about other predictions of your favorite science fiction writers – did they come true or not? I am thinking of Bradbury's mechanical hound from Fahrenheit 451, and it looks like we don't have the hounds, but we have GPS and Google Maps, which can serve a similar purpose (of tracking down people's location). 8. Interview your parent, your grandparent, your boss or someone you know age 40+, preferably someone who has worked in the same industry for 20+ years. Your goal is to describe the impact of technology on this person's workplace over the last 20 years. What kind of technology did they use 20 years ago? What are they using now? How did his/her job tasks change? What tasks are now automated? What kind of skills are they looking for when they hire new workers? What are his/her big predictions about the future of the industry – will the internet and automation play a larger role? Will new products or services emerge that will affect his/her workplace? 1. Diamandis, a well-known entrepreneur and futurist, claims that we are entering "the age of abundance". At the same time, robots are replacing human jobs. What are the pros and cons of automation? What kind of tasks will most likely be replaced by robots in the near future, and what tasks will remain human? Can your job be automated? Are there ways to make our jobs robot-proof? 2. When smart machines eliminate work, citizens may have a hard time finding jobs in order to earn income so they can buy goods and services produced by robots. What are some of the solutions to this problem? And by the way, what are we going to do with all the extra time when we don't have to work as much? 3. In his talk about the future, the famous scientist Dr. Kaku described several new products that are going to be available during our lifetime. What will the future look like? 4. In his 2013 talk, Ray Kurzweil (the technology world's biggest celebrity) explains Moore's law, mentions technology-driven price deflation, discusses whether 3D printing will destroy the fashion industry, defends open-source information sharing, and expands on some brain engineering techniques. What is Moore's law and why should we care about it? Pick any issue discussed in this presentation and share your thoughts on how this trend may affect us. 5. Which of the drone applications do you consider most promising? Are there jobs that drones may potentially create or eliminate? 6. How close to immortality is technology projected to bring us? What are the ways in which it might be achieved? If humans are to live to age 200, how will it change any of our decisions, for example how much education we get and how much risk we take in life? What legal or ethical issues may immortality create? 7. Explain the idea of 'singularity' or the idea of 'artificial intelligence'. How can humans benefit from it? 8. Share your examples of technology adoptions.
Your examples can come from your city, your work, or something you read about in the news. Share a picture if you can. What kind of technology got adopted, where and why? Did any skills get eliminated? Were any jobs created by this change? How are all of the involved parties likely affected? Here is my example of technology that most likely resulted in a loss of jobs: hot food vending machines that are now widespread in some west European countries such as Norway. I can imagine a lot of sales people lost their jobs when these monsters replaced their low-skill sales jobs. Indeed, the company does not have to pay these robots wages, health insurance, pensions and payroll taxes. Of course, someone has to service these robots, so a higher-skill job was possibly created too, but maybe only one job created for every 10 jobs lost. It would also be interesting to consider the gender impact of this change. If the old sales jobs employed females and the new robot service jobs are more likely to be 'male' jobs, then women are disproportionately affected by this automation. We don't have hot food vending machines in the US yet, possibly because the US fast food industry fears that one company switching to robots may start a brutal competition spiral that may end with all sales personnel at McDonald's and Pizza Hut eventually replaced by robots. Well, maybe not, but feel free to think of various consequences of automation in your own example. There may be a good use for machines like this for distributing food in areas affected by Ebola epidemics, where contact among humans is undesirable. 1. Describe one of the features of the new economy: peer-to-peer sharing, reputation capital or trust. Give an example of a business built on sharing or a business in which buyers and sellers value reputation. How does the business work? How can good reputation create value and bad reputation hurt a business? Why is trust important? 2. Over the last 10 years, new businesses emerged with innovative ways to create value by offering new services or old services in new ways. Can you think of an innovative business near you? Is there a new company in your town that uses an interesting business model? Here is my example of an innovative business: a services exchange site, fiverr.com, that allows people all over the world to trade skills for money. 3. One of your readings, a Businessweek article, discusses teen millionaires, entrepreneurs under age 17. What kind of projects did these young people launch? Do these kids have any secrets of success other than a good internet connection and super helpful parents? 4. Google is one of the most innovative companies in the world. Yet many of its products die before ever succeeding. Why do products die? Have you observed any new products that failed in your town or your workplace? How can you explain the failure? 5. Review the list of famous, rich, successful people who dropped out of college. Who are these people and in what way are they different from you and me? Should you drop out of college to start your own business – why or why not? 6. Many – if not most – successful companies in the US were started by immigrants! Recall that Steve Jobs' biological father was from Syria. The question is: if immigrants create wealth, innovation and jobs, should countries welcome as many immigrants as they can? Or should countries somehow select and welcome only 'good' immigrants? If so, what are the characteristics of a good immigrant?
7. In his video statement, Professor Rushkoff criticizes digital companies and suggests a better way to serve local communities. Explain his arguments and his suggestions of how businesses can improve their operating models. 8. Richard Branson, Elon Musk and Jeff Bezos are famous entrepreneurs and innovators. While looking for profitable opportunities, they created products that have empowered millions of people. Who is your favorite entrepreneur from any country, and why? Which of their creations do you consider the most groundbreaking, revolutionary or important? In what way does it empower people, and what group of people? 1. What is the network effect and where do we observe it? Why is creating a cool new product not enough for starting the network effect? How can this effect be created? 2. What is Bitcoin, how does it work, and what makes it valuable? Can anyone start issuing currency? What problems is Bitcoin likely to encounter? Are private currencies going to replace the dollar one day? 3. What is Big Data and where is it being used? How can large amounts of data be of value to companies like Facebook and others? Can consumers benefit from big data? 4. In the past, small businesses had two ways to procure startup funds: use family money or borrow from a bank. Now a new source of financing has emerged: crowdfunding. What is it? How does it work? What are the advantages and disadvantages to investors and borrowers? 5. In the past, governments all over the world were the main sources of financing for innovations. Now things are changing: private organizations sponsor innovations. Look up xprize.org. What kind of projects are being financed at this time? Who pays for the projects? How does this information help us predict what the next breakthroughs are going to be? 6. Read the introduction to the book 'Free!' by Anderson and an article about free prices. What kinds of products are offered for free, and why? How would you explain why the pdf version of the book has typos and strange symbols in the middle of the text? 7. Read chapter 2 of the book 'Free!'. Explain what you learned. 8. Based on your own readings of tech news, share a news story related to technology and the economy. Comment. 1. Based on the MRUniversity videos, put together a list of factors that explain why some countries are rich and some are poor, and why some economies grow and others don't. Which of these factors can be improved with advanced technology? 2. How can the economic development of countries be measured and compared? Use the UN Human Development Index or the CIA Factbook to compare indicators from any two countries. Explain briefly what your metrics include and what you discovered. 3. In his presentation, Hans Rosling describes an interesting demographic trend. What is this trend? How do advances in science and technology contribute to it? Is it expected to continue? 4. What is the best way to help the poorest countries, according to the TED talk by Collier? What do you think is a good way for the industrialized world to help people in the poor countries? Can any specific advances in technology and innovations be borrowed from the rich world to empower people in poor countries? 5. Browse through the World Economic Forum lessons and the Social Innovation Guide. What do you think of social entrepreneurship? What motivates business people to start these projects? What kind of projects succeed, and how can success be measured when a business is not for profit?
What else have you learned from these articles about social entrepreneurs? 6. Browse through the UN sustainable development goals. Identify one or more goals which may be attained with the help of technology and innovation. Explain why you think so. 7. Choose a company from a list of the most innovative businesses of the past year and research what makes it innovative. Here are some of the questions you may want to research: What does it produce (products, services, bundles of both)? Is it organized any differently than a typical company? Does it have a unique strategy to reward work, to price its product, to expand into new markets? Does it operate in more than one country? Do the concepts of reputation capital apply to this company, and how? How can you explain its success? 1. Read Physics for Future Presidents and listen to unconventional ways to solve environmental problems. What are some of the best solutions to the global warming problem? Which of these solutions do you find the most promising? 2. Food – our basic need – is changing with technology. We enjoy the convenience of mass-produced fast food and precooked meals made with GMO crops, and they have the potential to reduce world hunger. What are some of the advances in food production that can help the developing world? Why do some people oppose GMOs? 3. If we want to solve the world's problems, it may be easier to simply design smarter people who would be able to stop fighting and start thinking of more productive activities. How do you like this idea? 4. How does web video power global innovation? What is crowd-accelerated innovation? What makes it a powerful force? Are you part of it? 5. There are many armed conflicts in the developing world. How can technology help us stop wars? I sometimes wonder: if every teenager in Afghanistan and Congo had access to a broadband connection and a good computer, would they choose to play war games instead of fighting real wars? Share your ideas of how technology can be used to promote peace. 6. Select an innovative company. This may be a company from the list of the most innovative businesses, or a business based on sharing (Uber, Zipcar, WeWork, Instacart, Rent the Runway, ThredUp, Chegg), or one that provides on-demand services (Postmates, Seamless, UberRUSH, FreshDirect), or a company that is disrupting a "traditional" industry model (Casper, Warby Parker, even Netflix). What kind of old problem has this company solved for its customers? How did it solve it? What is unique about it – does it have an innovative strategy to reward work, to price its product, to expand into new markets? Does it operate in more than one country? Do the concepts of reputation capital, network effect, trust and free pricing apply to this company, and how? How can you explain its success? 1. Browse through the Global Risks Report by the World Economic Forum. What are some of the main risks facing businesses worldwide? Which of these risks should we be most concerned about, and why? Can you think of solutions to any of these risks? 2. Technology can help us solve crime, but it can also enable crime. What new types of crime can be enabled by technology? How can technology help us solve and prevent new and old kinds of crime? 3. We share a lot of information about ourselves through social media, web searches and apps. Phones with GPS and surveillance cameras can track our location at any time. Companies and governments have access to all this information. Do we still have any privacy?
What do we get in exchange for losing privacy? Why is some privacy desirable? 4. What are intellectual property rights? How are they protected, and why might some copyright protection be useful? How and why can too much protection be harmful? Can software developers and artists make money without copyright and patents? 5. Read or listen to any part of the book "Content: Selected Essays on Technology, Creativity, Copyright and the Future of the Future" by Cory Doctorow. Describe the author's main message and share your thoughts. 6. In his book on law and economics written 15 years ago, D. Friedman discusses several curious problems of the cyber age and points out interesting legal implications. Read one of the chapters and share your thoughts. 7. Some applications of technology are ridiculous and even dangerous. Downloadable medicine, exorcism offered over Skype, and more. Share your own examples of questionable or harmful applications of technology. 8. Give a long answer based on the 'Predictive Policing' book. Describe several ways in which big data can help prevent or solve crime.
Full text search systems use a data structure called an inverted index. Logically an inverted index consists of a key containing the word and a list of documents that word appears in. The document entry may also have a weighting based on the frequency with which that word appears in the document. A weighting may also be applied to the search terms. Full text search engines locate documents matching the search terms and calculate the closeness of the match using a heuristic called a cosine ranking. This is calculated by forming an n-dimensional vector from the search terms and then constructing similar vectors from the search results. The dot product of these two vectors, divided by the product of their lengths, is the cosine of the angle between them in n-dimensional space. A cosine value of 1 indicates parallel vectors and the closest possible match. Typically the search results are fed into a priority queue and then popped out in order from highest cosine ranking to lowest. Some systems also apply weightings to the cosine rankings based on other factors; the most famous example of this is Google's PageRank algorithm. Text retrieval systems normally use proprietary engines, although many general-purpose database systems also offer a full text search function.
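To make the mechanics above concrete, here is a minimal Python sketch of the two ideas just described: an inverted index keyed by word, and a cosine ranking that pops results from a priority queue in best-first order. The toy corpus, the raw term-frequency weights, and the cosine_rank function name are assumptions made purely for illustration; they are not the implementation of any particular engine.

import heapq
import math
from collections import Counter, defaultdict

# Toy corpus; in a real engine these would be stored documents.
docs = {
    1: "the quick brown fox jumps over the lazy dog",
    2: "a quick brown dog outpaces a quick fox",
    3: "search engines rank documents by cosine similarity",
}

# Inverted index: word -> {document id: term frequency in that document}.
index = defaultdict(dict)
for doc_id, text in docs.items():
    for word, freq in Counter(text.split()).items():
        index[word][doc_id] = freq

def cosine_rank(query):
    """Return (doc_id, score) pairs ordered from best to worst match."""
    q_vec = Counter(query.split())
    q_norm = math.sqrt(sum(w * w for w in q_vec.values()))
    dot = defaultdict(float)
    # Accumulate dot products using only the postings for the query terms.
    for word, q_weight in q_vec.items():
        for doc_id, d_weight in index.get(word, {}).items():
            dot[doc_id] += q_weight * d_weight
    heap = []
    for doc_id, product in dot.items():
        d_vec = Counter(docs[doc_id].split())
        d_norm = math.sqrt(sum(w * w for w in d_vec.values()))
        cosine = product / (q_norm * d_norm)
        # Negate the score so the min-heap pops the highest cosine first.
        heapq.heappush(heap, (-cosine, doc_id))
    results = []
    while heap:
        neg_score, doc_id = heapq.heappop(heap)
        results.append((doc_id, -neg_score))
    return results

print(cosine_rank("quick fox"))  # document 2 ranks above document 1; document 3 does not match

A production engine would typically replace the raw term frequencies with tf-idf or BM25-style weights and precompute the document norms, but the index-then-score-then-pop flow matches the description above.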
National Astronomical Observatory of Japan | ALMA | 2017 Sep 10 ALMA spots transforming disk galaxies [Image: Observation images of a galaxy 11 billion light-years away. Submillimeter waves detected with ALMA are shown at left, indicating the location of dense dust and gas where stars are being formed. Optical and infrared light seen with the Hubble Space Telescope are shown in the middle and at right, respectively. A large galactic disk is seen in infrared, while three young star clusters are seen in optical light. Credit: ALMA (ESO/NAOJ/NRAO), NASA/ESA Hubble Space Telescope, Tadaki et al.] Astronomers found that active star formation upswells galaxies, like yeast helps bread rise. Using three powerful telescopes on the ground and in orbit, they observed galaxies from 11 billion years ago and found explosive formation of stars in the cores of galaxies. This suggests that galaxies can change their own shape without interaction with other galaxies. Aiming to understand galactic metamorphosis, the international team explored distant galaxies 11 billion light-years away. Because it takes time for the light from distant objects to reach us, by observing galaxies 11 billion light-years away the team can see what the Universe looked like 11 billion years ago, 3 billion years after the Big Bang. This corresponds to the peak epoch of galaxy formation; the foundations of most galaxies were formed in this epoch. Receiving faint light which has travelled 11 billion years is tough work. The team harnessed the power of three telescopes to anatomize the ancient galaxies. First, they used NAOJ's 8.2-m Subaru Telescope in Hawai`i and picked out 25 galaxies in this epoch. Then they targeted the galaxies for observations with NASA/ESA's Hubble Space Telescope (HST) and the Atacama Large Millimeter/submillimeter Array (ALMA). The astronomers used HST to capture the light from stars, which tells us the "current" (as of when the light was emitted, 11 billion years ago) shape of the galaxies, while ALMA observed submillimeter waves from cold clouds of gas and dust, where new stars are being formed. By combining the two, we know the shapes of the galaxies 11 billion years ago and how they are evolving. ... Rotating Starburst Cores in Massive Galaxies at z = 2.5 - Ken-ichi Tadaki et al - Astrophysical Journal Letters 841(2):L25 (2017 Jun 01) - DOI: 10.3847/2041-8213/aa7338 - arXiv:1703.10197 (v1: 29 Mar 2017; v2: 14 May 2017)
Brrr, it's cold outside! Frigid temperatures and record snowfall in many areas may leave some people wondering what happened to global warming. Given the whiter-than-normal winter, is the Earth really still heating up? For insight, we turned to Brenda Ekwurzel, a climate science expert with the Union of Concerned Scientists, who has studied climate change for 17 years. Her research has stretched from the arid U.S. Southwest to icebreaker ship expeditions to the North Pole. She stays abreast of the latest science to help inform the public and policymakers. Q: Could colder temperatures and more snow be signs that global warming is not happening? A: No singular weather event or even a colder year represents a change in global warming. Weather is the temperature or precipitation over a couple of days. Climate refers to the average temperature or weather patterns over a decade or more. Global warming occurs over a long period of time, so gradual shifts to a warmer climate represent global warming. One event, such as increased snowfall or a heat wave, will not significantly change the climate pattern of the last decade. You have to look at what has happened over time, not in the past month or two, to determine if there is a general trend of warming. Q: Could some extreme weather events signify a shift in climate patterns? A: Yes, actually. Because the Earth is heating up, the air is warmer and more humid and evaporation occurs faster, leading to an intensification of precipitation. This means that during the winter months, in many parts of the United States, we're more likely to experience intense snowfall, and in the summer months intense rainfall. If you look at these events over a long period of time, though, the total annual volume of rain may not change as significantly as it may appear. There are just more occurrences of heavy rain events. The increase in intense rain can lead to flooding and longer dry periods in between. Q: Other than more intense precipitation, what are some indicators that global warming is actually occurring? A: The timing of spring and fall are indicators that global warming is happening. The spring season is coming earlier and fall is happening later. This makes for a shorter winter season and a longer summer season. Other evidence includes the accelerated melting of glaciers in Greenland and West Antarctica, and the diminishing habitat for some wildlife species, placing them at risk for extinction. Q: Warmer temperatures could cause some species to die off? A: Plants and animals that live near the tops of mountains can only move upslope so far before they "run out of mountain." As sea levels rise, roads, buildings and infrastructure could block animals and plants from moving. Animals that live their entire lifecycle on Arctic sea ice face grave danger because many studies indicate that in the coming decades the Arctic Ocean could become ice-free for the first time during the summer months. Additionally, decreasing water resources coupled with warmer climates make it nearly impossible for some already threatened species to survive. Q: What are some ways we can stop the Earth from getting even warmer? A: Gases trap heat and warm the atmosphere for decades or centuries after they are released into the air. Burning fossil fuels just traps more heat on Earth. Think of when you are going to bake cookies. You have to preheat the oven, so you turn the oven to whatever temperature the recipe calls for and wait for the oven to warm up.
For a while now, the Earth has been preheating. The oceans have acted as a buffer, so the Earth has not yet reached that preheating temperature. Meanwhile, we are continuing to turn up the temperature dial on the oven. We are preheating the Earth to an even higher temperature. We have evidence that we are going to see more changes because of this. The ice will continue to melt in the Arctic and Antarctica, and we will experience warmer temperatures year round. We can stop the Earth from heating to very dangerous levels if we successfully harness renewable energy like solar or wind power and decrease our use of fossil fuels. At the Union of Concerned Scientists Web site, you can learn much more about global warming, including easy-to-understand reports on the latest research from the world's leading scientists. You also can easily take action and tell Congress to support renewable energy, low-carbon fuels and other proactive measures.
Posted Tuesday, June 14, 2022 at 1:04 PM by Howard Chan, Co-Founder of Cori. STEAM, which stands for Science, Technology, Engineering, Arts, and Mathematics, is not simply a list of subjects to be taught or kit recipes to follow, but an educational approach to teaching and a mindset for learning and applying. Although there are several models for implementing a STEAM program, the most commonly applied model is the Engineering Design Process (EDP). Although the EDP is typically used in the professional field, we have formatted the process in the context of K12 education, and it is now a critical part of NGSS. The Engineering Design Process is a five-step cycle in which teachers create an inquiry-based learning environment that stimulates students to learn through questioning and doing. The five steps are the following: Ask, Imagine, Plan, Create and Improve. Within each of those steps, and the transitions between them, there are teaching and learning strategies that help facilitate the process. Below, the cycle is described in the context of K12 STEAM education. Although the first step in the cycle is to ask the right questions before beginning any process, teachers often skip to step 3, the Plan. In K12 education, it is not uncommon to teach with the "plan" as the focus and inadvertently bypass two important steps: what are we trying to do or learn? Instead we need to give students opportunities to research and imagine the topic or problem in question. When one skips steps 1 and 2, what often occurs is that teachers give away what we call the "formula" or "step-by-step" plans for solving problems. While following the steps is an important skill, it is only one part of the process of learning. Students who are simply given the formula in the book are fixated on how to systematically solve an equation and are not taught how to truly problem solve. Instead of developing critical thinking skills, the unfortunate outcome is that students are taught to memorize steps and practice rote techniques. The first step in a solid STEAM program is to build a curriculum established on asking the right questions. Fundamentally, we are trying to provide insight on common questions found in STEAM studies, such as "why am I learning math?" It is important to build curriculum that puts Algebra or other mathematical concepts in the context of real-world applications. In helping guide those questions, a well-thought-out Socratic seminar or collaborative brainstorm will put the context around Step 3 (Plan) and give opportunities for divergent thoughts around the same topic. It is in Step 2 (Imagine) that teachers give students opportunities to ask questions that will guide them to formulate the problem that needs to be solved. In this context, students are discovering the learning, not being given the answer. Strategically, a teacher will guide the questions and divergent thoughts into converging ideas, ultimately leading to Step 3, the Plan. The work and effort to get to Step 3 gives students the foundations and context of the formula, rather than searching for the formula in the textbook.
The first three steps of the Engineering Design Process remain in the theoretical framework of learning. In order to provide experiential opportunities, a well-rounded STEAM program needs to integrate the application layers of the model, which are Step 4 (Create) and Step 5 (Improve). Once students have established theoretical proficiency with the content, teachers can elevate the learning experience by introducing project-based activities around the content. It is in Step 4 that students experience STEAM at its fullest, with opportunities to transform theory into practical hands-on experiences. At this level, students are building, designing, creating, and experimenting with the content in ways textbooks could never provide. It is important to develop a strong project-based curriculum that strategically brings the theoretical frameworks together into practical design applications. The last step of the Engineering Design Process is giving students opportunities to improve upon their creation. In a test-taking culture, we often create an environment with a pass-fail mentality. Step 5 is the opposite of that mentality: failure is looked upon as an opportunity to improve the design. The ideal EDP fosters a culture of trial and error in which improvement is a sign of self-direction and self-evaluation, and ultimately a growth mindset. When students are at the improvement level, rubrics, feedback and portfolio-based assessments help guide the evaluation process. If the process is designed correctly, students will be documenting their work right from the beginning in a portfolio that can be referenced, improved, and edited along the way. The culmination of the Engineering Design Process can lead to three desired outcomes for any given topic. The first outcome is referencing back to the original question that the project asked and determining whether it was appropriately addressed. The second outcome is determining that the original question was just the beginning, and that one has to ask a higher level of questions to get to the desired outcome, therefore going through the EDP again. The last outcome is what engineers call innovation, the creation of something new that addresses a problem. In K12 education, an important last step of the EDP is providing students a platform, called Mountain Top, to share all their hard work, no matter the outcome. The Mountain Top can present itself in many forms, such as digital portfolios, competitions, debates, showcases, science fairs, videos, and more. It is through Mountain Top experiences that inspirations lead to new innovations.
Organic chemistry is the study of the carbon atom. While it may seem simple to study just one of the many elements that exist in nature, the carbon atom is actually very important to the world around us. In fact, carbon is so important that it is considered the chemical basis of all known life! Other elements such as hydrogen, oxygen, nitrogen, and sulfur are also important in organic chemistry, but occur far less frequently than do carbon atoms. Every carbon atom has four electrons in its valence shell, so it needs four more electrons to maximize stability (remember that atoms are most stable when their valence shell is completely full, which occurs when eight electrons are present). It can gain these four electrons by forming four covalent bonds with other elements. The basic structure of all organic molecules is what is called a hydrocarbon chain. It includes a series of carbon atoms bonded together with all extra bonding spaces filled in by hydrogen. The structure of this backbone changes depending on the type of compound—it can be a chain or a ring, branched or straight. Bonds between carbon and hydrogen atoms are very stable, making hydrocarbons chemically unreactive. Connected to the hydrocarbon backbone, an organic molecule may have a functional group or groups composed of non-carbon atoms such as oxygen, nitrogen, or sulfur. Bonds between carbon and functional atoms are much less stable, giving them a higher reactivity. In other words, these groups are called “functional” because they do just that—they increase an atom’s reactivity and in turn, give the molecule a specialized function. Although at first it may seem as if there are endless possible combinations of atoms that could come together to form molecules, there are just a few common patterns of chemical structures that exist in nature. Included below is a diagram of the most common types of functional groups. Most of these groups are also found in essential oils.
The Ten Commandments appear twice in the Old Testament. The first time they appear is when the Israelites have been delivered out of centuries of slavery and brought through the Red Sea. One of the early stops in their wilderness wanderings was Mount Sinai (also called Mount Horeb). It was there that Moses received the Ten Commandments from God. The Ten Commandments appear a second time in the Old Testament in Deuteronomy 5. By this time, a whole new generation stands before Moses, as the previous generation had died in the wilderness because of their unbelief and rebellion against God. Moses is at the end of his life, and the book of Deuteronomy contains five final sermons Moses gives to the people before he dies. The Israelites are all gathered on the plains of Moab and listening to Moses restate the law a second time. This is why the book is called Deuteronomy, a word that means “Second Law,” meaning the Law is being repeated a second time. Thus, in Deuteronomy 5, the Ten Commandments are repeated, as is much of the legislation that appeared earlier in Leviticus. When Moses originally received the Law from God, it took place on Mount Sinai. Moses ascended the mountain and received the Law through a series of revelations from God over a forty-day period. We do not know precisely how these laws came to Moses, but the New Testament indicates (and it was widely taught in Judaism) that the Law was given to Moses through the mediation of angels (Acts 7:53; Gal. 3:19). However, something dramatic happened with the Ten Commandments. These commands were given directly by God to Moses and were actually written on two tablets of stone by the very “finger” of God. Exodus 31:18 says that “he [God] gave to Moses, when he had finished speaking with him on Mount Sinai, the two tablets of the testimony, tablets of stone, written with the finger of God” (ESV). These commands are actually called the Ten Commandments in several passages of Scripture, including Exodus 34:28, Deuteronomy 4:13, and Deuteronomy 10:4. The phrase can also be translated “Ten Words,” and frequently the Ten Commandments are referred to by Jewish and Christian teachers as the Decalogue, which means the “Ten Words.” Traditionally, Jewish rabbis, dating back to a third century rabbi named Simlai, have identified 613 distinct laws that appear in the Old Testament. Rabbi Simlai identified 248 of these as “positive commands,” namely, commands for us to do something. For example, Leviticus 19:36 commanded the Israelites to use just measurements and weights. It was common at that time for people to sell food in the market by weight. Some merchants would secretly cheat people by using weights that were below the standard weight. This command showed God’s interest in promoting integrity in the marketplace. Three hundred sixty-five of the commands were “prohibition commands,” telling God’s people to avoid certain things. For example, Leviticus 19:14 commanded them not to put an obstacle in front of a blind man, demonstrating God’s special kindness toward those with special needs. The 365 “thou shalt nots” and the 248 “thou shalts” add up to the overall number of 613. So, if there are 613 laws, what makes the Ten Commandments so special, and why were they given to us in such a dramatic fashion? The Ten Commandments are broad, summative commands. This means that all of the 613 laws of the Old Testament will, in one way or another, find their fulfillment and logical expression in one of the ten. 
Thus, the Ten Commandments are a wonderful way for someone to understand the heart of the Law. They are not simply a set of negative commands. Rather, the Ten Commandments represent the pathway out of our own self-orientation and into a whole new orientation that puts God, ourselves, and others in their rightful places. It has been observed that of the 613 laws, only 77 of the positive commands and 194 of the negative commands apply today because quite a few of the laws relate to specific actions around the temple (which no longer exists) and the particular practices related to Israelite worship (which no longer apply to the church). Yet, even those commands, if examined closely, reflect deeper moral concerns of God which find their broad expression in the Ten Commandments. Thus, the Ten Commandments are not bound by any particular time, culture, or covenant. They reflect a timeless moral code that is applicable to all people everywhere. For this reason, the Ten Commandments have been found at the core of Christian catechesis manuals for centuries.
The Nervous System: How does the nervous system work? We have receptors all over our skin. These are nerve endings or danger sensors. Different receptors pick up different information. Some react to temperature; some to pressure. The information that the receptors pick up is sent through the nerves to the spinal cord. The message that is sent is more like a question asking "do I need to protect this?" Not all messages are sent. We are not aware of every time we are touched. We do not constantly feel our clothes on our skin and sometimes we cannot remember getting the bruise that we discover on our leg. The spinal cord acts like a postal sorting office and decides when and which danger messages are sent to the brain. When enough danger messages reach the spinal cord, the message is sent to the brain. The brain processes the danger messages and decides if the body is at risk of harm or injury. If the brain decides the body is at risk, we will feel pain and react to get ourselves out of danger. If our hand touches something hot, the danger sensors will send messages to the brain. Our tissues are not designed to withstand much heat, so to protect the hand tissues our brain sends a response of pain and we very quickly remove our hand from the source of heat. It is sometimes easier to think of how the danger sensors and the brain combine to form an alarm system. As we mentioned above about the six-inch nail and the paper cut, pain does not always equal harm. Therefore, when we talk about the nerves sending messages, we like to call them danger messages. It is the brain's use of this information from the alarm system that results in the feeling of pain. The brain is working out "do I need to protect this?" Why does it hurt? Most of us will have sprained an ankle at some time in our lives. Remember how quickly after the injury the area became swollen, red and tender to touch? This is because all the chemicals involved in healing arrive at the injury site to do their job and start the healing process. These chemicals irritate the nerves, making the area tender to touch. This reaction has a protective role. For the first few days after the sprained ankle you may have rested more or even used crutches to ease the pain when walking, which allows the injured tissues time to heal. This type of pain is called acute pain. Over time, as healing takes place, we can walk more easily and, after more time, are able to return to our normal activities. There was a useful reason for this pain. It meant we rested the injury and allowed the tissues time to heal. As the healing takes place, the chemicals stop being produced and the pain should gradually reduce and stop. This process takes time but the tissues will heal. A broken bone or tendon and ligament injuries will heal within six to 12 weeks. The body will continue to remodel scar tissue for three to six months after healing. Why does the pain last after the tissue has healed? Researchers have learnt more about pain in the past 10 years than in the previous 100 years. They can see what changes happen in the nervous system to produce persistent pain, even though the tissues are completely healed. For some people the pain starts without damage to the tissues. Unfortunately, we do not fully understand why this persistent pain starts. What is happening in the tissue to cause pain? This type of pain is called chronic or persistent pain. In persistent pain the brain continues to receive the danger message; it thinks that the tissues may require protecting.
The brain decides the body needs all the protection it can get. It starts to adapt and build more defences; it upgrades the alarm system. It needs more information from the tissues, so it creates more danger sensors. The brain thinks, the more information the better! More danger sensors create more danger signals. All these messages are sent to the spinal cord. The spinal cord or postal sorting office becomes overloaded with messages and starts working overtime to deal with all the extra messages. It becomes quicker at processing the messages and sends out more deliveries, allowing more of the messages to be sent out as soon as they arrive. Before it would have been more selective of which messages needed to be sent. The spinal cord is now starting to amplify the signals it receives from the tissues. Our brain receives more messages, it becomes better and quicker at recognising the danger messages. The alarm system stays on high alert. This now means that we receive the pain message more often. Levels of pressure and movement that didn’t previously hurt, can now feel painful and things that previously hurt, now hurt even more. The alarm system is now going off, for example, when someone taps the window, instead of only when the window breaks. As time goes on The pain starts, it does not go, and you become worried. In the past when you had pain you rested, so now you do the same and avoid movement to protect the area and avoid any potential damage. The pain stays; you keep the reduced level of activity because any time you try to do more or do something differently, it hurts. Your muscles and tissues become weaker and less flexible as they get used to doing less. When you walk further than normal you may become out of breath and hot. You find that you can no longer do things that you used to find easy. When you become active it takes less to stress your muscles and tissues. When the tissues are stressed they release danger chemicals as the brain is now on alert and quicker at recognising these signals and is more likely to send a pain message. When tighter tissues, weaker muscles and lots of danger chemicals irritate the over excited alarm system it is not surprising that the smallest stretch starts to feel so painful. Even though the tissues are healed as best they can, you feel pain. This pain can often feel exactly like the pain you had when you initially had your injury. This familiar pain helps to reinforce the thoughts you may be having that there must be something wrong; there must be damage to the tissues. Hopefully, by reading and understanding what has been written so far we know that this is not always true and pain does not always mean harm. Pain and your memory Pain memory can be very powerful. Even a memory of a specific situation that caused harm from the past, can elicit the danger alarm system to produce pain and make us act in ways that we think will protect us from further harm; even if there is no real danger the second time round.
The researchers behind “United in Science”, coordinated by the World Meteorological Organization (WMO), studied several factors related to the climate crisis – from CO2 emissions, global temperature rises, and climate predictions; to “tipping points”, urban climate change, extreme weather impacts, and early warning systems. One of the key conclusions of the report is that far more ambitious action is needed if we are to avoid the physical and socioeconomic impacts of climate change having an increasingly devastating effect on the planet. Greenhouse gas concentrations continue to rise to record highs, and fossil fuel emission rates are now above pre-pandemic levels, after a temporary drop due to lockdowns, pointing to a huge gap between aspiration and reality. Cities, hosting billions of people, are responsible for up to 70 per cent of human-caused emissions: they will face increasing socio-economic impacts, the brunt of which will be borne by the most vulnerable populations. In order to achieve the goal of the Paris Agreement, namely keeping global temperature rises to 1.5 degrees Celsius above pre-industrial levels, greenhouse gas emission reduction pledges need to be seven times higher, says the report. High chance of climate ‘tipping point’ If the world reaches a climate “tipping point”, we will be faced with irreversible changes to the climate system. The report says that this cannot be ruled out: the past seven years were the warmest on record, and there is almost a 50-50 chance that, in the next five years, the annual mean temperature will temporarily be 1.5°C higher than the 1850-1900 average. The report’s authors point to the recent, devastating floods in Pakistan, which have seen up to a third of the country underwater, as an example of the extreme weather events in different parts of the world this year. Other examples include prolonged and severe droughts in China, the Horn of Africa and the United States, wildfires, and major storms. “Climate science is increasingly able to show that many of the extreme weather events that we are experiencing have become more likely and more intense due to human-induced climate change,” said WMO Secretary-General Petteri Taalas. “We have seen this repeatedly this year, with tragic effect. It is more important than ever that we scale up action on early warning systems to build resilience to current and future climate risks in vulnerable communities”. ‘Early warnings save lives’ A WMO delegation led by Mr. Taalas joined Selwin Hart, Assistant Secretary-General for Climate Action, and senior representatives of UN partners, development and humanitarian agencies, the diplomatic community, and WMO Members at a two-day event in Cairo last week. The meeting advanced plans to ensure that early warnings reach everyone in the next five years. This initiative was unveiled on World Meteorological Day – 23 March 2022 – by UN Secretary-General António Guterres, who said that “early warnings save lives”. Early warning systems have been recognized as a proven, effective, and feasible climate adaptation measure that saves lives and provides a tenfold return on investment. ‘Still way off track’ The harmful impacts of climate change are taking us into ‘uncharted territories of destruction’, Mr. Guterres said on Tuesday. Responding to the United in Science report, Mr. Guterres said that the latest science showed “we are still way off track”, adding that it remains shameful that resilience-building to climate shocks is still so neglected.
“It is a scandal that developed countries have failed to take adaptation seriously, and shrugged off their commitments to help the developing world,” said Mr. Guterres. “Adaptation finance needs are set to grow to at least $300 billion a year by 2030”. The UN chief recently visited Pakistan to see for himself the massive scale of the destruction caused by the floods. This brought home, he said, the importance of ensuring that at least 50 per cent of all climate finance goes to adaptation. United in Science: some key findings - United in Science provides an overview of the most recent science related to climate change, its impacts and responses. It includes input from WMO (and its Global Atmosphere Watch and World Weather Research Programmes); the UN Environment Programme (UNEP); the UN Office for Disaster Risk Reduction (UNDRR); the World Climate Research Programme; the Global Carbon Project; the UK Met Office; and the Urban Climate Change Research Network. The report includes relevant headline statements from the Intergovernmental Panel on Climate Change’s Sixth Assessment Report. - Levels of atmospheric carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O) continue to rise. The temporary reduction in CO2 emissions in 2020 during the pandemic had little impact on the growth of atmospheric concentrations (what remains in the atmosphere after CO2 is absorbed by the ocean and biosphere). - Global fossil CO2 emissions in 2021 returned to the pre-pandemic levels of 2019 after falling by 5.4% in 2020 due to widespread lockdowns. Preliminary data show that global CO2 emissions in 2022 (January to May) are 1.2% above the levels recorded during the same period in 2019, driven by increases in the United States, India and most European countries. - The most recent seven years, 2015 to 2021, were the warmest on record. The 2018–2022 global mean temperature average (based on data up to May or June 2022) is estimated to be 1.17 ± 0.13 degrees Celsius above the 1850–1900 average. - Around 90% of the accumulated heat in the Earth system is stored in the ocean; ocean heat content for 2018–2022 was higher than in any other five-year period, with ocean warming rates showing a particularly strong increase in the past two decades. - New national mitigation pledges for 2030 show some progress toward lowering greenhouse gas emissions, but are insufficient. The ambition of these new pledges would need to be four times higher to get on track to limit warming to 2 degrees Celsius and seven times higher to get on track to 1.5 degrees Celsius.
One centimeter equals 0.01 meters, so to convert 80.4 cm to meters we have to multiply the number of centimeters by 0.01 to obtain the width, height or length in meters. 80.4 cm is equal to 80.4 cm x 0.01 = 0.804 meters. The result is the following: 80.4 cm = 0.804 meters. The centimeter (symbol: cm) is a unit of length in the metric system. It is also the base unit of length in the centimeter-gram-second system of units. The centimeter is a practical unit of length for many everyday measurements. A centimeter is equal to 0.01 (or 1E-2) meter. The meter (symbol: m) is the fundamental unit of length in the International System of Units (SI). It is defined as "the length of the path travelled by light in vacuum during a time interval of 1/299,792,458 of a second." In 1799, France started using the metric system, becoming the first country to adopt it. To convert a centimeter value to the corresponding value in meters, just multiply the quantity in centimeters by 0.01 (the conversion factor). meters = centimeters * 0.01 The factor 0.01 is the result of the division 1 / 100 (there are 100 centimeters in a meter). Therefore, another way would be: meters = centimeters / 100 - How many meters are in 80.4 cm? - 80.4 cm is equal to how many meters? - How to convert 80.4 cm to meters? - What is 80.4 cm in meters? - How many is 80.4 cm in meters?
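For readers who prefer to script the arithmetic, here is a small Python sketch of the same conversion. The function name cm_to_meters and the sample value are assumptions made for illustration; the only fixed fact is the factor of 0.01 (100 centimeters per meter).

def cm_to_meters(centimeters: float) -> float:
    """Convert centimeters to meters using the factor 0.01 (100 cm = 1 m)."""
    return centimeters * 0.01

# Example: the conversion worked through above.
print(cm_to_meters(80.4))  # 80.4 cm -> 0.804 m (bar floating-point rounding in the printout)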
What is Passive Building? Passive building is a set of design principles used to attain a rigorous and measurable level of energy efficiency within a specific structure. “Optimizing your gains and losses based on climate”, summarizes the approach of a passive design. According to PHIUS (Passive House Institute, United States), a passive building is designed and built in accordance with these five building-science principles: - Employs continuous insulation throughout its entire envelope without any thermal bridging. - The building envelope is extremely airtight, preventing infiltration of outside air and loss of conditioned air. - Employs high-performance windows (double or triple-paned windows depending on climate and building type) and doors – solar gain is managed to exploit the sun’s energy for heating purposes during the cold season and to minimize overheating during the warm season. - Uses some form of balanced heat and moisture recovery ventilation. - Uses a minimal space conditioning system. Passive building principles can be applied to all building types, from single-family homes to large office buildings. Passive Design Strategy Passive design strategy uses a comprehensive set of factors including heat emissions from appliances to keep the building at comfortable and consistent indoor temperatures throughout the heating and cooling seasons. As a result, passive buildings offer tremendous long-term benefits in addition to energy efficiency. Unmatched comfort: Superior insulation and airtight construction provide comfort, even in extreme weather conditions. Excellent indoor air quality: Continuous mechanical ventilation of fresh filtered air. Highly resilient buildings: A comprehensive systems approach to modeling, design and construction to produce strong buildings. Best Path to Achieve Net Zero/Net Positive: Passive building principles offer the best path to net zero and net positive buildings by minimizing the load renewables are required to provide. The Performance Standard According to PHI, North American building scientists and builders were the first to pioneer passive building principles in the 1970s. Over time, project teams in North America learned that a single standard for all North American climate zones is unworkable. In some climates, meeting the standard is cost prohibitive, in other milder zones it’s possible to hit the standard while leaving substantial cost-effective energy savings unrealized. The PHIUS Technical Committee developed passive building standards that account for the broad range of climate conditions, market conditions, and other variables in North American climate zones. The result was the PHIUS+ 2015 Passive Building Standard – North America, which was released in March of 2015. That standard has been updated to PHIUS+ 2018. The PHIUS Technical Committee will continue to periodically update the standard to reflect changing market, materials, and climate conditions.
The Persian Empire is the name given to a series of dynasties centered in modern-day Iran that spanned several centuries—from the sixth century B.C. to the twentieth century A.D. The first Persian Empire, founded by Cyrus the Great around 550 B.C., became one of the largest empires in history, stretching from Europe’s Balkan Peninsula in the West to India’s Indus Valley in the East. This Iron Age dynasty, sometimes called the Achaemenid Empire, was a global hub of culture, religion, science, art and technology for more than 200 years before it fell to the invading armies of Alexander the Great. At its height under Darius the Great, the Persian Empire stretched from Europe’s Balkan Peninsula—in parts of what is present day Bulgaria, Romania, and Ukraine—to the Indus River Valley in northwest India and south to Egypt. The Persians were the first people to establish regular routes of communication between three continents—Africa, Asia and Europe. They built many new roads and developed the world’s first postal service. The ancient Persians of the Achaemenid Empire created art in many forms, including metalwork, rock carvings, weaving and architecture. As the Persian Empire expanded to encompass other artistic centers of early civilization, a new style was formed with influences from these sources. Early Persian art included large, carved rock reliefs cut into cliffs, such as those found at Naqsh-e Rustam, an ancient cemetery filled with the tombs of Achaemenid kings. The elaborate rock murals depict equestrian scenes and battle victories. Ancient Persians were also known for their metalwork. In the 1870s, smugglers discovered gold and silver artifacts among ruins near the Oxus River in present-day Tajikistan. The artifacts included a small golden chariot, coins and bracelets decorated in a griffon motif. (The griffon is a mythical creature with the wings and head of an eagle and the body of a lion, and a symbol of the Persian capital of Persepolis.) British diplomats and members of the military serving in Pakistan brought roughly 180 of these gold and silver pieces—known as the Oxus Treasure—to London where they are now housed at the British Museum. The history of carpet weaving in Persia dates back to the nomadic tribes. The ancient Greeks prized the artistry of these hand-woven rugs—famous for their elaborate design and bright colors. Today, most Persian rugs are made of wool, silk, and cotton.
Construction projects can produce a lot of pollution if not handled properly. To prevent this, it is important to understand the sources of pollution and how to control them. This article will provide an overview of the most common sources of pollution in construction and how to prevent them.
Storm Water Pollution
Construction and development projects disturb the earth daily and can cause erosion and sedimentation. Without proper management, these disturbances can lead to serious environmental consequences, including loss of vegetation, damage to aquatic ecosystems, increased flooding and decreased water quality. Storm water pollution is a common issue in construction projects. It can be caused by site runoff, sedimentation from disturbed soils, and emissions from vehicles and equipment. It is important to control runoff, sedimentation, and emissions to prevent storm water pollution. You can control runoff by installing storm water detention ponds or filters. You can reduce sedimentation by adding stabilizing additives to the soil, and minimize emissions by using low-emitting equipment. Taking these steps can help ensure that your construction project does not produce any harmful pollutants.
Registered SWPPP Reviewer (RSR)
A Registered SWPPP Reviewer (RSR) is a professional who has been certified by the U.S. Environmental Protection Agency (EPA) to review and approve Storm Water Pollution Prevention Plans (SWPPPs) for construction storm water management. They are responsible for ensuring that all plans meet federal and state regulations and that facilities comply with these plans. RSRs must be familiar with all aspects of storm water management, from planning and design to implementation and maintenance. They must also communicate effectively with facility owners and operators to ensure that plans are being properly implemented. There are many benefits to having a Registered SWPPP Reviewer on staff. RSRs can help save time and money by ensuring that plans are designed correctly. They can also help facilities avoid potential fines and penalties by catching problems before they happen. Having an RSR on staff can also show regulators that a facility is serious about preventing pollution and is taking steps to ensure compliance.
Registered Storm Water Inspector
A Registered Storm Water Inspector (RSI) is a professional with a high level of knowledge and experience in storm water management. They must pass a rigorous examination covering rainfall analysis, drainage systems, and regulatory requirements. Candidates are also required to have at least five years of experience working in the field of storm water management. By making sure that construction projects follow secure and efficient storm water management procedures, Registered Storm Water Inspectors contribute to environmental protection. They collaborate with builders, engineers, and developers to create and carry out designs that lessen the impact of runoff on ecosystems and rivers. Registered Storm Water Inspectors reduce the possibility of adverse environmental effects by ensuring that storm water is correctly handled during construction, which prevents water contamination.
Certified Inspector of Sediment & Erosion Control
A Certified Inspector of Sediment & Erosion Control (CISEC) is a professional with a high level of knowledge and experience in erosion and sediment control. They must pass a rigorous examination covering erosion mechanisms, design principles, construction techniques, and regulatory requirements.
In addition to passing the exam, candidates must also have a minimum of five years of experience working in the field of erosion and sediment control. Certified Inspectors of Sediment and Erosion Control play a vital role in protecting the environment by ensuring that construction projects adhere to safe and effective sediment control practices. They work with developers, engineers, and contractors to develop and implement plans that minimize the impact of erosion on waterways and ecosystems. By ensuring that sediment is properly managed during construction, Certified Inspectors help to prevent soil erosion, minimizing the potential for negative environmental impacts.
Environmental Control Supervisor (ECS)
The Environmental Control Supervisor (ECS) ensures that the work zone’s air quality and environmental conditions are safe and compliant with all regulations. They work closely with the contractor to ensure that all equipment and materials are stored and used safely, and they monitor the weather to make sure it does not pose a safety hazard. The ECS keeps the work zone free of dust, fumes, and other airborne contaminants, monitors the air quality inside the work zone to confirm it meets all safety standards, and ensures that the work zone is free of hazardous materials and wastes. The ECS also monitors the work zone for potential safety hazards, implements measures to mitigate them, and conducts regular safety audits to make sure that all safety procedures are being followed. In short, the ECS is responsible for keeping the work zone a safe and healthy environment for all workers and for ensuring that it complies with all safety regulations.
Keep Your Construction Project Environmentally Responsible
Construction projects can have a serious impact on the environment if not managed properly. That’s why it’s important to have trained professionals who know how to prevent water, soil, and air pollution. By using certified inspectors and supervisors, you can ensure that your construction project is environmentally responsible.
Source: Max Planck Society
Being able to feel empathy and to take in the other person’s perspective – these are two abilities through which we understand what is going on in the other person’s mind. Although both terms are in constant circulation, it is still unclear what exactly they describe and constitute. Scientists at the Max Planck Institute for Human Cognitive and Brain Sciences (MPI CBS) in Leipzig, together with colleagues from Oxford University and other institutions, have now developed a model which explains what empathy and perspective taking are made of. They show that it is not one specific skill that enables us to put ourselves in another person’s shoes. These skills are made up of many individual factors that vary according to the situation. Understanding what other people want, how they feel, and how they see the world is becoming increasingly important in our complex, globalized society. Social skills enable us to make friends and create a network of people who support us. But not everyone finds it easy to interact with other people. One of the main reasons is that two of the most important social skills—empathy, i.e. being able to empathize with the other person’s emotions, and the ability to take a perspective, i.e. being able to gain information by adopting another person’s point of view—are developed to different degrees. Researchers have long been trying to find out what helps one to understand others. The more you know about these two social skills, the better you can help people to form social relationships. However, it is still not exactly clear what empathy and perspective taking are (the latter is also known as ‘theory of mind’). Being able to read a person’s emotions through their eyes, understand a funny story, or interpret the action of another person—in everyday life there are always social situations that require these two important abilities. However, they each require a combination of different individual subordinate skills. If it is necessary to interpret looks and facial expressions in one situation, in another it may be necessary to take into account the cultural background of the narrator or to know his or her current needs. To date, countless studies have been conducted that examine empathy and perspective taking as a whole. However, it has not yet been clarified what constitutes the core of both competencies and where in the brain their bases lie. Philipp Kanske, former MPI CBS research group leader and currently professor at the TU Dresden, together with Matthias Schurz from the Donders Institute in Nijmegen, Netherlands, and an international team of researchers, have now developed a comprehensive explanatory model. “Both of these abilities are processed in the brain by a ‘main network’ specialized in empathy or changing perspective, which is activated in every social situation. But, depending on the situation, it also involves additional networks,” Kanske explains, referring to the results of the study, which has just been published in the journal Psychological Bulletin. “If we read the thoughts and feelings of others, for example, from their eyes, other additional regions are involved than if we deduce them from their actions or from a narrative. The brain is thus able to react very flexibly to individual requirements.” For empathy, a main network that can recognize acutely significant situations, for example, by processing fear, works together with additional specialized regions, for example, for face or speech recognition.
When changing perspective, in turn, the regions that are also used for remembering the past or fantasizing about the future, i.e., for thoughts that deal with things that cannot be observed at the moment, are active as the core network. Here too, additional brain regions are switched on in each concrete situation. Through their analyses, the researchers have also found out that particularly complex social problems require a combination of empathy and a change of perspective. People who are particularly competent socially seem to view the other person in both ways—on the basis of feelings and on the basis of thoughts. In their judgment, they then find the right balance between the two. “Our analysis also shows, however, that a lack of one of the two social skills does not necessarily mean that the skill as a whole is limited. It may be that only a certain factor is affected, such as understanding facial expressions or speech melody,” adds Kanske. A single test is therefore not sufficient to certify a person’s lack of social skills. Rather, there must be a series of tests to actually assess them as having little empathy, or as being unable to take the other person’s point of view. The scientists have investigated these relationships by means of a large-scale meta-analysis. They identified, on the one hand, commonalities in the MRI pattern of the 188 individual studies examined when the participants used empathy or perspective taking. This allowed the localisation of the core regions in the brain for each of the two social skills. However, results also indicated how the MRI patterns differed depending on the specific task and, therefore, which additional brain regions were used. Keywords: brain research, empathy, social skills
By learning about ocean life and the many ways humans choose to explore the sea, students gain concrete information about the world around them, as well as habits of mind that will enable them to continue their own journeys of exploration and discovery.
Skills for August 29-September 1:
ELA: The Sea
Central Text: Amos & Boris
Identify story elements in Amos & Boris.
Understand how an author groups related information together and why it is important.
Identify adverbs with the morpheme -ly and examine their function in particular sentences.
Group related information together in an explanatory paragraph.
With support, capitalize appropriate words in titles.
Define abstract nouns using the morpheme -less.
Math:
Use scaled picture graphs as an introduction to multiplication as equal groups.
Connect situations involving equal groups to tape diagrams.
Use multiplication expressions to represent equal groups.
Represent and solve multiplication problems.
Relate multiplication equations to situations and diagrams and write equations.
Social Studies: Introduction to Geography
Analyze maps and globes using common terms including country, equator, hemisphere, latitude, longitude, north pole, prime meridian, region, south pole, and time zones.
Use cardinal directions, intermediate directions, map scales, legends, and grids to locate major cities in Tennessee and the U.S.
Use different types of maps (e.g., political, physical, population, resource, and climate), graphs, and charts to interpret geographic information.
Science:
Describe the properties of solids, liquids, and gases and identify that matter is made up of particles too small to be seen.
Differentiate between changes caused by heating or cooling that can be reversed and those that cannot.
Describe and compare the physical properties of matter including color, texture, shape, length, mass, temperature, volume, state, hardness, and flexibility.
Have a great weekend!
The 3rd Grade Team
We define mechanical energy as the ability to produce mechanical work that a body possesses due to its mechanical state, such as its position or speed. There are two types of mechanical energy:
- Kinetic energy is the energy of motion. All moving objects have kinetic energy.
- Potential energy is the energy that an object stores as a result of its position. This stored energy of position is referred to as potential energy.
The mechanical energy of a body is the sum of its potential and kinetic energy. The principle of conservation of mechanical energy relates the two: according to it, the sum of the kinetic and potential energy of a body remains constant. Mechanical energy is constant if we do not take into account external forces such as friction.
Energy is the ability of a person or an object to do work or to cause a change. Kinetic energy is the form of energy that a body possesses because it is in motion. Kinetic energy depends on the mass and the speed of the body. The kinetic energy of a moving body is also equal to the work required to bring the body from rest to its current speed. Two types are distinguished:
- Translational kinetic energy, in which the object moves from one point to another, whether or not it moves in a straight line.
- Rotational kinetic energy, in which the object rotates on itself.
In an atomic view, thermal energy is the kinetic energy of the particles in the material.
Potential energy is the work that an object can do as a result of the object's state. This state can be the location in a force field (for example, gravity) or the object's internal configuration. The magnitude of the potential energy is not defined by itself; only differences in potential energy are physically meaningful. There are different types:
- Elastic potential energy, which is stored in a deformed object (for example, a stretched or compressed spring).
- Gravitational potential energy, which depends on gravity and, therefore, on height.
- Electric potential energy, which results from conservative Coulomb forces and is associated with the configuration of a particular set of point charges.
- Chemical potential energy, which depends on the chemical composition of a substance.
- Thermal potential energy, which is the potential energy at the atomic level that can turn into thermal kinetic energy or related forms of energy. The combination of thermal potential energy and thermal kinetic energy is the internal energy of an object.
Examples of Mechanical Energy
There are many examples:
- A falling ball. If we hold a ball in our hand it has potential energy and no kinetic energy. If we let go of the ball, it will begin to gain speed and lose altitude; in other words, its kinetic energy increases and its potential energy decreases.
- An electric motor converts electrical energy into mechanical energy.
- A hydroelectric power station (hydropower) harnesses the potential energy of the water at the top. When the water falls, the potential energy becomes kinetic.
- A car engine gets mechanical energy from chemical energy by burning fuel.
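To illustrate the falling-ball example and the conservation principle numerically, here is a minimal Python sketch. The specific numbers (a 1 kg ball dropped from 10 m, g = 9.81 m/s^2) and the helper function names are assumptions chosen for illustration, not values taken from the text:

```python
# Minimal sketch: mechanical energy of a ball falling from rest,
# ignoring air resistance, so total mechanical energy should stay constant.

G = 9.81  # gravitational acceleration in m/s^2 (standard approximate value)

def kinetic_energy(mass_kg: float, speed_m_s: float) -> float:
    """Kinetic energy: E_k = 1/2 * m * v^2, in joules."""
    return 0.5 * mass_kg * speed_m_s ** 2

def potential_energy(mass_kg: float, height_m: float) -> float:
    """Gravitational potential energy: E_p = m * g * h, in joules."""
    return mass_kg * G * height_m

mass = 1.0           # kg, assumed for the example
start_height = 10.0  # m, assumed for the example

for height in (10.0, 5.0, 0.0):
    # Speed after falling (start_height - height) from rest: v = sqrt(2 * g * fall)
    speed = (2 * G * (start_height - height)) ** 0.5
    e_k = kinetic_energy(mass, speed)
    e_p = potential_energy(mass, height)
    print(f"h={height:5.1f} m  E_k={e_k:6.2f} J  E_p={e_p:6.2f} J  total={e_k + e_p:6.2f} J")
```

Every row prints the same total (about 98.1 J with these assumed numbers): as the ball loses height, potential energy is converted into kinetic energy while their sum stays constant, which is exactly the conservation of mechanical energy in the absence of friction.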
What is Labor Day? Labor Day is a public and federal holiday (a holiday established by law and usually a non-working day) that is celebrated on the first Monday of September. It was established to honor the labor movement in the US and the contributions that workers have made which benefit our country. Additionally, Labor Day Weekend is considered the end of summer, with children going back to school and public swimming pools closing. In the late 19th century trade unions and the labor movement grew, leading trade unionists to want to set aside a day to recognize and celebrate labor. Labor Day was then further advocated by the Central Labor Union, which organized the first Labor Day parade in New York City in 1882. Oregon was the first state of the United States to make Labor Day an official holiday, which it did in 1887. Seven years later, in 1894, it became an official federal holiday. Historians say the expression “no white after Labor Day” comes from when the upper class would return from their summer vacations and stow away their lightweight, white summer clothes as they returned to school and work.
6 Important Reasons We Play Games Together at Kids Place Pediatric Therapy
Parents often ask us why we will place two children together during a therapy session to play a game. This is a great question that has an even better answer! When two children are playing together, we are able to build several social and communication skills. During a motivating game where we play together, we are able to target the following skills:
1. Eye Contact
Eye contact is important to signal active listening. It is also an essential component in paying attention to directions. Eye contact is something that many children need help with, and learning with a peer can be easier for some children. One way that we help to facilitate eye contact is by bringing objects to eye level, which directs the child’s gaze to their partner’s eyes.
2. Turn Taking
Turn taking is an important skill to teach children, and pairing a child with a peer for one-on-one interactions introduces them to socialization skills. It is important for a child to be able to request a turn, as well as to be able to hand a toy over when it is someone else’s turn. We also teach children to attend to the game, even when it is not their turn. One way that we help to facilitate turn taking is by using the phrase “whose turn is it?” paired with a visual cue for “my turn” (palm to chest two times).
3. Initiating
When children are first learning to communicate, they are often responders, answering our questions and making choices that we present. They also learn language through imitation, saying a word after we say it to achieve a desired result. The end goal is for them to be initiators, which means that they verbally request and comment without being cued. A simple way we teach this in games is by waiting. We will let the children look to each other to decide how the game will continue. We will also use basic indirect cues, such as “Hmm, Johnny’s turn is finished. I wonder what will happen next.”
4. Personal Space
Teaching children about personal space (their own and others’) is very important in creating the right connections and leaving a good impression. One way that we help children notice when objects or body parts are in a peer’s personal space is by giving a verbal cue, “that is too close,” or by drawing their attention to the peer’s reaction. We also work on teaching them to hand a peer an object without shoving, throwing, or grabbing it.
5. Conversational Turns
We teach conversational turns by showing children how to allow their peer a turn to talk and how to engage in the conversation. We redirect when they veer off the topic at hand. One way that we help to facilitate appropriate conversational turns is by giving the verbal cue, “Wait your turn” or “Let your friend finish.”
6. Problem Solving Skills
Problem solving skills are developed through everyday life and experiences. Working in a group and learning the process of working through the details of a problem to reach a solution together help lay the foundation for later cooperative group tasks. Even during simple games like Cariboo or building a Mr. Potato Head, we facilitate problem solving skills by trying not to jump in when things aren’t going as planned. We want children to be able to work together to trade objects, to move pieces out of the way or to request an action from each other. As you can see, there are many ways to work on social language skills when we play games together.
With a little structure, we are able to focus on important pragmatic elements, such as eye contact, turn-taking, initiating, proxemics, conversational turns and verbal cooperative problem solving. Parents can follow through with these tips at home when their children are playing with peers or siblings.
Distribution of Powers Functionally. There are two methods that may be employed for distributing governmental powers, the territorial and the functional. These two are not alternative methods. The territorial division relates to the splitting up of the territory of the State into political divisions and the distribution of governmental powers among such divisions. Each of the political divisions is provided with a governmental organization through which it performs its functions. But the work of government is so wide and complex that it is imperative to establish special organs for the performance of the several kinds of work to be done. This is necessary for two obvious reasons: first, that the benefits of specialization may be secured and, secondly, that responsibility may be more definitely located. When the work of government is distributed to political organs in accordance with the nature of the function to be performed, it is the functional distribution of powers. Carl J. Friedrich says that true constitutional government does not exist unless procedural restraints are established and effectively operating. Such restraints involve some division of power, for evidently some considerable power must be vested in those who are expected to do the restraining. Such a division of governmental power under a constitution has largely taken two forms: the functional division, such as that into legislative, executive, and judicial, and the spatial (territorial) division of federalism. Based upon this principle of distribution, all the powers of government have long been conceived as falling within one or another of three great classes, according as they have to do with: (1) the enactment or making of laws, (2) the interpretation of these laws, and (3) their enforcement. To these three classes have been given the names legislative, judicial and executive. Structurally considered, government has been deemed to be made up of three branches having for their functions the enactment, the adjudication, and the enforcement of law, and the branches to which these functions belong are known as the Legislature, the Judiciary, and the Executive respectively. This three-fold division of governmental powers had received such general recognition that it became a classical division. But recently it has been held by some writers that this division is unscientific. Willoughby, for example, says "that attempts to act upon it lead not only to confusion of thought but to serious difficulties in working out the practical problems of the distribution of governmental powers functionally." He suggests that the electorate and administration are distinct branches of government and that it is important to recognize their distinct character in the practical work of organizing and operating a government. In Sweden administrative power has been separated from the executive power, and Carl Friedrich says, "without any theoretical recognition of the fact, the American federal government tends in the same direction of differentiating between strictly executive and purely administrative functions." Gladden, however, does not support this point of view, and is of the opinion that administration "is subordinate to the main powers or branches of government." Gladden's opinion is convincing and we adhere to the three-fold division. Nor can there be a divorce between the electorate and legislative functions. Political sovereign and legal sovereign are the two aspects of the sovereignty of the State.
The will of the electors is the controlling power behind the legal sovereign and it is to their mandate that the legal sovereign must ultimately bow. Theory of the Separation of Powers. Political liberty, we have emphasized, is possible only when the government is restrained and limited. The theory that the functions of government should be differentiated, and that they should be performed by distinct organs consisting of different bodies of persons so that each department should be limited to its own sphere of action without encroaching upon the others, and that it should be independent within that sphere, is called, in its traditional form, the theory of the Separation of Powers. Montesquieu, the celebrated French scholar, wrote in his famous book, The Spirit of the Laws, that "constant experience shows us that every man invested with power is apt to abuse it, and to carry his authority until he is confronted with limits." Montesquieu poses the question clearly enough. He asserts that concentrated power is dangerous and leads to despotism. But how to avoid concentration of power? His answer is simple: by separating the functions of the executive, legislative and judicial departments of government, so that one may operate as a balance against another and, thus, power should be a check on power. Le pouvoir arrête le pouvoir; power halts power. A constitution may be such that none "shall be compelled to do things to which he is not obliged by law, or not to do things which the law permits." Montesquieu's thesis is the division of powers by functions, and the theory emerging therefrom is known as that of the Separation of Powers. The exposition given by Montesquieu has now become classical. The idea contained in the theory of the Separation of Powers was not entirely unknown before Montesquieu. Its origin can be traced back to Aristotle, if not indeed to earlier writers. In the Politics is found an analysis of three "parts" or branches of government: the deliberative, executive and judicial. Aristotle did not go into details. He confined himself to a description of their personnel, organization, and functions, without suggesting their separation. Various political philosophers, from Marsiglio of Padua in the fourteenth century, gave some attention to the theory of Separation of Powers, but it meant little to Political Science until the issue of political liberty became urgent in the seventeenth century; it began to acquire importance in the eighteenth and, with the critical times, came to the forefront of discussion. There are traces of the theory in John Locke's Civil Government. Locke distinguished between three powers that existed in every commonwealth. These he called legislative, executive and federative; the federative power related to the conduct of foreign affairs. The executive and federative powers, he pointed out, "are almost always united," and to this union he expressed no objection. But he would not permit the union of the executive power with the legislative.
The legislative power, he said, "in well-ordered commonwealths, where the good of the whole is so considered as it ought," is placed in the hands of an assembly that convenes at intervals. But since the administration and enforcement of law is a continuous task, a power distinct from the legislative must remain "always in being." In practice, "the legislative and executive powers come often to be separated." In principle, too, Locke argued that they should be separate, "because it may be too great temptation to human frailty, apt to grasp at power, for the same persons who have the power of making laws to have also in their hands the power to execute them." This division of authority and the separation of executive and legislative power is justified and explained by Locke on the ground that it is necessary for the maintenance of liberty. Liberty suffers when the same human beings make the laws and apply them. The Ideas of Montesquieu. Here were the threads for Montesquieu to gather, elaborate and expand, and then to formulate in concrete terms. Montesquieu lived in the time of Louis XIV, the author of the famous dictum, "I am the State." The monarch combined in his person all the three powers. His word was law and his authority was unquestionable. There was no liberty for the people under such an oppressive and despotic government. Montesquieu happened to visit Great Britain and was tremendously impressed by the spirit of freedom prevailing there. He tried to find out the causes of the liberty of the British people. He compared the independence of the judges and the strength of Parliament there with the subordination of the judiciary to the French Monarchy and the virtual extinction of the Estates-General. Not foreseeing the rise of the Cabinet system of government in Britain and keenly desiring to substitute political liberty for royal absolutism in France, Montesquieu advocated the separation of powers as a device to make government safe for the governed. The division of powers that he envisaged was the same as that of Locke, except for renaming Locke's executive power and calling it the judicial power; the executive function, as described by Locke, had been to execute the laws in any case. He also changed Locke's terminology and named his federative power the executive power. But in his insistence that they must be entrusted separately to different personnel he went considerably ahead of his predecessor. His most famous statement runs thus: "When the legislative and executive powers are united in the same person, or in the same body of Magistrates, there can be no liberty; because apprehensions may arise, lest the same monarch or senate should enact tyrannical laws, and execute them in a tyrannical manner. Again, there is no liberty if the judicial power be not separated from the legislative and executive. Were it joined with the legislative, the life and liberty of the subject would be exposed to arbitrary control, for the judge would then be the legislator. Were it joined to the executive power, the judge might behave with violence and oppression. There would be an end of everything, were the same man or the same body to exercise those three powers, that of enacting laws, that of executing the public resolutions, and of trying the cases of individuals." To explain briefly and in simple language, Montesquieu endeavored to establish that:
whoever has unrestrained power will abuse it. If the legislative and executive powers are combined in the same person or body of persons, there can be no liberty, because the same agency becomes the maker and executor of laws. Similarly, if the legislative and judicial functions are combined, the maker of laws is also their interpreter. If the executive powers are combined with the judicial, the same agency is the prosecutor as well as the judge. If all the three powers are concentrated in a single hand there would be an end of everything, as there will be tyrannical laws interpreted and enforced with the violence of an oppressor. Montesquieu's thesis is that concentration of legislative, executive and judicial functions, either in one single person or a body of persons, results in abuse of authority, and such an organization is tyrannical. He urged that the three departments of government should be so organized that each should be entrusted to different personnel, and each department should perform distinct functions within the sphere of powers assigned to it. There has been some controversy among students of Political Science whether Montesquieu contemplated an absolute or only a limited separation of these powers. One school is of the opinion that Montesquieu desired absolute separation so that each department remained independent and supreme within its own sphere. Others believe that he never thought to separate the powers completely; he rather suggested modification of the concentration of powers. Montesquieu was searching for means, as Herman Finer observes, "to limit the Crown; to make a constitution to build canals through which, but not over which, power should stream; to create intermediary bodies; to check and balance probable despotism and yet he did not wish to fly to the extreme of democracy." For Montesquieu, the executive convenes the legislature, fixes its duration, and vetoes legislation. The legislature has the right of impeachment. It may not arraign the chief of the State but, "as the person entrusted with the executive power cannot abuse it without bad counselors, and such as have the laws as ministers, though the laws protect them as subjects, these men may be examined and punished." His idea of impeachment is that of the political responsibility of ministers in our times. Locke's analysis of the government structure, too, proves that the various powers of government were not to be separated into watertight compartments. He made them dependent on the supreme power of the people; the executive functioned in subordination to the legislature; and the judiciary worked as part and parcel of the executive. The essence of Montesquieu's theory of Separation of Powers is that it imposes on each organ of government the obligation to explain itself and to see that it acts within the law and not beyond it. If the authority exercised is in excess of that permitted by law, it should be checked by the other in order to restrain its encroachments. And this is the correct meaning of le pouvoir arrête le pouvoir: power checks power. There must be a separation of powers within the structure of government in order that one power may operate as a balance against another power. Such a check Montesquieu considered necessary for safeguarding the liberty of the individual and for avoiding tyranny.
Montesquieu follows Locke, but with more system, and it is important to observe that he never thinks to separate the powers completely, but rather to modify the concentration of powers. A similar view was expressed by Blackstone, the British jurist. In his Commentaries on the Laws of England, Blackstone said, "Whenever the right of making and enforcing the law is vested in the same man or one and the same body of men, there can be no public liberty. The magistrate may enact tyrannical laws and execute them in a tyrannical manner since he is possessed, in his quality of dispenser of justice, with all the power which he as legislator thinks proper to give himself. Were it (the judicial power) joined with the legislative, the life, liberty, and property of the subject would be in the hands of arbitrary judges whose decisions would be regulated by their opinions, and not by any fundamental principles of law, which though legislators may depart from, yet judges are bound to observe. Were it joined with the executive, this union might be an over-balance of the legislative." Practical Effects of Montesquieu's Theory. Montesquieu's theory of Separation of Powers had a great democratic appeal and it soon became a political dogma. The teachings of Montesquieu gave a fillip to the French Revolution, and nearly all governments of the revolutionary period were organized on the principle of Separation of Powers. The famous Declaration of Rights, issued after the Revolution, laid down that "every society in which the separation of powers is not determined has no constitution." The Constitution of 1791 made the executive and the legislature independent of each other, and the judges elective and independent. For a short span of time, during the regime of Napoleon, it was defied, but the doctrine was constantly in the minds of the people. As a constitutional maxim it is jealously cherished even today. In the United States, Montesquieu's theory found its best expression. "We shall never know," says Herman Finer, "whether the Fathers of the American Constitution established the separation of powers from the influence of the theory, or to accomplish the immediately practical task of safeguarding liberty and property." But they definitely desired liberty in the sense enunciated by Montesquieu. They also desired limits upon despotism. Independence from British suzerainty had given them the first. A short experience with legislative supremacy, after the Declaration of Philadelphia, had convinced them that concentration of power in any one institution was fraught with abuse. While writing about the Constitution of Virginia, Jefferson wrote: "All the powers of government, legislative, executive, and judicial, result to the legislative body. The concentrating of these in the same hands is precisely the definition of despotic government. It will be no alleviation that these powers will be exercised by a plurality of hands, and not by a single one.
One hundred and seventy-three despots would surely be as oppressive as one." The same point was elaborated by Madison while issuing a similar warning: "The legislative department is everywhere extending the sphere of its activity, and drawing all power into its impetuous vortex. They (the founders of our republics) seem never to have recollected the danger from legislative usurpation, which, by assembling all power in the same hands, must lead to the same tyranny as is threatened by executive usurpation." If concentration of power was the evil to be avoided, was there, besides executive or legislative omnipotence, some third possibility? The alternative was what has come to be called Separation of Powers. In fact, Separation of Powers became a political creed with the statesmen and those engaged in the framing of the national constitution at the Philadelphia Convention. They were not new to the theory. The governmental system of the Colonial period embodied a species of Separation of Powers. Prior to 1776, the executive branch, under the Governor, was distinct from the legislative, and controversies between them were rampant in the two decades that led up to Independence. With the principle of judicial review, the statesmen of that day were also equally familiar, as the constitutionality of Colonial Acts could be challenged before the Judicial Committee of the Privy Council in London. "History, therefore, joined hands with philosophy in writing a separation of powers into the federal constitution." The influence of Montesquieu was, indeed, powerful and decisive. Madison unequivocally maintained that Montesquieu was "the oracle who is always consulted and cited on the subject." Whatever be the respective weights of influence in the Philadelphia Convention, the American Constitution, as Finer observes, "was consciously and elaborately made an essay in the separation of powers and is today the most important polity in the world which operates upon that principle." But the American Constitution did not explicitly state that powers ought to be separate. It simply distributed the powers; legislative powers were vested in Congress, the executive powers in the President, and the judicial in the Courts. While apportioning the lion's share of powers to one department of government, the Constitution gave smaller slices to each of the other departments. This was done to avoid concentration and consequent abuse of power. The maxim with the Fathers of the Constitution was that power should be limited, controlled and diffused. "If power is not to be abused then it is necessary, in the nature of things, that power be made a check to power." In the field of legislation, for example, the bulk of the law-making power was placed in Congress, but the President received his share in the powers to recommend measures, to summon Congress in special session, and to veto bills. Similarly, the Senate shared with the President his power to make appointments, declare war, and ratify treaties. The Supreme Court, by exercising the power of judicial review, asserted its claim to a portion of the legislative function. Congress, too, acted in a judicial capacity in cases of impeachment, where the House was empowered to prosecute and the Senate sat in judgment. The President could intervene in the business of the courts through his power of pardon for all offenses except treason.
As portions of each function were distributed among different agencies, the Separation of Powers was really intended to result in a system of checks and balances. The system of checks and balances had two obvious results. First, and ordinarily, unless the members of the three branches of government saw eye to eye and cooperated harmoniously, none of the principal functions of government could be adequately performed. Second, and conversely, if any department or pair of departments ventured to exceed their constitutional authority, they "could be restrained by the refusal of a third to contrive." In this way, the Fathers of the Constitution destroyed the concert of leadership in government which is so prominent a feature of our times. Finer thus sums up the theory of Separation of Powers as it has worked in the United States. He says: Legislative procedure has come to differ essentially from that in Britain and France; financial procedure is worlds apart; there is no coordination of political energy or responsibility; but each branch has its own derivation and its morsel of responsibility. All is designed to check the majority, and the end is achieved. At what cost? The cost cannot be measured in terms of dollars. With powers divided between the executive and legislative departments without any means of proper coordination, there is always inordinate delay in arriving at an agreement even on pressing matters which demand expeditious disposal. One branch of government may be operating on one policy whereas the other may follow quite a different one, particularly when the executive belongs to one party and the Congressional majority to another. Some Presidents have, no doubt, succeeded in bridging the gap separating them from the legislature. "But while an emergency may bring," says Zink, "temporary coordination and the use of patronage can usually be counted upon to pave the way to some action, the National government is still torn to parts by the provision which the framers made for separation of powers."
EVALUATION OF THE THEORY
The Theory Restated. Much has been said about the theory of the Separation of Powers. But what kind of Separation of Powers is needed? Here much of the clarity is obscured by the use of the ambiguous term "power." The government has certain functions to perform in order to serve the purpose of the State. If functions are taken as powers, then the idea of service entirely disappears and the organs of government become invested with power. Wherever there is power there is force. A government having its foundation on power becomes an engine of force. The use of the term power is most unfortunate and, accordingly, the cause of so much confusion. The doctrine of Separation of Powers is itself a protest against power, and its meaning can be better analysed and appreciated if we drop the reference to powers and substitute for it the functions of the organs or branches of government. A branch is an organization of agencies with their personnel. The services they undertake are their functions. The functions of the government are legislative (rule making), executive (rule application), and judicial (rule adjudication). Accepting this as the criterion of our distinction, the doctrine of Separation of Powers can be restated in the following manner. The activities of government group themselves into three divisions. These divisions are not a matter of theory; they are a practical fact associated with the character of the functions themselves.
It is one thing to legislate, another to administer, and a third to judge. By assigning each of these functions to different branches of government composed of separate personnel and following their own mode of action, separation is obtained. Such a statement transfers the doctrine from the realm of theory to that of political fact. Absolute Separation Impossible. But it does not mean absolute separation. Separation of Powers, according to Barker, must certainly mean a distinct mode of action. Each organ of government has its own distinctive mode of action. The legislative mode is deliberate and deliberative, the judicial mode is critical rather than deliberative, and the executive mode is a rapid determination of decisions and instructions in order to give effect to the legislative and judicial modes. In a word, as Barker says, "we shall find three organs corresponding to the three different modes of action, but we may find one of the organs so absolutely specialized in its mode of action, or so entirely separate in its province, that it cannot also act in the mode and enter the province of others." Madison correctly explained the doctrine of Separation of Powers when he said: The powers properly belonging to one department ought not to be directly administered by either of the other departments. It is equally evident that neither of them ought to possess, directly or indirectly, an overruling influence over the other in the administration of their respective powers. The premise of Federalist Paper 47, which is essentially concerned with the political theory of federalism, is that any person or body of persons possessing power may be tempted to abuse it unless controlled, and that power can be checked by power. The Separation of Powers attempts to create a balance among the competing units. The State is an organic unity and the various departments of its machinery are interconnected. By the nature of their functions, they cannot be divided into water-tight compartments. The government must always be viewed as a whole, and its organs, though distinct, must work in unison in order to be useful and effective in serving the purposes for which they have been created. The real problem, according to MacIver, "is so to articulate these that responsibility shall not be divorced from efficiency." The functions of government are divided into different departments so that each department does its job to the best of its efficiency and with due regard to its responsibility. Efficiency demands expert knowledge of the problems which face a country, and responsibility means the diversion of that knowledge towards those channels which are responsive to the needs of the people. This is the first principle of democracy. The Separation of Powers is, accordingly, needed for proper articulation and not for the division of the organs of government into water-tight compartments. To put it in the language of Almond and Powell, the theory of Separation of Powers is pre-eminently a functional theory: "Among its central concerns are the nature of legislative, executive and judicial power; the question of how best to maintain their separateness, the values resulting from such separation and the problem of how best to mesh these separate institutions of government with the structure of society." There cannot be any isolation or disharmony between the different departments of government. Isolation is not the essence of the doctrine and Montesquieu never suggested it.
Each department performs some functions which actually do not belong to it. In fact, in all modern systems, institutions exercise overlapping functions of some kind, or provision is made for some degree of cooperation between the different organs and branches to perform the work of government. The legislative department is not wholly and solely confined to the legislative mode of action, although it is primarily and mainly concerned with that mode. There is a judicial organ primarily and mainly concerned with the judicial mode of action, but not necessarily confined to that mode. There is, similarly, an executive organ which may be concerned with other modes of action besides the executive. A judge, for example, makes a new law when he gives a decision on a point not covered by law or in which there does not exist a precedent. Here is a case in which the judicial and legislative functions combine as a result of a natural process. Again, the executive everywhere possesses the power of issuing ordinances and proclamations. This is a device of practical utility, but it has to be admitted that ordinances and proclamations are a formidable substitute for legislation. The executive is a legislature in another sense too. It suggests and guides the process of law-making by the legislative organ. It does so under the American system of division of functions between the President and Congress; and it does so even more under a Cabinet system such as the British and the Indian. The legislature, too, performs various executive functions. In a parliamentary government it creates the real executive, retains it in office and controls its functions. In the Presidential system, as obtainable in the United States, the Senate has a share in making appointments and ratifying treaties. Executive and legislative departments perform judicial functions too. The Chief Executive head of the State everywhere possesses the power of pardon. The House of Lords is the highest Court of Appeal in Britain. The Senate in the United States acts as a court of impeachment. There is no Separation of Powers in Britain as Montesquieu is claimed to have understood it. He had in his mind a longing for liberty against the autocratic powers of kings and princes. Britain presented to him a sharp contrast with the conditions prevailing in his own country. Without forming a real idea of the actual working of a democratic government, more so a responsible one, he concluded that liberty can be secured only by a mechanical check of one department over the other. For him this was above all else a practical recipe for political liberty. But Montesquieu wrote at a time when institutional checks appeared to be the only feasible ones. The value of the doctrine, by dispersing functions among different political institutions, is that it attempts to provide a limit to political power and a brake on action by constitutional devices. Power must be limited if liberty is to exist, for unchecked power is as dangerous as the unity of temporal and spiritual powers. This is precisely what Montesquieu enunciated. All Departments not Coordinate. The traditional analysis of the doctrine of Separation of Powers takes for granted that the three wings of government are coordinate or equal. But this is not precisely so. With the growth of democracy the executive has been reduced to a subordinate position. The legislature is really the regulator of administration.
By its control over the finances of the country it limits and controls the executive, howsoever theoretically independent the executive may be. In a Cabinet system of government the subjection of the executive to the legislature at every step is indisputable. The judiciary, too, is obviously subordinate to the legislature, although its independence is the most coveted maxim of democracy. It does not, however, mean that the legislature is not subject to any kind of check. The bounds of the sovereign legislature are many and various. In the first place, the legislature is bound by moral and ethical codes. All proposals for law are assayed on the touchstone of practical utility and moral considerations. No parliament can pass laws which are against the facts of nature or are against the established codes of public or private morality. Secondly, the legislature, like the whole of government, is limited both by the purpose it fulfills and the mode of action it follows. The most important limit on the legislature is the limit imposed by the development and activity of political parties. There is what has been described as a parliamentary forbearance. The minority agrees that the majority should govern, and the majority agrees that the minority must criticize and oppose. Opposition is an effective restraint on the vagaries of the majority party in the legislature. Both the party in office and the Opposition understand the rules of the game and know that at some future date their positions may be reversed. Thus, the concept of Separation of Powers, in its traditional analysis, has been impossible to realize in any complete way. The totalitarians reject the doctrine of Separation of Powers from beginning to end. Separation of Powers is aimed at preventing despotism, whereas totalitarianism believes in unity and oneness of power. One of the Communist jurists wrote, "The Separation of Powers belongs to a political era in which political unity was reduced to a minimum in the interest of an autonomous bourgeois society. However, national and ethnic unity and oneness demand that all political powers be gathered in the hand of one leader." The Communists reject the doctrine outright as a bourgeois principle. Vyshinsky wrote: From top to bottom the Soviet social order is penetrated by the single general spirit of the oneness of authority of the toilers. The programme of the All Union Communist Party rejects the bourgeois principle of Separation of Powers. Soviet writers argued that Montesquieu developed the theory as a means of limiting the absolute powers of the Kings of France. In the Soviet Union there was no class conflict and hence there was no need to limit one branch of government by another. All organs of government had to work in the same interest. What the Doctrine Means Today. The modern democratic view does not accept the traditional analysis of the doctrine of Separation of Powers. It is explained that Montesquieu's views were the product of an era which looked upon government itself as something inherently dangerous and possibly despotic. That government was deemed best which governed least, as it existed to protect and restrain, not to foster and promote. But today, even the most conservative person is unable to think of government in purely passive terms. The intensive integration and complexity of modern industrial society, and the accepted concept of a Welfare State, demand more and more action and services from the government. All this needs planning of the life and resources of the nation.
The Welfare State tends to concentrate power on the executive level and, consequently, it means the ascendancy of the executive over the legislative branch. Locke had conceived of the relation between the three powers in terms of legislative supremacy. Montesquieu and Madison preferred to see an equilibrium between the three coordinate branches. But such a division now seems outmoded for all practical purposes, as it is incapable of guaranteeing the services which the government is expected to render. Planning and active service cannot be the work of separate branches of government which cancel each other out. Planning must be unified. Fusion and not rigid separation of functions is required. Thus, "moulds are broken in which the thoughts of Locke, Montesquieu and Madison were cast and their contents have spilled together." The ascendancy of the executive and the blurring of the traditional division of functions have been influenced by two other important tendencies. One is the organization of the career civil service and the second is the emergence of political parties with their nationwide organizations. Political parties unite what one may try to separate. The development of the executive, therefore, into what may be called a multi-functioning organ is one of the most notable features of modern government. To put it in the words of Barker, "If the growth of the legislative organ, in consequence of the development of the cabinet system, was the notable feature of the eighteenth century, it may be said that the growth of the executive organ, in consequence of the extension of rights and the corresponding extension of services which mostly fall to the lot of the executive, is the notable feature of the twentieth." Today, the executive is not only an executive; it is also, at the same time, a legislature, and it exercises a judicial jurisdiction too. Administration and adjudication no longer seem as different as they had once appeared. The core of the modern problem of government is to find a synthesis combining the answer to twin needs: the need for the Welfare State and the need for individual liberty. The Welfare State, as said before, means concentration of power on the executive level and, accordingly, the ascendancy of the executive over the legislative branch. This tendency seems to be an alarming development to many. It is, undoubtedly, alarming unless controlling and balancing devices are properly developed to keep pace with the ever-changing face of executive power. The doctrine of Separation of Powers has become more important today than perhaps at any other time. One of the checks on the executive is the system of judicial review. Montesquieu himself was particularly interested in the judicial power as a check over, and an arbiter between, the other two branches. This concept is more clearly realized in the United States, India and some Commonwealth countries. The idea of an independent and coequal judicial branch also spread to Germany (Bonn) and Austria. In France and Italy the supreme administrative court, the Council of State in the former, applies the most effective check on the executive power, although it is nominally part thereof. The balance between the executive and legislative branches is a legal question primarily in countries with the Presidential system of government.
There the Constitution prescribes the rights of both as well as their limitations. The emergence of political parties in the United States, it is suggested, has tended to redistribute the authority divided by the Constitution and has obliterated the doctrine of Separation of Powers to a considerable extent. Carl Friedrich, however, is of the view that the emergence of political parties "does not obliterate the Separation of Powers but it certainly softens it." At the same time the alternation of two parties itself constitutes a regularized restraint which consequently reduces the need for rigid separation. But under a parliamentary system the principal check is the existence of political parties and the development of the constitutional custom of party alternation. There is also the impartial judiciary. Then, the elected representatives "debate and propose", in Harrington's phrase, while the electorate resolves, through general elections, which party is to form the government and which to constitute the Opposition. By this process the basic conception of balance and counterpoise is effectively preserved. Barker suggests another check. He accepts the bare truth of our times that the executive is a multi-functioning organ, but he emphasizes that when the executive performs legislative and judicial functions it should employ the mode of action relevant to that department. For example, if the executive exercises judicial functions, let it adopt the proper and peculiar mode of judicial action; that is, it must accept the procedure of public hearings, the summoning of witnesses and the recording of evidence according to the rules of evidence. It must publish its decisions and it must also admit, if it possibly can, the possibility of appeal. The need is, therefore, for union as well as separation. Democratic government demands that a synthesis be found between the Separation of Powers and the possibility of concerted government action. The first is obtained by continuing with the separate organs of government. It is intrinsically good to do so, for it sets a limit of jurisdiction over the functions of each organ. Each organ establishes its own distinctive mode of action with its own distinctive technique. But this does not mean that separation of functions prevents leadership. Too much separation destroys responsibility, immobilizes action and ultimately destroys free government. Without leadership there would soon be a constitutional crisis and the possibility of the rise of dictatorship. But it is essential to temper leadership by imposing limitations upon it. The real limitations are those which make the government responsible to the people; that is, it must answer to the people for its policies and, if its answers are not satisfactory to the people, they should have the means to replace it. This can be ensured further by the presence of an independent and impartial judiciary, the guardian of the rights of the people. Thus, the Separation of Powers is a living force in all democratic countries as a check on irresponsible power. In the context of what has been said above, the theory of Separation of Powers now rests upon broader grounds than those suggested by the limited doctrine of Locke and Montesquieu. It reconciles theory with practice and thereby establishes harmony between division and concentration of powers to maintain the safety of the political order as a whole.
It stands for an effective system of divided powers as contained in the classical doctrine and considers it sound, but holds that there is nothing sacrosanct about it. It appreciates the difficulties resulting from divided powers and considers them great, but it also realizes that the consequences of concentrating power are really disastrous. This has given rise to a new theory of divided powers, a scheme suitable, on the whole, to the needs of an industrial society. The advocates of this new theory point out that the classical doctrine of Separation of Powers has an implicit double meaning. On the one hand it contains a generalization, theory or hypothesis; on the other hand it contains a practical suggestion, a proposal for the organization of government in the interest of individual liberty. The idea that there are three major types of governmental power seems to them a valid generalization and one in accord with the operations of the human mind. They agree with Immanuel Kant that this distinction of powers corresponds to the pattern of a practical syllogism, divided, as a syllogism is, into the major premise, the minor premise, and the consequent. The resemblance between the distinctions underlying the separation of powers and the pattern of a syllogism is due to the fact that commands imply decisions, and decisions in turn imply judgments. Power means, inter alia, that a person or group possesses the ability to command, and the ability to command involves the ability to decide whenever there is a choice between several alternatives. Power, therefore, admits of commanding and deciding. Specific decisions and commands are the realm of executive power; general decisions and commands fall within the sphere of legislative power. The latter is for that reason often called the rule-making power. Similarly, the executive power may be called measure-taking or rule application. The judicial power apparently stands between the two, for it transforms a general decision into a specific one. When a general command has been given, or a general decision made, that is, the rule has been established, there still remains the further decision involved in adjudicating the rule. The judicial power makes a specific decision by applying the rule. The decision made is not a command. It is for this reason that the pronouncements of the courts, while adjudicating, are described as decisions and the whole process as adjudication; that is, the specific decision is rendered while applying rules. But the advocates of the new theory of divided powers contend that most of the time government functionaries are their own judges. Whenever they decide to do or not to do something because the law demands or forbids it, they apply that law by subsuming the particular situation with which they are confronted under the established legal rules. Ordinarily, it is only the doubtful and controversial points of law which are brought before the courts. The majority of them are decided by the various commissions and tribunals with which an industrial society is honeycombed today. The decisions of these commissions and tribunals are administrative in their nature and not in strict accord with the classical theory of the Separation of Powers. Those who criticize the new theory of divided powers seldom appreciate the practical aspect of the functions of government and the task which it has to undertake.
Administrative tribunals and commissions have taken deep root in almost all countries and, in some, such as India, such tribunals carry constitutional sanctity. The only point which needs to be emphasized is that these commissions and tribunals should adopt the mode of judicial action, as Barker suggested.
Nearly half of adults with autism will experience clinical depression in their lifetime, according to our new research published in the Journal of Abnormal Child Psychology. Depression can have devastating consequences for individuals with autism, including a loss of previously learned skills, greater difficulty carrying out everyday tasks, and at worst, suicide. People with autism should be regularly screened for depression so that they can access appropriate treatment. Autism is a condition that involves difficulties with social interactions and restricted and repetitive patterns of behavior. Autism also raises risk for severe mental illness. Until now, researchers and clinicians did not know how many individuals with autism were affected by depression. Our study, which involved a systematic review of nearly 8,000 research articles, reveals clear evidence that depression is highly prevalent in both children and adults with autism. It also reveals that depression is more common in individuals with autism who have higher intelligence. Clinical depression is defined in the “Diagnostic and Statistical Manual of Mental Disorders” by a longstanding pattern of negative mood. Additional features include loss of interest in activities, physiological changes (e.g., sleep, appetite or energy disturbance), cognitive changes (e.g., feelings of worthlessness, difficulties with attention) and suicidal thoughts or actions. In the general population, clinical depression is the leading cause of disability worldwide. Depression in autism is defined by these same criteria, but the symptoms can be challenging to detect. Individuals with autism often have trouble identifying and communicating their feelings. Clinicians may have to rely on observed behavior changes or the reports of others close to the individual to make a diagnosis. Clinicians also have to be particularly careful that they do not confuse the features of depression with those of autism. For example, people with autism and people with depression have difficulties with social relationships. The key difference between these groups is why they experience these problems. People with autism often lack the social skills necessary to engage with others. By contrast, people with depression often withdraw from others because they lose the ability to find pleasure in their social interactions. Intelligence and depression: We found that the highest rates of depression are seen in individuals with autism who have above-average intelligence quotients (IQs). This finding is in contrast to the general population, where lower intelligence is associated with higher rates of depression. Although this study did not look into why higher intelligence is associated with higher depression rates in autism, we can make some guesses. It could be that individuals with autism who have above-average IQs are more aware of the social difficulties associated with their autism diagnosis, and this awareness leads to higher rates of depression. On the other hand, it could be that individuals with below-average IQs have difficulties communicating their symptoms, making it difficult to diagnose depression in this subgroup. We also learned that how studies assess depression influences the rates of depression. Rates are highest among studies that used standardized structured interviews to assess depression, compared with studies that used less formal assessment methods. It is possible that structured interviews may be picking up on features that other assessment methods are missing. 
At the same time, structured interviews may bias the prevalence of depression because these interviews were not designed for people with autism. Depression is also more common when clinicians ask the person with autism directly about their symptoms, rather than asking a caregiver. It is possible that individuals with autism are experiencing depressive symptoms that their caregivers are missing. It is also possible that studies used a caregiver when participants were not able to report on their own symptoms (for example, because of low IQ). Depression is more widespread in people with autism than we previously thought. This important research will hopefully prompt clinicians to include an assessment of depression in their routine clinical practice with people who have autism. This assessment will ensure that people with autism are receiving appropriate treatment. This article was originally published on The Conversation. It has been slightly modified to reflect Spectrum‘s style.
Could you cope without the numerous electronic devices that help you get through the day? You might have to if extreme space weather heads our way. Every 11 years the Sun enters a period of heightened activity – a solar maximum. At solar maximum, the Sun's twisted magnetic field unwinds by breaking at certain points and reforming in a more relaxed state. In doing so, the Sun releases huge amounts of ultraviolet and X-ray radiation in the form of solar flares, as well as hot plasma in coronal mass ejections. And it's these coronal mass ejections, or CMEs, that make up the extreme space weather events that pose most danger to us on Earth. When a CME is released, scientists scramble to see whether it will affect the Earth. Data from the few satellites that monitor the Sun is fed into computer models to forecast an arrival time at Earth, but with only limited amounts of information available, the best predictions are still a few hours out. In addition, not all CMEs of the same size affect the Earth in the same way. The Earth is more vulnerable to CMEs when the magnetic fields of the Earth and the CME combine. The potential for this to occur, what scientists call geoeffectiveness, increases when the two magnetic fields are pointing in opposite directions. Currently, the only way to determine the direction of the magnetic field inside a CME is from data collected by an ageing NASA probe called ACE. Launched in 1997, ACE is located at the first Lagrangian point – a point in space along the line between the Earth and the Sun where the gravitational pulls balance. Spacecraft can stay at this point for many years using little fuel, but its short distance from Earth means that ACE can only give 30-60 minutes' warning of what's heading our way. Jim Wild, a scientist from Lancaster University studying the aurora, says that being able to predict the direction of the magnetic field of a CME before it arrives at the Earth is "a nut we'd like to crack". Scientists are working on a new method that relies on observing changes in very long wavelength radio waves from distant stars as they travel through a CME. However, Mike Hapgood, a plasma scientist and advisor to the UK government on space weather risks, admits it's still early days: "It's not been tried before and the new technologies like new radio telescopes are just getting to the point where it might be possible." Extremely large CMEs, with the potential to cause widespread disruption to electronic devices, occur every few hundred years. In 1859 the magnetic field of the Earth was disrupted so much by a CME that the aurora borealis was seen as far south as Hawaii and telegraph stations caught fire. With our modern-day reliance on electronics, many industry sectors are taking extreme space weather seriously even though many of the precise effects of an extremely large CME are unknown. "I'm much happier there are conversations going on now at a top level between government, industry and science to quantify impact on the technological systems we use," says Jim Wild. If a CME were to cause a power grid to shut down, there would be huge consequences. "Today it's a big deal. Without power you don't have refrigeration, which is important for food and medicine; you lose all communication; and most importantly there would be no electronic money – ATMs won't work, debit cards won't work," says Mike Hapgood. This is another great chance for the world to unite, share expertise and work together. As Mike says, "This is a global problem. We're all in this together.
If you had a really big event, there aren't any safe places on Earth."
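As a rough illustration of why ACE's position buys only around 30-60 minutes of warning, here is a back-of-the-envelope sketch (not from the article). It assumes the first Lagrangian point sits roughly 1.5 million km sunward of Earth, which is the commonly quoted figure, and uses a few representative solar wind and CME speeds rather than measured values.

# Back-of-the-envelope warning time from the L1 point, assuming L1 is about
# 1.5 million km from Earth. The speeds are representative values, not data
# taken from the article.
L1_DISTANCE_KM = 1.5e6

def warning_minutes(speed_km_s: float) -> float:
    # time for material travelling at a constant speed to cover the L1-Earth gap
    return L1_DISTANCE_KM / speed_km_s / 60

for speed in (400, 800, 2000):  # slow solar wind, fast CME, extreme CME (km/s)
    print(f"{speed} km/s -> about {warning_minutes(speed):.0f} minutes of warning")

At typical speeds the lead time lands in the 30-60 minute range quoted above, and an extreme, very fast CME would shrink it further.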
The government imposes certain compulsory levies on individuals and establishments, which are known as taxes. Almost every country in the world imposes taxes on its citizens, primarily to raise revenue for government expenditure, as well as for other purposes. Taxes are a vital source of government earnings, and all taxpayers need to understand the purpose of taxation. While taxes are without a doubt collected for the well-being of taxpayers as a whole, an individual taxpayer's liability is independent of any particular benefit received. The purpose of taxation is widely classified into three parts, i.e., reducing inequality, reducing the consumption of demerit goods and financing government consumption. Four significant purposes of taxation are listed here according to various economists: revenue, re-pricing, redistribution, and representation.
Purpose of Taxation
- Revenue- By imposing various kinds of taxes on individuals, such as income tax, property tax, sales tax and other taxes, the government earns money, and this money is primarily spent on the development and improvement of the country. Therefore the prime purpose of taxation is believed to be revenue. The government spends the money raised from various taxes on the construction of roads, schools and hospitals, and on other development causes. The revenue earned is also spent on the justice system and on regulation by the national government.
- Re-pricing- Governments use the re-pricing concept of taxation in order to discourage the consumption of harmful goods in society. Re-pricing is believed to be an authoritative purpose of taxation. Goods like tobacco and alcohol are products whose consumption affects both society and the consumer. Excise duty is an example of such taxation.
- Redistribution- In every country there is economic inequality in society, and every nation tries its best to minimize it. Countries use progressive taxation, imposing higher tax rates on higher income groups and lower rates, or sometimes no tax at all, on lower income groups. This purpose of taxation is also known as redistribution. It means the government collects wealth from the more affluent class of society and then allocates it to the economically less privileged sections of the community (a small worked example follows the list of objectives below).
- Representation- The government applies various types of taxes, and the tax rates imposed on society can vary on multiple grounds. The primary purpose of tax distribution is to spread the burden of tax among the diverse classes and people of society. Taxes are imposed on the community in order to support the poor, and those who are working are taxed, as the social security systems of the modern world show.
Various objectives of taxation
The different objectives of taxation are summarized as under:
- Regulatory objective- Taxation plays an essential regulatory part in various socio-financial matters.
- The objective of improving revenue- The primary purpose of taxation is raising revenue. Modern governments need tremendous amounts of money; national defense, the development of facilities and social-upliftment programmes make regular and methodical resource mobilization necessary.
- Regulating imports and exports- Imports of unwanted products can be controlled by imposing excessively high import duties. Exports can be encouraged by cutting duties and taxes on exports.
- Economic development- Economic growth is measured with reference to the national product, i.e., the output obtained in all the important segments of the economic system: farming, industry and services. Taxation can be utilized as a catalyst for any one, or all three, of these areas by careful changes in tax rates.
- Increasing employment opportunities- Medium and small businesses usually have the highest potential for employment; particular economic zones, industrial estates, trade-focused parks, etc. have excellent employment prospects.
- Reduction in regional imbalances- Some areas may become well developed compared to others in the country. Tax rewards and concessions for starting ventures in the less developed regions can be a suitable method of dealing with the problem.
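To make the redistribution idea above concrete, here is a minimal worked sketch of a progressive tax schedule. The brackets and rates below are purely hypothetical, invented for illustration; they are not taken from any real tax code or from the text above.

# Minimal progressive-tax sketch. The brackets and rates are hypothetical,
# chosen only to show how higher incomes attract higher marginal and effective rates.
BRACKETS = [               # (upper limit of bracket, marginal rate)
    (10_000, 0.00),        # income up to 10,000 is untaxed
    (40_000, 0.10),        # the next 30,000 is taxed at 10%
    (100_000, 0.25),       # the next 60,000 at 25%
    (float("inf"), 0.40),  # anything above 100,000 at 40%
]

def tax_due(income: float) -> float:
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income > lower:
            tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

for income in (8_000, 30_000, 250_000):
    t = tax_due(income)
    print(f"income {income:>7}: tax {t:>9.2f} (effective rate {t / income:.1%})")

Under these invented brackets a low earner pays nothing, a middle earner pays an effective rate of a few percent, and a high earner pays an effective rate above thirty percent, which is the redistributive effect described above.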
DTD - XML Building Blocks
The main building blocks of both XML and HTML documents are elements.
The Building Blocks of XML Documents
Seen from a DTD point of view, all XML documents are made up of the following building blocks: elements, attributes, entities, PCDATA, and CDATA.
Elements
Elements are the main building blocks of both XML and HTML documents. Examples of HTML elements are "body" and "table". Examples of XML elements could be "note" and "message". Elements can contain text, other elements, or be empty. Examples of empty HTML elements are "hr", "br" and "img".
Attributes
Attributes provide extra information about elements. Attributes are always placed inside the opening tag of an element, and they always come in name/value pairs. The following "img" element has additional information about a source file: <img src="computer.gif" />. The name of the element is "img". The name of the attribute is "src". The value of the attribute is "computer.gif". Since the element itself is empty, it is closed by a "/".
Entities
Some characters have a special meaning in XML, like the less-than sign (<) that defines the start of an XML tag. Most of you know the HTML entity "&nbsp;". This "no-breaking-space" entity is used in HTML to insert an extra space in a document. Entities are expanded when a document is parsed by an XML parser. The following entities are predefined in XML: &lt; (<), &gt; (>), &amp; (&), &quot; (") and &apos; (').
PCDATA
PCDATA means parsed character data. Think of character data as the text found between the start tag and the end tag of an XML element. PCDATA is text that WILL be parsed by a parser. The text will be examined by the parser for entities and markup. Tags inside the text will be treated as markup and entities will be expanded. However, parsed character data should not contain any &, <, or > characters; these need to be represented by the &amp;, &lt; and &gt; entities, respectively.
CDATA
CDATA means character data. CDATA is text that will NOT be parsed by a parser. Tags inside the text will NOT be treated as markup and entities will not be expanded.
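As a small illustration of these building blocks working together, here is a hedged sketch (not part of the original tutorial) that parses a made-up document with Python's standard xml.etree.ElementTree module. The element names, the "date" attribute and the "writer" entity are all invented for the example.

# Sketch: one made-up XML document that uses an element, an attribute,
# an internal entity, PCDATA and a CDATA section, parsed with the standard library.
import xml.etree.ElementTree as ET

xml_doc = """<?xml version="1.0"?>
<!DOCTYPE note [
  <!ENTITY writer "Donald Duck">
]>
<note date="2008-01-10">
  <to>Tove &amp; Jani</to>
  <body><![CDATA[<b>not parsed as markup</b>]]></body>
  <signature>&writer;</signature>
</note>
"""

root = ET.fromstring(xml_doc)
print(root.tag, root.attrib)        # note {'date': '2008-01-10'}  (element + attribute)
print(root.find("to").text)         # Tove & Jani  (PCDATA: &amp; is expanded by the parser)
print(root.find("body").text)       # <b>not parsed as markup</b>  (CDATA left untouched)
print(root.find("signature").text)  # Donald Duck  (the internal entity is expanded)

The comments show what each line is expected to print; the key contrast is that entity references in PCDATA are expanded while the CDATA content passes through untouched.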
ZIMSEC O Level Geography Notes: Surface Water Flow and the Origin of Rivers
Surface water flow and origin of rivers
- Rain falling on land flows down the slope as sheet flow, rill flow and gully flow, all of which contribute to stream discharge.
- Underground water oozes out at certain points called springs and also contributes to stream discharge.
- Sheet flow is a type of overland flow or downslope movement of water which takes the form of a thin, continuous film over relatively smooth soil or rock surfaces.
- It is generated when rain falling onto the earth's surface flows over the whole surface as a thin layer of water.
- It commonly occurs at the head of the watershed where the slope is gentle and the surface flat, e.g. artificial surfaces, rocks etc.
- Rills are shallow channels (no more than a few tens of centimetres deep) cut into soil by the erosive action of flowing water.
- As the slope steepens, the amount of water increases and, as sheet flow encounters surface irregularities, it turns into small shallow channels or rivulets known as rills.
- Rills in turn join up with other rills and form gullies.
- A gully is a landform created by running water eroding sharply into soil, typically on a hillside.
- Gullies resemble large ditches or small valleys, but are metres to tens of metres in depth and width.
- The process by which gullies are formed is called gullying.
- A gully may grow in length by means of headward erosion at a knickpoint.
- Gullies are sometimes known as dongas.
- Gullies empty into streams, which are perennial rivers.
The results of water erosion
- Sheet flow results in sheet erosion.
- This results in the washing away of fertile top soils and shallow soils.
- Rock surfaces and plant roots are also exposed by sheet wash.
- Rill flow results in rill erosion.
- Gully flow results in gullies, also known as dongas.
- Both rill and gully erosion result in the formation of dongas and ravines.
The problems of dongas
- Can lead to some areas becoming inaccessible as they are difficult to cross, especially for carts and motor vehicles.
- Disrupts communication lines such as roads.
- Reduces the area available for crops, pastures and settlements.
- Can lead to the uprooting of trees.
- Contribute to siltation.
- Humans and animals can fall into these ravines, leading to injuries.
All children can be rowdy and hyperenergetic from time to time. And they all have times when it's difficult for them to sit still and concentrate. But if these behaviors reach chronic levels, or measurably interfere with their ability to learn and cope with their lives, then it's possible the child has attention deficit disorder (ADD), most recently referred to as attention deficit hyperactivity disorder (ADHD). ADHD starts in childhood and affects approximately 12% of children under 16 and is on the rise. Around 60% of children with ADHD will still have symptoms by the time they reach adulthood. Many sources go on and on about the types of behaviors that are symptomatic of ADHD, but they mostly boil down to the following:
- Hyperactivity (inability to calm down)
- Impulsive behavior (extremely low impulse control)
- Inattention and lack of ability to concentrate
- Learning disabilities (largely due to the above symptoms)
- Defiant or disruptive behavior, angry outbursts
What Causes ADD/ADHD?
ADHD is considered to be a neurological disorder or, more specifically, a neurodevelopmental disorder, meaning that it is a problem with brain development. However, scientists and healers are divided on the subject, and many natural healers believe that ADD and ADHD is just the pharmaceutical industry's label for a range of symptoms that largely come from poor nutrition, specifically from the Standard American Diet (SAD). Other natural health practitioners believe that ADHD is a set of symptoms that come from dysfunctional environments (home, school, church, etc.). Following is a summary of the most accepted theories:
- Genetic predisposition
- Brain injury during pregnancy, at birth, or in early childhood
- Environmental toxins during pregnancy and in early childhood
- Lack of nutrition combined with excess of bad foods
- Dysfunction in a child's key environment
- Mercury toxicity from vaccines
Treatments for ADD/ADHD
Most medical experts say that there is no cure for ADD/ADHD, only treatments that help keep ADD-related behaviors in check and improve cognitive functioning. Whether or not that's true depends on your definition of what, exactly, ADD/ADHD is. There's a good chance that one or more of these natural cures will help or even cure the problem. Therefore, the best therapy involves a combination of cognitive, nutritional, and psychological treatments, including the following:
- Replace sugary, fatty foods like cheese, candy, sodas, and fried foods with nutrient-rich, health-forming foods like green leafy vegetables, Spirulina, kelp, bee pollen, and maca. Studies prove that children with better diets are able to concentrate and relax more, and have higher achievement levels.
- Increase omega fatty acids in the diet and through supplements.
- Add vitamin C and E supplements and start an antioxidant-rich diet.
- Avoid food additives and foods known to cause allergies, including wheat, dairy products, and processed meats.
- Reduce stress at home and in school and implement relaxation and calming practices.
- Reduce dysfunction at home through emotional support treatments for the entire family.
The herb skullcap has been shown to help children with ADHD symptoms, and a daily dose of 500 mg of magnesium relieves symptoms by promoting a calm feeling. If you have trouble getting your child off a sugar- and junk-food-dependent diet, try getting away from the source of the temptation. Take a healthy, good-foods vacation for the entire family.
This Positive Emotions Pupil Book consists of 10 lessons which introduce children to five evidence-based strategies for boosting positive emotions on a daily basis. Each of these strategies is presented as an 'ingredient' which makes up a 'Positive Emotion Potion':
- Expressing gratitude
- Understanding and experiencing 'Flow' activities
- Performing acts of kindness
- Noticing and savoring small positive events
- Keeping fit and healthy.
OTBlearning is the latest extension of the well-known CPD publisher and distributor Outside The Box Learning Resources. We are currently offering online courses for teachers and plan to extend into SNA courses and courses for parents in the near future.
Weaving Well-Being – Positive Psychology, Relationships, and Resilience
This online course provides an understanding of the science of Positive Psychology and offers practical and evidence-based strategies for supporting children's and teachers' well-being. It is grounded in the SPHE curriculum and shows how a whole-school approach can be supported by the SSE process, with particular reference to Well-Being in Primary Schools (DES, 2015).
Comets may contain much less carbon than thought, which could rewrite what role they might have played in delivering the ingredients of life to Earth, a new study suggests. Researchers have detected carbon-loaded molecules in comets in the past, including some simple amino acids, which are considered the building blocks for life. The presence of these organic molecules in comets, as well as the fact that comets regularly strike planets, suggested they might have helped seed our planet with the carbon-based materials needed to form life. To learn more about the carbon in comets, scientists analyzed wide-field images of comet C/2004 Q2 (Machholz) recorded by the Galaxy Evolution Explorer (GALEX) satellite. They focused on the ultraviolet light shed by the envelope of dust and gas surrounding the comet's nucleus. Carbon atoms on comets become ionized, or electrically charged, when they are hit by enough energy from the sun. The researchers studied radiation emitted by charged carbon atoms to determine how long it takes most carbon on a comet to become ionized. They found that this process occurs after only seven to 16 days, much more quickly than thought. This suggests that past research could have overestimated the amount of carbon in comets "by a factor of up to two," researcher Jeff Morgenthaler, a space physicist at the Planetary Science Institute in Tucson, told SPACE.com. Scientists have known that sunlight can charge carbon. These new results show how much the solar wind, the gusts of electrically charged particles from the sun, also influences carbon in space. "This had been predicted earlier, but until now no one had quantitatively put all the pieces together and done a measurement that confirmed it," Morgenthaler said. These findings "could rein in speculation as to what carbon-containing molecules comets might have been contributing to Earth," Morgenthaler said. By rewriting what scientists know of carbon levels in comets, the discovery might also influence models of how these space rocks are formed. "We are looking for trends in the compositions of comets as a function of their orbital dynamics," Morgenthaler said. "Orbital dynamics can tell us something about where comets came from; this research helps provide a clearer picture of what they are made of. Together, they provide a view of the early solar system." Morgenthaler and his colleagues will detail their findings in the Jan. 1 issue of the Astrophysical Journal.
Is there a better way to learn how a rotary engine works? On this episode of Engineering Explained, Jason Fenske explains how the Wankel rotary engine works. Using a 3D-printed 1/3-scale model of a 13B-REW engine from an FD Mazda RX-7, we get a closer look at how rotaries function. The Wankel rotary engine was first used by Mazda when the company debuted the Cosmo back in 1967. It was later used in pickup trucks, but didn't gain popularity until it found its way into the first-generation RX-7 in 1978. From there, rotary engines and the RX-7 name became synonymous until the final production of the RX-8 in 2012. Unlike conventional piston-pumping internal combustion engines, the Wankel engine contains a rotor instead. Looking at the model of the 13B-REW, you can see inside the rotor housing where all the fun happens. The Dorito-shaped rotor inside is the key to making power and rotates with the help of the eccentric shaft. The shaft and rotors spin together, as opposed to a four-stroke piston engine, which uses reciprocating motion. During the rotation of the rotor, all three chambers formed by its faces are active at once, each at a different point in the combustion process: intake, compression, power and exhaust. With the 13B engine having two rotors, that means six cycles are occurring simultaneously. This combustion process allows the rotary engine to create a lot of power compared to a similar four-stroke engine. Not having to deal with reciprocating mass going up and down, rotary engines can rev up to 9,000 rpm with no problem thanks to their purely rotational motion. Due to the long shape of the combustion chamber, there is often unburnt fuel exiting the exhaust, which isn't very efficient. By design, rotary engines burn oil to help seal the combustion chamber. This is why most RX-7 owners carry quarts of oil in the trunk. Rumors of the Mazda RX-7's return surface every year, but will it ever really happen? Only time will tell.
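To put rough numbers on the two-rotor layout described above, here is a small back-of-the-envelope sketch (not from the video). It assumes the standard Wankel geometry, in which the rotor turns at one third of the eccentric-shaft speed, so each of a rotor's three faces fires once per rotor revolution and each rotor contributes one power pulse per output-shaft revolution.

# Rough sketch of combustion-event counting for a two-rotor Wankel,
# assuming the usual 3:1 eccentric-shaft-to-rotor speed ratio.
def power_pulses_per_minute(shaft_rpm: float, rotors: int = 2) -> float:
    rotor_rpm = shaft_rpm / 3          # the rotor spins at one third of shaft speed
    pulses_per_rotor = 3 * rotor_rpm   # each of the three faces fires once per rotor turn
    return rotors * pulses_per_rotor   # works out to rotors * shaft_rpm

for rpm in (3000, 6000, 9000):
    print(rpm, "rpm ->", int(power_pulses_per_minute(rpm)), "power pulses per minute")

At the 9,000 rpm mentioned above that is 18,000 power pulses per minute from just two rotors, the same firing frequency as a four-cylinder four-stroke at the same crank speed.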
Lynching, the practice of killing people by extrajudicial mob action, occurred in the United States chiefly from the late 1700s through the 1960s. This type of murder is most often associated with hanging, although it often included burning and various other methods of torture. Only rarely were lynchers punished, or even arrested, for their crimes. Lynching is often associated with white supremacy in the South after the American Civil War. The granting of civil rights to freedmen in the Reconstruction era (1865–77) aroused anxieties among white citizens, who came to blame African Americans for their own wartime hardship, economic loss, and forfeiture of social privilege. African Americans, and whites active in the pursuit of equal rights, were frequently lynched in the South during Reconstruction, but lynchings reached a peak in the late 19th and early 20th centuries, when southern states enacted a series of segregation and Jim Crow laws to reestablish white supremacy. Notable lynchings of civil rights workers during the 1960s in Mississippi contributed to galvanizing public support for the Civil Rights Movement and civil rights legislation. The Tuskegee Institute has recorded 3,437 lynchings of African Americans and 1,293 lynchings of whites between 1882 and 1968. Southern states created new constitutions between 1890 and 1908, with provisions that effectively disenfranchised most blacks, as well as many poor whites. People who were not permitted to vote were also not permitted to serve on juries, further excluding them from the political process. African Americans mounted resistance to lynchings in numerous ways. Intellectuals and journalists encouraged public education, actively protesting and lobbying against lynch mob violence and government complicity in that violence. The National Association for the Advancement of Colored People (NAACP), as well as numerous other organizations, organized support from white and black Americans alike. African-American women's clubs, such as the Association of Southern Women for the Prevention of Lynching, raised funds to support the work of public campaigns, including anti-lynching plays. Their petition drives, letter campaigns, meetings and demonstrations helped to highlight the issues and combat lynching. In the Great Migration, extending in two waves from 1910 to 1970, 6.5 million African Americans left the South, primarily for northern and mid-western cities. The term "Lynch's Law" – subsequently "lynch law" and "lynching" – apparently originated during the American Revolution when Charles Lynch, a Virginia justice of the peace, ordered extralegal punishment for Loyalists. In the South, members of the abolitionist movement and other people opposing slavery were often targets of lynch mob violence before the Civil War. One motive for lynchings, particularly in the South, was the enforcement of social conventions – punishing perceived violations of customs, later institutionalized as Jim Crow laws, mandating segregation of whites and blacks. Financial gain and the ability to establish political and economic control provided another motive. For example, after the lynching of an African American farmer or an immigrant merchant, the victim's property would often become available to white Americans. In much of the Deep South, lynchings peaked in the late 19th and early 20th centuries, as whites turned to terrorism to dissuade blacks from voting.
In the Mississippi Delta, lynchings of blacks increased, beginning in the late nineteenth century, as white planters tried to control former slaves who often became landowners or sharecroppers. Lynchings would also occur in frontier areas where legal recourse was distant. In the West, cattle barons took the law into their own hands by hanging those whom they perceived as cattle and horse thieves. Journalist and anti-lynching crusader Ida B. Wells wrote in the 1890s that black lynching victims were accused of rape or attempted rape only about one-third of the time. The most prevalent accusation was murder or attempted murder, followed by a list of infractions that included verbal and physical aggression, spirited business competition and independence of mind. White lynch mobs formed to restore the perceived social order. Lynch mob "policing" usually led to murder of the victims by white mobs. Law-enforcement authorities sometimes participated directly or held suspects in jail until a mob formed to carry out the murder. In the view of social historian Michael J. Pfeifer, the United States had two parallel systems of "justice", one legal (through the courts) and the other illegal. Both were racially polarized and both, he said, operated to enforce white social dominance. There is much debate over the violent history of lynchings on the frontier, obscured by the mythology of the American Old West. Compared to the myths, real lynchings in the early years of the western United States did not focus as strongly on "rough and ready" crime prevention, and often shared many of the same racist and partisan political dimensions as lynchings in the South and Midwest. In unorganized territories or sparsely settled states, security was often provided only by a U.S. Marshal who might, despite the appointment of deputies, be hours, or even days, away by horseback. Lynchings in the Old West were often carried out against accused criminals in custody. Lynching did not so much substitute for an absent legal system as provide an alternative system that favored a particular social class or racial group. One historian writes, "Contrary to the popular understanding, early territorial lynching did not flow from an absence or distance of law enforcement but rather from the social instability of early communities and their contest for property, status, and the definition of social order." The San Francisco Vigilance Movement, for example, has traditionally been portrayed as a positive response to government corruption and rampant crime, though revisionists have argued that it created more lawlessness than it eliminated. It also had a strongly nativist tinge, initially focused against the Irish and later evolving into mob violence against Chinese and Mexican immigrants. In 1871, at least 18 Chinese-Americans were killed by a mob rampaging through Old Chinatown, after a white businessman was inadvertently caught in the crossfire of a tong battle. During the California Gold Rush, at least 25,000 Mexicans were longtime residents of California. The Treaty of 1848 expanded American territory by one-third. To settle the war, Mexico ceded all or parts of Arizona, California, Colorado, Kansas, New Mexico, Nevada, Oklahoma, Texas, Utah and Wyoming to the United States. In 1850, California became a state within the United States. Many of the Mexicans who were native to what would become a state within the United States were experienced miners and had had great success mining gold in California.
Their success aroused animosity by white prospectors who intimidated Mexican miners with the threat of violence and committed violence against some. Between 1848 and 1860, at least 163 Mexicans were lynched in California alone. One particularly infamous lynching occurred on July 5, 1851, when a Mexican woman named Josefa Segovia was lynched by a mob in Downieville, California. She was accused of killing a white man who had attempted to assault her after breaking into her home. Another well-documented episode in the history of the American West is the Johnson County War, a dispute over land use in Wyoming in the 1890s. Large-scale ranchers, with the complicity of local and federal Republican politicians, hired mercenary soldiers and assassins to lynch the small ranchers (mostly Democrats) who were their economic competitors and whom they portrayed as "cattle rustlers." During the Civil War, Southern Home Guard units sometimes lynched white Southerners whom they suspected of being Unionists or deserters. One example of this was the hanging of Methodist minister Bill Sketoe in the southern Alabama town of Newton in December 1864. Other (fictional) examples of extrajudicial murder are portrayed in Charles Frazier's novel Cold Mountain. The first heavy period of violence in the South was between 1868 and 1871. White Democrats attacked black and white Republicans. This was less the result of mob violence characteristic of later lynchings, however, than insurgent secret vigilante actions by groups such as the Ku Klux Klan. To prevent ratification of new constitutions formed during Reconstruction, the opposition used various means to harass potential voters. Failed terrorist attacks led to a massacre during the 1868 elections, with the systematic insurgents' murders of about 1,300 voters across various southern states ranging from South Carolina to Arkansas. After this partisan political violence had ended, lynchings in the South focused more on race than on partisan politics. They could be seen as a latter-day expression of the slave patrols, the bands of poor whites who policed the slaves and pursued escapees. The lynchers sometimes murdered their victims but sometimes whipped them to remind them of their former status as slaves. White vigilantes often made nighttime raids of African-American homes in order to confiscate firearms. Lynchings to prevent freedmen and their allies from voting and bearing arms can be seen as extralegal ways of enforcing the Black Codes and the previous system of social dominance. The 14th and 15th Amendments in 1868 and 1870 had invalidated the Black Codes. Although some states took action against the Klan, the South needed federal help to deal with the escalating violence. President Ulysses S. Grant and Congress passed the Force Acts of 1870 and the Civil Rights Act of 1871, also known as the Ku Klux Klan Act, because it was passed to suppress the vigilante violence of the Klan. This enabled federal prosecution of crimes committed by groups such as the Ku Klux Klan, as well as use of federal troops to control violence. The administration began holding grand juries and prosecuting Klan members. In addition, it used martial law in some counties in South Carolina, where the Klan was the strongest. Under attack, the Klan dissipated. Vigorous federal action and the disappearance of the Klan had a strong effect in reducing the numbers of murders. Political insurgency and partisan violence surged again. From the mid-1870s on in the Deep South, violence rose. 
In Mississippi, Louisiana, the Carolinas and Florida especially, the Democratic Party relied on paramilitary "White Line" groups such as the White Camelia to terrorize, intimidate and assassinate African American and white Republicans in an organized drive to regain power. In Mississippi, it was the Red Shirts; in Louisiana, the White League that were paramilitary groups carrying out goals of the Democratic Party to suppress black voting. Insurgents targeted politically active African Americans and unleashed violence in general community intimidation. Grant's desire to keep Ohio in the Republican aisle and his attorney general's maneuvering led to a failure to support the Mississippi governor with Federal troops. The Democrats' campaign of terror worked. In Yazoo County, for instance, with a Negro population of 12,000, only seven votes were cast for Republicans. In 1875, Democrats swept into power in the state legislature. Once Democrats regained power in Mississippi, Democrats in other states adopted the "Mississippi Plan" to control the election of 1876, using informal armed militias to assassinate political leaders, hunt down community members, intimidate and turn away voters, effectively suppressing African American suffrage and civil rights. In state after state, Democrats swept back to power. From 1868 to 1876, most years had 50–100 lynchings. White Democrats passed laws and constitutional amendments making voter registration more complicated, to further exclude black voters from the rolls. Following white Democrats' regaining political power in the late 1870s, legislators gradually increased restrictions on voting, chiefly through statute. From 1890 to 1908, most of the Southern states, starting with Mississippi, created new constitutions with further provisions: poll taxes, literacy and understanding tests, and increased residency requirements, that effectively disenfranchised most blacks and many poor whites. Forcing them off voter registration lists also prevented them from serving on juries, whose members were limited to voters. Although challenges to such constitutions made their way to the Supreme Court in Williams v. Mississippi (1898) and Giles v. Harris (1903), the states' provisions were upheld. Most lynchings from the late nineteenth through the early twentieth century were of African Americans in the South, with other victims including white immigrants, and, in the southwest, Latinos. Of the 468 victims in Texas between 1885 and 1942, 339 were black, 77 white, 53 Hispanic, and 1 Indian. They reflected the tensions of labor and social changes, as the whites imposed Jim Crow rules, legal segregation and white supremacy. The lynchings were also an indicator of long economic stress due to falling cotton prices through much of the 19th century, as well as financial depression in the 1890s. In the Mississippi bottomlands, for instance, lynchings rose when crops and accounts were supposed to be settled. The late 19th and early 20th century history of the Mississippi Delta showed both frontier influence and actions directed at repressing African Americans. After the Civil War, 90% of the Delta was still undeveloped. Both whites and tens of thousands of African Americans migrated there for a chance to buy land in the backcountry. It was frontier wilderness, heavily forested and without roads for years. Before the turn of the century, lynchings often took the form of frontier justice directed at transient workers as well as residents. 
Thousands of workers were brought in to do lumbering and work on levees. Whites were lynched at a rate 35.5% higher than their proportion in the population, most often accused of crimes against property (chiefly theft). During the Delta's frontier era, blacks were lynched at a rate lower than their proportion in the population, unlike the rest of the South. They were most often accused of murder or attempted murder in half the cases, and rape in 15%. There was a clear seasonal pattern to the lynchings, with the colder months being the deadliest. As noted, cotton prices fell during the 1880s and 1890s, increasing economic pressures. "From September through December, the cotton was picked, debts were revealed, and profits (or losses) realized... Whether concluding old contracts or discussing new arrangements, [landlords and tenants] frequently came into conflict in these months and sometimes fell to blows." During the winter, murder was most cited as a cause for lynching. After 1901, as economics shifted and more blacks became renters and sharecroppers in the Delta, with few exceptions, only African-Americans were lynched. The frequency increased from 1901 to 1908, after African-Americans were disenfranchised. "In the twentieth century Delta vigilantism finally became predictably joined to white supremacy." After their increased immigration to the US in the late 19th century, Italian Americans also became lynching targets, chiefly in the South, where they were recruited for laboring jobs. On March 14, 1891, eleven Italian Americans were lynched in New Orleans after a jury acquitted them in the murder of David Hennessy, an ethnic Irish New Orleans police chief. The eleven were falsely accused of being associated with the Mafia. This incident was the largest mass lynching in U.S. history. A total of twenty Italians were lynched in the 1890s. Although most lynchings of Italian Americans occurred in the South, Italians had not immigrated there in great numbers. Isolated lynchings of Italians also occurred in New York, Pennsylvania, and Colorado. Particularly in the West, Chinese immigrants, East Indians, Native Americans and Mexicans were also lynching victims. The lynching of Mexicans and Mexican Americans in the Southwest was long overlooked in American history, attention being chiefly focused on the South. The Tuskegee Institute, which kept the most complete records, noted the victims as simply black or white. Mexican, Chinese, and Native American lynching victims were recorded as white. Researchers estimate 597 Mexicans were lynched between 1848 and 1928. Mexicans were lynched at a rate of 27.4 per 100,000 of population between 1880 and 1930. This statistic was second only to that of the African American community, which endured an average of 37.1 per 100,000 of population during that period. Between 1848 and 1879, Mexicans were lynched at an unprecedented rate of 473 per 100,000 of population. Henry Smith, a troubled ex-slave, was one of the most famous lynched African-Americans. He was lynched at Paris, Texas, in 1893 for allegedly killing Myrtle Vance, the three-year-old daughter of a Texas policeman, after the policeman had assaulted Smith. Smith was not tried in a court of law. A large crowd followed the lynching, as was common then, in the style of public executions. Henry Smith was fastened to a wooden platform, tortured for fifty minutes by red-hot iron brands, then finally burned alive while over 10,000 spectators cheered. 
After 1876, the frequency of lynching decreased somewhat as white Democrats had regained political power throughout the South. The threat of lynching was used to terrorize freedmen and whites alike to maintain the re-asserted dominance of whites. Southern Republicans in Congress sought to protect black voting rights by using Federal troops for enforcement. A congressional deal to elect Rutherford B. Hayes as President in 1876 included a pledge to end Reconstruction in the South. The Redeemers, white Democrats who often included members of paramilitary groups such as the White Cappers, White Camellia, White League and Red Shirts, had used terrorist violence and targeted assassinations to reduce the political power that black and white Republicans had gained during Reconstruction. Lynchings both supported the power reversal and were public demonstrations of white power. Racial tensions had an economic base. In attempting to reconstruct the plantation economy, planters were anxious to control labor. In addition, agricultural depression was widespread and the price of cotton kept falling after the Civil War into the 1890s. There was a labor shortage in many parts of the Deep South, especially in the developing Mississippi Delta. Southern attempts to encourage immigrant labor were unsuccessful, as immigrants would quickly leave field labor. Lynchings erupted when farmers tried to terrorize the laborers, especially at settlement time, when they could not pay wages but still tried to keep laborers from leaving. More than 85 percent of the estimated 5,000 lynchings in the post-Civil War period occurred in the Southern states. 1892 was a peak year, when 161 African Americans were lynched. The creation of the Jim Crow laws, beginning in the 1890s, completed the revival of white supremacy in the South. Terror and lynching were used to enforce both these formal laws and a variety of unwritten rules of conduct meant to assert white domination. In most years from 1889 to 1923, there were 50-100 lynchings annually across the South. The ideology behind lynching, directly connected with denial of political and social equality, was stated forthrightly by Benjamin Tillman, Governor of South Carolina and later a United States Senator: "We of the South have never recognized the right of the negro to govern white men, and we never will. We have never believed him to be the equal of the white man, and we will not submit to his gratifying his lust on our wives and daughters without lynching him." Often victims were lynched by a small group of white vigilantes late at night. Sometimes, however, lynchings became mass spectacles with a circus-like atmosphere because they were intended to emphasize majority power. Children often attended these public lynchings. A large lynching might be announced beforehand in the newspaper. There were cases in which a lynching was timed so that a newspaper reporter could make his deadline. Photographers sold photos for postcards to make extra money. The event was publicized so that the intended audience, African Americans and whites who might challenge the society, was warned to stay in their places. Fewer than one percent of lynch mob participants were ever convicted by local courts. By the late 19th century, trial juries in most of the southern United States were all white because African Americans had been disfranchised, and only registered voters could serve as jurors. Often juries never let the matter go past the inquest. Such cases happened in the North as well.
In 1892, a police officer in Port Jervis, New York, tried to stop the lynching of a black man who had been wrongfully accused of assaulting a white woman. The mob responded by putting the noose around the officer's neck as a way of scaring him. Although at the inquest the officer identified eight people who had participated in the lynching, including the former chief of police, the jury determined that the murder had been carried out "by person or persons unknown." In Duluth, Minnesota, on June 15, 1920, three young African American travelers were lynched after having been jailed and accused of having raped a white woman. The alleged "motive" and action by a mob were consistent with the "community policing" model. A book titled The Lynchings in Duluth documented the events. Although the rhetoric surrounding lynchings included justifications about protecting white women, the actions basically erupted out of attempts to maintain domination in a rapidly changing society and out of fears of social change. Victims were the scapegoats for people's attempts to control agriculture, labor and education, as well as disasters such as the boll weevil. According to an April 2, 2002, article in Time: At the turn of the 20th century in the United States, lynching was photographic sport. People sent picture postcards of lynchings they had witnessed. The practice was so base, a writer for Time noted, that even the Nazis "did not stoop to selling souvenirs of Auschwitz, but lynching scenes became a burgeoning subdepartment of the postcard industry. By 1908, the trade had grown so large, and the practice of sending postcards featuring the victims of mob murderers had become so repugnant, that the U.S. Postmaster General banned the cards from the mails." African Americans emerged from the Civil War with the political experience and stature to resist attacks, but disenfranchisement and the decrease in their civil rights restricted their power to do much more than react after the fact by compiling statistics and publicizing the atrocities. From the early 1880s, the Chicago Tribune reprinted accounts of lynchings from the newspapers with which it exchanged copies, and began to publish annual statistics. These provided the main source for the compilations by the Tuskegee Institute to document lynchings, a practice it continued until 1968. In 1892 journalist Ida B. Wells-Barnett was shocked when three friends in Memphis, Tennessee, were lynched because their grocery store competed successfully with a white-owned store. Outraged, Wells-Barnett began a global anti-lynching campaign that raised awareness of the social injustice. As a result of her efforts, black women in the US became active in the anti-lynching crusade, often in the form of clubs which raised money to publicize the abuses. When the National Association for the Advancement of Colored People (NAACP) was formed in 1909, Wells became part of its multi-racial leadership and continued to be active against lynching. In 1903 the leading writer Charles Waddell Chesnutt published his article "The Disfranchisement of the Negro", detailing civil rights abuses and the need for change in the South. Numerous writers appealed to the literate public. In 1904 Mary Church Terrell, the first president of the National Association of Colored Women, published an article in the influential magazine North American Review to respond to the Southerner Thomas Nelson Page. She took apart and refuted his attempted justification of lynching as a response to assaults on white women.
Terrell showed how apologists like Page had tried to rationalize what were violent mob actions that were seldom based on true assaults. In what can be seen as multiple acts of resistance, thousands of African Americans left the South annually, especially from 1910 to 1940, seeking jobs and better lives in industrial cities of the North and Midwest, in a movement that was called the Great Migration. More than 1.5 million people went North during this phase of the Great Migration. They refused to live under the rules of segregation and the continual threat of violence, and many secured better educations and futures for themselves and their children, while adapting to the drastically different requirements of industrial cities. Northern industries such as the Pennsylvania Railroad and others, and the stockyards and meatpacking plants in Chicago and Omaha, vigorously recruited southern workers. For instance, 10,000 men were hired from Florida and Georgia by 1923 by the Pennsylvania Railroad to work at its expanding yards and tracks. President Theodore Roosevelt made public statements against lynching in 1903, following George White's death in Delaware, and in his sixth annual State of the Union message on December 4, 1906. When Roosevelt suggested that lynching was taking place in the Philippines, southern senators (all white Democrats) demonstrated their power with a filibuster in 1902 during review of the "Philippines Bill". In 1903 Roosevelt refrained from commenting on lynching during his Southern political campaigns. Despite concerns expressed by some northern Congressmen, Congress did not act to strip the South of seats as the states disfranchised black voters. The result was a "Solid South" with the number of representatives (apportionment) based on its total population, but with only whites represented in Congress, essentially doubling the power of white southern Democrats. In a letter to Governor Durbin, Roosevelt wrote: "My Dear Governor Durbin, ...permit me to thank you as an American citizen for the admirable way in which you have vindicated the majesty of the law by your recent action in reference to lynching... All thoughtful men... must feel the gravest alarm over the growth of lynching in this country, and especially over the peculiarly hideous forms so often taken by mob violence when colored men are the victims – on which occasions the mob seems to lay more weight, not on the crime but on the color of the criminal... There are certain hideous sights which when once seen can never be wholly erased from the mental retina. The mere fact of having seen them implies degradation... Whoever in any part of our country has ever taken part in lawlessly putting to death a criminal by the dreadful torture of fire must forever after have the awful spectacle of his own handiwork seared into his brain and soul. He can never again be the same man." Durbin had successfully used the National Guard to disperse the lynchers. Further, Durbin publicly declared that the accused murderer—an African American man—was entitled to a fair trial. Theodore Roosevelt's efforts cost him political support among white people, especially in the South. In addition, threats against him increased, so that the Secret Service enlarged his protective detail. African-American writers used their talents in numerous ways to publicize and protest against lynching. In 1914, Angelina Weld Grimké had already written her play Rachel to address racial violence. It was produced in 1916. In 1915, W. E. B.
Du Bois, noted scholar and head of the recently formed NAACP, called for more black-authored plays. African-American women playwrights were strong in responding. They wrote ten of the fourteen anti-lynching plays produced between 1916 and 1935. The NAACP set up a Drama Committee to encourage such work. In addition, Howard University, the leading historically black college, established a theater department in 1920 to encourage African-American dramatists. Starting in 1924, the NAACP's major publications Crisis and Opportunity sponsored contests to encourage black literary production. The Klan revived and grew because of white people's anxieties and fear over the rapid pace of change. Both white and black rural migrants were moving into the rapidly industrializing cities of the South. Many Southern white and African-American migrants also moved north in the Great Migration, adding to the greatly increased immigration from southern and eastern Europe in major industrial cities of the Midwest and West. The Klan grew rapidly and became strongest in cities that experienced rapid growth between 1910 and 1930, such as Atlanta; Birmingham; Dallas; Detroit; Indianapolis; Chicago; Portland, Oregon; and Denver, Colorado. It reached a peak of membership and influence about 1925. In some cities, leaders' actions to publish the names of Klan members provided enough publicity to sharply reduce membership. The 1915 murder near Atlanta, Georgia, of factory manager Leo Frank, an American Jew, was one of the more notorious lynchings of a white man. Sensationalist newspaper accounts stirred up anger against Frank, charged in the murder of Mary Phagan, a girl employed by his factory. He was convicted of murder after a flawed trial in Georgia. His appeals failed. Supreme Court Justice Oliver Wendell Holmes's dissent condemned the intimidation of the jury as a failure to provide due process of law. After the governor commuted Frank's sentence to life imprisonment, a mob calling itself the Knights of Mary Phagan kidnapped Frank from the prison farm at Milledgeville and lynched him. Georgia politician and publisher Tom Watson used sensational coverage of the Frank trial to create power for himself. By playing on people's anxieties, he also built support for a revival of the Ku Klux Klan. The new Klan was inaugurated in 1915 at a mountaintop meeting near Atlanta and was composed mostly of members of the Knights of Mary Phagan. D. W. Griffith's 1915 film The Birth of a Nation glorified the original Klan and garnered much publicity. The NAACP mounted a strong nationwide campaign of protests and public education against the movie The Birth of a Nation. As a result, some city governments prohibited release of the film. In addition, the NAACP publicized production of and helped create audiences for the 1919 releases The Birth of a Race and Within Our Gates, African-American-directed films that presented more positive images of blacks. African-American resistance against lynching carried substantial risks. In 1921 in Tulsa, Oklahoma, a group of African American citizens attempted to stop a lynch mob from taking 19-year-old assault suspect Dick Rowland out of jail. In a scuffle between a white man and an armed African-American veteran, the white man was killed. Whites retaliated by rioting, during which they burned 1,256 homes and as many as 200 businesses in the segregated Greenwood district. Confirmed dead were 39 people: 26 African Americans and 13 whites.
Recent investigations suggest the number of African American deaths may have been much higher. Rowland was saved, however, and was later exonerated. The growing networks of African-American women's club groups were instrumental in raising funds to support the NAACP's public education and lobbying campaigns. They also built community organizations. In 1922 Mary Talbert headed the Anti-Lynching Crusade to create an integrated women's movement against lynching. It was affiliated with the NAACP, which mounted a multi-faceted campaign. For years the NAACP used petition drives, letters to newspapers, articles, posters, lobbying of Congress, and marches to protest the abuses in the South and keep the issue before the public. While the second KKK grew rapidly in cities undergoing major change and achieved some political power, many state and city leaders, including white religious leaders such as Reinhold Niebuhr in Detroit, acted strongly and spoke out publicly against the organization. Some anti-Klan groups published members' names, and the resulting publicity quickly sapped the Klan's momentum. As a result, in most areas, after 1925 KKK membership and organizations rapidly declined. Cities passed laws against the wearing of masks, and otherwise acted against the KKK. In 1930, Southern white women responded in large numbers to the leadership of Jessie Daniel Ames in forming the Association of Southern Women for the Prevention of Lynching. She and her co-founders obtained the signatures of 40,000 women to their pledge against lynching and for a change in the South. The pledge included the statement: "In light of the facts we dare no longer to ...allow those bent upon personal revenge and savagery to commit acts of violence and lawlessness in the name of women." Despite physical threats and hostile opposition, the women leaders persisted with petition drives, letter campaigns, meetings and demonstrations to highlight the issues. By the 1930s, the number of lynchings had dropped to about ten per year in Southern states. In the 1930s, communist organizations, including a legal defense organization called the International Labor Defense (ILD), organized support to stop lynching (see The Communist Party and African-Americans). The ILD defended the Scottsboro Boys, as well as three black men accused of rape in Tuscaloosa in 1933. In the Tuscaloosa case, two defendants were lynched under circumstances that suggested police complicity. The ILD lawyers themselves narrowly escaped lynching, and they aroused passionate hatred among many Southerners because they were considered to be interfering with local affairs. In a remark to an investigator, a white Tuscaloosan was quoted as saying, "For New York Jews to butt in and spread communistic ideas is too much." Anti-lynching advocates such as Mary McLeod Bethune and Walter Francis White campaigned for Franklin D. Roosevelt as President in 1932. They hoped he would lend public support to their efforts against lynching. Senators Robert F. Wagner and Edward P. Costigan drafted the Costigan-Wagner Bill to require local authorities to protect prisoners from lynch mobs. It proposed to make lynching a Federal crime and thus take it out of state administration. Southern Senators continued to hold a hammerlock on Congress. Because of the Southern Democrats' disfranchisement of African Americans in Southern states at the turn of the century, Southern whites for decades had nearly double the representation in Congress beyond their own population.
Southern states had Congressional representation based on total population, but essentially only whites could vote and only their issues were supported. Due to seniority achieved through one-party Democratic rule in their region, Southern Democrats controlled many important committees. Southern Democrats consistently opposed any legislation related to reducing lynching or putting it under Federal oversight. As a result, Southern white Democrats were a formidable power in Congress until the 1960s. In the 1930s, virtually all Southern senators blocked the proposed Wagner-Costigan bill. Southern senators used a filibuster to prevent a vote on the bill. However, the legislation did herald a change; there were 21 lynchings of blacks in the South in 1935, but that number fell to eight in 1936, and to two in 1939. A lynching in Miami, Florida, changed the political climate in Washington. On July 19, 1935, Rubin Stacy, a homeless African-American tenant farmer, knocked on doors begging for food. After resident complaints, Dade County deputies took Stacy into custody. While he was in custody, a lynch mob took Stacy out of the jail and murdered him. Although the faces of his murderers could be seen in a photo taken at the lynching site, the state did not prosecute the murder of Rubin Stacy. Stacy's murder galvanized anti-lynching activists, but President Franklin Roosevelt did not support the federal anti-lynching bill. He feared that support would cost him Southern votes in the 1936 election. He believed that he could accomplish more for more people by getting re-elected. The industrial buildup to World War II acted as a "pull" factor in the second phase of the Second Great Migration starting in 1940 and lasting until 1970. Altogether in the first half of the 20th century, 6.5 million African Americans migrated from the South to leave lynchings and segregation behind, improve their lives and get better educations for their children. Unlike the first round, composed chiefly of rural farm workers, the second wave included more educated workers and their families who were already living in southern cities and towns. In this migration, many migrated west from Louisiana, Mississippi and Texas to California in addition to northern and midwestern cities, as defense industries recruited thousands to higher-paying, skilled jobs. They settled in Los Angeles, San Francisco and Oakland. In 1946, the Civil Rights Section of the Justice Department gained its first conviction under federal civil rights laws against a lyncher. Florida constable Tom Crews was sentenced to a $1,000 fine and one year in prison for civil rights violations in the killing of an African-American farm worker. In 1946, a mob of white men shot and killed two young African-American couples near Moore's Ford Bridge in Walton County, Georgia 60 miles east of Atlanta. This lynching of four young sharecroppers, one a World War II veteran, shocked the nation. The attack was a key factor in President Harry Truman's making civil rights a priority of his administration. Although the FBI investigated the crime, they were unable to prosecute. It was the last documented lynching of so many people. In 1947, the Truman Administration published a report titled To Secure These Rights, which advocated making lynching a federal crime, abolishing poll taxes, and other civil rights reforms. The Southern Democratic bloc of senators and congressmen continued to obstruct attempts at federal legislation. 
In the 1940s, the Klan openly criticized Truman for his efforts to promote civil rights. Later historians documented that Truman had briefly made an attempt to join the Klan as a young man in 1924, when it was near its peak of social influence in promoting itself as a fraternal organization. When a Klan officer demanded that Truman pledge not to hire any Catholics if he was reelected as county judge, Truman refused. He personally knew their worth from his WWI experience. His membership fee was returned and he never joined the KKK. With the beginning of the Cold War after World War II, the Soviet Union criticized the United States for the frequency of lynchings of black people. In a meeting with President Harry Truman in 1946, Paul Robeson urged him to take action against lynching. Soon afterward, the mainstream white press attacked Robeson for his sympathies toward the Soviet Union. In 1951, the Civil Rights Congress (CRC) made a presentation entitled "We Charge Genocide" to the United Nations. They argued that the US government was guilty of genocide under Article II of the UN Genocide Convention because it failed to act against lynchings. The UN took no action. In the postwar years of the Cold War, some U.S. politicians and appointed officials appeared more worried about possible Communist connections among anti-lynching groups than about the lynching crimes. For instance, the FBI branded Albert Einstein a communist sympathizer for joining Paul Robeson's American Crusade Against Lynching. J. Edgar Hoover, head of the FBI for decades, was particularly fearful of the effects of Communism in the US. He directed more attention to investigations of civil rights groups for communist connections than to Ku Klux Klan activities against the groups' members and other innocent blacks. By the 1950s, the Civil Rights Movement was gaining momentum. Membership in the NAACP increased in states across the country. The NAACP achieved a significant US Supreme Court victory in 1954 ruling that segregated education was unconstitutional. A 1955 lynching that sparked public outrage about injustice was that of Emmett Till, a 14-year-old boy from Chicago. Spending the summer with relatives in Money, Mississippi, Till was killed for allegedly having wolf-whistled at a white woman. Till had been badly beaten and shot before being thrown into the Tallahatchie River. His mother insisted on a public funeral with an open casket, to show people how badly Till's body had been disfigured. News photographs circulated around the country, and drew intense public reaction. People in the nation were horrified that a boy could have been killed for such an incident. The state of Mississippi tried two defendants, but they were speedily acquitted. In the 1960s the Civil Rights Movement attracted students to the South from all over the country to work on voter registration and other issues. The intervention of people from outside the communities and threat of social change aroused fear and resentment among many whites. In June 1964, three civil rights workers disappeared in Neshoba County, Mississippi. They had been investigating the arson of a black church being used as a "Freedom School". Six weeks later, their bodies were found in a partially constructed dam near Philadelphia, Mississippi. Michael Schwerner and Andrew Goodman of New York, and James Chaney of Meridian, Mississippi had been members of the Congress of Racial Equality. They had been dedicated to non-violent direct action against racial discrimination. 
The US prosecuted eighteen men for a Ku Klux Klan conspiracy to deprive the victims of their civil rights under 19th-century Federal law, in order to conduct the trial in Federal court. Seven men were convicted but received light sentences, two men were released because of a deadlocked jury, and the remainder were acquitted. In 2005, 80-year-old Edgar Ray Killen, one of the men who had earlier gone free, was tried again, convicted of manslaughter, and sentenced to 60 years in prison. Because of J. Edgar Hoover's and others' hostility to the Civil Rights Movement, agents of the FBI resorted to outright lying to smear civil rights workers and other opponents of lynching. For example, the FBI disseminated false information in the press about lynching victim Viola Liuzzo, who was murdered in 1965 in Alabama. The FBI said Liuzzo had been a member of the Communist Party, had abandoned her five children, and was involved in sexual relationships with African Americans in the movement. Although lynchings became rare following the civil rights movement and changing social mores, they have occurred. In 1981, two KKK members in Alabama randomly selected a 19-year-old black man, Michael Donald, and murdered him in retaliation for a jury's acquittal of a black man accused of murdering a police officer. The Klansmen were caught, prosecuted, and convicted. A $7 million judgment in a civil suit against the Klan bankrupted the local subgroup, the United Klans of America. In 1998, Shawn Allen Berry, Lawrence Russell Brewer, and ex-convict John William King murdered James Byrd, Jr. in Jasper, Texas. Byrd was a 49-year-old father of three who had accepted an early-morning ride home with the three men. They arbitrarily attacked him and dragged him to his death behind their truck. The three men dumped their victim's mutilated remains in the town's segregated African-American cemetery and then went to a barbecue. Local authorities immediately treated the murder as a hate crime and requested FBI assistance. The murderers (two of whom turned out to be members of a white supremacist prison gang) were caught and stood trial. Brewer and King were sentenced to death; Berry received life in prison. On June 13, 2005, the United States Senate formally apologized for its failure in previous decades to enact a Federal anti-lynching law. Earlier attempts to pass such legislation had been defeated by filibusters by powerful Southern senators. Prior to the vote, Louisiana Senator Mary Landrieu noted, "There may be no other injustice in American history for which the Senate so uniquely bears responsibility." The resolution was passed on a voice vote with 80 senators cosponsoring. The resolution expressed "the deepest sympathies and most solemn regrets of the Senate to the descendants of victims of lynching, the ancestors of whom were deprived of life, human dignity and the constitutional protections accorded all citizens of the United States." Tuskegee Institute, now Tuskegee University, is recognized as the official expert that has documented lynchings since 1882, and it has defined the conditions that constitute a recognized lynching. Tuskegee remains the single complete source of statistics and records on this crime since 1882, and is the source for all other compiled statistics. As of 1959, the last year in which its annual Lynch Report was published, a total of 4,733 persons had died as a result of lynching since 1882.
A graph accompanying the report shows the number of lynchings and racially motivated murders in each decade from 1865 to 1965; the figures for 1865–1869 and 1960–1965 cover partial decades. The same source gives the following statistics for the period from 1882 to 1951. Eighty-eight percent of victims were black and 10 percent were white. Fifty-nine percent of the lynchings occurred in the Southern states of Kentucky (neutral in the Civil War), North Carolina, South Carolina, Tennessee, Arkansas, Louisiana, Mississippi, Alabama, Georgia, and Florida. Lynching was less frequent in the West and Midwest, and virtually nonexistent in the Northeast except for isolated instances. The most common reasons given by mobs for the lynchings were murder and rape. As documented by Ida B. Wells, such charges were often pretexts for lynching blacks who violated Jim Crow etiquette or engaged in economic competition with whites. Other common reasons given included arson, theft, assault, and robbery; sexual transgressions (miscegenation, adultery, cohabitation); "race prejudice", "race hatred", and "racial disturbance"; informing on others; "threats against whites"; and violations of the color line ("attending white girl", "proposals to white woman"). Tuskegee's method of categorizing most lynching victims as either black or white in publications and data summaries meant that the mistreatment of some minority and immigrant groups was obscured. In the West, for instance, Mexicans, Native Americans, and Chinese were more frequent targets of lynchings than African Americans, but their deaths were included among those of whites. Similarly, although Italian immigrants were the focus of violence in Louisiana when they started arriving in greater numbers, their deaths were not identified separately. In earlier years, whites who were subject to lynching were often targeted because of suspected political activities or support of freedmen, but they were generally considered members of the community in a way new immigrants were not. The anti-lynching song "Strange Fruit," made famous by Billie Holiday, captured the horror in its lyrics:
Southern trees bear a strange fruit,
Blood on the leaves and blood at the root,
Black bodies swinging in the Southern breeze,
Strange fruit hanging from the poplar trees.
Pastoral scene of the gallant South,
The bulging eyes and the twisted mouth,
Scent of magnolia sweet and fresh,
Then the sudden smell of burning flesh.
Here is a fruit for the crows to pluck,
For the rain to gather, for the wind to suck,
For the sun to rot, for the tree to drop,
Here is a strange and bitter crop.
Although Holiday's regular label, Columbia, declined to record the song, Holiday recorded it with Commodore. The song became identified with her and was one of her most popular ones. It became an anthem for the anti-lynching movement and also contributed to the activism of the American civil rights movement. A documentary about lynching, entitled Strange Fruit and produced by the Public Broadcasting Service, aired on U.S. television. For most of the history of the United States, lynching was rarely prosecuted, as the people who would have had to prosecute were generally on the side of the action. When it was prosecuted, it was under state murder statutes. In one example, in 1907–09 the U.S. Supreme Court tried its only criminal case in history, United States v. Shipp: Sheriff Shipp was found guilty of criminal contempt for the lynching of Ed Johnson in Chattanooga, Tennessee. Starting in 1909, legislators introduced more than 200 bills in Congress to make lynching a Federal crime, but they failed to pass, chiefly because of Southern legislators' opposition.
Because Southern states had effectively disfranchised African Americans at the turn of the century, white Southern Democrats controlled all the seats of the South, nearly double the Congressional representation that white citizens alone would have been entitled to. They comprised a powerful voting bloc for decades. Under the Franklin D. Roosevelt Administration, the Civil Rights Section of the Justice Department tried, but failed, to prosecute lynchers under Reconstruction-era civil rights laws. The first successful Federal prosecution of a lyncher for a civil rights violation was in 1946. By that time, the era of lynchings as a common occurrence had ended. Many states now have specific anti-lynching statutes. California, for example, defines lynching, punishable by 2–4 years in prison, as "the taking by means of a riot of any person from the lawful custody of any peace officer", with the crime of "riot" defined as two or more people using violence or the threat of violence. A lyncher could thus be prosecuted for several crimes arising from the same action, e.g., riot, lynching, and murder. Although lynching in the historic sense is virtually nonexistent today, the lynching statutes are sometimes used in cases where several people try to wrest a suspect from the hands of police in order to help him escape, as alleged in a July 9, 2005, violent attack on a police officer in San Francisco. South Carolina law defines second-degree lynching as "[a]ny act of violence inflicted by a mob upon the body of another person and from which death does not result shall constitute the crime of lynching in the second degree and shall be a felony. Any person found guilty of lynching in the second degree shall be confined at hard labor in the State Penitentiary for a term not exceeding twenty years nor less than three years, at the discretion of the presiding judge." In 2006, five white teenagers were given various sentences for second-degree lynching in a non-lethal attack on a young black man in South Carolina.
The news is full of stories about enterovirus D68. This virus falls into a class of very common viruses that cause 10 to 15 million infections each year in the United States, according to the U.S. Centers for Disease Control and Prevention. Most people infected by enteroviruses do not even get sick. If they do, they may think they just have a cold. But sometimes, the virus causes respiratory illness that can become quite severe, especially in infants, children and teenagers. The staff at Norton Children’s Hospital usually sees more children with respiratory illnesses this time of year, with kids back in school or starting new schools. Sometimes these illnesses are severe enough to require hospitalization, especially for kids with underlying health issues such as asthma. This year, there has been an increase in the number of children who have been hospitalized with severe respiratory illnesses and test positive for viruses that include enterovirus and rhinovirus, the bug that causes the common cold. Should you worry? “If your child is having difficulty breathing or has a high fever, you need to call your pediatrician,” said Lindsay K. Sharrer, M.D., Norton Children’s Hospital Medical Associates – Dupont. “These are serious health concerns that need to be addressed. There is no treatment for the virus itself other than letting it run its course and treating the symptoms,” she said. “Mild cases may require only rest and hydration, but more severe cases may require breathing treatments or even hospitalization.”
Stop the spread
Enteroviruses are found in an infected person’s bodily fluids, including blood, stool, mucus, saliva and blister fluids. That means that the virus can spread when a person sneezes or coughs. The fluid from the sneeze or cough lands on a hard surface that others may touch. If they then touch the mouth, eyes or nose, the infection spreads. The best way to prevent infection is to wash your hands with soap and water. You can learn more about proper hand-washing from our Kohl’s Cares High Five Prevention Program. Also, avoid close contact with people who are sick and make sure you disinfect surfaces that are touched often, such as doorknobs and tabletops. Common symptoms of enterovirus, according to the CDC:
- Runny nose
- Skin rash
- Mouth blisters
- Body and/or muscle aches
If your child needs urgent medical attention, visit one of our three hospitals for pediatric emergency care: Norton Children’s Hospital in downtown Louisville, Norton Children’s Medical Center – Brownsboro in east Louisville at U.S. 22 and Chamberlain Lane, and Norton Suburban Hospital, future home of Norton Women’s and Norton Children’s Hospital, in St. Matthews. To find a pediatrician, call (855) KCH-KIDS/(855) 524-5437.
Scientists from Nottingham Trent University in the United Kingdom have developed and tested a new prototype device that can remotely track the activity of a hive without disturbing the bees. The device receives and immediately analyzes vibrations and a special kind of bee buzzing. As described, it successfully monitors changes in bee colony behavior around the clock by picking out specific bee signals. “We want to develop an effective tool, if necessary, to find out the status of colonies. Are bees starving there, is there enough food in the hive, or is the colony perhaps preparing for swarming?” said Martin Bencsik, a scientist in the School of Science & Technology at Nottingham Trent University. Swarming, as it is known, occurs when the queen and a large group of worker bees leave the hive in search of a new home. Most beekeepers, except those who sell swarms, do not want their bees to leave the hive. They prevent swarming in many different ways, or at least try to keep it under control, by removing the queen cells from which a new queen would emerge or by adding extra space to “convince” the bees that the time for swarming has not yet come.
Bee signal functions
It is important to know the habits of bees and to spend a lot of time with them to understand such phenomena, but even then the beekeeper still needs to open the hive whenever he thinks he should check the status of the colony. Martin Bencsik and his colleagues, with the help of small accelerometers installed as sensors in the hive, can collect data from a remote location without opening it. One sensor sits in the middle of the honeycomb and another seven centimeters lower. After installation, the devices continuously record all vibrations in the hive. “Bees use the comb cells for pollen, brood and honey close to the built-in device quite normally, so it does not bother them too much in their work,” explained Bencsik. For data analysis, the team has developed special software that aims to detect the signals that bees use. Scientists believe that bees use signals to communicate about food, but previous knowledge of bees suggests that signals can also tell us about other bee activities. The actual function of some signals is still disputed, but for others we can say with high confidence what they mean. Bencsik and his colleagues tracked the daily rhythms and frequency of a signal and observed a clear decline when the colony enters the colder months. He notes that long-term monitoring can help professional beekeepers follow bee activity, but also small amateur beekeepers who need something to convince them that everything is fine with their bees and that there is no need to disturb them, at least not too often. His father has kept bees for fifty years, and the interest in bees has been handed down from father to son. Although Bencsik spent much of his professional career in the study of magnetic resonance, he began to search for a way to keep track of bees eight years ago. He said, “I love bees, they are so interesting that I have finally started to investigate them seriously.”
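The article does not describe the team's actual software, so the following is only a minimal sketch of the general idea: take raw accelerometer samples, split them into short windows, and track the energy in a chosen frequency band over time so that changes in hive activity show up as changes in that band. The sampling rate, the 200–500 Hz band, and all names are illustrative assumptions, not details from the study.

```python
# Hypothetical sketch: track energy in an assumed "bee signal" frequency band over time.
# Assumptions (not from the article): samples are in a NumPy array at `fs` Hz,
# and the band of interest is 200-500 Hz.
import numpy as np

def band_energy_over_time(signal, fs, band=(200.0, 500.0), window_s=1.0):
    """Return one band-energy value per non-overlapping window of the signal."""
    window_len = int(fs * window_s)
    energies = []
    for start in range(0, len(signal) - window_len + 1, window_len):
        chunk = signal[start:start + window_len]
        # Window the chunk, take its spectrum, and sum power inside the band.
        spectrum = np.abs(np.fft.rfft(chunk * np.hanning(window_len)))
        freqs = np.fft.rfftfreq(window_len, d=1.0 / fs)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        energies.append(float(np.sum(spectrum[mask] ** 2)))
    return np.array(energies)

if __name__ == "__main__":
    fs = 2000.0                      # assumed sampling rate in Hz
    t = np.arange(0, 10, 1.0 / fs)   # ten seconds of synthetic test data
    signal = np.sin(2 * np.pi * 300 * t) + 0.1 * np.random.randn(len(t))
    print(band_energy_over_time(signal, fs)[:5])
```

A long-term monitor along these lines would log one band-energy value per window and look for the kind of daily rhythms and seasonal declines the researchers describe, rather than reacting to any single reading.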
Running shoes are the single most important piece of equipment in both track and distance running. A well-constructed shoe, one that balances protection of the athlete from undue physical stress with lightweight construction and responsiveness, will assist runners in the achievement of their ultimate goal: to run as fast as possible. An effective running shoe must combine the features of shock absorbency, motion control when the foot strikes the ground, flexibility and responsiveness, and a measure of durability. Running shoe science began a remarkable progression that included the work of Adi Dassler (1900–1978) of Germany, the founder of Adidas, and the later creations of Bill Bowerman (1911–1999), the American track coach who developed the Nike "waffle" outsole in the early 1970s. Each component of the modern running shoe has a specific function. The outsole is the outer tread of the shoe; it is usually made from a carbon rubber compound and provides traction for the runner. The midsole is the part of the shoe construction that provides both cushioning and stability to the runner. The midsole will appear to be made of a foam material, usually ethylene vinyl acetate (EVA), an extremely lightweight material, or polyurethane. It is common for running shoes to have a post implanted in the midsole to provide further stability. Running shoes often have different densities of materials in the midsole construction, with the medial (inner) part of the midsole composed of a harder EVA and the lateral (outer) side made of a softer material. This design is intended to counter the effects of "pronation," the inward movement of the foot on contact with the running surface; 80% of runners tend to pronate. The midsole may also include a liquid or semi-gel, air, or specialized plastic compound to further absorb shock. Most distance runners will generate forces that are approximately three times their body weight on impact with each foot strike. The upper is the part of the running shoe that encases the foot. It is padded, usually made of a synthetic material, and typically washable. The heel counter is a hard, cup-shaped device set against the heel of the runner to promote stability and to limit the movements of the heel on impact (both laterally and vertically). Many modern running shoes are built to accommodate a foot orthotic, used to correct the structural imbalances that are a primary cause of running injuries. With each stride, the runner delivers a force through the shoe into the ground; as classic Newtonian physics dictates, every such action produces an equal and opposite reaction, with forces of impact directed back into the foot. The more efficiently such forces are distributed through the shoe construction, the more responsive the shoe is to the next stride and the less likely the musculoskeletal structure will be to unduly absorb these forces. The construction of the quintessential perfect running shoe is a marriage of the contrasting features of cushioning and responsiveness.
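To put the "three times body weight" impact claim into rough numbers, the quick back-of-the-envelope calculation below assumes a hypothetical 70 kg runner and standard gravity; the mass is an illustrative assumption, not a figure from the text.

```python
# Rough illustration of "about three times body weight" at foot strike.
# The 70 kg body mass is an assumed example, not a figure from the text.
body_mass_kg = 70.0
g = 9.81                               # gravitational acceleration, m/s^2
body_weight_n = body_mass_kg * g       # ~687 N
peak_impact_n = 3 * body_weight_n      # ~2060 N per foot strike
print(f"Body weight: {body_weight_n:.0f} N, approximate peak impact: {peak_impact_n:.0f} N")
```

Forces on this order, repeated over thousands of strides, are what the midsole and heel counter are designed to spread out and absorb.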
Communication is a process of exchanging information, ideas, thoughts, feelings and emotions through speech, signals, writing, or behavior. In the communication process, a sender (encoder) encodes a message and then, using a medium/channel, sends it to the receiver (decoder), who decodes the message and, after processing the information, sends back appropriate feedback/reply using a medium/channel.
Types of Communication
People communicate with each other in a number of ways that depend upon the message and the context in which it is being sent. The choice of communication channel and your style of communicating also affect communication. So, there are a variety of types of communication. Types of communication based on the communication channels used are:
- Verbal Communication
- Nonverbal Communication
Verbal communication refers to the form of communication in which the message is transmitted verbally; communication is done by word of mouth or in writing. The objective of every communication is to have people understand what we are trying to convey. In verbal communication, remember the acronym KISS (keep it short and simple). When we talk to others, we assume that they understand what we are saying because we know what we are saying. But this is not the case. Usually people bring their own attitudes, perceptions, emotions and thoughts about the topic, and this creates barriers to delivering the right meaning. So in order to deliver the right message, you must put yourself on the other side of the table and think from your receiver's point of view. Would he understand the message? How would it sound on the other side of the table? Verbal communication is further divided into:
- Oral Communication
- Written Communication
In oral communication, spoken words are used. It includes face-to-face conversations, speeches, telephonic conversation, video, radio, television, and voice over internet. In oral communication, communication is influenced by pitch, volume, speed and clarity of speaking. Advantages of oral communication: it brings quick feedback, and in a face-to-face conversation, by reading facial expressions and body language one can guess whether he/she should trust what's being said or not. A disadvantage of oral communication: in a face-to-face discussion, the speaker is unable to think deeply about what he is delivering, so this can be counted as a disadvantage. In written communication, written signs or symbols are used to communicate. A written message may be printed or handwritten. In written communication a message can be transmitted via email, letter, report, memo, etc. The message, in written communication, is influenced by the vocabulary and grammar used, writing style, precision and clarity of the language used. Written communication is the most common form of communication used in business, so it is considered a core business skill. Memos, reports, bulletins, job descriptions, employee manuals, and electronic mail are the types of written communication used for internal communication. For communicating with the external environment in writing, electronic mail, Internet Web sites, letters, proposals, telegrams, faxes, postcards, contracts, advertisements, brochures, and news releases are used. Advantages of written communication include: messages can be edited and revised many times before they are actually sent; written communication provides a record of every message sent, which can be saved for later study; and a written message enables the receiver to fully understand it and send appropriate feedback.
Disadvantages of written communication include: unlike oral communication, written communication doesn't bring instant feedback; it takes more time to compose a written message as compared to word-of-mouth; and many people struggle with writing ability. Nonverbal communication is the sending or receiving of wordless messages. We can say that communication other than oral and written, such as gesture, body language, posture, tone of voice or facial expressions, is called nonverbal communication. Nonverbal communication is all about the body language of the speaker. Nonverbal communication helps the receiver in interpreting the message received. Often, nonverbal signals reflect the situation more accurately than verbal messages. Sometimes a nonverbal response contradicts verbal communication and hence affects the effectiveness of the message. Nonverbal communication has the following three elements:
- Appearance – speaker: clothing, hairstyle, neatness, use of cosmetics; surroundings: room size, lighting, decorations, furnishings
- Body language – facial expressions, gestures, postures
- Voice – tone, volume, speech rate
Types of Communication Based on Purpose and Style
Based on style and purpose, there are two main categories of communication, and each bears its own characteristics. Communication types based on style and purpose are:
- Formal Communication
- Informal Communication
In formal communication, certain rules, conventions and principles are followed while communicating the message. Formal communication occurs in a formal and official style. Usually professional settings, corporate meetings and conferences follow a formal pattern. In formal communication, use of slang and foul language is avoided and correct pronunciation is required. Authority lines need to be followed in formal communication. Informal communication is done using channels that are in contrast with formal communication channels. It is just casual talk. It is established for societal affiliations of members in an organization and face-to-face discussions. It happens among friends and family. In informal communication, use of slang words and foul language is not restricted. Usually, informal communication is done orally and using gestures. Informal communication, unlike formal communication, doesn't follow authority lines. In an organization, it helps in finding out staff grievances, as people express more when talking informally. Informal communication helps in building relationships.
Reading comprehension and listening comprehension engage different processes in the brain and must be taught separately. First, students need to comprehend what they read. The second kind of comprehension is listening and comprehending while someone reads aloud to the student. There is also a difference between listening to someone read a story out loud and simply listening to someone give oral directions. So, the third kind of comprehension comes into play when we listen to directions spoken orally. Each form of comprehension involves different brain cells and requires that different mental processes be engaged (Block, Schaller, Joy, & Gaine, 2002). Neuroimaging studies have shown that while the brain uses similar regions of the brain for reading comprehension and listening comprehension, the brain engages and uses these regions of the brain in diverse ways. The brain uses a unique set of processes for each aspect of comprehension: cognitive processing, executive functioning, and working memory (Baker, Zeliger-Kandasamy, & DeWyngaert, 2014). The first problem is silent reading. In school, lessons often emphasize silent reading and answering multiple choice test questions after a story (Ness, 2011), but comprehension is too complex a process to be taught by simply previewing, reading a story, and answering a set of questions (Pardo, 2004). Research has shown that students need intrinsic motivation, decoding/encoding skills, vocabulary knowledge, and self-monitoring skills in order to be able to comprehend what they read (Block, Schaller, Joy, & Gaine, 2002). Self-monitoring skills are especially important because students must stop when they do not understand what they are reading. Many children just keep reading or skip over words that they do not know. If students are reading silently, how can you tell whether they have trouble processing what they are reading or are failing to comprehend because they do not understand the meaning of the words that they are reading? If students do not know the meaning of the words used in a test question, how can you determine whether they did not comprehend the meaning of the story or simply did not understand the meaning of the words in the test question? We cannot teach comprehension through silent reading and answering questions at the end of the story. The second problem is that, too often, students are not taught vocabulary words or the meaning of words. Students will never be able to comprehend what they are reading if they do not understand the meaning of the words that they are reading. A third mistake in teaching comprehension is that schools often do not teach oral listening comprehension. Too often, educators do not check to make sure that students in the classroom understand oral directions or understand when stories or passages are read out loud. We need to be teaching students to comprehend what they read as well as what they hear being read or spoken to them. At the Reading Orienteering Club (ROC), we emphasize three different kinds of comprehension. The workers at the workstations READ the directions to the children rather than simply explaining what needs to be done. This helps the children understand what they are being asked to do, but also develops their listening comprehension skills. Oral explanations and clarifications are also used at the workstations—teaching comprehension from oral directions. Therefore, both types of listening comprehension must be taught. 
Students must listen to and comprehend directions that are read orally by workstation helpers. Then, they ask questions when they do not understand. Oral explanations and clarifications from the workers at the workstations then allow children to engage their listening skills and comprehension while listening both to directions that are read aloud and to directions that are spoken. Reading comprehension is also taught at the ROC. Read and Spell word lists and vowel-clustered stories are interspersed throughout the workstations to give children practice reading lists of words and also reading the words in the context of a story. One workstation is totally devoted to reading. The children read out loud to a volunteer. The reading workstations employ the progressive step system (Steps 1, 2, 3) so that each child can read a beginning chapter book. We want children to practice decoding and encoding words contained in oral passages (stories), but we do not want the stories to be above the child's ability level. Everyone starts at Step 1 and works their way up. We do not use multiple-choice test questions for teaching comprehension. At the ROC, the children are asked: Who, What, When, Where, and Why? Who was the story about? What happened in the story? When did the story take place? Where did the story take place? And why is this a good or a bad story? Would you recommend for someone else to read this story? At the ROC reading clinic, we want the children to fully engage their thinking and analytical processes so that they truly understand and comprehend the stories that they read. We also use the 4 steps (explained more completely in my 2/24/18 blog post). The 4 steps are to (1) break the word down letter-by-letter, sound-by-sound, learning to pronounce the word correctly; (2) practice spelling the word out loud and then write the word correctly on paper; (3) give a definition for the word by looking the word up correctly in the dictionary; and (4) write a sentence using the word, making sure the word is used correctly and that the sentence is grammatical. Children too young to write a sentence can give the sentences orally. All children write the words correctly on manuscript writing paper, making sure that they are shaping their letters correctly. Writing is one of the stages in learning to read (see 4/12/18 post). The children are encouraged to read both fiction and nonfiction. They even pretend to have their own TV show and become TV reporters. They report on the books they read, or give factual information about a topic being studied, such as outer space or the ocean. The TV shows give an extra challenge to students who are ready to take on harder reading material. The progressive step system allows struggling students to be full participants in the TV show. They work with age- and ability-appropriate reading material or become puppeteers. All students can learn to read and comprehend what they read, but we must start at the beginning and teach the decoding and encoding of letter sounds. We must emphasize phonemic awareness. Then, we must take letter sounds one step further: we must pull the sounds together into words, sentences, and stories. Comprehension begins with letter sounds and ends with word meaning. Students cannot read and comprehend what they read if they cannot decode and encode letter sounds into words. Teaching both reading and listening comprehension is essential for effective reading.
Leading researchers in comprehension state that we need interventions that teach encoding, decoding, and comprehension (Cutting, Eason, Young, & Alberstadt, 2011). At the ROC reading clinic, we teach encoding, decoding, and comprehension at the same time.
External combustion is a process in which a device, such as a motor or engine, is powered by fuel burned outside of the device. It is an alternative to traditional combustion engines, where fuel is burned within the engine itself. The steam engine is the classic example of external combustion. While steam technology is rarely used today, it has served as inspiration for other modern technologies that rely on external combustion to power a device. During external combustion, some type of fuel is burned within an independent combustion chamber. Lines filled with fluid transfer this heat energy into the motor, where the sudden increase in heat is used to drive one or more pistons. These pistons transfer force or motion, creating a functional engine that can be used to power many types of machines. Once the heat has been removed from the fluid, the cooled fluid passes back into the combustion chamber for reheating. This transfer fluid may consist of gaseous materials or simple deionized water. While steam was once used to power trains and ships, modern external combustion systems are primarily used in electricity production. They are also popular with the military, which benefits from the flexible fuel sources this technology can use. External combustion engines may be used in manufacturing or industrial equipment, as well as equipment used in agriculture or construction. Some scientists even expect these engines to be used in standard passenger cars in the future. The primary advantage of these engines is that they can be powered by almost any type of fuel. This includes traditional fossil fuels like coal and gas, as well as alternative fuel sources like biofuel or ethanol. Some external combustion units may also run on burning wood or even trash. This makes it easy and convenient for people in remote areas to run equipment and machinery. It also helps to reduce dependence on limited fossil fuel supplies, and may cut pollution emissions. These engines may also pose several drawbacks, depending on how and where they are used. In much of the world, the internal combustion engine has long been the preferred source of power for machinery and equipment. Despite the pollution and other problems associated with that technology, consumers are slow to adopt new alternatives. External combustion units are also relatively new compared to more widely used methods of power, and have not been tested to their full capabilities.
Allergic reactions occur when your immune system misidentifies a neutral substance, called an allergen, as harmful. Allergens come into contact with your body through inhalation, swallowing, touching or injection. Your immune system responds by releasing chemicals that attack the allergen, which causes an allergic reaction. Allergic reactions range from mild to severe to life-threatening, and may affect a specific area or the entire body. They can intensify through repeated exposure to the allergen. Severe reactions typically occur within seconds or minutes following exposure, although ingested allergens may take as long as 24 hours to manifest, according to the National Institutes of Health.
Respiratory system allergic reactions
Inhaled allergens typically affect the respiratory system, especially the nose, nasal cavities, sinuses, bronchial tubes and lungs. They typically cause sneezing; coughing; nasal and sinus congestion; nasal itching; clear nasal discharge; and itching of the roof of the mouth and/or ears. Allergic rhinitis ("hay fever") affects the nose's mucous membrane. The nasal passageways become inflamed and discharge mucus. Allergic asthma causes your airways to become inflamed. Symptoms include wheezing, breathing difficulties, coughing and chest tightness. Common complications of allergic reactions in the respiratory system are sinusitis (an inflammation or infection of the sinuses), ear infections and bronchitis (an inflammation of the lung airways).
Reactions to skin allergies
There are two types of skin allergies. The first, contact dermatitis, is caused by the skin coming into direct contact with an allergen. Symptoms may manifest as a rash or eczema (itchy, inflamed skin, sometimes with crusting, lesions, scaling or blisters). There are approximately 3,000 known contact allergens, and contact dermatitis is one of the most common skin diseases in adults, according to the Asthma and Allergy Foundation of America. Common causes of a contact dermatitis reaction include detergents left on washed clothes; nickel (in jewelry and underclothes fastenings); chemicals (e.g., latex in rubber gloves and condoms); certain cosmetic ingredients; plants such as poison ivy and oak; and topical medications, according to the ACAAI. The second type of skin allergy refers to allergic reactions in which the allergen comes into contact with a part of the body other than the skin but manifests on the skin, causing reactions such as urticaria (itchy bumps frequently called hives), eczema and psoriasis (which often manifests as irritated, thick red skin with scaly, silver-white patches). Acute urticaria can be caused by an allergy to foods or medication, according to the AAFA. Angioedema refers to swelling in the deeper layers of the skin, typically the eyelids, lips, tongue, hands and feet.
Gastrointestinal/digestive system allergies
The gastrointestinal tract controls our body's digestive processes. Areas of the gastrointestinal tract that are affected by allergic reactions include the mouth, esophagus and stomach. Food allergies, for example, can cause an itchy mouth or throat, nausea, vomiting, cramping and diarrhea.
Eye allergies
Allergens that get into the eyes cause allergic conjunctivitis (also known as "pink eye"), an inflammation of the mucous membrane that surrounds the eye.
2.5 Definitions: Absolute and Comparative Advantage

- Learn how to define labor productivity and opportunity cost within the context of the Ricardian model.
- Learn to identify and distinguish absolute advantage and comparative advantage.
- Learn to identify comparative advantage via two methods: (1) by comparing opportunity costs and (2) by comparing relative productivities.

The basis for trade in the Ricardian model is differences in technology between countries. Below we define two different ways to describe technology differences. The first method, called absolute advantage, is the way most people understand technology differences. The second method, called comparative advantage, is a much more difficult concept. As a result, even those who learn about comparative advantage often will confuse it with absolute advantage. It is quite common to see misapplications of the principle of comparative advantage in newspaper and journal stories about trade. Many times authors write "comparative advantage" when in actuality they are describing absolute advantage. This misconception often leads to erroneous implications, such as a fear that technology advances in other countries will cause our country to lose its comparative advantage in everything. As will be shown, this is essentially impossible.

To define absolute advantage, it is useful to define labor productivity first. To define comparative advantage, it is useful to first define opportunity cost. Next, each of these is defined formally using the notation of the Ricardian model.

Labor productivity is defined as the quantity of output that can be produced with a unit of labor; it is the reciprocal of the unit labor requirement. Since aLC represents hours of labor needed to produce one pound of cheese, its reciprocal, 1/aLC, represents the labor productivity of cheese production in the United States. Similarly, 1/aLW represents the labor productivity of wine production in the United States.

A country has an absolute advantage in the production of a good relative to another country if it can produce the good at lower cost or with higher productivity. Absolute advantage compares industry productivities across countries. In this model, we would say the United States has an absolute advantage in cheese production relative to France if

aLC < aLC∗ or, equivalently, 1/aLC > 1/aLC∗.

The first expression means that the United States uses fewer labor resources (hours of work) to produce a pound of cheese than does France. In other words, the resource cost of production is lower in the United States. The second expression means that labor productivity in cheese in the United States is greater than in France. Thus the United States generates more pounds of cheese per hour of work.

Obviously, if aLC∗ < aLC, then France has the absolute advantage in cheese. Also, if aLW < aLW∗, then the United States has the absolute advantage in wine production relative to France.

Opportunity cost is defined generally as the value of the next best opportunity: in the Ricardian model, it is the amount of a good that must be given up to produce one more unit of another good. In the context of national production, the nation has opportunities to produce wine and cheese.
If the nation wishes to produce more cheese, then because labor resources are scarce and fully employed, it is necessary to move labor out of wine production in order to increase cheese production. The loss in wine production necessary to produce more cheese represents the opportunity cost to the economy. The slope of the PPF, −(aLC/aLW), corresponds to the opportunity cost of production in the economy.

Figure 2.2 Defining Opportunity Cost

To see this more clearly, consider points A and B in Figure 2.2 "Defining Opportunity Cost". Let the horizontal distance between A and B be one pound of cheese. Label the vertical distance X. The distance X then represents the quantity of wine that must be given up to produce one additional pound of cheese when moving from point A to B. In other words, X is the opportunity cost of producing cheese.

Note also that the slope of the line between A and B is given by the formula

slope = rise/run = −X/1.

Thus the slope of the line between A and B is the opportunity cost, which from above is given by −(aLC/aLW). We can more clearly see why the slope of the PPF represents the opportunity cost by noting the units of this expression: (hours per pound of cheese)/(hours per gallon of wine) reduces to gallons of wine per pound of cheese. Thus the slope of the PPF expresses the number of gallons of wine that must be given up (hence the minus sign) to produce another pound of cheese. Hence it is the opportunity cost of cheese production (in terms of wine). The reciprocal of the slope, −(aLW/aLC), in turn represents the opportunity cost of wine production (in terms of cheese).

Since in the Ricardian model the PPF is linear, the opportunity cost is the same at all possible production points along the PPF. For this reason, the Ricardian model is sometimes referred to as a constant (opportunity) cost model.

Using Opportunity Costs

A country has a comparative advantage in the production of a good if it can produce that good at a lower opportunity cost relative to another country. Thus the United States has a comparative advantage in cheese production relative to France if

aLC/aLW < aLC∗/aLW∗.

This means that the United States must give up less wine to produce another pound of cheese than France must give up to produce another pound. It also means that the slope of the U.S. PPF is flatter than the slope of France's PPF.

Starting with the inequality above, cross multiplication implies the following:

aLW∗/aLC∗ < aLW/aLC.

This means that France can produce wine at a lower opportunity cost than the United States. In other words, France has a comparative advantage in wine production. This also means that if the United States has a comparative advantage in one of the two goods, France must have the comparative advantage in the other good. It is not possible for one country to have the comparative advantage in both of the goods produced.

Suppose one country has an absolute advantage in the production of both goods. Even in this case, each country will have a comparative advantage in the production of one of the goods. For example, suppose aLC = 10, aLW = 2, aLC∗ = 20, and aLW∗ = 5. In this case, aLC (10) < aLC∗ (20) and aLW (2) < aLW∗ (5), so the United States has the absolute advantage in the production of both wine and cheese. However, it is also true that

aLC∗/aLW∗ (= 4) < aLC/aLW (= 5),

so that France has the comparative advantage in cheese production relative to the United States.

Using Relative Productivities

Another way to describe comparative advantage is to look at the relative productivity advantages of a country. In the United States, the labor productivity in cheese is 1/10, while in France it is 1/20.
This means that the U.S. productivity advantage in cheese is (1/10)/(1/20) = 2/1. Thus the United States is twice as productive as France in cheese production. In wine production, the U.S. advantage is (1/2)/(1/5) = 2.5/1. This means the United States is two and one-half times as productive as France in wine production. The comparative advantage good in the United States, then, is the good in which the United States enjoys the greatest productivity advantage: wine.

Also consider France's perspective. Since the United States is two times as productive as France in cheese production, France must be 1/2 as productive as the United States in cheese. Similarly, France is 2/5 as productive in wine as the United States. Since 1/2 > 2/5, France has a productivity disadvantage in both goods; however, France's disadvantage is smallest in cheese, so France has a comparative advantage in cheese.

No Comparative Advantage

The only case in which neither country has a comparative advantage is when the opportunity costs are equal in both countries. In other words, when

aLC/aLW = aLC∗/aLW∗,

then neither country has a comparative advantage. It would seem, however, that this is an unlikely occurrence.

- Labor productivity is defined as the quantity of output produced with one unit of labor; in the model, it is derived as the reciprocal of the unit labor requirement.
- Opportunity cost is defined as the quantity of a good that must be given up in order to produce one unit of another good; in the model, it is defined as the ratio of unit labor requirements between the first and the second good.
- The opportunity cost corresponds to the slope of the country's production possibility frontier (PPF).
- An absolute advantage arises when a country has a good with a lower unit labor requirement and a higher labor productivity than another country.
- A comparative advantage arises when a country can produce a good at a lower opportunity cost than another country.
- A comparative advantage is also defined as the good in which a country's relative productivity advantage (disadvantage) is greatest (smallest).
- A country cannot fail to have a comparative advantage in something unless the opportunity costs (relative productivities) are equal across countries; in that case, neither country has a comparative advantage in anything.

Jeopardy Questions. As in the popular television game show, you are given an answer to a question and you must respond with the question. For example, if the answer is "a tax on imports," then the correct question is "What is a tariff?"

- The labor productivity in cheese if four hours of labor are needed to produce one pound.
- The labor productivity in wine if three kilograms of cheese can be produced in one hour and ten liters of wine can be produced in one hour.
- The term used to describe the amount of labor needed to produce a ton of steel.
- The term used to describe the quantity of steel that can be produced with an hour of labor.
- The term used to describe the amount of peaches that must be given up to produce one more bushel of tomatoes.
- The term used to describe the slope of the PPF when the quantity of tomatoes is plotted on the horizontal axis and the quantity of peaches is on the vertical axis.

Consider a Ricardian model with two countries, the United States and Ecuador, producing two goods, bananas and machines. Suppose the unit labor requirements are aLBUS = 8, aLBE = 4, aLMUS = 2, and aLME = 4. Assume the United States has 3,200 workers and Ecuador has 400 workers.
- Which country has the absolute advantage in bananas? Why? - Which country has the comparative advantage in bananas? Why? - How many bananas and machines would the United States produce if it applied half of its workforce to each good? Consider a Ricardian model with two countries, England and Portugal, producing two goods, wine and corn. Suppose the unit labor requirements in wine production are aLWEng = 1/3 hour per liter and aLWPort = 1/2 hour per liter, while the unit labor requirements in corn are aLCEng = 1/4 hour per kilogram and aLCPort = 1/2 hour per kilogram. - What is labor productivity in the wine industry in England and in Portugal? - What is the opportunity cost of corn production in England and in Portugal? - Which country has the absolute advantage in wine? In corn? - Which country has the comparative advantage in wine? In corn?
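The definitions in this section are also easy to encode. Below is a minimal Python sketch (an illustration added here, not part of the original chapter) that applies them to the cheese-and-wine numbers used earlier in the section (aLC = 10, aLW = 2, aLC∗ = 20, aLW∗ = 5); the function and variable names are my own.

```python
# Minimal sketch: identify absolute and comparative advantage in a
# two-country, two-good Ricardian model from unit labor requirements.
# Numbers are the cheese/wine example from the text; names are illustrative.

def labor_productivity(unit_labor_requirement):
    """Labor productivity is the reciprocal of the unit labor requirement."""
    return 1.0 / unit_labor_requirement

def opportunity_cost(a_this_good, a_other_good):
    """Amount of the other good given up to produce one more unit of this good."""
    return a_this_good / a_other_good

# Unit labor requirements (hours per unit of output)
us = {"cheese": 10, "wine": 2}       # aLC, aLW
france = {"cheese": 20, "wine": 5}   # aLC*, aLW*

for good in ("cheese", "wine"):
    # Absolute advantage: higher labor productivity (lower unit labor requirement)
    abs_adv = ("United States"
               if labor_productivity(us[good]) > labor_productivity(france[good])
               else "France")
    print(f"Absolute advantage in {good}: {abs_adv}")

for good, other in (("cheese", "wine"), ("wine", "cheese")):
    oc_us = opportunity_cost(us[good], us[other])
    oc_fr = opportunity_cost(france[good], france[other])
    comp_adv = "United States" if oc_us < oc_fr else "France"
    print(f"Opportunity cost of {good}: US = {oc_us:.2f}, France = {oc_fr:.2f}"
          f" -> comparative advantage: {comp_adv}")
```

Run as written, the sketch reports that the United States holds the absolute advantage in both goods but the comparative advantage only in wine, while France holds the comparative advantage in cheese, which matches the conclusions reached above.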
Spinal nerves, part of the PNS, are generally mixed nerves, carrying motor, sensory, and autonomic signals between the CNS and the body.

Describe spinal nerves of the peripheral nervous system

Afferent sensory axons, bringing sensory information from the body to the spinal cord and brain, travel through the dorsal roots of the spinal cord, and efferent motor axons, bringing motor information from the brain to the body, travel through the ventral roots of the spinal cord. The foramen allows for the passage of the spinal nerve root, dorsal root ganglion, the spinal artery of the segmental artery, communicating veins between the internal and external plexuses, recurrent meningeal (sinu-vertebral) nerves, and transforaminal ligaments.

The term spinal nerve generally refers to a mixed spinal nerve, which carries motor, sensory, and autonomic signals between the spinal cord and the body. Humans have 31 left-right pairs of spinal nerves, each roughly corresponding to a segment of the vertebral column: eight cervical spinal nerve pairs (C1-C8), 12 thoracic pairs (T1-T12), five lumbar pairs (L1-L5), five sacral pairs (S1-S5), and one coccygeal pair. The spinal nerves are part of the peripheral nervous system (PNS).

Each spinal nerve is formed by the combination of nerve fibers from the dorsal and ventral roots of the spinal cord. The dorsal roots carry afferent sensory axons, while the ventral roots carry efferent motor axons. The spinal nerve emerges from the spinal column through an opening (intervertebral foramen) between adjacent vertebrae. This is true for all spinal nerves except for the first spinal nerve pair, which emerges between the occipital bone and the atlas (the first vertebra). Thus the cervical nerves are numbered by the vertebra below, except C8, which exists below C7 and above T1. The thoracic, lumbar, and sacral nerves are then numbered by the vertebra above. In the case of a lumbarized S1 vertebra (i.e., L6) or a sacralized L5 vertebra, the nerves are typically still counted to L5 and the next nerve is S1.

Spinal Nerve Innervation

Outside the vertebral column, the nerve divides into branches. The dorsal ramus contains nerves that serve the dorsal portions of the trunk, carrying visceral motor, somatic motor, and somatic sensory information to and from the skin and muscles of the back (epaxial muscles). The ventral ramus contains nerves that serve the remaining ventral parts of the trunk and the upper and lower limbs (hypaxial muscles), carrying visceral motor, somatic motor, and sensory information to and from the ventrolateral body surface, structures in the body wall, and the limbs. The meningeal branches (recurrent meningeal or sinuvertebral nerves) branch from the spinal nerve and re-enter the intervertebral foramen to serve the ligaments, dura, blood vessels, intervertebral discs, facet joints, and periosteum of the vertebrae. The rami communicantes contain autonomic nerves that serve visceral functions, carrying visceral motor and sensory information to and from the visceral organs.
The posterior distribution of the cervical nerves includes the suboccipital nerve (C1), the greater occipital nerve (C2) and the third occipital nerve (C3). The anterior distribution includes the cervical plexus (C1-C4) and brachial plexus (C5-T1). The muscles innervated by the cervical nerves are the sternohyoid, sternothyroid and omohyoid muscles. A loop of nerves called the ansa cervicalis is part of the cervical plexus.

Thoracic nerve branches exit the spine and go directly to the paravertebral ganglia of the autonomic nervous system, where they are involved in the functions of organs and glands in the head, neck, thorax and abdomen.

Anterior divisions: The intercostal nerves come from thoracic nerves T1-T11 and run between the ribs. The subcostal nerve comes from nerve T12 and runs below the twelfth rib.

Posterior divisions: The medial branches (ramus medialis) of the posterior branches of the upper six thoracic nerves run between the semispinalis dorsi and multifidus, which they supply; they then pierce the rhomboid and trapezius muscles and reach the skin by the sides of the spinous processes. This branch is called the medial cutaneous ramus. The medial branches of the lower six are distributed chiefly to the multifidus and longissimus dorsi; occasionally they give off filaments to the skin near the middle line. This sensory branch is called the posterior cutaneous ramus.

The lumbar nerves are divided into posterior and anterior divisions. The medial branches of the posterior divisions of the lumbar nerves run close to the articular processes of the vertebrae and end in the multifidus muscle. The lateral branches supply the erector spinae muscles. The anterior divisions of the lumbar nerves (rami anteriores) consist of long, slender branches which accompany the lumbar arteries around the sides of the vertebral bodies, beneath the psoas major. The first and second, and sometimes the third and fourth, lumbar nerves are each connected with the lumbar part of the sympathetic trunk by a white ramus communicans. The nerves pass obliquely outward behind the psoas major, or between its fasciculi, distributing filaments to it and the quadratus lumborum. The first three and the greater part of the fourth are connected by anastomotic loops and form the lumbar plexus. The smaller part of the fourth joins with the fifth to form the lumbosacral trunk, which assists in the formation of the sacral plexus. The fourth nerve is named the furcal nerve, from the fact that it is subdivided between the two plexuses.

There are five paired sacral nerves, half of them arising through the sacrum on the left side and the other half on the right side. Each nerve emerges in two divisions: one division through the anterior sacral foramina and the other division through the posterior sacral foramina. The sacral nerves have both afferent and efferent fibers; thus they are responsible for part of the sensory perception and the movements of the lower extremities of the human body. The pudendal nerve and parasympathetic fibers arise from S2, S3, and S4. They supply the descending colon and rectum, bladder, and genital organs. These pathways have both afferent and efferent fibers.

The coccygeal nerve is the 31st pair of spinal nerves and arises from the conus medullaris. Its anterior root helps form the coccygeal plexus. Spinal nerve motor functions are summarized in a table in the original source.

Source: Boundless. "Overview of the Spinal Nerves." Boundless Anatomy and Physiology. Boundless, 29 Jul. 2016.
Retrieved 31 Aug. 2016 from https://www.boundless.com/physiology/textbooks/boundless-anatomy-and-physiology-textbook/peripheral-nervous-system-13/spinal-nerves-132/overview-of-the-spinal-nerves-710-5078/
Microscopic fungal parasites reveal host's behavior

Mycologists have long debated which organisms should or should not be counted as fungi. The Kingdom Fungi includes a lot of biodiversity; an estimate of 5.1 million species has been suggested, comprising all sorts of things: molds, the well-known mushrooms, plant and insect parasites, polypores, and some important model organisms such as Saccharomyces cerevisiae. To date, however, most fungi remain unknown and uncharacterized.

The Laboulbeniales are perhaps (probably!) the most intriguing and yet the least studied of all insect-associated parasitic fungi. The order Laboulbeniales (Fungi, Ascomycota) consists of obligate parasites living attached to the exterior of their invertebrate hosts, mainly beetles. Unlike the well-known mushroom structure (stem, cap + gills or pores), Laboulbeniales are microscopic organisms, referred to as thalli, of 0.150-1 mm in length, rarely more, bearing antheridia and perithecia on a receptacle with appendages. These fungi exhibit great host specificity (i.e. one particular species lives on a single or a few host species), and are often remarkably adapted to a given position on the host body. This is important for the rest of this post: the occurrence of Laboulbeniales species on a precise portion of beetle integument is called "position specificity". Successful establishment of the parasite requires both the presence of a suitable host and favorable environmental conditions for the fungus. They produce sticky spores that are exclusively spread by the activities of the host, have a short life span, and cannot spread through air. Infection with Laboulbeniales through sexual contact is the most important type of direct transmission; it can therefore be thought of as a sexually transmitted disease. In lady beetles, Laboulbeniales are also socially transmitted: infection spreads when large numbers of lady beetles form aggregations in winter-time shelters. In spite of their parasitic nature, most Laboulbeniales are avirulent and seem to have little to no effect on the reproduction and survival of their host. The order currently counts 140 genera and some 2,050 species, 1,260 of which have been described by one single person: Roland Thaxter (1858-1932), who studied at the Farlow Herbarium (of the Harvard University Herbaria), as I do right now.

Understanding beetle behavior

In research, even fundamental research, we always want to answer some big questions. Now what can possibly be answered using Laboulbeniales, tiny parasites without any harmful effects on their hosts? Tough question, but Lauren Goldmann seems to have found an interesting way of applying these fungi: using the specific positions of different species on their host to explain host beetle (mating) behavior. Wow, that is quite something. Let's have a closer look.

Thirteen (!) species of the genus Chitonomyces have been described on the aquatic beetle Laccophilus maculosus (Coleoptera, Dytiscidae). Goldmann & Weir have shown by using molecular data that these 13 species reported to exhibit position specificity on L. maculosus were ...

"... placed neatly into pairs of morphotypes, resulting in synonymies and recognition of six phylogenetic species (one species is a triplet)."

"Male beetles are susceptible to all 13 morphotypes, opposed to only six morphotypes known on females."

For example, the (new) phylogenetic species Chitonomyces paradoxus (Peyr.) Thaxt. consists of the morphotypes Ch. unciger (Thaxt.) Thaxt. and
Ch. paradoxus (Peyr.) Thaxt. The latter name has been kept because it is the older of the two:

Chitonomyces paradoxus (Peyr.) Thaxter, 1896, Mem. Am. Acad. Arts Sci. 12: 287.
= Heimatomyces paradoxus Peyritsch, 1873, Sitzungsber. Kaiserl. Akad. Wiss., Math-Naturwiss. Cl., Abt. 1 [Wien] 64: 251.
= Chitonomyces unciger (Thaxt.) Thaxter, 1896, Mem. Am. Acad. Arts Sci. 12: 288.
= Heimatomyces unciger Thaxter, 1895, Proc. Am. Acad. Arts Sci. 30: 478.

Thalli of the Ch. paradoxus morphotype are positioned on the edge of the left elytron of both male and female Laccophilus maculosus. Chitonomyces unciger always parasitizes the left posterior claw of only males. This rings a bell, doesn't it? Well, it certainly did for the team from the State University of New York College of Environmental Science and Forestry. They found that the six phylogenetic species each comprise a pair - and in one case three - previously described morphological species that correspond to points of contact between male and female beetles during copulation. Bang! Quite some results, huh?

And this is not all. When the copulating male beetle is placed in parallel orientation with the female, the positions of the morphotypes in males and females misalign (see figure 4A). It is shown that males are instead oriented in a diagonal (as in figure 4B above), resulting in the alignment of all positions. This diagonal position is thought to be adopted by males when females try to break free from the male hold, thereby increasing the overall pressure on the female. This specific pressure will result in the release of ascospores from the perithecium.

So, then why are thalli of Chitonomyces paradoxus on the edge of the left elytron present on both males and females? Shouldn't they just be present on females in that position? The funny thing about these beetles is that the male individuals simply try to copulate with anything, even with other males. Therefore, males are infected at both positions because of this so-called same-sex mounting behavior, which has never been observed in females. [Remember that the pairing morphotype, Ch. unciger, is positioned at the left posterior claw of only males.]

End of story, problem solved, and one very nice application for my beloved organisms.

Blackwell M 2011, American Journal of Botany 98: 426-438.
Tavares II 1985, Mycologia Memoir 9: 1-627.
Thaxter R 1896, Memoirs of the American Academy of Arts and Sciences 12: 187-429.
De Kesel 1996, Mycologia 88: 565-573.
Weir A & GW Beakes 1995, Mycologist 9: 6-10.
Rossi W & S Santamaría 2012, Mycologia 104: 785-788.
Goldmann LM & A Weir 2012, Mycologia 104 (5): 1143-1158.
April 10, 2012

Newfangled space-propulsion technology could help clean up Earth orbit

Some of the most valuable “real estate” for humans isn't on Earth at all but rather above the planet's atmosphere, where all manner of human-made objects orbit. The problem is that those orbits are too crowded with dead satellites and debris, making new launches riskier.

Robert Winglee has spent years developing a magnetized ion plasma system to propel a spacecraft at ultra-high speeds, making it possible to travel to Mars and return to Earth in as little as 90 days. The problem is that cost and other issues have dampened the desire to send astronauts to Mars or any other planet. But Winglee, who heads the University of Washington's Earth and Space Sciences Department, believes his problem might actually be a solution to the problem of space junk crowding the orbital paths around Earth.

A magnetized-beam plasma propulsion device (mag-beam for short) in Earth orbit would be able to use a focused ion stream to push dead satellites and other debris toward Earth's atmosphere, where they would mostly burn up on re-entry. The idea has drawn interest, and some funding, from the U.S. Defense Department.

“Our proposal was that we could mitigate a whole region of space rather than work with individual pieces one at a time,” Winglee said.

As a propulsion method, the mag-beam would interact with a specialized receptor on a spacecraft, pushing it to speeds perhaps greater than 18,000 miles per hour. Satellites orbiting Earth don't have those specialized receptors, but Winglee said applying the beam directly to a satellite would still provide enough momentum to move it toward the atmosphere.

A geosynchronous orbit, one in which a satellite returns to the same position above the Earth each day, “is very valuable space, but it's full of dead satellites,” he said. For decades, communications satellites have been placed in orbits from hundreds of miles to several thousand miles above sea level to create a fixed point in the sky for ground installations to communicate with. Many of those satellites have ceased to function, though they continue in their orbits.

That might not sound like such a big problem, but as space gets more crowded with gadgets, the chance of a collision between two satellites grows. Then, instead of two larger objects to worry about, satellites worth vast sums of money – and perhaps even space vehicles such as the International Space Station – would have to navigate through a cloud of debris. Even a tiny washer or screw traveling at 6,700 miles per hour in Earth orbit could cause serious damage to another object.

Using a mag-beam to clean up the debris is feasible now, Winglee said, and could be accomplished through a standard satellite mission costing perhaps $300 million. The technology would not be useful for pushing near-Earth asteroids or comets away from the planet, he said, because they have too much mass for the mag-beam to be effective.

Meanwhile, Winglee and his students continue research in his Johnson Hall laboratory on the possibility of placing mag-beam units in orbit around Earth and around a planet such as Mars that humans might want to explore. With a unit on each end – one to give a spacecraft a high-velocity push on its journey and the other to slow it at its destination – a mission to Mars could be accomplished in as little as 90 days, rather than the 2.5 years it would take with conventional means.
“We're continuing on a shoestring budget, and we're modeling what the system can do over longer distances,” Winglee said.
An explosive material is a material that is either chemically or otherwise energetically unstable, or that produces a sudden expansion of the material, usually accompanied by the production of heat and large changes in pressure (and typically also a flash and/or loud noise) upon initiation; this is called the explosion.

Explosives are classified as low or high explosives according to their rates of decomposition: low explosives burn rapidly (or deflagrate), while high explosives detonate. While these definitions are distinct, the problem of precisely measuring rapid decomposition makes practical classification of explosives difficult.

The chemical decomposition of an explosive may take years, days, hours, or a fraction of a second. The slower processes of decomposition take place in storage and are of interest only from a stability standpoint. Of more interest are the two rapid forms of decomposition, deflagration and detonation. The latter term describes an explosive phenomenon whereby the decomposition is propagated by an explosive shockwave traversing the explosive material. The shockwave front is capable of passing through the high explosive material at great speeds, typically thousands of meters per second.

Explosives usually have less potential energy than petroleum fuels, but their high rate of energy release produces the great blast pressure. TNT has a detonation velocity of 6,940 m/s, compared to 1,680 m/s for the detonation of a pentane-air mixture and the 0.34 m/s stoichiometric flame speed of gasoline combustion in air.

Explosive force is released in a direction perpendicular to the surface of the explosive. If the surface is cut or shaped, the explosive forces can be focused to produce a greater local effect; this is known as a shaped charge.

In a low explosive (which deflagrates), the decomposition is propagated by a flame front which travels much more slowly through the explosive material.

The properties of the explosive indicate the class into which it falls. In some cases explosives can be made to fall into either class by the conditions under which they are initiated. In sufficiently large quantities, almost all low explosives can undergo a Deflagration to Detonation Transition (DDT). For convenience, low and high explosives may be differentiated by their shipping and storage classes.

Explosive compatibility groupings

Shipping labels and tags will include the UN and national (e.g., USDOT) hazardous material Class with Compatibility Letter, as follows:

- 1.1 Mass explosion hazard
- 1.2 Non-mass explosion, fragment-producing
- 1.3 Mass fire, minor blast or fragment hazard
- 1.4 Moderate fire, no blast or fragment: a consumer firework is 1.4G or 1.4S
- 1.5 Explosive substance, very insensitive (with a mass explosion hazard)
- 1.6 Explosive article, extremely insensitive

A: Primary explosive substance (1.1A)
B: An article containing a primary explosive substance and not containing two or more effective protective features. Some articles, such as detonator assemblies for blasting and primers, cap-type, are included. (1.1B, 1.2B, 1.4B)
C: Propellant explosive substance or other deflagrating explosive substance or article containing such explosive substance (1.1C, 1.2C, 1.3C, 1.4C)
D: Secondary detonating explosive substance or black powder or article containing a secondary detonating explosive substance, in each case without means of initiation and without a propelling charge, or article containing a primary explosive substance and containing two or more effective protective features. (1.1D, 1.2D, 1.4D, 1.5D)
E: Article containing a secondary detonating explosive substance without means of initiation, with a propelling charge (other than one containing flammable liquid, gel or hypergolic liquid) (1.1E, 1.2E, 1.4E)
F: Article containing a secondary detonating explosive substance with its means of initiation, with a propelling charge (other than one containing flammable liquid, gel or hypergolic liquid) or without a propelling charge (1.1F, 1.2F, 1.3F, 1.4F)
G: Pyrotechnic substance or article containing a pyrotechnic substance, or article containing both an explosive substance and an illuminating, incendiary, tear-producing or smoke-producing substance (other than a water-activated article or one containing white phosphorus, phosphide or flammable liquid or gel or hypergolic liquid) (1.1G, 1.2G, 1.3G, 1.4G)
H: Article containing both an explosive substance and white phosphorus (1.2H, 1.3H)
J: Article containing both an explosive substance and flammable liquid or gel (1.1J, 1.2J, 1.3J)
K: Article containing both an explosive substance and a toxic chemical agent (1.2K, 1.3K)
L: Explosive substance or article containing an explosive substance and presenting a special risk (e.g., due to water-activation or presence of hypergolic liquids, phosphides or pyrophoric substances) needing isolation of each type (1.1L, 1.2L, 1.3L)
N: Articles containing only extremely insensitive detonating substances (1.6N)
S: Substance or article so packed or designed that any hazardous effects arising from accidental functioning are limited to the extent that they do not significantly hinder or prohibit fire fighting or other emergency response efforts in the immediate vicinity of the package (1.4S)

In addition to chemical explosives, there are varieties of more exotic explosive material, and theoretical methods of causing explosions. Examples include nuclear explosives and abruptly heating a substance with a high-intensity laser or electric arc.

A low explosive is usually a mixture of a combustible substance and an oxidant that decomposes rapidly (deflagration), unlike most high explosives, which are compounds. Under normal conditions, low explosives undergo deflagration at rates that vary from a few centimeters per second to approximately 400 metres per second. It is possible for them to deflagrate very quickly, producing an effect similar to a detonation. This usually occurs when ignited in a confined space. Low explosives are normally employed as propellants. Included in this group are gun powders, pyrotechnics and illumination devices such as flares.

High explosives are normally employed in mining, demolition, and military warheads. High explosive compounds detonate at rates ranging from 1,000 to 9,000 meters per second and are conventionally subdivided into two explosives classes, differentiated by sensitivity:

- Primary explosives are extremely sensitive to mechanical shock, friction, and heat, to which they will respond by burning rapidly or detonating.
- Secondary explosives, also called base explosives, are relatively insensitive to shock, friction, and heat. They may burn when exposed to heat or flame in small, unconfined quantities, but detonation can occur. These are sometimes added in small amounts to blasting caps to boost their power. Dynamite, TNT, RDX, PETN, HMX, and others are secondary explosives. PETN is the benchmark compound; compounds more sensitive than PETN are classed as primary explosives.

Some definitions add a third category:

- Tertiary explosives, or blasting agents, are so insensitive to shock that they cannot be reliably detonated with practical quantities of primary explosive and instead require an intermediate explosive booster of secondary explosive. Examples are ammonium nitrate/fuel oil mixture (ANFO) and slurry (wet bag) explosives, which are primarily used in large-scale mining and construction.

Note that many, if not most, explosive chemical compounds may usefully deflagrate as well as detonate, and are used in both high- and low-explosive compositions. Thus, under the correct conditions, a propellant (for example, nitrocellulose) might deflagrate if ignited, or might detonate if initiated with a detonator.

Detonation of an explosive charge

The explosive train, also called an initiation sequence or firing train, is the sequence of charges that progresses from relatively low levels of energy to initiate the final explosive material or main charge. There are low- and high-explosive trains. Low-explosive trains are as simple as a rifle cartridge, including a primer and a propellant charge. High-explosive trains can be more complex, either two-step (e.g., detonator and main charge) or three-step (e.g., detonator, booster of primary explosive, and main charge of secondary explosive). Detonators are often made from tetryl.

Composition of the material

An explosive may consist of either a chemically pure compound, such as nitroglycerin, or a mixture of an oxidizer and a fuel, such as black powder.

Mixtures of an oxidizer and a fuel

An oxidizer is a pure substance (molecule) that in a chemical reaction can contribute some atoms of one or more oxidizing elements, in which the fuel component of the explosive burns. On the simplest level, the oxidizer may itself be an oxidizing element, such as gaseous or liquid oxygen.

Chemically pure compounds

Some chemical compounds are unstable in that, when shocked, they react, possibly to the point of detonation. Each molecule of the compound dissociates into two or more new molecules (generally gases) with the release of energy.

- Nitroglycerin: a highly unstable and sensitive liquid.
- Acetone peroxide: a very unstable white organic peroxide.
- TNT: yellow insensitive crystals that can be melted and cast without detonation.
- Nitrocellulose: a nitrated polymer which can be a high or low explosive depending on nitration level and conditions.
- RDX, PETN, HMX: very powerful explosives which can be used pure or in plastic explosives.

The above compositions may describe the majority of the explosive material, but a practical explosive will often include small percentages of other materials. For example, dynamite is a mixture of highly sensitive nitroglycerin with sawdust, powdered silica, or most commonly diatomaceous earth, which act as stabilizers. Plastics and polymers may be added to bind powders of explosive compounds; waxes may be incorporated to make them safer to handle; aluminium powder may be introduced to increase total energy and blast effects.
Explosive compounds are also often "alloyed": HMX or RDX powders may be mixed (typically by melt-casting) with TNT to form Octol or Cyclotol.

Chemical explosive reaction

A chemical explosive is a compound or mixture which, upon the application of heat or shock, decomposes or rearranges with extreme rapidity, yielding much gas and heat. Many substances not ordinarily classed as explosives may do one, or even two, of these things. For example, at high temperatures (> 2000 °C) a mixture of nitrogen and oxygen can be made to react with great rapidity and yield the gaseous product nitric oxide; yet the mixture is not an explosive since it does not evolve heat, but rather absorbs heat:

N2 + O2 → 2NO − 43,200 calories (or 180 kJ) per mole of N2

For a chemical to be an explosive, it must exhibit all of the following:

- Rapid expansion (i.e., rapid production of gases or rapid heating of surroundings)
- Evolution of heat
- Rapidity of reaction
- Initiation of reaction

Evolution of heat

The generation of heat in large quantities accompanies every explosive chemical reaction. It is this rapid liberation of heat that causes the gaseous products of reaction to expand and generate high pressures. This rapid generation of high pressures of the released gas constitutes the explosion. It should be noted that the liberation of heat with insufficient rapidity will not cause an explosion. For example, although a pound of coal yields five times as much heat as a pound of nitroglycerin, the coal cannot be used as an explosive because the rate at which it yields this heat is quite slow.

Rapidity of reaction

Rapidity of reaction distinguishes the explosive reaction from an ordinary combustion reaction by the great speed with which it takes place. Unless the reaction occurs rapidly, the thermally expanded gases will be dissipated in the medium, and there will be no explosion. Again, consider a wood or coal fire. As the fire burns, there is the evolution of heat and the formation of gases, but neither is liberated rapidly enough to cause an explosion. This can be likened to the difference between the energy discharge of a battery, which is slow, and that of a flash capacitor like that in a camera flash, which releases its energy all at once.

Initiation of reaction

A reaction must be capable of being initiated by the application of shock or heat to a small portion of the mass of the explosive material. A material in which the first three factors exist cannot be accepted as an explosive unless the reaction can be made to occur when desired. A sensitiser is a powdered or fine particulate material that is sometimes used to create voids that aid in the initiation or propagation of the detonation wave. It may be as high-tech as glass beads or as simple as seeds.

To determine the suitability of an explosive substance for military use, its physical properties must first be investigated. The usefulness of a military explosive can only be appreciated when these properties and the factors affecting them are fully understood. Many explosives have been studied in past years to determine their suitability for military use, and most have been found wanting. Several of those found acceptable have displayed certain characteristics that are considered undesirable and, therefore, limit their usefulness in military applications. The requirements of a military explosive are stringent, and very few explosives display all of the characteristics necessary to make them acceptable for military standardization.
Some of the more important characteristics are discussed below.

Availability and cost

In view of the enormous quantity demands of modern warfare, explosives must be produced from cheap raw materials that are nonstrategic and available in great quantity. In addition, manufacturing operations must be reasonably simple, cheap, and safe.

Sensitivity

Regarding an explosive, sensitivity refers to the ease with which it can be ignited or detonated, i.e., the amount and intensity of shock, friction, or heat that is required. When the term sensitivity is used, care must be taken to clarify what kind of sensitivity is under discussion. The relative sensitivity of a given explosive to impact may vary greatly from its sensitivity to friction or heat. Some of the test methods used to determine sensitivity are as follows:

- Impact. Sensitivity is expressed in terms of the distance through which a standard weight must be dropped to cause the material to explode.
- Friction. Sensitivity is expressed in terms of what occurs when a weighted pendulum scrapes across the material (snaps, crackles, ignites, and/or explodes).
- Heat. Sensitivity is expressed in terms of the temperature at which flashing or explosion of the material occurs.

Sensitivity is an important consideration in selecting an explosive for a particular purpose. The explosive in an armor-piercing projectile must be relatively insensitive, or the shock of impact would cause it to detonate before it penetrated to the point desired. The explosive lenses around nuclear charges are also designed to be highly insensitive, to minimize the risk of accidental detonation.

Stability

Stability is the ability of an explosive to be stored without deterioration. The following factors affect the stability of an explosive:

- Chemical constitution. The very fact that some common chemical compounds can undergo explosion when heated indicates that there is something unstable in their structures. While no precise explanation has been developed for this, it is generally recognized that certain radical groups, nitro (–NO2), nitrate (–NO3), and azide (–N3), are intrinsically in a condition of internal strain. Increasing the strain by heating can cause a sudden disruption of the molecule and consequent explosion. In some cases, this condition of molecular instability is so great that decomposition takes place at ordinary temperatures.
- Temperature of storage. The rate of decomposition of explosives increases at higher temperatures. All of the standard military explosives may be considered to have a high degree of stability at temperatures of -10 to +35 °C, but each has a high temperature at which the rate of decomposition rapidly accelerates and stability is reduced. As a rule of thumb, most explosives become dangerously unstable at temperatures exceeding 70 °C.
- Exposure to the sun. If exposed to the ultraviolet rays of the sun, many explosive compounds that contain nitrogen groups will rapidly decompose, affecting their stability.
- Electrical discharge. Electrostatic or spark sensitivity to initiation is common to a number of explosives. Static or other electrical discharge may be sufficient to trigger detonation under some circumstances. As a result, the safe handling of explosives and pyrotechnics almost always requires electrical grounding of the operator.

Power

The term "power" (or more properly, performance) as applied to an explosive refers to its ability to do work.
In practice it is defined as the explosive's ability to accomplish what is intended in the way of energy delivery (i.e., fragment projection, air blast, high-velocity jets, underwater shock and bubble energy, etc.). Explosive power or performance is evaluated by a tailored series of tests to assess the material for its intended use. Of the tests listed below, cylinder expansion and air-blast tests are common to most testing programs, and the others support specific applications.

- Cylinder expansion test. A standard amount of explosive is loaded into a long hollow cylinder, usually of copper, and detonated at one end. Data is collected concerning the rate of radial expansion of the cylinder and the maximum cylinder wall velocity. This also establishes the Gurney energy or 2E.
- Cylinder fragmentation. A standard steel cylinder is loaded with explosive and detonated in a sawdust pit. The fragments are collected and the size distribution analyzed.
- Detonation pressure (Chapman-Jouguet condition). Detonation pressure data are derived from measurements of shock waves transmitted into water by the detonation of cylindrical explosive charges of a standard size.
- Determination of critical diameter. This test establishes the minimum physical size a charge of a specific explosive must be to sustain its own detonation wave. The procedure involves the detonation of a series of charges of different diameters until difficulty in detonation wave propagation is observed.
- Infinite-diameter detonation velocity. Detonation velocity is dependent on loading density (c), charge diameter, and grain size. The hydrodynamic theory of detonation used in predicting explosive phenomena does not include the diameter of the charge, and therefore predicts a detonation velocity for an imaginary charge of infinite diameter. This procedure requires a series of charges of the same density and physical structure, but different diameters, to be fired and the resulting detonation velocities extrapolated to predict the detonation velocity of a charge of infinite diameter.
- Pressure versus scaled distance. A charge of specific size is detonated and its pressure effects measured at a standard distance. The values obtained are compared with those for TNT.
- Impulse versus scaled distance. A charge of specific size is detonated and its impulse (the area under the pressure-time curve) measured versus distance. The results are tabulated and expressed in TNT equivalent.
- Relative bubble energy (RBE). A 5- to 50-kg charge is detonated in water, and piezoelectric gauges measure peak pressure, time constant, impulse, and energy. The RBE may be defined as RBE = (Kx / Ks)^3, where K is the bubble expansion period for the experimental (x) or standard (s) charge.

Brisance

In addition to strength, explosives display a second characteristic, which is their shattering effect or brisance (from the French meaning to "break"), which is distinguished from their total work capacity. An exploding propane tank may release more chemical energy than an ounce of nitroglycerin, but the tank would probably fragment into large pieces of twisted metal, while a metal casing around the nitroglycerin would be pulverized. This characteristic is of practical importance in determining the effectiveness of an explosion in fragmenting shells, bomb casings, grenades, and the like. The rapidity with which an explosive reaches its peak pressure is a measure of its brisance. Brisance values are primarily employed in France and Russia.
The sand crush test is commonly employed to determine the relative brisance in comparison to TNT. No test is capable of directly comparing the explosive properties of two or more compounds; it is important to examine the data from several such tests (sand crush, Trauzl, and so forth) in order to gauge relative brisance. True values for comparison require field experiments.

Density of loading

Density of loading refers to the mass of an explosive per unit volume. Several methods of loading are available, including pellet loading, cast loading, and press loading; the one used is determined by the characteristics of the explosive. Dependent upon the method employed, an average density of the loaded charge can be obtained that is within 80-99% of the theoretical maximum density of the explosive. High load density can reduce sensitivity by making the mass more resistant to internal friction. However, if density is increased to the extent that individual crystals are crushed, the explosive may become more sensitive. Increased load density also permits the use of more explosive, thereby increasing the power of the warhead. It is possible to compress an explosive beyond a point of sensitivity, known also as "dead-pressing," in which the material is no longer capable of being reliably initiated, if at all.

Volatility

Volatility, or the readiness with which a substance vaporizes, is an undesirable characteristic in military explosives. Explosives must be no more than slightly volatile at the temperature at which they are loaded or at their highest storage temperature. Excessive volatility often results in the development of pressure within rounds of ammunition and separation of mixtures into their constituents. Stability, as mentioned before, is the ability of an explosive to stand up under storage conditions without deteriorating. Volatility affects the chemical composition of the explosive such that a marked reduction in stability may occur, which results in an increase in the danger of handling. Maximum allowable volatility is 2 ml of gas evolved in 48 hours.

Hygroscopicity

The introduction of water into an explosive is highly undesirable since it reduces the sensitivity, strength, and velocity of detonation of the explosive. Hygroscopicity is used as a measure of a material's moisture-absorbing tendencies. Moisture affects explosives adversely by acting as an inert material that absorbs heat when vaporized, and by acting as a solvent medium that can cause undesired chemical reactions. Sensitivity, strength, and velocity of detonation are reduced by inert materials that reduce the continuity of the explosive mass. When the moisture content evaporates during detonation, cooling occurs, which reduces the temperature of reaction. Stability is also affected by the presence of moisture since moisture promotes decomposition of the explosive and, in addition, causes corrosion of the explosive's metal container. For all of these reasons, hygroscopicity must be negligible in military explosives.

Toxicity

Due to their chemical structure, most explosives are toxic to some extent. Since the toxic effect may vary from a mild headache to serious damage of internal organs, care must be taken to limit toxicity in military explosives to a minimum. Any explosive of high toxicity is unacceptable for military use. Explosive product gases can also be toxic.

Measurement of chemical explosive reaction

The development of new and improved types of ammunition requires a continuous program of research and development.
Adoption of an explosive for a particular use is based upon both proving ground and service tests. Before these tests, however, preliminary estimates of the characteristics of the explosive are made. The principles of thermochemistry are applied for this process.

Thermochemistry is concerned with the changes in internal energy, principally as heat, in chemical reactions. An explosion consists of a series of reactions, highly exothermic, involving decomposition of the ingredients and recombination to form the products of explosion. Energy changes in explosive reactions are calculated either from known chemical laws or by analysis of the products. For most common reactions, tables based on previous investigations permit rapid calculation of energy changes. Products of an explosive remaining in a closed calorimetric bomb (a constant-volume explosion) after cooling the bomb back to room temperature and pressure are rarely those present at the instant of maximum temperature and pressure. Since only the final products may be analyzed conveniently, indirect or theoretical methods are often used to determine the maximum temperature and pressure values.

Some of the important characteristics of an explosive that can be determined by such theoretical computations are:

- Oxygen balance
- Heat of explosion or reaction
- Volume of products of explosion
- Potential of the explosive

Oxygen balance (OB%) is an expression that is used to indicate the degree to which an explosive can be oxidized. If an explosive molecule contains just enough oxygen to convert all of its carbon to carbon dioxide, all of its hydrogen to water, and all of its metal to metal oxide with no excess, the molecule is said to have a zero oxygen balance. The molecule is said to have a positive oxygen balance if it contains more oxygen than is needed and a negative oxygen balance if it contains less oxygen than is needed. The sensitivity, strength, and brisance of an explosive are all somewhat dependent upon oxygen balance and tend to approach their maximums as oxygen balance approaches zero.

Heat of explosion

When a chemical compound is formed from its constituents, heat may either be absorbed or released. The quantity of heat absorbed or given off during transformation is called the heat of formation. Heats of formation for solids and gases found in explosive reactions have been determined for a temperature of 15 °C and atmospheric pressure, and are normally given in units of kilocalories per gram-molecule (see table 12-1). A negative value indicates that heat is absorbed during the formation of the compound from its elements; such a reaction is called an endothermic reaction. The arbitrary convention usually employed in simple thermochemical calculations is to take the heat contents of all elements as zero in their standard states at all temperatures (standard state being defined as natural or ambient conditions). Since the heat of formation of a compound is the net difference between the heat content of the compound and that of its elements, and since the latter are taken as zero by convention, it follows that the heat content of a compound is equal to its heat of formation in such non-rigorous calculations. This leads to the principle of initial and final state, which may be expressed as follows:
"The net quantity of heat liberated or absorbed in any chemical modification of a system depends solely upon the initial and final states of the system, provided the transformation takes place at constant volume or at constant pressure. It is completely independent of the intermediate transformations and of the time required for the reactions."

From this it follows that the heat liberated in any transformation accomplished through successive reactions is the algebraic sum of the heats liberated or absorbed in the several reactions. Consider the formation of the original explosive from its elements as an intermediate reaction in the formation of the products of explosion. The net amount of heat liberated during an explosion is the sum of the heats of formation of the products of explosion, minus the heat of formation of the original explosive. The net difference between the heats of formation of the reactants and products in a chemical reaction is termed the heat of reaction. For oxidation, this heat of reaction may be termed the heat of combustion. In explosive technology only materials that are exothermic (that is, materials whose heat of reaction causes a net liberation of heat) are of interest. Hence, in this context, virtually all heats of reaction are positive. Reaction heat is measured under conditions either of constant pressure or constant volume. It is this heat of reaction that may be properly expressed as the "heat of explosion."

Balancing chemical explosion equations

In order to assist in balancing chemical equations, an order of priorities is presented in table 12-1. Explosives containing C, H, O, and N and/or a metal will form the products of reaction in the priority sequence shown. Some observations you might want to make as you balance an equation:

- The progression is from top to bottom; you may skip steps that are not applicable, but you never back up.
- At each separate step there are never more than two compositions and two products.
- At the conclusion of the balancing, elemental nitrogen, oxygen, and hydrogen are always found in diatomic form.

Table 12-1. Order of Priorities (composition of explosive → product of decomposition, phase of product)

1. A metal and chlorine → metallic chloride (solid)
2. Hydrogen and chlorine → HCl (gas)
3. A metal and oxygen → metallic oxide (solid)
4. Carbon and oxygen → CO (gas)
5. Hydrogen and oxygen → H2O (gas)
6. Carbon monoxide and oxygen → CO2 (gas)
7. Nitrogen → N2 (gas)
8. Excess oxygen → O2 (gas)
9. Excess hydrogen → H2 (gas)
10. Excess carbon → C (solid)

Example: TNT, C6H2(NO2)3CH3; constituents: 7C + 5H + 3N + 6O.

Using the order of priorities in table 12-1, priority 4 gives the first reaction products:

7C + 6O → 6CO, with one mole of carbon remaining

Next, since all the oxygen has been combined with the carbon to form CO, priority 7 results in:

3N → 1.5N2

Finally, priority 9 results in:

5H → 2.5H2

The balanced equation, showing the products of reaction resulting from the detonation of TNT, is:

C6H2(NO2)3CH3 → 6CO + 2.5H2 + 1.5N2 + C

Notice that partial moles are permitted in these calculations. The number of moles of gas formed is 10. The product carbon is a solid.
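The priority scheme in Table 12-1 is mechanical enough to automate. The following Python sketch is an illustration of that scheme, not part of the source text: it covers only C/H/N/O explosives (no metals or chlorine), allows fractional coefficients exactly as the text does, and the function name is my own. It reproduces the TNT result just derived, as well as the PETN decomposition worked out later in this section.

```python
# Illustrative sketch: predict decomposition products of a C/H/N/O explosive
# by walking the Table 12-1 priorities in order (metal and chlorine steps omitted).

def decomposition_products(c, h, n, o):
    # Priority 4: carbon + oxygen -> CO
    co = min(c, o)
    c, o = c - co, o - co
    # Priority 5: hydrogen + oxygen -> H2O (2 H and 1 O per molecule)
    h2o = min(h / 2, o)
    h, o = h - 2 * h2o, o - h2o
    # Priority 6: CO + oxygen -> CO2
    co2 = min(co, o)
    co, o = co - co2, o - co2
    # Priorities 7-10: N -> N2, excess O -> O2, excess H -> H2, excess C -> solid C
    products = {"CO": co, "H2O": h2o, "CO2": co2,
                "N2": n / 2, "O2": o / 2, "H2": h / 2, "C": c}
    return {k: v for k, v in products.items() if v > 0}

# TNT, C6H2(NO2)3CH3: 7 C, 5 H, 3 N, 6 O
print(decomposition_products(7, 5, 3, 6))    # 6 CO + 2.5 H2 + 1.5 N2 + C, as above

# PETN, C(CH2ONO2)4: 5 C, 8 H, 4 N, 12 O
print(decomposition_products(5, 8, 4, 12))   # 2 CO + 4 H2O + 3 CO2 + 2 N2
```

Because each step only consumes what earlier steps left behind, the code follows the same top-to-bottom, never-back-up rule stated in the observations above.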
Since a molar volume is the volume of one mole of gas, one mole of nitroglycerin produces 3 + 2.5 + 1.5 + 0.25 = 7.25 molar volumes of gas; and these molar volumes at 0 °C and atmospheric pressure form an actual volume of 7.25 × 22.4 = 162.4 liters of gas. Based upon this simple beginning, it can be seen that the volume of the products of explosion can be predicted for any quantity of the explosive. Further, by employing Charles' law for perfect gases, the volume of the products of explosion may also be calculated for any given temperature. This law states that at constant pressure a perfect gas expands by 1/273.15 of its volume at 0 °C for each degree Celsius of rise in temperature. Therefore, at 15 °C (288.15 kelvin) the molar volume of an ideal gas is
- V15 = 22.414 × (288.15/273.15) = 23.64 liters per mole
Thus, at 15 °C the volume of gas produced by the explosive decomposition of one mole of nitroglycerin becomes
- V = (23.64 l/mol)(7.25 mol) = 171.4 l
The potential of an explosive is the total work that can be performed by the gas resulting from its explosion, when expanded adiabatically from its original volume until its pressure is reduced to atmospheric pressure and its temperature to 15 °C. The potential is therefore the total quantity of heat given off at constant volume, expressed in equivalent work units, and is a measure of the strength of the explosive.
Example of thermochemical calculations
The PETN reaction will be examined as an example of thermochemical calculations.
- PETN: C(CH2ONO2)4
- Molecular weight = 316.15 g/mol
- Heat of formation = 119.4 kcal/mol
(1) Balance the chemical reaction equation. Using table 12-1, priority 4 gives the first reaction products:
- 5C + 12O → 5CO + 7O
Next, the hydrogen combines with the remaining oxygen:
- 8H + 7O → 4H2O + 3O
Then the remaining oxygen combines with the CO to form CO and CO2:
- 5CO + 3O → 2CO + 3CO2
Finally, the remaining nitrogen forms in its natural state (N2):
- 4N → 2N2
The balanced reaction equation is:
- C(CH2ONO2)4 → 2CO + 4H2O + 3CO2 + 2N2
(2) Determine the number of molar volumes of gas per mole. Since the molar volume of one gas is equal to the molar volume of any other gas, and since all the products of the PETN reaction are gaseous, the resulting number of molar volumes of gas (Nm) is:
- Nm = 2 + 4 + 3 + 2 = 11 molar volumes per mole
(3) Determine the potential (capacity for doing work). If the total heat liberated by an explosive under constant-volume conditions (Qmv) is converted to the equivalent work units, the result is the potential of that explosive. The heat liberated at constant volume (Qmv) is equivalent to that liberated at constant pressure (Qmp) plus the heat converted to work in expanding the surrounding medium. Hence, Qmv = Qmp + work (converted).
- a. Qmp = Qfi (products) − Qfk (reactants)
- where: Qf = heat of formation (see table 12-1)
- For the PETN reaction:
- Qmp = 2(26.343) + 4(57.81) + 3(94.39) − (119.4) = 447.87 kcal/mol
- (If the compound produced a metallic oxide, that heat of formation would be included in Qmp.)
- b. Work = 0.572 × Nm = 0.572(11) = 6.292 kcal/mol
- As previously stated, Qmv converted to equivalent work units is taken as the potential of the explosive.
- c. Potential = Qmv × (4.185 × 10^6) / MW = 454.16 × (4.185 × 10^6) / 316.15 = 6.01 × 10^6 J/kg (with Qmv in kcal/mol and MW in g/mol)
- This product may then be used to find the relative strength (RS) of PETN, which is
- d. RS = Pot (PETN) / Pot (TNT) = (6.01 × 10^6) / (2.72 × 10^6) = 2.21
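The arithmetic in steps (2) and (3) is mechanical enough to script. Below is a minimal Python sketch that reproduces the PETN potential calculation; the heats of formation, the 0.572 kcal-per-molar-volume work term, and the 2.72 × 10^6 J/kg TNT potential are the figures quoted in the worked example above, the kcal-to-joule conversion is the standard one, and the function and variable names are illustrative rather than anything from the source text.

```python
# Minimal sketch of the potential calculation for PETN, following steps (2)-(3) above.
# All explosive-specific constants are the ones quoted in the worked example.

KCAL_TO_J = 4185.0          # joules per kilocalorie (standard conversion)
WORK_PER_MOL_GAS = 0.572    # kcal of expansion work per molar volume of gas (step 3b)

def explosive_potential(qf_products_kcal, qf_reactant_kcal, moles_of_gas, mol_weight_g):
    """Return (Qmv in kcal/mol, potential in J/kg)."""
    qmp = sum(qf_products_kcal) - qf_reactant_kcal                # step 3a
    work = WORK_PER_MOL_GAS * moles_of_gas                        # step 3b
    qmv = qmp + work
    potential_j_per_kg = qmv * KCAL_TO_J * 1000.0 / mol_weight_g  # kcal/mol -> J/kg
    return qmv, potential_j_per_kg

# PETN: C(CH2ONO2)4 -> 2CO + 4H2O + 3CO2 + 2N2
qf_products = [2 * 26.343, 4 * 57.81, 3 * 94.39]   # CO, H2O, CO2 heats of formation (kcal/mol)
qmv, potential = explosive_potential(qf_products, 119.4, moles_of_gas=11, mol_weight_g=316.15)

print(f"Qmv ~ {qmv:.1f} kcal/mol")                 # ~454 kcal/mol
print(f"Potential ~ {potential:.2e} J/kg")         # ~6.0e6 J/kg
print(f"RS vs TNT ~ {potential / 2.72e6:.2f}")     # ~2.2
```

The small difference from the quoted 447.87 kcal/mol comes from rounding in the tabulated heats of formation; reproducing the TNT denominator the same way would require TNT's heats of formation, which this excerpt does not list.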
References
- Army Research Office. Elements of Armament Engineering (Part One). Washington, D.C.: U.S. Army Materiel Command, 1964.
- Commander, Naval Ordnance Systems Command. Safety and Performance Tests for Qualification of Explosives. NAVORD OD 44811. Washington, D.C.: GPO, 1972.
- Commander, Naval Ordnance Systems Command. Weapons Systems Fundamentals. NAVORD OP 3000, vol. 2, 1st rev. Washington, D.C.: GPO, 1971.
- Departments of the Army and Air Force. Military Explosives. Washington, D.C.: 1967.
- USDOT Hazardous Materials Transportation Placards
- Swiss Agency for the Environment, Forests and Landscape. "Occurrence and relevance of organic pollutants in compost, digestate and organic residues", Research for Agriculture and Nature, 8 November 2004, pp. 52, 91, 182.
Common Warning Signs of Dysgraphia in Children in Grades 9–12 Has your teenager always struggled with written expression? Is his or her written work messy, disorganized and incomplete? If the answer is “yes”, review the following list of common warning signs of dysgraphia in high school students. Dysgraphia is a learning disability (LD) that affects writing, which requires a complex set of motor and information-processing skills. Most people struggle with learning at times, but learning disabilities are different – they may affect performance differently throughout a person’s school years and beyond, but what they share in common is that they persist over time. Dysgraphia is no different. If your child has displayed any of the signs below for at least the past six months, it may be time to seek help from the school or other professionals. Be sure to think back about writing-related challenges your child may have had in preschool and elementary school and share that information (and even work samples if available) when you reach out for help. Also, be aware that some of the signs listed below also apply to other types of learning disabilities and/or to Attention Deficit/Hyperactivity Disorder (ADHD), which often co-exist. You may want to review out more comprehensive Interactive Learning Disabilities Checklist to clarify your concerns. For At Least the Past Six Months, My Child Has Had Trouble - Gripping a pencil comfortably when writing or drawing. - Writing neatly, evenly and legibly. - Writing on a line or within margins. - Copying letters and numbers neatly and accurately. - Spelling even familiar words correctly. - Using correct syntax structure and grammar. - Expressing written ideas in an organized way. - Preparing outlines and organizing written work. - Turning ideas spoken aloud into a written format. - Thinking of words to write and then remembering to write them down. - Focusing on the meaning of what he writes; (because of the physical demands during writing) - Maintaining energy and easy posture when writing/drawing. - Aligning numbers correctly when doing math problems. - Feeling motivated and confident about writing. - Taking pride in written work. - Responding appropriately to teasing or criticism by peers and adults who don't understand “messy, incomplete and disorganized” writing. Don't hesitate to seek help if your teenager displays several of these warning signs. Print out this article, check off the items that apply to your child, and take the list to the educators or other professionals who you seek advice from about your child. The good news is that with proper identification and support, your teenager will be better able to succeed in school, the workplace and in life. - Interactive Learning Disabilities Checklist - Common Warning Signs of Dyslexia in Children in Grades 9–12 - Common Warning Signs of Dyscalculia in Children in Grades 9–12 - Common Warning Signs of Dyspraxia in Children in Grades 9–12
Skip to 0 minutes and 8 secondsPROFESSOR ROUMYANA SLABAKOVA: How are meanings acquired? When learning a second language, it's actually four separate acquisition tasks that we are looking at. First, we'll need to learn the lexical items. You cannot speak a language without its words. Now, learning the grammatical endings is qualitatively different. Once a person learns that the -ed ending of the verb means past tense, the learner knows it as a rule, and she can apply it to all the regular verbs. Skip to 0 minutes and 48 secondsThe rules of putting together sentences to create messages are essentially the same for all languages. Although lexical items take time to learn and have to be memorised one by one, grammatical word endings are actually the hardest to learn. They may be repeated in sentence after sentence, but they carry a lot of linguistic information. I have proposed the bottleneck hypothesis to explain what is hard and what is easy in second language learning. This picture illustrates the bottleneck hypothesis. In the picture, you see two bottles. One bottle is supposed to illustrate your native grammar, the one on the left. Skip to 1 minute and 43 secondsWhen you try to use the same grammar, and the other bits and pieces of rules that you have learned in the second language, then you go to the bottle on the right and you try to use this knowledge, that is, spill some of the beads in the cup. And you see that they cannot come out as fast as they can. There is a bottleneck. This picture illustrates that even if you have a lot of knowledge of the second language, the tight place through which it all comes pouring out are the little words and the word endings with grammatical meaning. We call these parts of words grammatical or functional morphemes. Without those morphemes, sentences do not work. Skip to 2 minutes and 33 secondsYou may not learn all of this information at the same time. It may be coming one bit after another. But you need all of that information in order to be able to produce and to understand a good, acceptable English sentence. How do we acquire meaning: the bottleneck hypothesis In this video, Roumyana Slabakova begins to consider what is hard and what is easy in learning a second language. She starts out by considering the more problematic aspects of language learning and she suggests her own hypothesis - ‘the bottleneck hypothesis’ for helping us to understand how humans acquire meaning in language. What do you think about this hypothesis? Does it seem to fit your own experience of language learning? Remember to read and reply to other learners’ comments. Do you agree with the other opinions expressed? Roumyana has published a book about her hypothesis this year. It is called Second Language Acquisition and is published by Oxford University Press as part of their Oxford Core Linguistics series. This link takes you to the book on Amazon.co.uk. © University of Southampton / British Council 2015
This illustration from Andreas Vesalius' De humani corporis fabrica shows the female vagina appearing remarkably like an inverted penis.
Illustration of the female reproductive tract from Leonardo da Vinci's notebooks. The uterine horns are a prominent feature.
The female reproductive system was something of a mystery for several centuries, and was described in a variety of ways throughout antiquity and the early modern period. The 2nd-century Roman physician Galen saw the two sexes as complementary to each other, and described the female genitalia as an inverse of the male: the uterus was essentially an internal scrotum, and the ovaries were testes. Others saw the uterus as distinctively female, sometimes as the site of noxious substances that could not possibly have a male counterpart. How the female reproductive system functioned was also a matter of some debate. The most infamous example of this is perhaps the idea of the "wandering uterus," which has its origins in ancient Greece. Hippocrates characterized the uterus as an independent entity that wandered throughout the female body, bumping up against other organs and causing various medical problems. This unfortunate state could be managed through marriage (frequent intercourse and childbearing would keep the womb stable) or through treatments such as fumigation and irrigation if the womb was already roaming freely. The belief that the uterus was responsible for a variety of illnesses, known collectively as "hysteria," persisted until the early 20th century.
American Standard Code for Information Interchange
ASCII is the 7-bit computer code that specifies the characters of the alphabet and the basic punctuation we see on the screen. Generally speaking, ASCII files are considered to be relatively safe, but...
"Basic ASCII allows only 7 bits per character (128 characters), and the first 32 characters are "unprintable" (they issue commands such as Line Feed, Form Feed and Bell). Generally, ASCII files are text files. However, with a little effort, it is possible to write programs that consist only of printable characters (see EICAR). Also, Windows batch (BAT) files and Visual Basic Script (see VBS) files are typically pure text, and yet are programs. So, it is possible for ASCII files to contain program code, and thus to contain viruses. When sending out emails, especially those intended for a wide audience, using simple ASCII text to get your message across is the best choice. A pure-text email lets you control both content and layout exactly, and ensures that your mail will be legible by users of even the most old-fashioned email programs." (from Sophos' V-Files)
ASCII files are sometimes called 'text' files, and so they are. However, when we talk of a text file (.TXT) we specifically mean a file containing only ASCII characters, and specifically not containing any formatting or instruction codes. HTML is not such a file: it uses only ASCII characters, but it contains formatting and instructions. The instructions in particular can pose a security threat (for example, sending the browser to a different location and downloading malicious code). So if we wish to be safe, we should specify 'text only' rather than just 'ASCII'.
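To make the distinction concrete, here is a small Python sketch, not taken from the Sophos text, that classifies a chunk of bytes against the standard 7-bit ASCII definition: pure printable text, ASCII that still contains control characters, or not ASCII at all. The byte ranges are the standard ASCII ones; the function name is just illustrative.

```python
# Minimal sketch: classify bytes against the 7-bit ASCII definition.
# "Printable" here means byte values 32-126 plus the common whitespace
# controls (tab, line feed, carriage return).

def classify_bytes(data: bytes) -> str:
    allowed_controls = {9, 10, 13}          # tab, LF, CR
    if any(b > 127 for b in data):
        return "not ASCII (8-bit or multi-byte characters present)"
    if any((b < 32 and b not in allowed_controls) or b == 127 for b in data):
        return "ASCII, but contains control characters"
    return "plain printable ASCII text"

if __name__ == "__main__":
    print(classify_bytes(b"Hello, world!\n"))        # plain printable ASCII text
    print(classify_bytes(b"\x07ring the Bell"))      # ASCII, but contains control characters
    print(classify_bytes("naïve".encode("utf-8")))   # not ASCII
```

Note that this only inspects byte values; as the article points out, a file can be pure printable ASCII (a BAT or VBS script, for instance) and still be executable code.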
The famous American linguist Noam Chomsky is the chief advocate of the concepts of competence and performance. There is a kind of similarity between Saussure's concepts of langue and parole and Chomsky's concepts of competence and performance. Chomsky first used these terms to refer specifically to a person's intuitive knowledge of the rules and structure of his language as a native speaker (which he called competence) and his actual use of these (which he termed performance). Scholars of the earlier period were aware of this basic distinction, but Chomsky precisely pointed out the inherent ability or knowledge that a native speaker has of the structure of his language.
According to Chomsky, competence is the native speaker's knowledge of his language. It includes his knowledge of the rules of his language and his ability to understand and produce a large number of sentences. Competence is the study of the system of rules, while performance is the study of actual sentences and of the actual use of the language in real-life situations. In this way, the speaker's knowledge of the structure of a language is his linguistic competence, and the way in which he uses it is his linguistic performance.
Competence is a set of principles which a speaker masters; it is a kind of code. The speaker understands the language with the help of the rules of grammar, and the rules of grammar give him an idea of the structure of sentences. Competence is a capacity or ability to understand as well as produce many sentences; it brings out the speaker's creative power. "In this way, the abstract internal grammar, which enables a speaker to utter and understand an infinite number of potential utterances, is a speaker's competence." This competence is free from the interference of memory span, characteristic errors, lapses of attention, etc. The speaker has represented in his brain a grammar that gives an ideal account of the structure of the sentences of his language, but when actually faced with the task of speaking or understanding, many other factors act upon his underlying linguistic competence to produce actual performance. He may be confused, have several things in mind, or change his plans in mid-stream. This is obviously the condition of most actual linguistic performance.
On the other hand, performance is what a speaker does. Competence is a kind of code; performance is an act of decoding. It has a direct relation with competence, and by the study of performance one can learn about competence. Though this distinction has aroused a lot of argument in present-day linguistics, modern linguists remark that these divisions are useful if they are not carried to extremes. In an ideal situation, the two approaches should be complementary to each other, because a speaker's competence can be studied through the study of his performance. In this way, Saussure's concepts of langue and parole lay stress on the sociological implications of langue, while Chomsky lays stress on the psychological implications of competence. These distinctions are also parallel to the distinction between code and message in communication engineering: a code is the pre-arranged signalling system, and a message is the actual message sent using that system.
There are two main types of diabetes: type 1 diabetes and type 2 diabetes. Gestational diabetes is another type of diabetes that occurs during pregnancy. Pre-diabetes is the condition that develops prior to full diabetes. Type 1 diabetes - Type 1 diabetes usually first manifests in children or young adults but it can occur at any age. - The pancreas stops producing insulin and therefore the body’s cells are unable to turn glucose into energy. - Insulin injections are needed multiple times a day to stay alive. - Approximately 10-15% of all people with diabetes have type 1 diabetes. - The exact cause is not known, however it is thought to be associated with a viral trigger in people with a genetic predisposition to the disease. - Type 1 diabetes is also known as juvenile diabetes and insulin dependent diabetes. Type 2 diabetes - Type 2 diabetes is the most common type of diabetes occurring in 85-90% of people with diabetes. - It usually occurs in people aged over 50 years of age although it is now being seen increasingly in younger people and even children. - The pancreas does not produce enough insulin, or the insulin that it does produce is not working properly. - Type 2 diabetes can be managed with lifestyle changes such as improved diet and increased exercise. - Other management strategies include oral glucose-lowering drugs and insulin injections. - Gestational diabetes occurs during pregnancy and usually disappears once the baby is born. - It occurs in women who have no history of diabetes and whose blood sugar levels increase during pregnancy for the first time. - Gestational diabetes can recur in subsequent pregnancies. - Women who have gestational diabetes are at increased risk of developing type 2 diabetes later in life. - Management includes changes to diet and exercise, and possibly medication including insulin injections. Pre-diabetes is when a person has higher glucose levels than normal, but not high enough to be diagnosed with diabetes. There are two conditions classified as pre-diabetes: - Impaired Fasting Glucose (IFG) – this is when blood glucose levels are higher than normal following an 8 hour fast and a blood test, but are not high enough to be diabetes. - Impaired Glucose Tolerance (IGT) – this is when blood glucose levels are detected as higher than normal using an oral glucose tolerance test and a blood test 2 hours later, but are not high enough to be diabetes. - World Health Organization 1999. Definition, Diagnosis and Classification of Diabetes Mellitus and its Complications: Report of a WHO Consultation. Part 1: Diagnosis and Classification of Diabetes Mellitus. Geneva, World Health Org. - Rizwana Kousar 2010. What is Diabetes? Community Education Series, Melbourne. © Australian Community Centre for Diabetes (ACCD). - Australian Institute of Health and Welfare 2008. Diabetes: Australian facts 2008, Diabetes series no. 8. Cat. no. CVD 40. Canberra: AIHW. - Diabetes Australia, 2010.
The planar transistor was invented by Jean Hoerni in 1959. The design of the planar transistor improved on earlier designs by making transistors cheaper to make, mass-producible, and better at amplifying electrical input. The planar transistor is built in layers and can have all of its connections in the same plane.
The first layer in a planar transistor is a base of semiconductor material. Many impurities are added to this base to make it a better conductor. A second layer of semiconductor, with fewer impurities, is then put on top of the base. After the second layer is in place, the center of it is etched out, leaving thick edges of the second material around the sides and a thin layer above the base, in the shape of a square bowl. A section of material of the opposite polarity to the initial two layers is then placed in the bowl. Once again, the center of this layer is etched away, forming a smaller bowl. A material similar to the first layer of the planar transistor is then added. The second, third and fourth layers are all made flush with the top of the transistor.
The positive and negative components of the planar semiconductor are accessed on the same plane of the device. Metal connectors can be attached to the transistor after the components are in place, allowing the device to receive and emit electricity. The transistor receives input from the first layer and emits output from the fourth. The third layer is used to run a charge into the transistor so that it can amplify input.
Though the design of the device is a bit more complicated than that of earlier transistors, many planar transistors can be made at the same time. This decreases the amount of time and, subsequently, money needed to produce transistors and has helped pave the way for more affordable electronics. These types of transistors can also boost input to higher levels than earlier models of transistors.
In earlier transistors, the oxide layer that naturally forms on the surface of the semiconductor was removed from the transistor to prevent contamination. This meant that the delicate junctions between the positive and negative sections of the transistor had to be exposed. Constructing the transistor in layers, as Hoerni's design called for, incorporated the oxide layer as a protective feature for the junctions.
Sediment contamination is a major environmental issue due to its potential toxic effects on biotic resources and human health. A large variety of contaminants from industrial, urban, and marine activities are associated with sediment contamination. These include heavy metals, persistent organic pollutants, and radionuclides, among others. Contaminants such as heavy metals and persistent organic pollutants accumulate over time, forming secondary reservoirs. These contaminants can:
- Be released to water, thus migrating to other sediments or being absorbed by biota
- Accumulate in aquatic organisms and move up the food chain to fish and eventually humans.
High human pressure on water systems calls for increased dredging in:
- Maintenance works (depth for shipping and drainage)
- Construction works (flood defence, recreation, harbour enlargement)
- Supply of construction material (sand, gravels)
- Remediation works (hot spots)
This can potentially redistribute the contaminants to water, landfills, and construction materials. European Union legislation of relevance for sediment management includes:
- The European Landfill Directive
- European waste legislation
- The Water Framework Directive 2000/60/EC and its amendments
Several national and international conventions deal with the quality of sediments and dredged materials:
- The Helsinki Convention on the Protection of the Marine Environment of the Baltic Sea Area: HELCOM recommendation on Disposal of dredged spoils (1992).
- The London Convention on the Prevention of Marine Pollution by Dumping of Wastes and Other Matter: Dredged Material Assessment Framework (2000).
- The Convention on the Protection of the Marine Environment of the North-East Atlantic (OSPAR): Revised guidelines for the management of dredged material (2004).
Due to the implementation of international conventions and EU Directives, the different national authorities have developed specific dredged material guidelines and/or specific legislation on sediments. ALS Environmental has developed numerous sediment programs for the assessment of contaminated sediments, which fulfil the most stringent classification of materials. Our portfolio for sediment contains, but is not limited to, the following compounds/tests:
- Dry matter
- Basic granulometry
- Full grain size
- Heavy metals (mercury, lead, nickel, zinc, copper, cadmium, etc.)
- Metal speciation (methylmercury, arsenic compounds, selenium compounds)
- Tributyltin (TBT) and other organotin compounds
- Total petroleum hydrocarbons (TPH)
- Radionuclides, including alpha and beta activity
- Octylphenols, nonylphenols and their ethoxylates
- PCBs, PAH, hexachlorobenzene
- DDT and isomers, lindane, chlorobenzenes
- Polybrominated diphenyl ethers
- Dioxins and furans
- Ecotoxicological tests on aquatic organisms (plants and animals)
- Microtox (Vibrio fischeri)
Project-specific quotations are strongly recommended for sediment work. These quotations will clearly specify limits of quantification, compound configurations, sample containers and sample volumes, and provide details on logistics. For further details contact your local ALS representative.
Speech Language Disorder: Interventions and Strategies for Stuttering
Stuttering is a speech-language disorder that causes disfluency, that is, interruptions of speech that may be either normal or abnormal for the speaker (Guitar, 2006). The causes of stuttering are unknown, but theories exist that link stuttering to genetic, epidemiological, and environmental factors. Several studies have shown that approximately 68% of children who stutter have extended family members who stutter, 39% have relatives who stutter in their immediate family, and 27% have stuttering parents (Ratner and Healey, 1999). Neurologically, according to Guitar (2006), scientists believe that the neural connections for talking may be underdeveloped or disturbed by an excess of emotional activity in the brain. Environmental triggers like stress can activate the onset of stuttering during early childhood. Some incidences of stress-induced stuttering may occur soon after a sibling is born, during a move to a new neighborhood or school, or after criticism of speech at school (Guitar, 2006). This speech disfluency affects people of all ages, creeds, cultures, and races in all parts of the world.
According to Guitar (2006), stuttering has five distinct stages, each with unique characteristics. The stages of stuttering disfluency are normal, borderline, beginning, intermediate, and advanced (www.coloradospeechinstitute.com). According to the Colorado Speech Institute, normal and borderline stuttering do not involve any tension or adverse feelings or attitudes because the disfluencies may go unnoticed. Normal disfluency is characterized by disfluencies that occur fewer than ten times per one hundred words and consist of multisyllabic and phrase repetitions, revisions, and interjections. When repetitions of words are present, the words are slow and even, and two or fewer repetitions occur per instance (www.coloradostutteringtherapy.com). Borderline stuttering has two or more disfluencies per one hundred words, and more than two part-word and/or single-syllable whole-word repetitions are exhibited as the stutterer speaks (www.coloradospeechinstitute.com).
Beginning through advanced stuttering exhibit tension and adverse feelings and attitudes, because awareness of the disfluency is conspicuous and the speaker begins to feel frustration, shame, embarrassment, or fear (Guitar, 2006). Guitar (2006) stated that beginning stuttering presents the emergence of prolongations, and the repetitions are fast and abrupt, with a noticeably louder pitch. The child may display facial tension and difficulty initiating airflow (www.coloradostutteringtherapy.com). Intermediate stuttering has all of the above characteristics in addition to avoidance behavior and periods of blocks, that is, inappropriate stopping of sound and air due to the immobility of the tongue, lips and/or vocal folds (Guitar, 2006). In the last stage, advanced stuttering, the speaker has all of the above characteristics in addition to tremors; the stutterer is fourteen years old or older and needs an adult-centered treatment program (www.coloradospeechinstitute.com). According to Guitar (2006), the beginning, intermediate, and advanced stages of stuttering have a greater effect on the child's cognitive, behavioral, physical, emotional, and social development, as well as on the ability to learn and interact with others.
Then normal and borderline stages do not have major impact on such areas because the disfluency may go unnoticed until it advances to the next level where prolongation and tremors begin. Stuttering puts a developmentally strain on the child because it places an intense speech and language demands on the immature central nervous system (Guitar, 2006). According to Guitar, the brain is much like a computer in its ability to multitask simultaneously, but too many demands can cause it to perform slower and inefficiently, which happens to the developing brain of a child (2006). When the processing capacity of the brain is compromised, the language development of the child becomes more developed than the speech motor control skills, and it gives the child much to say but a limited capacity to say it (Guitar, 2006). Stutterers usually compare themselves to others and formulate “self-conscious” emotions of pride, shame, guilt, fear, and embarrassment from what they observe (Guitar, 2006). According to the National Institute for Deaf and Other Communication Disorders (NIDCD), children who do not naturally recover from stuttering develop maladaptive responses and self-regulatory skills like tension, escape, and avoidance responses, sucking thumb, nonverbal communication, withdrawal, and becoming introverted (www.nidcd.nih.gov). Stuttering takes a toll on the child’s social, emotional, and behavioral development. As the stuttering progresses, a self awareness develops into embarrassment, jealousy of peers, and other difficult emotions. Rather than feeling secured and confident, a child who stutters sense of security is threatened and is very self-aware of the disfluencies. This may lead to self-corrections, which only worsen the problem (Guitar, 1999). The social-emotional traits, according to Guitar, from this fearfulness lead to withdrawal and sensitive temperament (1999). There have been proven research based classroom, behavioral, and instructional management strategy that have positive results. The strategies and intervention that are proven are (1) The Lidcombe Therapy, (2) Self Modeling Strategy, (3) Stress Reduction, (4) and assistive technology. The Lidcombe Therapy focuses on the behavioral management of students who stutter (Miller & Guitar 2009). The self modeling strategy can be utilized within the classroom instructional setting. Stress reduction is another behavioral and instructional intervention. The use of assistive technology is beneficial on an instructional and behavioral management level. Within each intervention, special and general education teachers and related services personnel in addition to parents can utilize the strategies to improve and manage the severity of stuttering. The Lidcombe Program is a parent-driven operant conditioning based behavioral treatment for early stuttering (Hayhow, 2009). This is a 2 phase program, ranging from 1 to 2 years, that is set in the child’s natural setting (Guitar & Miller, 2009). According to Guitar and Miller’s 2009 research, the parent will utilize verbal contingencies during natural conversations for both stutter free and stuttered speech during phase one. As the child displays stutter free speech for three consecutive visit, then phase two will begin. Phase two is a continuation of phase one, but parents reduce the events of verbal contingencies and use a quantitative measure to score the rate of stutter free and stuttered speech (Guitar & Miller, 2009). 
During the second phase, parents and speech language pathologists will identify triggers causing stutter free and stuttered speech (Hayhow, 2009). To improve the frequency of stutter free speech, the adults will reduce or eliminate the triggers. Phase 2 will end one year after the ending of phase 1 (Guitar & Miller 2009). According to Hayhow, quantitative research have shown that the Lidcombe Program is a successful tool of treatment for eliminating stuttering in children of 6 years and younger (2009). Parents, while helping their child at home can use this program easily because the Lidcombe Program is completed in the child’s natural setting with their parents. During stage 1, daily structured and subsequent daily unstructured conversations are administered by a parent in everyday situations to determine any triggers of stuttered speech (Hayhow 2009). Parents will find triggers and try to eliminate them from the child’s environment to improve the frequency of stutter free speech. According to Hayhow, parents then use the information received during the treatment procedure in their daily lives (2009). In many situations, there was a steady transfer from parental control to their children controlling the stuttering triggers because of the daily implementation (Hayhow, 2009). Parents can easily collaborate with the special education, general education, and related services personnel to help the child’s success outside their natural setting. Special and general education teachers and personnel can implement the results of the Lidcombe program within the instructional setting. This can be accomplished with direct and constant collaboration with the parents because the Lidcombe Program is primarily parent directed. One way teachers and personnel can use the result of the program is utilizing “talk-time”(Hayhow, 2009). “Talk-time” is a 15 minute teacher led motherese like conversation with the student using slower than average speech rate, short and simple sentences, and chunking words together. Teachers take the opportunity to teach students how to speak slower through differentiated instructions and altering oral presentation (Guitar & Miller, 2009). With every opportunity, teachers should judiciously praise, acknowledge stutters, correct the stutters, and evaluate the child’s progress. Self Modeling Strategy, or speech restructuring, is an effective technique that uses range of novel speech patterns for the improvement of stuttered speech (Cream, O’Brian, Onslow, Packman, & Menzies, 2008). Speech restructuring begins with an instatement or an establishment stage in which the stutters learn an extremely unhurried exaggerated form of speech model (Cream et al, 2008). This speech pattern is then used at methodically increasing the pace until normal speech rate is fairly accurate and the stutterer can consistently construct extensive samples of restructured stutter-free speech (Cream, O’Brian, Onslow, & Packman, 2010). According to Cream et al, self modeling strategy utilizes self-evaluation, performance contingent maintenance, and personal construct therapy, supplementary fluency training and cognitive or anxiety treatment (2008). Under this strategy, video self-modeling or VSM is a process in which stutterer views 5 minute video images of themselves with stutter free speech and free of problem target behaviors. 
Yaruss (2006) claimed that four techniques to the Speech Modeling Strategy that parents, teachers, and personnel can use has also been shown to hold positive results, based on his research, that when certain aspects of communication are modified can ease children’s production of fluent speech. During communication modification, the specific targeted areas are (a) use and exhibit a simpler and more relaxed manner of speaking; (b) use of increased gaps between each speaker’s turns to decrease time pressures a child may have when communicating; (c) reduce of burden to speak and increased time pressures often associated with “rapid-fire questioning”, if present; and (d) reflecting, rephrasing, and expanding on children’s utterances to provide a positive communication mode. Instructionally, special and general education teachers can utilize the VSM technique of the speech restructuring process to help their students overcome their speech impediment. The students will go through a process called self-efficacy to identify their own capacity to speak without stuttering (Cream et al, 2008; Cream et al, 2010). Cream et al reported the use of VSM in an instructional setting with three students who had completed speech restructuring treatment. The three 5-minute VSM videos were supplied for each student showing their stutter-free answers to teacher questions during academic lessons and the stutterers will give a self-reflection to the teacher (2010). There was a considerate decrease in stuttering for 2 out of 3 students during the 12-18 months the VSM technique was utilized. Related service personnel, like speech language pathologists, can implement the VSM during sessions to track the progress of the child’s stuttering (Cream et al, 2010). Speech restructuring with the VSM technique is also useful tool for parents to use at home. Using this technique at the child’s natural setting is a behavior management strategy. Both parent and the child can review the short videos and self-reflect together to improve the likelihood of stutter free speech. According to Guitar (2006), stress has been linked with stuttering. There are small tremors in everyone’s muscles that are amplified when faced in a stressful event. The strain on the tremors during these stressful moments may cause an increase in the stuttering disfluency (2006). If moments of stress can be identified and reduced or eliminated, then the frequencies of the disfluencies will be reduced as well. In Yaruss 2006 research, he mentioned that improving a stuttering disfluencies by identifying the precise tension on a stressor inventory are supported on the idea that children’s fluency is affected by the environment they are in and how their reaction is to that environment stressor, including their temperament. If parents, teachers and related service personnel can work together to identify any, and if possible, all stressors that aggravate the child’s stuttering, than the outcome of treatment may increase both at home and in the academic setting. Yaruss suggests that parents, educators, and all service related personnel should complete a stressor inventory intended to recognize individual characteristics of the child and environmental issues that may influence the child’s capability to speak freely and converse efficiently (2006). If collaboration exists between parents and educators, each will completes the inventory independently, then each educator and parent insight can be analyzed (Yaruss, 2006). 
By focusing on the relationship between the stutterer and his/her surroundings, the speech-language pathologist can work with both educators and parents to personalize the management tools to each child's specific needs. After all stressors are identified, each member works together to recognize ways to eliminate or lower the force of each stressful event. An example of this at the instructional level is that if a student feels unprepared when asked questions during oral reading assignments, teachers can reduce the stress by informing the child ahead of time of the question he/she is to answer. At home, if the child feels pressure to tell family members about his/her day at school, a way to reduce the pressure is for the parents to set aside periods to talk.
Assistive technologies are products and mechanical aids which replace or improve the function of some bodily or intellectual capability that is hindered (Williams, 2006). For speakers who stutter, the assistive technology device that is available to them is called SpeechEasy. SpeechEasy is a convenient and subtle fluency-improving device that is much like a hearing aid. SpeechEasy is a prosthetic device that fits in or behind the ear. This device is intended to imitate choral speech. Choral speech is a phenomenon that promotes fluency among stutterers by speaking in unison. SpeechEasy creates a choral speech pattern through Altered Auditory Feedback (AAF). AAF combines Delayed Auditory Feedback (DAF), which allows stutterers to hear their voice with a slight delay (echo), and Frequency Altered Feedback (FAF), which allows stutterers to hear their voice with a shift in pitch (higher or lower) (Williams, 2006). Through AAF, SpeechEasy produces an illusion of choral speech to improve the fluency of stutterers. This assistive device is very effective, reducing the occurrence of stuttering by 75-85% (Pollard, Ellis, Finan, & Ramig, 2009). Teachers, parents and related service personnel can use this device in addition to any communication modification therapy. SpeechEasy as an addition will strengthen the effect of the therapy and decrease the likelihood of children relapsing, especially those at the intermediate and advanced stages of stuttering (Pollard et al, 2009).
Stuttering is a lifelong challenge for which networks, strategies and interventions must be in place continuously. The information described about students with speech impairments, in the area of stuttering, has had a huge impact on my future as an educator, related service personnel, and parent working together and educating students within this area of need. The primary notion is that using strategies and interventions without collaboration proves fruitless. A child with a disability, especially in the area of stuttering, needs constant interventions that must be generalized throughout his or her day. If the skills that the child learns at school are not practiced at home and vice versa, overcoming the disfluency may not happen. When it comes to collaboration, the focus must be on the child and what is best for the child's success. With the four interventions mentioned above, (1) Lidcombe, (2) Speech Modeling, (3) Stress Reduction, and (4) SpeechEasy, research has shown that the majority of the achievements begin with the parents, who then provide quantitative and specific tools to teachers and personnel. The teachers and personnel must take the information given and implement it within their instruction to nurture the whole child.
In conclusion, the information discussed in the articles read and summarized has provided a foundation for beginning to work with students who stutter. The VSM technique in the Speech Modeling intervention is the most cumbersome because of the extra video equipment needed. First, it may not be possible for teachers or school administrators to supply camcorders to record students speaking fluently. Secondly, it may also be impossible to actually capture a 5-minute clip of the student speaking fluently. Lastly, if parents of children in the classroom do not give consent to videotaping, the school cannot record the child. The Stress Reduction strategy is the most effective for budget-conscious schools and the child's parents. Parents, teachers, and related service personnel collaborating to identify stress triggers focus on the whole child, both inside the classroom and in his or her natural environment. Knowing and reducing the stress will lead to the child speaking fluently around the clock, and not just during specific hours of the day.
If we say that light from a distant galaxy took 10 billion years to reach earth (10 billion light years from Earth), why isn't it true that the galaxy is actually much further away since during that 10 billion year time since the light started its journey toward us, the galaxy has continued to move still farther away? If this has any truth, then galaxies which are farthest away and traveling close to the speed of light away from us are almost twice the distance now (almost 28 billion light years away if the universe is 14 billion years old) as they were when the light started its journey! It is completely legitimate to say that the galaxy is farther than 10 billion light years away from Earth now -- if you're using a particular definition of the "distance" to the galaxy. Unfortunately, distance is one of those things that has an intuitive meaning in everyday life but is not so intuitive in our expanding universe! Astronomers (and other people) are not always very clear about what they mean when they talk about an object's "distance", leading to a lot of confusion about this topic. Read on for a further explanation. First of all, the expansion of the universe doesn't consist of galaxies moving through some static space, but rather the "stretching" of the space itself. The light is moving through this expanding space and has to travel the initial distance plus whatever distance is added due to the universe's expansion during the course of the journey. It's like running on a racetrack that is being stretched -- if the racetrack started off 100 meters long but got stretched to a final length of 400 meters as you were running from start to finish, then the total distance you've run is more than 100 meters. In fact, when you talk about the "distance" between the start and finish lines in this racetrack, you might mean several different things: (1) You could mean 100 meters, since that's the distance when you start running; it's also what the markings on the track say the distance is. (2) You could mean 400 meters, since that's the distance between start and finish at the moment you reach the finish line. (3) You could mean the actual distance you've run, which is more than 100 meters (since the track stretches while you're running on it), but less than 400 meters (since some of the stretching happens on parts of the track you've already passed through). [Thanks to a reader for pointing out the difference between (2) and (3) -- in my first attempt at answering this question I did not make any distinction between them!] You can see from the above example that when astronomers talk about the "distance" to a faraway galaxy, there are several things they might mean! Ned Wright's Cosmology Tutorial has a comprehensive technical discussion of the different types of distances that astronomers use (though it may be a bit hard to understand if you jump into it without reading the earlier parts of his tutorial first) -- some of these distances are similar to those discussed above for the racetrack, while others are completely different. He also has some answers posted to questions that are similar to the one you are asking. If we somehow know that "light from a distant galaxy took 10 billion years to reach Earth", as your question posits, then clearly, if we are using definition #3 we would say that the distance to the galaxy is 10 billion light years. However, if we are using definition #1 we would say that the distance to the galaxy is less than 10 billion light years (i.e. 
it was closer than 10 billion light years when the light was emitted), and if we are using definition #2 we would say that the distance to the galaxy is greater than 10 billion light years (i.e. it is greater than 10 billion light years right now, when the light is received by us). As you can see, saying that a galaxy is "10 billion light years away" is an ambiguous statement! It doesn't really mean much unless you also specify what definition of distance you are using. And while definition #2 is probably the one that corresponds most closely to your intuitive feeling for what "distance" is, in astronomy that is not always the best definition! After all, the light that travels from a faraway galaxy to us is our only source of information about that galaxy, so we might care a little more about the physical distance that the light has traveled (definition #3) than how far away the galaxy is now (definition #2), since how far away the galaxy is now has no bearing on what we see when we look at the galaxy. All the definitions of distance discussed here suffer from a bit of a practical problem, though. In order to use astronomical measurements to actually calculate any of these distances to a particular galaxy, we need to know something about the history of the universe's expansion (in other words, how did the racetrack stretch as a function of time?). Different models of the universe's expansion give different numbers for the distance to the galaxy, and although recent measurements (in particular those of the WMAP satellite) are helping us learn more about how the universe expands, we still don't know all the details. Therefore, the most common measurement of distance that astronomers use for faraway galaxies is a lot simpler and less informative than the definitions of distance discussed above, but it is much easier to measure! This distance measurement is known as the redshift of a faraway galaxy. Astronomers take advantage of the fact that as light travels through the expanding universe, the light itself gets stretched by the same factor that the universe does, causing its wavelength to increase and its color to change and become more towards the red end of the spectrum. The redshift of the light refers to the amount by which it has been stretched and is basically a measurement of how much the universe has expanded during the light's trip from the faraway galaxy to us. Astronomers can measure the wavelength of light that we receive on Earth, and they can also usually figure out what wavelength the light had when it was emitted, based on a knowledge of the chemical processes involved in the light's production (for some information on how this is done, see our answer to a previous question). Therefore, they can easily calculate the redshift for almost any faraway object. In the example of the racetrack discussed above, the track has expanded by a factor of 4 (from 100 meters to 400 meters). Astronomers would say that the redshift in this case is 3 (the redshift is defined as "one less than the factor by which the universe has expanded", just so it works out that if there is no expansion at all, the redshift will be zero). If you were running on the racetrack and your body behaved like light did, you would reach the finish line and find that your body was 4 times bigger than it was when you started out! Redshift is not a "traditional" measure of distance in the sense that we are used to. 
Standing at the finish line and saying that the starting line is at a redshift of 3 doesn't tell us anything about how big the track is or how far you just ran. (That's probably the reason that science journalists almost never use redshift to describe distances even though it's what astronomers use all the time -- it's not something that readers can intuitively connect with.) However, there is some meaning to the concept -- in an expanding universe, objects with larger redshifts are farther away. So if we measure one galaxy to have a redshift of 3 and another to have a redshift of 3.5, we might not have any idea how long it would take us to get to either of them in a spaceship, but at least we can say which one we could get to faster! This page was last updated June 27, 2015.
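To make the redshift bookkeeping concrete, here is a small Python sketch (illustrative only; the Lyman-alpha rest wavelength is standard physics rather than something stated in this answer) that turns an observed and an emitted wavelength into a redshift and the corresponding expansion factor.

```python
# Redshift z is "one less than the factor by which the universe has expanded",
# which for light means z = (observed wavelength / emitted wavelength) - 1.

def redshift(lambda_observed_nm: float, lambda_emitted_nm: float) -> float:
    return lambda_observed_nm / lambda_emitted_nm - 1.0

# Example: the Lyman-alpha line is emitted at 121.6 nm (ultraviolet).
# If we receive it stretched to 486.4 nm (blue-green visible light), then:
z = redshift(486.4, 121.6)
expansion_factor = 1.0 + z

print(f"z = {z:.2f}")                    # 3.00, like the stretched racetrack example
print(f"expansion factor = {expansion_factor:.0f}x since the light was emitted")
```

As with the racetrack at redshift 3, the expansion factor (1 + z) tells you how much the universe has stretched during the light's journey, not how far away the galaxy is by any particular definition of distance.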
Critical thinking is something that we are striving for in ourselves as educators, and as a goal for our students, but it can be a complex concept to grasp. I am liking this definition from The Foundation For Critical Thinking because it’s one that could be used as is, or paraphrased, for students of a variety of ages. So much to unpack here, and so many ways to bring disciplinary thinking, global competencies and media literacy into play. Critical thinking is self-guided, self-disciplined thinking which attempts to reason at the highest level of quality in a fair-minded way. People who think critically consistently attempt to live rationally, reasonably, empathically. They are keenly aware of the inherently flawed nature of human thinking when left unchecked. They strive to diminish the power of their egocentric and sociocentric tendencies. They use the intellectual tools that critical thinking offers – concepts and principles that enable them to analyze, assess, and improve thinking. They work diligently to develop the intellectual virtues of intellectual integrity, intellectual humility, intellectual civility, intellectual empathy, intellectual sense of justice and confidence in reason. They realize that no matter how skilled they are as thinkers, they can always improve their reasoning abilities and they will at times fall prey to mistakes in reasoning, human irrationality, prejudices, biases, distortions, uncritically accepted social rules and taboos, self-interest, and vested interest. They strive to improve the world in whatever ways they can and contribute to a more rational, civilized society. At the same time, they recognize the complexities often inherent in doing so. They avoid thinking simplistically about complicated issues and strive to appropriately consider the rights and needs of relevant others. They recognize the complexities in developing as thinkers, and commit themselves to life-long practice toward self-improvement. They embody the Socratic principle: The unexamined life is not worth living , because they realize that many unexamined lives together result in an uncritical, unjust, dangerous world. ~ Linda Elder, September, 2007 I’m wondering about your thoughts about this definition? How do you and/or your students describe critical thinking?
| Source | Quoted result | Standardized result |
| Glossary and Definitions. Associated Information Technology. 21 May 2003. | "The most commonly known flammable liquid is gasoline. It has a flash point of about -50° F (-65° C). The ignition temperature is about 495° F (232° C) [sic], a comparatively low figure." | 553 K |
| Hazardous Locations. 2000. US Motor. 22 May 2003. | "Gasoline, also Class I, Group D, has an approximate ignition temperature of 280°C." | 553 K |
| Multimedia - Ignition Temperature. Encarta Encyclopedia. 24 May 2003. | | |
| Properties of Fuels [pdf]. 25 May 2003. | | |
| Ignition Temperature. Taftan Data. 1998. | "Each fuel should be brought above its Ignition Temperature for starting the combustion process. An appropriate air-fuel ratio is also necessary. The minimum ignition temperature at atmospheric pressure for some substances are: carbon 400 C, gasoline 260 C, hydrogen 580 C, carbon monoxide 610 C, methane 630 C." | 533 K |
Ignition temperature is the minimum temperature at which a material will burn or explode. It is the temperature at which a mixture of flammable vapor and air would ignite without a spark or flame. The term ignition temperature is also used to describe the temperature of a hot surface that would cause flammable vapors to ignite. Gasoline is the most common flammable liquid and the main cause of injuries among teenage boys. The ignition temperature of gasoline ranges from 530 to 553 K. There is no direct correlation between explosive properties and ignition temperature; materials can have the same physical properties and similar explosive properties while having ignition temperatures that vary greatly. The ignition temperature is affected by the chemical properties of the flammable liquid. When a flammable liquid is in its liquid state, it will not ignite; it burns only in its gaseous state.
In addition to ignition temperature, other properties associated with the flammability of a liquid are its flash point, flammable range, and vapor density. The flash point is the temperature at which a flammable liquid vaporizes and is therefore able to ignite. Liquids with a flash point under 40 °C are considered combustible liquids. Gasoline has a flash point of about -45 °C. The flammable range of a liquid is the range of ratios of flammable vapor to air that create an ignitable mixture. The flammable range of gasoline is between 1.4 and 7.6%. If the ratio of gasoline to air is less than 1.4%, the mixture is too thin to burn; the mixture cannot burn when it contains more than 7.6% gasoline because it is too rich to burn. The vapor density is the weight of a vapor relative to the weight of air. Gasoline vapor is denser than air and therefore sinks in air.
Shani Christopher -- 2003
External links to this page:
- Insultingly stupid movie physics by Tom Rogers
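As a quick check on the figures above, the sketch below (hypothetical helper names; the limits and temperatures are the ones quoted in this entry) converts the cited temperatures to kelvin and tests whether a given gasoline/air mixture falls inside the 1.4-7.6% flammable range.

```python
# Unit conversions and a range check using the figures quoted above.

def celsius_to_kelvin(t_c: float) -> float:
    return t_c + 273.15

def fahrenheit_to_celsius(t_f: float) -> float:
    return (t_f - 32.0) * 5.0 / 9.0

FLAMMABLE_RANGE = (1.4, 7.6)   # percent gasoline vapor in air, as quoted above

def mixture_can_ignite(percent_gasoline: float) -> bool:
    low, high = FLAMMABLE_RANGE
    return low <= percent_gasoline <= high

print(celsius_to_kelvin(260))                  # 533.15 K (Taftan Data figure)
print(celsius_to_kelvin(280))                  # 553.15 K (US Motor figure)
print(round(fahrenheit_to_celsius(495), 1))    # 257.2 C, which is why the quoted 232 C is marked [sic]
print(mixture_can_ignite(1.0))                 # False: too thin to burn
print(mixture_can_ignite(5.0))                 # True
print(mixture_can_ignite(9.0))                 # False: too rich to burn
```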
No matter your approach to structuring a lesson in the classroom, you can apply those same strategies when teaching a lesson from start to finish using technology. It can be challenging to translate traditional teaching strategies into the digital world for busy teachers, but here are a few tips to get you started doing precisely that. The I do, we do, you do teaching strategy I do, we do, you do is an instructional strategy used by teachers in a variety of grade levels and content areas to present any lesson. Some teachers refer to this strategy as gradual release. Its systematic structure gives a teacher time to model a concept followed by a sequential gradual release of student practice. It’s a simple model that allows for a repetitive step-by-step approach. Teachers love this instructional strategy because it is flexible. It can let a teacher go back and forth between the three phases depending on student needs that may surface within the lesson. I do, we do, you do also helps provide a specific time during a lesson where an instructor can explain why something happens the way they are presenting. I do phase The I do phase is the first step in this strategy. It’s the step where teachers model what students need to know. Modeling for students is a powerful part of the learning process. In this instructional strategy, it’s essential to model before releasing students and expecting them to complete a task independently. During this first phase, teachers complete pre-assessments of what students already know, have discussions, build background knowledge, and predict or infer what may happen next in a sequential lesson. An effective way to model in the I do phase is for teachers to think aloud as they deliver content. Modeling can happen in every subject area. In math, students see the steps of a problem. Students can better understand comprehension techniques in reading when they model aloud. Teachers model the demonstration of an experiment in science, social studies projects, or the writing process. We know educators have many different options for including technology as they model new concepts for students. Nearpod can help strengthen a lesson’s modeling phase in a meaningful and fun way with several different tools. Nearpod’s whiteboarding feature allows teachers to model math problems, diagrams, graphic organizers, the writing process, and more. Traditionally modeling happens at the front of the classroom on a whiteboard, but with Nearpod’s whiteboard, students can follow along live on their own devices from anywhere in the room. The slide annotation feature in Nearpod uses whiteboard tools on top of slides in a Nearpod lesson. Teachers can upload a slide with a blank Venn diagram, graph paper, or another tool to fill out together as a class. Nearpod’s slide annotation allows students to follow along on their own devices too. The second phase of I do, we do, you do is guided practice. Through guided practice, teachers gradually release responsibility to the student. Guided practice may be in groups or individually. Students should have more than one attempt at practice with the teacher before moving to the last phase. In this phase, teachers give feedback on attempts as students practice the retrieval process from their memory of the I do step. Through formative assessment, teachers can surface student understanding. Students will be able to learn a new concept with no reinforcement of errors or misconceptions. 
This phase can provide teachers with an opportunity to scaffold or differentiate the acquisition of skills. Teachers can strengthen guided practice using technology with Nearpod's 11 formative assessment features. Teachers can see what students understand with Draw It slides, Open-ended questions, polls, quizzes, and more during a live lesson. We know that finding the time to grade everything can be difficult for teachers. With Nearpod, you don't need to find the time to grade to see student understanding. Teachers can provide feedback on attempts and reach every student through Nearpod by delivering lessons live and seeing student results in real time as they present new opportunities to practice a concept. Teachers can pivot in the moment back to the modeling phase if student understanding shows the need. In Nearpod, teachers can pull up the whiteboard to return to modeling at any time in a lesson. The flexibility of Nearpod, combined with the flexibility of I do, we do, you do, creates successful learning opportunities for every student.

You do phase

In the final phase of I do, we do, you do, students practice retrieving a new skill on their own to develop fluency with the concept. At this phase in the instructional strategy, students demonstrate an initial level of understanding and rely less on the teacher for guidance. Teachers continue to monitor student efforts and progress and provide feedback when applicable. You do activities are distributed over time for lasting impact. They make a great spiral review for practicing past skills or independent practice in the classroom or at home.

Nearpod's student-paced lesson delivery mode allows teachers to create digital practice opportunities a student can work through at their own pace. Student-paced mode is ideal in a classroom setting where the teacher may be revisiting modeling or guided practice with students who need more support but needs an activity that allows independent practice for students who are ready. Teachers can strengthen students' independent practice with numerous activities like PhET simulations, VR field trips, interactive video, and other formative assessment tools. Nearpod's gamified quiz, Time to Climb, is a great way to end an I do, we do, you do lesson and surface student understanding at the same time.

Nearpod's award-winning platform is used by thousands of schools around the globe, transforming classroom engagement.
A loxodrome is a path on the Earth's surface that is followed when a compass is kept pointing in the same direction. It is a straight line on a Mercator projection of the globe precisely because such a projection is designed so that all paths along the Earth's surface that preserve the same directional bearing appear as straight lines.

The loxodrome isn't the shortest distance between two points on a sphere; the shortest distance is an arc of a great circle. But in the past it was hard for a ship's navigator to follow a great circle because this required constant changes of compass heading. The solution was to follow a loxodrome (from the Greek loxos, slanted, and dromos, course), also known as a rhumb line, by navigating along a constant bearing. In middle latitudes, at least, this didn't lengthen the journey unduly. If a loxodrome is continued indefinitely around a sphere, it produces a spherical spiral, which appears as a logarithmic spiral on a polar projection. The loxodrome can be written in terms of the longitude and latitude of a point on the curve and its angle with the meridians; there is no advantage in using parametric equations.

History of the loxodrome

The loxodrome was discovered by the Portuguese mathematician and geographer Pedro Nunes after observations reported to him in 1533 by Admiral Martim Afonso de Sousa. In 1569 the Flemish cartographer Gerardus Mercator designed a world map with a cylindrical projection in which the meridians and parallels are straight lines intersecting at right angles, with the spacing between parallels increasing with distance from the equator. The Mercator projection distorts the image (especially at high latitudes) but has the great advantage of showing loxodromes as straight lines; such maps are still important tools for navigation at sea and in the air.
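As a sketch of that relation (assuming a spherical Earth, with latitude φ, longitude λ, and a constant bearing β measured from true north): a rhumb line is a straight line in Mercator coordinates, whose ordinate is the inverse Gudermannian of latitude, so

```latex
% Mercator coordinates: x = \lambda, \quad y = \ln\tan\!\left(\tfrac{\pi}{4} + \tfrac{\varphi}{2}\right)
% A loxodrome of bearing \beta (from north) is a straight line of slope \tan\beta, hence
\lambda - \lambda_0 \;=\; \tan\beta \left[\, \ln\tan\!\left(\frac{\pi}{4} + \frac{\varphi}{2}\right)
    - \ln\tan\!\left(\frac{\pi}{4} + \frac{\varphi_0}{2}\right) \right]
```

The bracketed term diverges as the latitude approaches ±90°, which is why the curve winds around the poles infinitely many times without reaching them (the spherical spiral mentioned above), even though the arc length to the pole remains finite.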
Multiple Representations of Fractions, Decimals and Percents
Activity: Fraction, Decimal and Percent Chart
Activity: Understanding Equivalent Fractions

We often introduce algebraic concepts before students even understand the basics of number. This activity is effective for helping students make sense of equivalent fractions. It builds on the idea of an "area model," in which the shaded part is a fraction of the whole. This basic example extends to many more fraction and algebra concepts that can be modeled using the same idea of area; the area model helps students connect all of these concepts together.
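A minimal worked instance of the area-model idea (an illustrative example, not taken from the original activity): cut each half of a partly shaded rectangle into three equal pieces. The shaded area does not change, only the number of pieces, so

```latex
\frac{1}{2} \;=\; \frac{1 \times 3}{2 \times 3} \;=\; \frac{3}{6}
```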
"When an individual is protesting society's refusal to acknowledge his dignity as a human being," Bayard Rustin said, "his very act of protest confers dignity on him."

Rustin was a black activist whose contributions to the civil rights movement helped grow the nonviolent protest movement that evolved from the Montgomery bus boycott. This event would help Martin Luther King Jr. become one of the central figures in civil rights history. Although not forgotten, Rustin is no more than a footnote compared to Martin Luther King Jr., Rosa Parks or Malcolm X.

Relegated to work behind the scenes because of his past association with the Communist Party, Rustin served as King's assistant from 1955 to 1960. Rustin counseled King on nonviolent demonstration. He also organized important events that are now considered historic moments in American history. An organizer of the Journey of Reconciliation in 1947, which was an inspiration for the historic Freedom Rides, Rustin helped set a blueprint for future nonviolent demonstrations against racial discrimination. Rustin was the chief organizer of the 1963 March on Washington, which was immortalized by King's famous "I Have a Dream" speech. Rustin also was the chief organizer of King's Southern Christian Leadership Conference, which promoted nonviolent protests in an effort to end segregation.

"Bayard Rustin was a brilliant grassroots organizer," Black Studies instructor Nathan Katungi said. "He was a major force in organizing the 1963 historical March On Washington. As a young activist, he helped put pressure on President Truman to integrate the military."

Rustin eventually grew disillusioned with nonviolent demonstrations and shifted his attention elsewhere, parting ways with King in 1963. Aside from his civil rights contributions, Rustin became an advocate for gay and lesbian causes. Rustin was a gay man and had been arrested in 1953 for homosexual activity. Rustin would go on to promote educational, labor and civil rights reforms until the end of his life.
In the field of optics, transparency (also called pellucidity or diaphaneity) is the physical property of allowing light to pass through a material without being scattered. On a macroscopic scale (one where the dimensions investigated are much, much larger than the wavelength of the photons in question), the photons can be said to follow Snell's law. Translucency (also called translucence or translucidity) is a superset of transparency: it allows light to pass through, but does not necessarily (again, on the macroscopic scale) follow Snell's law; the photons can be scattered at either of the two interfaces where there is a change in index of refraction, or internally. In other words, a translucent medium allows the transport of light, while a transparent medium not only allows the transport of light but also allows for image formation. The opposite property of translucency is opacity. Transparent materials appear clear, with the overall appearance of one color, or any combination leading up to a brilliant spectrum of every color.

When light encounters a material, it can interact with it in several different ways. These interactions depend on the wavelength of the light and the nature of the material. Photons interact with an object by some combination of reflection, absorption and transmission. Some materials, such as plate glass and clean water, transmit much of the light that falls on them and reflect little of it; such materials are called optically transparent. Many liquids and aqueous solutions are highly transparent. The absence of structural defects (voids, cracks, etc.) and the molecular structure of most liquids are mostly responsible for their excellent optical transmission.

Materials which do not transmit light are called opaque. Many such substances have a chemical composition which includes what are referred to as absorption centers. Many substances are selective in their absorption of white light frequencies: they absorb certain portions of the visible spectrum while reflecting others. The frequencies of the spectrum which are not absorbed are either reflected back or transmitted for our physical observation. This is what gives rise to color. The attenuation of light of all frequencies and wavelengths is due to the combined mechanisms of absorption and scattering.

Transparency can provide almost perfect camouflage for animals able to achieve it. This is easier in dimly lit or turbid seawater than in good illumination. Many marine animals such as jellyfish are highly transparent.

With regard to the absorption of light, the primary material considerations are the electronic and vibrational (thermal) absorption mechanisms discussed later in this article. With regard to the scattering of light, the most critical factor is the length scale of the relevant structural features (pores, grain boundaries, surface roughness) relative to the wavelength of the light being scattered.

Diffuse reflection - Generally, when light strikes the surface of a (non-metallic and non-glassy) solid material, it bounces off in all directions due to multiple reflections by the microscopic irregularities inside the material (e.g., the grain boundaries of a polycrystalline material, or the cell or fiber boundaries of an organic material), and by its surface, if it is rough. Diffuse reflection is typically characterized by omni-directional reflection angles. Most of the objects visible to the naked eye are identified via diffuse reflection. Another term commonly used for this type of reflection is "light scattering". Light scattering from the surfaces of objects is our primary mechanism of physical observation.
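As a reminder of the Snell's-law behavior mentioned above, here is a worked refraction case (illustrative numbers: n₁ = 1.0 for air, n₂ = 1.5 for a typical glass, incidence angle 30°):

```latex
n_1 \sin\theta_1 = n_2 \sin\theta_2
\quad\Longrightarrow\quad
\sin\theta_2 = \frac{1.0 \times \sin 30^\circ}{1.5} \approx 0.33,
\qquad \theta_2 \approx 19.5^\circ
```

A transparent medium bends the whole beam coherently in this way; a translucent one scatters parts of it into many directions, so no sharp image survives.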
Light scattering in liquids and solids depends on the wavelength of the light being scattered. Limits to spatial scales of visibility (using white light) therefore arise, depending on the frequency of the light wave and the physical dimension (or spatial scale) of the scattering center. Visible light has a wavelength scale on the order of half a micrometer (a micrometer is one millionth of a meter). Scattering centers (or particles) as small as one micrometer have been observed directly in the light microscope (e.g., Brownian motion).

Optical transparency in polycrystalline materials is limited by the amount of light which is scattered by their microstructural features. Since visible light has a wavelength on the order of a micrometer, the scattering centers that matter most have dimensions on a similar spatial scale. Primary scattering centers in polycrystalline materials include microstructural defects such as pores and grain boundaries. In addition to pores, most of the interfaces in a typical metal or ceramic object are in the form of grain boundaries, which separate tiny regions of crystalline order. When the size of the scattering center (or grain boundary) is reduced below the size of the wavelength of the light being scattered, the scattering no longer occurs to any significant extent.

In the formation of polycrystalline materials (metals and ceramics), the size of the crystalline grains is determined largely by the size of the crystalline particles present in the raw material during formation (or pressing) of the object. Moreover, the size of the grain boundaries scales directly with particle size. Thus a reduction of the original particle size well below the wavelength of visible light (to about 1/15 of the light wavelength, or roughly 600/15 = 40 nm) eliminates much of the light scattering, resulting in a translucent or even transparent material.

Computer modeling of light transmission through translucent ceramic alumina has shown that microscopic pores trapped near grain boundaries act as primary scattering centers. The volume fraction of porosity had to be reduced below 1% for high-quality optical transmission (99.99 percent of theoretical density). This goal has been readily accomplished and amply demonstrated in laboratories and research facilities worldwide using the emerging chemical processing methods of sol-gel chemistry and nanotechnology.

Transparent ceramics have created interest for applications such as high-energy lasers, transparent armor windows, nose cones for heat-seeking missiles, radiation detectors for non-destructive testing, high-energy physics, space exploration, and security and medical imaging. Large laser elements made from transparent ceramics can be produced at a relatively low cost. These components are free of internal stress or intrinsic birefringence, and allow relatively large doping levels or optimized custom-designed doping profiles. This makes ceramic laser elements particularly important for high-energy lasers.

The development of transparent panel products will have other potential advanced applications, including high-strength, impact-resistant materials that can be used for domestic windows and skylights.
Perhaps more important is that walls and other applications will have improved overall strength, especially for the high-shear conditions found in high seismic and wind exposures. If the expected improvements in mechanical properties bear out, the traditional limits on glazing areas in today's building codes could quickly become outdated if the window area actually contributes to the shear resistance of the wall.

Currently available infrared-transparent materials typically exhibit a trade-off between optical performance, mechanical strength and price. For example, sapphire (crystalline alumina) is very strong, but it is expensive and lacks full transparency throughout the 3–5 micrometer mid-infrared range. Yttria is fully transparent from 3–5 micrometers, but lacks sufficient strength, hardness, and thermal shock resistance for high-performance aerospace applications. Not surprisingly, a combination of these two materials in the form of yttrium aluminium garnet (YAG) is one of the top performers in the field.

When light strikes an object, it usually has not just a single frequency (or wavelength) but many. Objects have a tendency to selectively absorb, reflect or transmit light of certain frequencies. That is, one object might reflect green light while absorbing all other frequencies of visible light. Another object might selectively transmit blue light while absorbing all other frequencies of visible light. The manner in which visible light interacts with an object depends on the frequency of the light, the nature of the atoms in the object, and often the nature of the electrons in the atoms of the object.

Some materials allow much of the light that falls on them to be transmitted through the material without being reflected. Materials that allow the transmission of light waves through them are called optically transparent. Chemically pure (undoped) window glass and clean river or spring water are prime examples of this. Materials which do not allow the transmission of any light wave frequencies are called opaque. Such substances may have a chemical composition which includes what are referred to as absorption centers. Most materials are selective in their absorption of light frequencies: they absorb only certain portions of the visible spectrum. The frequencies of the spectrum which are not absorbed are either reflected back or transmitted for our physical observation. In the visible portion of the spectrum, this is what gives rise to color.

Color centers are largely responsible for the appearance of specific wavelengths of visible light all around us. Moving from longer (0.7 micrometer) to shorter (0.4 micrometer) wavelengths, red, orange, yellow, green and blue (ROYGB) can all be identified by our senses through the selective absorption of specific light wave frequencies (or wavelengths). Mechanisms of selective light wave absorption include electronic absorption and vibrational (thermal) absorption.

In electronic absorption, the energy of the incoming light wave (set by its frequency) is at or near the energy-level spacing of the electrons within the atoms which compose the substance. In this case, the electrons will absorb the energy of the light wave and increase their energy state, often moving outward from the nucleus of the atom into an outer shell or orbital. The atoms that bind together to make the molecules of any particular substance contain a number of electrons (given by the atomic number Z in the periodic table). Recall that all light waves are electromagnetic in origin.
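To make the electronic-absorption picture concrete, the photon energy for each visible wavelength can be computed from E = hc/λ (an illustrative sketch; the wavelengths are the visible-range endpoints quoted above):

```python
# Illustrative sketch: photon energy E = h*c / wavelength across the visible range quoted above.
HC_EV_NM = 1239.84  # Planck constant times speed of light, expressed in eV*nm (approximate)

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in electron-volts for a wavelength given in nanometers."""
    return HC_EV_NM / wavelength_nm

for color, wavelength_nm in [("red", 700), ("green", 550), ("violet-blue", 400)]:
    print(f"{color:12s} {wavelength_nm} nm -> {photon_energy_ev(wavelength_nm):.2f} eV")

# red          700 nm -> 1.77 eV
# green        550 nm -> 2.25 eV
# violet-blue  400 nm -> 3.10 eV
# A material absorbs strongly at wavelengths whose photon energy matches an allowed
# electronic transition; photons matching no transition are reflected or transmitted.
```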
Because light waves are electromagnetic, they are affected strongly when coming into contact with the negatively charged electrons in matter. When photons (individual packets of light energy) come into contact with the valence electrons of an atom, one of several things can occur: the photon can be reflected, absorbed, or transmitted. Most of the time, it is a combination of these that happens to the light that hits an object. The electrons in different materials vary in the range of energy that they can absorb. Most glasses, for example, block ultraviolet (UV) light: the electrons in the glass absorb the energy of the photons in the UV range while ignoring the weaker energy of photons in the visible light spectrum.

Thus, when a material is illuminated, individual photons of light can make the valence electrons of an atom transition to a higher electronic energy level. The photon is destroyed in the process and the absorbed radiant energy is transformed into electric potential energy. Several things can then happen to the absorbed energy: it may be re-emitted by the electron as radiant energy (in this case the overall effect is in fact a scattering of light), dissipated to the rest of the material (i.e. transformed into heat), or the electron can be freed from the atom (as in the photoelectric and Compton effects).

The primary physical mechanism for storing mechanical energy of motion in condensed matter is through heat, or thermal energy. Thermal energy manifests itself as energy of motion; thus, heat is motion at the atomic and molecular levels. The primary mode of motion in crystalline substances is vibration. Any given atom will vibrate around some mean or average position within a crystalline structure, surrounded by its nearest neighbors. This vibration in two dimensions is equivalent to the oscillation of a clock's pendulum: it swings back and forth symmetrically about some mean or average (vertical) position. Atomic and molecular vibrational frequencies may average on the order of 10¹² cycles per second (terahertz radiation).

When a light wave of a given frequency strikes a material with particles having the same or resonant vibrational frequencies, those particles will absorb the energy of the light wave and transform it into thermal energy of vibrational motion. Since different atoms and molecules have different natural frequencies of vibration, they will selectively absorb different frequencies (or portions of the spectrum) of infrared light. Reflection and transmission of light waves occur because the frequencies of the light waves do not match the natural resonant frequencies of vibration of the objects. When infrared light of these frequencies strikes an object, the energy is reflected or transmitted. If the object is transparent, then the light waves are passed on to neighboring atoms through the bulk of the material and re-emitted on the opposite side of the object. Such frequencies of light waves are said to be transmitted.

An object may not be transparent either because it reflects the incoming light or because it absorbs the incoming light. Almost all solids reflect a part and absorb a part of the incoming light. When light falls onto a block of metal, it encounters atoms that are tightly packed in a regular lattice and a "sea of electrons" moving randomly between the atoms. In metals, most of these are non-bonding electrons (or free electrons), as opposed to the bonding electrons typically found in covalently bonded or ionically bonded non-metallic (insulating) solids.
In a metallic bond, any potential bonding electrons can easily be lost by the atoms in a crystalline structure. The effect of this delocalization is simply to exaggerate the effect of the "sea of electrons". As a result of these electrons, most of the incoming light in metals is reflected back, which is why we see a shiny metal surface.

Most insulators (or dielectric materials) are held together by ionic bonds. Thus, these materials do not have free conduction electrons, and the bonding electrons reflect only a small fraction of the incident wave. The remaining frequencies (or wavelengths) are free to propagate (or be transmitted). This class of materials includes all ceramics and glasses. If a dielectric material does not include light-absorbent additive molecules (pigments, dyes, colorants), it is usually transparent to the spectrum of visible light. Color centers (or dye molecules, or "dopants") in a dielectric absorb a portion of the incoming light wave; the remaining frequencies (or wavelengths) are free to be reflected or transmitted. This is how colored glass is produced.

Most liquids and aqueous solutions are highly transparent. For example, water, cooking oil, rubbing alcohol, air, and natural gas are all clear. The absence of structural defects (voids, cracks, etc.) and the molecular structure of most liquids are chiefly responsible for their excellent optical transmission. The ability of liquids to "heal" internal defects via viscous flow is one of the reasons why some fibrous materials (e.g., paper or fabric) increase their apparent transparency when wetted: the liquid fills up numerous voids, making the material more structurally homogeneous.

Light scattering in an ideal defect-free crystalline (non-metallic) solid, which provides no scattering centers for incoming light waves, will be due primarily to any effects of anharmonicity within the ordered lattice. Light wave transmission will be highly directional due to the typical anisotropy of crystalline substances, which includes their symmetry group and Bravais lattice. For example, the seven different crystalline forms of quartz silica (silicon dioxide, SiO2) are all clear, transparent materials.

Work on optically transparent materials focuses on the response of a material to incoming light waves over a range of wavelengths. Guided light wave transmission via frequency-selective waveguides involves the emerging field of fiber optics and the ability of certain glassy compositions to act as a transmission medium for a range of frequencies simultaneously (multi-mode optical fiber) with little or no interference between competing wavelengths or frequencies. This resonant mode of energy and data transmission via electromagnetic (light) wave propagation is relatively lossless.

An optical fiber is a cylindrical dielectric waveguide that transmits light along its axis by the process of total internal reflection. The fiber consists of a core surrounded by a cladding layer. To confine the optical signal in the core, the refractive index of the core must be greater than that of the cladding. The refractive index is the parameter reflecting the speed of light in a material. (Refractive index is the ratio of the speed of light in vacuum to the speed of light in a given medium; the refractive index of vacuum is therefore 1.) The larger the refractive index, the more slowly light travels in that medium. Typical values for the core and cladding of an optical fiber are 1.48 and 1.46, respectively.
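A small numerical sketch of what those index values imply (illustrative only, using the typical core and cladding indices quoted above): the critical angle for total internal reflection and the acceptance cone of the fiber both follow from Snell's law.

```python
import math

# Illustrative sketch using the typical indices quoted above (assumed values).
n_core, n_cladding = 1.48, 1.46

# Critical angle at the core/cladding boundary, measured from the surface normal:
# rays striking the boundary at an incidence angle larger than this are totally reflected.
critical_angle_deg = math.degrees(math.asin(n_cladding / n_core))

# Numerical aperture and half-angle of the acceptance cone for light entering from air (n = 1).
numerical_aperture = math.sqrt(n_core**2 - n_cladding**2)
acceptance_half_angle_deg = math.degrees(math.asin(numerical_aperture))

print(f"critical angle        ~ {critical_angle_deg:.1f} degrees")        # ~ 80.6 degrees
print(f"numerical aperture    ~ {numerical_aperture:.3f}")                # ~ 0.242
print(f"acceptance half-angle ~ {acceptance_half_angle_deg:.1f} degrees") # ~ 14.0 degrees
```

Only light entering the fiber within roughly 14° of its axis is trapped and guided; this is the acceptance cone described in the next paragraph.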
When light traveling in a dense medium hits a boundary at a steep angle, the light will be completely reflected. This effect, called total internal reflection, is used in optical fibers to confine light in the core. Light travels along the fiber, bouncing back and forth off the boundary. Because the light must strike the boundary at an angle greater than the critical angle, only light that enters the fiber within a certain range of angles will be propagated. This range of angles is called the acceptance cone of the fiber. The size of this acceptance cone is a function of the refractive index difference between the fiber's core and cladding. Optical waveguides are used as components in integrated optical circuits (e.g. combined with lasers or light-emitting diodes, LEDs) or as the transmission medium in local and long-haul optical communication systems.

Attenuation in fiber optics, also known as transmission loss, is the reduction in intensity of the light beam (or signal) with respect to the distance traveled through a transmission medium. Attenuation coefficients in fiber optics are usually quoted in units of dB/km, reflecting the very high transparency of modern optical transmission media. The medium is usually a fiber of silica glass that confines the incident light beam to the inside. Attenuation is an important factor limiting the transmission of a signal across large distances. In optical fibers, the main attenuation source is scattering from molecular-level irregularities (Rayleigh scattering) due to structural disorder and compositional fluctuations of the glass structure. The same phenomenon is seen as one of the limiting factors in the transparency of infrared missile domes. Further attenuation is caused by light absorbed by residual materials, such as metals or water ions, within the fiber core and inner cladding. Light leakage due to bending, splices, connectors, or other outside forces is another factor resulting in attenuation.

Many marine animals that float near the surface are highly transparent, giving them almost perfect camouflage. However, transparency is difficult for bodies made of materials that have different refractive indices from seawater. Some marine animals such as jellyfish have gelatinous bodies, composed mainly of water; their thick mesogloea is acellular and highly transparent. This conveniently makes them buoyant, but it also makes them large for their muscle mass, so they cannot swim fast, making this form of camouflage a costly trade-off with mobility. Gelatinous planktonic animals are between 50 and 90 per cent transparent. A transparency of 50 per cent is enough to make an animal invisible to a predator such as cod at a depth of 650 metres (2,130 ft); better transparency is required for invisibility in shallower water, where the light is brighter and predators can see better. For example, a cod can see prey that are 98 per cent transparent in optimal lighting in shallow water. Therefore, sufficient transparency for camouflage is more easily achieved in deeper waters. For the same reason, transparency in air is even harder to achieve, but a partial example is found in the glass frogs of the South American rain forest, which have translucent skin and pale greenish limbs.
On the border between land and sea, a unique ecosystem covers tropical and subtropical regions around the world: mangrove forests. Mangroves are well adapted to saline water and the tides, and they thrive along the coastlines of over 118 countries, including Sri Lanka. They offer a wide variety of ecosystem services, provide a sheltered habitat for many species of animals, and are vital allies in the fight against climate change.

Mangroves are protectors of the coast and of the people that live around them. They form a green barrier that can hold off coastal erosion, storm surges, and even tsunamis. They are also blue lungs for the planet, storing up to five times as much carbon as other types of forests; this carbon is bound underwater and is not released after the trees die. Mangroves live in conditions that other trees cannot survive, and their extensive root systems create a unique environment for fish, birds, reptiles, amphibians, crustaceans, and many other animals and plants. Mangrove forests offer sheltered habitats and nurseries for countless species of fish, algae, seagrass, corals, oysters, barnacles, mollusks, shrimps, and crabs, as well as a hospitable environment for birds, bats, insects, water monitors, fishing cats, crocodiles, and monkeys. They are sources of wood, fiber, charcoal, and ingredients for cosmetics, perfumes, pharmaceuticals, and tanneries. One third of Sri Lanka's population lives in coastal areas, and many of them depend on mangroves for their livelihoods in fisheries. Taken together, mangroves provide essential services for both climate change mitigation and adaptation, but at the same time, they are under threat.

In the last three decades, more than 50% of Sri Lanka's mangroves have been destroyed due to prawn farming, hotel development, settlements, logging, tourism, agriculture, and pollution. It is now illegal to fell mangroves, and Sri Lanka has become the world's first country to protect the entirety of its mangrove forests, but immense damage has already been done. The Sri Lankan Forest Department states the extent of mangroves at 15,670 hectares based on a 2010 survey, while IUCN estimates put mangrove cover at around 12,000 hectares. The majority of mangroves are concentrated in the districts of Puttalam, Jaffna, Trincomalee, and Batticaloa. There are over 20 true mangrove species native to the island, and a multitude of animals and plants that depend on mangrove ecosystems.

Restoring and replanting mangroves is of great importance for Sri Lanka's future. They will keep the island's coastal communities protected in the face of sea level rise and extreme weather events. They will continue to host a wealth of wildlife and natural resources, to provide livelihoods, and to offer ecosystem services. Furthermore, they will mitigate climate change by binding carbon and releasing oxygen into the atmosphere.

The devastating 2004 Indian Ocean tsunami opened the eyes of many to the vulnerability of mangrove-less coastlines and set in motion numerous reforestation projects. However, many of these projects were donor-driven and focused on the rapid planting of trees for coastal protection, often without proper technical knowledge or community involvement. An assessment of these efforts shows that out of 1,000-1,200 hectares of previous restoration plantings, only 200-220 hectares survived. In 9 out of 23 planting sites, all the seedlings died due to inappropriate locations, unsuitable species, and a lack of post-planting monitoring and care.
These experiences have changed the approach to mangrove conservation over the last years. Successful projects, for example those by the Small Fishers Federation of Sri Lanka (which has a Mangrove Re-Plantation Advisory Board) or the University of Ruhuna, all use an inclusive approach that relies on local knowledge and sustainable cooperation with coastal communities.

A recent mangrove restoration project in this vein is spearheaded by SLYCAN Trust with the support of Drowning Islands, in collaboration with the Marine Environment Protection Authority (MEPA), in the North of Sri Lanka, where large areas of mangroves have been destroyed due to industrial activity and the civil war. Restoring mangroves here will not only help the ecosystem but also contribute to peacebuilding and reconciliation efforts in coastal communities. In 2018, SLYCAN Trust and MEPA started the "Blue-Green Protectors Project" with the support of Drowning Islands, and brought together local stakeholders, government agencies, university students, and youth to plant mangroves at multiple sites near Jaffna and Mannar and conduct environmental workshops. The project combines local knowledge from farmers with the technical expertise of MEPA, the university, and the government, while the engagement of youth points the way to the future.

Mangrove conservation cannot happen without local communities, and it needs to take developmental problems and poverty, which are often exacerbated by climate change, into account. By building the capacities of local communities and involving them in the process, restoration efforts like the Blue-Green Protectors Project can create sustainable livelihoods and allow for long-term ecosystem protection.

Dennis Mombauer currently lives in Colombo as a freelance writer and researcher on climate change and education. He focuses on ecosystem-based adaptation and sustainable urban development as well as on autism spectrum disorder in the field of education. Besides articles and research, he has published numerous works of fiction in German and English.
The DESY accelerator facility in Hamburg, Germany, stretches for miles to host particles making kilometer-long laps at almost the speed of light. Now researchers have shrunk such a facility to the size of a computer chip. A University of Michigan team, in collaboration with Purdue University, created a new device that still accommodates speed along circular paths, but for producing lower light frequencies in the terahertz range, with applications such as identifying counterfeit dollar bills or distinguishing between cancerous and healthy tissue.

"In order to get light to curve, you have to sculpt every piece of the light beam to a particular intensity and phase, and now we can do this in an extremely surgical way," said Roberto Merlin, the University of Michigan's Peter A. Franken Collegiate Professor of Physics. The work is published in the journal Science. Ultimately, this device could be conveniently adapted for a computer chip.

"The more terahertz sources we have, the better. This new source is also exceptionally more efficient, let alone that it's a massive system created at the millimeter scale," said Vlad Shalaev, Purdue's Bob and Anne Burnett Distinguished Professor of Electrical and Computer Engineering.

The device that the Michigan and Purdue researchers built generates so-called "synchrotron" radiation, which is electromagnetic energy given off by charged particles, such as electrons and ions, that are moving close to the speed of light when magnetic fields bend their paths. Several facilities around the world, like DESY, generate synchrotron radiation to study a broad range of problems from biology to materials science. But past efforts to bend light to follow a circular path have come in the form of lenses or spatial light modulators too bulky for on-chip technology.

A team led by Merlin and Meredith Henstridge, now a postdoctoral researcher at the Max Planck Institute for the Structure and Dynamics of Matter, replaced these bulkier forms with about 10 million tiny antennae printed on a lithium tantalate crystal, called a "metasurface," designed by the Michigan team of Anthony Grbic and built by Purdue researchers. The researchers used a laser to produce a pulse of visible light that lasts for one trillionth of a second. The array of antennae causes the light pulse to accelerate along a curved trajectory inside the crystal. Instead of a charged particle spiraling for kilometers on end, the light pulse displaced electrons from their equilibrium positions to create "dipole moments." These dipole moments accelerated along the curved trajectory of the light pulse, resulting in the much more efficient emission of synchrotron radiation in the terahertz range.

"This isn't being built for a computer chip yet, but this work demonstrates that synchrotron radiation could eventually help develop on-chip terahertz sources," Shalaev said. The research was supported by the National Science Foundation (grant DMR-1120923) and the Air Force Office of Scientific Research (grant FA9550-14-1-0389).
One of the first things many of us look at when buying a new phone is battery life. A low-battery notification sends us into a panic. Losing your charger is a disaster. So, imagine a phone that would run without a battery. It seems like a crazy idea, but it is actually being developed in a lab at the University of Washington, Seattle.

Vamsi Talla is a research associate who has been working on a prototype cell phone in the lab of Joshua Smith, a professor of computer science and electrical engineering at UW. The aim is to develop a phone that can still make calls and texts even when the battery runs out. To do so, a rethink of how phones operate was required, and the phone would need to rely on energy that comes from sources in the environment rather than the battery. Solar panels or photodiodes can turn light into electricity, and an antenna can convert radio-frequency TV and Wi-Fi broadcasts into energy. Those two sources can generate a small number of microwatts, which is nowhere near the 800 milliwatts required to make a simple call.

The lab first had to develop a technique called backscatter to communicate by reflecting incoming radio waves. Collecting enough power to convert human speech into a digital signal is problematic, and analogue technology is more power-efficient. The phone can use a digital signal to dial numbers, but using backscatter for voice calls means it is utilising analogue technology.

A battery-free phone for everyday use is still a long way off. The prototype has a touch-sensitive number pad and a tiny red LED. To operate as a touchscreen phone, it would need over 100,000 times as much power as Talla's phone needs. It functions more like a walkie-talkie than a phone, with buttons being pressed in order to talk to the caller, and a lot of static. The team is working on a next-generation device with an E-Ink display and possibly a camera.

To read more about this, visit the story's page on Wired.
Communication around concepts (e.g., content we want students to learn) should be more than just a teacher articulating what the learning goals are. How many times have we asked students to do an activity without engaging them in a way that makes the activity feel relevant and meaningful? It’s important that students can explain why they are doing a particular activity, how it’s related to other subject areas, and, most importantly, other areas of their daily lives. When asking students to discuss concepts, we must go further than telling them to “share your answer with a shoulder partner.” Their conversations should be built on a prompt that’s engaging, instructionally challenging, highly cognitive, and conducive to multiple entry points and solution pathways. Prompts to initiate discourse should be posed in ways that invite wonder, speculation and exploration. Student discourse doesn’t happen just by telling students to get in a group and work together. One way to turn a prompt into an engaging discourse is to have your students create a video reflection. In this learning experience, students engage by questioning the reasoning of their peers. Strategies that work well in this collaborative environment include: wait time and think time, turn-and-talk, think-pair-share, think-write-pair-share, and one of my favorites, “think-pair-on-air.” Instead of sharing their conclusions and thinking with the class, students can create a video to share on the class YouTube channel. Allow your students to find a space to record their video reflection, whether it’s at their desk or a designated space for video creation. If students need alone time, let them record at home or find a supervised place at school where they can spread out. Based on the prompt, teachers should encourage students to talk about how or why they did what they did, or what they believe. Most importantly, the classroom culture must support curiosity and sense-making, which is demonstrated in the questions students ask in their reflection videos and to one another. Once the videos are shared with the class, it’s important that students build on each other’s thinking and generate arguments based on the videos they viewed. Teachers can ask for justification and encourage students to question and extend their own thinking, even after the video creation process. Student discourse plays a crucial role in developing higher-level cognitive thinking. This student discourse video experience provides students with multiple opportunities to share, compare, contrast, reflect, revise and refine. The “Think-Pair-On-Air” strategy allows students to process their own thoughts before sharing with others, allowing them to organize their thinking in creative and meaningful ways. The next time you ask your students to share with a partner, try this activity for a more rigorous, engaging and meaningful experience. For more classroom inspiration, be sure to check out Dr. Lang-Raad’s new book, “WeVideo Every Day: 40 Strategies to Deepen Learning in Any Class”.
- What is acrylamide? - What are the known health effects of acrylamide? - Does acrylamide increase the risk of cancer? - Is the acrylamide in food? - How does cooking produce acrylamide? - Are there other ways humans are exposed to acrylamide? - Are acrylamide levels regulated? - How do the levels of acrylamide in food compare to allowable levels set for drinking water? - Should I change my diet? - What research is needed? Acrylamide is a chemical compound that occurs as a solid crystal or in liquid solution. Its primary use is to make polyacrylamide and acrylamide copolymers. Trace amounts of the original (unreacted) acrylamide generally remain in these products. Polyacrylamide and acrylamide copolymers are used in many industrial processes, including production of paper, dyes, and plastics, and the treatment of drinking water, sewage and waste. They are also present in consumer products such as caulking, food packaging and some adhesives. Historically, exposure to high levels of acrylamide in the workplace has been shown to cause neurological damage. Acrylamide has not been shown to cause cancer in humans. However, the relationship between acrylamide and cancer has not been studied extensively in humans. Because it has been shown to cause cancer in laboratory rats when given in the animals' drinking water, both the Environmental Protection Agency (EPA) and the International Agency for Research on Cancer (IARC) in Lyon, France, consider acrylamide to be a probable human carcinogen. The National Toxicology Program's Ninth Report on Carcinogens states that acrylamide can be "reasonably anticipated to be a human carcinogen." Recent studies by research groups in Sweden, Switzerland, Norway, Britain and the United States have found acrylamide in certain foods. It has been determined that heating some foods to a temperature of 120 C (248 F) can produce acrylamide. Potato chips and french fries have been found to contain relatively high levels of acrylamide compared to other foods, with lower levels also present in bread and cereals. A joint World Health Organization and Food and Agriculture Organization (WHO/FAO) consultation in June 2002 concluded that the levels of acrylamide in foods pose a major concern and called for more research to determine what the risk is and what should be done. In September 2002, researchers discovered that the amino acid asparagine, which is present in many vegetables, with higher amounts in some varieties of potatoes, can form acrylamide when heated to high temperatures in the presence of certain sugars. High-heat cooking methods, such as frying, baking or broiling, are most likely to result in acrylamide formation. Boiling and microwaving appear less likely to form acrylamide. Longer cooking times increase the amount of acrylamide produced when the temperature is high enough. There are other ways humans are exposed to acrylamide, but exposure through food is one of the largest sources. Cigarette smoke may be a major source for some people. Exposure to acrylamide from other sources is likely to be significantly less than that from food or smoking, although scientists do not yet have a complete understanding of all the sources. There are some industrial and agricultural uses of acrylamide and polyacrylamide. However, regulations are in place to limit exposure in those settings. The EPA regulates acrylamide and has established acceptable levels for air and drinking water, at which exposure is considered to have no effect. 
These levels are set low enough to counteract any uncertainty arising from the lack of human data on the relationship between acrylamide and cancer. The FDA regulates the amount of residual acrylamide in a variety of materials that come in contact with food. There are currently no guidelines governing the presence of acrylamide in food itself.

In setting its level for acrylamide in drinking water, the EPA assumes people drink two liters, approximately four and a half pounds, of water a day. Since people do not eat four and a half pounds a day of foods like french fries or potato chips, a direct comparison of drinking water to these products without considering absolute food intake is inappropriate. Scientists also do not know whether the absorption in the gut of acrylamide from food is similar to that from water.

The simplest way to think about this is that the levels in food are, as the World Health Organization put it, a major concern. However, scientists still do not know whether the acrylamide that has been in food for thousands of years has any effect on health. The best advice at this early stage in our understanding of this complex issue is to follow established dietary guidelines and eat a healthy, balanced diet that is low in fat and rich in high-fiber grains, fruits, and vegetables.

The WHO/FAO consultation concluded that further research is necessary to determine how acrylamide is formed during the cooking process and whether acrylamide is present in foods other than those already tested. They also recommended population-based studies of those cancers that could potentially develop due to exposure to acrylamide.
New research on the two-dimensional (2D) material graphene has allowed researchers to create smart adaptive clothing which can lower the body temperature of the wearer in hot climates. A team of scientists from The University of Manchester's National Graphene Institute has created a prototype garment to demonstrate dynamic thermal radiation control within a piece of clothing by utilising the remarkable thermal properties and flexibility of graphene. The development also opens the door to new applications such as interactive infrared displays and covert infrared communication on textiles.

The human body radiates energy in the form of electromagnetic waves in the infrared spectrum (known as blackbody radiation). In a hot climate it is desirable to make use of the full extent of this infrared radiation to lower the body temperature, which can be achieved by using infrared-transparent textiles. In the opposite case, infrared-blocking covers are ideal to minimise energy loss from the body; emergency blankets are a common example used to deal with extreme cases of body temperature fluctuation. (A rough numerical illustration of this radiative effect appears after this article.) The collaborative team of scientists demonstrated the dynamic transition between these two opposite states by electrically tuning the infrared emissivity (the ability to radiate energy) of the graphene layers integrated onto textiles.

One-atom-thick graphene was first isolated and explored in 2004 at The University of Manchester. Its potential uses are vast, and research has already led to leaps forward in commercial products including batteries, mobile phones, sporting goods and automotive components. The new research, published today in the journal Nano Letters, demonstrates that the smart optical textile technology can change its thermal visibility. The technology uses graphene layers to control the thermal radiation from textile surfaces.

Professor Coskun Kocabas, who led the research, said: "Ability to control the thermal radiation is a key necessity for several critical applications such as temperature management of the body in excessive temperature climates. Thermal blankets are a common example used for this purpose. However, maintaining these functionalities as the surroundings heats up or cools down has been an outstanding challenge."

Prof Kocabas added: "The successful demonstration of the modulation of optical properties on different forms of textile can leverage the ubiquitous use of fibrous architectures and enable new technologies operating in the infrared and other regions of the electromagnetic spectrum for applications including textile displays, communication, adaptive space suits, and fashion".

This study built on the same group's previous research using graphene to create thermal camouflage which would fool infrared cameras. The new technology can also be integrated into existing mass-manufactured textile materials such as cotton. To demonstrate, the team developed a prototype product within a t-shirt allowing the wearer to project coded messages invisible to the naked eye but readable by infrared cameras.

"We believe that our results are timely showing the possibility of turning the exceptional optical properties of graphene into novel enabling technologies. The demonstrated capabilities cannot be achieved with conventional materials."

"The next step for this area of research is to address the need for dynamic thermal management of earth-orbiting satellites. Satellites in orbit experience excesses of temperature when they face the sun, and they freeze in the earth's shadow.
Our technology could enable dynamic thermal management of satellites by controlling the thermal radiation and regulating the satellite temperature on demand," said Kocabas.

Professor Sir Kostya Novoselov was also involved in the research: "This is a beautiful effect, intrinsically rooted in the unique band structure of graphene. It is really exciting to see that such effects give rise to the high-tech applications," he said.

Advanced materials is one of The University of Manchester's research beacons - examples of pioneering discoveries, interdisciplinary collaboration and cross-sector partnerships that are tackling some of the biggest questions facing the planet. #ResearchBeacons
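To illustrate the emissivity point made above numerically (a rough, assumption-laden sketch, not taken from the study: grey-body Stefan-Boltzmann radiation, skin at about 34 °C, surroundings at 20 °C, an assumed 1 m² of radiating fabric, and illustrative emissivity values for the two graphene states):

```python
# Rough illustration of how emissivity controls radiative heat loss (assumed values throughout).
SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W / (m^2 K^4)
AREA_M2 = 1.0        # assumed radiating area of the garment
T_SKIN_K = 307.0     # ~34 degrees C
T_AMBIENT_K = 293.0  # ~20 degrees C

def net_radiated_power_w(emissivity: float) -> float:
    """Net grey-body radiative heat loss from the covered skin to the surroundings."""
    return emissivity * SIGMA * AREA_M2 * (T_SKIN_K**4 - T_AMBIENT_K**4)

for label, emissivity in [("IR-blocking state (low emissivity, e = 0.1)", 0.1),
                          ("IR-emitting state (high emissivity, e = 0.8)", 0.8)]:
    print(f"{label}: ~{net_radiated_power_w(emissivity):.0f} W")
# IR-blocking state (low emissivity, e = 0.1)  -> ~ 9 W of radiative loss
# IR-emitting state (high emissivity, e = 0.8) -> ~69 W of radiative loss
```

Switching a layer between low- and high-emissivity states therefore changes the radiative channel by tens of watts under these assumptions, which is the kind of effect an adaptive garment can exploit.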
Tonsillitis is an inflammation of the throat glands known as the tonsils. The tonsils are a pair of soft tissue masses located at the rear of the mouth, one on each side of the throat. The tonsils are part of the lymphatic system, which helps to fight infections. Tonsillitis occurs when bacterial or viral organisms cause inflammation of the tissue. Most of the time, tonsillitis is caused by a viral infection. Tonsillitis can occur at any age but is especially common in children. The condition can occur occasionally or recur frequently. Tonsillitis is transmitted most commonly from one person to another by social contact, such as droplets in the air from sneezing.

Tonsillitis can be caused by either viral or bacterial infection. Most of the time, it is caused by a viral infection such as adenoviruses, influenza virus, enteroviruses and parainfluenza viruses. Bacterial tonsillitis can be caused by Streptococcus pyogenes, the organism that causes strep throat.

Symptoms of tonsillitis include:
- Pain in the throat
- Red, swollen tonsils, sometimes with a white or yellow coating on the tonsils
- Hoarseness or loss of voice
- Pain and difficulty with swallowing
- Nausea, vomiting and abdominal pain
- Loss of appetite
- Breathing through the mouth
- Redness in the eyes and ear pain
- Jaw and neck tenderness (due to swollen lymph nodes)

Diagnosis of tonsillitis is based on a medical history and a physical examination. Your doctor may use a swab to take a sample from the back of your throat. This sample may be used for a rapid strep test or a throat culture. In some cases, your doctor may do a blood test to find out what is causing your infection. For example, a blood test can check for mononucleosis. If the strep throat lab test is negative, a complete blood count (CBC) may be needed to help determine the cause of tonsillitis.

Role of Homeopathy in Tonsillitis and Adenoiditis

Tonsillitis and adenoiditis are very common health issues in children. Tonsillitis is a big health issue which in some cases disturbs the child and his or her whole family. When a child gets frequent attacks of tonsillitis and adenoiditis, surgery is often advised, but it is not a definite solution to the problem. Homeopathy offers a very safe and effective way to reduce enlarged adenoids and tonsils and can save the little ones from the surgeon's knife. Removing the adenoids or tonsils is just like removing the guards who often get attacked by infections while safeguarding us, which is not a wise decision. Homeopathic philosophy has always stood by the view that whenever possible the tonsils and adenoids should be saved, as they are important defense tissues of our body and provide children with much-required immunity. Dr Choudhary has vast experience in treating thousands of cases of tonsillitis and adenoiditis, and he strongly suggests homeopathy for tonsillitis and adenoiditis.