Chirality (Greek handedness, derived from the word stem χειρ~, ch[e]ir~ - hand~) is an asymmetry property important in several branches of science. An object or a system is called chiral if it differs from its mirror image. Such objects come in two forms, which are mirror images of each other, and these pairs of mirror-image objects are called enantiomorphs (Greek: opposite forms) or, when referring to molecules, enantiomers. A non-chiral object is called achiral (sometimes also amphichiral).

In chemistry, a molecule is chiral if it is not superimposable on its mirror image regardless of how it is contorted. Your hands are also chiral - mirror images of one another and non-superimposable - and chiral molecules are often described as being 'left-handed' or 'right-handed'. The study of chirality falls in the domain of stereochemistry. The two non-superimposable, mirror-image forms of chiral molecules are referred to as enantiomers. Chiral compounds exhibit optical activity, so enantiomers are also sometimes called optical isomers. The two enantiomers of such compounds may be classified as levorotatory or dextrorotatory depending on whether they rotate plane-polarised light in a left- or right-handed manner, respectively. A 50/50 mixture of the two enantiomers of a chiral compound is called a racemic mixture and does not exhibit optical activity. Chiral molecules are sometimes referred to as being "dissymmetric"; chirality and dissymmetry are one and the same.

In more technical terms, the symmetry of a molecule (or any other object) determines whether it is chiral or not. A molecule is achiral (that is, not chiral) if and only if it has an axis of improper rotation, that is, an n-fold rotation (rotation by 360°/n) followed by a reflection in the plane perpendicular to this axis which maps the molecule onto itself. Thus a molecule is chiral if and only if it lacks an improper rotation axis. Chiral molecules are not necessarily asymmetric (i.e. without symmetry), because they can have other types of symmetry, for example rotational symmetry. However, all naturally occurring amino acids (except glycine) and many sugars are asymmetric as well as chiral. Chirality may also be defined in mathematical terms.

Chirality is of critical importance in chemistry and unites the traditionally defined subdisciplines of inorganic chemistry, organic chemistry and physical chemistry. Many biologically active molecules are chiral, including the naturally occurring amino acids (the building blocks of proteins) and vitamins. Interestingly, these compounds are homochiral, that is, all of the same chirality. The origin of homochirality in the biological world is the subject of vigorous debate. Many coordination compounds are also chiral, for example the well-known [Ru(2,2'-bipyridine)3]2+ complex, in which the bipyridine ligands adopt a propeller-like arrangement.

Enzymes, which are themselves always chiral, often distinguish between the two enantiomers of a chiral substrate. This can be visualised in everyday terms by imagining the enzymes to have three-dimensional, glove-shaped cavities which bind these substrates. If this "glove" is right-handed, then right-handed molecules will fit inside snugly and thus be bound tightly. Left-handed molecules, on the other hand, won't fit well - just like putting your left hand into a right-handed glove.
Although this is an oversimplification of the recognition process (enzyme cavities are not really "glove-shaped"), it is a useful illustration of a more general point: chiral objects have different interactions with the two enantiomers of other chiral objects. Other biological processes may be triggered by only one of the two possible enantiomers of a chiral molecule, often being unresponsive to the other enantiomer. For example, S-carvone ("left-handed") is the flavor of caraway, while R-carvone ("right-handed") is the flavor of spearmint. Many chiral drugs must be made with high enantiomeric purity due to toxic activity of the 'wrong' enantiomer. An example of this is thalidomide, which is racemic — that is, it contains both left- and right-handed isomers in equal amounts. One enantiomer is effective against morning sickness, and the other is teratogenic. It should be noted that the enantiomers are converted into each other in vivo: if a human is given D-thalidomide or L-thalidomide, both isomers can be found in the serum. Hence, administering only one enantiomer will not prevent the teratogenic effect in humans.

Most commonly, chiral molecules have point chirality, which centres on a single asymmetric atom (usually a carbon atom). This is the case for chiral amino acids, where the alpha carbon atom is the stereogenic centre, having point chirality. A molecule can have multiple chiral centres without being chiral overall if there is a symmetry element (mirror plane or inversion centre) which relates those chiral centres. Such compounds are referred to as meso compounds. It is also possible for a molecule to be chiral without any specific chiral centres. Examples include 1,1'-bi-2-naphthol (BINOL) and 1,3-dichloroallene, which have axial chirality. The [Ru(2,2'-bipyridine)3]2+ complex above is an example of a chiral molecule that has high symmetry. It belongs to the symmetry point group D3, meaning it has one three-fold rotational symmetry axis and three perpendicular two-fold axes. In this case, the Ru atom may be regarded as a stereogenic centre, with the complex having point chirality.

One must make a clear distinction between conformation and configuration when discussing chirality in a molecular context. Conformations are temporary positions that atoms in a molecule can assume as a result of bond rotation, bending, or stretching, as long as no bonds are broken. Configurations are structures of a molecule which are assumed not to be interconvertible under ambient conditions. Enantiomers, and other optically active isomers such as diastereomers, are examples of configurational isomers.

Optical isomerism is a form of isomerism (specifically stereoisomerism) whereby the two different isomers are the same in every way except for being non-superposable mirror images of each other. Optical isomers are known as chiral molecules (pronounced "ky-ral"). What are optical isomers? The (-)-form of an optical isomer rotates the plane of polarization of a beam of polarized light that passes through a quantity of the material in solution counterclockwise; the (+)-form rotates it clockwise. It is due to this property that optical isomerism was discovered, and from which it derives the name "optical". The phenomenon was famously investigated by Louis Pasteur, who in 1848 separated the two enantiomers of racemic acid (tartaric acid). The study of optical isomerism now forms part of stereochemistry.
Optical isomers are often called stereoisomers (in fact, stereoisomers constitute a more general group, since stereoisomerism need not imply optical activity). Two molecules which differ only in their relative stereochemistry are said to be enantiomers of each other. A mixture of equal amounts of both enantiomers is said to be a racemic mixture. This form of isomerism can arise when an atom (usually carbon) is surrounded by four different functional groups: swapping two of the groups gives rise to two different molecules - mirror images of each other.

What is an enantiomer? In chemistry, two stereoisomers are said to be enantiomers if they are mirror images of each other. Much as a left and right hand are different but one is the mirror image of the other, enantiomers are stereoisomers whose molecules are non-superposable mirror images of each other. Enantiomers have - when present in a symmetric environment - identical chemical and physical properties except for their ability to rotate plane-polarized light by equal amounts but in opposite directions. A solution of equal parts of an optically active isomer and its enantiomer is known as a racemic solution and has a net rotation of plane-polarized light of zero. In a non-symmetric environment, such as a biological environment, enantiomers may react at different speeds with other substances. This is the basis for chiral synthesis.

There are several conventions used for naming chiral compounds, all displayed as a prefix before the chemical name of the substance: (+)- vs. (-)-, D- vs. L-, and (R)- vs. (S)-. The (+)- vs. (-)- convention is based on the substance's ability to rotate polarized light; the other two conventions are based on the actual geometry of each enantiomer. In nature, many chiral substances are only produced in one optical form, while (most) man-made chiral substances are racemic mixtures. The stereochemistry of enantiomers is of great importance nowadays: the Food and Drug Administration (FDA) of the United States recently recommended that drug molecules having stereocentres be given to patients only in the active enantiomeric form and not as a racemic mixture.

Any non-racemic chiral substance is called scalemic. A chiral substance is enantiopure or homochiral when only one of the two possible enantiomers is present. A chiral substance is enantioenriched or heterochiral when an excess of one enantiomer is present but not to the exclusion of the other. Enantiomeric excess (ee) is a measure of how much of one enantiomer is present compared to the other. For example, in a sample with 40% ee in R, the remaining 60% is racemic, containing 30% R and 30% S, so the total amount of R is 70%.
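To make the enantiomeric excess arithmetic above concrete, here is a minimal Python sketch (the function names are illustrative, not from any standard chemistry library) that converts between enantiomer percentages and ee:

```python
def enantiomeric_excess(pct_r, pct_s):
    """Enantiomeric excess (ee, %) from the percentages of the R and S enantiomers."""
    return 100.0 * abs(pct_r - pct_s) / (pct_r + pct_s)

def composition_from_ee(ee_percent):
    """Percentages of the major and minor enantiomer for a given ee.
    The portion not in excess is racemic, i.e. split 50/50."""
    major = 50.0 + ee_percent / 2.0
    minor = 50.0 - ee_percent / 2.0
    return major, minor

# The worked example from the text: a sample with 40% ee in R
print(composition_from_ee(40.0))        # (70.0, 30.0) -> 70% R, 30% S
print(enantiomeric_excess(70.0, 30.0))  # 40.0
```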
What is Optical Rotation? When polarized light is passed through a substance containing chiral molecules (or nonchiral molecules arranged asymmetrically), the direction of polarization can be changed. This phenomenon is called optical rotation or optical activity. Polarized light is usually understood to be linearly polarized. The rotation of the orientation of linearly polarized light was observed in the early 1800s (Jean-Baptiste Biot was one of the early investigators), before the nature of molecules was understood. Simple polarimeters have been used since this time to measure the concentrations of simple sugars, such as glucose, in solution. In fact, one name for glucose, dextrose, refers to the fact that it causes linearly polarized light to rotate to the right or dexter side. Similarly, levulose, more commonly known as fructose, causes the plane of polarization to rotate to the left. Fructose is even more strongly levorotatory than glucose is dextrorotatory. Invert sugar, formed by hydrolysing sucrose into glucose and fructose, gets its name from the fact that the conversion causes the direction of rotation to "invert" from right to left.

The degree of rotation depends on the color of the light (the yellow sodium D line near 589 nm wavelength is commonly used), the optical path length, the specific rotation (a characteristic of the material), and the concentration of the material. For a pure substance in solution, if the color and path length are fixed and the specific rotation is known, the degree of rotation can be used to determine the concentration. The polarimeter is therefore a tool of great importance to those who trade in or use sugar syrups in bulk.

The variation in rotation with the wavelength of the light is called optical rotatory dispersion (ORD). ORD spectra and circular dichroism spectra are related through the Kramers-Kronig relations: complete knowledge of one spectrum allows the calculation of the other. In the presence of magnetic fields, all molecules exhibit optical activity. A magnetic field aligned in the direction of light propagating through a material will cause rotation of the plane of linear polarization. This Faraday effect was one of the first discoveries of the relationship between light and electromagnetism.
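As a rough numerical illustration of that relationship (Biot's law: observed rotation = specific rotation × path length × concentration), here is a small Python sketch; the specific rotation used for glucose (+52.7° at the sodium D line) is an approximate literature value, and the function names are purely illustrative:

```python
def observed_rotation(specific_rotation, path_length_dm, conc_g_per_ml):
    """Observed rotation (degrees) for a solution in a polarimeter tube."""
    return specific_rotation * path_length_dm * conc_g_per_ml

def concentration_from_rotation(alpha_obs, specific_rotation, path_length_dm):
    """Invert Biot's law: concentration (g/mL) from a polarimeter reading."""
    return alpha_obs / (specific_rotation * path_length_dm)

# D-glucose (dextrose), [alpha]_D ~ +52.7 degrees (approximate literature value),
# in a standard 1 dm polarimeter tube at a concentration of 0.10 g/mL.
alpha = observed_rotation(52.7, 1.0, 0.10)
print(round(alpha, 2))                                # ~5.27 degrees
print(concentration_from_rotation(alpha, 52.7, 1.0))  # recovers 0.10 g/mL
```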
Maintaining Healthy Soils The success of a gardener can depend on maintaining a healthy population of earthworms and microorganisms in the garden soil. How does the earthworm have such an effect on plant growth? Earthworms and microorganisms transform fertilizer nutrients from one form to another, making the nutrients available to plants. They digest dead plants and animals, turning them into organic matter, which stores and releases nutrients for plant growth. Earthworms also aerate the soil, which helps preserve an air-water balance important for plant growth. How can a healthy soil be maintained? One way is to avoid the use of pesticides on a lawn except when necessary. These may kill earthworms and microorganisms in the soil as well as the disease organisms. Instead of routinely using a lawn pesticide, use one only when a problem arises and only after natural treatments have been considered. Treat problem spots, if possible, instead of treating the entire lawn. Composting yard wastes and recycling organic matter back into soil supports beneficial microorganisms and earthworms. Add compost to annual flower and vegetable gardens, and recycle grass clippings into the lawn. Add organic matter to the soil and adopt care practices that preserve soil health. You will be amply rewarded. For more information, see the following Colorado State University Extension fact sheet(s). - Organic Fertilizers - Vegetable garden: Soil Management and Fertilization - Choosing a Soil Amendment - Composting Yard Waste - Lawn Care - Xeriscaping: Creative Landscaping
Binary and IP Address Basics of Subnetting The process of learning how to subnet IP addresses begins with understanding binary numbers and decimal conversions along with the basic structure of IPv4 addresses. The process of subnetting is both a mathematical process and a network design process. Mathematics drives how subnets are calculated, identified, and assigned. The network design determines how many subnets are needed and how many hosts an individual subnet needs to support based on the requirements of the organization. This paper focuses on the mathematics of binary numbering and IP address structure. It covers the following topics: 1. Construct and representation of an IPv4 address. 2. The binary numbering system. 3. The process to convert a decimal number to a binary number. 4. The process to convert a binary number to a decimal number. 5. Fundamental aspects of an IPv4 address. Note: Throughout this document, the term IP address refers to an IPv4 address. This document does not include IPv6. IP Address Construct and Representation An IP address is a thirty-two-bit binary number. The thirty-two bits are separated into four groups of eight bits called octets. However, an IP address is represented as a dotted decimal number (for example: 188.8.131.52). Since an IP address is a binary number represented in dotted decimal format, an examination of the binary numbering system is needed. The Binary Numbering System Numbering systems have a base, which indicates how many unique digits they have. For example, humans use the decimal numbering system, which is a base ten numbering system. In the decimal numbering system there are only ten digits: zero through nine. All other numbers are created from these ten digits. The position of a digit determines its value. For example, the number 2,534 means the following: there are two thousands, five hundreds, three tens, and four ones. The table below shows each digit, its position, and the value of the position. Computers, routers, and switches use the binary numbering system. The binary numbering system is a base two numbering system, meaning there are only two digits: zero and one. All other numbers are created from these two digits. Just as in the decimal numbering system, the position of a digit determines its value. The table below shows the value of the first eight binary positions. For exponents above 7, double the previous place value. For example, 2^8 = 256, 2^9 = 512, 2^10 = 1,024, and so on. Decimal to Binary Conversion Since IP addresses are binary numbers represented in dotted decimal format, it is often necessary to convert a decimal number to a binary number. As an example, the decimal number 35 converts to the binary number 00100011. The steps to perform this conversion are below. 1. Determine your decimal number. In this scenario, it is 35. 2. Write out the base number and its exponents. Since an IP address uses groups of eight binary bits, eight base two exponents are listed. 3. Below the base number and its exponents, write the place values. For example, 2^0 has a value of 1, 2^2 has a value of 4, 2^3 has a value of 8, and so on. 4. Compare the value of the decimal number to the value of the highest bit position. If the value of the highest bit position is greater than the decimal number, place a 0 below the bit position. A 0 below the bit position means that position is not used.
However, if the value of the highest bit position is less than or equal to the decimal number, place a 1 below the bit position. A 1 below the bit position means that position is used. Then subtract the place value from the decimal number and repeat the comparison with the next lower bit position, continuing until all eight positions have been marked with a 1 or a 0 and nothing remains of the decimal number.
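The steps above translate directly into code. Here is a minimal Python sketch (function names are illustrative) of both conversion directions, plus a helper that shows a whole IPv4 address octet by octet in binary:

```python
def decimal_to_binary_octet(value):
    """Convert a decimal number 0-255 to an 8-bit binary string by comparing it
    against each place value from 128 down to 1, as in the steps above."""
    bits = ""
    for place in (128, 64, 32, 16, 8, 4, 2, 1):
        if value >= place:   # the place value fits: that position is used
            bits += "1"
            value -= place
        else:                # the place value does not fit: position not used
            bits += "0"
    return bits

def binary_octet_to_decimal(bits):
    """Convert an 8-bit binary string back to decimal by summing place values."""
    return sum(int(b) * 2 ** i for i, b in enumerate(reversed(bits)))

def ip_to_binary(ip):
    """Show a dotted-decimal IPv4 address octet by octet in binary."""
    return ".".join(decimal_to_binary_octet(int(octet)) for octet in ip.split("."))

print(decimal_to_binary_octet(35))           # 00100011, the worked example above
print(binary_octet_to_decimal("00100011"))   # 35
print(ip_to_binary("188.8.131.52"))          # 10111100.00001000.10000011.00110100
```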
By: Educate Inspire Change It wasn’t until Europeans took over North America that natives adopted the ideas of gender roles. For Native Americans, there was no set of rules that men and women had to abide by in order to be considered a “normal” member of their tribe. In fact, people who had both female and male characteristics were viewed as gifted by nature, and therefore able to see both sides of everything. According to Indian Country Today, all native communities acknowledged the following gender roles: “Female, male, Two Spirit female, Two Spirit male and Transgendered.” “Each tribe has their own specific term, but there was a need for a universal term that the general population could understand. The Navajo refer to Two Spirits as Nádleehí (one who is transformed), among the Lakota is Winkté (indicative of a male who has a compulsion to behave as a female), Niizh Manidoowag (two spirit) in Ojibwe, Hemaneh (half man, half woman) in Cheyenne, to name a few. As the purpose of “Two Spirit” is to be used as a universal term in the English language, it is not always translatable with the same meaning in Native languages. For example, in the Iroquois Cherokee language, there is no way to translate the term, but the Cherokee do have gender variance terms for ‘women who feel like men’ and vice versa.” The “Two Spirit” culture of Native Americans was one of the first things that Europeans worked to destroy and cover up. According to people like American artist George Catlin, the Two Spirit tradition had to be eradicated before it could go into history books. Catlin said the tradition: “..Must be extinguished before it can be more fully recorded.” However, it wasn’t only white Europeans who tried to hide any trace of native gender bending. According to Indian Country Today, “Spanish Catholic monks destroyed most of the Aztec codices to eradicate traditional Native beliefs and history, including those that told of the Two-Spirit tradition.” Through these efforts by Christians, Native Americans were forced to dress and act according to newly designated gender roles. One of the most celebrated Two Spirits in recorded history was a Crow warrior aptly named Finds Them And Kills Them. Osh-Tisch was born a male and married a female, but adorned himself in women’s clothing and lived daily life as a female. On June 17, 1876, Finds Them And Kills Them gained his reputation when he rescued a fellow tribesman during the Battle of Rosebud Creek - an act of fearless bravery. Below is a picture of Osh-Tisch and his wife. In Native American cultures, people were valued for their contributions to the tribe, rather than for masculinity or femininity. Parents did not assign gender roles to children either, and even children’s clothing tended to be gender neutral. There were no ideas or ideals about how a person should love; it was simply a natural act that occurred without judgment or hesitation. Without a negative stigma attached to being a Two Spirit, there were no inner-tribal incidents of retaliation or violence toward the chosen people simply due to the fact that individuals identified as the opposite or both genders. “The Two Spirit people in pre-contact Native America were highly revered and families that included them were considered lucky.
Indians believed that a person who was able to see the world through the eyes of both genders at the same time was a gift from The Creator.” Religious influences soon brought serious prejudice against “gender diversity,” and this forced once openly alternative or androgynous people into one of two choices: they could either live in hiding, in fear of being found out, or they could end their lives. Many did just that.
Warp drive is a technology that allows space travel by distorting the fabric of space to propel a vessel to velocities that exceed the speed of light. These velocities are referred to as warp factors. In a starship warp engine, high-energy plasma, created by a matter-antimatter reaction, is pumped through a series of "warp coils" cast from an artificial material called verterium cortenide. Verterium cortenide provides a bridge between electromagnetic and gravitational forces. By design, it has the property that when a high-energy plasma circulates through appropriately fashioned verterium cortenide castings, a "warp field" is generated. A single mathematical formulation unifies all four fundamental forces in nature: gravitation, electromagnetism, and the strong and weak nuclear forces. Since gravity is simply one manifestation of a single general force, it is possible to manipulate gravitational forces through the application of electromagnetic forces. This gives the capability to control the geometry of space with electromagnetic forces and the verterium cortenide warp coils are the medium through which electromagnetic forces are used to alter the geometry of space. Electromagnetic interactions between waves of superhot plasma and the verterium cortenide coils change the geometry of space surrounding the engine nacelles. In the process, a multilayered wave of warped space is created and as the space-time around the starship is moving, the starship cruises off on this distortion to its next destination at hundreds of times the speed of light (relative to normal space). Within the warp field, however, the starship does not exceed the local speed of light and, therefore, does not violate the principal tenet of special relativity.
The genus Arctomecon (also known as bear-poppy), contains three beautiful species restricted to southwestern North America. In this exceptional genus, dwarf bear-poppy (Arctomecon humilis) is the rarest — and perhaps most remarkable — species, due to its profusion of delicate white flowers and unique habitat. In many ways, the dwarf bear-poppy is a poster child for rare plant conservation. It is restricted to a small area in southwestern Utah close to the Arizona border, near the city of St. George. A stunning plant, it grows in a notably hostile habitat — it is not uncommon to find this species growing completely alone on gypsum soils on steep, exposed hillsides and ridges. Dwarf bear-poppy is topped by a mass of white flowers in late April and early May that often cover the entire plant. Each flower consists of four delicate, white petals surrounding myriad yellow stamens, all of which sit atop a plump green, round ovary. Unopened, graceful, green flower buds droop, waiting to open. When looking out over this landscape, backlit plants in full bloom seem to glow in the early morning or late afternoon light. A close inspection of dwarf bear-poppy’s hairy leaves with their three “claw-tipped” teeth at their apex sheds light on the origin of bear-poppy as a common name for these plants. Everything about dwarf bear-poppy exudes beauty. It is found in approximately ten locations in Washington County, Utah, and is listed as an endangered species by the U.S. Fish and Wildlife Service. Dwarf bear-poppy is threatened by development, mining, and off-highway vehicle (OHV) damage to its habitat. The dwarf bear-poppy grows in soil that forms a thick, crunchy, structurally-complex biological soil crust that is rich in gypsum. The habitat is easily damaged by hiking, grazing, and OHV use and is slow to recover. The habitat for the dwarf bear-poppy in the photo shown below has not had OHV activity for at least seven years (and likely longer) and still shows signs of heavy damage. In fact, full recovery of a soil crust can take up to 250 years, so a little damage can have long-lasting impacts. Therefore, conservation efforts have included fencing as well as the establishment of nature preserves, managed by The Nature Conservancy, to prevent further OHV damage. Dwarf bear-poppy is a strikingly beautiful, rare, and threatened species. Conservation and restoration efforts should ensure that it continues to be a botanical treasure for future generations to cherish.
Termites are insects. They're most common in tropical environments, although they can live just about anywhere as long as the ground doesn't completely freeze in the winter. Although many people think termites resemble ants, they're more closely related to cockroaches. All termite species are social, and termite colonies are divided into groups, or castes. Members of each caste have different jobs and different physical features: - Reproductives lay eggs. Most colonies have one pair of primary reproductives -- the king and queen. In some species, secondary and tertiary reproductives assist with the egg-laying. Only the king and queen have eyes. The rest of the termites are blind and navigate using scent and moisture trails. Kings and queens are usually darker than the rest of the termites in the colony. - Soldiers defend the nest from invaders, typically ants and termites from other colonies. In most species, soldiers have large heads and strong, pincer-like mandibles. Soldiers' heads are often darker than their bodies. Some species can secrete a toxic or sticky substance from their heads, which they use to kill or subdue intruders. - Workers are a milky or creamy color. They have smaller, saw-toothed mandibles, which allow them to take small bites of wood and carry building materials. As their name suggests, they do most of the work in the colony. They dig tunnels, gather food and care for young. They also feed the king, queen and soldiers, who are unable to feed themselves. Workers and soldiers are sterile. Termites' food comes from cellulose. Cellulose is a polymer, or a compound made of lots of identical molecules. It's a tough, resilient compound found in plants. Cellulose is what gives trees and shrubs their structure. The molecules that make up cellulose are glucose molecules -- as many as 3,000 of them. In other words, cellulose is made of sugar. However, unlike the sugars glucose, sucrose and lactose, people can't digest cellulose. The human digestive system uses special proteins called enzymes to break sugary polymers down into their simple glucose components. We then use glucose as a source of energy. For example, the enzyme sucrase breaks down sucrose, and lactase breaks down lactose. Our bodies do not produce cellulase, the enzyme that breaks down cellulose. Termites don't produce cellulase, either. Instead, they rely on microorganisms that live in a part of their digestive system called the hindgut. These organisms include bacteria and protozoans. They live in a symbiotic relationship with the termites -- neither the termites nor the microorganisms could live without the other. The types of organisms found in the hindgut divide termites into two rough categories. Higher termites have bacteria in their gut but no protozoans, while lower termites have bacteria and protozoans. You can also categorize termites by where they live. Subterranean termites build large nests underground. Many primitive termites form colonies in the wood they are consuming. A termite colony is essentially a multigenerational family. We'll look at termites' reproductive cycle and how it allows them to form large colonies next.
Perhaps the most important property of soil as related to plant nutrition is its hydrogen ion activity, or pH (the term "reaction" is also used, especially in older literature). Soil reaction is intimately associated with most soil-plant relations. Consequently, the determination of pH has become almost a routine matter in soil studies relating directly or indirectly to plant nutrition. Knowledge of soil acidity is useful in evaluating soils because pH exerts a very strong effect on the solubility and availability of many nutrient elements. It influences nutrient uptake and root growth, and it controls the presence or activity of many micro-organisms.

The pH scale is based on the ion product of pure water. Water dissociates very slightly:

H2O ⇌ H+ + OH-    Kw = [H+][OH-] = 10^-14 at 23°C

where Kw is the ion product for water and the brackets indicate the activity of each component in moles per liter of solution. Since [H+] = [OH-] in pure water at 23°C, each is equal to (10^-14)^(1/2) = 10^-7. The pH of a solution is defined as the negative log (base 10) of the H ion activity, or the log of the reciprocal of [H+]:

pH = -log10[H+] = log10(1/[H+])

For example, a hydrogen ion activity of 1/10,000 (or 10^-4) mol/L would equal pH 4. Water with equal numbers of H+ and OH- (hydroxyl) ions is neutral, at pH 7 at 23°C. pH values below 7 are increasingly acid, with an excess of H+ (hydrogen) ions. At 100°C the pH of pure water is 6.0 and at 0°C it is 7.5 (i.e., temperature affects pH).

Carbon dioxide dissolves in water to form carbonic acid. Otherwise-pure water in equilibrium with CO2 at its standard atmospheric concentration of 0.033% (330 ppmv) will have a pH of 5.72. CO2 concentration may be as high as 10% in poorly aerated soil pores; water in equilibrium with this air would have a pH of 4.45, although other components of soil solution can raise or lower it. Three soil pH ranges are particularly informative: a pH <4 indicates the presence of free acids, generally from oxidation of sulfides; a pH <5.5 suggests the likely occurrence of exchangeable Al; and a pH from 7.8-8.2 indicates the presence of CaCO3 (Thomas 1967).

The fundamental property of any acid in general (and therefore of a soil acid) is that of supplying protons, and therefore the H+ ion activity of a system is fundamentally its proton-supplying power. In an analogous fashion, the redox potential (Eh) of a system is its electron-supplying power. Hydrogen ions in solution are in equilibrium with those held on soil particle surfaces (i.e., on exchange sites). The soil pH as actually measured represents the active (in-solution) hydrogen ion concentration. The total acidity of the soil includes both active and "reserve" (or exchangeable) acidity. Thus, two soils with the same pH may have much different amounts of reserve acidity, and one may be more difficult to neutralize than another. Exchangeable aluminum also contributes to soil acidity. When an Al3+ ion is displaced from an exchange site into the soil solution, it hydrolyzes, splitting water and releasing a hydrogen ion to solution:

Al3+ + H2O = AlOH2+ + H+

Lime requirement is the amount of a base (in practice, lime or calcium carbonate) needed to neutralize enough of the exchangeable acidity to raise soil pH to a desired value that is more suitable for crop growth.
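As a quick numeric check of the definitions above, here is a minimal Python sketch (purely illustrative) converting between hydrogen ion activity and pH:

```python
import math

def ph_from_activity(h_activity_mol_per_l):
    """pH is the negative base-10 logarithm of the hydrogen ion activity."""
    return -math.log10(h_activity_mol_per_l)

def activity_from_ph(ph):
    """Hydrogen ion activity (mol/L) recovered from a pH value."""
    return 10.0 ** (-ph)

print(ph_from_activity(1e-4))   # 4.0 -> the 1/10,000 mol/L example in the text
print(ph_from_activity(1e-7))   # 7.0 -> neutral pure water
print(activity_from_ph(5.72))   # ~1.9e-6 mol/L, water in equilibrium with 330 ppmv CO2
```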
In most soils it has been noticed that pH tends to increase with depth. This is because the upper horizons receive maximum leaching by rainfall, and by dissolved carbonic acid and organic acids, which remove metal cations (e.g., Ca++, K+, Mg++) and replace them with H+ ions. Lower horizons are not so strongly leached and, in fact, in drier areas may accumulate calcium and other materials removed from the upper soil.

There are many factors that affect soil reaction as measured in the laboratory. The pH of many soils tends to increase as the sample is diluted with water. Such pH changes may be caused by variables such as carbon dioxide partial pressure, salt concentration, hydrolysis, and solubility of soil constituents. Various soil:water ratios have been proposed for pH determinations. These range from very dilute suspensions (1:10 soil:solution ratio) to soil pastes. In general, the pH of most soils increases with dilution, becoming roughly constant at about a soil:water ratio of 1:5. There is no standard procedure for measuring soil pH. Some of the details that vary from one laboratory to the next are: soil:solution ratio, use of a salt solution (e.g., 0.01 M CaCl2) rather than water, method of mixing, time of standing before reading, etc. Soil may be weighed, or measured as a volume (McLean 1982). Therefore, when reporting soil pH, it is essential to include at least a brief summary of the procedure followed.

The exact placement of the pH electrode in the sample may be important. When placed in the settled sediment of a suspension of a soil of appreciable cation exchange capacity (CEC), a lower pH is generally measured compared to the measurement obtained in the supernatant solution (this is called the suspension effect). However, the sediment pH can be lower than, equal to, or higher than that of the supernatant, depending on the soil and existing conditions. For example, if the soil has a net positive charge and more OH- than H+ ions are dissociated from the soil, the sediment may have a higher pH than the supernatant (Coleman and Thomas 1967).

Soil factors in the field that influence soil reaction include degree of base saturation, type of colloid, carbon dioxide partial pressure, oxidation potential, soluble salts, and so on. In addition to these factors, the measured pH may vary because of the manner in which the sample is handled in the laboratory before and during the determination. Acquaintance with these variables is necessary for intelligent measurement and interpretation of soil reaction.

pH can be determined using either colorimetric or electrometric methods. The choice of method depends upon the accuracy required, the equipment available, or convenience. Many organic dyes are sensitive to pH, the color of the dye changing more or less sharply over a narrow range of H-ion activity. Colorimetric methods tend to be slower and less precise, and the color change can be obscured by soil particles and organic matter; hence, they are used mostly in the field, where pH only needs to be approximated. The electrometric method involves a glass electrode that is sensitive to H+: there is an exchange of ions between solution (H+) and glass (Na+) (Westcott 1978). A reference electrode that produces a constant voltage is also required. The electrode pair produces an electromotive force (emf, or voltage) that is measured by a millivoltmeter.
The relation between emf and pH is governed by the Nernst equation:

E = E° + (2.303 RT / nF) × pH

where
E = emf produced by the electrode system
E° = a constant dependent on the electrodes used
R = gas constant
T = absolute temperature
n = number of electrons involved in the equilibrium (1 in this case)
F = Faraday constant (Willard et al., 1988, p. 675).

Note that temperature is a factor in the equation. At 25°C this equation simplifies to E = E° + 0.0591 pH, which means a change of 1 pH unit produces a change in emf of 59.1 mV at 25°C. This temperature dependence is important to remember when calibrating a pH meter.

Soil reaction classes

The following descriptive terms are used for specified ranges of soil pH (Soil Survey Division Staff, 1993):

Ultra acid: <3.5
Extremely acid: 3.5-4.4
Very strongly acid: 4.5-5.0
Strongly acid: 5.1-5.5
Moderately acid: 5.6-6.0
Slightly acid: 6.1-6.5
Neutral: 6.6-7.3
Slightly alkaline: 7.4-7.8
Moderately alkaline: 7.9-8.4
Strongly alkaline: 8.5-9.0
Very strongly alkaline: >9.0

References

Cole, C.V. 1957. Hydrogen and calcium relationships of calcareous soils. Soil Sci. 83:141-150.
Coleman, N.T. and G.W. Thomas. 1967. The basic chemistry of soil acidity. In R.W. Pearson and F. Adams (eds.) Soil acidity and liming. Agronomy 12:1-41. Am. Soc. of Agron., Inc., Madison, Wis.
McLean, E.O. 1982. Soil pH and lime requirement. In A.L. Page (ed.) Methods of Soil Analysis, Part 2: Chemical and Microbiological Properties. 2nd ed. Agronomy No. 9. pp. 199-224. American Society of Agronomy, Inc., and Soil Science Society of America, Inc., Madison, Wisconsin.
Soil Survey Division Staff. 1993. Soil Survey Manual. United States Department of Agriculture Handbook No. 18. Washington, DC. 437 pp.
Sposito, G. 1989. The Chemistry of Soils. New York: Oxford University Press. 277 pp.
Thomas, G.W. 1967. Problems encountered in soil testing methods. p. 37-54. In Soil testing and plant analysis, Part 1. Soil Sci. Soc. of Am. Spec. Pub. No. 2, Madison, Wis.
Westcott, C.C. 1978. pH Measurements. San Diego: Academic Press, Inc. 172 pp.
Willard, H.H., L.L. Merritt, J.A. Dean, and F.A. Settle. 1988. Instrumental Methods of Analysis. 7th ed. Wadsworth Publishing Co., Belmont, CA.

Procedure: Soil pH (Electrode Method)

Apparatus:
1. Sieved soil.
2. Plastic beakers (50 or 100 ml).
3. Glass stirring rod.
4. pH meter with combination electrode.

Reagents:
1. 2 M CaCl2.
2. pH buffer solutions -- 7, 4, 10 as needed.

Note: The actual soil:water ratio used, and the choice of water or salt solution, are up to the discretion of the analyst, and should be reported with your results. Determining pH in both water and 0.01 M CaCl2 gives you useful information with only a little more work. Another note: be aware that a small amount of fill solution from the reference electrode may leak into your sample. This solution may contain K+, Cl-, or Ag+, as well as mercury, a common preservative in commercial buffer solutions.

1. Weigh 10.0 g soil (air-dry or moist) into a plastic beaker. Add 50.0 ml DDW. Stir with the glass rod to mix; let stand 30 minutes, stirring occasionally.
2. Calibrate the pH meter at pH 7 and 4 (or 7 and 10 for alkaline soils).
3. Swirl the suspension, then carefully insert the combination pH electrode into it. Record the pH of the supernatant after 15 seconds of settling (or when the reading settles down). The observed pH will vary with where and when in the suspension you take the reading, so be consistent in your method.
4. Rinse the electrode with DW after each measurement; check the calibration periodically.
5. Add 0.25 ml of 2 M CaCl2 to the 1:5 suspension, to make a 0.01 M solution. Mix, let stand, and measure pH as above.
6. Optional: Saturated paste -- Weigh 20.0 g soil into a 50-ml beaker. Add small increments of water and mix thoroughly until a saturated paste is formed. This stage occurs when there is a smooth, shiny surface to the mixture, but no free water on top. Let stand 30 minutes and measure pH as above.

The Care and Feeding of pH Electrodes

Keep the bulb in water/solution as much as possible. Don't push the bulb into the bottom of the container, or scratch it. Rinse the electrode(s) thoroughly between sample/buffer measurements. After rinsing, blot dry; don't wipe, which will cause static charges to build up in the electrode. Check the level of fill solution (saturated KCl, or 4 M KCl/AgCl, depending on the type of electrode) in the reference electrode; it should be well above the level of the solution you are measuring, to provide sufficient hydrostatic head for a steady flow. Refill if necessary. There should be free KCl crystals in the bottom of the reference electrode (that's how you know it's a saturated solution); however, be sure the crystals are not plugging up the ceramic junction. Open the fill hole when using, to allow free flow of ions during measuring; close the fill hole when done, to minimize waste of fill solution. Store a combination electrode in 10:1 pH 4 buffer solution:saturated KCl (i.e., dilute the KCl solution with buffer). If using separate electrodes, store the glass (pH) electrode in pH 4 buffer and store the reference electrode in fill solution diluted by 10. Because fill solution is flowing out of the reference electrode, you contaminate a sample whenever you place a pH electrode in it, making it unusable for other measurements. In addition to KCl, the fill solution may contain slight amounts of buffer solution that diffuses into it through the porous junction. Commercial buffer solutions often contain mercury as a preservative.

Calibration

If all of your sample pHs are clustered closely around a single value, you can use a one-point calibration. However, in most cases you should do a two-point calibration, which will allow you to measure a range of pH. Buffers and samples must be at the same (preferably room) temperature. For the most precise calibration, use fresh buffer solution. Rinse the electrode thoroughly between each step.

1. Place the electrode in pH 7 buffer. Set to 7.00 with the CALIBRATE control. This control sets the intercept of the pH vs. voltage regression line.
2. Place the electrode in pH 4 (or 10) buffer. Set to 4.00 (or 10.00) with the TEMPERATURE control. This action sets the slope of the regression line.

Repeat steps 1 and 2 until both readings are accurate without changing the controls.
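To connect the Nernst relation to the two-point calibration just described, here is a minimal Python sketch (illustrative only, not tied to any particular meter; the millivolt readings are made up) that computes the theoretical electrode slope at a given temperature, fits slope and intercept from two buffer readings, and converts a sample reading to pH:

```python
def nernst_slope_mv_per_ph(temp_c=25.0):
    """Theoretical electrode slope, 2.303*R*T/F, in mV per pH unit (~59.2 mV at 25 C)."""
    R, F = 8.314, 96485.0
    return 2.303 * R * (temp_c + 273.15) / F * 1000.0

def two_point_calibration(mv_in_ph7, mv_in_ph4, ph1=7.0, ph2=4.0):
    """Slope and intercept of the mV-vs-pH line from two buffer readings."""
    slope = (mv_in_ph4 - mv_in_ph7) / (ph2 - ph1)
    intercept = mv_in_ph7 - slope * ph1
    return slope, intercept

def ph_from_mv(mv, slope, intercept):
    """Convert a sample millivolt reading to pH using the calibration line."""
    return (mv - intercept) / slope

# Hypothetical readings: 0 mV in the pH 7 buffer, +178 mV in the pH 4 buffer.
slope, intercept = two_point_calibration(0.0, 178.0)
print(round(slope, 1))                                # about -59.3 mV per pH unit
print(round(nernst_slope_mv_per_ph(25.0), 1))         # 59.2 mV, the theoretical magnitude
print(round(ph_from_mv(100.0, slope, intercept), 2))  # a 100 mV sample reads ~pH 5.31
```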
Lyme Disease Fact Sheet Revised April 2016 What is Lyme disease? Lyme disease is one of many tickborne diseases in Minnesota and the most common tickborne disease in Minnesota and the United States. The disease can cause a variety of symptoms that affect many different parts of the body. It is called Lyme disease because it was discovered in the Lyme, Connecticut area in 1975. How do people get Lyme disease? People can get Lyme disease through the bite of a blacklegged tick (deer tick) that is infected with the bacterium Borrelia burgdorferi. Not all blacklegged ticks carry these bacteria and not all people bitten by a blacklegged tick will get sick. The tick must be attached to a person for at least 24-48 hours before it can spread Lyme disease bacteria. Blacklegged ticks live on the ground in areas that are wooded or with lots of brush. The ticks search for hosts at or near ground level and grab onto a person or animal as they walk by. Ticks do not jump, fly, or fall from trees. In Minnesota, the months of April - July and September - October pose the greatest risk of being bitten by a blacklegged tick. Risk peaks in June every year. Blacklegged ticks are small; adults are about the size of a sesame seed and nymphs (young ticks) are about the size of a poppy seed. Due to their small size, a person may not know they have been bitten by a tick. What are the symptoms of Lyme disease? Early symptoms of Lyme disease usually appear within 30 days of a bite. It is common to have a red and itchy spot, up to the size of a quarter, right after being bitten by a tick. This is simply due to irritation from the tick’s saliva and is not a symptom of Lyme disease. However, contact your doctor if you notice any of the following symptoms: A rash that: - May look like a bull’s-eye, a red ring with a clear center that may grow to several inches in width - May not be itchy or painful - Not everyone gets or sees the rash and not all rashes look like a bull’s-eye Other early symptoms: - Fever and chills - Muscle and joint pain - Tired and weak If a person is not treated early, one or more of these symptoms may occur weeks or months later: multiple rashes, paralysis on one side of the face, weakness or numbness in the arms or legs, irregular heartbeat, persistent weakness and tiredness, or swelling in one or more joints. How is Lyme disease diagnosed? If a person suspects Lyme disease, they should contact a doctor immediately for diagnosis and treatment. The diagnosis of Lyme disease is based on: - History of exposure to blacklegged ticks or tick habitat - Physical exam (rash and other symptoms) - Blood tests, which may be performed to confirm the diagnosis How is Lyme disease treated? Lyme disease is treated with antibiotics. Treatment works best early in the disease. Lyme disease detected later is also treatable with antibiotics, but symptoms may take longer to go away, even after the antibiotics have killed the Lyme disease bacteria. In most cases, symptoms go away after treatment. It is possible to get Lyme disease more than once, so continue to protect yourself from tick bites and contact your doctor if you suspect you may have symptoms of Lyme disease. How can I reduce my risk? There is currently no human vaccine available for Lyme disease. Reducing exposure to ticks is the best defense against tickborne disease. Protect yourself from tick bites: - Know where ticks live and when they are active. Blacklegged ticks live in wooded or brushy areas.
In Minnesota, blacklegged tick activity is greatest from April - July and September - October. - Use a safe and effective tick repellent if you spend time in areas where ticks live. Follow the product label and reapply as directed. - Use DEET-based repellents (up to 30%) on skin or clothing. Do not use DEET on infants under two months of age. - Pre-treat clothing and gear with permethrin-based repellents to protect against tick bites for at least two weeks without reapplication. Do not apply permethrin to your skin. - Wear light-colored clothing to help you spot ticks more easily. Wear long-sleeved shirts and pants to cover exposed skin. - Talk with your veterinarian about safe and effective products you can use to protect your pet from ticks. Check for ticks at least once a day after spending time in areas where ticks live: - Inspect your entire body closely for ticks, especially hard-to-see areas such as the groin and armpits. - Remove ticks as soon as you find one. - Use tweezers and grasp the tick close to its mouth and pull the tick outward slowly and gently. Clean the area with soap and water. - Examine your gear and pets for ticks too. Manage areas where ticks live: - Keep lawns and trails mowed short. - Remove leaves and brush. - Create a landscape barrier of wood chips or rocks between mowed lawns and woods.
What is it used for? - Preventing dangerous blood clots (thromboembolism) in people with artificial heart valves. How does it work? Persantin tablets contain the active ingredient dipyridamole, which is a type of medicine known as an antiplatelet. It prevents blood cells called platelets from clumping together inside the blood vessels, and is sometimes referred to as a 'blood thinner'. It also dilates the blood vessels. (NB. Dipyridamole is also available without a brand name, ie as the generic medicine.) Platelets are the blood cells that start off the process of blood clotting. Blood clots normally only form to stop bleeding that has occurred as a result of injury to the tissues. The process is complicated and begins when platelets stick to the site of damage and clump together. They then produce chemicals that attract more platelets and clotting factors to the area, and eventually a solid clot is formed. This is the body’s natural way of repairing itself. Sometimes, however, a blood clot can form inside the blood vessels. This is known as a thrombus and can be dangerous because the clot may detach and travel in the bloodstream (thromboembolism). It may eventually get lodged in a blood vessel, thereby blocking the blood supply to a vital organ such as the brain. This can cause a stroke or mini-stroke (transient ischaemic attack). Some people have an increased tendency for blood clots to form within the blood vessels. This is usually due to a disturbance in the blood flow within the blood vessels. For example, fatty deposits on the walls of the blood vessels (atherosclerosis) can disrupt the blood flow, giving a tendency for platelets to clump together and start off the clotting process. People with heart valve disease who have had an artificial heart valve inserted are also at increased risk of blood clots, because platelets can stick to the artificial valve. This may lead to a blood clot forming on the valve, which could then detach and travel to the brain, causing a stroke. Dipyridamole is used to prevent platelets from forming blood clots. It works by blocking the action of an enzyme found in platelets called phosphodiesterase. Inside the platelets, phosphodiesterase normally breaks down a chemical called cyclic AMP. Cyclic AMP plays a key role in blood clotting. If the level of cyclic AMP in the platelets is high this prevents the platelets from clumping together. Dipyridamole causes the levels of cyclic AMP in the platelets to rise, because it stops phosphodiesterase from breaking it down. This means that dipyridamole stops the platelets from clumping together and causing a blood clot. Dipyridamole is used to prevent blood clots forming on artificial heart valves. It is used in combination with an anticoagulant medicine such as warfarin, which prevents blood clots in a different way. The combination of medicines reduces the chance of a clot forming on the valve and then detaching and travelling in the blood vessels. - Dipyridamole is sometimes given by injection during certain types of diagnostic tests to check if the heart is functioning properly. However, dipyridamole injection should not usually be given to people who are taking dipyridamole by mouth. If you are due to have any tests on your heart, you should make sure the doctor knows you are taking this medicine, as they may want you to stop taking it 24 hours before the test. Use with caution in - Severe coronary artery disease. - Angina not well controlled by medical treatment (unstable angina). 
- People who have recently had a heart attack. - Narrowing of the main artery coming from the heart (aortic stenosis). - Heart failure. - Low blood pressure (hypotension). - Abnormal muscle weakness (myasthenia gravis). - Blood clotting disorders. Not to be used in - Known sensitivity or allergy to any ingredient. - This medicine is not recommended for use in children. This medicine should not be used if you are allergic to one or any of its ingredients. Please inform your doctor or pharmacist if you have previously experienced such an allergy. If you feel you have experienced an allergic reaction, stop using this medicine and inform your doctor or pharmacist immediately. Pregnancy and breastfeeding Certain medicines should not be used during pregnancy or breastfeeding. However, other medicines may be safely used in pregnancy or breastfeeding providing the benefits to the mother outweigh the risks to the unborn baby. Always inform your doctor if you are pregnant or planning a pregnancy, before using any medicine. - This medicine is not known to be harmful if used during pregnancy. However, as with all medicines, it should be used with caution during pregnancy, and only if the expected benefit to the mother is greater than any possible risk to the developing baby. This is particularly important in the first trimester. Seek medical advice from your doctor. - This medicine may pass into breast milk in small amounts. It should not be used during breastfeeding unless considered essential by your doctor. Seek medical advice from your doctor. - Take this medication half to one hour before food. Medicines and their possible side effects can affect individual people in different ways. The following are some of the side effects that are known to be associated with this medicine. Just because a side effect is stated here does not mean that all people using this medicine will experience that or any side effect. - Nausea and vomiting. - Feeling faint. - Indigestion (dyspepsia). - Throbbing headache (normally disappears with long-term use). - Pain in the muscles (myalgia). - Hot flushes. - Faster than normal heart beat (tachycardia). - Low blood pressure (hypotension). - Temporary worsening of chest pain (angina) at the start of therapy. - Increased bleeding during or after surgery. - Allergic reactions such as skin rash, hives, narrowing of the airways (bronchospasm), or swelling of the lips, tongue and throat (angioedema). The side effects listed above may not include all of the side effects reported by the drug's manufacturer. For more information about any other possible risks associated with this medicine, please read the information provided with the medicine or consult your doctor or pharmacist. How can this medicine affect other medicines? It is important to tell your doctor or pharmacist what medicines you are already taking, including those bought without a prescription and herbal medicines, before you start treatment with this medicine. Similarly, check with your doctor or pharmacist before taking any new medicines while taking this one, to ensure that the combination is safe. Dipyridamole enhances the anti-blood-clotting effect of the following medicines: - anticoagulants, eg warfarin - other antiplatelet medicines, eg low-dose aspirin, clopidogrel. Antacids for indigestion and heartburn may reduce the absorption of dipyridamole from the gut. As this could make it less effective, antacids should preferably not be taken in the two hours before or after taking this medicine. 
Dipyridamole increases the effect of a medicine for irregular heart beats called adenosine. The dose of adenosine needed in people taking dipyridamole will be much lower than normal. Dipyridamole may decrease the effect of the anti-cancer medicine fludarabine. Other medicines containing the same active ingredient Dipyridamole tablets and suspension are also available without a brand name, ie as the generic medicine.
This is interesting because until now trace fossil diversity in the Ediacaran has been very limited, with only three recognised trace fossil types:

1) The most common form, composed of simple groove traces with levees, probably formed in the topmost 10 mm of the sediment. They are commonly preserved as negative epireliefs or negative hyporeliefs, indented into the bottom of the overlying beds. Sediment is commonly displaced to form marginal raised ridges. These were probably tubes, indicating organisms that may have been round. Some directional meandering is evident.

2) A form with fine ridges, arranged in fans and associated with Kimberella in a few instances, broadly analogous to mollusc radula-like grazing. Kimberella occurs at the apex of the fans and appears to have scraped bio-material from the sediment with a proboscis.

3) Resting traces, particularly Dickinsonia: evidence of serial mat "feeding" dissolution by creeping mat-like animals. These are outlines of organisms that have been impressed into algal mats covering the sediment, often in association with similar-sized body fossils. These are often found as positive epirelief - sticking out from the underside of the bed, as opposed to normal Dickinsonia body fossils, which are found as negative epirelief - an indentation into the bottom of the bed.

The latest finds from Newfoundland have been found on the top of green mudstone overlain by a volcanic tuff which has protected the traces. It should be noted that the preservation at Mistaken Point differs from most Ediacaran sites in that the fossils are preserved on the top of beds, under a volcanic tuff which blanketed the forms when alive and protected them. Most Ediacaran fossils occur on the base of coarse sandstone beds which smothered the organisms (see An Introduction to the Ediacaran Fauna). Over 70 straight traces, ranging from 1.5 to 17.2 cm in length and up to 13 mm in width, have been found. The surfaces of the traces are marked by regular crescentic internal divisions, formed by thin ridges of siltstone with a spacing of approx. 1 mm.

[Figure: A) Largest observed trail on bedding plane; B-D) close-up images of the crescentic internal divisions in A: B) distal end of trail (note pyrite crystals embedded in the ash surrounding the trail), C) central section of trail, D) proximal section of trail with terminal circular impression. Scale bars = 1 cm.]

Each trace typically bears marginal ridges, which the authors claim provide key evidence for movement of an object along the surface of the sediment, and can be used to distinguish trace fossils from abiogenic structures. At the far end of several specimens, a negative circular impression can also be seen, which the authors interpret as the mould of the trace maker itself. The authors interpret the trace as being made by a cnidarian-like organism similar to Urticina, a modern sea anemone.

[Figure: traces made by the modern sea anemone Urticina in marine aquaria. Note concave-forward hemispherical structures (at left) and positive marginal ridges (right). Scale bar = 3 cm.]

This is a big claim - that a cnidarian-grade organism was crawling around the Ediacaran. Of course a number of people have been claiming that this level of organisation was around then, but this would be an important step in supporting evidence. There are a few issues, however. These traces are extremely rare, and are not found in other locations. An explanation for that is the differing preservational styles, but even so, some similar finds would be expected. 
Also, a number of finds of tube-like remains showing the 'stitch and groove' pattern seen in the Mistaken Point forms have been interpreted as body fossils, so there is still some uncertainty here. Of course, finding a sea-anemone-like form at the end of one of these trails would be nice, but there is still a lot we need to understand about stitch-and-groove forms before we can say with any degree of certainty that they were made by cnidarian-grade organisms.
Dickinsonia trace: www.evolbiol.ru/fedonkin_metazoa.htm
Reference: Liu, A., McIlroy, D., & Brasier, M. (2010). First evidence for locomotion in the Ediacara biota from the 565 Ma Mistaken Point Formation, Newfoundland. Geology, 38(2), 123-126. DOI: 10.1130/G30368.1
A series of short posts about specific elements of teaching practice that I think are effective and make life interesting. Some are based on my own lessons and others are borrowed from lessons I’ve observed. This is a tried and tested method that scores well in the Hattie effect-size rankings. It’s a process with a great deal of potential. From my experience, it works best when students are asked to go beyond explaining something they’ve understood themselves; they are actually asked to teach it so that other students also understand. As we all know, when preparing to teach a set of concepts, it requires deeper understanding than a straight-forward explanation might; at the very least it demands a greater level of clarity in the way the explanation is communicated. For example, my son’s class in Year 4 had a lovely exercise to help them learn the language of giving instructions. They each had to teach their classmates a skill and write out a full script of everything they needed to say. My son taught them all how to play the ‘Smoke on the water’ guitar riff during the class group-guitar lesson. In working out the sequence of notes and how to explain each chord, he reinforced his own knowledge. The class enjoyed several weeks where they were taught some interesting things by their peers. I’ve seen this method applied to lots of different subjects and in my own lessons, I find this very useful. If I am teaching something that requires a more extended exposition, to help ensure students have understood it, I then ask them to prepare to teach it back in the next week or so. It wouldn’t work for them all to do it so I either select someone specific in advance or ask them all to prepare knowing that I’ll pick someone out to teach it back. Currently, my Year 13s are getting ready to teach back this bit of physics magic – one of my favourite derivations: It requires quite a lot of conceptual thought with lots of steps in the right order to lead to the final solution. By getting ready to explain this in depth to the whole class, the students will need to make sure they understand it in significant detail; it won’t be enough just to learn it by heart without understanding it. ‘Teaching it back’ is a cornerstone of the co-construction process I use with my Year 9s. I spend a bit of time with the leading team making sure they understand the concepts before they teach the class. Here Asad is explaining how a lux meter works and how students can use it to explore the phenomena of absorption and transmission. In this example, Trevor is explaining the ideas behind the pressure can demonstration: In this example, Kieret is setting out some questions relating to basic circuit theory leading on to a practical where students tested their own circuits, measuring currents at various positions. One of the most important aspects of this process is that students make mistakes; each mistake highlights an area where their conceptual understanding isn’t as deep as you might have imagined from some more routine classwork. In giving their explanation in teacher-mode, they reveal more about what they do and do not understand. This is your cue to intervene to clarify or challenge as necessary. Teaching it back, is one of the best ways to flush out misconceptions – in my experience. Occasionally, a student does the job so well, you feel you couldn’t have done it better yourself. Here was one:
When is our brain ready to learn anything new? Scientists might be closer to evolving a technique to find that out. That could help in dealing with students and also monitoring workers who have to stay alert. An MIT team led by Professor John Gabrieli has shown that activity in a specific part of the brain, known as the parahippocampal cortex (PHC), predicts how well people will remember a visual scene. Broadly they conclude that our memories work better when our brains are prepared to absorb new information The new study, published in the journal NeuroImage , found that when the PHC was very active before people were shown an image, they were less likely to remember it later. "When that area is busy, for some reason or another, it's less ready to learn something new," says Gabrieli, the Grover Hermann Professor of Health Sciences and Technology and Cognitive Neuroscience and a principal investigator at the McGovern Institute for Brain Research at MIT. The PHC, which has previously been linked to recollection of visual scenes, wraps around the hippocampus, a part of the brain critical for memory formation. However, this study is the first to investigate how PHC activity before a scene was presented would affect how well the scene was remembered. Lead author of the paper is Julie Yoo, a postdoc at the McGovern Institute. Subjects were shown 250 color photographs of indoor and outdoor scenes as they lay in a functional magnetic resonance imaging (fMRI) scanner. They were later shown 500 scenes — including the 250 they had already seen — as a test of their recollection of the first batch of images. The fMRI scans revealed that images were remembered better when there was lower activity in the PHC before the scenes were presented. The precise area of activation was slightly different in each person studied, but was always located in the PHC. In a second experiment, the researchers used real-time fMRI, which can monitor subjects' brain states from moment to moment, to determine when the brain was "ready" or "not ready" to recall images. Those states were used as triggers to present new visual scenes. As expected, images presented while the brain was in a "ready" state were better remembered. The finding adds a new element to the longstanding question of why we remember certain things better than others, says Nicholas Turk-Browne, assistant professor of psychology at Princeton University, who was not involved in this study. Traditionally, scientists have believed that memory is based on the inherent memorability of specific events, with strongly emotional events likeliest to be remembered. More recently, cognitive neuroscientists have found that the brain's ability to consolidate, store and retrieve information is also important. "The significance of this study is that it suggests that beyond the inherent memorability of things, and how well the memory systems are working, there's a huge role to be played by how well prepared you are to process what's coming in," Turk-Browne says. In theory, this method could be used to determine when a student is best prepared to learn new material, or to monitor workers who need to stay alert. "That's what we would like to think — that we are able to measure states of receptivity for learning, or preparedness for learning," Gabrieli says. "In terms of how that would be translated to real life, there are still a few steps to go." The main hurdle is that fMRI scanners are very large, and at this point, they cannot be made into small, portable devices. 
A possible alternative is using electroencephalography (EEG), a more easily miniaturized technology that measures electrical activity along the scalp. The researchers are now working on ways to use EEG to measure activity in the PHC.
Merit Badge: Nature
Requirements for the Nature merit badge:
- Name three ways in which plants are important to animals. Name a plant that is protected in your state or region, and explain why it is at risk.
- Name three ways in which animals are important to plants. Name an animal that is protected in your state or region, and explain why it is at risk.
- Explain the term "food chain." Give an example of a four-step land food chain and a four-step water food chain.
- Do all of the requirements in FIVE of the following fields:
- Birds
- In the field, identify eight species of birds.
- Make and set out a birdhouse OR a feeding station OR a birdbath. List what birds used it during a period of one month.
- Mammals
- In the field, identify three species of wild mammals.
- Make plaster casts of the tracks of a wild mammal.
- Reptiles and Amphibians
- Show that you can recognize the poisonous snakes in your area.
- In the field, identify three species of reptiles or amphibians.
- Recognize one species of toad or frog by voice; OR identify one reptile or amphibian by eggs, den, burrow, or other signs.
- Insects and Spiders
- Collect, mount, and label 10 species of insects or spiders.
- Hatch an insect from the pupa or cocoon; OR hatch adults from nymphs; OR keep larvae until they form pupae or cocoons; OR keep a colony of ants or bees through one season.
- Fish
- Catch and identify two species of fish.
- Collect four kinds of animal food eaten by fish in the wild.
- Mollusks and Crustaceans
- Identify five species of mollusks and crustaceans.
- Collect, mount, and label six shells.
- Plants
- In the field, identify 15 species of wild plants.
- Collect and label seeds of six plants OR the leaves of 12 plants.
- Soils and Rocks
- Collect and identify soils found in different layers of a soil profile.
- Collect and identify five different types of rocks from your area.
NOTE: In most cases all specimens should be returned to the wild at the location of original capture after the requirements have been met. Check with your merit badge counselor for those instances where the return of these specimens would not be appropriate. Under the Endangered Species Act of 1973, some plants and animals are or may be protected by federal law. The same ones and/or others may be protected by state law. Be sure that you do not collect protected species. Your state may require that you purchase and carry a license to collect certain species. Check with the wildlife and fish and game officials in your state regarding species regulations before you begin to collect.
Name: _________________________    Period: ___________________
This test consists of 5 short answer questions, 10 short essay questions, and 1 (of 3) essay topics.
Short Answer Questions
1. Each of the bays is the square area between ________ piers at the corners.
2. The glass is removed from the pipe and _________.
3. The exterior of the wax is then carved into to create ___________.
4. The moldings and capitals are also finished and stone slabs are laid on the floor in what pattern?
5. The ceiling will be constructed using how many bays at a time?
Short Essay Questions
1. What are cast for the cathedral? How are they unique?
2. How do the glass makers create the various shapes of glass?
3. What happens in 1306?
4. What are voussoirs?
5. What do the glass makers begin in 1302? Of what is this made?
6. Why is William of Planz replaced?
7. How tall is the vaulted ceiling? What is used to support this massive structure?
8. When is the transept and most of its vaulting completed?
9. How is the ceiling constructed?
10. Describe the workings of the great wheel.
Write an essay for ONE of the following topics:
Essay Topic 1
The spire is completed in 1331.
Part 1) What is the spire? What is the purpose of a spire?
Part 2) What might the spire signify to the people in regards to the cathedral?
Part 3) What other aspects of the cathedral stand out? What purpose do these architectural details serve?
Essay Topic 2
The cathedral has three huge doors.
Part 1) Describe these doors. How are they made?
Part 2) Why are they so large? How is the structure made to support these large doors?
Part 3) How might you feel walking in through this cathedral's large doors? How do these doors affect the aesthetics of the entire cathedral?
Essay Topic 3
Several deaths are mentioned in this story.
Part 1) Describe these deaths. How do they occur? How do they affect construction of the cathedral?
Part 2) Would building projects today continue with deaths such as these? Why or why not?
Part 3) Do deaths like this occur in building projects today? Why or why not?
Project Horizon was a study to determine the feasibility of constructing a scientific/military base on the Moon. On June 8, 1959, a group at the Army Ballistic Missile Agency (ABMA) produced for the U.S. Department of the Army a report entitled Project Horizon, A U.S. Army Study for the Establishment of a Lunar Military Outpost. The project proposal states the requirements as: "The lunar outpost is required to develop and protect potential United States interests on the moon; to develop techniques in moon-based surveillance of the earth and space, in communications relay, and in operations on the surface of the moon; to serve as a base for exploration of the moon, for further exploration into space and for military operations on the moon if required; and to support scientific investigations on the moon."
The permanent outpost was predicted to cost $6 billion and become operational in December 1966 with twelve soldiers. A lunar landing-and-return vehicle would have shuttled up to 16 astronauts at a time to the base and back. Horizon never progressed past the feasibility stage in an official capacity.
Rocket-vehicle energy requirements would have limited the location of the base to an area of 20 degrees latitude/longitude on the Moon, from ~20° N, ~20° W to ~20° S, ~20° E. Within this area, the Project selected three particular sites:
- northern part of Sinus Aestuum, near the Eratosthenes crater
- southern part of Sinus Aestuum, near Sinus Medii
- southwest coast of Mare Imbrium, just north of the Montes Apenninus mountains
The proposed schedule was:
- 1964: 40 Saturn launches.
- January 1965: Cargo delivery to the moon would begin.
- April 1965: The first manned landing by two men. The build-up and construction phase would continue without interruption until the outpost was ready.
- November 1966: Outpost manned by a task force of 12 men. This program required a total of 61 Saturn I and 88 Saturn II launches up to November 1966. During this period the rockets would transport some 220 tonnes of useful cargo to the Moon.
- December 1966 through 1967: First operational year of the lunar outpost, with a total of 64 launches scheduled. These would result in an additional 120 tons of useful cargo.
The base would be defended against Soviet overland attack by man-fired weapons:
- Unguided Davy Crockett rockets with low-yield nuclear warheads
- Conventional Claymore mines modified to puncture pressure suits
The basic building block for the outpost would be cylindrical metal tanks, 3.05 m in diameter and 6.10 m in length. Two nuclear reactors would be located in pits to provide shielding and provide power for the operation of the preliminary quarters and for the equipment used in the construction of the permanent facility. Empty cargo and propellant containers would be assembled and used for storage of bulk supplies, weapons, and life essentials. Two types of surface vehicles would be used: one for lifting, digging, and scraping, another for more extended-distance trips needed for hauling, reconnaissance and rescue. A lightweight parabolic antenna erected near the main quarters would provide communications with Earth. At the conclusion of the construction phase the original construction camp quarters would be converted to a bio-science and physics-science laboratory.
References:
- Wernher von Braun (1959). Project Horizon Volume II: Technical Considerations and Plans (PDF). United States Army. p. 307.
- Project Horizon Report: Volume I, Summary and Supporting Considerations (PDF). United States Army. June 9, 1959.
- Project Horizon Report: Volume II, Technical Considerations & Plans (PDF). United States Army. June 9, 1959.
Facts, Identification & Control
Workers are 4 to 4.5 mm long and yellow in color. When crushed, they produce a lemon scent that is often described as citronella.
Behavior, Diet & Habits
Moisture ants get their name from their habit of nesting in high-moisture areas. Some people call them yellow ants because the workers are yellowish in color. There are several species in the United States. One of the largest species is Lasius interjectus (Mayr), also known as the citronella ant. Moisture ants are common from the Pacific Northwest to New England. Their range extends southward to Florida and Mexico. Outdoors, they often nest under rocks or logs. They sometimes nest above the ground in rotting logs. Moisture ants feed on honeydew. The workers get honeydew from aphids and scale insects that feed on plant roots. Moisture ants often tend aphids to collect the honeydew that they produce. Some moisture ant colonies make their nests against the foundation of homes. When colonies are under slabs, the ants often push soil up through cracks in the concrete while they are digging galleries underneath. When this soil appears in basement floors, it can cause distress for the homeowners. Many people mistake this soil for a sign of termite activity. When moisture ants move indoors, they often nest in wood that is moisture damaged. They frequently find damaged wood in areas like bath traps. They sometimes nest inside walls where there is a plumbing leak. There have been cases of these ants nesting in damp soil in crawlspaces. In these situations, the workers made mounds of excavated soil in the crawl space. If the ants have nested in damp or damaged wood, correcting the moisture problem and replacing the wood will be a priority. In damp or humid areas, treated wood may be a good replacement. Each colony has a single queen that generates the colony's members. Mating occurs when winged males and females, called swarmers, swarm from the colony in the summer. Mated females go on to found new colonies.
Signs of a Moisture Ant Infestation
The most obvious signs are the yellow workers or the swarmers.
Do Moisture Ants Fly?
Moisture ant is a common name that most frequently includes ants in the genus Lasius. The only members of a moisture ant colony that fly are the females and males who participate in mating swarms. If winged ants have swarmed inside the home, remove them with a vacuum cleaner. Empty the vacuum bag promptly and take it outside to the trash. It is sometimes hard to tell whether winged insects are ants or termites. Call the local pest control professional for an inspection and identification.
Forest and Wildlife Resource: CBSE Class 10 SST Geography NCERT Solutions
What is biodiversity? Why is biodiversity important for human lives?
Biodiversity is the degree of variation of life forms within a given ecosystem, or on an entire planet. There are millions of living organisms on planet earth. All these living organisms, including man, are interdependent on each other.
How have human activities affected the depletion of flora and fauna? Explain.
Cutting down of forests for agricultural expansion, large-scale developmental projects, grazing and fuel wood collection, and urbanization has led to the depletion of flora and fauna.
Describe how communities have conserved and protected forests and wildlife in India.
In India many traditional communities still live in the forests and depend on forest produce for their livelihood. These communities are working hand in hand with the government to conserve forests. In Sariska Tiger Reserve, Rajasthan, villagers fought against mining activities. In the Alwar district of Rajasthan, local communities belonging to five villages have set their own rules and regulations in 1,200 hectares of forest land. They have named it the Bhairodev Dakav 'Sonchuri'. Hunting is not allowed in these lands and outside encroachments are prohibited. The famous Chipko movement was started in the Himalayan region to stop deforestation. People belonging to the local community took to afforestation in a big way. Indigenous species were cultivated and protected. Involving local communities in protecting the environment and stopping the degradation of forests has reaped many benefits.
Write a note on good practices towards conserving forest and wildlife.
In 1972, the Indian Wildlife (Protection) Act was implemented. It made protecting specific habitats a law. A list of wildlife species that had to be protected was published, and hunting these animals was against the law. National Parks and Wildlife Sanctuaries were set up in many states to protect endangered species. Under the amendments to the Wildlife Act in 1980 and 1986, several insects have also been included in the list of protected species. Butterflies, moths, beetles, dragonflies and even certain plants are included in the protected list. "Project Tiger" was initiated in 1973 by the government of India to protect tigers. It is one of the most well-publicized wildlife campaigns in the world.
1. Getting Started Click the button at the right to open a MAPLE worksheet entitled simpstudent.mws. If you are given a choice, you should save the file to your preferred directory, then navigate to that directory and open the file from there. In the MAPLE worksheet, position your cursor anywhere in the line [ > restart ; and press Enter. Pressing the Enter key executes the MAPLE code on the current line. Activating the restart command will clear all MAPLE variables, and it is important to do this whenever you start a new MAPLE project. Now resize your MAPLE and browser windows so that you can see them both, side-by-side. Click in either window to make it the active window. Your screen should look something like this: 2. Collection of Data Notice that the MAPLE worksheet needs your input of the coordinates of points from a map of Virginia. We will use the Virginia map you saw earlier to find and mark these points. Click the picture at the right to learn how to get the point coordinates and store your coordinates in the MAPLE worksheet. 3. Visualization of Data Now that you have entered your x and y coordinates, you can work through the MAPLE worksheet by pressing the Enter key on your computer to execute the MAPLE commands. (Note: The MAPLE window must be the active window.) The first output you see should be a red horizontal line (the southern border of Virginia) and eleven black points (the eleven, evenly spaced boundary points that you determined in section 2). You should be able to "see" that this line and eleven points give a rough outline of the state of Virginia. Repeat sections 2 and 3 until you are satisfied that you have obtained a reasonable rough outline of the state of Virginia. 4. Construction of Approximating Parabolas This section includes the MAPLE commands that define the 5 quadratic functions we will use to construct our Simpson's rule approximation. Press the Enter key to execute the block of MAPLE commands. 5. Visualization of Approximating Parabolas This section begins with a loop of MAPLE code that defines the vertical partition lines and ends with a display of the five approximating parabolas. Does the red outline look like Virginia? Compare your picture with the one at the top of this page . Note that the parabolic regions extend from the northern red boundary of Virginia to the x-axis. 6. Area Calculation by Integration These MAPLE commands compute the sum of the areas between the approximating parabolas and the x-axis. This sum includes the area of the red region (the state of Virginia) as well as the area of the rectangular region between the southern boundary of Virginia and the x-axis. Thus, our Simpson's rule approximation (in square pixels) must be adjusted. 7. Area Calculation by Simpson's Rule Formula MAPLE commands in this section use the Simpson's rule formula (from your textbook) to calculate the area between the parabolic curves and the x axis. We must now convert square pixels to square miles. Use the map scale to determine how many pixels are equivalent to 80 miles. In the MAPLE worksheet, determine the conversion factor from square pixels to square miles, and use this factor to determine the Simpson's rule approximation (using 10 subintervals) for the area of Virginia. 9. Simpson's Rule Summary The image at the top of this page shows the Simpson's rule approximation superimposed on the map of Virginia. Use this image to determine whether this approximation using 10 subintervals is an over- or under- approximation. 
Explain your reasoning, and include your response in the text cell provided in the MAPLE worksheet. Save your worksheet as simp10****.mws. (Replace the **** with your first initial and last name.) The purpose of this assignment is to increase the accuracy of your approximation by using more parabolas. In this assignment, you will repeat the above process based on the collection of more data points. Step 1: Open your simp10****.mws worksheet and save it as simp30****.mws. (You now have two copies of the same worksheet.) Step 2: Use a live map of Virginia to collect 31 evenly spaced points on the northern boundary of Virginia. Your first point must be (24, 98) and the last point must be (474, 98). Paste these coordinates into your simp30****.mws worksheet. Step 3: Repeat sections 3-9 (above) for your new data points by modifying your simp30 worksheet.
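For readers who want to sanity-check the arithmetic outside of MAPLE, here is a rough Python sketch of the same bookkeeping: Simpson's rule over the boundary points, subtraction of the rectangle below the southern border, and conversion from square pixels to square miles. Only the endpoints (24, 98) and (474, 98) and the 80-mile scale bar come from the worksheet; the intermediate y-values and the pixels-per-80-miles reading below are invented placeholders, so substitute your own measurements.

```python
# A rough Python sketch of the worksheet's calculation (the worksheet itself uses MAPLE).
# The intermediate y-values and the map-scale reading are hypothetical placeholders.

def simpson_area(xs, ys):
    """Composite Simpson's rule for equally spaced points; needs an even number of subintervals."""
    n = len(xs) - 1
    if n % 2 != 0:
        raise ValueError("Simpson's rule needs an even number of subintervals")
    h = (xs[-1] - xs[0]) / n
    return (h / 3) * (ys[0] + ys[-1]
                      + 4 * sum(ys[i] for i in range(1, n, 2))
                      + 2 * sum(ys[i] for i in range(2, n, 2)))

# Eleven evenly spaced x-pixels across Virginia and the y-pixel of the
# northern boundary above the x-axis at each x (hypothetical values).
xs = [24 + 45 * i for i in range(11)]                      # 24, 69, ..., 474
ys = [210, 240, 255, 260, 250, 245, 230, 215, 200, 180, 160]

south_border_y = 98                                         # the red horizontal line
area_to_x_axis = simpson_area(xs, ys)                       # square pixels, north boundary down to the x-axis
area_below_border = south_border_y * (xs[-1] - xs[0])       # rectangle that must be subtracted
area_pixels = area_to_x_axis - area_below_border

pixels_per_80_miles = 90                                    # read this off the map's scale bar
miles_per_pixel = 80 / pixels_per_80_miles
print(f"Approximate area: {area_pixels * miles_per_pixel ** 2:,.0f} square miles")
```

With 31 boundary points the same function applies unchanged; only the xs and ys lists grow, which is the whole point of the follow-up assignment.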
Astronomical Twilight – Astronomical Dawn & Dusk
Astronomical twilight is the darkest of the 3 twilight phases. It is the earliest stage of dawn in the morning and the last stage of dusk in the evening. Twilight is the time between day and night when the Sun is below the horizon but its rays still light up the sky. Astronomers differentiate between 3 phases: civil, nautical, and astronomical twilight. Each twilight phase is defined by the solar elevation angle, which is the position of the Sun in relation to the horizon. During astronomical twilight, the geometric center of the Sun's disk is between 12 and 18 degrees below the horizon. To the naked eye, and especially in areas with light pollution, it may be difficult to distinguish astronomical twilight from night time. Most stars and other celestial objects can be seen during this phase. However, astronomers may be unable to observe some of the fainter stars and galaxies as long as the Sun is less than 18 degrees below the horizon – hence the name of this twilight phase.
Astronomical Dawn and Astronomical Dusk
The twilight phases in the morning are often called dawn, while the twilight phases in the evening are referred to as dusk. However, unlike the term twilight, which describes a time span, the terms dawn and dusk refer to moments during the transitions between day and night. Astronomical dawn is the moment when the geometric center of the Sun is 18 degrees below the horizon in the morning. It is preceded by night time. Similarly, astronomical dusk is the instant when the geometric center of the Sun is 18 degrees below the horizon in the evening. It marks the beginning of night time and the disappearance of the last shimmer of natural daylight.
Timing & Length
The duration of each twilight phase depends on the latitude and the time of the year. In locations where the Sun is directly overhead at noon – for example at the Equator during the equinoxes – the Sun traverses the horizon at an angle of 90°, making for swift transitions between night and day and relatively short twilight phases. For example, in Quito, Ecuador, which is very close to the Equator, astronomical twilight begins only about 70 minutes before sunrise during the equinoxes. At higher latitudes, in both hemispheres, the Sun's path makes a lower angle with the horizon, so the twilight phases last longer:
- In New York (about 40° North) and Wellington (about 40° South), during the equinoxes, it takes about 1 hour and 30 minutes from the beginning of astronomical twilight until the Sun rises.
- In Oslo (about 60° North) and the northernmost tip of Antarctica (about 60° South), the same process takes roughly 2 hours and 30 minutes.
Twilight Around the Poles
At high latitudes and around the summer solstice, the Sun does not move lower than 18° below the horizon, so twilight can last from sunset to sunrise. The area experiencing all-night astronomical twilight around the summer solstice lies between about 48°33′ and 54°33′ North and South. In the northern hemisphere, this roughly correlates with the area between locations just south of the US-Canadian border and Canadian cities like Edmonton, Alberta. In Europe, it covers much of Germany. An all-night period of astronomical twilight does not constitute a white night, which requires the Sun to remain less than 6 degrees below the horizon all night, causing civil twilight from sunset to sunrise. Within the polar circles, the Sun does not set at all in the summer, so there is no twilight during that time of the year.
However, in locations around the poles that experience polar night during the winter months, the Sun may reach an angle of 12-18° below the horizon around midday, causing a short daily period of astronomical twilight, a temporary break from the complete and permanent darkness that envelops polar regions in the winter.
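As a quick illustration of how the phases line up with the solar elevation angle described above, here is a minimal Python sketch (not from the original article); the -6° and -12° boundaries for civil and nautical twilight are the standard definitions and are assumed here rather than spelled out in the text.

```python
# Minimal sketch: classify the time of day from the Sun's elevation angle,
# using the standard -6 / -12 / -18 degree twilight boundaries.

def twilight_phase(solar_elevation_deg: float) -> str:
    """Return day, one of the three twilight phases, or night.
    Elevation is in degrees above the horizon; negative means below it."""
    if solar_elevation_deg >= 0:
        return "day"
    if solar_elevation_deg >= -6:
        return "civil twilight"
    if solar_elevation_deg >= -12:
        return "nautical twilight"
    if solar_elevation_deg >= -18:
        return "astronomical twilight"
    return "night"

for angle in (5, -3, -9, -15, -25):
    print(f"{angle:>4} deg: {twilight_phase(angle)}")
```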
Internet Articles (2015)
In the late 10th century AD the Norse Icelandic Vikings settled Greenland. In that land they found fertile fields, an abundance of trees and navigable rivers teeming with cod and seals. The summers were long enough to grow bountiful crops. By the mid-12th century the Viking communities were thriving. In its heyday, there were about 5,000 people living in two major settlements in Greenland. One settlement was located along the coast on the southeast; about 80% of the settlers lived here. About 1,000 settlers lived in a smaller settlement on the southwestern coastal area. Over a period of about 350 years, the land was almost completely stripped of lumber. When the settlements were originally founded, many of the trees were cut to build primitive log homes. Within 300 years, most of the remaining trees were consumed as fuel. A major temperature shift known today as the Little Ice Age began around 1300 AD. By 1378 AD almost all of the Vikings were gone. Those who remained died of starvation, froze to death in the devastatingly cold winters, or were killed by the Inuit. Around 1350 AD, as the Vikings began to leave, the Arctic Inuit began migrating south as temperatures in the more northern regions plummeted quickly and dramatically, transforming their lands into an uninhabitable frozen desert of ice and snow. What happened? Climate change. In 1350 AD a Norse bishop was sent by his church to the western settlement of Garðar, Greenland. When he arrived he discovered, with the exception of a few Inuit, there was no one there. The settlement had been abandoned. By the first or second decade of the 1500s the climate had become dangerously inhospitable. Only a handful of Europeans remained in Greenland. By the middle of that century only the Inuit, who were able to adapt to the inhospitable climate, remained. The contemporary writings of ecoalarmists suggest that the Icelandic Vikings, in particular Erik the Red, wanted to fool Norwegian and Danish settlers into believing that the barren, inhospitable terrain of Kalaallit Nunaat (the green land) was hospitable, and that it had moderate temperatures suitable for farming and for community life. According to "The Book of Icelanders" and the "Saga of Erik the Red" (extracted from the oral history of Iceland), Erik the Red named the glacier island "Greenland," saying that settlers "...would be eager to go there if it had a good name." If Greenland was really so barren in the 12th century that settlers would have to be tricked into going there, the only value Greenland would have had to the Icelandic Vikings would have been as military outposts to keep their enemies from building forts and staging areas in southern Kalaallit Nunaat to attack the Viking settlements in warmer Iceland. The archeology of the area suggested a hurried departure. Scientific excavations in the two settlements were revealing. The diggings in the Greenlandic garbage heaps show a shift in livestock from 1000 AD to 1300 AD. The early Greenlandic farmers raised cows and pigs. As the winters began to lengthen and the farmers suffered colder temperatures, there was not enough hay to get them through the increasingly severe winters. The farmers switched from cows and pigs to sheep and goats. In the more temperate regions along the coast, where the air was still warmed by the Gulf Stream, settlers ate both cattle and caribou. By 1400 AD there were no more traces of domestic livestock, nor even of caribou. The settlers, probably all Inuit by then, ate seals and fish.
It is very likely that those Europeans who remained behind in Greenland after 1378 AD were killed by the Inuit, since archeological digs of Inuit sites reveal many Viking artifacts: materials made of metal and wood, two staples not possessed by the Inuit. Oxygen isotope core samples taken from the Greenland ice sheets reveal that the Viking inhabitation of Greenland took place during the Medieval Warm Period, from roughly 800 AD to about 1300 AD. The temperature was 1.4 to 3.5 degrees Celsius higher than it was in 1450 AD. The Little Ice Age lasted until 1850. Cooling periods on Earth, like warming periods, are caused by one thing and one thing only: cyclic solar activity. The scientific reality of Greenland runs counter to the fables about Greenland generated by ecoalarmists to benefit politicians who want to persuade the gullible of the world that humankind is responsible for what the ecoalarmists believe is carbon dioxide-generated global warming. In their minds, the current warming trend in Greenland is proof positive that catastrophic flooding, caused by man, will erase the coastal lowlands around the world within the next 100 years. Before we start looking at the scientific numbers (those who believe global warming is caused by man versus those who know it is a cyclic event caused by solar eruptions) and the physical evidence that disputes the ecoalarmists, it is important for the layman to understand one simple truth about global warming: green is the new red. Yesterday's socialists are today's environmentalists. Green is the anti-capitalist, antihuman agenda of the Vietnam Era far-left hippies who finally overthrew the Rooseveltian democracy of the 1930s and 40s beginning in 1992, when the politically correct antiwar hippies Bill and Hillary Rodham Clinton came to Washington and Vice President Al Gore became the doomsday prophet of Utopia. When communism became so unpopular that it was financially unsustainable in the Soviet Union, the leadership faked the collapse of the Iron Curtain in 1991, as the taskmasters of Utopia simply discarded their red uniforms and donned environmental green, becoming the ecological guardians of the green planet and the enemies of the industrialists who were poisoning Earth with carbon dioxide. The transnational bankers, industrialists and merchant princes who had financed Leon Trotsky and Vladimir Ilyich Ulyanov (more commonly known as Lenin), who were determined to overthrow Tzar Nicholas II in 1905, found their opportunity in 1917 while the world was at war. The American International Corporation financed the Russians, who promised America's richest capitalists they would be given control of developing the industrial infrastructure of the new nation. Of course, Trotsky and Lenin lied. But the deed was done. The greediest bankers and industrialists in this country, whose money created the Soviet Union on the bloody corpses of the Romanov family, which was assassinated on Feb. 17, 1917, altered the history of the world. The watermelons now use taxation and costly bureaucratic regulations to prevent new-generation wealth from overtaking them in the economic marketplace. Using the tax system they created between 1929 and 1933, the princes of industry and the barons of banking and business have become the Masters of the Universe. And, during that period, AIC quietly morphed into the American International Group and erased the stain of their complicity in the assassinations of Tzar Nicholas II and the Romanov family.
Today the communists have become environmentalists. Now, as greenies instead of reds, Hollywood, the media and the wealthy elites can openly pour millions of dollars into their socialist advocacy while appearing to be concerned about our planet and the people they blame for the cyclic sunspot activity that actually creates the 1,500-year warming and cooling weather cycles on Earth. When working-class Americans look at one another, each sees another human being with equal rights who is a member of the world's greatest free enterprise society. When members of the environmental elite look at the "human machine," they don't see flesh and blood but human capital that disproportionately consumes the world's natural resources and proliferates waste. People, the pawns of the rich in the promulgation of wealth, are viewed by the watermelons as chattel of the State, the same view held by the Soviet and Chinese communists. We are simply human capital, a commodity of the rich. Despite their abhorrence of humankind, whom they believe is destroying the planet, it is the view of the elites that the unborn should not be born and the elderly should volunteer to be euthanized. The watermelons all have one thing in common: they expect the human chattel, not themselves, to make those sacrifices. Look in the private papers of the wealthy and one thing you won't find is a living will that will allow the healthcare bureaucracy to euthanize them when they become old and feeble and can no longer produce enough to sustain themselves. When a person consumes more than they produce, they have a negative societal value. In the event they are hospitalized for anything, even a minor illness or injury, they will be pressured by the health care facility to sign a living will giving that facility the right to euthanize them if they are viewed to be terminal. Need I remind you that old age is a terminal condition? No one recovers from it. Plastic surgery may conceal age, but under the youthful wrapper, age is still there. The American watermelons, like their communist counterparts, claim to be the advocates of the have-nots. But it's not poverty they fight; it's the impact of capitalism (which brings both the benefits of technology and affluence not only to the middle and upper classes but to the "poor" as well). Since the ecoalarmists believe that people are the root cause of global warming, population + affluence + technology = ecological disaster, a formula that requires government control, since the watermelons are convinced that human development and prosperity are detrimental to the environment. The human influence on climate has never been documented, even though the Intergovernmental Panel on Climate Change claimed, in its 1996 assessment, that the human impact on climate had been proven based on 130 peer-reviewed studies. In reality the "consensus" was based entirely on two research papers, not 130. Neither had been subjected to peer review. Neither was supported by fact. Both papers were written by Dr. Benjamin D. Santer, a junior research scientist working at the US government's Lawrence Livermore National Laboratory. Santer was busy trying to make a name for himself in the Clinton Administration.
Lending credibility to his report was then Acting Deputy Assistant Secretary of State Day Olin Mount, who reported to Undersecretary of State for Global Affairs Timothy Wirth who, with then Vice President Al Gore, Jr., wrote the Kyoto Protocol that, if fully implemented, will return the First World economies to the Dark Ages as it elevates the Third World (and China), which are exempt from the carbon dioxide greenhouse gas restrictions, to First World economic status. Mount held up the printing of the report for a final addendum in a letter to Sir John Houghton, chairman of the IPCC Working Group. The addendum was a back-room "edit" of Santer's work. In the approved text that did get peer-reviewed, Santer said: "None of the studies cited above has shown clear evidence that we can attribute the observed [climate] changes to the specific cause of increases in greenhouse gases [human interference]." However, in the version that was published and released in May 1996, Santer said: "There is evidence of an emerging pattern of climate response...by greenhouse gases...from the geographical, seasonal...patterns of temperature change...These results point toward a human influence on global climate." Mount was rewarded for his diligence by being appointed US Ambassador to Iceland. Santer was awarded a MacArthur Foundation "genius" fellowship with a $270 thousand grant. Dr. Frederick Seitz accused Santer of doctoring his 1996 report in a June 12, 1996 op-ed piece for the Wall Street Journal. In 2010 the United States will be required to limit carbon dioxide emissions to 7% below its 1990 levels. Seven percent does not seem like much. However, US industrialists have warned that, due to increased productivity, what factories remain in the US will produce 16% more greenhouse gases in 2010 than they did in 1990. So what sounds like a 7% reduction is actually a 23% reduction. In the second phase of Kyoto, the United States would have to reduce carbon dioxide emissions to 60% below its 1990 level. This would virtually end manufacturing of any type in the United States of America. At that point the United States would have to rely completely on its new step-siblings in the southern hemisphere, where carbon dioxide doesn't appear to cause pollution under the rules established by those transferring the wealth of the First World to the Third World, where the human capital that will become the primary consumers of the 21st century live. The ecoalarmists insist that climate change is triggered by increases in greenhouse gases that shroud the planet over time, trapping heat in the atmosphere like an atrium and raising the surface temperature of the planet. Climate change. Satellite imagery and high-altitude balloon temperature data retrieval confirm that the lower atmosphere is not trapping the volume of heat needed to impact global climate, since it is high-altitude temperature upheavals, not low-altitude temperature changes, that determine climate change. Second, oxygen-16 and oxygen-18 isotope ice core samples in both the Arctic and sub-Arctic and the Antarctic, and carbon-14 and beryllium-10 isotope samples, present hard evidence of both the Earth's temperatures and the history of increases in carbon dioxide levels. It is an inescapable fact that carbon dioxide (greenhouse gas) is not the culprit that is causing climate change, since in every instance where both factors are measured, temperature increases always precede the buildup of carbon dioxide.
What that means is that it warms first, and the increased temperatures precipitate increased levels of CO2. That, also, is an inescapable fact, even though the ecoalarmists are doing their level best to erase the world's climate history, since the facts disprove the theory that increased levels of carbon dioxide cause global warming. Ice core graphs from the north and south extremities of the planet, together with isotope samples in more moderate regions of the planet, have confirmed the medieval warming period that made Greenland "green" and warmed the northern hemisphere. Clearly the medieval warming was not caused by the exorbitant use of fossil fuels, just as carbon dioxide from factories and car exhausts is not causing the current cycle of climate change. The impact man has on the climate is less than 1%. All the money in the world, and all of the punitive regulations that can be implemented by the watermelon bureaucracies around the world to limit pollution, curb CO2, make people bathe 20 times a day to reduce body heat, or produce flatulence-free feeds to keep cattle from passing gas, will not stop, slow or moderate global warming. The world's leading climatologists and astrophysicists are in complete accord (at least those not paid by the watermelon environmental groups or the barons of business and princes of industry in whose best interest it is to transfer the wealth of the First World nations to the human capital-rich Third World where the prime consumers of the 21st century reside) that global warming is 99% created by cyclic solar activity. To dispute that there was a medieval warming period, Raymond S. Bradley wrote an article for Science Magazine on October 17, 2003. The article was entitled "Climate in Medieval Times." Bradley is a climatologist and a professor in the Department of Geosciences at the University of Massachusetts Amherst. Bradley wrote a history of the last 1,000 years, disputing that there was a medieval warming period. To accomplish this, Bradley, with the help of Michael Mann, a young Ph.D. from the university, substituted Mann's temperature proxies for the ice core isotopes (tree rings) from moderate urban heat zones, then, using the official temperatures in those urban heat zones, postulated what he thought the temperatures all over the planet would have been over the past 1,000 years. (The Clinton Administration, or rather Al Gore, singled out Mann's work in 2000 to be included, as fact, in the US National Assessment of the Potential Consequences of Climate Variability and Change. Gore liked the Mann study because it contradicted hundreds of embarrassing historic documents that factually confirmed the medieval warming period had occurred.) Mann's statistics showed 900 years of stable global temperatures that suddenly spiraled upward after 1910 with the advent of the automobile. To the ecoalarmist, evidence that climate change is cyclic is the kiss of death, since that means regardless of what man does, global warming and cooling are going to happen. Once Mann caught the attention of ecoalarmist Al Gore, his theory gave the IPCC an answer to the embarrassing questions concerning the medieval warming period: it simply didn't happen. For his groundbreaking work, Mann became a primary author of the IPCC, and a key editor of The Journal of Climate.
In 2004 two researchers from the University of Guelph in Canada, Stephen McIntyre and Ross McKitrick (who were studying the Medieval Warming Period), were stymied by Mann's claims and requested the original study data from the IPCC. The documents they received were incomplete, and the data presented could not produce the claimed conclusion. They discovered that Mann's work had never been subjected to peer review. Using corrected and updated data, McIntyre and McKitrick recalculated Mann's temperature index (using Mann's own methodology). The results were published in Energy & Environment later that year. McIntyre and McKitrick reported that "...[t]he major finding is that the [warming] in the early 15th century exceed[s] any [warming] in the 20th century." Mann's study became known by honest climatologists all over the world as the "Hockey Stick Theory" because, in his model, global temperatures were static until early in the 20th century, when they suddenly spiked upward, presenting a graph that looked like a hockey stick. Mann's theory completely ignored CO2. Had Mann paid attention to that factor, he would have had to explain why tree ring samples collected from bristlecone pines in the Sierra Nevada Mountains showed a major growth spurt after 1910. Temperature alone cannot account for major growth spurts in mature trees. Growth spurts come from feeding the tree. CO2 acts as fertilizer for trees and other plants. Plant life needs CO2 to grow. When tree rings show growth spurts, there is always a parallel increase in the level of CO2 in the air. The more CO2, the more growth. Temperature can be calculated from the distance between the tree rings, with wider spaces between the rings as the climate warms. The Mann samples actually prove that the warming occurred first, followed by increased levels of CO2, once again disproving the theories of the ecoalarmists who insist that manmade greenhouse gases are responsible for global warming. In An Inconvenient Truth, Al Gore alleges that "...temperature change is the cause, not the result, of changing atmospheric carbon dioxide levels." Ninety-one percent of the world's climatologists disagree, arguing that the collected evidence overwhelmingly shows that rising temperatures always precede increased levels of CO2. Core sample studies from Antarctica date back hundreds of thousands of years. The isotope samples provide data about the temperature and levels of carbon dioxide in the atmosphere at the time the ice was produced. The Antarctic ice core samples conclusively prove that temperature spikes precede atmospheric carbon dioxide increases. Scientists discovered from the core sample studies that CO2 levels dramatically increase roughly 200 to 1,000 years after the temperature elevations begin. This is the reason why, when Al Gore was challenged by the Heartland Institute to debate the veracity of his claims, he refused, saying "the debate is over. Climate change is a settled argument." The debate is over not because the Gorites are correct, but simply because the princes of business and industry and the earls of the ecology who fabricated global warming as a device to redistribute the wealth of the world have deemed it to be over, because the transfer of that wealth is underway and cannot be stopped.
And, fearful that intelligent people, armed with facts instead of fiction, will begin to question the "need" to send their jobs to the third world, the watermelons are simply no longer willing to debate an issue already bought and paid for in the private congressional and senatorial offices on Capitol Hill. Carbon dioxide is a chemical compound that is critically necessary to sustain life on planet Earth. It is the food plants consume to grow the crops we eat. Eliminate CO2 and we starve. As plants consume the CO2 to grow the food we require to live, they excrete a byproduct known as oxygen. Oxygen, one of the most caustic pollutants on Earth, is essential for life. Without oxygen, mankind will suffocate. Since oxygen is created only by photosynthesis, deplete carbon dioxide and you deplete the quality and quantity of the air we breathe. It's that simple. The eco-idiots, headed by Al Gore, Jr. (the man who believes he is destined to be the leader of the free world) and the American Civil Liberties Union [ACLU], have already had the US Circuit Court of Appeals for the District of Columbia classify carbon dioxide as a pollutant that must be eliminated to save the world. When the Bush-43 Administration correctly ordered the Environmental Protection Agency not to enforce the non-ratified Kyoto Protocol, which was implemented by former President Bill Clinton by Executive Order a few hours before he left office on January 20, 2001, the ACLU, the Sierra Club, Environmental Defense, the Natural Resources Defense Council, and several of the most liberal States: California (Gov. Arnold Schwarzenegger [R]), Connecticut (Gov. Jodi Rell [R]), Maine (Gov. John Baldacci [D]), Massachusetts (Gov. Deval Patrick [D]), New Mexico (Gov. Bill Richardson [D]), New York (Gov. Eliot Spitzer [D]), Oregon (Gov. Ted Kulongoski [D]), Rhode Island (Gov. Don Carcieri [R]), Vermont (Gov. Jim Douglas [R]), and Wisconsin (Gov. Jim Doyle [D]), filed suit to demand the federal government tighten pollution controls on the newest generation of power plants, claiming that these facilities were speeding up global warming. (Spitzer has been removed from office.) If you live in any of these States, or your State has watermelon members of Congress, and you plan to do any serious breathing or eating for the rest of your life, you need to dump these clowns before they kill you. Go to the Federal Election Commission website and check the big money donors who have bought their allegiance, and you will find the Seven Sisters and the green giants among them. When you find these people in their donor bases, fire them quickly. Don't wait for the next election cycle; recall them and fire them. The battle between the Bush-43 White House and the environmentalists, which began in 2001, heated up in 2005 when a three-judge panel upheld the Bush EPA decision not to regulate carbon dioxide emissions from cars and trucks under the Clean Air Act, which was what Clinton attempted to implement by ordering the Highway Transportation Safety Board to regulate the new emissions standards. To combat the Bush Administration, in March 2008 Sen. Dianne Feinstein [D-CA] (an eco-idiot) urged the EPA to classify CO2 as a hazardous pollutant under the Clean Air Act. In a recent letter to the EPA, Feinstein urged the EPA to find that CO2 is a danger to public health. Since oxygen is an even more dangerous element, I'm surprised she hasn't asked the EPA to outlaw it as well. Another eco-idiot, Dr.
Jonathan Patz, MD, Professor of Environmental Studies and Population Health, crawled out of the watermelon atrium in November of 2007 with a paper published in the journal EcoHealth in which he claimed the carbon dioxide emitted in the United States was causing the greatest harm to the world's poorest nations. Patz claimed that global warming was devastating the third world with climate-sensitive diseases such as malaria. Malaria is a disease that was almost completely eradicated throughout the world through the use of DDT, the safest pesticide ever developed. When chemophobic fans of Rachel Carson's "Silent Spring" succeeded in banning the use of DDT in the United States in 1970, the World Health Organization, the National Academy of Sciences, and the EPA argued that "sound science" found no evidence that DDT was a carcinogen (the claim of the chemophobes). After seven months of intensive hearings by the scientific community in the United States and over 9,000 pages of testimony, the scientific community concluded that "...DDT is not a mutagenic or teratogenic hazard...The use of DDT under the regulations involved...does not have a deleterious effect on freshwater fish, estuarine organisms, wild birds or the wildlife." The fact that DDT was not hazardous, and was extremely beneficial to the farming community and to the population at large, did not prevent it from being banned in the United States. When you have an MD after your name, it apparently doesn't matter that you don't know what you're talking about. The watermelon bureaucracy in Washington, DC will listen to you. And, the more extreme your views, the more diligently they will listen. So will any committee in Congress. The irrational rant will be viewed as credible science without a shred of evidence, other than Al Gore's An Inconvenient Truth. Gore's scientific credibility comes from the fact that he won an Academy Award and a Nobel Prize for his sci-fi documentary.
The whole truth about climate change
Based on the NASA statistics, the world's astrophysicists sided with the climatologists who denounced global warming as a man-made dilemma. Nir Shaviv, an Israeli astrophysicist, argued before the IPCC that "...solar activity can explain a large part of 20th century global warming." Shaviv argued that while "...the melting of the arctic ice sheets is indicative of global warming, there is absolutely no scientific evidence that proves CO2 and other greenhouse gases are the culprits that caused it...Using computer models to find [ecological] fingerprints is hard..." because computer models are not based on facts but on suppositions of what previously happened, and what the author of the model thinks will happen in the future. Everyone's models are different because each is based on opinion and not evidence. However, the statistics compiled by the National Oceanic & Atmospheric Administration [NOAA], NASA and the International Solar Energy Society [ISES] are not based on suppositions and guesswork but on factual evidence. Solar activity determines the temperatures of Earth, and, of course, of Mars and Venus and the other planets as well. Variations in cyclic solar activity drive warming and cooling throughout the solar system.
The environmentalists do not want to discuss global warming on Mars and Venus, since it raises questions about the origin of climate change and makes their suppositions, that sweaty old fat people, flatulating cows, car exhaust, or carbon fuel pollutants from factories are responsible for climate change, look absurd, because warming is occurring even on neighboring planets where there are no fat, sweaty people, flatulating cows, or factories churning out tons of greenhouse gases. Sixty scientists in Canada wrote the Canadian Prime Minister last year to express their misgivings about buying into the global warming rhetoric, saying: "If, back in the mid-1990s, we knew what we know today about climate, the Kyoto Protocol to combat climate change would almost certainly not exist, because we would have concluded it was not necessary." Ian Wilson, a former astronomer at the Hubble Space Telescope Institute, said that under the Kyoto Protocol the taxpayers of the United States will ultimately be obligated to pay trillions of dollars in additional taxes that will result in a negligible environmental impact, since even a substantial decrease in atmospheric carbon dioxide would not impact climate change, which is controlled about 99% by the sun and 1% by man. In 1983 climatologists Willi Dansgaard (from Denmark) and Hans Oeschger (from Switzerland) drilled two 1-mile-deep ice cores in the Greenland Ice Sheet, exposing 250 thousand years of Earth history. Their work is known as the Dansgaard-Oeschger Climate Cycle. Dansgaard and Oeschger established the 1,500-year cycle. Within this climate cycle are two solar cycles: one is an 87-year period (known as the Gleissberg Cycle), the other a 210-year period (known as the DeVries-Suess Cycle). Based on all of the climate models built by all of the well-paid eco-idiots, if the greenhouse theory of climate change were correct, then global temperatures in the Arctic and Antarctic should have been steadily rising since 1940. Polish climatologist Rajmund Przybylak compiled readings from 37 Arctic and sub-Arctic weather stations to compute Arctic air temperatures since 1930. The warmest Arctic temperatures occurred in the early 1930s. Even the brief warming in the 1950s was colder than the warmest period in the 1930s. Dr. Syun Akasofu, founding director of the International Arctic Research Center at the University of Alaska-Fairbanks, recently sent a letter to the IPCC documenting the flaws in their evaluation procedure and urging the United Nations to guarantee an honest debate on the subject of climate change. In his letter, Akasofu charged that "...this great interest by the public in climatology...[is] based on misinterpreted information about the greenhouse effect of carbon dioxide...The public is alarmed and thus concerned about climate change largely because they are confused by...misinformation...I am concerned about the inevitable backlash against science and scientists, when the public learns the correct information about climate change. Even if the IPCC is not directly responsible for the present confusion, they should take responsible action to help rectify the situation." It is not likely that the IPCC, the eco-idiots behind the global warming scare tactics, and the watermelon bureaucrats who are using climate change to implement totalitarian controls over the people of the world are going to do anything soon to rectify the situation. 
With carbon dioxide classified as a hazardous pollutant, the watermelons will soon see the doomsday prophecies of Paul Ehrlich come true, as greatly reduced levels of carbon dioxide bring about massive food shortages: crop yields will be radically reduced because carbon dioxide, the food crops eat, has been dramatically curtailed by federal regulation, and the air quality over the largest industrial centers will have become so bad that respiratory illness will become the world's deadliest and most prevalent killer. In the end, the lunatic rantings of the eco-idiots will become fact. Because the nations themselves chose to believe the eco-idiots and agreed to limit the natural production of carbon dioxide, the world will no longer be able to support its population, because the food those crops need to produce in abundance, CO2, was outlawed as the cause of global warming. Given time, mankind would also have found a way to outlaw the most corrosive element in nature, too. That element causes food to spoil and iron to rust. That element, of course, is oxygen. I am amazed that mankind, listening to the voodoo science of eco-idiots like Al Gore, Rachel Carson and Paul Ehrlich, has survived this long. But time is now rapidly running out. When mankind realizes what the pseudo-climate crackpots and watermelon bureaucrats have done to them for the world's greedy elite who want to recast the world to fit their changing economic needs, neither group will be welcome anywhere on this planet.
Just as the stem cell research debate rages today, the rift between the then-fledgling studies of genetics and embryology raged during the 1930s. The question at the forefront was how genetics should be seen as part of the study of biology. Scientists were also consumed with cross-disciplinary arguments over whether genetics and embryology should be two separate studies. The most significant contributor to the solution of that argument was a South Carolina-born zoologist and embryologist named Ernest Everett Just. Born in 1883, Just graduated from Dartmouth College and moved to Washington D.C. to teach at Howard University. He soon became head of the biology and zoology departments there, and in 1916, earned his doctorate from the University of Chicago, a year after being awarded the very first Spingarn Medal from the NAACP. He later studied in Europe, where he won acclaim for his work with marine life. Returning to the U.S., he began work on thousands of experiments on marine animal cells at the Marine Biological Laboratory at Woods Hole, Mass. In 1922, he successfully challenged Jacques Loeb's theory of artificial parthenogenesis, in which Loeb claimed to cause asexual fertilization of sea urchin eggs by changing certain factors in their environment. This led to a falling out with Loeb, but undaunted, Just persevered, saying and proving that factors aside, cells performing this type of fertilization do it naturally and have all the tools they need within their own ectoplasm. He also insisted that lab factors for egg experimentation should be as close as possible to those found in nature. This proof led to his later determination that an egg's structure, no matter what the species, was as important as, if not more important than, any external factor for development and subsequently evolution. His findings and other Woods Hole works were published in his famed work Basic Methods for Experiments on Eggs of Marine Animals, and gave birth to a new scientific area: ecological developmental biology, which has only been appreciated by the scientific community in recent years. However, racism and discrimination against black scientists were prevalent at the time, and Just believed he would never achieve a tenured position at a major American university. In 1929, he left again for Europe and conducted research and experiments in Naples, was invited to the Kaiser Wilhelm Institute in Berlin, and moved permanently to Paris in 1938. The next year, Just published his most famous work, The Biology of the Cell Surface, which argued that all life derives from a complex organic structure. "Life," he wrote, "is the harmonious organization of events, the resultant of a communion of structures and reactions." As World War II ensued, Just was briefly imprisoned by Hitler's forces and his already weakened health deteriorated. The U.S. State Department rescued him and returned him to Washington, where he died in 1941.
Go Tell It On The Mountain is one of the most famous works by James Baldwin. Here are a few questions for study and discussion. - What is important about the title? What about the song? - What are the conflicts in Go Tell It On The Mountain? What types of conflict (physical, moral, intellectual, or emotional) did you notice in this novel? - How does James Baldwin reveal character in Go Tell It On The Mountain? - What are some themes in the story? How do they relate to the plot and characters? - What are some symbols in Go Tell It On The Mountain? How do they relate to the plot and characters? - Is John consistent in his actions? Is he a fully developed character? How? Why? How does James Baldwin describe John? - Do you find the characters likable? Are the characters persons you would want to meet? - Does the story end the way you expected? How? Why? - What is the central/primary purpose of the story? Is the purpose important or meaningful? - How essential is the setting to the story? Could the story have taken place anywhere else? - What is the role of women in the text? How are mothers represented? What about single/independent women? - Would you recommend this novel to a friend?
A new sensing instrument that can now dissolve on its own inside the body could offer a host of opportunities for health professionals looking to monitor bodily processes in real time. Fiber Bragg gratings are sensing elements that have been inscribed in an optical fiber via a laser. They are made with a special type of glass: phosphorus oxide combined with oxides of calcium, magnesium, sodium, and silicon. In the past, fiber Bragg gratings inside optical fibers have been used as a sensing instrument for “real-time monitoring” of bridges and airplanes, ensuring that they are stable and structurally sound. Now, fiber Bragg gratings are capable of dissolving inside the body, similar to the way stitches have been designed to dissolve on their own. This means they can safely be used to explore “sensitive organs” like the heart and brain, a major advantage for using this technology in the body. "Because they dissolve, these sensors don't need to be removed after use and would enable new ways to perform efficient treatments and diagnoses in the body," explained Maria Konstantaki. Dissolvable fiber Bragg gratings inside optical fibers could aid in sensing joint pressure, evaluating the heart, and improving laser-based techniques for removing cancerous tumors. "This is the first time that a widely used and well-calibrated optical element such as a Bragg grating has been etched into a bioresorbable optical fiber," Konstantaki explained. "Our approach could potentially be used to create various types of interconnected structures in or on bioresorbable optical fibers, allowing a wide range of sensing and biochemical analysis techniques to be performed inside the body." Konstantaki is from the Institute of Electronic Structure and Laser (IESL) of the Foundation for Research and Technology – Hellas (FORTH) and was joined in the present study by collaborators from Politecnico di Torino and Istituto Superiore Mario Boella. Because of their unique design, fiber Bragg gratings exhibit excellent optical properties, and they are biocompatible and water soluble. Going forward, Konstantaki and the other researchers involved in designing the new fiber Bragg gratings will continue to analyze their composition and how quickly they dissolve in the body in response to ultraviolet laser irradiation. They have the potential to create fiber Bragg gratings that dissolve in a certain amount of time, but the design will have to be tested in animal models before conducting human clinical trials. The present study was published in the journal Optics Letters. Source: The Optical Society
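The sensing principle behind a fiber Bragg grating can be illustrated with a short calculation: a grating reflects light at the Bragg wavelength, which is twice the effective refractive index multiplied by the grating period, and strain or temperature changes shift that wavelength, which is what gets measured. The sketch below is illustrative only; the index, period, and sensitivity coefficients are generic values assumed for a conventional silica fiber, not figures from the study, and a bioresorbable phosphate glass would differ.

```python
# Illustrative sketch of fiber Bragg grating sensing (assumed, typical values).
# lambda_B = 2 * n_eff * period; shifts in lambda_B encode strain/temperature.

def bragg_wavelength(n_eff: float, period_nm: float) -> float:
    """Reflected (Bragg) wavelength in nm for a grating of the given period."""
    return 2.0 * n_eff * period_nm

def wavelength_shift(lambda_b_nm: float, strain: float, delta_t_kelvin: float,
                     k_strain: float = 0.78, k_thermal: float = 6.7e-6) -> float:
    """Approximate shift in nm from strain (dimensionless) and temperature change.

    k_strain and k_thermal are generic silica-fiber coefficients used purely
    for illustration; a bioresorbable phosphate glass would have other values.
    """
    return lambda_b_nm * (k_strain * strain + k_thermal * delta_t_kelvin)

if __name__ == "__main__":
    lam = bragg_wavelength(n_eff=1.45, period_nm=535.0)  # ~1551.5 nm
    print(f"Bragg wavelength: {lam:.1f} nm")
    print(f"Shift for 100 microstrain: {wavelength_shift(lam, 100e-6, 0.0):.3f} nm")
    print(f"Shift for +5 K:            {wavelength_shift(lam, 0.0, 5.0):.3f} nm")
```

Reading out the reflected wavelength with an interrogator and inverting this relation is what turns the grating into a strain or temperature sensor.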
BY JIM D’VILLE | FROM THE WINTER 2018 ISSUE OF UKULELE The technique of solfège involves assigning the notes of a scale a particular syllable, and then practicing by singing different note sequences using these syllables. Italian music scholar Guido of Arezzo created the system in the 11th century. No doubt, the most famous application of solfège (pronounced sol-fej) is Julie Andrews singing “Do-Re-Mi” in the musical motion picture The Sound of Music. One of the most important aspects in learning to play any musical instrument is ear training and becoming familiar with the intervals (distance between two tones) of the major and chromatic scales. Learning solfège will improve your listening skills if you incorporate its use into your daily practice. Major Scale Solfège The first step is to play and sing the notes of the C major scale using the solfège syllables. Go slowly, playing and singing each note. C–do, D–re, E–mi, F–fa, G–sol, A–la, B–ti, C–DO (pronounced doe–ray–me–fah–sol–la–tea–DOE) C–DO, B–ti, A–la, G–sol, F–fa, E–mi, D–re, C–do (pronounced DOE–tea–la–sol–fah–me–ray–doe) The first solfège exercise is for daily practice and will have you playing and singing the ascending and descending intervals of the major scale (Example 1). The Movable Do This system is known as The Movable Do, which means that to play in any of the other eleven musical keys “do” is the note you start on that names the key. You then simply follow the major-scale pattern of: whole step, whole step, half step, whole step, whole step, whole step, half step. To play the E major scale, for example, play the 2nd string open and then follow the whole-step/half-step pattern up the 2nd string. (Example 2) The twelve notes available in any given key are called a chromatic scale. To play a chromatic scale, start on any note and go up the fingerboard one half-step at a time until you reach the starting note, one octave higher. The sharp/flat notes (non-major scale notes) in the chromatic scale also have solfège syllables associated with them. The accidentals have an “e” sound when ascending and an “a” sound when descending. [Chromatic scale pronunciation ascending: doe, dee, ray, ree, me, fah, fee, sol, see, la, lee, tea, DOE; Chromatic scale pronunciation descending: DOE, tea, tay, la, lay, sol, say, fa, me, may, ray, rah, doe.] Note that when you descend the chromatic scale, as you do in bars 3 and 4 of Example 3, the notes change from sharp to flat to reflect the flatted nature of the non-major scale notes. If you played the C chromatic scale on a piano, the sharp/flat notes would be played on the black keys. Third Interval Exercise One of the most beautiful sounding intervals found in the major scale is the third interval, for example do–mi. Practice playing and singing the following exercises highlighting major and minor third intervals. Example 4a shows the C major scale, ascending and descending in thirds, with corresponding solfège syllables. Example 4b again ascends and descends the C major scale, this time using the pattern of going up-a-3rd/down-a-3rd. You might notice that the up-a-3rd/down-a-3rd exercise is the basis for The Sound of Music’s “Do-Re-Mi.” The Solfège & Melody Once you can comfortably play and sing the major and chromatic scales you can start using the syllables to sing melodies. Begin with simple major-scale melodies like those from nursery rhymes. 
Row, Row, Row Your Boat do, do, do re–mi mi re–mi fa–sol Mary Had A Little Lamb mi–re–do–re mi mi mi mi–re–do–re mi mi mi do re–re mi–re–do Scales & Arpeggios The solfège syllables are also great for practicing scales and arpeggios in different keys. Notice that in the minor scales and blues scale, we use the flatted solfège syllables, also known as blue notes. Major scale arpeggio do mi sol DO Pentatonic scale do re mi sol la (DO) Minor pentatonic scale do may fa sol tay (DO) Blues scale do may fa say sol tay (DO) Natural minor scale do re may fa sol lay tay (DO) Harmonic minor scale do re may fa sol lay ti (DO) Melodic minor scale do re may fa sol la ti (DO) Whole tone scale do re mi fi si li (DO) Diminished scale do re may fa say lay la ti (DO) To get the most from your solfège practice, be sure to sing the notes as you play them. Before you know it, these helpful syllables will be fully assimilated into your musical ear and you’ll be hearing and learning music in a new way. Music educator and facilitator Jim D’Ville is on a mission to get ukulele players off the paper and into playing music by ear. Over the last six years he has taught his “Play Ukulele By Ear” workshops in the United States, Australia, and Canada. Jim is the author of the Play Ukulele By Ear DVD series and hosts the popular Play Ukulele By Ear website www.PlayUkuleleByEar.com.
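As a small programmatic footnote to the article above, the movable-do idea maps naturally to a short script. The sketch below is illustrative only: it hard-codes the ascending and descending chromatic syllables given in the article and prints the solfège syllables for a major scale starting on any chromatic root (E major is used as the example, matching Example 2).

```python
# Illustrative sketch of movable-do solfège. The syllable spellings follow the
# ascending ("e" vowel) and descending ("a" vowel) lists given in the article.
ASCENDING = ["do", "di", "re", "ri", "mi", "fa", "fi",
             "sol", "si", "la", "li", "ti"]
DESCENDING = ["do", "ra", "re", "me", "mi", "fa", "se",
              "sol", "le", "la", "te", "ti"]   # used for descending lines

# Major scale = whole-whole-half-whole-whole-whole-half steps (in semitones).
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def major_scale_solfege(root: str):
    """Return (note, syllable) pairs for a major scale starting on `root`."""
    idx = NOTES.index(root)     # position of the root in the chromatic scale
    degree = 0                  # semitone distance above the root
    pairs = [(root, ASCENDING[0])]
    for step in MAJOR_STEPS:
        degree = (degree + step) % 12
        idx = (idx + step) % 12
        pairs.append((NOTES[idx], ASCENDING[degree]))
    return pairs

if __name__ == "__main__":
    for note, syllable in major_scale_solfege("E"):
        print(f"{note:>2}  {syllable}")
```

Running it prints E–do, F#–re, G#–mi, A–fa, B–sol, C#–la, D#–ti, E–do, the same "do names the key" idea described in the Movable Do section.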
A liability is a debt or legal obligation of the business to another individual, bank, or entity. There can be both short-term liabilities and long-term liabilities. A liability is a type of borrowing that creates an obligation of repayment to the other party involved. It is an outcome of past events or transactions and results in the outflow of resources. Therefore, it involves future sacrifices of the economic benefits of the firm. - Meaning of Short-term Liabilities and Long-term Liabilities - Types of Short-term Liabilities - Conclusion of Short-term Liabilities There are mainly two types of liabilities: - Short-term Liabilities - Long-term Liabilities Besides short-term and long-term liabilities, there is another type called contingent liabilities. These do not necessarily materialize; they become payable only when some event or contingency occurs. Meaning of Short-term Liabilities and Long-term Liabilities Short-term liabilities are current liabilities: debts or obligations that are expected to be paid off within one year, for example, short-term debts, accrued expenses, and customer deposits. Long-term liabilities are non-current liabilities: debts or obligations of the firm that are due beyond one year. These liabilities act as long-term sources of finance, for example, long-term loans, long-term leases, bonds payable, and pension obligations. Types of Short-term Liabilities There are many types of short-term liabilities. Some of them are as follows: Accounts payable is the amount of money that a business owes to its creditors or suppliers. It may arise due to the purchase of goods and services from suppliers on a credit basis. It is also known as trade payable or trade accounts payable. In the normal course of business, the company pays off the accounts payable within one year. Accounts payable appears under the liabilities section of a company’s balance sheet until the company makes its final payment. Short-term debt is any debt or bond payable within one year from its accrual. On the contrary, long-term debts are those which have repayment periods beyond one year. Short-term debts act as a useful tool for a business to address short-term needs. Accrued expenses refer to those expenses which have been recognized in the books of accounts before the actual payment; a journal entry records the expense in the accounting period in which it is incurred. Accrued expenses are the opposite of prepaid expenses. A prepaid expense is one that has been paid in advance, whereas an accrued expense is one that is due but not yet paid. Taxes payable are the amount of taxes due to government entities. They are a liability on the business until paid. After the final payment, a debit entry is passed to record the money paid as taxes paid in the books. There are various kinds of taxes payable, such as sales taxes payable, corporate income taxes payable, and payroll taxes payable accounts. The accountant records these liabilities when they accrue and records the payment when the company settles them. A customer deposit refers to the cash a customer deposits with the company before receiving the final goods and services. The company is yet to earn it, and thus, it is a liability on the company. There is an obligation to provide either goods and services or return the money to the customer. 
The amount of customer deposits is shown on the credit side of the current liabilities account. A poor credit record is one reason a company may ask a customer to deposit cash in advance. Deposits are also useful in the case of expensive and customized goods. After fulfilling the obligation, the company records a debit entry in the liabilities account and a credit entry in the revenues account. Dividends payable is the amount of cash dividends payable to the stockholders as declared by the board of directors of the company. It is a liability until the company distributes/pays the dividend among the shareholders. The company intends to pay the dividend within a year. Therefore, dividends payable come under the category of current/short-term liabilities. A large balance of dividends payable reflects the good profitability of the firm; however, it can have an adverse effect on the liquidity ratios. Current Portion of the Long-term Debt The current portion of the long-term debt is the portion of the principal amount that is payable within one year of the balance sheet date. For example, the installment of a loan that is due for payment in the current year counts as this kind of short-term liability. Creditors, lenders, and other investors take a close look at this liability to understand whether the company is capable of paying its short-term liabilities or not. It helps in knowing the liquidity position of the company. Conclusion of Short-term Liabilities So, these were some kinds of short-term liabilities. Apart from these, there could be other short-term obligations that are payable within one year. Short-term debts help in meeting the working capital requirements of the firm. Short-term liabilities impact various ratios, including profitability ratios and liquidity ratios. Consequently, they are useful in determining the overall financial position of the company in the short term and developing business strategies accordingly.
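To make the link between short-term liabilities and the liquidity ratios mentioned above concrete, here is a minimal sketch with invented figures; the account names and amounts are assumptions for illustration, not data from any real balance sheet.

```python
# Minimal sketch: how short-term (current) liabilities feed into liquidity ratios.
# All figures below are invented for illustration.

current_liabilities = {
    "accounts_payable": 40_000,
    "short_term_debt": 25_000,
    "accrued_expenses": 10_000,
    "taxes_payable": 8_000,
    "customer_deposits": 5_000,
    "dividends_payable": 7_000,
    "current_portion_of_long_term_debt": 15_000,
}

current_assets = 180_000
inventory = 60_000

total_current_liabilities = sum(current_liabilities.values())            # 110,000
current_ratio = current_assets / total_current_liabilities               # ~1.64
quick_ratio = (current_assets - inventory) / total_current_liabilities   # ~1.09

print(f"Total current liabilities: {total_current_liabilities:,}")
print(f"Current ratio: {current_ratio:.2f}")
print(f"Quick ratio:   {quick_ratio:.2f}")
```

A larger stack of short-term liabilities pushes both ratios down, which is why creditors and lenders watch items such as the current portion of long-term debt so closely.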
Chapter 2: Who are Europeans Europe is an incredibly diverse area with many languages, traditions, countries, and peoples. To illustrate the diversity of Europe, let’s draw a map. It’s about 250 miles from the University of North Carolina at Chapel Hill to the nation’s capital, Washington D.C. If we were to draw a circle around Chapel Hill with a 250-mile radius, this circle would include places like Charleston, West Virginia, Columbia, South Carolina, and nearly get us to Knoxville, Tennessee. Within this area, English is the common language, the American dollar is the only currency, and almost everyone would identify themselves as Americans. This circle would only get us halfway to New York, a third of the way to Chicago, and a fraction of the distance to Houston, Texas, or Los Angeles, California, the major economic hubs of the United States. Next, let’s draw a similar circle around Brussels, Belgium. The area encompasses a huge range of peoples, geographies, and political entities. While our American circle includes just the United States, the circle around Brussels crosses into five other countries beyond Belgium: England, the Netherlands, Germany, Luxembourg, and France, and only narrowly misses parts of Denmark, Austria, Liechtenstein, and Switzerland. This part of Europe is densely populated, much more so than the United States, and our circle includes several major cities: Amsterdam, Cologne, Frankfurt, Paris, and London. Four major languages are spoken in this area: German, French, English, and Dutch, not to mention the regional dialects spoken in many areas. Can you imagine driving 4 to 5 hours and being in a different country, not to mention understanding street signs in another language and different rules of the road? Pro Tip: Do this on your own at Map Developers - European borders have changed frequently over the centuries. Modern borders are a product of the post-World War II era. - The idea of belonging to a nation – or nationalism – is a relatively recent phenomenon. - Just as borders moved over time, so did people. Migration of European populations was not rare. The people living in this circle are Dutch, French, German, Belgian, and British, but they are also Europeans. But they were not always Dutch, French, or German. First, borders historically changed after every major European war, which occurred regularly until World War II. Second, the idea of a national identity came after modern-day states were created. A state, as defined by the German sociologist Max Weber, is a community with a monopoly over the legitimate use of force in a territory. States came about in the 17th – 20th centuries, with Germany and Italy being some of the most recent states in the 1870s, much later than the United States. The idea of a nation, or an “imagined community” with a shared culture and history that binds them together, came about after states were founded. These national identities, in turn, emerged from public school systems with set national curricula. Think about your time in school in the United States. How often did you say the pledge, and how often did you learn about America’s past, often in glorified ways? Sometimes, groups’ identities and the state they live in do not match. This is often the case with ethnic and linguistic communities, who often have an identity more important than the dominant identity in that state. 
For example, Catalans in Spain, Hungarians in Slovakia, and Russian-speakers in the Baltics might identify more strongly with their regional identity, in the case of the Catalans, or ethnic and linguistic identity, in the case of Hungarians and Russian-speakers, than they identify as Spanish, Slovak, or Lithuanian, Latvian, or Estonian (the Baltic countries). Europe is also home to several indigenous communities, including the Saami in Finland and Sweden. In an already diverse continent, differences between groups within a country are also important for understanding the politics, history, and culture of an area. Differences between minority communities and the dominant group in that country are also reflected in these groups’ cultures, religions, political attitudes, and voting behavior. Some identity groups want to be their own country, like Catalonia, Alto Adige, and Scotland, and have advocated for greater regional autonomy from the central government. Even as borders have changed over time, people have also moved over time. Some of the largest worldwide movements of people came after major wars in European history, including the population movements of millions of Germans after World War II, the nomadic lifestyle of the Roma/Sinti peoples, or the expulsion of Jews during medieval times. After the Second World War, immigration to Europe came from its former colonies in Africa, Latin America, and Asia. Immigration has also come from people who were invited to work in Europe from Turkey and the Middle East as a result of labor shortages after World War II, many of whom ended up settling in Germany, the Netherlands, and France. Since 1992, Europeans have been allowed to freely move across borders and settle in any European Union country. In the past decade, millions of Syrian and other refugees have settled in Europe following civil strife and violence in the Middle East. Europe’s populations have always been changing and will continue to change in the future. - Many citizens of European Union countries feel both like they are their own nationality and “European.” - The European Union has fostered European identity by bringing countries closer together through economic development, study abroad programs, and European symbols. The proximity of European countries and cultures to one another has important implications for people’s lives. Thanks to the European Union, those living in the EU have the right to work and live in any other European Union country. Some people living close to borders might even live in one country and work in another. The Schengen agreement, which was established in 1995, eliminated border controls between many EU and non-EU countries. This allows people and goods to flow more easily from one country to another, and is a key part of the EU’s goal to bring European peoples and economies closer to one another. Instead of imposing definitions on a very diverse group of cultures and countries, we might think of “European-ness” as an identity that people hold to varying degrees, just like people have different attachments to being from North Carolina or from the South. Some people tend to feel closer to their national or regional identity (i.e., Venetian, for someone from Venice, and Italian) than they feel European. Others might see themselves primarily as European, and Italian as a secondary identity. Some research has been done to investigate why and when people might identify as European (as opposed to national or regional identities). 
The extent to which people identify as European varies by country and demographic groups. Young people are more likely to feel European, as are those with higher levels of education and those who studied abroad or lived in another country. Being European is not just a matter of identity, but also a legal designation. As of 1992, the European Union created a new citizenship – European citizenship. Citizens of EU member states automatically received European citizenship, and this is now printed on all passports issued by European countries. This citizenship comes with certain rights – including the right to move and live in any other European member country without having to apply for citizenship. European citizens can also vote in local elections, regardless of which EU country they live in, and vote in European elections, which are held every 5 years for the 705-seat European Parliament, its legislative body. By contrast, consider those living on the North American continent. Do people identify as “North Americans” if they are from the U.S., Canada, or Mexico? How Different is Life in Europe from Life in the US? - Europeans move less than Americans do. Americans are some of the most frequent movers in the world. - Because Europeans move less often, local identities are more important. - Many Europeans have to study more than one foreign language in school. Americans move a lot compared to the rest of the world. High school students often move to another city and sometimes to another state to attend college, graduates move across the country to take jobs, and people often move to warm places, like Florida, when they retire. Americans move an average of 11 times in their lifetimes, while the average for Europeans is about four. About 0.3 percent of Europeans move between countries each year, while it’s three percent in the U.S. between states, and that figure is higher for younger people between 18 and 24. In fact, it’s more common for children in Europe to live in their parents’ homes until they are well into their twenties. How is this the case? Europeans are more likely than Americans to know one or more languages beyond their own. As we will discuss in the next section, it is incredibly easy to get around Europe. One of the key features of European Union citizenship is that any citizen of one country can move to live in another without having to apply for a visa or work permit. This can be partially explained by language skills and qualification barriers, but also because despite the way in which the European Union has brought the group of 27 countries closer together, they are still separate countries with unique cultures. This lack of movement has important consequences. It (along with other factors) means that regional identities tend to be stronger, and that people form tight-knit communities in the places where they and their parents lived. We might think about particular regions like Catalonia and Basque country in Spain or Alto-Adige (South Tyrol) in Italy, as regions with histories and languages that make them different from the national culture. High in the valleys of the Dolomites, many communities in Italy speak either German or Ladin, a Romance language distinct from Italian. Similarly, Basque and Catalan are the primary languages spoken in their respective parts of Spain, while Spanish is a secondary language. These particular regions aside, local identities associated with being from a particular city are more pronounced in Europe than in the United States. 
This is reinforced by annual festivals, like Carnival (Mardi Gras) or other religious festivals, sports teams (particularly soccer), and dialects. Even though Americans from South Carolina or North Carolina might sound different from those from Minnesota and Wisconsin, these differences are matters of accents, or how words are pronounced. In Europe, the barriers to understanding one another are much higher, because linguistic differences are much stronger, ranging from dialects to completely different languages. You can also watch this YouTube clip that explains European citizenship. - There are lots of ways to get around Europe: by car, train, boat, and plane. - Public transportation networks are better developed in Europe than in the United States. - High speed rail is connecting cities in Europe that were once far apart. It is now quicker to take a train between some cities than it is to fly! A second major difference is also related to mobility. Because of the continent’s population density (107 people per square kilometer in the EU vs 36 people per square kilometer in the U.S.), mass transit systems were developed beginning with the invention of railways to connect people within and between European cities. Most European cities have a combination of subways, light rail (or streetcars), and buses that allow people to get around town. These are built both to connect people from the suburbs to the center of the city (where there is often a major train station) and to let people move from one neighborhood to another. Many European school children, rather than board a school bus that stops by their home, instead get on public transportation to get to school. There is also great connectivity between European cities by a combination of high speed rail and flight networks. In many cases, it is faster to take the train, which stops in the middle of the city, than it is to get to an airport, which can be miles away from the historic city center (for example, Paris’ Charles de Gaulle Airport is 15 miles from the city center, or a 45-55 minute train or taxi ride). Many of the projects to bring Europe closer together through improved transportation infrastructure are funded by the European Union, because connections facilitate trade and cooperation. High speed rail makes use of several novel technologies, including rails that are constructed at an angle to facilitate high speed turns, to achieve speeds of up to 330 km/h, or about 205 mph. Whereas a car ride between Madrid and Barcelona takes just under six hours, the high speed train ride lasts less than half that time (2 hours and 53 minutes). To take another example, the train between Amsterdam and Paris takes 3 hours and twenty minutes, and a car ride lasts over 5 hours. In some cases, European airlines have given up flying short domestic flights, and passengers instead take the train from their hometown to a major airport hub. If you don’t own a car, Europe is a great place to live! Did you know that a train ride on the TGV between Paris and Bordeaux takes about 2 hours and 14 minutes, covering a distance of over 300 miles? On average, you’re traveling about 150 miles per hour! - European school systems are different in every country. - The German school system has three different tracks, depending on which career you want to be in. - In France, high school students take a big exam, called “le Bac,” that is key to getting into university. The school system is another important difference between the United States and the European Union member states. 
The American school system is relatively straightforward: you go to Kindergarten the year before starting first grade, Elementary School runs through fifth or sixth grade, Middle School through 8th grade, and High School through 12th grade. After High School, students may enter the workforce, go to community college or vocational school, or attend a four-year university. In this section, we’ll briefly outline how the school system works in Germany and France. In Germany, the only commonality in the school system is that students all attend an elementary school that runs from 1st through 4th grade. During the fourth grade, students and parents decide if a student will move on to one of three different schools: a General High School (Hauptschule), Secondary School (Realschule), or Academic Secondary School (Gymnasium). The General Secondary Schools and Realschule run through either tenth or thirteenth grade, after which students can enter vocational schools, technical colleges, or apprenticeships. This part of the education system is well known for training German students for the job market through a combination of applied and theoretical trainings. Students will often work part time at a company and spend the afternoons studying. The final school – Gymnasium – lasts through the 13th grade, after which students take the Abitur, a strenuous comprehensive exam that gives students a grade they can use to apply to universities. If students apply to university, they apply to study a particular subject – unlike in the U.S., where students often don’t declare a major until their second year of college. Instead, students study medicine, law, or any other traditional subject for four years. One significant difference between the US and Europe is that in most EU countries, tuition for college and universities is free or very cheap (think 500 euros per semester). In France, all students begin “school” before the first grade in a Kindergarten. Close to all 3-5 year olds are enrolled in some sort of formal early childhood education programs, compared to only about 80% of American students who attend a year of Kindergarten before first grade. Like in Germany, public mandatory education includes an elementary school and some type of secondary schooling, called a collège for Middle School (up to age 15) and a lycée for High School. Lycées come in three types – a technical, professional, and a general stream. The general stream is for students aiming to go to university, while the technical and professional schools provide job training for particular industries. Just like in Germany, going to higher education (of any kind) is very cheap – a few thousand euros per academic year, not including other costs like housing and other living expenses. At the university level, the European Union promotes exchanges between countries of all kinds. One program, called the Erasmus program, facilitates and funds students from one EU country to study and live in another for a semester or academic year. More than nine million people have participated in the Erasmus program, and nearly 4,000 European universities participate. This encourages European university students to gain another language skill and develop connections in other countries, and is therefore part of the broader goal to encourage unity and cooperation across European countries. Who governs these educational systems? In the United States, a combination of states, counties, and school districts controls education and funds it largely through property taxes. 
Germany is a federal country, just like the US, where states have significant authority, especially over education. The national government may set rules regarding education, but the implementation of these laws is up to the individual states and localities. The situation in France is very different, where decisions about education for the whole country are more tightly controlled by the Ministry for Education and the national government in Paris. This centralization of political authority in France is not unique to education! Many other areas of French political life are controlled by the central government, with little autonomy given to different regions and cities. Case Study: Dispelling the Myth of a Monolithic Europe - Ancient Europe (and contemporary Europe) has always been a continent of bright colors and diversity. There is/was no one aesthetic to define the region. Ancient Europe (and contemporary Europe) has been and continues to be a continent of bright colors and diversity. There is and was no single aesthetic to define the region. When we visit a museum and see sculptures from ancient Greece and Rome, we often see or think of them as devoid of color, or 'white,' which has led to presumptions about the whiteness of these civilizations. This has led to their association with whiteness and to their use in propaganda as ancient symbols that modern fascist movements have invoked to legitimate their cause. These civilizations and the marble sculptures they created were hardly colorless or white. When these sculptures were rediscovered during the Renaissance, they emerged after centuries of being hidden away. Over that time, they had lost their original paint and color, which had depicted a range of what we today would call races. It's important to note that race and the concept of whiteness were invented by Europeans a long time after the Roman and Greek civilizations had disappeared. So rather than conveying racial differences, the statues would simply have reflected the surroundings of the places where they were made: the Mediterranean basin, extending from Spain to modern-day Israel and Turkey. These ancient sculptures had such an effect on Renaissance artists that those artists created statues without any paint or coating, which cemented them in people's minds as white. The true color of the ancient statues is being revealed with the help of new technologies that can scan the surviving statues for trace amounts of the materials and colors used when they were first made. While the norm is that these statues are presented devoid of color, in the next few years you might be more likely to see ancient sculptures restored to their original vibrancy! Dispelling the Myth of a Monolithic Europe Lesson Outline You can watch this YouTube video (in color!) about Roman statues. How to teach this section - Do you think feeling like you are European is similar or different from how we think of ourselves as “American” and/or “North Carolinian”? - What stereotypes do you have of Europeans? Do you think these are accurate or not? - Can you think of other ways life in the United States might be different from life in Europe? If you were to create a new country that combined the best parts of both Europe and the US, what would it look like? Suggested Lesson Plans Exploring EU cities through Google Earth Using Google Earth, students can explore the center of Europe’s historic cities (Geography). 
Middle School & High School Immigration and Refugees: Comparing the Contemporary Refugee Crises to Historical Immigration And Refugee Movements Students can learn about the movement of populations by comparing and contrasting contemporary and historical migration patterns in Europe (History). Family Life in Europe Students can learn about family life in Europe (Grades 9-Post Secondary). After reading about life in Europe, students can work in groups to make TikToks representing their lives in America (Social Studies). Population, Migration, and Identity: Students can explore identity and its intersection with the European Union (Social Studies). ¿Qué es la Unión Europea? Students can practice their Spanish while learning about the countries in the EU and transportation in Europe (Spanish). Order a European culture kit from Carolina Navigators to share with your classes.
Once you have an idea for your paper, you're ready to move on to the next stage of the writing process. For some people, that means moving directly on to a first draft; however, many experienced writers use other organizational techniques before writing. These strategies help create a road map for your paper. They save time and energy by organizing your ideas and showing how they are related. Outlining is a way to organize your ideas and see the connections between them. It can be useful before you write or when you revise, helping you find flaws in your argument. There are many different styles of outlining. The extent of your outlining can vary from including just the main points of your paper to including every paragraph. Use whatever works for you, but try to highlight the connections between ideas instead of just listing them. This will help you organize more effectively. When you outline, you move from broad ideas to specific details. Start with your thesis or main idea, and identify the major points that support it. These can become topic sentences, which explain the main point of each paragraph and how it relates to your thesis. Under your topic sentences, list the evidence and details that support them. Here are some questions to think about when organizing: - Is there an obvious way to organize your material? Is there a natural chronology to follow? - Do some points depend on others? What does the reader need to know first, second, etc.? - Do some of your points rely on similarities? Do some of them require highlighting differences? - Do you have many counterarguments to address? If you're still stuck on organization, bring your ideas to the Writing Center. We can help you organize your thesis, claims, and evidence into a coherent structure, even if you haven't written anything yet. There are resources in our library that can help you get started as well. Check out p. 216 in The Craft of Argument (Williams and Colomb, 2001). Try putting a list of all that you want to say (phrases, ideas, references, quotes) at the top of your page (or type them before your first paragraph). Try to organize them based on chronology or logic. You can also list by topic, to see if paragraphs develop logically from the groupings. If these tactics don't work, choose two ideas that are related and just write what you want to say about them. Continue fitting things in as you can until everything on your list has been incorporated into your text. Write without worrying about organization or grammar. Now you can go back through your paper and move, add, or remove sentences to create a better structure for your paper. Reading for Flow If you want to double-check your paper for coherence after you write (and that's never a bad idea), there are a few ways to do so. Perhaps the simplest is asking someone else to read it over. Writing Center WAs are available to help you, or you can ask a friend or classmate. Another good idea is to create a flow chart or post-outline. You don't need anything fancy; just read through your paper and write a brief summary of each paragraph in the margin. Go back to the start and read through them in sequence. Here are some questions to ask yourself as you move through the paper: - Are there any gaps in moving from one point to the next? - Are the connections between your ideas presented out of order or not presented at all? - Are there some repetitive ideas that can be condensed? - Are there paragraphs that stray from your main topic?
By Thomas E. Lambert The Byzantine Empire was affected by climate change. New research reveals how warming and cooling trends correspond to economic upswings and declines that took place in Byzantium. Modern-day climate change has generated quite a bit of debate and concern among scientists, politicians, business people, economists, and people of all backgrounds. To better understand the effects that climate change will have upon us, it is useful to look at how it changed past societies. Some researchers have analyzed how past climate changes could have affected and caused the decline of Imperial Rome, and others have shown the results of archaeological and geological findings that indicate the eastern part of the remaining half of the Roman Empire, the Byzantine Empire, probably was affected by climate changes such as the Late Antique Ice Age from around the 6th and 7th centuries AD; the Medieval Climate Anomaly (aka the Medieval Warm Period) from around the time of the 10th to the 13th centuries; and the Little Ice Age from around the 14th to 18th centuries. The findings mostly point to decreasing crop output thanks to weather that was too cold or too warm, and these declines in turn have often been offered as explanations for why the Byzantine Empire undertook conquest in order to get more land for more and better agricultural production, or why the empire was the victim of others attacking it in order to get more land and food production for their nations. In a paper I wrote for the journal Human Ecology, I developed from historical accounts some estimates of the Byzantine Empire’s real Gross Domestic Product (GDP) per person in nomismata and an agricultural land output index of 0 to 1 (wheat production, a major staple) over the time of the empire’s life. Although approximations, the trend in Figure 1 shows that declines and upswings in Byzantine real GDP per capita roughly correspond to the three periods of climate change. During the Late Antique Ice Age, the empire suffered declines in economic output as noted by the downward trend in real GDP per capita shown in Figure 1. Some of this was due to declining agricultural productivity in the 6th century, perhaps due to colder climate conditions then (see Figure 2), and losses in territory during the 7th century (see Figure 3) due mostly to setbacks on the battlefield. Perhaps the wars occurring during the 7th century in the region were due to rival nations looking for new territory to conquer in order to make up for failing crop production and to gain better lands. During the Medieval Climate Anomaly Period of the 10th to 13th centuries, Figures 1 and 3 show the fortunes of the empire at first improving and then declining. Real output or GDP per person climbs during most of this time period as the empire adds new lands with beneficial natural resources in the 10th and 11th centuries through conquest under the leadership of emperors such as Nikephoros II Phokas, John Tzimiskes, and Basil II. Yet these lands are later lost in subsequent decades. Estimated wheat productivity does well during the 9th century but then falls afterward and through the 13th century (see Figure 2). Excessively hot weather and too little rain harm wheat production and the yield of other types of crops, yet according to some historical accounts, the empire has enough of a warm-weather assortment of crops to make up for lost wheat production. 
Since wheat is a staple of many other food items, however, the decline in wheat productivity could have been a catalyst for the empire to pursue conquest opportunities, and if rivals were suffering from low agricultural output at this time, their food shortages could have made them easier to defeat on the battlefield. The remaining centuries of the empire correspond to the first two centuries of what climate scientists call the Little Ice Age. Real output per capita continues to fall as the empire shrinks in size (see Figures 1 and 3), and despite estimates of wheat output rising during the 13th and 14th centuries, it falls during the 15th century as the empire comes to the end of its existence in 1453 (Figure 2). As the empire gained and lost regions with vast natural resources, its real GDP per capita fluctuated correspondingly. At the same time, these fluctuations in territory and output also corresponded to past climate changes, which could have either prompted territorial conquests by the empire or led to the empire losing land and other assets that were needed for continued economic growth. In the statistical models developed for my paper, conjectures by scientists for past climate variables such as temperature, solar irradiance, greenhouse gas emissions, tree ring growth data (for rainfall estimates), and volcanic ash are useful and statistically significant in predicting real GDP per capita trends when controlling for empire territorial size. These variables are also usually good predictors of the occurrence of Byzantine weather cataclysms (floods, droughts, etc.) that some historians have chronicled. Climate change seems to have mattered in the economic fortunes of the empire, as others have shown through archaeological findings, and the conjectures of real GDP per capita appear to support these. Historians and other social scientists have often looked to events in the past in order to try to understand current events or to anticipate future events. In recent decades, estimates of economic data for different nations going back to ancient and/or medieval times have been created by various economic historians so as to help understand how societies transitioned from slavery to feudalism to capitalism. In studying the Byzantine Empire, we possibly can find clues as to how current climate change can affect the economies of societies throughout the globe. More likely than not, current and future global economic output will be affected by predicted rising temperatures due to global warming. Thomas E. Lambert is Assistant Professor of Practice, Economics at the University of Louisville. His paper, "Byzantine Empire Economic Growth: Did Past Climate Change Play a Role?" is published in Human Ecology.
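The kind of statistical model described above can be sketched in a few lines of code. The snippet below is purely illustrative: the variable names and the synthetic data are stand-ins, not the series or methods used in the paper; it simply shows an ordinary least squares regression of real GDP per capita on climate proxies while controlling for territorial size.

```python
# Illustrative sketch of the type of model described above: regressing real GDP
# per capita on climate proxies while controlling for imperial territory.
# The data below are synthetic placeholders, not the series from the paper.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60  # e.g., one observation per quarter-century (placeholder)

temperature = rng.normal(0.0, 0.5, n)     # reconstructed temperature anomaly
solar = rng.normal(1365.0, 0.5, n)        # solar irradiance proxy
tree_rings = rng.normal(1.0, 0.2, n)      # rainfall proxy from tree-ring growth
territory = rng.uniform(0.5, 2.0, n)      # territorial size (millions of km^2)

# Synthetic outcome with assumed positive effects of warmth and territory
gdp_per_capita = (5.0 + 1.2 * temperature + 0.8 * territory
                  + 0.5 * tree_rings + rng.normal(0, 0.3, n))

X = sm.add_constant(np.column_stack([temperature, solar, tree_rings, territory]))
model = sm.OLS(gdp_per_capita, X).fit()
print(model.summary(xname=["const", "temp", "solar", "tree_rings", "territory"]))
```

In a real exercise the synthetic arrays would be replaced by the reconstructed climate series and the GDP and territory estimates, and the coefficient signs and significance would be read off the summary table.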
Effective lesson planning is half the battle won when it comes to implementing and delivering effective and engaging learning in the classroom. Teaching and learning in the IGCSE classroom requires teachers to be highly reflective and focus on the processes of learning rather than the outcomes of learning. There are two main concerns that we often have as IGCSE teachers. The first is meeting the needs of students at different learning levels and assigning students work that is both challenging and achievable. The second is juggling between making students 'exam ready' and creating robust lesson plans to meet all the curriculum requirements. With these time-saving, five-minute lesson plan templates, you can modify the structure of your lessons, run quick checks to ensure that each lesson supports diverse learners, and include clear learning objectives for each lesson. These five-minute templates support single-subject lesson planning in the IGCSE and are especially powerful for languages, arts, sciences, and Global Perspectives. Use this resource to: - Ideate on key learning objectives and lesson plan ideas during collaborative planning meetings - Include a combination of lesson fundamentals and lesson extras to create balanced lesson plans - Reflect on teaching and learning
6 Vocabulary Building Activities For Kids All parents want their child to do well in school. One way to help your child succeed both inside and outside of the classroom is to help him or her build vocabulary skills. A good vocabulary is an important part of any student’s education. Unfortunately, it can be an overlooked aspect of education, both at school and at home. As slang and forms of shortened communication have become more common, students may not have the vocabulary skills needed for success. Luckily, there are a number of activities that parents can do with their children to help improve their vocabulary skills. The Importance Of A Good Vocabulary Knowing how to improve vocabulary for kids is important, but knowing why is even more so. A good, broad vocabulary will help improve your child’s communication skills, including reading, writing, and speaking. This helps your child develop the skills important for academic success. In addition to performing well in school, strong communication skills will also help your child develop stronger interpersonal relationships. This can positively affect a child’s self-esteem, which can be directly tied to his or her success and happiness in life. How To Improve Your Child’s Vocabulary Check out these tips for building your child’s vocabulary, including activities and games that will get your child off to the right start. - Read every night - Have conversations with your child - Have your child tell you stories - Label household items - Play word games - Answer questions How does reading improve your vocabulary? The answer is that reading makes children ask questions. When you read to your child, it gives him or her a chance to ask questions about what words mean. When he or she does ask, be specific and imaginative. If the word is ‘enormous’, explain that it doesn’t just mean ‘big’, but ‘giant’, like a building or a mountain. Few techniques can have a more direct impact on your child’s vocabulary than having frequent conversations with him or her. Ask your child about his or her day, and ask him or her to elaborate on events that happened. As your child speaks, correct him or her when he or she misuses words. Stories are a great way to get your child to really open up his or her mind and explore language. The more fantastic, the better. Give your child a situation and tell him or her to continue the story. When he or she struggles to describe something, give him or her a word that fits the situation and define it. Just like when learning a new language, labeling items around the house is a great way to help children learn the proper names of items. You can also make a game out of it by creating the labels and having your child attach them to the proper item. Word games like Scrabble and Pictionary are a great way to get kids thinking about words they know, including their definitions and spellings. These games work for both young and older kids, teaching large and small words depending on your child’s vocabulary level. If you’re having a conversation and a child asks you what a word you used means, take the time to define it until he or she understands it. Kids absorb almost everything they hear, and if they know what a word means, they can apply it in later situations. Building Vocabulary Skills Can Be Fun! There are plenty of activities that can help your child build his or her vocabulary skills. It won’t happen overnight, but with these activities you can help your child build his or her vocabulary little by little. 
Just remember to be patient, and to always take time to answer questions.
Diet is defined as an organized arrangement of food consumption made with the goal of improving health. It also includes dietary counseling and other measures to enhance dietary intake. In nutrition, a dietary plan is the combination of foods consumed by an organism or an individual person. The goal of diet planning is to achieve specific dietary requirements while avoiding undesirable combinations of food groups.
Among the main elements of any diet plan are the number of calories consumed and how they are consumed. The basic principle of calorie counting is to estimate how many calories are being consumed, how many calories are burned, and the average energy expenditure for each meal. The key idea is to consume fewer calories than are burned, to avoid weight gain and excessive fat accumulation. Calorie intake can be adjusted up or down, but it should stay within the energy needs of the body. When the diet is unbalanced, the body may have to function for extended periods without adequate energy to meet its demands. When this occurs, the body's ability to function properly decreases, which may lead to obesity, a major risk factor for diabetes and other diseases.
The foods most commonly involved in an over-consumption of calories are carbohydrates. Diets heavy in these foods increase the chances of gaining weight, and the effects of diabetes can be aggravated. A low-fat diet plan should be chosen when the person's fat intake is too high. The primary goal of a low-fat diet is to reduce the consumption of saturated fats, trans fats and other unhealthy foods. It also reduces the consumption of salt, sugar, alcohol and caffeine. Other foods that are included in a low-fat diet plan are whole grains, fresh fruits, vegetables and low-fat dairy products. In addition, some low-fat diets recommend that people with certain types of medical conditions should follow a low-fat diet. These groups include individuals who are diabetic and pregnant women.
A balanced diet, including physical activity, can help a person maintain the proper amount of energy in the body. An individual should choose a workout program that is appropriate for his or her age and physical condition. Some exercises are specifically designed for those who are obese, while others are meant for those who are thin. In order to lose weight, a weight loss program should be developed. It should focus on activities that burn calories efficiently and increase the metabolism to burn even more calories.
A healthy lifestyle consists of eating well-balanced meals, exercising on a regular basis, and practicing good health habits. The right balance in the intake of nutrients, exercise, mental stimulation and healthy stress management can ensure that the body is functioning properly and maintains proper health. Proper diet and exercise help maintain a healthy lifestyle over the long term. In addition, when a person eats the right food and follows an appropriate exercise regimen, the person can live longer, have less stress, and live a stress-free life.
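To make the calorie-balance principle above concrete, here is a minimal sketch in Java that tallies hypothetical meal calories against an assumed daily energy expenditure. All figures are illustrative examples for the sake of the arithmetic, not dietary recommendations.

import java.util.Map;

public class CalorieBalance {
    public static void main(String[] args) {
        // Estimated calories consumed per meal (hypothetical example values).
        Map<String, Integer> meals = Map.of(
                "breakfast", 450,
                "lunch", 700,
                "dinner", 650,
                "snacks", 250);

        int consumed = meals.values().stream().mapToInt(Integer::intValue).sum();

        // Estimated total daily energy expenditure (hypothetical example value).
        int burned = 2300;

        int balance = consumed - burned;
        System.out.printf("Consumed: %d kcal, burned: %d kcal%n", consumed, burned);
        if (balance > 0) {
            System.out.println("Surplus of " + balance + " kcal; a consistent surplus promotes weight gain.");
        } else {
            System.out.println("Deficit of " + (-balance) + " kcal; a consistent deficit promotes weight loss.");
        }
    }
}

With the example numbers it reports a deficit of 250 kcal, illustrating the point that it is the running balance between intake and expenditure, not any single meal, that drives weight change over time.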
Understanding the Gopher Problem Gophers, known for their burrowing habits, can create significant issues in gardens, lawns, and agricultural fields. They dig extensive tunnel systems that can disrupt plant roots, lead to soil erosion, and create unsightly mounds. Understanding the complexity of the gopher problem and knowing the various methods available for their control is vital in tackling this nuisance effectively. Identifying Signs of Gopher Infestation Detecting the presence of gophers is the first step towards implementing control measures. Gophers leave behind distinct signs such as crescent or horseshoe-shaped mounds of loose soil, damaged plants with wilted appearance due to root disruption, and sometimes even audible sounds of their digging activities. Gopher Repellent Options There are several gopher repellent options available in the market, each with its own set of benefits and limitations. Many commercial repellents are formulated with strong odors that gophers find offensive, such as garlic, castor oil, or peppermint. These repellents can be applied to the soil or sprayed on plants to discourage gophers from inhabiting the area. For those who prefer a more natural approach, homemade repellents can be crafted using ingredients like garlic, hot pepper, and essential oils. The effectiveness of these repellents can vary, and they typically require frequent reapplication. Using Plants as Repellents Certain plants, such as gopher spurge (Euphorbia lathyris) and marigolds, are known to deter gophers. Integrating these plants into the landscape can create a natural barrier against these burrowing pests. Trapping remains one of the most effective ways to control gophers. Understanding the different types of traps and how to use them properly is crucial. Live traps allow for the humane capture of gophers, where they can be relocated to a more suitable environment. These traps require regular monitoring and careful handling to ensure the well-being of the captured animal. Lethal traps provide a more immediate solution to a gopher infestation. These traps must be set with caution and precision to be effective and are best used by those who have experience in handling them. Other Means of Gopher Control Beyond repellents and traps, other methods can be employed to control gopher populations. Fencing and Barriers Installing underground fencing or barriers made of wire mesh can prevent gophers from entering specific areas. This approach requires careful planning and installation to be successful. Encouraging natural predators such as owls, hawks, and snakes in the area can create a biological control system. Providing nesting sites and preserving natural habitats for these predators can enhance their presence and effectiveness in controlling gophers. In cases of severe infestation, hiring professionals specializing in pest control may be the best course of action. They have the knowledge, experience, and tools necessary to assess and implement the most effective control strategy. Educating Yourself and the Community Knowledge is a powerful tool in the battle against gophers. By understanding their behavior, habitat preferences, and the various control options available, homeowners, gardeners, and community members can work together to manage and reduce the impact of gophers on the landscape. Engaging with Local Authorities and Organizations Local authorities, agricultural extension offices, and wildlife organizations often offer resources, workshops, and guidance on gopher control. 
Engaging with these entities fosters a collaborative and informed approach to dealing with gopher problems in the community. Gophers may be a common and frustrating pest, but with a comprehensive understanding of the problem and a multifaceted approach to control, it is possible to minimize their impact and maintain the health and beauty of gardens and landscapes.
Quantum clocks are shrinking, thanks to new technologies developed at the University of Birmingham-led UK Quantum Technology Hub Sensors and Timing. Working in collaboration with and partly funded by the UK's Defence Science and Technology Laboratory (Dstl), a team of quantum physicists have devised new approaches that not only reduce the size of their clock, but also make it robust enough to be transported out of the laboratory and employed in the 'real world'.
Quantum – or atomic – clocks are widely seen as essential for increasingly precise approaches to areas such as online communications across the world, navigation systems, or global trading in stocks, where fractions of seconds could make a huge economic difference. Atomic clocks with optical clock frequencies can be 10,000 times more accurate than their microwave counterparts, opening up the possibility of redefining the second, the standard (SI) unit of time. Even more advanced optical clocks could one day make a significant difference both in everyday life and in fundamental science. By allowing longer periods between needing to resynchronise than other kinds of clock, they offer increased resilience for national timing infrastructure and unlock future positioning and navigation applications for autonomous vehicles. The unparalleled accuracy of these clocks can also help us see beyond standard models of physics and understand some of the most mysterious aspects of the universe, including dark matter and dark energy. Such clocks will also help to address fundamental physics questions, such as whether the fundamental constants are really 'constants' or whether they vary with time.
Lead researcher, Dr Yogeshwar Kale, said: "The stability and precision of optical clocks make them crucial to many future information networks and communications. Once we have a system that is ready for use outside the laboratory, we can use them, for example, in ground-based navigation networks where all such clocks are connected via optical fibre and talk to each other. Such networks will reduce our dependence on GPS systems, which can sometimes fail. "These transportable optical clocks will not only help to improve geodetic measurements – measurements of the fundamental properties of the Earth's shape and gravity variations – but will also serve as precursors to monitor and identify geodynamic signals like earthquakes and volcanoes at early stages."
Although such quantum clocks are advancing rapidly, key barriers to deploying them are their size – current models fill a van or a car trailer and occupy about 1,500 litres – and their sensitivity to environmental conditions, which limits their transport between different places. The Birmingham team, based within the UK Quantum Technology Hub Sensors and Timing, have come up with a solution that addresses both of these challenges in a package that is a 'box' of about 120 litres and weighs less than 75 kg. The work is published in Quantum Science and Technology.
A spokesperson for Dstl added: "Dstl sees optical clock technology as a key enabler of future capabilities for the Ministry of Defence. These kinds of clocks have the potential to shape the future by giving national infrastructure increased resilience and changing the way communication and sensor networks are designed. With Dstl's support, the University of Birmingham have made significant progress in miniaturising many of the subsystems of an optical lattice clock, and in doing so overcame many significant engineering challenges.
We look forward to seeing what further progress they can make in this exciting and fast-moving field." The clocks work by using lasers to both produce and then measure quantum oscillations in atoms. These oscillations can be measured highly accurately and, from the frequency, it is possible to also measure the time. A challenge is minimising the outside influences on the measurements, such as mechanical vibrations and electromagnetic interference. To do that, the measurements must take place within a vacuum and with minimal external interference. At the heart of the new design is an ultra-high vacuum chamber, smaller than any yet used in the field of quantum time-keeping. This chamber can be used to trap the atoms and then cool them to very close to absolute zero, so that they reach a state where they can be used for precision quantum sensors. The team demonstrated that they could capture nearly 160,000 ultra-cold atoms within the chamber in less than a second. Furthermore, they showed they could transport the system over 200 km before setting it up to be ready to take measurements in less than 90 minutes. The system was able to survive a rise in temperature of 8 degrees above room temperature during the journey. Dr Kale added: "We've been able to show a robust and resilient system, that can be transported and set up rapidly by a single trained technician. This brings us a step closer to seeing these highly precise quantum instruments being used in challenging settings outside a laboratory environment."
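As a rough illustration of why clock performance is quoted as a fractional frequency uncertainty, the sketch below converts two illustrative uncertainty values, separated by the 10,000-fold factor mentioned above, into accumulated timing error per year. The absolute numbers are assumptions chosen only for the example, not figures for the Birmingham clock or any other real device.

public class ClockDrift {
    public static void main(String[] args) {
        double secondsPerYear = 365.25 * 24 * 3600;

        // Illustrative fractional frequency uncertainties, chosen only to reflect
        // the 10,000x gap between microwave and optical clocks mentioned above.
        double microwave = 1e-14;
        double optical = 1e-18;

        System.out.printf("Microwave-class clock: %.1e s of drift per year%n", microwave * secondsPerYear);
        System.out.printf("Optical-class clock:   %.1e s of drift per year%n", optical * secondsPerYear);
    }
}

With these example values the optical clock accumulates only tens of picoseconds of error per year, which is why much longer intervals between resynchronisations become possible.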
Construction of an underground railway in London started in 1863 and was financed by the Metropolitan Railway. It was an engineering feat, initially using ‘cut and cover’ shallow tunnels, and eventually involving the construction of deep tunnels. The first line connected the City to railway termini at Paddington, Euston and King’s Cross and was later absorbed into the Circle, Hammersmith & City and Metropolitan lines. Using funding from further railway companies, more lines were introduced, including the District (1868), which, together with the Circle, was electrified in 1905. By the late 19th century the Metropolitan line extended more than 50 miles (80 km), deep into rural Buckinghamshire. Housing followed the railway and the suburban idyll became known as ‘Metroland’. In 1890, the Northern Line opened; the first section, constructed between Stockwell and Borough, is the oldest stretch of deep-level tube line on the network. The Central Line (1900), Bakerloo line (1906) and Piccadilly line (1906) were added to the growing network, with the Victoria Line (1968) and Jubilee line (1979) as later additions. The curving, sinuous lines, constructed to navigate London’s complex physical and human geography, were stylized into geometric regularity in the world-famous London tube map, designed by Harry Beck in 1931.
What Does Thread Mean?
A thread, in the context of Java, is the path followed when executing a program. It is a sequence of nested executed statements or method calls that allows multiple activities within a single process. All Java programs have at least one thread, known as the main thread, which is created by the Java Virtual Machine (JVM) at the program's start, when the main() method is invoked. Each thread comes with its own local variables, program counter, and stack, and can be used to perform background processing that keeps graphical user interface (GUI) applications responsive. Threads also simplify program logic, especially when multiple independent entities must run asynchronously. In Java, a thread is created either by implementing the Runnable interface or by extending the Thread class. Every Java thread is created and controlled by the java.lang.Thread class. Like a sequential program, a single thread has a sequence of statements and a single point of execution at any given moment during its runtime. Unlike a program, however, it does not run on its own; it runs within a program. Since they share both data and code (as they share the same address space), threads take advantage of the resources allocated for a program and are therefore much more lightweight than processes. In fact, threads are sometimes called "lightweight processes" since they are also sequential flows of control. Their cost of intercommunication is usually quite low as well, and switching between them is much easier and quicker. In some texts, a thread is also called an execution context because it runs code that works only within the context of the thread itself.
Techopedia Explains Thread
Whenever a thread is started, two execution paths result: one runs the new thread and the other continues with the statement after the invocation. Java is a multi-threaded language that allows multiple threads to execute at any particular time. In a single-threaded application, only one thread is executed at a time because the application or program can handle only one task at a time, and each thread has its own separate stack and memory space. For example, a single-threaded application might handle the typing of words, but recording the keystrokes and displaying the typed words would then have to happen one after the other. A multi-threaded application, by contrast, can handle both tasks (recording the keystrokes and displaying the words) within one application at the same time. Multiple threads can be invoked in a single program to perform different tasks at once, and they will all run at the same time. For example, in a HotJava Web browser, a file can be downloaded in the background while the user is interacting with the GUI in other ways, such as by playing a video on a media player or scrolling the page up and down. Every Java program starts with at least one thread, the main thread. If more than one thread is created, each of them is assigned a priority; threads with higher priority are scheduled for execution ahead of lower-priority threads. Since threads running concurrently might share the same data, two or more of them may try to run an operation on the same variable at the same time. Because this can result in corrupt data, threads can be synchronized so that each of them locks that data until its operation is over.
Once the first call returns, or wait() is called within the method, the other threads that were denied access can resume working on that data. The JVM stops executing threads when either of the following occurs:
- The exit method has been invoked and authorized by the security manager
- All of the non-daemon threads of the program have died
An individual thread also terminates when an Error or Exception propagates out of its run() method, or when its (now deprecated) stop() method is called by another thread.
This definition was written in the context of Java.
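To ground the ideas above, here is a minimal, self-contained Java sketch showing a thread created from a Runnable and two threads synchronizing on shared data. The Counter class and the thread names are illustrative choices for the example, not part of any particular framework.

public class ThreadDemo {

    // Shared data protected by synchronization.
    static class Counter {
        private int value = 0;

        // Only one thread at a time may hold this object's lock,
        // so the read-modify-write below cannot interleave.
        synchronized void increment() {
            value++;
        }

        synchronized int get() {
            return value;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Counter counter = new Counter();

        // Creating a thread by implementing Runnable (here as a lambda).
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                counter.increment();
            }
        };

        Thread t1 = new Thread(task, "worker-1");
        Thread t2 = new Thread(task, "worker-2");

        t1.start();
        t2.start();

        // Wait for both worker threads to finish before reading the result.
        t1.join();
        t2.join();

        System.out.println("Final count: " + counter.get());
    }
}

Without the synchronized methods, the two read-modify-write sequences could interleave and the final count would often come out below 20,000, which is exactly the kind of data corruption the passage above warns about.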
The list of distinctive human abilities keeps getting shorter. Once again, our close cousin the chimpanzee has chipped away a bit of our uniqueness. A new study demonstrates that these great apes possess the ability to mix elements of existing techniques to improve efficiency—a part of “cumulative culture.” Investigators have long established that chimps and other animals develop new behaviors and copy them from each other. Researchers label this process “culture.” For example, chimps in different groups hunt in distinct ways, and succeeding generations learn these skills. But apparently humans are the only species to intentionally modify such techniques across generations, cumulatively creating a behavior or technology that no single human could have devised. (In fact, the incremental discovery of commonalities between humans and other primates is an example of cumulative culture.) Research recently published in Scientific Reports asked whether chimps might have the ability to do the same—just at a more modest level. The answer was that chimps at least exhibit a type of creativity that’s a necessary part of cumulative culture—but researchers still haven’t observed the full process. Scientists from the University of St. Andrews, in Scotland, invited chimps held captive at the National Center for Chimpanzee Care, in Texas, to participate in a series of three related tasks. Each task required that they use fingers to move a token out of a linear box with obstacles. In the first two challenges, extracting the token efficiently required different techniques. The third challenge prompted some of the chimps to eventually improve their efficiency by combining the techniques they had learned during the first two rounds. The researchers argue that this ability to mix methods to increase efficiency represents the type of creativity that humans often employ in the improvement of processes across generations. This finding suggests that the last common ancestors of chimps and humans, perhaps six million years ago, possessed this ability as well.
When it comes to visual storytelling, two commonly used tools are storyboards and comic strips. While they may seem similar, there are some key differences between the two that are worth noting. What is a Storyboard? A storyboard is a sequence of drawings or illustrations that tell a story or convey an idea. They are often used in film, television, and advertising to plan out shots and visualize scenes before filming begins. Storyboards typically include sketches of the characters, background details, and notes about camera angles and movement. The Benefits of Using Storyboards Using storyboards can be incredibly helpful in a number of ways. For one, they allow creators to visualize their ideas before putting them into action. They can also help identify potential issues with pacing or continuity before filming or animating begins. In addition, they can be shared with others on the team to ensure everyone is on the same page. - Visualize Ideas: By breaking down a story into individual panels, creators can get a better sense of how everything fits together. - Identify Issues: By planning out each shot ahead of time, creators can identify pacing issues or inconsistencies in the story. - Collaboration: Sharing storyboards with others on the team ensures everyone is working towards the same vision. What is a Comic Strip? A comic strip is a form of sequential art that tells a story through panels and speech bubbles. They are often found in newspapers and online publications as a way to convey short stories or jokes. Comic strips typically feature recurring characters and themes. The Benefits of Using Comic Strips Comic strips offer several benefits over other forms of storytelling. For one, they allow creators to tell a story in a concise and visually engaging way. They can also be used to convey complex ideas or themes in a way that is easily understood by readers. - Concise: Comic strips allow creators to tell a story in a short amount of space. - Visually Engaging: With their use of panels and speech bubbles, comic strips are visually dynamic and engaging. - Easily Understood: Complex ideas or themes can be conveyed through the use of imagery and symbolism. The Differences Between Storyboards and Comic Strips While storyboards and comic strips share some similarities, there are some key differences between the two. - Purpose: Storyboards are typically used to plan out shots for film or animation, while comic strips are used to tell short stories or jokes. - Format: Storyboards are usually presented as sketches or illustrations, while comic strips feature finished artwork with inked lines and color. - Length: Storyboards can be any length depending on the project, while comic strips are usually limited to one page or less. While storyboards and comic strips may seem similar at first glance, they serve different purposes and have their own unique benefits. Whether you’re planning out shots for a film or trying to convey an idea through sequential art, these tools can help bring your vision to life.
Upon the discovery of stem cell research, its application raised controversy over its benefits. The research, intended to improve therapeutic procedures, has become the subject of moral and ethical debate rather than of discussion of its curative advantages. Stem cells are cells capable of dividing, giving rise to two daughter cells, one identical to the mother cell and the other a more specialized cell (Panno, 2005). They function as an internal repair system for the worn-out tissues in the body. Stem cells from animals and human beings have been used for research; these are embryonic stem cells and somatic (adult) stem cells. Even though the benefits of stem cell research are realistic, critics tend to harp on the various uncertainties associated with the technology. The use of embryonic stem cells for research has been criticized by many, especially those who believe that life begins at conception. This is because it involves the destruction of blastocysts formed from fertilized human eggs, and it has prompted calls to prohibit the technology. Stem cell research and therapy are beneficial to human life; their discovery was a good idea that should be viewed as a potential cure for some diseases.
This technology has been opposed by many pressing for its prohibition. First, there have been claims that embryonic stem cell research involves the destruction of blastocysts, an early stage of human life. This has raised moral and ethical issues among individuals. It is argued that in embryonic stem cell research the embryo used is not derived from the patient's own cells, creating a high chance of rejection by the patient's body (Almeder, 2004). This would then result in immunogenic reactions. Another criticism is that there has been no stable cell line for embryonic stem cells since they are difficult to control; attempts have led to mutations, making the cells unusable for patients. The fear of tumor development from these cells, leading to cancer, has added to the controversy over the technology. There has been no clear account of the long-term effects of the technology, so unknowns remain associated with adult stem cells (Sell, 2004). Critics also point out that harvesting adult stem cells entails a difficult procedure. Still, people claim that adult stem cells have limited flexibility and cease to be programmable into any type of cell in the body.
In fact, blastocysts have no developed organs, only a small cluster of cells that cannot be considered equivalent to a human life. Many countries have legalized abortion; this might well allow an embryo to be used for research instead. Immunogenic reactions can be nullified by the use of somatic cell nuclear transfer, a technology that enables patients to use their own cells. Embryonic stem cell lines remain intact for long periods of time; this paves the way for the production of large numbers of cells. Embryonic stem cells can develop into any type of cell in the body and are thus versatile and flexible. In adult stem cell research, chances of tumor formation are low since the cells have a low tumorigenic potential. This technology has also enabled scientists to research cell growth and development, which has aided in preventing birth defects.
The discovery of stem cell research poses a great benefit to the population at large. Its application in the medical field has done humanity a great service.
The technology has been used in cell-based therapies where cells and tissues are formed for medical use. These cells have been used to treat diseases such as cardiovascular disease, Parkinson's, and type I diabetes by replacing worn-out or damaged cells (Yanhong Shi, 2008). Stem cell research has enabled scientists to understand the processes behind some conditions, making it possible to reverse them, for instance birth defects. In the pharmaceutical field, stem cell research has been of great advantage. Stem cells have been used to test new drugs and to study their effects before trials in laboratory animals and human subjects. Only those drugs that show a beneficial effect on the stem cells would be recommended. This has improved the accuracy and viability of the drugs dispensed to patients and of the drugs procured. The technology has also benefited human health by providing a source of replacement cells through the formation of specialized cell types. This will help cure diseases such as leukemia through stem cell transplants of the bone marrow.
In summary, the benefits of stem cell research outweigh the disadvantages. The discovery of these cells has led to an understanding of their role in the functioning of our organs. They have enabled scientists to identify diseases related to cell division and their possible causes, which has supported their treatment and control. Stem cells have proved to be of good use to humankind since they act as 'spare parts'. The hope and promise of these new procedures is astonishing in its scope and range for the advancement of the human species. I strongly hope that effective cures will come from this technology.
- Dhamma - (Pali, the language of the oldest, original teachings of Buddha) - Dharma - (Sanskrit, the language of the Mahayana Buddhist scriptures and those of some of the other older Dharma Paths) - 1. The Buddha’s teachings. - 2. Truth - 3. Wisdom - 4. A natural condition - 5. Mental quality. - 1. Dhammic - acting in accordance with the Buddha’s teachings, the Dhamma. Some Buddhists take refuge in the Buddha, Dhamma, Sangha (Triple Gem) to show respect and appreciation for the teachings. Even the Buddha had refuge to go to. For him, it is the Dhamma. After enlightenment the Buddha said, “Let me then honor and respect and dwell in dependence on this very Dhamma to which I have fully awakened.” Anguttara Nikaya 4.21 Since the Dhamma is a term for the all-inclusiveness of the teachings, the Buddha emphasized the importance of Dhamma: “Remain with the Dhamma as an island, the Dhamma as your refuge, without anything else as a refuge.” Samyutta Nikaya 47.13 and also at Digha Nikaya 26. The Six Qualities of The Dhamma Anguttara Nikaya 11.12 The Six qualities of the Dhamma: 1. Svakkhato: The Dhamma is not a speculative philosophy, but is the Universal Law found through enlightenment and is preached precisely. Therefore it is Excellent in the beginning (Sila: Moral principles), Excellent in the middle (Samadhi: Concentration) and Excellent in the end (Panna: Wisdom), 2. Sanditthiko: The Dhamma is testable by practice and known by direct experience, 3. Akaliko: The Dhamma is able to bestow timeless and immediate results here and now, for which there is no need to wait until the future or next existence. 4. Ehipassiko: The Dhamma welcomes all beings to put it to the test and to experience it for themselves. 5. Opaneyiko: The Dhamma is capable of being entered upon and therefore it is worthy to be followed as a part of one's life. 6. Paccattam veditabbo vinnunhi: The Dhamma may be perfectly realized only by the noble disciples who have matured and enlightened enough in supreme wisdom. Dhamma written in ten major scripts Dhamma is a Pali word but Pali does not have a script of its own. It uses the script of various languages where the Dhamma is being taught. The Tipitaka was an oral tradition and written down in the first century BCE in the Pali language, using the Sinhala script. But several hundred years before that, King Ashoka had many edicts written in Magadhi and other languages similar to Pali using the Brahmi script. Listed above is the word Dhamma written in ten major scripts. Notice that the Brahmi script has the first character which makes the "Dha" sound and is almost identical to our letter "D."
You may have heard your mother say, "Eat your vegetables!" or "Have some fruit!" when you were growing up. Studies show that most Americans eat too few fruits and vegetables, according to Kathleen Mahan and Sylvia Escott-Stump in "Krause's Food, Nutrition & Diet Therapy." Vegetables and fruits provide needed nutrients with few calories. When you eat more fruits and vegetables you are likely to eat fewer high-fat and high-starch foods and reduce your risk for certain chronic diseases. Guidelines for a nutrient-dense healthful food plan are included in the USDA MyPyramid food guide. The recommended daily amount of fruits and vegetables varies depending on your age, sex and level of physical activity and is within the range of 1 ½ to 2 cups of fruit and 2 ½ to 3 cups of vegetables for adults.
Vitamins and Minerals
Fruits and vegetables are generally low in fat, sodium and calories and none have cholesterol, according to the USDA MyPyramid food guide. Fruits and vegetables are sources of potassium, fiber and vitamin C. Fruits contain folate and vegetables contain vitamin A and vitamin C. Potassium helps maintain blood pressure and fiber helps reduce cholesterol. Folate helps build red blood cells and prevent birth defects. Vitamin A promotes healthy eyes and skin. Vitamin E protects essential fatty acids from free radical damage. Vitamin C helps wounds heal, promotes the health of teeth and gums and assists in the absorption of iron.
The fiber in fruits and vegetables can help you control your weight, lower blood cholesterol and help prevent colon cancer, diabetes, appendicitis and diverticulosis -- pouches of infection that develop in weakened areas of the intestinal wall. Most fruits and vegetables -- for instance 1 cup of raw carrots or 1 medium apple -- contain about 2 g of fiber per serving.
The protective effect of fruits and vegetables depends, in part, on nonnutrient compounds called phytochemicals that help protect you from chronic disease. Phytochemicals deliver taste, aroma and color. Some act as antioxidants that protect your body from tissue damage. Examples of phytonutrients in fruits and vegetables include the carotenoids in carrots, broccoli and spinach that act as antioxidants and possibly reduce the risks of cancer, according to MayoClinic.com. The capsaicin in peppers reduces the risk of fatal clots in heart and artery disease. Phenolic acids in apples, blueberries and cherries may influence the production of enzymes that make carcinogens water soluble so they can be excreted, according to Eleanor Whitney and Sharon Rolfes in "Understanding Nutrition."
Persons with diabetes may benefit from limiting fruits like watermelon that have a high glycemic index. This means they produce a rapid rise and sudden fall in blood glucose levels, which can harm diabetics and people with hypoglycemia. High insulin production also causes higher triglycerides. Bananas, pineapples and orange juice have a moderate effect. Peaches, apples and oranges produce a less pronounced effect on blood sugar, according to Eleanor Whitney and Sharon Rolfes in "Understanding Nutrition." The fructose sugar content of fruit, juices and sweetened beverages may promote abdominal fat storage, according to Liwei Chen and colleagues in the May 2009 issue of the "American Journal of Clinical Nutrition." If you have diabetes or high triglycerides talk to your dietitian about the amount, frequency and types of fruits you should eat.
65 million years ago, a massive asteroid slammed into the Yucatan peninsula, creating a giant dust cloud that contributed to the extinction of terrestrial dinosaurs. In the resulting re-adjustment of global ecosystems, a new plant tissue evolved, which paved the way for the eventual appearance of humans: fruit. Fruit represents a finely crafted symbiosis between plants and animals, in which the plant provides a nourishing morsel, and the animal disperses the plant's seeds inside a packet of rich fertilizer. Fruit was such a powerful selective pressure that mammals quickly evolved to exploit it more effectively, developing adaptations for life in the forest canopy. One result of this was the rapid emergence of primates, carrying physical, digestive and metabolic adaptations for the acquisition and consumption of fruit and leaves. Primates also continued eating insects, a vestige of our early mammalian heritage. The Eocene epoch began 55.8 million years ago, just after the emergence of primates. For most of the time between the beginning of the Eocene and today, our ancestors ate the archetypal primate diet of fruit, leaves and insects, just as most primates do today. In contrast, the Paleolithic era, marked by the development of stone tools and a dietary shift toward meat and cooked starches, began only 2.6 million years ago. The Paleolithic era represents only 5 percent of the time that shaped our primate genome-- 95 percent of primate evolutionary history occurred prior to the Paleolithic. The Neolithic period, since humans domesticated plants roughly 10,000 years ago, accounts for only 0.02 percent. Therefore, we are not well adapted to eating grains, legumes and dairy, and we aren't well adapted to eating meat and starch either. Our true, deepest evolutionary adaptations are to the foods that sustained our primate ancestors for the tens of millions of years prior to the Paleolithic. That's why I designed the Eocene Diet (TM). The Eocene Diet is easy. You simply eat these three foods: - Raw fruit - Raw leaves (no dressing!) - Live insects Fruit and leaves are easy to find, but what about insects? With a little practice, you'll see that they're easy to find too, often for free. Here are some tips: - Pet stores. They usually sell crickets and mealworms. - Look under rotting logs. - Find a long, flexible stem and stick it into a termite mound. Termites will grab onto it and you can eat them off the stem.
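The timeline percentages quoted above can be checked with a few lines of arithmetic. The sketch below uses only the epoch boundaries given in the text itself (55.8, 2.6 and roughly 0.01 million years ago).

public class DietTimeline {
    public static void main(String[] args) {
        double primateHistoryMyr = 55.8;  // since the start of the Eocene
        double paleolithicMyr = 2.6;      // since the first stone tools
        double neolithicMyr = 0.01;       // since plant domestication (~10,000 years)

        System.out.printf("Paleolithic share: %.1f%%%n", 100 * paleolithicMyr / primateHistoryMyr);
        System.out.printf("Neolithic share:   %.3f%%%n", 100 * neolithicMyr / primateHistoryMyr);
    }
}

It prints roughly 4.7% for the Paleolithic share and about 0.018% for the Neolithic share, matching the "5 percent" and "0.02 percent" figures in the passage.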
About this course
Have you ever wondered how software architects, requirements engineers and business analysts sketch and draw out their plans for a software system? In this computer science course, you will gain an in-depth understanding of Unified Modeling Language (UML) class diagrams, which are used to visually represent the conceptual design of a system. You will learn about UML class diagrams and how they are used to map out the structure of a business domain by showing business objects, their attributes, and associations. Taught by an instructor with decades of experience in requirements engineering and domain modelling, this course will give you an in-depth understanding of UML class diagrams and will enable you to judge the functional fit of a UML class diagram as a blueprint for the development of an enterprise information system. The Unified Modeling Language (UML) has become an in-demand skill in software development and engineering. In fact, some of today's top jobs, such as business analyst, enterprise architect, developer, technical consultant and solutions architect, require UML knowledge. Enroll today and gain knowledge in an in-demand skill that will help set you apart from the competition.
What you will learn
- In-depth understanding of a UML class diagram
- Basics of domain modeling and its importance
- The basic building blocks of a class diagram: the concepts of "class", "attribute" and "association"
- Advanced concepts of "inheritance" and "AssociationClass"
Syllabus
Week 1: Introduction and UML Class Diagram Basics (part 1). Introduction as to what a data model is, why data modelling matters, and the concepts of modelling languages and notations. Introduction to the notions of "Class" and "Attribute."
Week 2: UML Class Diagram Basics (parts 2 and 3). Introduction to the concept of "Association" and its different variants: "unary" and "ternary associations," and "aggregation." Learning to navigate a larger UML diagram.
Week 3: UML Class Diagrams Advanced Topics. Introduction to the concept of "inheritance" and learning to read a model with inheritance. Introduction to the concept of "AssociationClass" and learning to reify an association.
Learner testimonials (quotes from learners from previous runs): • "I found this course extremely useful in learning different class diagram concepts. Thank you to the professor and people who helped to make this course a success." • "I really liked the content of the course, as I did like the alternation between the theory videos and the quizzes!" • "The videos were great and the quizzes helped assure I understood the material."
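As a concrete illustration of how the concepts named in the syllabus map to code, here is a small, hypothetical library domain sketched in Java: classes with attributes, an association realized through object references, inheritance, and an "AssociationClass" reified as its own Loan class. The Book/Member/Loan domain is an invented example, not material from the course.

import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

// A class with attributes, as it would appear as a box in a class diagram.
class Book {
    String title;
    String isbn;
    // One end of the Member-Book association, reified through Loan below.
    List<Loan> loans = new ArrayList<>();
}

// Inheritance: Member and Librarian are specializations of Person.
class Person {
    String name;
}

class Member extends Person {
    List<Loan> loans = new ArrayList<>();
}

class Librarian extends Person {
    String staffId;
}

// An AssociationClass: the association between Member and Book carries its own
// attributes (loan and due dates), so it is modeled as a class of its own.
class Loan {
    Member borrower;
    Book book;
    LocalDate borrowedOn;
    LocalDate dueOn;
}

public class DomainModelSketch {
    public static void main(String[] args) {
        Member alice = new Member();
        alice.name = "Alice";

        Book book = new Book();
        book.title = "Domain Modelling";

        // Creating one link of the association populates both ends.
        Loan loan = new Loan();
        loan.borrower = alice;
        loan.book = book;
        loan.borrowedOn = LocalDate.now();
        loan.dueOn = loan.borrowedOn.plusWeeks(3);
        alice.loans.add(loan);
        book.loans.add(loan);

        System.out.println(alice.name + " borrowed \"" + book.title + "\" until " + loan.dueOn);
    }
}

Multiplicities from the diagram (for example, one Member related to many Loans) show up here as the List-valued fields.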
1 edition of A parent"s guide to the Common Core found in the catalog. A parent"s guide to the Common Core |Other titles||Common Core., Kaplan parent"s guide to the Common Core., Kaplan a parent"s guide to the Common Core.| |LC Classifications||LB1571 .P376 2014| |The Physical Object| |Pagination||xxiv, 143 p.| |Number of Pages||143| 1. There will be instructional shifts: The Common Core introduced three major shifts in classroom instruction designed to guide critical readers through a range of grade-level, complex texts or reading materials. Classroom instruction will be focused on. The Everything Parent's Guide to Common Core Math: Grades features examples and exercises that correspond to each standard, so you'll have the confidence you need to help your kids succeed and thrive in the new school standards. The Common Core State Standards aim to raise student achievement by standardizing what's taught in schools across the United States, but have . The Everything Parent's Guide to Common Core Math Grades Understand the New Math Standards to Help your Child Learn and Succeed (Book): Sirois, Jamie L.: If you learned math the "old" way, the new teaching methods may be unfamiliar to you. Sirois and Wiggin provide examples and exercises that correspond to each standard of the new Common Core national . Get this from a library! A parent's guide to the Common Core. Grade [Kaplan Publishing.;] -- Provides parents with strategies and tips for helping their third grade student prepare for the Common Core assessment, offering easy-to-follow lessons, practice questions, and mini-quizzes. Common Core math can make any parent's head spin. The next time your child asks for a little homework help, follow these tips to understand Common Core math a little better so it won't leave you with a headache. Mechanisms of cell stability Twenty-nine tales from the French. The Mycenaean world Letter to Mr. Matthew H. Rickey against reviving a reign of corruption and establishing a policy of protection Pyramids of ancient Egypt 52 questions on the nationalization of Canadian railways Private libraries in Renaissance England Sunshine sketches of a little town. A guide to the Norton reader Feminism in the 1980s. Dress Up with Other Older Americans Information Directory Design analysis of wide-body aircraft Effect of pisatin on clones of Fusarium solani pathogenic and nonpathogenic to peas. Understanding the Common Core Curriculum: A A parents guide to the Common Core book for Parents The Common Core state standards have created excitement and controversy. Scholastic Parent & Child reveals what they mean for your child. which helps give the entire Common Core reading experience an exciting, book-clubby feel that teachers and students love. If knowledge truly is power, then this book is a must-read for parents who have felt concerned, confused, or threatened by the apparent invasion of the controversial Common Core State Standards into their child's math education. This book is written as a practical, survival guide for parents of elementary aged students/5(15). With easy-to-understand examples, problem-solving tips, and lots of practice exercises, The Everything Parent's Guide to Common Core Math: Grades K–5 will give you the confidence you need to help your kids meet the mathematical expectations /5(16). Common Sense Media is the leading source of entertainment and technology recommendations for families. Parents trust our expert reviews and objective advice. 
The Parents' Guides to Student Success were developed by teachers, parents and education experts in response to the Common Core State Standards that more than 45 states have adopted. Created for grades K-8 and high school English, language arts/literacy and mathematics, the guides provide clear, consistent expectations for what students should be learning.
The Common Core Standards are state-driven. The Common Core State Standards are a set of learning skills that all American students should achieve, not a federal curriculum. They set the benchmarks and guidelines for what each student should learn, not how or what teachers teach. Parent tip: find out if your state has adopted the Common Core.
In a nutshell, the Common Core State Standards are a set of learning goals currently adopted by 43 states, specifying exactly what students should know and be able to do. The Common Core State Standards were developed as a means to prepare K-12 students for success in college or the workforce upon graduation from high school. Since their inception, they have been adopted by 43 states. While much support has been given for the standards, many criticisms have emerged as well.
What Parents Should Know: Today's students are preparing to enter a world in which colleges and businesses are demanding more than ever before. To ensure all students are ready for success after high school, the Common Core State Standards establish clear, consistent guidelines for what every student should know and be able to do in math and English language arts.
Common Core Math for Parents: Asking Questions at Homework Time. Homework is a hot topic in the transition to Common Core Standards. Homework assignments that ask students to think in new ways can be intimidating to parents.
There has been a lot of talk about Arkansas's move to adopt Common Core State Standards. One thing is clear—there is a lot of incorrect information out there about Common Core. This guide will give you the information you need as parents to be informed about the issue. Some parents and organizations oppose Common Core.
The Everything Parent's Guide to Common Core ELA: Grades K–5 will give you the confidence you need to help your children meet the new ELA expectations for their grade level and excel at school.
Parents' Guide to Student Success: this guide is based on the new Common Core State Standards, which have been adopted by more than 45 states. Among the skills it covers is stating an opinion or preference about a topic or book in writing (e.g., "My favorite book is").
About the GreatKids State Test Guide for Parents: GreatKids created this guide to help you understand your child's state test scores and to support your child's learning all year long. We worked with SBAC and leading teachers in every grade to break down what your child needs to know and exactly how you can help. The Everything Parent's Guide to Common Core ELA, Grades 6–8 can help.
With easy-to-understand examples, comprehension tips, and practice exercises, this comprehensive guide will explain:
- What your child will be learning in 6th, 7th, and 8th grade
- The types of books and passages your child will be reading
- The rationale behind the Common Core math standards
- How to help your child with homework and studying
With easy-to-understand examples, problem-solving tips, and lots of practice exercises, The Everything Parent's Guide to Common Core Math: Grades K–5 will give you the confidence you need to help your kids meet the mathematical expectations for their grade level.
A Guide to Common Core. Help for Homework Help: Teaching Parents Common Core Math. The foundation has sold or given copies of the book to districts, parents and others.
Common Core: A Parent's Guide to Arkansas Common Core State Standards. Arkansas Advocates for Children and Families (AACF) is releasing three documents aimed at helping low-income parents understand the Common Core State Standards (Common Core). The documents are written in family-friendly language.
Please view all of the videos on the NCTM Teaching and Learning Mathematics with the Common Core Web page. Common Core Math Explained in 3 Minutes: this article and video helps address many parent and community member concerns with Common Core math approaches. Unfortunately, the term "new math" is a misnomer.
Primary Mathematics epitomizes what educators love about the Singapore math approach, including the CPA (Concrete, Pictorial, Abstract) progression, number bonds, bar modeling, and a strong focus on mental math. It is a rigorous program that balances supervised learning and independent practice while encouraging communication.
Power plants can generate large amounts of energy in the form of electricity using a variety of resources. Some require raw materials such as coal, wood, fossil fuel, and metals such as uranium and plutonium. Some depend on the forces of nature such as moving bodies of water and wind. Among all of the above, sunlight is the most recently exploited source of renewable energy. Solar power plants come in many forms depending on the technology used and the manner in which solar energy is converted. They can be broadly categorised into three groups: photovoltaic solar power plants, solar thermal energy plants and concentrating solar power plants. These power plants can operate according to different solar systems: off-grid, grid-connected and hybrid solar systems. These three systems will be covered in a separate article.
Photovoltaic Solar Power Plant
Also generally known as solar farms, PV solar power plants utilise great numbers of photovoltaic (PV) arrays to capture solar energy to be converted directly into electricity. The type of PV cells used for solar farms can vary. Some farms use crystalline solar panels while others may use thin-film solar panels. The PV material in crystalline solar panels is either monocrystalline, polycrystalline or multi-crystalline. Monocrystalline is known to be more efficient, but this material is more expensive than the latter two. Thin-film solar panels are able to absorb light in different parts of the solar spectrum. Made from a wide range of other materials, thin-film is more flexible and can be flexed into curved structures. The type of electricity generated is direct current (DC), which can be stored in batteries. However, DC needs to be converted into alternating current (AC), which is the form suitable for use and for feeding into a power grid. Because of the need to convert DC to AC, heavy-duty inverters are an essential component of PV solar power plants. If a plant generates more than 500 kW, it will usually also employ step-up transformers. This type of power plant usually has some form of monitoring system to control and manage the plant and the amount of power generated. More reading on PV solar farms:
- Everything You Need to Know About Starting a Solar Farm
- Solar Panel In China
- Solar Farms In Australia
Solar Thermal Power Plant
This category of solar power plant also collects sunlight, but not to convert solar energy into electricity directly. Instead, it uses sunlight to generate heat, which is then converted to electricity through different processes. A miniature-scale example of solar thermal technology is the solar water heater used by retail consumers; these typically rely on heat-collecting panels rather than PV panels. An example of a large-scale solar thermal power plant is the solar pond, which also harnesses the power of the sun, but by using saline water instead of PV panels. A solar pond does not use photovoltaic panels to collect solar energy. Instead, it uses salinity-gradient technology. The technique makes use of a large body of saltwater, say a saltwater pond, to store solar thermal energy. A large body of saltwater naturally has a vertical salinity gradient. This means the salinity of the water differs from the top to the bottom of the pond. The top layer has low salinity; across a gradient zone (the halocline) the salinity progresses to a higher concentration at the bottom. The deeper the saltwater body, the more concentrated the salinity level. In freshwater, solar rays heat the water at the bottom. The warm water becomes less dense and rises to the surface.
But in a saline pond, salt is added until the bottom layer becomes very saturated and dense. This then impedes the movement of heated water to the surface. Because water of different salinity concentrations doesn't mix easily, convection currents are contained within each layer of salinity level. This prevents heat loss from the pond. Highly saline water can reach as high as 90 degrees Celsius while low-salinity layers can maintain around 30 degrees Celsius. The solar power plant then pumps the hottest layer of saline water through a turbine to generate electricity. Examples of solar ponds can be found in Israel and India.
Concentrating Solar Power Plant
CSP plants do not have photovoltaic arrays either. Instead, they utilise turbines, mirrors, engines and tracking systems. The principle behind CSP plants is to concentrate solar energy to create heat. The heat generated is then used to drive turbines or engines to produce electricity. In this sense, this feature also makes them solar thermal power plants. Thermal energy concentrated in a CSP plant can be stored to produce electricity whenever it is needed. There are a few types of concentrating power plants:
- Solar dish
- Solar tower
- Parabolic trough
- Compact linear Fresnel reflector
Solar Dish Power Plant
Also called dish-engine, this type of CSP technology uses a gigantic parabolic dish lined with mirrors to concentrate sunlight onto a fixed receiver. The fixed receiver contains a working fluid such as hydrogen. The fluid can be heated to at least 1,200 degrees Fahrenheit or 749 degrees Celsius. The heated fluid then drives pistons in the engine. The mechanical power from the pistons is channelled to a generator or alternator to produce electricity. Thus the name dish-engine. The most common kind of heat engine utilised is the Stirling engine. A solar dish power plant has at least a few hundred of these dish-engine systems. Each dish in the plant rotates along two axes to track the movement of the sun. This helps it always face the sun directly and concentrate the solar energy at the focal point of the dish. The concentration ratio of a solar dish is higher than that of linear concentrating systems.
Solar Tower Power Plant
Also called a solar power tower, this type of concentrating solar power plant also uses mirrors, a central receiver system and a sun-tracking system like the solar dish. Instead of parabolic dishes, the mirrors of a solar tower are mounted on flat panels which track the sun. These are called heliostats. The heliostats are controlled by computers which program them to track the sun along two axes so that sunlight is focussed on a receiver at the top of a high tower. The tower is placed at the center of all the heliostats. It is filled with a medium – either water or air. The heated medium is captured in a boiler which produces electricity with the aid of a steam turbine. This method of concentrating sunlight can multiply the energy of direct sunlight by as much as 1,500 times. The result is a concentrated heat of up to 700 degrees Celsius or over 1,000 degrees Fahrenheit. Research is being done on using nitrate salts as the heating medium, which are believed to have better heat transfer and storage properties than pure water and air. This energy storage capability will allow the solar farm to produce electricity even at night or on cloudy days. A solar tower facility can spread over an area of 18,000 square kilometres, housing over 2,000 heliostats. The central tower can be as high as 60 meters.
Parabolic Trough Power Plant
The mirrors at a parabolic trough solar power plant are also arranged in long strips, but the strips are curved in the center, like a trough. Each trough is usually about 15 to 20 feet tall and measures 300 to 450 feet long. The curved troughs focus sunlight onto a receiver tube which runs down the center of each trough. The receiver tube contains a high-temperature heat transfer fluid such as synthetic oil. The fluid absorbs the heat then passes through a heat exchanger where water is heated to produce steam. The steam then powers a conventional steam turbine power system to produce electricity. The temperature of this fluid can reach 750 degrees Fahrenheit (399 degrees Celsius) or more. A normal parabolic trough power plant can consist of hundreds of parallel troughs connected in a series of loops. They are arranged on a north-south axis so that they track the sun from east to west.
Compact Linear Fresnel Reflector Power Plant
A compact linear Fresnel reflector (CLFR) is also called a 'linear concentrating system' for short. The name comes from the use of the Fresnel lens effect, which relies on a concentrating mirror with a large aperture and short focal length. The Fresnel lens effect can focus sunlight to about 30 times its normal intensity. The CLFR uses the same principles as the parabolic trough system – tracking the sun using long, U-shaped rows of mirrors (modular reflectors) and concentrating the sun's energy on a central receiver. There are three key differences. One is the use of low-cost mirrors arranged in long parallel rows instead of loops. The second is that the sun-tracking path is from north to south so as to maximise sunlight capture. The third difference is that the modular reflectors are elevated. The receiver sits above the modular reflectors, which deflect concentrated sunlight onto its surface. Inside the receiver is a system of tubes filled with flowing water. The heat generated by the concentrated sunlight on the receiver boils the water, producing high-pressure steam that is passed through a turbine for use in power generation and industrial steam applications.
Of all the types of solar power plants, only the photovoltaic power plant uses photovoltaic panels to convert solar energy directly into electricity. The other solar power plants are also powered by solar energy, but instead use saline water or mirrors to concentrate solar energy and heat a medium; it is the heated medium which is then passed through a turbine or engine to generate electricity.
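To make the photovoltaic figures above more tangible, here is a back-of-the-envelope sketch of how a PV plant's output and its DC-to-AC conversion might be estimated. Every number in it (array area, efficiencies, irradiance, sun hours) is a hypothetical illustration rather than data for any real plant.

public class PvPlantEstimate {
    public static void main(String[] args) {
        double panelAreaM2 = 50_000;        // total array area (hypothetical)
        double panelEfficiency = 0.20;      // crystalline panels; monocrystalline tends to be higher
        double irradianceWPerM2 = 1000;     // standard test-condition irradiance
        double peakSunHoursPerDay = 5.0;    // site-dependent assumption
        double inverterEfficiency = 0.97;   // loss in the DC-to-AC conversion

        double dcPeakPowerW = panelAreaM2 * panelEfficiency * irradianceWPerM2;
        double dailyAcEnergyKWh = dcPeakPowerW * peakSunHoursPerDay * inverterEfficiency / 1000.0;

        System.out.printf("DC peak power: %.1f MW%n", dcPeakPowerW / 1e6);
        System.out.printf("Estimated AC energy per day: %.0f kWh%n", dailyAcEnergyKWh);
    }
}

With these example values the array's DC peak power works out to 10 MW, comfortably above the 500 kW threshold at which, as noted above, step-up transformers are usually added.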
One often hears the question: are birds animals or not? After reviewing the main features of the structure and activity of this class, you can answer it with confidence. The class Birds includes 9,000 species, united in the following superorders: ratites, or running birds (ostrich, kiwi); penguins, or swimming birds (emperor, spectacled, Magellanic, Galapagos, crested and others); and typical, or flying, birds (chicken, pigeon, sparrow, crow, etc.). In structure, birds are similar to reptiles and represent a progressive branch that was able to adapt to flight. Their forelimbs were transformed into wings in the course of evolution. Birds have a constant body temperature, characteristic of higher vertebrates; in other words, birds are warm-blooded animals. This is the first answer to the question "Are birds animals or not?" Birds are thought to have originated from ancient pseudosuchian reptiles, which had a similar structure of the hind limbs.

Body and skin. A bird's body is streamlined, with a small head and a long, mobile neck, and it ends in a tail. The skin is thin, dry and almost devoid of glands. Only some birds (waterfowl) have an oil (preen) gland, which produces a fat-like, water-repellent secretion. Horny structures (derivatives of the epidermis) cover the beak, the claws, and the scales of the toes and tarsus (the lower part of the leg). Feathers are also derived from the skin. They are divided into two groups: contour feathers and down. Contour feathers, in turn, are divided into tail feathers (flight control), flight feathers (keeping the bird in the air) and covert feathers (covering the surface of the body). Beneath the contour feathers lie the down feathers, which help retain body heat. During moulting the old feathers fall out completely and new ones grow in their place.

Skeleton and muscular system. The bird skeleton is particularly strong and light because the bones contain cavities filled with air. It consists of the following sections: cervical, thoracic, lumbar, sacral and caudal. The neck is extremely mobile owing to its many cervical vertebrae. The thoracic vertebrae are fused tightly, and the ribs are flexibly connected to the sternum, together forming the rib cage. For the attachment of the muscles that move the wings there is a projection on the breastbone – the keel. The fusion of the lumbar, sacral and part of the caudal vertebrae with each other and with the pelvic bones forms the sacrum, which serves as a support for the hind limbs. The muscular system is well developed in birds, and which muscle groups are most developed depends on the ability to fly. In birds that fly well, the muscles that move the wings are strongly developed; in those that have lost this ability, the muscles of the hind limbs and neck are.

Digestive and excretory systems. The digestive system is characterised by the absence of teeth. Food is grasped and held with the horny beak covering the jaws. From the mouth, food enters the pharynx and then a long oesophagus, which has a pocket-like enlargement (the crop) to soften it. The rear end of the oesophagus opens into the stomach, which is divided into two sections, glandular and muscular (where food is mechanically ground). The intestine consists of the duodenum, into which the ducts of the liver and pancreas open, and a thin, short rectum ending in the cloaca. This structure facilitates the rapid removal of undigested residues. The excretory organs of birds are the paired kidneys and ureters, which open into the cloaca. From there, urine is discharged to the outside together with the faeces.
The respiratory system of birds is highly adapted for flight. Air enters through the nasal cavity into the pharynx and then the trachea, which in the chest divides into two bronchi; the voice box is located here. Once in the lungs, the bronchi branch out extensively. The lungs themselves have a complex structure and consist of numerous through-passages. Some of these expand, forming air sacs that lie between the internal organs, among the muscles and inside the hollow bones. Birds have what is known as double breathing: during flight, air passes through the lungs twice – it is drawn in as the air sacs fill on the upstroke of the wings, and forced through the lungs again as the sacs are compressed on the downstroke.

The organisation of the nervous system in birds is quite complex and similar to that of higher vertebrates. This once again gives an affirmative answer to the question "Are birds animals or not?" The system consists of two divisions: the brain and the spinal cord. In the brain, the cerebellum, responsible for the coordination of movements, is well developed, as are the forebrain hemispheres and the midbrain, which are responsible for complex forms of behaviour. The spinal cord is most developed in the shoulder, lumbar and sacral regions, which provides good motor function. These features also give a clear affirmative answer to the question "Are birds animals or not?"

The behaviour of birds is based on unconditioned (innate) reflexes: feeding, reproduction, nesting, egg laying, courtship and singing. Unlike the reptiles, they can also form and retain conditioned reflexes (acquired during life), which indicates a higher stage of evolution. One example of conditioned reflexes is the fact that birds have been successfully domesticated: domesticated birds readily adjust their behaviour and way of life from the wild (natural) state to the cultivated (domestic) one.

The circulatory system of birds, like that of most higher vertebrates, consists of a four-chambered heart, made up of two atria and two ventricles, together with the blood vessels. Their blood is completely separated into venous and arterial, and it circulates in two circuits (pulmonary and systemic). Reproduction: birds are dioecious animals with a complex and highly developed system of mating behaviour; they reproduce by laying eggs and care for their offspring. All of the above characteristics of the class give a definite answer to the question "Are birds animals or not?" Of course, birds are animals.
A wave has a repeating pattern. One complete repetition of the pattern is a cycle. The time to complete a cycle is the period. The distance that a sound wave travels in one period is called the wavelength. Wavelength is related to the speed at which sound travels and can be calculated by dividing the speed of sound by the frequency of the sound: Wavelength = Speed of sound / Frequency of sound. It is important to remember that a sound wave can be thought of in relation to time and space (distance). The animation below shows a wave pulse traveling from left to right. As the wave passes a location, a red dot highlights the time and space behavior at that specific location in the wave’s motion. The graph on the lower left shows the time history of the movement of this red dot as the wave passes by. This graph represents the wave as a function of time for a specific location. When the leading edge of the wave reaches the specified location (with the red dot), the position of the dot moves up and down with time as the wave passes. After the wave pulse has passed through, the dot stops moving. The graph on the lower right represents a snapshot of the wave position at each second as the wave travels for a duration of 27 seconds. This graph represents the wave as a function of position at a specific time. A high-frequency sound has a shorter wavelength than a low-frequency sound. Using 1,500 meters per second for the approximate speed of sound in seawater, we find the following relationships between frequency and wavelength, calculated using the equation given above: |Frequency (Hertz, or cycles per second)||Wavelength (meters)|
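As a quick check of the formula above, the short Python sketch below computes the wavelength for a handful of example frequencies, using 1,500 meters per second as the approximate speed of sound in seawater. The specific frequencies chosen here are illustrative only; they are not the values from the original table.

```python
# Wavelength = speed of sound / frequency
SPEED_OF_SOUND_SEAWATER = 1500.0  # meters per second (approximate)

def wavelength(frequency_hz: float, speed: float = SPEED_OF_SOUND_SEAWATER) -> float:
    """Return the wavelength in meters for a sound wave of the given frequency."""
    return speed / frequency_hz

# Example frequencies in hertz (illustrative values only)
for f in [10, 100, 1_000, 10_000, 100_000]:
    print(f"{f:>7} Hz  ->  {wavelength(f):10.3f} m")
```

Running this prints 150 m for 10 Hz down to 0.015 m for 100,000 Hz, illustrating that higher-frequency sounds have shorter wavelengths.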
Why should we be concerned with wildlife in our gardens? Put simply, an environment that is healthy for wildlife is also healthy for us. Moreover, we depend upon wildlife – bees pollinate many of our essential crops, worms produce the soil we depend upon, and a host of predators from wasps to ladybirds to thrushes and hedgehogs control garden pests. There is an interdependent web of life, and if we destroy part of it with chemicals or habitat destruction, it has a knock-on effect for other wildlife. Gardens are increasingly important habitats for wildlife as natural habitats decline. So, how can we make our gardens more wildlife-friendly? Avoid using chemicals. If you spray greenfly or blackfly, for instance, you will also kill the ladybirds and lacewings that control them. Moreover, the greenfly will recover more quickly than the predators, so you will increase the pest problem and escalate the spraying regime. Nature needs to be allowed to find a balance, which means a certain amount of leaving alone, and also accepting imperfections in the garden such as the odd leaf being eaten by slugs. Other things we can do to make our garden more attractive to wildlife are: raise the blades on the mower to improve the habitat for insects and creeping wild flowers; accumulate a stockpile of dead logs, prunings and fallen leaves for small mammals (including hedgehogs) to hide in; provide nectar-rich flowerbeds and extra food for birds; leave flowerheads to run to seed – providing food for birds such as finches through the winter. Don’t be too tidy: dead wood is an ideal habitat for beetles and fungi, which attract other wildlife. A compost heap is not only a good way of keeping the soil fertile and healthy; it also increases the worm population in the garden, which encourages toads, hedgehogs and birds; slowworms give birth to their young in the warmth of a compost heap, and grass snakes lay their eggs there. So, what can we plant to make our gardens attractive to wildlife? Whilst it is not essential to have only native species, some native plants are essential to make the best wildlife habitats. “…we have a complex, mixed community of wild plants and animals which have ‘grown up together’ since the last Ice Age… Not all our animal life feeds directly on plants… but even the most carnivorous of predators feed on other animals which themselves feed on plants, or on other plant eating creatures. The leaves and shrubs of native plants provide the basic platform for our animal life… many plant-eating insect larvae… only eat the leaves of one specific type of plant, and that plant will always be a native one.” Chris Baines How to Make a Wildlife Garden (p.47). By extending the season of interest in our gardens, we not only increase the delight for ourselves, but also make them more hospitable to wildlife. It should be possible to have something in flower and fruit every month of the year, so providing food for a variety of wildlife. Single flowers that most closely resemble the wild form, in a wide variety of shapes, will attract a variety of pollinators. Don’t go for highly bred double forms, which confuse pollinators and offer little or no nectar. Seeds and berries are an important food source, so choose varieties with attractive seedheads and allow them to seed, and plants with berries, haws or hips, such as hawthorn, pyracantha, cotoneaster, and dog roses. Our gardens need to provide three crucial elements in order to attract wildlife – food, water and shelter. 
Trees and shrubs are good for shelter (as are piles of leaves, compost heaps and the dead leaves of perennials left over winter). Food is provided by the plants we choose (and any additional food we may leave for birds). That leaves water. A pond is a very valuable wildlife resource. If you have children you can put a grate over it. If you don’t have room for, or don’t want, a pond, even a birdbath can be a valuable wildlife resource, not only to birds but to insects, who also need to drink; and if you leave it on the ground, ground-living creatures such as hedgehogs can drink from it. Do keep the water clean to avoid diseases, and never put chemicals in the water. One last thing: look at the wider landscape and make your garden fit into it. “… go for garden habitats which complement the local wildlife community… an isolated island of peat bog in the heart of suburbia is never likely to be anything more than a collection of plants. The appropriate animal life simply won’t be around to colonize it. Conversely, if you create a mini habitat typical of the area you are likely to be very successful at attracting wildlife to join you.” Chris Baines How to Make a Wildlife Garden (p.33) This is one of a series of blogs I will be doing on how to create wildlife habitats. Please look out for my next blogs for more detailed information about how to attract wildlife to your garden.
The Middle Ages and the Modern Age The Middle Ages and the Modern Age, 20th century The Middle Ages in Dalmatia, and on the islands of Zadar, were marked by the various state and legal arrangements that succeeded one another through this period of almost a thousand years. They started with the Byzantine administration, continued with the rule of the Croatian national rulers, the Hungarian-Croatian kingdom and the Anjou period, and ended with the Venetian rule that lasted until 1797. After the dissolution of the Venetian Republic came the so-called Period of the First Austrian Rule, followed by a short French administration (1805–1813), which was replaced by the Austro-Hungarian Empire, which lasted until the end of World War I (1918). Upon the dissolution of the Austro-Hungarian Empire, the islands of Zadar fell under the authority of the State of Slovenes, Croats and Serbs, which became part of the Kingdom of Serbs, Croats and Slovenes, later the Kingdom of Yugoslavia. World War II was marked by Italian and German occupation. Since the end of World War II, the area was part of the Federal Republic of Yugoslavia, which gave rise to today’s independent Republic of Croatia. Despite all the political turmoil, the life of the island farmer did not change significantly. The basis of survival has always been his skills and the fruits of the meagre land and the rough sea. The process of turning the island’s population into farmers can be traced through the surviving documents of the notaries of Zadar from the 13th century onwards, with special mention of land for livestock breeding. The basic mass of the island’s farming population, which lived on someone else’s land or leased grazing herds, was distinct from the class of people who owned a vineyard or had their own cattle; that class was called didići. More developed forms of society appeared in the 14th century with the establishment of fraternities, which played an important role in the everyday life of the island’s population. The basic activities were agriculture, livestock breeding and fishing. The settlements were originally located in the fields of Telašćica, which still hold the remains of churches, dry-stone walls and olive grove enclosures. The change in the method of catching sardines (fishing by night light) put Sali among the important fishing centres, and in the 16th century it became the most important fishing centre of the Adriatic, which slowly emptied the settlements in the fields, while in the port of Sali a number of residential buildings were erected that have survived until today. Only after the end of the medieval period, in the circumstances of the 17th and 18th centuries, did the institution of rural communes appear on the island, with the task of protecting the interests of the island’s farming population. In addition to a strong fishing tradition, a series of agrarian reforms in the 19th and 20th centuries encouraged livestock breeding and agricultural activities. In the Nature Park there is a large number of shepherds’ dwellings, as well as various piers for mooring fishing boats, that bear witness to past times. Today, residents are increasingly turning to tourism, but old habits such as fishing and the maintenance of olive groves and vegetable gardens remain an integral part of island life, though to a much lesser extent. The main problem is the constant emigration that began during the 19th century.
Researchers at University of Notre Dame, in Indiana, have demonstrated a way to significantly improve the efficiency of solar cells made using low-cost, readily available materials, including a chemical commonly used in paints. The researchers added single-walled carbon nanotubes to a film made of titanium-dioxide nanoparticles, doubling the efficiency of converting ultraviolet light into electrons when compared with the performance of the nanoparticles alone. The solar cells could be used to make hydrogen for fuel cells directly from water or for producing electricity. Titanium oxide is a main ingredient in white paint. The approach, developed by Notre Dame professor of chemistry and biochemistry Prashant Kamat and his colleagues, addresses one of the most significant limitations of solar cells based on nanoparticles. (See “Silicon and Sun.”) Such cells are appealing because nanoparticles have a great potential for absorbing light and generating electrons. But so far, the efficiency of actual devices made of such nanoparticles has been considerably lower than that of conventional silicon solar cells. That’s largely because it has proved difficult to harness the electrons that are generated to create a current. Indeed, without the carbon nanotubes, electrons generated when light is absorbed by titanium-oxide particles have to jump from particle to particle to reach an electrode. Many never make it out to generate an electrical current. The carbon nanotubes “collect” the electrons and provide a more direct route to the electrode, improving the efficiency of the solar cells. As they wrote online in the journal Nano Letters, the Notre Dame researchers form a mat of carbon nanotubes on an electrode. The nanotubes serve as a scaffold on which the titanium-oxide particles are deposited. “This is a very simple approach for bringing order into a disordered structure,” Kamat says. The new carbon-nanotube and nanoparticle system is not yet a practical solar cell. That’s because titanium oxide only absorbs ultraviolet light; most of the visible spectrum of light is reflected rather than absorbed. But researchers have already demonstrated ways to modify the nanoparticles to absorb the visible spectrum. In one strategy, a one-molecule-thick layer of light-absorbing dye is applied to the titanium-dioxide nanoparticles. Another approach, which has been demonstrated experimentally by Kamat, is to coat the nanoparticles with quantum dots–tiny semiconductor crystals. Unlike conventional materials in which one photon generates just one electron, quantum dots have the potential to convert high-energy photons into multiple electrons. Several other groups are exploring approaches to improve the collection of electrons within a cell, including forming titanium-oxide nanotubes or complex branching structures made of various semiconductors. But experts say that Kamat’s work could be a significant step in creating cheaper, more-efficient solar cells. “This is very important work,” says Gerald Meyer, professor of chemistry at Johns Hopkins University. “Using carbon nanotubes as a conduit for electrons from titanium oxide is a novel idea, and this is a beautiful proof-of-principle experiment.”
Vaccinations should begin early in an infant’s life and continue regularly throughout childhood and adolescence to protect against disease. Immune defences keep us protected from harmful disease-causing bacteria, viruses and fungi. Vaccine-preventable diseases can be severe – even deadly – mainly for infants and young children. Measles and whooping cough are examples of how severe vaccine-preventable diseases can be. Measles is still a common disease in many parts of the world. It can cause pneumonia, encephalitis (swelling of the brain), and even death. Children are at the highest risk of complications, which, in severe cases, can cause death. Benefits of Vaccination— It is the first line of defence for every human. Vaccination is a process by which a person is made resistant to a disease with the help of vaccines. It stimulates the immune system, improving one’s health. Vaccines work with the baby’s natural defences to help them safely build immunity to deadly diseases. Vaccines are a vital medical advancement that continues to save lives every day. They keep other children safe by eliminating or reducing dangerous diseases that can quickly spread from one child to another. During and after pregnancy, the mother and her baby have weakened immune systems, making them susceptible to disease. Therefore, they must receive regular vaccines to keep their immunity up. Sudden Infant Death Syndrome (SIDS) can occur between 2 and 4 months of age. These vaccines protect the mother and child from common illnesses like cough, cold and fever through to life-threatening diseases like hepatitis, measles, polio, mumps, tetanus and even diphtheria. Infant mortality in India is still shockingly high – a simple schedule of vaccination, however, can make a world of difference. Studies show that following an immunisation schedule prevents about four million deaths every year, and the WHO states that available vaccines could prevent 20% of infant deaths annually. Keeping track of immunisations Most of a child’s vaccinations are completed between birth and 6 years of age. Some vaccines are given more than once, at different ages, and in combinations. Keep a careful record of your child’s shots. Studies show that about two-thirds of preschool children are missing at least one routine vaccination. Therefore, it is imperative to make up for missed immunisations. We, as a community, can strengthen the foundation of a healthy childhood by supporting easy access to immunisation where it matters the most. Our team at EKAM Foundation can help with education about the right immunisation schedule. Log on to www.ekamoneness.org to connect and learn more.
Johann Gottlieb Fichte began his argument by outlining what makes a natural border for a people. He determined that language was a natural border that defines a people, because those who share a language can communicate and grow together. Germany was united by a common language and way of thinking. He then argued that foreign countries had intentionally divided the German peoples for their own benefit. Germany was unsuspecting and naively fell for their tricks. Fichte claimed that foreign countries manipulated Germany for their own selfish benefit. Some people were considering a universal monarchy as a remedy, but Fichte argued that monarchy was the very opposite of what the Germans needed in order to unify. Perhaps he too saw the repeated mistake the French made of making progress, then undoing it by reinstating a monarch. Instead he wanted to let natural borders reunify the German people. Otherwise, the nation established would not hold up to the test of time. Fichte emphasized that the intention of foreign countries was to manipulate unsuspecting Germans and turn them against one another for their own selfish benefit. However, it is difficult to believe that all the blame ought to rest on the foreign countries. Maybe the foreigners did have selfish intentions, but those intentions were more likely aimed at benefiting themselves than at deliberately harming the Germans; it just happened that the Germans suffered from their gains. Also, the Germans should have realized what was happening. Therefore, the Germans were not as united as Fichte claimed in the first place. Were natural borders like language and common ways of thinking truly determinant of a people, the Germans would not have been so susceptible.
The giant tortoise Lonesome George died in the Galapagos Islands this past Sunday. He was the very last of his subspecies, the Pinta Island tortoise (Chelonoidis nigra abingdonii), which is now sadly extinct. He reached a healthy 200 pounds and five feet in length, and died of old age at more than 100. Lonesome George was discovered in 1972 on Pinta Island, when it was thought that tortoises on the island were extinct, and was moved to Santa Cruz Island in the Galapagos. Despite conservationists’ best efforts over the next 40 years, he remained a confirmed bachelor with no known offspring. The lone tortoise became a symbol for the Galapagos Islands and for endangered species. Thankfully not every endangered species’ story has to end as Lonesome George’s did. Several thousand miles to the northwest, on a different Santa Cruz Island – this one off the coast of Santa Barbara in the Channel Islands – we learned some valuable lessons bringing a tiny fox back from the brink. The Santa Cruz Island fox recovery is on track to be one of the fastest endangered species recoveries on record. Islands around the world often have unique species that live there and nowhere else. In fact, according to Island Conservation, islands play an outsized role in nature. They cover only about 3 percent of the world, but harbor 20 percent of known species and 50 percent of endangered species. The Galapagos are well known for inspiring Charles Darwin to develop his theory of evolution and natural selection from what he observed there in the 1830s. Our own Channel Islands, another example, have plant and animal species you can’t find anywhere else on Earth. One of the main drivers of extinction on islands is introduced species – like sheep, goats, pigs, and rats – brought by people and gone wild in new homes with no predators. In the case of Santa Cruz Island, the pigs and sheep brought there devastated the natural habitat. That, combined with golden eagles that had taken up residence on the island and preyed on the foxes, brought them close to extinction. Using a targeted, science-based approach, a coalition engaged in an intensive recovery project to save the island fox. The sheep had been completely removed by the 1990s, and the pigs were eradicated in a record 15 months in 2006. The last pair of breeding golden eagles was also removed soon thereafter. In 2004 there were fewer than 100 foxes; today more than 1,200 foxes live in the wild. The work on Santa Cruz Island has demonstrated that, with a scientific approach, restoration of islands is possible and can deliver extraordinary results. Indeed, islands can act as a real-world R&D lab for discovering and testing solutions that help threatened species survive. There are a lot of new threats on the horizon – climate change, invasive species, habitat loss and so forth. These are the challenges we need to engage with if we are going to help other species avoid Lonesome George’s fate.
Francophones in early Toronto: A little-known heritage An obelisk at the site of Fort Rouillé, better known as Fort Toronto, at Exhibition Place near the downtown core, a few historic plaques in Étienne Brûlé Park along the Humber River in the western part of the city, and a few mentions of the French period in the Fort York National Historic Site exhibitions testify to the historical heritage dating back to New France in Toronto. The contribution of these first European inhabitants to Toronto’s development deserves to be remembered. Humber River: A Canadian Heritage River The Humber River flows towards downtown Toronto from the former suburb of Etobicoke, now part of the city. In 1999, the Canadian Heritage Rivers System designated the Humber River a Heritage River because Aboriginal peoples had followed it for thousands of years to travel between Lake Ontario, Lake Simcoe and Georgian Bay, where the Huron-Wendat lived. French explorers and fur traders also sailed the river in the 17th and 18th centuries. Étienne Brûlé was the first “White” to travel on what is now known as the Toronto Carrying Place Trail between Georgian Bay and Andastes country south of Lake Ontario, in 1615. The explorer and founder of Quebec City, Samuel de Champlain, had sent the young man to live among the Huron-Wendat in 1610 to learn their language and customs. Étienne Brûlé is recognized today as the first Franco-Ontarian, and the park that borders the Humber River is named after him. Some historic plaques in Étienne Brûlé Park attest to the presence of Aboriginal villages, some thousands of years old, while others commemorate Étienne Brûlé himself and other French explorers, as well as the construction of two fur trading posts at the time of New France. Forts Douville, Portneuf and Rouillé-Toronto Although the French regularly visited the Great Lakes as early as the 1630s, they avoided Lake Ontario, which was controlled by the Iroquois, who were enemies of the French’s Aboriginal allies, and therefore enemies of the French. Although there was a peaceful period during which the explorer Cavelier de La Salle and the missionary Louis Hennepin reported meeting, in 1675 and 1680, with the Iroquois, by then in possession of the former Huron-Wendat territory along the Humber (or Kabechenong) River, it was not until the 18th century that the French built their first trading post there. In 1720, young Alexandre Daigneau Douville, whose family were fur traders, erected a modest fort at the place now known as Baby Point (in reference to James Baby, who settled there in 1815) on the Humber River, approximately five kilometres from Lake Ontario. The few French who lived there took Aboriginal wives in “country marriages.” However, there were not many furs, and the French abandoned the fort 10 years later. In 1749, Pierre Robineau de Portneuf oversaw the construction of a new, larger fort at the mouth of the Humber River. The volume of furs traded was so large that a second fort was built 10 kilometres to the east in 1750–1751 to intercept Aboriginal traffic to the British trading post in Oswego on the south shore of Lake Ontario, five days further east by canoe. This 29-square-metre French fort, with a bastion at each of its four corners, was named Fort Rouillé in honour of the Secretary of State for the Navy and the Colonies, Antoine Louis Rouillé, but it is better known as Fort Toronto, a name derived from an Aboriginal place name. 
Fort Toronto was abandoned and burned down by the French in 1759, when they withdrew to Montreal just before the surrender of New France to Great Britain. A commemorative obelisk was erected at the site of the fort in 1887. Toronto’s Francophone historical heritage In short, there are few signs in today’s Toronto of the founding role of these 17th- and 18th-century French explorers, traders and soldiers. Only a few plaques, the outline of Fort Rouillé on the ground around the commemorative obelisk and the information provided at Fort York mark their contribution. The name Toronto, which was first used by French fur traders and voyageurs who lived among the Aboriginal peoples and knew their language, can also be considered a legacy of the period, since the British colonial city of York was renamed Toronto in 1834. Jean-Baptiste Rousseaux, son of the Montreal trader Jean-Bonaventure Rousseaux who was granted the first licence to trade on the Humber River after the British Conquest, opened his home to the Lieutenant-Governor of Upper Canada, John Graves Simcoe, and interpreted for him when he came to grant York the status of capital of Upper Canada in 1793.
Four key components of a lesson plan are setting objectives, determining performance standards, anticipating ways to grab the students' attention and finding ways to present the lesson. Teachers should also focus on closing the lesson and encouraging students to engage in independent learning. Carefully consider several options when developing a lesson plan. - Determine the lesson's objective Before writing the lesson plan, the teacher needs to identify their objective. This means highlighting what the student should achieve by the end of the lesson. - Identify the students' standards The lesson plan should then identify what standards the students should achieve by the end of the lesson. Some schools set standards for their teachers. - Find ways to get the students' attention Lesson plans need to identify ways the teacher can get the students' attention. This can be through statements or actions. - Develop ways to present the lesson Teachers should find ways to present the lesson, such as through presentations, videos and activities. This also needs to include highlighting ways to check on the students' understanding of the content. - Conclude the lesson All lessons should include a closing statement that sums up the aims and learning outcomes. At this point, the teacher needs to reinforce what the students need to learn. - Encourage independent learning Teachers can encourage independent learning either through classroom activities or homework. This should include giving students feedback.
Normal body temperature can vary slightly from person to person. [Credit: Xavi Sanchez] When the German physician Carl Wunderlich first reported 37 degrees Celsius (or 98.6 degrees Fahrenheit) as the average human body temperature in 1861, he claimed to have drawn his conclusion from more than a million armpit measurements of 25,000 patients. As unlikely as that sounds, it’s true that “normal body temperatures” are largely based on observation, and not any comprehensive theory. In fact, normal body temperature not only varies between individuals, but also fluctuates within the same person with time of day and age, usually between 96.9 °F and 100 °F. If you measure your own temperature at different parts of the body, say in your mouth and under your arms, you’ll notice that the temperatures are different. The general rule is that the thinner a body part is and the more contact it has with the outside environment, the lower the temperature you’ll observe. As with all other mammals, humans maintain a relatively constant temperature by breaking down carbohydrates, proteins and fats for energy, much like a power plant that burns coal for energy. The process occurs inside our cells, where oxygen, water and nutrients chemically react to produce carbon dioxide, energy and heat. That heat is then absorbed by blood and distributed throughout the body via a network of veins, arteries and capillaries. The elasticity of those capillaries plays a central role in our ability to maintain constant body temperatures. When there’s too much heat in the body, our capillaries automatically expand and increase the blood flow to the skin, allowing the excess heat to transfer to the air. This is why people become flushed after working out. Conversely, when we don’t have enough energy to balance out the heat loss, capillaries narrow to slow down the blood flow and therefore minimize energy escape. However, not all fluctuations of our body temperature fall under the control of blood vessels. For example, you are likely to have a higher temperature right after a 100-meter sprint than when you are fast asleep. Intense physical activities temporarily boost your metabolic rate as your body burns more fuel to balance your energy consumption. Body temperatures wax and wane with hormone levels, too. That’s why a woman’s basal body temperature, or her temperature on waking after a normal night’s sleep, is often used as an indicator of ovulation. Characterized by the surge of luteinizing hormone, a hormone needed for proper reproductive function, ovulation usually increases basal body temperature by 0.4 °F to 1 °F. Women also tend to have higher rectal body temperatures, or temperatures taken directly inside the body cavity, than men, according to a 2001 study by a group of Dutch scientists. They largely attributed the difference to women’s reproductive cycle, which may in turn explain why men and women have slightly different ways of maintaining their body temperatures. Other possible explanations include different abilities to contract blood vessels and differences in resting metabolic rates. Meanwhile, controlling body temperature has recently emerged as a potential treatment for stroke. Clot-causing cells, the main culprits in blocking blood vessels and inducing stroke, were found to be less active at lower temperatures. The commonly accepted target temperature is now set at 91.4 °F, or 33 °C, but clinical trials are still underway in the search for optimal conditions for treatment. 
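Because the article quotes temperatures in both Fahrenheit and Celsius (98.6 °F / 37 °C, 91.4 °F / 33 °C), here is a minimal Python sketch for converting between the two scales; the temperatures used in the example loop are simply the figures quoted above.

```python
def fahrenheit_to_celsius(temp_f: float) -> float:
    """Convert a temperature from degrees Fahrenheit to degrees Celsius."""
    return (temp_f - 32.0) * 5.0 / 9.0

def celsius_to_fahrenheit(temp_c: float) -> float:
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return temp_c * 9.0 / 5.0 + 32.0

# Check the figures quoted in the article
for temp_f in [98.6, 96.9, 100.0, 91.4]:
    print(f"{temp_f:5.1f} °F = {fahrenheit_to_celsius(temp_f):5.1f} °C")
```

For example, 98.6 °F converts to exactly 37.0 °C and 91.4 °F to exactly 33.0 °C, matching the values given in the text.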
While those treatments require a change in body temperature, it is generally true that a healthy person will have a fairly constant body temperature. In fact, it’s so important that your body spends 90 percent of its metabolic energy to make very sure that your temperature is as close to 98.6 °F as possible. So, even though you may feel hot or cold, or worry that your body temperature isn’t 98.6 °F all the time, rest assured, your body is working very hard to maintain that temperature.
What is a perpendicular bisector? How does an archaeologist assess the size of a surface if only one piece of it survives? How does a landscaper assess the placement of sprinklers for the most efficient use of water? It turns out that in each of these problems a single line, called the perpendicular bisector, can be very important. The perpendicular bisector of the line segment AB is a line with two main properties: - It splits (bisects) the line segment AB into two equal-sized sections - It forms a right angle with the line segment AB (it is perpendicular to it) The perpendicular bisector intersects segment AB at a point C. The distance from point A to point C is equal to the distance from point B to point C. A valuable property is that every point on the perpendicular bisector is the same distance from point A as from point B. Online perpendicular bisector calculator Students often find it tedious to work through geometric constructions manually when they have a variety of options available online. Now, in the age of technology and science, there is a solution to every problem: multiple online calculators help you determine the perpendicular bisector step by step. Meracalculator provides a Perpendicular Bisector Calculator, a digital geometric computation tool designed to figure out a segment's perpendicular bisector from the given coordinates (x1, y1) and (x2, y2). A perpendicular bisector in geometry is the set of points that are equidistant from the coordinates (x1, y1) and (x2, y2); each point on the perpendicular bisector is the same distance from (x1, y1) as it is from (x2, y2). In this calculator, the given coordinates (x1, y1) and (x2, y2) in the XY plane are used to figure out the segment's perpendicular bisector.

Finding the perpendicular bisector manually To identify the perpendicular bisector of two points, all you have to do is determine their midpoint and the negative reciprocal of the slope between them, and put those results into the slope-intercept equation of a line. Below is step-by-step guidance for finding the perpendicular bisector of two points easily. Formula to find the perpendicular bisector The general formula for the perpendicular bisector is y – y1 = m (x – x1) - Here m represents the slope; the slope of the segment through the two points equals (y2 – y1)/(x2 – x1), and the perpendicular bisector uses its negative reciprocal - y2 and y1 are the two y coordinates - x2 and x1 are the two x coordinates Follow the given steps for finding the perpendicular bisector of two points.

Finding the midpoint The first step in determining the perpendicular bisector is simply finding the midpoint of the two points. To determine the midpoint, simply insert the coordinates into the midpoint formula, which is represented as ((x1 + x2)/2, (y1 + y2)/2) To find the midpoint of the two points, take the sum of the two x coordinates and the sum of the two y coordinates, and halve each. Let's assume you are dealing with the coordinates (x1, y1) = (4, 6) and (x2, y2) = (10, 4). Here's the midpoint for those two points. - = [(4+10)/2, (6+4)/2] - = (14/2, 10/2) - = (7, 5) So, the midpoint is (7, 5).

Finding the slope The next step is finding the slope of the line through the two points, and it is also a simple step: you just apply the slope formula. The slope through the two points can be determined by the slope formula, which is represented as (y2 – y1) / (x2 – x1) Simply defined, a slope measures the vertical change over the horizontal change. 
Here is the calculation of the slope of the line that goes through the points (x1, y1) = (4, 6) and (x2, y2) = (10, 4): Slope = (y2 – y1)/(x2 – x1) = (4 – 6)/(10 – 4) = –2/6 As a result, the slope of the line is –1/3. To get this, reduce –2/6 to its lowest terms, –1/3, since both numbers are evenly divisible by 2.

Negative reciprocal of the slope It is easy to take the negative reciprocal of a slope; all you need to do is take the slope's reciprocal and change the sign. You can take the negative reciprocal of a number by flipping its numerator and denominator and changing the sign. The negative reciprocal of 1/2 is –2/1, or simply –2; the negative reciprocal of –4 is 1/4. Here, 3 is the negative reciprocal of –1/3, because 3/1 is the reciprocal of 1/3 and the sign has shifted from negative to positive. Now that you have found the perpendicular slope, you can solve for the equation using that slope and the midpoint. Let's find the equation of the perpendicular bisector of AB in the form y = mx + b. Here the slope is m = 3, so the equation becomes y = 3x + b. Now the midpoint is (x, y) = (7, 5). The next step is to take these values of x and y and put them into the equation: 5 = 3 × 7 + b 5 = 21 + b b = 5 – 21 b = –16 With m = 3 and b = –16, the final equation of the perpendicular bisector between the two points, y = mx + b, becomes y = 3x – 16. Thus, the perpendicular bisector of the two points is y = 3x – 16.

By Ezza Dugan, who writes for business marketers and SEO tool users to make their websites rank on Google. She has written for a number of websites, i.e. calculators(.)tech, Inside Tech Box and eLearning Industry. She is a regular contributor to prepostseo(.)com with digital marketing, SEO techniques, and tech-related articles.
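To tie the worked example above together, here is a minimal Python sketch that carries out the same steps – midpoint, slope, negative reciprocal, and intercept – for the perpendicular bisector of the segment joining two points. It uses the same example points, (4, 6) and (10, 4), as the text.

```python
def perpendicular_bisector(x1: float, y1: float, x2: float, y2: float):
    """Return (slope, intercept) of the perpendicular bisector of the segment
    from (x1, y1) to (x2, y2).  Assumes the segment is neither horizontal nor
    vertical, so both the segment's slope and the bisector's slope are defined."""
    # Step 1: midpoint of the segment
    mx, my = (x1 + x2) / 2, (y1 + y2) / 2
    # Step 2: the segment's slope is dy/dx, so its negative reciprocal is -dx/dy
    dx, dy = x2 - x1, y2 - y1
    perp_slope = -dx / dy
    # Step 3: solve y = m*x + b for b using the midpoint
    intercept = my - perp_slope * mx
    return perp_slope, intercept

# Worked example from the text: points (4, 6) and (10, 4)
m, b = perpendicular_bisector(4, 6, 10, 4)
print(f"y = {m}x + ({b})")  # prints: y = 3.0x + (-16.0)
```

The printed result matches the equation y = 3x – 16 derived by hand above.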
The Earth is surrounded, it's said, by cosmic flypaper: areas of space that cancel gravity and trap unwilling objects inside. Quite what those objects are has remained a mystery... but we're about to find out. The cosmic flypaper is actually known as "Lagrangian points", after the mathematician Joseph-Louis Lagrange, who discovered them in 1772, and there are five such points in Earth's orbit (known by scientists as L1, L2, L3, L4 and L5). For billions of years, these enormous (millions of kilometers wide in the cases of L4 and L5) areas of space that lack any net gravity - due to Earth's gravitational pull canceling out that of the Sun - have been trapping all manner of space debris inside, and now scientists are aiming to use two NASA probes to find out just what lies in there, and what it can tell us about the origins of our solar system... and more specifically, of our moon: Most astronomers believe that the moon formed from the debris generated when a Mars-sized object struck the Earth a glancing blow about 4 billion years ago. Their problem is in understanding where the object came from. Computer models show that incoming objects from elsewhere in the solar system would tend to strike the Earth with too much energy. Instead of creating the moon, they obliterate the Earth. So the impactor must have originated close by, the theory goes, where it could not accelerate too much before hitting. Another clue is that the moon contains the same abundance of oxygen isotopes as the Earth, hinting that whatever hit us must also have had the same isotope abundance. When astronomers look out into the solar system, to Mars for example, the isotope abundances are different. So this, too, hints that the impactor formed close by. But where? What is puzzling is how an object could grow so close to the Earth and reach the size of Mars before a collision took place. Their mutual gravity should have pulled them together long before. Unless, says [Princeton University astrophysicist Richard] Gott, it formed at a Lagrangian point. "An object could sit at one of these stable points and just grow," he says. Once it grew sufficiently large, gravitational interactions with other objects, such as Venus, could nudge it out of the Lagrangian point and onto a collision course with Earth. NASA's probes - STEREO A and B - will begin journeying through L4 and L5 later this year. Do gravity holes harbour planetary assassins? [New Scientist]
It is pretty difficult to explain (and also to treat) astigmatism, which is one of the most widespread causes of poor vision. Very often astigmatism goes hand in hand with myopia (myopic astigmatism) or hyperopia (hypermetropic astigmatism). Astigmatism literally means absence of a focal point (from the Greek a-, "without", and stigma, "point"). This ailment comes as a result of an incorrect (non-spherical) shape of the cornea (or, more rarely, of the crystalline lens). In a normal state the cornea and the crystalline lens of a healthy eye have an even spherical surface, and astigmatism distorts it: the surface displays uneven curvature in various directions. Accordingly, astigmatism introduces different refractive power in different meridians, and the image is distorted when light passes through such a cornea. Some parts of the image may be focused on the retina, while others are focused either in front of or behind the retina (there may be more complicated cases). The result is that instead of a normal image a person sees a distorted one in which some lines are sharp while other lines are blurred. One can get an idea of this by looking at the reflection in an oval-shaped teaspoon. A similar image distortion on the retina occurs in cases of astigmatism. Specialists distinguish corneal astigmatism and lenticular astigmatism. The corneal type has a greater impact on vision than the lenticular one, since the cornea has greater refractive power. The difference in the refractive strength of the strongest and the weakest meridians defines the value of astigmatism in diopters. The direction of the meridians is defined by the astigmatism axis, expressed in degrees. Grades of astigmatism Specialists identify three grades of astigmatism. - mild astigmatism – up to 3.0 D - moderate astigmatism – from 3.0 D to 6.0 D - high astigmatism – over 6.0 D Types of astigmatism In terms of its nature, astigmatism is either congenital or acquired. - Congenital astigmatism of up to 0.5 D is found in most children and is identified as "functional", i.e. such astigmatism does not affect sharpness of sight or the development of binocular vision. However, if astigmatism reaches 1.0 D or more, it reduces vision substantially and requires correction with glasses. - Acquired astigmatism results from scars on the cornea due to injury or surgery. There are three methods for correcting astigmatism – glasses, contact lenses, and excimer laser correction. Astigmatism correction with glasses Astigmatic patients are usually prescribed special "intricate" glasses with cylindrical lenses. Specialists note that wearing such glasses may cause unpleasant symptoms for patients with high astigmatism – e.g. dizziness, eye pain, and visual discomfort. In contrast to standard glasses, prescriptions for "intricate" glasses for astigmatism patients include data on the cylinder and its axis. It is very important to conduct a very detailed diagnosis prior to the selection of glasses. Quite often people with astigmatism have to change their glasses several times. Contact lenses for astigmatic patients Speaking of contact lens correction for astigmatism, it is important to note that until recently the only option was hard contact lenses. These lenses not only caused discomfort to patients but also had a negative effect on the cornea. But medical science is constantly progressing, and special toric contact lenses are used today. 
- Following the prescription of glasses or contact lenses, it is essential to remain under the regular observation of an ophthalmologist to ensure timely replacement of the glasses or contact lenses with more powerful or less powerful ones. - Glasses and contact lenses are not the ultimate solution to the problem of astigmatism – they are merely an instrument for temporary correction of vision. Only surgery allows full elimination of astigmatism. Excimer laser correction of astigmatism Recently, excimer laser correction has usually been applied for the treatment of astigmatism (up to ±3.0 D). Laser correction based on the LASIK technique can hardly be classified as surgery. The procedure lasts 10–15 minutes under local drop anaesthesia, and the laser itself works for only 30–40 seconds, depending on the complexity of the case. During the LASIK procedure, a special microkeratome instrument is used to separate a 130–150 micron flap of the corneal surface layer and open a path for the laser beam into the deeper layers of the cornea. The laser then evaporates part of the cornea. The flap is then replaced and fixed by the cornea's own collagen. No suturing is required, since the epithelial tissue restores itself along the edges of the flap. LASIK vision correction requires only a short period of rehabilitation. A patient has good vision within 1 to 2 hours after the treatment, and vision is fully restored within one week. Dangers of astigmatism If astigmatism is not taken care of, it may lead to strabismus and an acute deterioration of vision. Without proper correction, astigmatism may cause headaches and eye pain. For this reason it is important to visit an ophthalmologist regularly.
WATER AND THE BEGINNING OF LIFE Dr. Constantinos E. Vorgias, Professor of Biochemistry, National and Kapodistrian University of Athens

The blue planet Earth is ca. 4.6 billion years old. During the initial 0.7 billion years following its formation, the early Earth was heavily bombarded by solar system materials, such as comets and asteroid-sized objects. The energy released by the largest impacts was sufficient to evaporate the oceans and destroy any existing life on the Earth’s surface. The first signs of life evidenced by the fossil record came into being approximately 3.5 billion years ago. Life emerged through a complex chain of evolutionary events, dictated by the physical-chemical environment on the early Earth. The reducing atmosphere provided favourable energetic surroundings for the formation of relatively complex polymers from organic monomers which were already present on the primitive Earth. The monomers have been demonstrated to come from two sources: either formed through terrestrial synthetic pathways or derived extraterrestrially from solar system materials. Over time, simple molecules developed into larger, more complex biological molecules and eventually into cells. Following further diversification, some cells developed that became metabolically capable of photosynthesis. This caused a cascade of irreversible events, interconnected by biogeochemical cycles. The atmosphere of the Earth changed to an oxidizing one and subsequently developed an ozone layer. The introduction of oxygen no longer supported the development of new life forms from the primordial building blocks, but instead supported the biological development and diversification of the early microorganisms. The ozone layer served as a means of protection, filtering the harmful UV radiation. These dramatic changes transformed the early Earth into our present day biosphere. Assembly of the first cellular life on the prebiotic Earth required the presence of three essential substances: water, a source of free energy and a source of organic compounds. Liquid water is essential for all life today, and it is highly improbable that life as we know it can exist in its absence. The exact origin of water on Earth is unknown; however, it has been suggested that water became available when the Earth cooled enough for water vapour to precipitate from the atmosphere. Water has also been suggested to derive from rocky material formed in the Earth’s region of the solar nebula or to have been delivered by comets. Regardless of its source, water was present prior to the appearance of the first microfossils 3.5 billion years ago. To polymerize small molecules into more complex forms, energy sources are required. Sunlight, lightning, volcanoes, as well as the intense UV radiation that penetrated the primitive atmosphere prior to the formation of ozone, provided substantial amounts of free energy; significantly more UV radiation reached the surface during the stage when the Earth had a reducing atmosphere. Biological compounds such as amino acids can be synthesized simply from the constituents of the prebiotic soup under the environmental conditions of the primitive Earth. The most widely accepted model for this phenomenon is Miller and Urey’s. Their experimental setup did account for an energy source, a reducing atmosphere, an aqueous environment, and evaporation of the ocean and rainwaters, but it was not able to simulate the effects of alternating day and night. 
This discovery was pivotal, as it inspired a major effort on the part of others to determine whether other biologically important compounds were also present on the primeval Earth. The earliest forms of life likely required membranes. Phospholipids are the primary components of modern cell membranes, but it is improbable that such complex molecules were part of the prebiotic soup. Instead, simpler membranogenic amphiphilic molecules probably served as precursors, which then evolved chemically into the varied and complex phospholipid forms. Amphiphilic molecules have one end that interacts favourably with water molecules (hydrophilic) and another that is far less interactive (hydrophobic). This attribute of amphiphiles allows them to self-assemble into vesicles and bilayers. It is speculated that although modern phospholipids were absent, these amphiphilic molecules were abundant in the prebiotic environment, given that virtually any molecule that has both a hydrophobic and a hydrophilic component is considered amphiphilic. Like the primordial simple organic monomers, amphiphilic molecules have been demonstrated to form from both endogenous and exogenous sources. These components, whether from meteoritic or synthetic mixtures, are capable of spontaneously forming stable membrane vesicles with defined compositions and organization. When amphiphilic molecules self-assemble into membranes, their vesicular organization creates an effective permeability barrier between the interior and the exterior aqueous compartments. The selective permeability of the early membranes that formed the boundary of primitive cells permitted the entry of essential nutrients. However, being less sophisticated than their modern counterparts, the early membranes would have been impermeable to larger, polymeric molecules, such as the precursors of nucleic acid and protein polymers. Thus, it follows that in order to encapsulate the larger moieties, a course of action was required in which the membrane bilayers are first disrupted, allowing entry, and then resealed to trap the molecular entities within. Laboratory simulations of hydration-dehydration cycles in intertidal zones, or of an evaporating lagoon on the early Earth, have verified the ability of early membranous boundaries to encapsulate functional macromolecules. This was a critical property of the early membrane microenvironments, as only in this manner would the polymeric products of primitive biosynthesis have accumulated in the encapsulated volume. Essentially, as the composition of the interior compartment became more specific, a population of these bounded molecular systems advanced and increased in metabolic complexity. This was followed by the evolution of controlled cell growth and ultimately the emergence and growth of cells as we know them today. The amphiphilic molecules on the primitive Earth have undeniably undergone considerable evolution as the first forms of life emerged and acquired new catalytic capacities. There is good evidence that membrane vesicles are the intermediate between prebiotic cells and the first cells capable of growth and division. Indeed, the earlier a membrane structure was present on the Earth, the easier it would have been for cells to assemble metabolic pathways and genetic material. Organic compound concentrations in the water bodies of the ancestral Earth have been estimated to be approximately one micromolar. 
The inherent self-aggregation of amphiphilic molecules would have created locally high concentrations within the dilute solution of organic compounds. Held together primarily by weak non-covalent interactions driven by hydrophobic forces, the early amphiphilic assemblies would have been extremely stable over time. Regardless of stability, exchange of components and subsequent growth of the cell would have required only minimal energy. Once formed, cell membranes also have the potential to maintain a concentration gradient, providing a source of free energy that can drive transport processes across the membrane boundary. The molecular systems from which life emerged were likely subject to the same physical and chemical laws that guide the self-assembly processes of current life. The high salt concentration of a marine environment would not have favoured the self-assembly of membrane structures because of the osmotic pressure it would have created across the membrane. In addition, the prevalence of divalent cations in the ancestral environment, binding to the anionic head groups of amphiphilic molecules, would have greatly inhibited their ability to form stable membrane structures. Thus, these parameters suggest that, from the perspective of membrane biophysics, the environment most suitable for the origin of life would be a pond, which has low ionic strength and a submillimolar concentration of divalent cations. Liposomes are artificial vesicle membranes which form upon hydration of membranogenic lipids in an aqueous medium. They are commonly used as model systems, among others, for the study of the physical-chemical attributes of early membrane processes. Owing to the diversity of the chemical nature of lipids, the physical-chemical properties of liposomes can be modulated to satisfy specific functional objectives, which makes them an invaluable tool. For example, the degree of bilayer packing and the fluidity of the liposome may be controlled simply by selecting lipids with different acyl chain lengths. The lipid bilayer organization creates an effective permeability barrier between the interior and the exterior aqueous compartments, whereby hydrophilic agents are situated in the interior aqueous core and hydrophobic agents are solubilized within the hydrophobic regions. The origin of life required a combination of elements, compounds and environmental physical-chemical conditions. This suggests that many different perspectives and scientific disciplines must be harnessed to comprehend the origin of the biological world. Lipids and amphiphilic molecules, the building blocks of membrane bilayers, are definitive of life today. Accumulating evidence effectively demonstrates amphiphilic molecules to be the first biological molecules. Although viewed as having primarily structural roles, amphiphilic molecules and their ability to spontaneously form membranous microenvironments definitely underlie some of the earliest key events that led to the emergence of biological complexity.
Gerald Ford (who, with funds cut off, declared the US war ended April 23, 1975) governed the nation in a difficult period. Though president for only 895 days (the fifth shortest tenure in American history), he faced tremendous problems. After the furor surrounding the pardon subsided, the most important issues faced by Ford were inflation and unemployment, the continuing energy crisis, and the repercussions, both actual and psychological, from the final "loss" of South Vietnam in April 1975. Ford consistently championed legislative proposals to effect economic recovery by reducing taxes, spending, and the federal role in the national economy, but he got little from Congress except a temporary tax reduction. Federal spending continued to rise despite his call for a lowered spending ceiling. By late 1976 inflation, at least, had been checked somewhat; on the other hand, unemployment remained a major problem, and the 1976 election occurred in the midst of a recession. In energy matters, congressional Democrats consistently opposed Ford's proposals to tax imported oil and to deregulate domestic oil and natural gas. Eventually Congress approved only a very gradual decontrol measure. Ford believed he was particularly hampered by Congress in foreign affairs. Having passed the War Powers Resolution in late 1973, the legislative branch first investigated, and then tried to impose restrictions on, the actions of the Central Intelligence Agency (CIA). In the area of war powers, Ford clearly bested his congressional adversaries. In the Mayaguez incident of May 1975 (involving the seizure of a U.S.-registered ship of that name by Cambodia), Ford retaliated with aerial attacks and a 175-marine assault without engaging the formal mechanisms required by the 1973 resolution. Although the actual success of this commando operation was debatable (39 crew members and the ship rescued, at a total cost of 41 other American lives), American honor had been vindicated and Ford's approval ratings rose sharply. Having succeeded in defying its provisions, Ford continued to speak out against the War Powers Resolution as unconstitutional even after he left the White House.
Guide to Propulsion Balloon Rocket Car (Easy) Activity Students will learn the concepts of Newton's Laws of Motion, friction, jet propulsion, and air resistance by designing and constructing a balloon-powered rocket car. The goal is to build a Balloon Rocket Car that can extract the most energy out of the inflated balloon and make the vehicle travel the longest distance. The thrust of a jet engine is similar to the thrust produced in the balloon rocket car. When the balloon is blown up, the air pushes on the balloon skin, keeping it inflated. Covering the nozzle of the balloon keeps this high-pressure air trapped, and at this point all the forces are balanced. Once the nozzle is opened, the forces inside the balloon are no longer balanced and the high-pressure air escapes through the nozzle, which produces thrust and makes the car accelerate. Similarly, in a jet engine the air enters the engine where it is compressed and heated to create a high-pressure region, which is then accelerated through a nozzle to produce a thrust force. This principle follows Newton's Second Law of Motion, Force = mass x acceleration. Otherwise stated, "if an object is acted on by an unbalanced force it will undergo an acceleration. The amount of acceleration depends on the force and the mass of the object." (A short worked example of this relationship follows the activity steps below.) Engines must provide enough thrust to overcome the forces of drag on the aircraft as shown in the illustration below. This also follows Newton's First Law of Motion, "an object at rest will stay at rest and an object in motion will stay in motion in a straight line unless acted upon by an unbalanced force." Therefore the forces pushing the engine and aircraft forward must be stronger than the drag force. Likewise, the thrust of the balloon rocket car must be greater than the forces acting against the car itself. What forces are acting on the balloon rocket car? There are two main forces acting on the balloon rocket car: friction and air resistance. The friction force is the resistance between two objects sliding against each other. While building your car, identify the places where objects will be rubbing against each other, creating friction. Air resistance is another form of friction, in which an object slides against air particles. You can experience this air resistance when riding a bike and the wind is hitting your face. You must pedal fast enough to overcome the wind. The rocket car has the greatest air resistance when the balloon is fully inflated and the car begins moving, because there is more area that has to push past the air particles. One last item to consider before constructing the balloon rocket car is how the nozzle size will affect the distance the car will travel. Keep in mind that the nozzle size will determine how much pushing force (thrust) the balloon will create. The greater the nozzle size, the greater the thrust, but the faster the air will escape. The smaller the nozzle, the smaller the thrust, but the car may roll longer. - Water Bottle (Chassis) - Balloon, Vinyl Tubing, rubber band (Motor) - Wooden Skewers and straws (Axle) - Various Materials for wheels - Each team is provided with a kit to construct their Balloon Rocket Car. - Teams are to select a nozzle (vinyl tubing) size and wheels they would like to use on their car. Keep in mind how the size may affect how far the car will go. - Each team should select an individual in charge of blowing up their balloon. Note: Blow the balloon up a couple of times to stretch it out. - Assemble the Chassis and Suspension: a. Cut the straw into two pieces.
The length should be equal to the width of the water bottle. b. Tape the two straw pieces underneath the water bottle where you feel the front and rear wheels should go. Keep the straws lined up so the car travels in a straight line. c. Cut two pieces of the wooden skewer. The length should be between an inch and an inch and a half longer than the straw that was taped to the bottle. d. Put one end of each wooden skewer through your wheel. If the wheel is loose on the skewer, use modeling clay to hold it in place. e. Slide the skewers through the straws and attach the remaining wheels to the skewers. f. Now you should have a rolling chassis! - Assemble the Motor: a. Insert the nozzle part way into the balloon. b. Use a rubber band to secure the nozzle to the balloon. c. Insert the nozzle through the slit on the top of your water bottle. d. Make sure about an inch of the nozzle is sticking out of the mouth of the bottle. e. Now your team is ready to test! - Make your way over to the test track. Blow the balloon up and pinch it at the base so the air won't escape. - Line the rocket car up on the starting line and, when the track is clear, release the balloon. - Record the distance the car went. - Each team is allowed to change the car once (adjust wheels and/or nozzle), but the team may run the car on the track as many times as they feel necessary. This activity is adapted from the following resources: Home Science Tools, SAE A World in Motion (AWIM) JetToy.
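To put rough numbers on the Newton's-second-law discussion above, the short sketch below estimates the car's initial acceleration from an assumed thrust and mass. The thrust, resistance, and mass values are illustrative guesses, not measurements of this kit.

```python
# Rough Newton's-second-law estimate for the balloon rocket car.
# All numbers below are illustrative guesses, not measured values.

thrust = 0.30        # N, assumed push from the air escaping the nozzle
resistance = 0.05    # N, assumed rolling friction plus air resistance at low speed
mass = 0.08          # kg, assumed mass of bottle, wheels, skewers, and balloon

net_force = thrust - resistance      # the unbalanced force on the car
acceleration = net_force / mass      # Newton's second law: a = F / m

print(f"net force:    {net_force:.2f} N")
print(f"acceleration: {acceleration:.1f} m/s^2")

# A larger nozzle would raise the thrust term but empty the balloon sooner;
# a smaller nozzle lowers the thrust but keeps pushing for longer.
```

Teams could plug in their own estimates after a test run to see how changing the nozzle size or shedding mass shifts the predicted acceleration.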
Students dig through a compost heap to see what they can discover about decomposition and then take a sample from the heap to look at through lenses and microscopes. - ID sheets, pencils for writing and drawing, clipboards - microscopes, flashlights, magnifying lenses - shovel, trowel, pick, plastic petri dishes - hand wipes, sanitizing gel, tweezers - Docent asks students what they know about a compost heap. - Docent prompts students to hypothesize about how composting works. - Docent hands out collection dishes and explains that they are for collecting a sample at the compost pile. - Docent takes students to the compost heap. - Students explore the heap with a shovel, trowels, and a pick, learning that compost heaps have to be turned to help keep the temperature hot to aid in decomposition. (Older students will take temperature readings.) - They choose a sample and put it in their collection dishes. - Docent takes students back to the pavilion where they can examine the sample they took through microscopes and magnifying glasses. - Older students can use an ID sheet to identify some of the critters found in a compost heap and read about the function that critter performs. Younger students can use ID sheets, but the docent, teachers and helpers can aid them in understanding the critter's role in a compost heap. Alabama Course of Study for Science This activity meets goals specified for grades: - K – #s 1, 2, 3, 4, 5, 14, 15, 17, 18, 20 - 1st – #s 1, 2, 3, 4, 5, 20, 21, 26, 27 - 2nd – #s 1, 2, 3, 4, 5 - 3rd – #s 1, 2, 3, 4, 6, 13, 32, 38, 39 - 4th – #s 1, 2, 3, 4, 6, 41, 43, 46 - 5th – #s 1, 2, 3, 4, 6, 30 Students will discover the cycle of decomposition and the part composting plays in that larger cycle. Students will also understand the cycle of life within the compost heap. - Students will be able to: - examine a compost heap. - explore and hypothesize what composting is. - identify and examine parts of a compost heap, and critters that live in a compost heap.
Europe needs adaptation strategies to limit climate change impacts Copenhagen, 18 August 2004. More frequent and more economically costly storms, floods, droughts and other extreme weather. Wetter conditions in northern Europe but drier weather in the south that could threaten agriculture in some areas. More frequent and more intense heatwaves, posing a lethal threat to the elderly and frail. Melting glaciers, with three-quarters of those in the Swiss Alps likely to disappear by 2050. Rising sea levels for centuries to come. These are among the impacts of global climate change that are already being seen in Europe or are projected to happen over the coming decades as global temperatures rise, according to a new report from the European Environment Agency (EEA). Strong evidence exists that most of the global warming over the past 50 years has been caused by human activities, in particular emissions of heat-trapping greenhouse gases, such as carbon dioxide (CO2) from the burning of fossil fuels. The concentration of CO2, the main greenhouse gas, in the lower atmosphere is now at its highest for at least 420,000 years - possibly even 20 million years - and stands 34% above its level before the Industrial Revolution. The rise has been accelerating since 1950. The summer floods of 2002 and last year's summer heatwave are recent examples of how destructive extreme weather can be. The serious flooding in 11 countries in August 2002 killed about 80 people, affected more than 600,000 and caused economic losses of at least 15 billion US$. In the summer 2003 heatwave, western and southern Europe recorded more than 20,000 excess deaths, particularly among elderly people. Crop harvests in many southern countries were down by as much as 30%. Melting reduced the mass of the Alpine glaciers by one-tenth in 2003 alone. "This report pulls together a wealth of evidence that climate change is already happening and having widespread impacts, many of them with substantial economic costs, on people and ecosystems across Europe," said Prof. Jacqueline McGlade, EEA Executive Director. She added: "Europe has to continue to lead worldwide efforts to reduce greenhouse gas emissions, but this report also underlines that strategies are needed, at European, regional, national and local level, to adapt to climate change. This is a phenomenon that will considerably affect our societies and environments for decades and centuries to come." The extent and rate of the climate changes under way most likely exceed all natural variation in climate over the last thousand years and possibly longer. The 1990s were the warmest decade on record and the three hottest years recorded - 1998, 2002 and 2003 - have occurred in the last six years. The global warming rate is now almost 0.2 °C per decade. Europe is warming faster than the global average. The temperature in Europe has risen by an average of 0.95 °C in the last hundred years and is projected to climb by a further 2.0-6.3 °C this century as emissions of greenhouse gases continue building up. As a first step towards reversing this trend, the world's governments in 1997 agreed the Kyoto Protocol, an international treaty under which industrialised countries would reduce their emissions of six greenhouse gases by around 5% between 1990 and 2012.
So far 123 countries, including all member states of the European Union, have ratified the treaty but the US, the biggest emitter of greenhouse gases, has decided against doing so. To enter into force the Protocol still needs ratification by Russia. In addition to those mentioned above, a broad range of current and future impacts of climate change in Europe are highlighted in the report, including the following: - Almost two out of every three catastrophic events since 1980 have been directly attributable to floods, storms, droughts or heatwaves. The average number of such weather and climate-related disasters per year doubled over the 1990s compared with the previous decade. Economic losses from such events have more than doubled over the past 20 years to around 11 billion US$ annually. This is due to several reasons, including the greater frequency of such events but also socio-economic factors such as increased household wealth, more urbanisation and more costly infrastructure in vulnerable areas. - The annual number of floods in Europe and the numbers of people affected by them are rising. Climate change is likely to increase the frequency of flooding, particularly of flash floods, which pose the greatest danger to people. - Climate change over the past three decades has caused decreases in populations of plant species in various parts of Europe, including mountain regions. Some plants are likely to become extinct as other factors, such as fragmentation of habitats, limit the ability of plant species to adapt to climate change. - Glaciers in eight of Europe's nine glacial regions are in retreat, and are at their lowest levels for 5,000 years. - Sea levels in Europe rose by 0.8-3.0 mm per year in the last century. The rate of increase is projected to be 2-4 times higher during this century. - Projections show that by 2080 cold winters could disappear almost entirely and hot summers, droughts and incidents of heavy rain or hail could become much more frequent. Climate change does appear to have some positive impacts too, however. - Agriculture in most parts of Europe, particularly the mid latitudes and northern Europe, could potentially benefit from a limited temperature rise. But while Europe's cultivated area may expand northwards, in some parts of southern Europe agriculture could be threatened by water shortages. And more frequent extreme weather, especially heatwaves, could mean more bad harvests. Whether positive impacts occur will greatly depend on agriculture's capacity to adapt to climate change. - The annual growing season for plants, including agricultural crops, lengthened by an average of 10 days between 1962 and 1995 and is projected to continue getting longer. - The survival rate of bird species wintering in Europe has improved over the past few decades and is likely to increase further as winter temperatures continue rising. The report, Impacts of climate change in Europe: An indicator-based assessment, is available at http://reports.eea.europa.eu/climate_report_2_2004/en. Notes to editors - The 1997 Kyoto Protocol to the UN Framework Convention on Climate Change (UNFCCC) will control industrialised countries' emissions of CO2, methane (CH4) and nitrous oxide (N2O), plus three fluorinated industrial gases: hydrofluorocarbons (HFCs), perfluorocarbons (PFCs) and sulphur hexafluoride (SF6). 
- The Kyoto Protocol is a first step towards the UNFCCC's ultimate objective to "achieve stabilisation of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic [human] interference with the climate system." What this level should be is not stated, but the EU has defined an indicative target for long-term global temperature rise of not more than 2 °C above pre-industrial levels. On present trends this target is likely to be exceeded around 2050. Achieving both the EU temperature target and the UNFCCC objective would require a substantial reduction in global greenhouse gas emissions from 1990 levels. - The report examines the state of climate change and its impacts in Europe by using 22 indicators that fall into eight broad categories: atmosphere and climate; glaciers, snow and ice; marine systems; terrestrial ecosystems and biodiversity; water; agriculture; economy; and human health. For almost all the indicators a clear trend exists and impacts are already being observed. The 22 indicators illustrate only a small range of the potential consequences of climate change, but in other areas insufficient data are available for Europe or uncertainty exists over whether climate change is the cause of changes in the indicators. The report was prepared for the EEA by its European Topic Centre on Air and Climate Change, including the Umweltbundesamt (Federal Environmental Agency, Germany) and RIVM (National Institute of Public Health and the Environment, the Netherlands) who both also contributed through additional national funding. About the EEA The European Environment Agency is the leading public body in Europe dedicated to providing sound, independent information on the environment to policy-makers and the public. Operational in Copenhagen since 1994, the EEA is the hub of the European environment information and observation network (Eionet), a network of around 300 bodies across Europe through which it collects and disseminates environment-related data and information. An EU body, the Agency is open to all nations that share its objectives. It currently has 31 member countries: the 25 EU Member States, three EU candidate countries - Bulgaria, Romania and Turkey - and Iceland, Liechtenstein and Norway. A membership agreement has been initialled with Switzerland.
In a research study by Adele Diamond and Kathleen Lee, published at www.sciencemag.org on August 31, 2011, diverse activities were shown to improve children's executive functions: computerized video game training, non-computerized games, aerobics, martial arts, yoga, mindfulness, and school curricula. All successful programs involve repeated practice and progressively increase the challenge to executive functions. Children with worse executive functions benefit most from these activities; thus, early executive-function training may avert widening achievement gaps later. To improve executive functions, focusing narrowly on them may not be as effective as also addressing emotional and social development (as do curricula that improve executive functions) and physical development (shown by positive effects of aerobics, martial arts, and yoga). Cogmed Working Memory Training, one example of computerized video game training, is a home-based computerized brain training program that is designed to help people sustainably improve their working memory capacity. Clinically proven results demonstrate that after training, users increase their ability to concentrate, control impulsive behavior, and better utilize complex reasoning skills. In the end, better academic performance can be achieved, especially in math and reading. In the August 2011 Special Section of Science, Cogmed was featured as the "most researched approach" for improving executive functions in school children 4 to 12 years of age. In evaluating Cogmed, as well as other approaches such as combined computerized and non-computerized training, aerobic exercise, martial arts/mindfulness practice, classroom curricula and add-ons to classroom curricula, researchers came to some main conclusions specifically related to Cogmed: a. Cogmed training improves working memory b. Cogmed training has shown transfer to other executive functions, but this transfer is narrow c. Children with the poorest executive functions benefit most from training programs d. Executive function training has the potential to impact academic achievement in children e. Adaptive training is necessary because executive functions must be continually challenged in order to improve f. A key element to improving executive functions is the child's motivation, that is, their willingness to devote time to the activity g. One benefit of computerized training over other approaches is that it can be done at home Importantly, this review of computerized training in Science parallels Cogmed's standpoint that adaptive and supported computerized working memory training benefits individuals with working memory constraints, impacts executive functions and influences academic outcomes. Further, a review of Cogmed in the journal Science, in the context of improving executive functions in school children, represents a growing acceptance of Cogmed Working Memory Training within the scientific community.
Depending on the circumstances, hot water can actually freeze faster than cold water. However, this is not always the case, as a large number of variables affect how fast water freezes, including the shape of the container, the cooling conditions and the specific water temperatures. This phenomenon of hot water freezing faster than cold is known as the Mpemba effect, named after a Tanzanian high school student who documented the experiment in 1969. However, this was actually a rediscovery of the phenomenon, as famous scholars like Aristotle, Rene Descartes and Sir Francis Bacon had also recorded its existence. Although scientists are aware of the phenomenon, there is still no definitive answer as to why it occurs, since many different factors can play a role under certain circumstances. One theory is that the warmer water's quicker rate of evaporation may be at least partly responsible. Another possibility is that the hot water's lowered ability to contain absorbed gases may speed its freezing. It may also be due to the fact that cold water experiences a strong supercooling effect, which makes it more difficult for it to turn into a solid when it reaches its freezing point. One final possibility is that convection allows the warmer water to lose its heat faster than the colder water.
What is Gangrene? Gangrene is a condition that occurs when body tissues die. This serious condition generally begins in the outermost parts of the body, such as the limbs, toes, or fingers. However, gangrene can also occur in muscles and internal organs. Inhibition of blood circulation is a major cause of gangrene, since blood flow carries the nutrients and oxygen that keep tissues alive. If blood does not flow smoothly and freely throughout the body, our cells will die. Coupled with an untreated infection in the area, the surrounding tissue will die, resulting in gangrene. The condition may be triggered by various factors. Some of these include: - Severe injuries and surgical wounds. - Diabetes. High sugar levels can damage the nerves and blood vessels. - Disorders of blood vessels, such as Peripheral Artery Disease (PAD) or atherosclerosis (arterial narrowing and blockage by fatty deposits in the arteries). - Obesity. In addition to gangrene, obesity can also increase the risk of other diseases. - Raynaud's phenomenon, a condition in which the blood vessels that supply blood to the skin (especially on the toes or fingers) have an abnormal reaction to cold temperatures. - Weak immune systems (e.g. in people living with HIV), malnutrition, chronic alcoholism, drug use, and chemotherapy. In these groups of people, even mild infections can turn serious and trigger gangrene. Types of Gangrene Gangrene is divided into several categories based on the cause. The 4 main categories of gangrene are as follows. - Dry gangrene, which occurs due to inhibition of blood flow to certain body parts. - Wet gangrene, which is triggered by injury and bacterial infections. - Gas gangrene, which attacks muscle tissue. The bacteria release gas, so the skin eventually forms air bubbles, such as blisters. - Internal gangrene, due to inhibition of blood flow to internal organs. Symptoms of Gangrene Gangrene has a wide range of symptoms, depending on the cause. In general, gangrene symptoms may include: - Initially, visible signs of infection such as redness and swelling. - In internal gangrene, the affected part feels very painful or numb (complete loss of the sense of touch). - Wounds or blisters that are bloody or accompanied by a foul-smelling pus. - The skin on the affected area appears wrinkled and dry, and is clearly demarcated from healthy skin. - Skin color changes, such as pale, red, purple, or even black. Gangrene is a serious condition requiring emergency treatment. Go to the hospital immediately if you experience any of the above symptoms. People with gangrene are also at high risk of septic shock due to bacteria entering the bloodstream. This condition triggers a drastic, life-threatening drop in blood pressure. Gangrene Diagnosis Process In the early stages of the examination, the doctor will check the patient's physical condition and injury, and ask about the patient's and family's medical history. To confirm the diagnosis, the doctor may also recommend some further tests such as: - Blood tests to check for infection. - Sampling tissue, fluid, or blood from the wound. If needed, a small operation will be performed to take tissue samples from the inside of the affected organ. This step is done to check the spread of gangrene in the body. - MRI or CT scan. Gangrene Treatment Method Tissue that has developed gangrene cannot be restored. Therefore, treating gangrene as early as possible will increase your chances of recovery. The steps of gangrene treatment generally include: - Surgery.
This step is used to remove dead tissue so that the spread of gangrene can be prevented, while allowing healthy tissue to recover. Skin grafts can then be performed to repair skin damaged by gangrene. However, some people with gangrene must undergo amputation, especially in severe cases. Surgery to repair blood vessels, restoring smooth blood flow and blood supply, is also possible. - Treating or preventing infection with oral or intravenous antibiotics. - Hyperbaric oxygen therapy, i.e. treatment in a chamber or tube filled with pure oxygen at increased pressure. The increased oxygen level and pressure allow the blood to carry more oxygen so that bacterial growth can be slowed. Gangrene can be avoided if the underlying condition is treated before tissue damage occurs. Some preventive steps we can take are: - Controlling the condition that causes gangrene. For example, maintaining foot health in people with diabetes or atherosclerosis. Check with your doctor if there are any injuries, infections, or discoloration of the skin on your feet. - Adopting a healthy lifestyle, for example by avoiding fatty foods to prevent fat accumulation in blood vessels, losing weight to an ideal level, and exercising routinely. - Quitting smoking, because smoking can trigger clogging of the arteries. - Preventing infection. Treat open wounds and keep them clean and dry until healed to avoid infection. - Limiting alcohol consumption. The recommended daily limit of alcohol consumption is 2-2.5 cans of 4.7 percent beer for men, and a maximum of 2 cans of 4.7 percent beer for women.
A meteorite is a natural object originating in outer space that survives an impact with the Earth's surface. While in space it is called a meteoroid. When it enters the atmosphere, impact pressure causes the body to heat up and emit light, thus forming a fireball, also known as a meteor or shooting star. The term bolide refers to either an extraterrestrial body that collides with the Earth, or to an exceptionally bright, fireball-like meteor regardless of whether it ultimately impacts the surface. Meteorites that are recovered after being observed as they transited the atmosphere or impacted the Earth are called falls. All other meteorites are known as finds. As of mid-2006, there are approximately 1,050 witnessed falls having specimens in the world's collections. In contrast, there are over 31,000 well-documented meteorite finds. Meteorites are always named for the place where they were found, usually a nearby town or geographic feature. One notable exception is Barringer Crater (commonly referred to as Meteor Crater) in Arizona, which is named after a man who posited that it was formed in an impact with an extraterrestrial object. In cases where many meteorites were found in one place, the name may be followed by a number or letter (e.g., Allan Hills 84001 or Dimmitt (b)). Some meteorites have informal nicknames: the Sylacauga meteorite is sometimes called the "Hodges meteorite" after Ann Hodges, the woman who was struck by it; the Canyon Diablo meteorite, which formed Meteor Crater, has dozens of these aliases. However, the single, official name designated by the Meteoritical Society is used by scientists, catalogers, and most collectors. Meteorites have traditionally been divided into three broad categories: stony meteorites are rocks, mainly composed of silicate minerals; iron meteorites are largely composed of metallic iron-nickel; and stony-iron meteorites contain large amounts of both metallic and rocky material. Modern classification schemes divide meteorites into groups according to their structure, chemical and isotopic composition and mineralogy (see meteorite classification). Most meteoroids disintegrate when entering the Earth's atmosphere. However, an estimated 500 meteorites ranging in size from marbles to basketballs or larger do reach the surface each year; only 5 or 6 of these are typically recovered and made known to scientists. Few meteorites are large enough to create large impact craters. Instead, they typically arrive at the surface at their terminal velocity and, at most, create a small pit. Even so, falling meteorites have reportedly caused damage to property, livestock and people. Very large meteoroids may strike the ground with a significant fraction of their cosmic velocity, leaving behind a hypervelocity impact crater. The kind of crater will depend on the size, composition, degree of fragmentation, and incoming angle of the impactor. The force of such collisions has the potential to cause widespread destruction. The most frequent hypervelocity cratering events on the Earth are caused by iron meteoroids, which are most easily able to transit the atmosphere intact. Examples of craters caused by iron meteoroids include Barringer Meteor Crater, Odessa Meteor Crater, the Wabar craters, and Wolfe Creek crater; iron meteorites are found in association with all of these craters. In contrast, even relatively large stony or icy bodies like small comets or asteroids, up to millions of tons, are disrupted in the atmosphere, and do not make impact craters.
Although such disruption events are uncommon, they can produce a considerable concussion; the famed Tunguska event probably resulted from such an incident. Very large stony objects, hundreds of meters in diameter or more, weighing tens of millions of tons or more, can reach the surface and cause large craters, but are very rare. Such events are generally so energetic that the impactor is completely destroyed, leaving no meteorites. (The very first example of a stony meteorite found in association with a large impact crater, the Morokweng crater in South Africa, was reported in May 2006.) Several phenomena are well documented during witnessed meteorite falls too small to produce hypervelocity craters. The fireball that occurs as the meteoroid passes through the atmosphere can appear to be very bright, rivaling the sun in intensity, although most are far dimmer and may not even be noticed during daytime. Various colors have been reported, including yellow, green and red. Flashes and bursts of light can occur as the object breaks up. Explosions, detonations, and rumblings are often heard during meteorite falls, which can be caused by sonic booms as well as shock waves resulting from major fragmentation events. These sounds can be heard over wide areas, up to many thousands of square km. Whistling and hissing sounds are also sometimes heard, but are poorly understood. Following passage of the fireball, it is not unusual for a dust trail to linger in the atmosphere for some time. As meteoroids are heated during passage through the atmosphere, their surfaces melt and experience ablation. They can be sculpted into various shapes during this process, sometimes resulting in deep, thumbprint-like indentations on their surfaces called regmaglypts. If the meteoroid maintains a fixed orientation for some time, without tumbling, it may develop a conical "nose cone" or "heat shield" shape. As it decelerates, eventually the molten surface layer solidifies into a thin fusion crust, which on most meteorites is black (on some achondrites, the fusion crust may be very light colored). On stony meteorites, the heat-affected zone is at most a few mm deep; in iron meteorites, which are more thermally conductive, the structure of the metal may be affected by heat up to 1 cm below the surface. Meteorites are sometimes reported to be warm to the touch when they land, but they are never hot. Reports, however, vary greatly, with some meteorites being reported as "burning hot to the touch" upon landing, and others forming a frost upon their surface. Meteoroids that experience disruption in the atmosphere may fall as meteorite showers, which can range from only a few up to thousands of separate individuals. The area over which a meteorite shower falls is known as its strewn field. Strewn fields are commonly elliptical in shape, with the major axis parallel to the direction of flight. In most cases, the largest meteorites in a shower are found farthest down-range in the strewn field. About 86% of the meteorites that fall on Earth are chondrites, which are named for the small, round particles they contain. These particles, or chondrules, are composed mostly of silicate minerals that appear to have been melted while they were free-floating objects in space. Chondrites also contain small amounts of organic matter, including amino acids, and presolar grains. Chondrites are typically about 4.55 billion years old and are thought to represent material from the asteroid belt that never formed into large bodies.
Like comets, chondritic asteroids are some of the oldest and most primitive materials in the solar system. Chondrites are often considered to be "the building blocks of the planets". About 8% of the meteorites that fall on Earth are achondrites, some of which appear to be similar to terrestrial mafic igneous rocks. Most achondrites are also ancient rocks, and are thought to represent crustal material of asteroids. One large family of achondrites (the HED meteorites) may have originated on the asteroid 4 Vesta. Others derive from different asteroids. Two small groups of achondrites are special, as they are younger and do not appear to come from the asteroid belt. One of these groups comes from the Moon, and includes rocks similar to those brought back to Earth by Apollo and Luna programs. The other group is almost certainly from Mars and are the only materials from other planets ever recovered by man. About 5% of meteorites that fall are iron meteorites with intergrowths of iron-nickel alloys, such as kamacite and taenite. Most iron meteorites are thought to come from the core of a number of asteroids that were once molten. As on Earth, the denser metal separated from silicate material and sank toward the center of the asteroid, forming a core. After the asteroid solidified, it broke up in a collision with another asteroid. Due to the low abundance of irons in collection areas such as Antarctica, where most of the meteoric material that has fallen can be recovered, it is possible that the actual percentage of iron-meteorite falls is lower than 5%. Stony-iron meteorites constitute the remaining 1%. They are a mixture of iron-nickel metal and silicate minerals. One type, called pallasites, is thought to have originated in the boundary zone above the core regions where iron meteorites originated. The other major type of stony-iron meteorites is the mesosiderites. Tektites (from Greek tektos, molten) are not themselves meteorites, but are rather natural glass objects up to a few centimeters in size which were formed--according to most scientists--by the impacts of large meteorites on Earth's surface. A few researchers have favored Tektites originating from the Moon as volcanic ejecta, but this theory has lost much of its support over the last few decades. Most meteorite falls are recovered on the basis of eye-witness accounts of the fireball or the actual impact of the object on the ground, or both. Therefore, despite the fact that meteorites actually fall with virtually equal probability everywhere on Earth, verified meteorite falls tend to be concentrated in areas with high human population densities such as Europe, Japan, and northern India. A small number of meteorite falls have been observed with automated cameras and recovered following calculation of the impact point. The first of these was the Pribram meteorite, which fell in Czechoslovakia (now the Czech Republic) in 1959. In this case, two cameras used to photograph meteors captured images of the fireball. The images were used both to determine the location of the stones on the ground and, more significantly, to calculate for the first time an accurate orbit for a recovered meteorite. Following the Pribram fall, other nations established automated observing programs aimed at studying infalling meteorites. One of these was the Prairie Network, operated by the Smithsonian Astrophysical Observatory from 1963 to 1975 in the midwestern US. 
This program also observed a meteorite fall, the Lost City chondrite, allowing its recovery and a calculation of its orbit. Another program in Canada, the Meteorite Observation and Recovery Project, ran from 1971 to 1985. It too recovered a single meteorite, Innisfree, in 1977. Finally, observations by the European Fireball Network, a descendant of the original Czech program that recovered Pribram, led to the discovery and orbit calculations for the Neuschwanstein meteorite in 2002. Until the 20th century, only a few hundred meteorite finds had ever been discovered. Over 80% of these were iron and stony-iron meteorites, which are easily distinguished from local rocks. To this day, few stony meteorites are reported each year that can be considered to be "accidental" finds. The reason there are now over 30,000 meteorite finds in the world's collections started with the discovery by Harvey H. Nininger that meteorites are much more common on the surface of the Earth than was previously thought. In the late 1960s, Roosevelt County, New Mexico, in the Great Plains was found to be a particularly good place to find meteorites. After the discovery of a few meteorites in 1967, a public awareness campaign resulted in the finding of nearly 100 new specimens in the next few years, with many being found by a single person, Mr. Ivan Wilson. In total, nearly 140 meteorites have been found in the region since 1967. In the area of the finds, the ground was originally covered by a shallow, loose soil sitting atop a hardpan layer. During the dustbowl era, the loose soil was blown off, leaving any rocks and meteorites that were present stranded on the exposed surface. Although meteorites had been sold commercially and collected by hobbyists for many decades, up to the time of the Saharan finds of the late 1980s and early 1990s, most meteorites were deposited in or purchased by museums and similar institutions where they were exhibited and made available for scientific research. The sudden availability of large numbers of meteorites that could be found with relative ease in places that were readily accessible (especially compared to Antarctica) led to a rapid rise in commercial collection of meteorites. This process was accelerated when, in 1997, meteorites coming from both the Moon and Mars were found in Libya. By the late 1990s, private meteorite-collecting expeditions had been launched throughout the Sahara. Specimens of the meteorites recovered in this way are still deposited in research collections, but most of the material is sold to private collectors. These expeditions have now brought the total number of well-described meteorites found in Algeria and Libya to over 2000. As word spread in Saharan countries about the growing profitability of the meteorite trade, meteorite markets came into existence, especially in Morocco, fed by nomads and local people who combed the deserts looking for specimens to sell. Many thousands of meteorites have been distributed in this way, most of which lack any information about how, when, or where they were discovered. These are the so-called "Northwest Africa" meteorites. The recovery of meteorites from Oman is currently prohibited by national law, but a number of international hunters continue to remove specimens now deemed "national treasures."
This new law provoked a small international incident, as its implementation actually preceded any public notification of such a law, resulting in the prolonged imprisonment of a large group of meteorite hunters, primarily from Russia, but whose party also included members from the U.S. as well as several other European countries. Beginning in the mid-1990s, amateur meteorite hunters began scouring the arid areas of the southwestern United States. To date, meteorites numbering possibly into the thousands have been recovered from the Mojave, Sonora, Tule, and Lechuguilla Deserts, with many being recovered on dry lake beds (playas). Significant finds include the Superior Valley 014 Acapulcoite, one of two of its type found within the United States, as well as the Blue Eagle meteorite, the first Rumuruti-type chondrite found in the Americas. Perhaps the most notable find in recent years has been the Los Angeles meteorite, a Martian meteorite of unknown origin that was purportedly discovered by Robert Verish somewhere in the Mojave desert, only to be recognized years later in a pile of rocks in his back yard. There is some question about this claim, as such circumstances appear highly unlikely and would conveniently circumvent the provisions of the Antiquities Act, under which the stone could otherwise have been requisitioned without compensation by the Smithsonian. A number of finds from the American Southwest have yet to be formally submitted to the Meteorite Nomenclature Committee, as many finders think it is unwise to publicly state the coordinates of their discoveries for fear of 'poaching' by other hunters. Several of the meteorites found recently are currently on display in the Griffith Observatory in Los Angeles. A famous case is the alleged Chinguetti meteorite, a find reputed to come from a large unconfirmed 'iron mountain' in Africa. There are several reported instances of falling meteorites having killed both people and livestock, but a few of these appear more credible than others. The most infamous reported fatality from a meteorite impact is that of an Egyptian dog that was killed in 1911, although this report is highly disputed. This particular meteorite fall was identified in the 1980s as Martian in origin. However, there is substantial evidence that the meteorite known as Valera hit and killed a cow upon impact, nearly dividing the animal in two, and similar unsubstantiated reports of a horse being struck and killed by a stone of the New Concord fall also abound. Throughout history, many first- and second-hand reports of meteorites falling on and killing both humans and other animals abound, but none have been well documented. The first known modern case of a human hit by a space rock occurred on 30 November 1954 in Sylacauga, Alabama. There a 4 kg stone chondrite crashed through a roof and hit Ann Hodges in her living room after it bounced off her radio. She was badly bruised. Other than the Sylacauga event, the most plausible of these claims was put forth by a young boy who stated that he had been hit by a small (~3 gram) stone of the Mbale meteorite fall from Uganda, and who stood to gain nothing from this assertion. The stone reportedly fell through a number of banana leaves before striking the boy on the head, causing little to no pain, as it was small enough to have been slowed by friction with the atmosphere as well as with the banana leaves before it struck him.
Although it is impossible to prove this claim either way, it seems as though he had little reason to lie about such an event occurring. Several persons have since claimed to have been struck by "meteorites" but no verifiable meteorites have resulted. Indigenous peoples often prized iron-nickel meteorites as an easy, if limited, source of iron metal. For example, the Inuit used chips of the Cape York meteorite to form cutting edges for tools and spear tips. Other Native Americans treated meteorites as ceremonial objects. In 1915, a 135-pound iron meteorite was found in a Sinagua (c. 1100-1200 AD) burial cist near Camp Verde, Arizona, respectfully wrapped in a feather cloth. A small pallasite was found in a pottery jar in an old burial found at Pojoaque Pueblo, New Mexico. Nininger reports several other such instances, in the Southwest US and elsewhere, such as the discovery of Native American beads of meteoric iron found in Hopewell burial mounds, and the discovery of the Winona meteorite in a Native American stone-walled crypt. In the 1970s a stone meteorite was uncovered during an archaeological dig at the Danebury Iron Age hillfort in Danebury, England. It was found deposited part way down in an Iron Age pit. Since it must have been deliberately placed there, this could indicate one of the first (known) human finds of a meteorite in Europe. Apart from meteorites that have fallen onto the Earth, "Heat Shield Rock" is a meteorite which was found on Mars, and two tiny fragments of asteroids were found among the samples collected on the Moon by Apollo 12 (1969) and Apollo 15 (1971) astronauts.
A newly released image of the planet Neptune shows just how far telescope technology has come in recent years. This view of Neptune is almost impossibly clear compared with past attempts, thanks to a recent upgrade to the Very Large Telescope (VLT) at the European Southern Observatory (ESO) in Chile. You can actually make out cloud patterns on Neptune with the upgraded VLT, which is something even Hubble can't do. You're probably thinking this image doesn't look all that clear. Indeed, there are crisper snapshots of the outermost planet, but those came from NASA's Voyager 2 during its 1989 flyby. There are no spacecraft in orbit around Neptune, so the only way to get new images of the gas giant is to capture them from 2.9 billion miles away on Earth. Until now, Hubble was the best way to look at Neptune, but the planet is rather small and dim compared with most of the objects Hubble surveys. The comparison image below shows how much better the new VLT is for observing objects like Neptune compared with Hubble. The Very Large Telescope consists of four separate 8.2 meter (27 foot) mirrors. That's a lot of surface area to scan the sky, but Earth's atmosphere distorts celestial objects. That's why space telescopes like Hubble and the upcoming James Webb are so important. The ESO developed a new adaptive optics mode based on laser tomography to counteract that. The system consists of MUSE (Multi Unit Spectroscopic Explorer) and an optical unit called GALACSI. Using adaptive optics with the VLT is like giving it eyeglasses that correct for atmospheric distortion. In order to correct the blur, you need to know how much the atmosphere is distorting light. The VLT projects four high-intensity lasers into space, which act like an artificial star. The blur detected from the laser tells the system how to change the mirror's shape to take sharper images. As you can see below, Neptune is just a blurry disk without adaptive optics. The newly released images were taken in "narrow-field mode." That means the telescope can only observe a small part of the sky (like imaging Neptune). In wide-field mode, the VLT can capture more of the sky, but the system can only correct for a kilometer of atmospheric distortion. The upgraded VLT won't be able to match the Webb Space Telescope, but it's already operational, and NASA's launch schedule for Webb keeps slipping.
Solar Heliosphere: History Apart from the many discussions of solar 'corpuscular' radiation in the 1800s to account for aurora, in 1903 Kristian Birkeland in Norway explained aurora as some kind of medium consisting of a stream of electrons that travels from sun to earth. Sidney Chapman again raised the idea of solar electron streams in a 1918 paper on magnetic storms. Frederick Lindemann, Oxford professor of physics, pointed out that the negative charge accumulated on the Earth would disrupt the process. Lindemann then suggested that any cloud or stream expelled from the Sun would have to be electrically neutral, containing equal charge from ions and electrons. The real turning point for the solar wind concept came 25 years later, in 1943, when astronomer Cuno Hoffmeister in Germany provided the crucial observations of a gas tail aberration of about 6 degrees, i.e. the angle between the observed tail and the anti-solar direction. Ludwig Biermann (1951) at the University of Gottingen correctly interpreted this deflection in terms of the interaction between the cometary ions in the tail and the solar wind. The tails should always point directly away from the sun if the only thing acting on them was the pressure from sunlight. Comet tails, like million-mile-long windsocks, pointed in the direction that the solar wind was blowing near them. Biermann showed that the pressure from sunlight was not enough, and that the force must be provided by a stream of particles travelling away from the Sun at speeds of hundreds of kilometers per second. In 1955, Sidney Chapman (Britain) concluded that because the corona was so hot (a million degrees) it must extend beyond the orbit of the Earth. A few years later, Eugene Parker showed mathematically that an expanding, supersonic, hot corona has to produce a solar wind that accounts for comet tail deflections. Despite the work by Hoffmeister, Biermann and Chapman, the concept of a solar wind was still considered controversial by many researchers in the 1950s. This wasn't settled until space probes were flown that were able to record this stream of material high above the Earth's atmosphere, proving its existence. Much of the early discussion about the heliosphere was the result of cosmic ray studies. In 1956, studies of cosmic ray energies by Philip Morrison (1915-2005) at Cornell (Phys. Rev. 101, p. 139) led to the realization that Earth had to be immersed in a region of tangled interplanetary magnetic field of solar origin. Leverett Davis (1955, Phys. Rev. 100, p. 144; see abstract below) at CalTech, and Meyer (1956, Phys. Rev. 104, p. 768; see abstract below) at the University of Chicago concluded from their studies that a good fit to the data would be obtained if the cavity were about 200 AU in diameter. At the outer boundary, cosmic rays from solar flares would be scattered back into the inner solar system and detected at earth. Hannes Alfven (1957) later introduced the notion of an interplanetary magnetic field which is carried along with the solar wind. Meanwhile, Konstantin Gringauz in 1959 flew "ion traps" on the Soviet Lunik 2 and 3 missions, instruments measuring the total electric charge of arriving ions. He reported that the signal fluctuated as the spacecraft spun around its axis, suggesting an ion flow was entering the instrument whenever it faced the Sun. But a more careful analysis of the data failed to find the necessary signal.
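The cavity size quoted in these early papers comes from a simple pressure balance: the outward momentum flux (ram pressure) of the solar stream, which falls off as 1/r², is set against the magnetic pressure of the ambient ~10⁻⁵ gauss field. The sketch below is a minimal illustration of that balance; the density and speed used are assumed modern round numbers for the solar wind, not the values adopted in the 1955-56 papers, so the radius it prints (a few tens of AU) is smaller than Davis's ~200 AU figure, which follows from the same balance with a larger assumed momentum flux.

```python
import math

# Assumed round numbers for illustration (not the values used by Davis or Meyer):
n = 5e6          # solar wind proton density at 1 AU [m^-3]  (~5 per cm^3)
m_p = 1.67e-27   # proton mass [kg]
v = 4.0e5        # solar wind speed [m/s]  (~400 km/s)
B = 1e-9         # ambient "galactic" magnetic field [T]  (10^-5 gauss, as quoted above)
mu0 = 4.0 * math.pi * 1e-7

ram_at_1au = n * m_p * v**2          # ram pressure of the wind at 1 AU [Pa]
p_magnetic = B**2 / (2.0 * mu0)      # magnetic pressure of the ambient field [Pa]

# Ram pressure scales as 1/r^2 (r in AU), so the two pressures balance at:
r_balance_au = math.sqrt(ram_at_1au / p_magnetic)

print(f"ram pressure at 1 AU : {ram_at_1au:.2e} Pa")
print(f"magnetic pressure    : {p_magnetic:.2e} Pa")
print(f"balance radius       : ~{r_balance_au:.0f} AU")   # roughly 60 AU with these inputs
```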
In 1961 Herbert Bridge with Bruno Rossi and the MIT team obtained more detailed observations with an elaborate ion trap on NASA's Explorer 10, but the data were still not convincing to many because the probe was designed to study the magnetotail, which confused the analysis. Then in 1962, Mariner II (built on a rush 11-month schedule at JPL) flew towards Venus. It not only detected a continuously flowing solar wind, but also observed in it fast and slow streams, approximately repeating at 27-day intervals, suggesting that their sources rotated with the Sun. The discovery of the solar wind is almost universally credited to the Mariner 2 probe which flew past Venus on December 14, 1962. From "Interplanetary Magnetic Fields and Cosmic Rays" by Leverett Davis, Jr., 1955, Phys. Rev. 100, p. 144: The existence in the region around the sun of a field-free cavity in the galactic magnetic field seems indicated by the low-energy cosmic rays that reach the earth from the sun. Such a cavity would be produced by the solar corpuscular emission. A mean radius of the order of 200 times the distance from the sun to the earth may be estimated for this cavity by balancing the flux of momentum against the lateral pressure exerted by a field of 10^-5 gauss. Such a cavity would trap cosmic rays of energy less than 100 Bev for periods long compared to a sunspot cycle, but does not seem to make possible a solar origin of cosmic rays. Expected fluctuations in cavity size would explain the 4% fluctuation in cosmic-ray intensity observed by Forbush. A simple model of the cavity is considered in some detail, rates of escape from and entry to the cavity, acceleration by the Fermi mechanism, and change in energy density being estimated. More complicated models involving a solar magnetic field are considered briefly. From "Solar Cosmic Rays of February, 1956 and Their Propagation through Interplanetary Space" by P. Meyer, E. N. Parker, and J. A. Simpson, Phys. Rev. 104, p. 768: The data from six neutron-intensity monitors distributed over a wide range of geomagnetic latitudes have been used to study the large and temporary increase of cosmic-ray intensity which occurred on February 23, 1956, in association with a solar flare. During the period of enhanced intensity a balloon-borne neutron detector measured the absorption mean free path and intensity of the flare particles at high altitudes. From these experiments the primary particle intensity spectrum as a function of particle rigidity, over the range <2 to >15-30 Bv rigidity, has been deduced for different times during the period of enhanced intensity. It is shown that the region between the sun and the earth should be free of magnetic fields greater than ~10^-6 gauss and that the incoming radiation was practically isotropic for more than 16 hours following maximum flare particle intensity. The decline of particle intensity as a function of time t depends upon the power law t^-3/2, except for high-energy particles and late times, where the time dependence approaches an exponential. The experiments lead to a model for the inner solar system which requires a field-free cavity of radius greater than the sun-earth distance enclosed by a continuous barrier region of irregular magnetic fields [B(rms) ≈ 10^-5 gauss] through which the cosmic-ray particles must diffuse to reach interstellar space. This barrier is also invoked to scatter flare particles back into the field-free cavity and to determine the rate of declining intensity observed at the earth.
The diffusion mechanism is strongly supported by the fact that the time dependence t^-3/2 represents a special solution of the diffusion equation under initial and boundary conditions required by experimental evidence. The coefficient of diffusion, the magnitude of the magnetic field regions, the dimensions of the barrier and cavity, and the total kinetic energy of the high-energy solar injected particles have been estimated for this model. Recent studies of interplanetary space indicate that the conditions suggested by the experiments may be established from time to time in the solar system. The extension of the model to the explanation of earlier cosmic-ray flare observations appears to be satisfactory. - Hoffmeister, C., Physikalisch Untersuchungen auf Kometen, I, Die Beziehungen des primaren Schweifstrahl zum Radiusvektor, Z. Astrophys., 22, 265-285, 1943. - Hoffmeister, C., Physikalisch Untersuchungen auf Kometen, II, Die Bewegung der Schweifmaterie und die Repulsivkraft der Sonne beim Kometen, Z. Astrophys., 23, 1-18, 1944. - Biermann, L. and Luest, R., "The Tails of Comets," Scientific American, October 1958. - Neugebauer, M. and Snyder, C., 1962, "Solar Plasma Experiment", Science, 138, 1095-1097. - Sonett, C. P., "A summary review of the scientific findings of the Mariner Venus Mission," Space Science Review, 2, No. 6, 751-777, December 1963. - Sound recording of an interview with Biermann discussing his solar wind work. American Institute of Physics, Center for History of Physics, Niels Bohr Library, One Physics Ellipse, College Park, MD 20740, USA. "The heliosphere, in which the Sun and planets reside, is a large bubble inflated from the inside by the high-speed solar wind blowing out from the Sun." - NASA, Solar System Exploration.
Clipping a Polygon

The basic routine of the package is a function that clips a polygon along a curve given by an implicit equation f(x, y) = 0. The coordinates of a vertex are substituted into f. Those vertices for which the result is positive are discarded and those for which the result of the substitution is negative or zero are kept. Figures 5 and 6 illustrate the process for a square clipped by a circle.

Figure 5. Circle and square. Figure 6. Clipped square.

In Figure 5 a square polygon, shown in outline, is crossed by the circle, only a portion of which is shown. In Figure 6 the polygon has been clipped by discarding the vertex that lies outside the circle and adding vertices where the circle crosses the sides of the polygon. The clipped polygon lies entirely within or on the clipping circle.

The new vertices are computed as follows. Suppose that v1 and v2 are adjacent vertices on which the clipping function has different signs. The edge joining v1 and v2 may be parametrized as v(t) = v1 + t (v2 - v1), and then f(v(t)) is a real-valued function of the real variable t, which has at least one root between 0 and 1. A root is located with FindRoot and the value substituted into v(t) to define the new vertex. Since vertex coordinates are converted to approximate real numbers for this computation, problems with roundoff may occasionally arise. For this reason the test for retaining a vertex is not f ≤ 0 but f ≤ fuzz, where fuzz is a small number internally referred to as Fuzz. The default value of Fuzz may be changed by the user.
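The procedure above translates almost directly into code. The following is a minimal Python sketch, not the package's actual Mathematica implementation: it drops vertices whose clipping-function value exceeds a small Fuzz tolerance and uses scipy's brentq root finder in place of FindRoot to insert new vertices where edges cross the curve. The particular circle, square, and fuzz value are hypothetical choices for the demonstration.

```python
import numpy as np
from scipy.optimize import brentq

def clip_polygon(vertices, f, fuzz=1e-9):
    """Clip a closed polygon against the region f(x, y) <= fuzz.

    `vertices` is an ordered list of (x, y) pairs; `fuzz` plays the role of the
    small tolerance the article calls Fuzz. Vertices with f > fuzz are dropped,
    and a new vertex is added wherever an edge crosses the clipping curve.
    """
    out = []
    n = len(vertices)
    for i in range(n):
        p = np.asarray(vertices[i], dtype=float)
        q = np.asarray(vertices[(i + 1) % n], dtype=float)
        fp, fq = f(*p) - fuzz, f(*q) - fuzz
        if fp <= 0.0:
            out.append(tuple(p))                    # vertex inside or on the curve: keep it
        if fp * fq < 0.0:                           # edge crosses the curve: locate the root
            g = lambda t: f(*(p + t * (q - p))) - fuzz
            t = brentq(g, 0.0, 1.0)                 # stands in for the article's FindRoot call
            out.append(tuple(p + t * (q - p)))
    return out

# Hypothetical example: a square with one corner outside the circle x^2 + y^2 = 2
circle = lambda x, y: x * x + y * y - 2.0
square = [(0.0, 0.0), (1.3, 0.0), (1.3, 1.3), (0.0, 1.3)]
print(clip_polygon(square, circle))
```

Run on this example, the corner at (1.3, 1.3) is discarded and two new vertices appear where the square's sides cross the circle, matching the behavior described for Figures 5 and 6.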
The Protestant Reformation is a term used to describe a series of events that happened in the 16th century in the Christian Church. Because of corruption in the Catholic Church, some people saw a need to change the way it worked. People like Erasmus, Thomas More, Huldrych Zwingli, Martin Luther and John Calvin saw this corruption, and acted to stop it. This led to a schism in the church, splitting it into the Catholic Church and a number of Protestant churches. Martin Luther translated the Bible into German. He was able to print copies because Johannes Gutenberg had invented a way to print a number of copies (approximately 50-100) at a relatively low price. The Protestant Reformation triggered the Catholic Counter-Reformation. In general, Martin Luther's posting of the 95 theses on the door of the church at Wittenberg in 1517 is seen as the start of the Protestant Reformation. The Peace of Westphalia of 1648 recognised Protestants, and is generally seen as the end of this process.

Causes

At the beginning of the 16th century, many events occurred that led to the Protestant Reformation. Clergy abuse caused people to begin criticizing the Catholic Church, although some argue that the split was ultimately over doctrine, chiefly indulgences and justification, rather than corruption, with clergy corruption a tangential rather than major reason. The central points of criticism were the following:

- The church sold indulgences (forgiveness of sins) for money. This suggested that the rich could buy their way into Heaven while the poor could not - quite the opposite of what the Bible says. (See Gospel of Matthew 19:24)
- Many people did not understand the sermon, because it was in Latin. The sermon is that part of the service where the priest teaches people things from the Bible. Because of this, ordinary people did not know very much about Christianity.
- Religious posts were often sold to whoever was willing to pay the most money for them (see simony). This meant many priests did not know much about Christianity, so they told the people many different things, some of which had little to do with what was written in the Bible.

The greed and scandalous lives of the clergy had created a split between them and the peasants. Furthermore, the clergy often did not respond to the population's needs, because they did not speak the local language or live in their own diocese. The papacy lost prestige. The recent invention of the printing press helped spread awareness of the Church's abuses and coordinate a response.

In 1515, the pope started a new indulgence campaign to raise money for the rebuilding of St. Peter's Basilica, a church in Rome. This was the last straw for Martin Luther, a Catholic monk from Germany. On October 31, 1517, he sent his 95 theses to the local archbishop in protest. He may also have nailed a copy to the door of the Wittenberg chapel. Luther, who appeared as an enemy of the pope, was excommunicated. In the beginning, Luther had not planned to separate from the Catholic Church or to create a new religion; he wanted to reform the Catholic Church.

Consequences

In 1524-1525, peasants rebelled against the nobles in the name of the equality of all humanity before God. Many countries in Europe followed the trend of Protestant reformation, and Europe was divided by denomination.
This brought religious wars such as the French Wars of Religion. For a short time, Protestants and Catholics managed to live with one another under the Peace of Augsburg of 1555, which recognized the confessional division of the German states and gave Protestants the right to practice their religion. The Pope re-established the Inquisition to combat heresy. The Catholic Church responded to the Protestant Reformation with the Counter-Reformation. Force was not entirely successful, so the Pope created new religious orders like the Jesuits. These new religious orders were charged with combating Protestantism while educating the population in Catholicism. The Pope created the Index Librorum Prohibitorum, a list of banned books. It had a big influence in its first centuries and was abolished in the 1960s. The Catholic Church also used baroque art to appeal to the religious feelings of the faithful and bring them to the Catholic religion.

Impact

Protestant denominations have multiplied in different forms, especially in Protestant countries. Catholic countries such as Spain and Mexico for a long time forbade Protestants to immigrate, and Protestant countries sometimes forbade Catholics. Protestants are influential in the United States and English Canada. After the Seven Years' War the British passed the Quebec Act, granting freedom of religion in Quebec. In later centuries, many Protestant churches were also established in the province of Quebec.
What we call a coffee bean is actually the seed of a cherry-like fruit. Coffee trees produce berries, called coffee cherries, that turn bright red when they are ripe and ready to pick. The fruit is found in clusters along the branches of the tree. The skin of a coffee cherry (the exocarp) is thick and bitter. However, the fruit beneath it (the mesocarp) is intensely sweet and has the texture of a grape. Next comes the parenchyma, a slimy, honey-like layer, which helps protect the beans. The beans themselves are covered by a parchment-like envelope called the endocarp. This protects the two bluish-green coffee beans, which are covered by yet another membrane, called the spermoderm or silver skin. There is usually one coffee harvest per year. The time varies according to geographic zone, but generally, north of the equator, harvest takes place between September and March, and south of the equator between April and May. Coffee is generally harvested by hand, either by stripping all of the cherries off the branch at one time or by selective picking. The latter is more expensive and is only used for arabica beans. Once picked, the coffee cherries must be processed immediately.
ITHACA, N.Y. – Researchers at Cornell University published a study last week in which they claim that clay helped life spontaneously arise from non-life billions of years ago. On Thursday, scientists affiliated with Cornell University released a statement detailing new research findings regarding the initial development of life—also known as abiogenesis. In the statement, the researchers suggest clay was a key ingredient when—according to the university—life spontaneously emerged from non-life in Earth's early years.

"We propose that in early geological history clay hydrogel provided a confinement function for biomolecules and biochemical reactions," said Dan Luo, a professor at Cornell. The statement from Cornell further suggests that, "over billions of years," clay could have "confined and protected" certain chemical processes, much like cell membranes do today. Then, the protected chemicals "could have carried out the complex reactions that formed proteins, DNA and eventually all the machinery that makes a living cell work."

A 14-page scientific report explains the Cornell researchers' findings in more technical terms: "Here we mimic the confinement function of cells by creating a hydrogel made from geological clay minerals, which provides an efficient confinement environment for biomolecules," the report explains. "[O]ur results support the importance of localized concentration and protection of biomolecules in early life evolution, and also implicate a clay hydrogel environment for biochemical reactions during early life evolution." According to the report, clay may have protected the very first life forms as they formed and developed.

For evolutionists, life's origin is a difficult topic, since—despite countless attempts—abiogenesis has never been replicated; nor has it been observed in nature. Thus, many scientists speculate that life somehow arose in a primordial ocean, perhaps due to input from lightning or a volcanic vent. However, other scientists reject the theory of naturalistic abiogenesis. Dr. Kevin Anderson is a microbiologist with a Ph.D. in microbiology and many years of research experience. He told Christian News Network that last week's Cornell report does not realistically portray any type of leap from non-life to life. Rather, it vaguely suggests that abiogenesis is possible and proven, without citing tangible evidence.

"This type of 'hand wave' is common—act like everything is all figured out—and is frequently done so as to avoid having to actually acknowledge that abiogenesis has no evidence," Anderson stated. "Thus, with a 'hand wave' [evolutionists] can pretend there are just a few minor points to address, and rationalize that there is no need to answer creationists' challenges."

"However," Anderson continued, "creationists have long scoffed at such a hand wave, pointing out that even under almost pristine conditions, evolutionists' experiments rarely achieve anything but a D & L mixture of a few amino acids or a few bits of other organic molecules (and often ignore many other very toxic molecules, such as formate, that are also formed during the process)."

In terms of the Cornell scientists' findings, Anderson says they concluded that clay (or some type of gel) would be necessary to protect early biomolecules, but never explain how those living molecules formed in the first place. Ultimately, Anderson told Christian News Network, the idea that life spontaneously appeared without a Creator takes an enormous leap of faith.
“The immense speculation, and lack of any significant evidence or mechanisms for abiogenesis strongly support the creationists’ claims that abiogenesis is really nothing more than ‘wishful thinking’ on the part of the materialists,” he concluded. “In fact, the more the problem is studied, the more difficulties arise. Adding to the problem for materialists, the more we understand about cells and living systems the greater the gulf becomes between life and non-life. Thus, the final conclusion is that there is not a shred of evidence that life can form spontaneously under any conditions.”
Quantum Computers May Be Going Blue

There are many materials that have multiple uses you would not expect, such as nitroglycerine, which is both an explosive and a heart medication. Now, thanks to researchers at University College London, we have discovered that a common blue pigment can also serve as qubits for a quantum computer. Copper phthalocyanine (CuPc) is used in the British £5 note and is also a low-cost organic semiconductor that can be made into thin films. The researchers have discovered that the electrons in CuPc are able to exist in a superposition for an extraordinarily long time. Superposition is a quantum mechanical phenomenon that allows a particle to exist in multiple, contradictory states at the same time, and is used by qubits, or quantum bits, to store and transmit information in a quantum computer. The longer the qubit can exist in a superposition, the more useful it is for quantum computing purposes, so the longevity of CuPc's superpositions makes it very interesting. Another interesting property of copper phthalocyanine is that it is easily modified by chemical and physical processes, which means its properties can be altered to fit whatever it is being used for. It could even have some uses for spintronics too, as the spins of its electrons can also be affected. Source: University College London
This book is designed to provide essential information in a convenient format for anyone beginning the historical study of the Christian Gospels. With clarity and verve, Mark Allan Powell describes the contents and structure of the Gospels, their distinctive characteristics, and their major themes. An introductory chapter surveys the political, religious, and social world of the Gospels, methods of approaching early Christian texts, the genre of the Gospels, and the religious character of these writings. Included also are comments on the Gospels that are not found in the New Testament. Special features: map, illustrations, and more than two dozen special topics provide information that is important for the understanding of the Gospels.
Can you imagine a computational fluid dynamics program that simulates the behavior of different materials separated by well-defined interfaces that are subject to arbitrarily large deformations? Can you also imagine this program capturing shock waves and tracking rarefactions, slip surfaces, and other non-linear hydrodynamic phenomena? Developing such a program would be a daunting task. You may be surprised to learn that such a program was operating in 1955, long before computer graphics or mechanical pen plotters were available, and even before high-level programming languages like Fortran were popular. Fortran, or Formula Translation System, was proposed by IBM in 1954.

The program having these amazing capabilities was a Particle-In-Cell (PIC) method originated by Francis H. Harlow of the Los Alamos National Laboratory (Harlow, F.H., "A Machine Calculation Method for Hydrodynamic Problems," Los Alamos Scientific Laboratory report LAMS-1956, Nov. 1955). Central to the PIC method is the concept of a Lagrangian particle defined by a location (x,y,z). A particle is said to be Lagrangian when it moves as though it is an element of fluid. The particle may be thought of as the location of the center of mass of the fluid element. In addition to a location, Lagrangian particles are sometimes assigned one or more property values. In the PIC method, for instance, particles have specified masses and a label indicating what material they belong to. While the underlying computational scheme used in the PIC method employs a fixed Eulerian grid, Lagrangian particles are used to move mass, momentum, and energy through this grid in a way that preserves the identities of the different materials. There are no connections between particles, so they are free to move and follow the dynamics of a flow regardless of its complexity (Figure 1). Lagrangian particles are, in fact, the key feature in the PIC method that allows it to track large fluid deformations.

Why, then, isn't the PIC method more widely used for continuum fluid mechanics? For example, there are no commercial CFD programs based on this method. It could be argued that the PIC method is best for compressible flows, while most commercial applications deal with incompressible-fluid situations. Two additional reasons why the PIC method is not more widely used are associated with the discreteness of Lagrangian particles. It is these discrete properties and their consequences that are the subject of this note. One obvious property is that finite changes in numerical values may occur because of changes in the number of particles. The other property is less obvious and is associated with a fundamental characteristic of fluids that generally makes it difficult to track a fluid element simply by tracking its (discrete) center-of-mass location.

The Discrete Problem

In the PIC method particles have finite masses. This means that when a particle moves from one control volume of the fixed Eulerian grid into another it causes discrete changes to be recorded in the mass, momentum, and energy of the cells losing and gaining the particle. Such changes introduce fluctuations in the computed values of all fluid dynamic quantities. The magnitude of the fluctuations is inversely proportional to the square root of the average number of particles in a grid cell. Experience has shown that the PIC method works best with at least 16 particles per cell (i.e., a 4 by 4 array in two dimensions or 64 particles per cell in three dimensions).
A smaller number of particles could be used when larger fluctuations could be tolerated (or when computing resources did not allow for a larger number, a frequent situation in the early days of CFD). Experience also showed that better results were obtained when the initial placement of particles was not regular, but staggered. It is easy to see why this is so. Suppose the particles are arranged in a regular 4 x 4 array in x-y space. If the flow is only in the x direction then a column of four particles will pass from one cell to another at the same time, which would result in a very large change in the cell values. If the particles are staggered in space, however, it is more likely that only one particle at a time will cross a cell boundary, causing the minimum discrete change in cell values.

In more recent times another approach has been used to reduce the effect of discrete changes as particles move from cell to cell. This is the "smoothed particle hydrodynamics" method, in which particles have finite volumes that can overlap more than one grid cell at a time. As a particle approaches a cell boundary its volume continuously sweeps from one cell to the next.

The Element Distortion Problem

A more difficult problem associated with Lagrangian particles is that fluid elements rarely retain simple, convex shapes. Most often a fluid element will find itself subjected to shearing, expanding, or contracting flow processes that quickly draw it out into a long ribbon-like shape. To visualize this, you might try introducing small volumes of smoke into a strong light (e.g., from a slide projector) and see how rapidly they deform into thin curtains of smoke. This type of deformation means that material in a fluid element will not remain localized, and a Lagrangian particle following its center of mass will no longer be a good representation of the element.

In a computational method element distortion can lead to a variety of problems. One of the most common is that particles will not retain a uniform distribution, but will tend to bunch up in some places and move apart in others. A simple example of these processes occurs at a stagnation point. Figure 2 shows what happens to a regular array of particles in a liquid jet when it strikes a wall and flows to either side of a stagnation point that is at the center of impact. The particles bunch together in a direction normal to the wall while at the same time moving farther apart along the wall. If the particles in the initial distribution are staggered these deformation processes are greatly reduced (see Figure 3). Unfortunately, staggering cannot completely eliminate this problem. In other circumstances, at a separation point or in regions of strong shear, particle staggering is not sufficient to keep particles evenly distributed. Numerical techniques can be used to add particles in expanding regions or eliminate them in regions of convergence, or continuous repartitioning methods can be used to relocate particles for more even coverage. However, these operations introduce local smoothing that is effectively equivalent to an Eulerian computational method and throws away one of the best features of particles, namely their identity.

Flow separation regions cause difficulties not only because of the challenge of maintaining a uniform particle distribution but also because of the curvature of the flow near a separation point. To understand why flow curvature can be a problem, consider the rigid-body rotation of a fluid.
Lagrangian particles placed in such a flow should move in circles about the axis of rotation. In practice this rarely happens, because most particle implementations advance the location of a particle using a linear expression of the velocity. For instance, the x-location of a particle at time-step n+1 would be computed as x^(n+1) = x^n + dt*U, where dt is the time-step size and U is the x-component of the flow velocity at the location of the particle. This expression, which is linear in the velocity, moves the particle in a direction tangent to the circle. Consequently, when the particle is moved along the tangent it moves to a slightly larger radius. After a sufficient number of time steps, particles will appear as though they are being thrown outward, a kind of numerical centrifugal effect. The only way to correct for this type of behavior is to sense when the flow has curvature and to use a second-order, quadratic expression to compute new particle positions.

Diffusion processes are easy to include in particle methods using a type of random walk, or Monte Carlo, model. One technique is to imagine a particle to be a point source for material that is diffusing outward. For a short time, dt, the diffusion can be represented as having a Gaussian distribution (i.e., the solution to the diffusion equation for a point source). Since the particle cannot be subdivided, the distribution is instead treated as a probability distribution, and the particle is moved in the time interval dt to a probable new location: a random number generator is used to select a location from this probability distribution. The idea is that if enough trials are made, the number of times the particle reaches a given position is proportional to the Gaussian distribution.

When particles are used as flow markers they make particularly nice graphic displays. A good example can be found in the Marker-and-Cell (MAC) method for free surface hydrodynamics (Harlow, F.H., Shannon, J.P., and Welch, J.E., "Liquid Waves by Computer," Science 149, 1092 (1965)). In this method Lagrangian particles do not carry mass but are simply used as markers to define grid regions occupied by fluid. Results produced by the MAC method have appeared in many publications to illustrate the impressive things that can be done with computational fluid dynamics. Figure 4 shows a MAC-like computation of the flow of liquid originating from the collapse of a circular column (shown in outline to the left) and splashing over a cylindrical dyke. The small finger of marker particles at the top of the splash appears especially realistic. As it happens, this computation was performed using a Volume-of-Fluid (VOF) method in which Lagrangian particles had no computational role; the particles in the picture were included only to make the graphical display. This example shows that what seems to be a strong argument for the accuracy of discrete particles, that is, their ability to capture local details, is mostly a visual effect in this case, since the dynamics was computed from purely cell-averaged quantities.

Lagrangian particles are an extremely useful computational tool, especially when they are used to track small amounts of material whose dispersion is to be minimized. When particles are used as a discrete model for a continuous medium, however, it must be remembered that they have some limitations. In this sense, particles are no different than any other discrete computational method.
Some of the issues that should be considered when using Lagrangian particles have been, we hope, discreetly presented in this note.
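As a closing illustration (not part of the original note), here is a minimal Python sketch of the numerical centrifugal effect described above: a marker particle advected through a rigid-body rotation field with the simple linear (forward Euler) update drifts steadily outward, while a second-order midpoint update stays very close to the true circular path. The rotation rate, time step, and number of steps are arbitrary choices for the demonstration.

```python
import numpy as np

OMEGA = 1.0          # rigid-body rotation rate (arbitrary)
DT = 0.05            # time-step size (arbitrary)
STEPS = 2000

def velocity(p):
    """Rigid-body rotation about the origin: u = (-omega*y, omega*x)."""
    return np.array([-OMEGA * p[1], OMEGA * p[0]])

def advect(p0, update):
    """Advance a particle from p0 for STEPS time steps with the given update rule."""
    p = np.array(p0, dtype=float)
    for _ in range(STEPS):
        p = update(p)
    return p

# Linear (forward Euler) move along the local tangent: the radius grows every step.
euler = lambda p: p + DT * velocity(p)

# Second-order midpoint rule: evaluates the velocity halfway along the step,
# which senses the curvature of the path and nearly eliminates the drift.
def midpoint(p):
    half = p + 0.5 * DT * velocity(p)
    return p + DT * velocity(half)

start = (1.0, 0.0)   # particle initially on the unit circle
for name, rule in [("forward Euler", euler), ("midpoint", midpoint)]:
    r = np.linalg.norm(advect(start, rule))
    print(f"{name:>13}: radius after {STEPS} steps = {r:.4f} (exact answer is 1.0)")
```

With these parameters the forward Euler particle spirals out to a radius of roughly 12, while the midpoint particle stays within a fraction of a percent of the unit circle, which is the "thrown outward" behavior and its second-order cure in miniature.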
When exoplanet scientists first spotted patterns in disks of dust and gas around young stars, they thought newly formed planets might be the cause. But a recent NASA study cautions that there may be another explanation — one that doesn't involve planets at all.

Arcs, rings and spirals appear in the debris disk around the star HD 141569A. The black region in the center is caused by a mask that blocks direct light from the star. This image incorporates observations made in June and August 2015 using the Hubble Space Telescope's STIS instrument. Credits: NASA/Hubble/Konishi et al. 2016

Exoplanet hunters watch stars for a few telltale signs that there might be planets in orbit, like changes in the color and brightness of the starlight. For young stars, which are often surrounded by disks of dust and gas, scientists look for patterns in the debris — such as rings, arcs and spirals — that might be caused by an orbiting world.

"We're exploring what we think is the leading alternative contender to the planet hypothesis, which is that the dust and gas in the disk form the patterns when they get hit by ultraviolet light," said Marc Kuchner, an astrophysicist at NASA's Goddard Space Flight Center in Greenbelt, Maryland. Kuchner presented the findings of the new study on Thursday, Jan. 11, at the American Astronomical Society meeting in Washington. A paper describing the results has been submitted to The Astrophysical Journal.

When high-energy UV starlight hits dust grains, it strips away electrons. Those electrons collide with and heat nearby gas. As the gas warms, its pressure increases and it traps more dust, which in turn heats more gas. The resulting cycle, called the photoelectric instability (PeI), can work in tandem with other forces to create some of the features astronomers have previously associated with planets in debris disks.

Astronomers thought patterns spotted in disks around young stars could be planetary signposts. But is there another explanation? A new simulation performed on NASA's Discover supercomputing cluster shows how the dust and gas in the disk could form those patterns — no planets needed. Credits: NASA's Goddard Space Flight Center

Kuchner and his colleagues designed computer simulations to better understand these effects. The research was led by Alexander Richert, a doctoral student at Penn State in University Park, Pennsylvania, and includes Wladimir Lyra, a professor of astronomy at California State University, Northridge and research associate at NASA's Jet Propulsion Laboratory in Pasadena, California. The simulations were run on the Discover supercomputing cluster at the NASA Center for Climate Simulation at Goddard.

In 2013, Lyra and Kuchner suggested that PeI could explain the narrow rings seen in some disks. Their model also predicted that some disks would have arcs, or incomplete rings, which were first directly observed in 2016.

"People very often model these systems with planets, but if you want to know what a disk with a planet looks like, you first have to know what a disk looks like without a planet," Richert said. Richert is lead author on the new study, which builds on Lyra and Kuchner's previous simulations by including an additional new factor: radiation pressure, a force caused by starlight striking dust grains.

Light exerts a minute physical force on everything it encounters. This radiation pressure propels solar sails and helps direct comet tails so they always point away from the Sun.
The same force can push dust into highly eccentric orbits, and even blow some of the smaller grains out of the disk entirely. The researchers modeled how radiation pressure and PeI work together to affect the movement of dust and gas. They also found that the two forces manifest different patterns depending on the physical properties of the dust and gas. The 2013 simulations of PeI revealed how dust and gas interact to create rings and arcs, like those observed around the real star HD 141569A. With the inclusion of radiation pressure, the 2017 models show how these two factors can create spirals like those also observed around the same star. While planets can also cause these patterns, the new models show scientists should avoid jumping to conclusions. “Carl Sagan used to say extraordinary claims require extraordinary evidence,” Lyra said. “I feel we are sometimes too quick to jump to the idea that the structures we see are caused by planets. That is what I consider an extraordinary claim. We need to rule out everything else before we claim that.” Kuchner and his colleagues said they would continue to factor other parameters into their simulations, like turbulence and different types of dust and gas. They also intend to model how these factors might contribute to pattern formation around different types of stars. A NASA-funded citizen science project spearheaded by Kuchner, called Disk Detective, aims to discover more stars with debris disks. So far, participants have contributed more than 2.5 million classifications of potential disks. The data has already helped break new ground in this research. By Jeanette Kazmierczak NASA's Goddard Space Flight Center, Greenbelt, Md.
Clay Float: Exploring Archimedes' Principle

Use this resource to discuss Archimedes' principle and buoyancy in your classroom. Learners use modeling clay to make and test objects, record water data, and analyze this principle. They come up with ideas to make a ball of clay float, and then discuss why some techniques worked and some did not.
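A short, hypothetical calculation can make the underlying idea concrete during the class discussion. By Archimedes' principle an object floats when the weight of water it can displace at least equals its own weight, so a solid ball of clay (denser than water) sinks, while the same clay pressed into a hollow boat encloses enough volume to float. The densities and volumes in this Python sketch are illustrative assumptions, not values from the lesson plan.

```python
WATER_DENSITY = 1000.0   # kg/m^3
CLAY_DENSITY = 1700.0    # kg/m^3, a typical modelling-clay value (assumed)

clay_volume = 5e-5                                  # m^3 of clay, about 50 mL (assumed)
clay_mass = CLAY_DENSITY * clay_volume              # 0.085 kg of clay

# Solid ball: it can displace at most its own volume of water.
max_displaced_ball = WATER_DENSITY * clay_volume    # 0.05 kg of water
print("Ball floats?", max_displaced_ball >= clay_mass)    # False, so it sinks

# Boat shape: the hull encloses extra volume, so far more water can be displaced.
hull_volume = 2e-4                                  # m^3 enclosed by the hull (assumed)
max_displaced_boat = WATER_DENSITY * hull_volume    # 0.2 kg of water
print("Boat floats?", max_displaced_boat >= clay_mass)    # True, so it floats
```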
Standards-Based Science Investigations: Grade 4 (Enhanced eBook) (Resource Book)

As they read about science, students develop content vocabulary and scientific fluency. By doing science experiments, students see science in action and gain a clearer understanding of scientific principles and properties. When they practice the inquiry process of scientific investigation, students get a peek at how scientists work, how they learn, discover, explain and modify the world around us. That's the philosophy behind this series of books: reading, doing, and critical thinking. The language is clear, simple, and scientifically correct. The imaginative and effective lessons cover life, earth, and physical sciences. Helpful extras include science inquiry worksheets, an inquiry assessment rubric, and alignment to standards. For Grade 4, sample activities include observing phototropism, investigating pill bugs, testing hardness of rocks, recording temperatures, working with switches and circuits, floating and sinking boats, and working with light and rainbows. This enhanced eBook gives you the freedom to copy and paste the content of each page into the format that fits your needs. You can post lessons on your class website, make student copies, and more.
Beneath the depths of the Indian Ocean, underneath the island nation of Mauritius, lies an entire lost continent. The first suggestion that a lost continent may exist in the Indian Ocean came as recently as 2013, when researchers found geological evidence suggesting that an ancient land mass, once part of the ancient supercontinent of Gondwana, exists in the area. Nearly four years later, researchers have confirmed its existence for the first time, revealing that there may be more out there than we once thought.

It is believed that the submerged land mass was left behind following the break-up of Gondwana, which started about 200 million years ago, and was subsequently covered by young lava during volcanic eruptions on the island. This continent broke off from the island of Madagascar when Africa, India, Australia and Antarctica split up to form the Indian Ocean of today.

In a paper published in Nature Communications, a team of South African, German and Norwegian researchers concluded that a lost continent lies beneath Mauritius after discovering a mineral called zircon on the island.

Zircon holds the key

The discovery of zircon in any geological find would suggest ancient tectonic movements, but the age of the zircon discovered on Mauritius – estimated to be 3bn years old – is far greater than that of any other rock found on the island. "Earth is made up of two parts: continents, which are old, and oceans, which are young," said Wits University's Prof Lewis Ashwal, author of the paper. "On the continents, you find rocks that are over 4bn years old, but you find nothing like that in the oceans, as this is where new rocks are formed. The fact that we have found zircons of this age proves that there are much older crustal materials under Mauritius that could only have originated from a continent."

Not the only lost continent

This new data confirms the existence of the lost continent: following the 2013 research, critics had claimed it was possible that the zircon was alien to the area, having been brought in by wind or even on the tracks of car tyres. "The fact that we found the ancient zircons in rock corroborates the previous study and refutes any suggestion of wind-blown, wave-transported or pumice-rafted zircons for explaining the earlier results," Ashwal confirmed. The team also said that there are many pieces of various sizes of undiscovered continent spread across the Indian Ocean, collectively called Mauritia, created following the break-up of Gondwana.
Take a Nature Walk

To begin your study of coniferous and deciduous trees, take your class for a walk on school grounds. If you do not have these two types of trees on your school property, find a location close by where you can take a short walking field trip. When you take your walk, have students take a clipboard, paper, and pencil along for recording their observations. Before leaving the classroom, have students make a T-chart on their paper and label one column deciduous and the other coniferous. As you take your walk, point out each specific tree and give the students 5 minutes to record their observations. They should just list what they observe.

Compare Coniferous and Deciduous

Upon returning from your walk, make a T-chart on the board or on chart paper. Invite students to share their observations and record them on the class T-chart. If it is not shared, be sure to ask questions to guide students toward sharing how the leaves on each tree differ. If you are taking your walk in the fall and you live in a part of the country where leaves change color, this characteristic will be obvious to your class. If this does not apply, ask the class what happens to the leaves on deciduous trees. You want them to recognize that one of the major differences between deciduous and coniferous trees is that deciduous trees lose their leaves in the fall and coniferous trees do not.

To take the comparison further, on another day you can make a Venn diagram to list the characteristics of each type of tree. This will help the class to look at the similarities and differences between the deciduous and coniferous trees. Before beginning the next activity, you can share some pictures of coniferous and deciduous trees with the class. Ask them to identify each as you share the pictures. If you do not live where students can observe the changing of the leaves, be sure to share some pictures of this.

Hand out an 11x17 piece of paper, preferably the type with lines on the bottom and a space for drawing on top. Instruct students to divide their paper in half and label each side with deciduous trees and coniferous trees. Students will draw a picture of each type of tree and write down at least four observations about each. If there is time, have the students share their pictures and observations.
Key points: "Weather" features; internal energy; nature of surface; interior composition; contrast of a giant planet with the terrestrial ones

Jupiter is the largest planet in the Solar System (as massive as all the rest together). It is a "giant planet," with very different properties from the "terrestrial" planets we have studied so far. (Image at left from the Galileo Project, http://www.jpl.nasa.gov/pictures/jupiter/)

Important characteristics of giant planets are:

- Their average densities are low, similar to water: Jupiter is 1.3 grams/cm3 and Saturn is 0.7 grams/cm3, compared with densities of 3 - 6 grams/cm3 for terrestrial planets.
- Their composition is similar to the Sun's (especially Jupiter and Saturn), whereas the terrestrial planets are made almost entirely of those "other elements."
- They have liquid or icy surfaces and dense atmospheres with violent and long-lasting storms.

Jupiter rotates in 9 hr 50 minutes at its equator. However, careful observation shows that the rotational rate is slower moving towards the pole -- 9 hr 56 minutes. This effect of rotation rate varying with latitude is called differential rotation. It is dramatized here by "freezing" the polar motions and letting the rest move relative to the poles. (From NASA, JPL, Cassini/Huygens Project, Wikipedia Commons; the dark dots that appear briefly are shadows of Galilean satellites.) The atmosphere is full of whirlpool storm systems!

Differential rotation implies that we are not seeing a solid surface, which would have to have a constant rotational speed. Storms have been seen to last for very long times -- the Great Red Spot has existed for over 300 years. (NASA via http://wanderingspace.net/category/jupiter/) Voyager images of the Great Red Spot showed a whirlpool character, confirming that it is a large high-pressure cell like some on Earth. The clouds show powerful turbulent motions indicating high winds.

The inner terrestrial planets have a close balance between the energy absorbed from the sun and the energy radiated into space. To observe in the far infrared, in 1969 a 12-inch telescope was mounted in the escape hatch of a Lear Jet so it could be carried above most of the atmosphere, which absorbs far infrared light (from http://www.nasm.si.edu/research/dsh/artifacts/SS-LearJetScope.htm). It was discovered that Jupiter emits twice as much energy as it absorbs from the sun. Subsequently, it was found that Saturn and Neptune (but not Uranus) also emit about twice as much energy as they absorb from the sun. For Jupiter, this can be trapped heat from the time of its formation; the cause is not well understood for Saturn and Neptune.

The convection from the internal heat contributes to the extreme activity of atmospheric motions. Convection arises when we heat a liquid or gas under a cooler layer: the heated part expands and floats up to the top, where it cools, contracts, and sinks back down to the bottom. We see it in everyday experiences like lava lamps. (From http://www.ihatemycubicle.com/2004/12/stupid_man_dies.html and Windows to the Universe, http://www.windows.ucar.edu/tour/link=/jupiter/interior/J_int_motions_overview.html&edu=high)

Jupiter has the strongest magnetic field of any body in the solar system. The magnetic field on Jupiter implies that it must contain a liquid metal, which contradicts expectations from its density.
However, at a depth of ~15,000 km below its cloud tops, the pressure becomes so great that hydrogen becomes a liquid metal (a rarity, because both temperature and pressure must be very high). "Metallic" in this sense means it conducts electricity readily. Most of the interior of Jupiter is in this state. (http://www2.jpl.nasa.gov/galileo/jupiter/interior.html)

Protons and electrons trapped by this magnetic field produce synchrotron radiation at radio wavelengths, making Jupiter the only planet detectable easily by radio telescopes. The bands of charged particles surrounding Jupiter are so intense that they actually knock atoms off the surface of Io, the innermost of the Galilean moons. (http://www.astronomy.org.au/ngn/) Jupiter also has aurorae similar to those on Earth. (http://nssdc.gsfc.nasa.gov/)

Not long after the Galileo mission arrived at Jupiter, a probe was released that fell through the atmosphere. As the probe fell, it returned data on temperature, pressure, composition, wind speeds, and prevalence of clouds. Exactly how giant planets form quickly, before the gas escapes from protoplanetary disks, is a problem we are still struggling to understand.

Therefore, the interior structure of the planet is as shown to the right. In place of the molten rock mantle of Earth, Jupiter has a thick layer of liquid hydrogen. Far below this liquid hydrogen "mantle" the pressure is so great that the hydrogen changes state to become electrically conductive. The strong magnetic field is a product of the rapid rotation and this conductive layer. We suspect that the rocky materials have settled into a core not too different in size from the Earth, but dwarfed in this case by the overlying layers of hydrogen and helium.

We do not believe Jupiter has any solid surface (despite what you may have seen in the movies!). Instead, the "surface" is probably liquid molecular hydrogen under great pressure from the thick atmosphere. Perhaps it would look like this artist's concept!

Test your understanding before going on.
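The "density similar to water" point above is easy to verify from Jupiter's bulk properties. This short Python check uses standard textbook values for Jupiter's mass and mean radius (they are not quoted in these notes) and recovers a mean density of roughly 1.3 grams/cm3.

```python
import math

M_JUPITER = 1.898e27   # kg, Jupiter's mass (standard value, not from the notes)
R_JUPITER = 6.9911e7   # m, Jupiter's mean radius (standard value)

volume = (4.0 / 3.0) * math.pi * R_JUPITER ** 3   # m^3, treating Jupiter as a sphere
density = M_JUPITER / volume                       # kg/m^3

print(f"Mean density: {density:.0f} kg/m^3 = {density / 1000:.2f} g/cm^3")
# Prints roughly 1326 kg/m^3, i.e. about 1.33 g/cm^3, close to water's 1.00 g/cm^3.
```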
We correct learners sometimes when they have made a mistake and we want to show them that something is wrong. There is a range of correction strategies and techniques we can use to indicate (show) that there is a mistake, and the ones we choose depend on a number of different factors, for example the aim of the activity, the age of the learners and the language level of the learners. Keep in mind that over-correction can result in learners not wanting to say anything in class because they are afraid of making mistakes. Let's look at some strategies and techniques for correcting learners' mistakes in class. In the exercise below you will find the following vocabulary:

- FINGER CORRECTION
- GESTURES/FACIAL EXPRESSIONS
- ECHO CORRECTING
- IDENTIFYING
- DELAYED CORRECTION
- IGNORING ERRORS
- REFORMULATING
- RECASTING
- GIVING THE RULE AND EXAMPLE OR DEFINITION
(HealthDay News) -- Gastroesophageal reflux disease (GERD) occurs when stomach acids back up into the esophagus. The U.S. National Institute of Diabetes and Digestive and Kidney Diseases mentions these symptoms: - Having heartburn. - Tasting stomach acid or food in the back of the mouth. - Having bad breath. - Feeling nauseated, or vomiting. - Having difficulty breathing or swallowing. - Wearing of tooth enamel. Copyright © 2015 HealthDay . All rights reserved.
Galileo called mathematics the “language with which God wrote the universe.” He described a picture-language, and now that language has a new dimension. The Harvard trio of Arthur Jaffe, the Landon T. Clay Professor of Mathematics and Theoretical Science, postdoctoral fellow Zhengwei Liu, and researcher Alex Wozniakowski has developed a 3-D picture-language for mathematics with potential as a tool across a range of topics, from pure math to physics. Though not the first pictorial language of mathematics, the new one, called quon, holds promise for being able to transmit not only complex concepts, but also vast amounts of detail in relatively simple images. The language is described in a February 2017 paper published in the Proceedings of the National Academy of Sciences. “It’s a big deal,” said Jacob Biamonte of the Quantum Complexity Science Initiative after reading the research. “The paper will set a new foundation for a vast topic.” “This paper is the result of work we’ve been doing for the past year and a half, and we regard this as the start of something new and exciting,” Jaffe said. “It seems to be the tip of an iceberg. We invented our language to solve a problem in quantum information, but we have already found that this language led us to the discovery of new mathematical results in other areas of mathematics. We expect that it will also have interesting applications in physics.” When it comes to the “language” of mathematics, humans start with the basics — by learning their numbers. As we get older, however, things become more complex. “We learn to use algebra, and we use letters to represent variables or other values that might be altered,” Liu said. “Now, when we look at research work, we see fewer numbers and more letters and formulas. One of our aims is to replace ‘symbol proof’ by ‘picture proof.’” The new language relies on images to convey the same information that is found in traditional algebraic equations — and in some cases, even more. “An image can contain information that is very hard to describe algebraically,” Liu said. “It is very easy to transmit meaning through an image, and easy for people to understand what they see in an image, so we visualize these concepts and instead of words or letters can communicate via pictures.” “So this pictorial language for mathematics can give you insights and a way of thinking that you don’t see in the usual, algebraic way of approaching mathematics,” Jaffe said. “For centuries there has been a great deal of interaction between mathematics and physics because people were thinking about the same things, but from different points of view. When we put the two subjects together, we found many new insights, and this new language can take that into another dimension.” “Where before we had been working in two dimensions, we now see that it’s valuable to have a language that’s Lego-like, and in three dimensions,” Jaffe said. “By pushing these pictures around, or working with them like an object you can deform, the images can have different mathematical meanings, and in that way we can create equations.” Among their pictorial feats, Jaffe said, are the complex equations used to describe quantum teleportation. The researchers have pictures for the Pauli matrices, which are fundamental components of quantum information protocols. This shows that the standard protocols are topological, and also leads to discovery of new protocols. “It turns out one picture is worth 1,000 symbols,” Jaffe said. 
“We could describe this algebraically, and it might require an entire page of equations,” Liu added. “But we can do that in one picture, so it can capture a lot of information.” Having found a fit with quantum information, the researchers are now exploring how their language might also be useful in a number of other subjects in mathematics and physics. “We don’t want to make claims at this point,” Jaffe said, “but we believe and are thinking about quite a few other areas where this picture-language could be important.”
Fluoride is the most effective agent available to help prevent tooth decay. It is a mineral that is naturally present in varying amounts in almost all foods and water supplies. The benefits of fluoride have been well known for over 50 years and are supported by many health and professional organizations.

Fluoride works in two ways:

Topical fluoride strengthens the teeth once they have erupted by seeping into the outer surface of the tooth enamel, making the teeth more resistant to decay. We gain topical fluoride by using fluoride-containing dental products such as toothpaste, mouth rinses, and gels. Dentists and dental hygienists generally recommend that children have a professional application of fluoride twice a year during dental check-ups.

Systemic fluoride strengthens the teeth that have erupted as well as those that are developing under the gums. We gain systemic fluoride from most foods and our community water supplies. It is also available as a supplement in drop or gel form and can be prescribed by your dentist or physician. Generally, fluoride drops are recommended for infants, and tablets are best suited for children up through the teen years. It is very important to monitor the amounts of fluoride a child ingests. If too much fluoride is consumed while the teeth are developing, a condition called fluorosis (white spots on the teeth) may result.

Although most people receive fluoride from food and water, sometimes it is not enough to help prevent decay. Your dentist or dental hygienist may recommend the use of home and/or professional fluoride treatments for the following reasons:

- Deep pits and fissures on the chewing surfaces of teeth.
- Exposed and sensitive root surfaces.
- Fair to poor oral hygiene habits.
- Frequent sugar and carbohydrate intake.
- Inadequate exposure to fluorides.
- Inadequate saliva flow due to medical conditions, medical treatments or medications.
- Recent history of dental decay.

Remember, fluoride alone will not prevent tooth decay! It is important to brush at least twice a day, floss regularly, eat balanced meals, reduce sugary snacks, and visit your dentist on a regular basis.
What is entanglement theory? It is a mystery, and here is a potential solution, though its implications are paradigm-shifting.

Imagine you found a pair of dice such that no matter how you tossed them, they always added up to 7. Besides becoming the richest man in Vegas, what you would have there is something called an entangled pair of dice. You could now separate these entangled dice. You could have your friend Alice take one of these to Macau, while the other one stays with you in Las Vegas. And as soon as you rolled your dice, the other one would always instantly show a number that added up to 7. Since this happens instantly, did your dice communicate at faster than the speed of light to Macau?

Scientists can create entangled photons, for example, by shining a laser on a nonlinear optical crystal. Entanglement means that a pair of photons act like a single entity rather than two separate particles. To understand entanglement better, you first have to accept the fact that at the quantum scale, reality is fuzzy. Reality really doesn't know what it is until it is measured. This is like a single die tossed in the air that doesn't have a distinct face until it lands. When tossed up, it is 1, 2, 3, 4, 5, and 6 all at once. Quantum particles are similar in that they do not have distinct properties until they are measured. Particles such as photons exist in all possible states simultaneously. But when a particle is measured, it is in only one state. And if the photon is entangled, this measurement of one particle causes its entangled pair to simultaneously exhibit the opposite state, no matter what the distance is between them.

Einstein disliked this idea of one particle influencing the other over long distances so much that he called it "spooky action at a distance." Einstein believed that the particles carried information about each other from the moment that they were entangled and were close to each other. He thought the properties of both particles were determined locally and carried along from the beginning. So in Einstein's view, the two dice "knew" what they would show before they were tossed. But in the quantum world, this is impossible, because particles are fuzzy until measured.

In 1964 the Irish physicist John Bell devised a test that could actually prove whether information was encoded within the entangled particles or whether the spooky action at a distance was real. He did this by taking advantage of the fact that in quantum mechanics, measurement affects the thing you are measuring. If information was encoded at the time of particle creation, as Einstein believed, then nothing we do randomly to one particle should affect the other. And what this test found conclusively proved that quantum mechanics is correct, that Einstein was wrong, and that spooky action at a distance does in fact take place.

So are entangled particles communicating instantaneously? Even if the speed were just 5 miles per hour faster than the speed of light, it would violate Einstein's theory of relativity, and our picture of reality would completely collapse if that were the case. So most scientists have come to the conclusion that no faster-than-light communication is taking place. So if no signal is telling these particles how to coordinate their results, what's going on? There is another real possibility that is not popular among scientists, but that Bell himself proposed, and that is called superdeterminism.
And here is how it could solve the mystery of entanglement: Bell proposed the idea of absolute determinism in the universe, the complete absence of free will. Suppose everything in reality is predetermined. It cannot be changed. The reality that you live in has already happened. No matter what you say or do, there is no free will. Our behavior and decisions, including our belief that we are free to choose to do one experiment rather than another, are absolutely predetermined.
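To make the dice analogy above concrete, here is a small, purely classical Python sketch (an illustration added here, not part of the original text). Each "entangled" pair is generated so that the two dice always sum to 7: individually each die looks uniformly random, yet reading one immediately fixes the other, no matter how far apart they are. Crucially, this toy model is exactly the "information carried from the beginning" picture Einstein favored; the Bell test described above is what rules that picture out for real entangled particles.

```python
import random
from collections import Counter

def entangled_pair():
    """Classical toy model: the second die is forced to make the pair sum to 7."""
    a = random.randint(1, 6)
    return a, 7 - a

# Generate many pairs and send one die of each pair to "Vegas", the other to "Macau".
vegas, macau = zip(*(entangled_pair() for _ in range(10_000)))

print("Vegas die looks uniform:", Counter(vegas))
print("Macau die looks uniform:", Counter(macau))
print("Pairs summing to 7:", sum(a + b == 7 for a, b in zip(vegas, macau)), "out of 10000")
```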
As seen under a microscope, human embryonic cells (colored dots) confined to areas of precisely controlled size and shape start to specialize and form distinct layers similar to those seen in early development. "Thanks to our diverse scientific perspectives, we were in a good position to realize that geometry could be an important factor," says developmental biologist Ali Brivanlou, who led the team with physicist Eric Siggia. A former postdoctoral researcher trained in theoretical physics, Aryeh Warmflash, also played a big role. The researchers grew colonies of human ES cells in tiny circular patterns printed on glass plates, which kept the cells confined to areas of precisely controlled size and shape. Using customized software and fluorescent tags of different colors, the scientists tracked individual cells under a microscope in real time. When they added a growth factor called BMP-4 to the walled-in stem cells, they saw the cells begin to specialize and form organized patterns just as they would under natural conditions. BMP-4-treated cells that were not confined formed random patterns. The size of the colonies mattered, too. ES cells confined to circles measuring 1 millimeter across - roughly the size and shape of a week-old human embryo - organized into the three main "germ" layers destined to become different human cell types, plus an outer layer of cells like those that become the placenta. Cells confined to smaller circles formed fewer specialized layers, and those in the smallest circles formed only a single germ layer. From these observations, the team concluded that one key way ES cells know their fate is by calculating their distance from the edge of the colony. With the help of mathematical models, the researchers are now looking into exactly how cells make these measurements. Their follow-up studies of human ES cells confined to micro-patterned rectangles, squares and triangles confirm that "the response of a cell to a given growth factor is as much influenced by the geometry as it is by the growth factor itself," says Brivanlou. The team's work has opened a new window for studying early development. Shedding light on the process could advance efforts aimed at using human stem cells to replace diseased cells and regenerate lost or injured body parts, Brivanlou says. "By simply varying the size and geometry of these circles, it might be possible to coax stem cells into becoming brain cells or heart cells or pancreas cells," he explains. No stranger to working across disciplines, Brivanlou co-teaches an innovative architecture course on designing "dynamic buildings" of tomorrow that could morph in response to changing environmental conditions or other circumstances, as biological systems can do. His students spend 2 weeks doing experiments in his lab, he says, "so they can appreciate with their own eyes how nature allows forms to change shape."
Writing essays and short stories at Secondary level can be a daunting experience – and even more so when you are asked to write in a second language. Students are asked to write small essays and significantly extended stories when they reach Key Stages 3 and 4. This can be overwhelming for an EAL student. Many end up writing their essays using a tool like Google Translate, which can provide some benefit to learning; however, the benefit can be lost when huge chunks of text are simply copied without understanding the vocabulary and grammar structures used. Our role as teachers is to provide students with the tools they need for writing. Written assignments can prove useful for developing writing in an additional language (Mackay, 2006), especially when modelled on previous reading or shared writing. Nunan (2011) explains that developing the ability to write a fluent, coherent, extended piece takes time and a lot of practice. Writing is a fundamental skill for all learners and serves a cognitive as well as a physical function. While the student learns to form letters or characters physically, the act of writing also helps them develop their thinking and reasoning skills, develop their arguments and support these with evidence. There are many ways to approach writing. Here is one approach that can help EAL learners begin the writing process:
- Provide students with a content-based cloze activity to support the learning of new words.
- Give students a writing frame as a model of how the essay or story should be structured. For example, a lesson on connectives and sentence starters can help with formulating an essay (Coelho, 2010). This can also support students in looking beyond simply writing ‘and’ and ‘then’ (Wray and Lewis, 1996). Writing frames can assist students with a variety of structures, such as writing to persuade versus writing a report for a Science class (Wray and Lewis, 1996).
- Provide models or prompts for short journal responses. For example, if you are learning the grammatical structure ‘used to’, you could provide a scaffolding phrase such as ‘My mother used to…’
- Ask the student to provide their own writing journal which they can decorate. This can be their personal writing book, with a focus on writing for confidence. Give the student a title such as ‘Three Things I Enjoy Doing’ and time the students as they write, or let them write about the topic for homework.
- Students may also be required to reference their work. Find some simplified websites and books which they can use to find their information (never leave EAL students to find the information alone on the Internet). This will also help to develop their reading skills. You will also need to model how the essay should be referenced.
Once you have prepared all the materials, students need to plan their ideas. Allowing students to speak about their ideas will help with the organisation and development of their speaking skills. This can be done as a small group, with the support of an adult. Students may need to use their first language initially, to help them develop their ideas on the topic. This should also build their confidence and help develop their language structures and vocabulary. During this time, you may also wish to use some visual images as a prompt for learning new language. During the discussion group sessions, give your students a set of post-it notes to write down keywords and phrases (Mackay, 2006).
Using a line map, ask your students to place their post-it notes on the line to begin forming an argument. Students can even write in their first language if necessary. With the post-it notes, the focus will be on what needs to be said rather than on how to start (Mackay, 2006). The resource accompanying this article is a sample writing frame that can be used as a simple scaffold. With this, your students will feel ‘safe’, while the planning process should equally help them feel more secure (Mackay, 2006).
Author: Gemma Fanning, EAL Specialist
References:
Coelho, E. (2010) Differentiated Instruction for English Language Learners, Ontario Institute for Studies in Education, University of Toronto.
Nunan, D. (2011) Teaching English to Young Learners, Anaheim University Press, Anaheim, California.
Mackay, N. (2006) Removing Dyslexia as a barrier to achievement (Second Edition), SEN Marketing, Wakefield.
Wray, D. and Lewis, M. (1996) An approach to scaffolding children’s non-fiction writing: the use of writing frames, University of Exeter.
Myopia, more commonly known as nearsightedness, is a refractive error that affects millions of adults and children worldwide. This condition occurs when a person’s eyeball is too long, or the cornea or lens has an irregular shape. A myopic eye focuses the image in front of the retina, as opposed to directly on the retina. It is often hereditary, especially if both parents are nearsighted. Recent studies show that more time spent outdoors can slow the onset or progression of myopia, for reasons explained below. These findings are significant, as myopia can seriously impact eye health if left untreated. At Camas Vision Centre, we’re here to answer any questions you may have and ensure that your child’s myopia is under control.
How Does Spending Time Outdoors Benefit Myopia?
By spending time outdoors, children train their eyes to focus on distant objects and relax their eyes. Just as with any other muscle in the body, the muscles in the eye need to be trained and strengthened in order to produce clear vision. Experts further suggest that moderate exposure to sunlight has a positive impact on myopia and general eye health. A recent study was conducted by the Centre for Ocular Research & Education (CORE) at the University of Waterloo’s School of Optometry and Vision Science. The study shows that children who spend 1 extra hour outdoors each week reduce their risk of developing myopia by over 14%. In contrast, according to the National Institutes of Health, children who spend a considerable amount of time indoors watching TV or playing video games are at a significantly higher risk of developing nearsightedness. Outdoor time should be incorporated into every child’s routine, especially for those at risk of developing myopia. Parents and caregivers can make being outdoors fun by playing sports, hiking new trails, enjoying picnics or barbeques, or organizing scavenger hunts.
Why Is Slowing Myopia Progression So Important?
Myopia generally worsens over time, mostly during childhood and into the adolescent years. If your child’s prescription regularly increases, this can lead to more serious complications. Myopia progression heightens the risk of developing other eye conditions and disorders, such as cataracts, glaucoma, or retinal detachment. In more severe cases, permanent vision loss — or even blindness — may occur. This is why it is crucial to monitor your child’s condition with a yearly visit to Dr. Robert Nicacio. Not sure whether your child has myopia? Refer to the following list.
Signs of Myopia in Children
Children with myopia may exhibit any of the following:
- Squinting when reading the board or watching TV
- Lack of interest in playing sports that require distance vision
- Positioning oneself at close proximity to the TV or screen
- Sitting at the front of the classroom to clearly see the teacher and board
- Holding books close to the eyes
If your child is experiencing any of these symptoms or if you’ve noticed some of these behaviors, give outdoor time a try and bring him or her in to Camas Vision Centre for a comprehensive eye exam. We offer evidence-based myopia management treatment to slow down the progression of nearsightedness, thus preventing severe vision loss later in life. Camas Vision Centre provides myopia management and other treatments to patients in Vancouver, Camas, Portland, Hillsboro, and throughout Washington.
Ravens Are Better At Planning Than 4-Year-Olds The ability to plan for future events is at the core of being human and is of crucial importance in our everyday lives. Previous studies on apes suggest that this ability evolved within the hominid lineage. According to various experiments, great apes can plan across time in tool use and bartering conditions, exhibiting self-control for time intervals up to at least one night. With this study, however, scientists from Lund University, Sweden, show that ravens can plan for tool-use and bartering events with delays of up to 17 hours, exert self-control, and consider the timespan until future events. Researchers highlight that it is unlikely that such advanced skills could have been present in the last common ancestor of birds and mammals, over 320 million years ago. Five captive and hand-raised adult ravens were tested. Four individuals took part in each condition (tool or bartering), while one male was too scared of the apparatus. Experiment 1 checked if ravens could select, save and later use either a tool or an exchangeable token that became useful 15 minutes after being chosen. Later, they were also given the opportunity to experience that other objects, used as distractors, did not open the reward giving apparatus. The following day, the birds were exposed to the apparatus, which they could interact with, but without a tool available in order to create motivation for future planning. Thereafter, the apparatus was removed in the presence of the subject. One hour later, the ravens were offered a selection from a tray containing the functional tool and three non-functional distractors. Each experiment included two conditions: tool-use and bartering with humans. Ravens are not habitual tool users, while bartering has never been observed in the wild. Specifically, the scientists tested if ravens can make decisions for an event 15 minutes into the future (experiment 1), and over a longer interval of 17 hours (experiment 2). Additionally, it was checked if the birds could exert self-control when making decisions for the future, by providing an immediate reward (experiment 3). As most of us well know, self-control is essential when planning as our impulsive demands tend to limit us to the whims of “here and now”. The last experiment tested whether the ravens valued a reward that is offered sooner than in experiment 3, where a 15 minute delay was enforced. In the tool condition, the subjects successfully selected and used the tool to solve the task at an average of 79%, a percentage lowered significantly due to one of the clever females figuring out how to operate the apparatus without the tool… Meanwhile, in the bartering experiment, on average, the ravens exchanged 78% of the selected tokens. A day later (experiment 2), in the tool condition, the ravens selected and used the tool in 89% of the cases. When presented with distractors and an immediate reward (experiment 3), the birds selected the tool on average in 74% of the trials and the token in an average of 73% of trials. Finally, when the delay was shorter (experiment 4), all ravens walked past the immediate reward, and instead selected and used the functional item in 100% of the trials. The results suggest that ravens make decisions for their future and that they are planners on par with apes. In the tool conditions, ravens were at least as proficient as tool using apes, whereas in the bartering conditions, the birds outperformed orang-utans, bonobos, and chimpanzees. 
The first trial performances show that the ravens’ behaviors were not a result of habit formation and that they perform better than 4-year-old children. The clear similarities in performance to great apes in these tasks show what the brains of some birds are capable of. Furthermore, this new knowledge opens up avenues for investigating the evolutionary principles of cognition, as such abilities must have evolved independently in birds. If still in doubt, animal advocates will now surely find it irrefutable that birds possess impressive cognitive abilities. Since our society seems to value intelligence on human terms, such studies aid the causes of bird welfare and rights by questioning the speciesist status quo.
SOURCE: The Guardian DATE: October 30, 2020 SNIP: The US and UK produce more plastic waste per person than any other major countries, according to new research. The analysis also shows the US produces the most plastic waste in total and that its citizens may rank as high as third in the world in contributing to plastic pollution in the oceans. Previous work had suggested Asian countries dominated marine plastic pollution and placed the US in 20th place, but this did not account for US waste exports or illegal dumping within the country. Data from 2016, the latest available, show that more than half of the plastic collected for recycling in the US was shipped abroad, mostly to countries already struggling to manage plastic waste effectively. The researchers said years of exporting had masked the US’s enormous contribution to plastic pollution. “The US is 4% of the world’s population, yet it produces 17% of its plastic waste,” said Nick Mallos of the Ocean Conservancy, one of the study authors. “The US needs to play a much bigger role in addressing the global plastic pollution crisis.” The size of the US contribution is likely to be the result of high income and consumption levels. “I assume we’re just the best consumers,” said Kara Lavender Law of the Sea Education Association, part of the research team. Plastic waste has polluted the whole planet, from the deepest oceans to Arctic snow and Alpine soils, and is known to harm wildlife. Concern is also growing about the quantity of microplastics people consume with food and water, and by breathing them in. A study led by Lau in September found that even if all currently feasible measures were used to cut plastic pollution it would fall by only 40%, putting 700m tonnes into the environment by 2040. “To avoid a massive buildup of plastic in the environment, coordinated global action is urgently needed to reduce plastic consumption, increase reuse, waste collection and recycling,” the study concluded. China banned the import of plastic waste in 2018, and Malaysia, Vietnam, Thailand, India and Indonesia have followed with their own restrictions. The fate of the plastic no longer going to these countries is not yet fully known, but a Guardian investigation in 2019 found US plastic was being sent to some of the world’s poorest countries, including Bangladesh, Laos, Ethiopia and Senegal, where labour is cheap and environmental regulation limited. Lavender Law said the Covid-19 pandemic was also increasing plastic waste, particularly discarded PPE, but that data on the scale of the issue was not yet available. The researchers found the US produced the most plastic waste by World Bank reckoning, at 34m tonnes in 2016, but the total increased to 42m tonnes when the additional data was considered. India and China were second and third, but their large populations meant their figures for per capita plastic waste were less than 20% of that of US consumers. Among the 20 nations with the highest total plastic waste production, the UK was second to the US per capita, followed by South Korea and Germany.
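The per-capita gap is easy to verify from the figures quoted above. In the rough check below, the US population value is an approximation I have assumed for 2016; the waste totals and percentage shares come from the article.

```python
us_waste_tonnes = 42e6   # 2016 total including exported and illegally dumped waste
us_population = 323e6    # assumed approximate 2016 US population

per_capita_kg = us_waste_tonnes * 1000 / us_population
print(round(per_capita_kg), "kg of plastic waste per person per year")  # ~130 kg

# 4% of the world's people producing 17% of its plastic waste implies
# a per-capita rate roughly 17/4, about 4 times the global average.
print(0.17 / 0.04)
```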
“BIRD brain” is usually an insult, but that may have to change. A light-activated compass at the back of some birds’ eyes may preserve electrons in delicate quantum states for longer than the best artificial systems. Migrating birds navigate by sensing Earth’s magnetic field, but the exact mechanisms at work are unclear. Pigeons are thought to rely on bits of magnetite in their beaks. Others, like the European robin (pictured), may rely on light-triggered chemical changes that depend on the bird’s orientation relative to Earth’s magnetic field. A process called the radical pair (RP) mechanism is believed to be behind the latter method. In this mechanism, light excites two electrons on one molecule and shunts one of them onto a second molecule. Although the two electrons are separated, their spins are linked through quantum entanglement. The electrons eventually relax, destroying this quantum state. Before this happens, however, Earth’s magnetic field can alter the relative alignment of the electrons’ spins, which in turn alters the chemical properties of the molecules involved. A bird could then use the concentrations of chemicals at different points on its eye to deduce its orientation. Intrigued by the idea that, if the RP mechanism is correct, a delicate quantum state can survive a busy place like the back of an eye, Erik Gauger of the University of Oxford and colleagues set out to find out how long the electrons remain entangled. They turned to results from recent experiments on European robins, in which the captured birds were exposed to flip-flopping magnetic fields of different strengths during their migration season. The tests revealed that a magnetic field of 15 nanoTesla, less than one-thousandth the strength of Earth’s magnetic field, was enough to interfere with a bird’s sense of direction (Biophysical Journal, DOI: 10.1016/j.bpj.2008.11.072). These oscillating magnetic fields will only disrupt the birds’ magnetic compass while the electrons remain entangled. As a weaker magnetic field takes longer to alter an electron’s spin, the team calculated that for such tiny fields to have such a strong impact on the birds’ compasses the electrons must remain entangled for at least 100 microseconds. Their work will appear in Physical Review Letters. The longest-lived electrons in an artificial quantum system – a cage of carbon atoms with a nitrogen atom at its centre – survived for just 80 microseconds at comparable temperatures, the team points out. “Nature has, for whatever reason, been able to protect quantum coherence better than we can do with molecules that have been specially designed,” says team member Simon Benjamin of the Centre for Quantum Technologies in Singapore. Thorsten Ritz of the University of California, Irvine, who helped perform the robin experiments, cautions that the RP mechanism has yet to be confirmed. But he is excited by the prospect of long-lived quantum states. “Maybe we can learn from nature how to mimic this,” he says.
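A rough order-of-magnitude check (my own, not the authors' calculation) shows why such a weak field implies a long-lived quantum state: a 15 nanotesla field turns an electron spin extremely slowly, so the entangled pair must survive long enough for that slow rotation to matter.

```python
# Larmor precession of an electron spin in a 15 nT field
h = 6.626e-34      # Planck constant, J*s
mu_B = 9.274e-24   # Bohr magneton, J/T
g = 2.0            # electron g-factor (approx.)
B = 15e-9          # field strength, tesla

frequency_hz = g * mu_B * B / h   # ~420 Hz
period_s = 1.0 / frequency_hz     # ~2.4 milliseconds per full precession
print(period_s)
# Even a few percent of this period is ~100 microseconds, so for so weak a field
# to disturb the compass, the entangled state must persist at least that long.
```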
A geosyncline may be defined as “a thick, rapidly accumulating body of sediment formed within a long, narrow, subsiding belt of the sea which is usually parallel to a plate margin”. (Oxford Dictionary of Geography) Or we may say a geosyncline is a “very large linear depression or down-warping of the earth’s crust, filled (especially in the central zone) with a deep layer of sediments derived from the land masses on each side and deposited on the floor of the depression at approximately the same rate as it slowly, continuously subsided during a long period of geological time”. (Penguin Dictionary of Geography)
Evolution of the Concept of Geosynclines:
The concept of geosynclines came into existence in 1859. Based upon his research on the stratigraphy and structure of the northern Appalachians, James Hall discovered that the folded Palaeozoic sediments belonging to mountain ranges are of shallow-water marine origin and have a thickness of 12 km. James Hall also found that the thickness was ten to twenty times greater in comparison to the unfolded rock strata of corresponding ages found in the interior lowlands towards the west. The deposition of a massive sequence of shale, sandstone and limestone suggests that the underlying floor of older rocks subsided by a similar amount. The mountain formation was preceded by prolonged periods of down-warping during which the process of sediment accumulation maintained a balance with the subsidence of the crust. Dana (1873) called such elongated belts of subsidence and sedimentation ‘geosynclines’. H. Stille further categorised geosynclines into miogeosynclines and eugeosynclines. Eugeosynclines are characterised by intermittent volcanic activity during the process of sedimentation, whereas miogeosynclines have low volcanic activity. The two classes are found side by side, separated by a geanticline in the middle. Miogeosynclines are now considered to be former continental margins like those fringing the Atlantic Ocean, and eugeosynclines represent the inverted and deformed equivalents of ocean basins of smaller magnitude such as the marginal basins of the western part of the Pacific, the Sea of Japan and the Sea of Okhotsk. Schuchert categorised geosynclines on the basis of size, location and evolutionary history. The three categories according to him are as follows: (i) Monogeosynclines are exceptionally long and narrow tracts. Such geosynclines are situated either within a continent or along the littoral areas. They are called ‘mono’ since they pass through only one cycle of sedimentation and mountain-building. An example is the Appalachian geosyncline, which was folded from the Ordovician to the Permian period. (ii) Polygeosynclines are broader than monogeosynclines. These geosynclines had a longer period of existence than the monogeosynclines. They passed through more than one phase of orogenesis. The Rockies and Ural geosynclines are examples of polygeosynclines. Such mountain ranges exhibit complex parallel anticlines called geanticlines. (iii) Mesogeosynclines are surrounded by continents on all sides. They have greater depth and a long and complex geological history. E. Haug defined geosynclines as deep-water regions of considerable length but relatively narrow in width. Haug drew palaeogeographical maps of the world to prove that the present-day fold mountains originated from massive geosynclines of the past.
Haug postulated five major landmasses belonging to the Mesozoic Era, namely (i) North Atlantic Mass, (ii) Sino-Siberian Mass, (iii) Africa-Brazil Mass, (iv) Australia-India-Madagascar Mass and (v) Pacific Mass. He identified four geosynclines located between these rigid masses: (i) Rockies geosyncline, (ii) Ural geosyncline, (iii) Tethys geosyncline and (iv) Circum-Pacific geosyncline. According to Haug, the transgressional and regressional phases of seas have a direct impact on the littoral margins of the geosynclines. The finer sediments are deposited centrally in the geosynclines, whereas the coarser sediments are deposited in marginal areas where the depth of water is shallow. Not all geosynclines have the same cycle of sedimentation, subsidence, compression and folding of sediments. Haug’s theory is criticised because of its confusing ideas. The palaeogeographical map by Haug shows land areas disproportionately larger than oceanic areas or geosynclines. Critics raise questions about the existence of such a huge landmass after the Mesozoic Era. Haug’s idea of deep geosynclines is also not acceptable because of the evidence of marine fossils found in fold mountains. Marine organisms from which the fossils are derived are found only in shallow waters. According to J.W. Evans, the form and shape of geosynclines change according to the changes which occur in the environment. According to Evans, (i) geosynclines may be placed between two landmasses, e.g., the Tethys geosyncline between Laurasia and Gondwanaland; (ii) geosynclines may be found in front of a mountain or a plateau, for example, after the origin of the Himalayas there was a long trench in front of the Himalayas which was later filled with sediments leading to the formation of the vast Indo-Gangetic plains; (iii) geosynclines are found along the continental margins; (iv) geosynclines may exist in front of a river mouth. According to Arthur Holmes, earth movements rather than sedimentation cause subsidence of geosynclines through a long and gradual process, e.g., the deposition of sediments up to 12,160 metres in the Appalachian geosyncline could be possible during a period of 300,000,000 years. Holmes identifies four types. (i) Geosynclines Formed by Magmatic Migration: Holmes considers the earth’s crust to be made of three layers: (a) an external layer of granodiorite (10-12 km thick); (b) intermediate amphibolite (20-25 km thick); (c) eclogite and some peridotite. The migration of magma from the intermediate layer to the surrounding areas causes subsidence of the upper layers, leading to the formation of a geosyncline. (ii) Geosynclines Formed by Metamorphosis: The lowermost rock layers are metamorphosed due to compression caused by the convergence of convective currents. Thus the density of rocks increases, resulting in geosyncline formation. Holmes believes that the Caribbean Sea, the western part of the Mediterranean Sea and the Banda Sea were formed by this process. (iii) Geosynclines Formed by Compression: Subsidence may occur in the earth’s crust due to compression. Such compressional activity occurs because of converging convective currents. Examples are the Persian Gulf and the Indo-Gangetic trough. (iv) Geosynclines Formed due to a Thinner Sialic Layer: When a column of rising convectional currents diverges after reaching the bottom layer of the crust, two possibilities arise: (a) the sial is stretched apart owing to tensile forces.
This causes thinning of sialic layers and the formation of geosynclines. (b) The continental mass may be broken apart to form geosynclines. Examples are found in the former Ural geosyncline. Dustar identified three types of geosynclines in his classification, mainly on the basis of the structure of mountain ranges: (i) Inter-continental geosynclines are located between two land masses. (Schuchert’s monogeosyncline coincides with this type.) (ii) Circum-continental geosynclines are located on the borders of continents; (iii) Circum-oceanic geosynclines are found along the littoral areas of oceans. Such geosynclines are also called a special type of geosynclines or unique geosynclines.
Geosynclinal Orogen Theory of Kober:
The German geologist Kober, in his book Der Bau der Erde, established a detailed and systematic relationship between geosynclines and the rigid masses of continental plates and the formation of fold mountains. Kober’s geosynclinal theory is based on the contraction forces produced as a result of the cooling of the earth. In Kober’s view the forces of contraction of the earth lead to horizontal movements of forelands which in turn squeeze sediments into massive mountains. According to Kober, the mountains of the present occupied the geosynclinal sites of early periods. The geosynclines or mobile zones of water have been identified as ‘orogen’ by Kober. The rigid masses which surround the geosynclines are termed ‘kratogen’. Such kratogens include the Canadian Shield, the Baltic Shield, the Siberian Shield, Peninsular India, the Chinese Massif, the Brazilian Mass, the African Shield, and the Australian and Antarctic rigid blocks. Kober considers the Pacific Ocean to have been formed when the mid-Pacific geosyncline separated the north and south Pacific forelands, which were later filled with water and sank. He identified morphometric units based on the surface features of the earth during the Mesozoic Era, e.g., (i) Africa together with some parts belonging to the Indian and Atlantic Oceans, (ii) the Indian-Australian landmass, (iii) the Eurasian landmass, (iv) the Northern Pacific continent, (v) the Southern Pacific continent, (vi) South America and Antarctica. Kober demarcated six major mountain-building periods. Three very little-known mountain-building periods occurred during the Precambrian Period. This was followed by two major periods during the Palaeozoic Era—the Caledonian orogenesis was over by the end of the Silurian Period and the Variscan orogeny was finished in the Permo-Carboniferous Period. The sixth and last orogenesis, called the Alpine orogeny, was completed in the Tertiary Epoch. Kober opined that the whole process of mountain-building passes through three stages closely interlinked with one another. The first stage is characterised by the creation, sedimentation and subsidence of geosynclines. Geosynclines are formed due to contraction caused by the cooling process of the earth. The forelands or kratogens which border geosynclines succumbed to the forces of denudation. As a result, there was constant wearing away of rocks and boulders from forelands and deposition of the eroded material on the beds of geosynclines. This led to the subsidence of geosynclines. The twin processes of sediment deposition and the resultant subsidence led to further sediment deposition and increasing thickness of sediments. In the second stage the geosynclinal sediments are squeezed and folded into mountain ranges.
There is a convergence of forelands towards each other due to the force of the contraction of the earth. The enormous compressive forces produced by these moving forelands produce contraction, squeezing and folding of sediments deposited on the geosynclinal bed. The parallel mountain ranges found on both sides of the geosyncline have been termed by Kober rand ketten, meaning marginal ranges. Kober viewed the folding of geosynclinal sediments as dependent upon the intensity of the compressive forces. Compressive forces of normal and moderate intensity produce marginal ranges on two sides of the geosyncline, leaving the middle part unaffected. The unfolded middle part is termed the zwischengebirge (between mountains) or median mass. Kober tried to explain the forms and structures of fold mountains in the context of the median mass. He viewed the Tethys geosyncline as bordered by the European foreland in the north and by the African foreland in the south. The sedimentary deposits of the Tethys geosyncline had undergone massive compression due to the converging movement of the European landmass (foreland) and the African foreland, leading to the formation of the Alpine mountain system. For example, the Pyrenees, Betic Cordillera, the Provence ranges, the Carpathians, the Alps proper, the Balkan mountains and the Caucasus mountains came into being due to the northward movement of the African foreland, while the Atlas mountains, the Apennines, the Dinarides, the Hellenides and the Taurides were formed by the southward movement of the European foreland. An example of such a median mass is the Hungarian median mass located between the Carpathians and the Dinaric Alps on two sides. The Mediterranean Sea is a median mass placed between the Pyrenees-Provence ranges on the north and the Atlas Mountains and their eastern extension on the south. Further examples of median masses are the Anatolian plateau located between the Pontic and the Taurus, and the Iranian plateau located between the Zagros and the Elburz. Kober argued that the Asiatic Alpine fold mountains can be divided into two major categories based on the orientation of folds: (a) ranges formed by the northward compression, such as the Pontic, Taurus, Caucasus, Kunlun, Yunnan and Annam ranges, and (b) ranges formed by the southward compression, like the Zagros, the Elburz (Iran), the Oman ranges, the Himalayas, etc. The median mass is found in various forms: (i) plateaus, like the Tibetan plateau between the Kunlun and the Himalayas, and the Basin Range bordered by the Wasatch ranges and Sierra Nevada (USA); (ii) plains, like the Hungarian plain bordered by the Carpathians and Dinaric Alps; (iii) seas, such as the Caribbean Sea between the mountains of middle America and the West Indies. The third and final stage of mountain-building is characterised by a gradual ascent of mountain ranges and the ongoing denudation processes by natural agents. Kober’s geosynclinal theory provided a satisfactory explanation for a few aspects of mountain building. The theory, however, suffers from shortcomings. First, the force of contraction produced by the cooling of the earth is not adequate for the formation of massive mountains like the Himalayas and the Alps. Secondly, Suess argued that only one side of the geosyncline moves while the other side remains static. Suess termed the moving side the ‘backland’ and the stable side the ‘foreland’. He opined that the Himalayas were formed by the southward movement of Angaraland; Gondwanaland did not move.
This observation is now irrelevant in the light of the Plate Tectonic Theory. Evidence from palaeomagnetism and sea-floor spreading proves that both forelands move towards each other. Thirdly, Kober’s theory has been successful in explaining the mountains having an east-west extension, but those having a north-south alignment can hardly be explained on the basis of his theory. Kober, however, has been given credit for having postulated the formation of geosynclines and the role of geosynclines in mountain formation.
The Modern Concept of Geosyncline:
The ideas about geosynclines underwent a significant change with the introduction of the Plate Tectonic Theory. A continental margin placed along a plate margin known for subduction, collision or transform-fault motion is called an active margin, while a continental margin which shifts away from a spreading axis is termed passive. For example, on the east coast of North America, a passive continental margin keeps depositing sediments with the gradual movement of the continent away from the spreading axis. The lithosphere becomes cooler and denser at an accelerated rate, accompanied by an increasingly deeper ocean floor off the passive margin, as the sediments continue to be deposited on the ocean floor. Such a thick column of sediment along the border of a passive margin is called a geosyncline. The studies conducted during the second half of the 20th century reveal that a geosyncline is a thick, rapidly accumulating body of sediment which lies parallel to the continent. The age-old idea of a geosyncline as an intra-cratonic trough bordered by mountains contributing sediments needs to be abandoned. The accumulation of sediments may take place on the continental shelf and slope or in a trough or trench. Nowadays, the term ‘geocline’ is used because the structure of a geosyncline is not a two-sided trough; rather, it is more open towards the ocean. Geoclines of passive continental margins can be divided into two types: miogeoclines, or the wedges of shallow-water sediments of marine origin which constitute the continental shelves; and eugeoclines, or wedges of deep-sea sediment deposited at the foot of the continental slope and lying on oceanic crust. Both types of geoclines are built up by sediments that accumulate while the lithosphere slowly subsides. In the Gulf of Mexico, the miogeocline sediments attain a thickness of 20 km at the external fringe of the continental shelf. Eugeocline sediments are found on the oceanic crust, just above an oceanic volcano. The uninterrupted accumulation of sediments in the miogeoclines for about 200 million years has been possible due to sinking of the crust as a result of sediment loading. The miogeocline areas bear great economic importance due to the availability of mineral oil.
We’ve seen 3D printed blood vessels before – even an entire functioning network of them that was surviving in mice – but researchers in the Netherlands who are working with blood vessels and 3D printing technology are focused on something a little different. The team, consisting of scientists from the University of Twente and Utrecht University, is using 3D printing techniques to replicate the interaction between the flow of blood and blood vessel walls, in order to reproduce and study blood clots. Blood clots in a person’s artery, or arterial thrombosis, can be fatal, and are one of the top causes of heart attacks and strokes, which result in over 14 million deaths around the world each year. 3D printing has been used in the past to help doctors identify the types of plaque that cause heart attacks, but by mimicking blood flow in the artery walls, this research team hopes to duplicate both diseased and healthy blood vessels in vitro for the purpose of controlled studies. The researchers developed a more anatomically correct, microfluidic blood vessel model, using layered stacks of computed tomography angiography (CTA) data and stereolithography, that mimics a blood clot forming due to stenosis defects (blood vessel narrowing). The team published a paper on this important research, titled “Mimicking arterial thrombosis in a 3D-printed microfluidic in vitro vascular model based on computed tomography angiography data,” in Lab on a Chip; authors include Pedro F. Costa, Hugo J. Albers, John E.A. Linssen, Heleen H.T. Middelkamp, Linda van der Hout, Robert Passier, Albert van den Berg, Jos Malda, and Andries D. van der Meer. The study’s abstract explains: “Microfluidic chip-based vascular models allow controlled in vitro studies of the interaction between vessel wall and blood in thrombosis, but until now, they could not fully recapitulate the 3D geometry and blood flow patterns of real-life healthy or diseased arteries. Here we present a method for fabricating microfluidic chips containing miniaturized vascular structures that closely mimic architectures found in both healthy and stenotic blood vessels. By applying stereolithography (SLA) 3D printing of computed tomography angiography (CTA) data, 3D vessel constructs were produced with diameters of 400 μm, and resolution as low as 25 μm. The 3D-printed templates in turn were used as moulds for polydimethylsiloxane (PDMS)-based soft lithography to create microfluidic chips containing miniaturized replicates of in vivo vessel geometries. By applying computational fluid dynamics (CFD) modeling a correlation in terms of flow fields and local wall shear rate was found between the original and miniaturized artery. The walls of the microfluidic chips were coated with human umbilical vein endothelial cells (HUVECs) which formed a confluent monolayer as confirmed by confocal fluorescence microscopy. The endothelialised microfluidic devices, with healthy and stenotic geometries, were perfused with human whole blood with fluorescently labeled platelets at physiologically relevant shear rates. After 15 minutes of perfusion the healthy geometries showed no sign of thrombosis, while the stenotic geometries did induce thrombosis at and downstream of the stenotic area.” The researchers 3D printed a mold of the blood vessel structure, and poured in a mixture of a crosslinking agent and PDMS, then cured the mold to make the vessel’s channel. 
Then they lined the channel with endothelial cells and perfused (passed) blood through it at normal arterial shear rates. This allowed them to set up the ideal circumstances to form a blood clot. Thanks to its geometry, the 3D printed model offers more clinical relevance to researchers than the usual in vitro models with square walls, because it “achieves an even distribution of shear stress across the vessel.” The paper reads, “Overall, the novel methodology reported here, overcomes important design limitations found in typical 2D wafer-based soft lithography microfabrication techniques and shows great potential for controlled studies of the role of 3D vessel geometries and blood flow patterns in arterial thrombosis.” The research team believes that patient CTA data could allow them to go even further in their work, and develop patient-specific, 3D printed blood vessel models. In addition, the technique they used to gain resolution and control of blood vessels may offer a better model alternative for diseases like vascular dementia, and could even be used to develop a “fully stratified approach” to the research of vascular diseases, which may mean that fewer animals would be used in these types of studies. Discuss in the 3D Printed Blood Vessels forum at 3DPB.com. [Source: Medical Physics Web / Images: researchers via Lab on a Chip]
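To get a feel for the "physiologically relevant shear rates" mentioned above, the sketch below computes the wall shear rate for steady Poiseuille flow in a cylindrical channel, gamma_w = 4Q/(pi R^3). The 400 micrometre diameter matches the printed vessels; the flow rate is an assumed example value, not one reported in the paper.

```python
import math

def wall_shear_rate(flow_ul_per_min, diameter_um):
    """Wall shear rate (1/s) for Poiseuille flow in a cylindrical channel."""
    q = flow_ul_per_min * 1e-9 / 60.0   # convert microlitres/min to m^3/s
    r = diameter_um * 1e-6 / 2.0        # radius in metres
    return 4.0 * q / (math.pi * r ** 3)

# An assumed 400 uL/min through a 400 um channel gives roughly 1000 1/s,
# in the range typically quoted for arterial flow.
print(round(wall_shear_rate(400, 400)))
```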
Use the password worksheets.site to open the PDF file. You can print these worksheets to help your kids learn how to write numbers up to 100. If you don't have a number-writing workbook, you can print as many copies as you have students. You can print them over and over again until your children have mastered the skills they need. This file contains several coloring worksheets with 100 Steps across the Little Red Riding Hood Forest. Help Little Red Riding Hood take the 100 steps needed to cross the forest. Use the first worksheet to trace each of the numbers shown as dotted outlines. On the next page, your children can do the same thing but going backwards, starting from 100 and reaching grandma's house at 0. After that, children should fill in the missing numbers, first the even numbers, then the odd numbers. Next, there is the set of worksheets to practice skip counting: Skip counting is an educational activity or game for kids to practice counting by multiples. It consists of counting by twos, threes, fours, fives, sixes, sevens, eights, or nines to 100 or more. Skip counting will increase your kid's confidence with number order and promote addition and subtraction fluency. It's also an important way to prepare kids for learning their times tables, as it lays a great foundation for number sense and learning the multiplication facts. We also recommend a more challenging skip-counting activity consisting of working through the Skip Counting Dot to Dots mystery puzzles. Do not burden your children with too much copywork or handwriting at one time. Keep practice sessions short, but require neat work from your child at each sitting. With young children, it is better to have them write a few neat numbers than to complete a whole page of badly written ones.
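For anyone preparing their own sheets, the skip-counting sequences described here are simple to generate; the snippet below is just an illustration of the pattern the worksheets drill.

```python
def skip_count(step, limit=100):
    """Return the multiples of `step` up to `limit`."""
    return list(range(step, limit + 1, step))

for step in (2, 3, 5, 10):
    print(f"counting by {step}s:", skip_count(step))
```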
Is it a “Panic Attack” or an “Anxiety Attack?” Basically, these two labels mean the same thing. A panic attack is a severe form of anxiety, so one could say it's an anxiety attack. However, the correct clinical term used to describe the symptoms is Panic Attack. A Panic Attack is a sudden and uncontrollable feeling of fear and anxiety. The Diagnostic and Statistical Manual of Mental Disorders (DSM-5, APA 2013) defines a Panic Attack as an abrupt surge of intense fear or intense discomfort that reaches a peak within minutes and during which time four (or more) emotional and physical symptoms are experienced. Note that the abrupt surge can occur from a calm state or an anxious state. Panic attacks can happen during specific feared situations, and they can happen randomly during periods of non-threatening, normal activities such as sitting and watching TV. Panic attacks are the body’s evolutionary response to a perceived external threat. The fight, flight, or freeze response is helpful if we encounter a wild animal poised to eat us. The adrenaline that begins flowing through our bodies aids in our escape and survival. When the external threat is defeated or evaded, the symptoms disappear. For most people today, however, there is no wild animal. It’s a false alarm that signals an internal threat or danger. It’s a misperception of danger. After experiencing a first panic attack, the fear becomes about experiencing another one. Fear of experiencing a panic attack may actually bring on a panic attack and accelerate the symptoms of one already in progress, also known as fear of fear.
Symptoms of a Panic Attack:
- Palpitations, pounding heart, or accelerated heart rate
- Trembling or shaking
- Sensations of shortness of breath or smothering
- Feelings of choking
- Chest pain or discomfort
- Nausea or abdominal distress
- Feeling dizzy, unsteady, light-headed, or faint
- Chills or heat sensations
- Numbness or tingling sensations
- Derealization (feelings of unreality) or depersonalization (being detached from oneself)
- Fear of losing control or “going crazy”
- Fear of dying
Anxiety is the physiological feeling of tension, nervousness, racing heart, sweating, and dread that can be triggered by worried thoughts and/or fear. Anxiety can be a feeling of apprehension about something in the future, and anxiety can be experienced in the moment during stressful events. All anxiety-related disorders fall under the umbrellas of Anxiety Disorders, Panic Disorder, Obsessive-Compulsive Disorders, and Trauma-Related Disorders in the Diagnostic and Statistical Manual of Mental Disorders (DSM-5, APA 2013). In the United States, mental health professionals use the DSM as a universal authority for psychiatric diagnoses. The DSM does not contain the term “Anxiety Attack.” It’s just referred to as “Anxiety”, and it is not the same as a Panic Attack.
Symptoms of Anxiety:
- Having difficulty controlling worry
- Sense of impending danger or doom
- Restlessness or feeling on edge
- Difficulty concentrating or mind going blank
- Breathing rapidly (hyperventilation)
- Increased heart rate
- Muscle tension or muscle soreness
- Frequent urination and/or diarrhea
Causes of Panic Attacks
Stress and Anxiety
Chronic stress adds up over time. If you don’t find a way to cope with the stress, your body will respond with anxiety. This leaves you vulnerable to panic attacks. A buildup of stress from life events such as losing a job, the death of a loved one, or divorce may cause symptoms of anxiety that could lead to a panic attack.
Our bodies experience a physiological response to these stresses that could lead to symptoms of a panic attack. Catastrophic thoughts turn on the body's fight-or-flight stress response, which could lead to panic attacks. Misinterpretation of bodily sensations is another common trigger for a panic attack. Two minutes after running up a flight of stairs, a pounding heart and shortness of breath are misinterpreted as an internal threat such as a heart attack or other cardiac event. This misinterpretation causes fear for survival, which turns on the fight-or-flight response. This increases the heart rate and physical discomfort further and confirms the person’s misinterpretation of physical danger. Behaviorally avoiding situations that are associated with panic attacks only temporarily relieves anxiety. Avoidance actually strengthens anxiety and panic. When having a panic attack you most likely have been running away from the situation, avoiding situations you associate with panic attacks, or becoming over-controlled when feeling anxious. Avoiding confirms and maintains the belief that the situation and/or the symptoms are dangerous. You might not have a panic attack by avoiding a particular place, but you have also reinforced that whatever you just avoided is dangerous.
Treatment for Panic Attacks
If you believe you could have Panic Disorder, it will be helpful and important to seek consultation with a mental health professional to first verify the diagnosis and then receive appropriate treatment. Treatment often consists of a combination of medication and psychotherapy. Medication helps take the edge off the physical symptoms of anxiety, and psychotherapy helps to challenge the irrational thinking and beliefs that lead to the panic attacks.
Cognitive Behavioral Therapy (CBT)
Cognitive Behavioral Therapy (CBT) is a well-researched and highly effective form of talk therapy that focuses on learning more helpful ways of thinking and behaving. You learn different ways of responding to the symptoms of a panic attack. Also known as prolonged exposure, exposure therapy is a form of CBT. As with most anxiety disorders, in order to learn how to overcome the symptoms of panic disorder, you need to have the experience of successfully managing the symptoms. This often means exposing yourself to the thoughts, images, and public places that trigger the panic attacks, and then applying the coping strategies until the thoughts, images, and public places no longer produce the same level of fear. Please consult with your primary care physician or a psychiatrist regarding the use of any medication. Medication can help reduce the symptoms of anxiety that occur in public situations you have been avoiding, such as school, work, and any other necessary public location. Commonly prescribed medications include benzodiazepines, anti-depressants, and beta blockers. Benzodiazepines are quick-acting sedatives that are generally safe and effective for short-term use. Anti-depressant medications are also effective at reducing symptoms of anxiety on a daily basis. This helps dampen the physical and emotional effects of anxiety and increases a person’s capacity to cope with stressful situations. Beta blockers are used to treat high blood pressure, heart arrhythmias, and migraines. They are also prescribed off-label by physicians to help reduce the physical symptoms of anxiety. Beta blockers have the ability to control rapid heartbeat, shaking, trembling, and blushing in response to a panic attack.
Beta blockers do not interfere with cognitive performance and are by far my favorite for removing the jitters before a speech.
What's the Difference Between a Panic Attack and Anxiety?
The symptoms are so similar that it can be difficult to tell the difference between a panic attack and anxiety. Here are some descriptions that can help:
- Panic attacks can occur without a trigger
- Anxiety is a response to a perceived stressor or threat
- Symptoms of a panic attack are intense and cause avoidance
- Panic attack symptoms involve a sense of things feeling unreal or detached from one's body
- Anxiety symptoms can go from mild to severe
- Panic attacks appear suddenly, while anxiety is more gradual
- Panic attacks usually subside after a few minutes, while anxiety symptoms can last for longer periods of time
"Panic Attack" and "Anxiety Attack" are often used interchangeably and refer to the same set of symptoms. However, the correct clinical term is Panic Attack. Anxiety can progress to panic attacks. We can become anxious about feeling anxious, and then have fear of feeling fearful. If a person experiences repeated panic attacks for more than one month, and the attacks are not due to some kind of phobia, then they can be diagnosed with Panic Disorder. Check out this article on Health.com for more information on "How to Tell the Difference Between a Panic Attack and an Anxiety Attack."
For more information: Please purchase my book "Attacking Panic: The Power to Be Calm" for more in-depth information on how to stop panic attacks quickly and how to treat the root cause (amygdala/sympathetic nervous system). The book shows you how to go beyond just giving up control and allowing yourself to experience a panic attack. The book has more powerful strategies that will short-circuit your fight-or-flight system, stop a panic attack very quickly, and even prevent a panic attack from occurring.
Plot, order and compare fractional numbers with the same numerator or the same denominator.
Clarification 1: Instruction includes making connections between using a ruler and plotting and ordering fractions on a number line.
Clarification 2: When comparing fractions, instruction includes an appropriately scaled number line and using reasoning about their size.
Clarification 3: Fractions include fractions greater than one, including mixed numbers, with denominators limited to 2, 3, 4, 5, 6, 8, 10 and 12.
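A small example of the comparisons the benchmark describes (the specific fractions below are my own choices): with a common denominator the larger numerator wins, while with a common numerator the larger denominator loses.

```python
from fractions import Fraction

same_denominator = [Fraction(7, 6), Fraction(1, 6), Fraction(5, 6)]
same_numerator = [Fraction(2, 3), Fraction(2, 12), Fraction(2, 5)]

print(sorted(same_denominator))  # 1/6 < 5/6 < 7/6: larger numerator -> larger fraction
print(sorted(same_numerator))    # 2/12 < 2/5 < 2/3: larger denominator -> smaller fraction

# positions for plotting on a 0-2 number line, including a fraction greater than one
for f in [Fraction(1, 2), Fraction(5, 4), Fraction(3, 2)]:
    print(f, "sits at", float(f), "on the number line")
```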
At the moment, industry is lacking robust sensors that can withstand extremely high temperatures and pressures. Eight Fraunhofer Institutes have now developed a technology platform for building this type of sensor system as part of the "eHarsh" lighthouse project. These are even capable of monitoring the insides of turbines and deep boreholes for geothermal systems. They sense disruptive vibrations, issue warnings when a machine is running hot and are able to identify damaged components on a production line. Sensors play a key role in today's production processes. Complete production lines are managed using reliable sensing devices and artificial eyes. However, it has not yet been possible to deploy these watchful assistants in every area of industry: Conventional sensors do not last long in environments that are classified as extremely harsh. These include the insides of power plants or aircraft turbines and boreholes in the ground, where temperatures and pressures are high. Sensors are also damaged by aggressive gases and liquids, or dust. To address this problem, eight Fraunhofer Institutes have joined forces in the "eHarsh" project to develop the first highly robust sensors for extremely harsh environments. "We have a lot of in-depth knowledge within the individual institutes," says eHarsh Coordinator Holger Kappert from the Fraunhofer Institute for Microelectronic Circuits and Systems IMS. "We know a lot about heat-resistant ceramics and we have the ability to test material properties and produce robust microelectronic circuits. On our own, though, none of us were capable of creating this type of sensor. It was only through cooperation and the combination of many individual technologies that we were able to succeed." Signal processing right on site The team first focused on applications with high temperatures and pressures—the aforementioned turbines and boreholes. The aim was not just to incorporate robust pressure and thermal elements into the turbines and boreholes, but also to include the electronic components to evaluate the measurements. "The advantage of having the electronic components on site and of having signal processing take place in the sensor itself is that it improves the quality of the sensor signals," says Holger Kappert. "It also means we can network the sensors better in the future, saving on cabling effort." This would be particularly useful in aircraft engines because it would reduce their weight. These engines are complex. Air flows, voltages and electrical power need to be carefully controlled depending on the flight maneuver. Using small, robust sensors right inside the engine, the engine's status could be measured and the combustion process controlled with much greater precision in the future so that fuel can be used more efficiently, for example. The sensor casing is made from metal and the sensor elements from ceramic that can resist temperatures of up to 500 degrees Celsius. The internal electronics can withstand around 300 degrees Celsius. One challenge was to combine the different components so they would not come apart even when repeatedly heated and cooled, despite being made from materials that expand and contract at different rates. Among the materials used were heat-resistant ceramic circuit boards and conductors with a tungsten admixture that is also used for the filament in light bulbs. 
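The scale of that thermal-mismatch problem can be illustrated with a simple calculation. The expansion coefficients below are typical textbook values for steel and an alumina-type ceramic, assumed for illustration rather than taken from the eHarsh materials.

```python
alpha_metal_per_K = 12e-6     # typical steel casing, approx.
alpha_ceramic_per_K = 7e-6    # typical alumina-type ceramic, approx.
delta_T_K = 480               # roughly room temperature up to 500 deg C
joint_length_mm = 20          # assumed length of the bonded joint

mismatch_strain = (alpha_metal_per_K - alpha_ceramic_per_K) * delta_T_K
print(f"{mismatch_strain:.2%} differential strain")              # ~0.24 %
print(mismatch_strain * joint_length_mm * 1000, "um to absorb")  # ~48 um over 20 mm
```

Under these assumptions a joint has to absorb tens of micrometres of relative movement on every heating and cooling cycle, which is why bonding such dissimilar materials was one of the central challenges.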
Sensors for geothermal systems The sensors are not only heat resistant but can also withstand pressures of up to 200 bar—almost a hundred times the pressure in a car tire. One possible future use for these sensors is in pumps for geothermal systems. In geothermal systems, buildings are heated with hot water from the earth. The pumps are situated deep down in the borehole and need to be able to withstand both the heat and the pressures at that depth. These new sensors make it possible to monitor the pumps easily and permanently. These enhanced possibilities can also help machine manufacturers to test the service life of their sensors. These tests subject components to high pressures or temperatures so that they age more quickly, which makes it possible to determine the service life of a product within a manageable time frame. If the sensors are able to function in more extreme conditions, it will be possible to run the tests with higher load. This will significantly reduce testing time. "Overall, the interdisciplinary nature of eHarsh has allowed us to successfully develop a technology platform for robust sensor systems for many different uses," says Holger Kappert. Provided by Fraunhofer-Gesellschaft
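The "higher load, shorter test" logic mentioned in the service-life testing above is commonly quantified with an Arrhenius-type acceleration factor. The sketch below is a generic illustration with assumed values (activation energy and temperatures), not figures from the eHarsh project.

```python
import math

def acceleration_factor(t_use_c, t_test_c, activation_energy_ev=0.7):
    """Arrhenius acceleration factor between a use and a test temperature."""
    k_ev_per_K = 8.617e-5               # Boltzmann constant in eV/K
    t_use = t_use_c + 273.15
    t_test = t_test_c + 273.15
    return math.exp(activation_energy_ev / k_ev_per_K * (1 / t_use - 1 / t_test))

# Assumed example: a part normally used at 150 deg C, stress-tested at 300 deg C
af = acceleration_factor(150, 300)
print(round(af))   # each test hour then covers on the order of 150 hours of use
```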