Economists, mathematicians, lawyers, scientists, and even high school students will join together to fight global warming under the new Center for Robust Decision Making on Climate and Energy Policy (RDCEP), a joint effort by nine institutions, including the U of C’s Computation Institute, founded to address energy and climate change issues.
A five-year, $6 million grant from the National Science Foundation (NSF) will fund the new Center, which will approach problems of energy and climate change policy from various disciplines, ranging from economics to computational mathematics to law.
“There is no more important problem facing humanity today than meeting rapidly expanding energy needs without damaging the environment,” Ian Foster, computer science professor and future director of the Center, said in a February 17 press release.
The collaborative Center will bring together researchers from the nine institutions, including Argonne National Laboratory, and aims to serve the government, the public sector, and private individuals.
The MacArthur Foundation contacted Foster in 2007 to develop the premise for a center that would integrate multiple disciplines. Using the $350,000 grant from the Foundation, the team assembled at the University spent a year developing a strong proposal for the NSF’s Decision Making Under Uncertainty program competition (DMUU).
The “open science philosophy” of the Community Integrated Model of Economic and Resource Trajectories for Humankind (CIM-EARTH) will allow scholars and policy makers outside of the Center to access the resources and information produced by the Center in an effort to encourage more collaboration.
As part of the NSF’s proposal requirement to integrate outreach programs, the Center will bring undergraduate, graduate, and even high school students into dialogue at the Center.
In a program that will launch in the summer of 2012, the Center is currently seeking undergraduate students with a strong background in teaching sciences to work with high school students as well as researchers.
According to Moyer, the University will work specifically with students from the Woodlawn Community School and the Lindblom Math and Science Academy to teach students how to “answer real world questions” as well as “get [students] comfortable with a college campus.”
“We want the Center in general to be a place both for faculty research and education, to be a place where people from disparate fields, physics, computer science, economics, geophysical sciences, and policy can come together,” Moyer said.
IJGI 2012, 1(2), 209-227; doi:10.3390/ijgi1020209
Published: 12 September 2012
Abstract: In Brazil, plantations of exotic species such as Eucalyptus have expanded substantially in recent years, due in large part to the great demand for cellulose and wood. The combination of the steep slopes in some of these regions, such as the municipalities located close to the Serra do Mar and Serra da Mantiqueira, and the soil exposure that occurs in some stages of the Eucalyptus cultivation cycle, can cause landslides. The use of a geographic information system (GIS) assists with the identification of areas that are susceptible to landslides, and one of the GIS tools used is the spatial inference technique. In this work, the landslide susceptibility of areas occupied by Eucalyptus plantations in different stages of development in municipalities in the state of São Paulo was examined. Eight thematic maps were used, and the fuzzy gamma technique was used for data integration and the generation of susceptibility maps, in which scenarios were created with different gamma values for the dry and rainy seasons. The results for areas planted with Eucalyptus were compared with those obtained for other land uses and covers. In the moderate and high susceptibility classes, pasture is the land use type that presented the greatest susceptibility, followed by new Eucalyptus plantations and urban areas.
Brazil has more than 6 million hectares of forest planted with species from the genera Pinus and Eucalyptus. The primary purpose of these plantations is the production of cellulose pulp. Forests of the genus Eucalyptus are present in different regions of the country at sites with different topography and rainfall. In the state of São Paulo, Brazil, these plantations are concentrated in the Ribeirão Preto, Botucatu, Vale do Paraíba and São Paulo regions, which, with the exception of Ribeirão Preto, are located close to the Serra do Mar and/or Serra da Mantiqueira.
The Serra do Mar and Serra da Mantiqueira are characterized tectonically by a fault block terrain, and their lithology is made up of crystalline and metamorphic rocks, such as gneiss and granite, associated with heavily decomposed intrusive rocks. These characteristics, along with the rainfall regime of the region, with an average annual rainfall of 1,200 mm, can result in large landslides. The disasters that occurred in Caraguatatuba (March 1967, 200 lives lost), Cubatão (February 1994, flooding of the RPBC refinery and interruption of petroleum production, with losses of 40 million dollars, according to Gramani) and Campos do Jordão (January 2000, with the destruction of many houses) are examples.
Vegetation cover can control and prevent natural disasters caused by mass movement such as landslides on the hillsides of mountainous areas. On the other hand, improper soil management linked with natural constraints accelerates the degradation process. Intense and concentrated rainfall, steep hillsides unprotected by vegetation, illegal settlements on steep hillsides and lithologic and pedogenic discontinuities are some of the conditions that can accelerate erosion processes and, consequently, mass movements [4,5,6,7].
Eucalyptus forests pass through different developmental stages, from deployment to harvest, which are characterized by different percentages of soil coverage and leaf biomass. During the harvesting period, the erosion rate and landslide frequency increase. Compared with a preserved forest area, the rate of erosion in a harvested area can increase by a factor of as much as four. There is concern about the increasing area of Eucalyptus plantations in places with steep hillsides in the state of São Paulo, as there are no specific studies on the impact of reforestation on mass movement processes.
Parise highlights four types of landslide maps: inventory maps, maps of current landslide movements, susceptibility maps and vulnerability maps. Geographic information systems (GIS) are an important analysis tool, allowing the mapping of areas that are susceptible to landslides using different modeling methods. Two classes of such methods appear in the literature: direct and indirect. The direct methods are based on a detailed geomorphological map, and the different degrees of susceptibility are mapped with the aid of field surveys. The primary disadvantage of these methods is the time the mapping takes. The indirect methods are based on the mapping of places where landslides have already occurred in the past and the mapping of geological and geomorphological characteristics that are directly or indirectly related to hillside stability.
Approaches to mapping can also be classified as either quantitative or qualitative. A quantitative approach uses mathematical tools to estimate susceptibility and includes multivariate statistical methods, discriminant analysis, linear regression and nonlinear methods, such as neural networks [6,13,14]. Qualitative methods are based on the previous experience of an individual or a group of people and are therefore more subjective. Some examples are the WLC (Weighted Linear Combination) and AHP (Analytic Hierarchy Process) methods. Some techniques are classified as semi-quantitative, such as fuzzy logic [16,17,18,19,20,21].
Due to the uncertainties in the parameters used in the evaluation of landslides and the non-linearity that characterizes this phenomenon, fuzzy logic is considered an effective approach to mapping landslides: it incorporates expert knowledge into the spatial inference technique and results in maps that are easy to understand, making it well suited to the analysis of large areas. In Brazil, the most common mass movements are shallow translational landslides induced by rainfall, and this is the type of movement addressed in this work. Translational movement is the most common form of mass movement, showing a plane-like rupture surface that generally follows mechanical and/or hydrological discontinuities within the material. In Brazil, there are no ongoing studies evaluating the scars caused by mass movements for the purpose of map validation. However, susceptibility maps are of extreme importance because they are the basis for the generation of landslide hazard maps.
The present study aimed to map the areas that are susceptible to landslides in places occupied by Eucalyptus plantations in different developmental stages in the state of São Paulo using geographic information systems (GIS) and spatial inference techniques, and to compare these areas with the areas occupied by other land uses and land covers.
2. Materials and Methods
The state of São Paulo, located between 53°10′28″ and 43°52′0″W and 19°40′41″ and 25°07′42″S, has an area of 248,209 km2. For the present study, the municipalities of the state that contained areas with Eucalyptus plantations in terrain with undulating topography were selected (Figure 1).
The western part of the state of São Paulo is located on a 600 km long plateau. The Serra do Mar, with abrupt scarps, is located between this plateau and the coastal plain. The Serra da Mantiqueira, also showing scarps in some places, is located in the northeastern part of the state, on the border with the state of Minas Gerais (Figure 2). The climate is varied: tropical in the northern region, tropical with altitude near the Serra do Mar and Serra da Mantiqueira and sub-tropical in the south. The average annual temperature is approximately 20 °C, and the average rainfall is 1,500 mm/year.
2.1. Data Collection
Initially, a database incorporating all of the information relevant to the study was created, including geological, geomorphological, soil, topographic and climatic data as well as the satellite images used for mapping areas with Eucalyptus plantations. The data were collected for the entire state and, subsequently, only the selected areas were evaluated.
SPRING (Geo-Referenced Information Processing) software was used for this study because it offers a database option, a series of image processing functions, thematic data manipulation, numerical terrain modeling, storage and retrieval of spatial data with attribute tables, modeling and the use of networks and spatial analyses. SPRING, developed by the National Institute of Space Research, is public domain and can be acquired free of charge at www.dpi.inpe.br/spring.
Geological data at a 1:750,000 scale were obtained from the Brazilian Geological Survey, available on the website of the CPRM (Research and Mineral Resources Company: www.cprm.gov.br). A geomorphological map, at a 1:1,000,000 scale, was acquired from the IPT (Technology Research Institute) . A soil map at a 1:500,000 scale was acquired from the IAC (Agronomic Institute of Campinas) . Topographical data were obtained from the Topodata website (www.dsr.inpe.br/topodata), where the original data from the SRTM (Shuttle Radar Topography Mission) were processed within the scope of Topodata for the derivation of geomorphometrical variables, including slope (zenith angle) and vertical and horizontal curvatures.
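As an aside on how such geomorphometric variables are derived, the sketch below computes the slope (zenith angle) from a gridded DEM with simple finite differences in Python; it is an illustrative stand-in for the Topodata processing, not the actual algorithm used, and the cell size and elevation values are hypothetical.

```python
import numpy as np

def slope_degrees(dem, cell_size_m):
    """Slope (zenith angle, in degrees) from a gridded DEM using
    central finite differences; the vertical and horizontal curvatures
    would come from second derivatives of the same surface."""
    dz_dy, dz_dx = np.gradient(dem, cell_size_m)  # elevation gradients
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Hypothetical 3x3 DEM (meters) on a 30 m grid, similar to SRTM spacing
dem = np.array([[800.0, 805.0, 812.0],
                [790.0, 798.0, 806.0],
                [785.0, 791.0, 800.0]])
print(slope_degrees(dem, 30.0))
```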
Historical climate data were obtained from the IAC (www.ciiagro.sp.gov.br) and INMET (National Institute of Meteorology) , which together have represented 103 sites in the state of São Paulo for a period of approximately 25 years. The monthly average rainfall data from all sites were entered into the database and, subsequently, two distinct periods were defined: a rainy season (December, January and February) and a dry season (June, July and August), and the average rainfall for each period was calculated. The individual data were spatialized for the whole state by means of interpolation (weighted average) and then were sliced into classes (every 50 mm), thereby generating two thematic maps: one for the rainy season and the other for the dry season.
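The interpolation-and-slicing step can be illustrated with a short Python sketch. The text says only "weighted average," so inverse-distance weighting is assumed here, and the station coordinates and seasonal rainfall totals below are hypothetical.

```python
import numpy as np

def idw_interpolate(stations, values, grid_x, grid_y, power=2.0):
    """Inverse-distance-weighted average of station rainfall onto a
    regular grid, one plausible reading of the paper's method."""
    xs, ys = stations[:, 0], stations[:, 1]
    gx, gy = np.meshgrid(grid_x, grid_y)
    # Distance from every grid cell to every station: (ny, nx, n_stations)
    d = np.sqrt((gx[..., None] - xs) ** 2 + (gy[..., None] - ys) ** 2)
    d = np.maximum(d, 1e-6)           # avoid division by zero at stations
    w = 1.0 / d ** power
    return (w * values).sum(axis=-1) / w.sum(axis=-1)

def slice_into_classes(grid, step=50.0):
    """Slice the interpolated rainfall into 50 mm classes."""
    return np.floor(grid / step).astype(int)

# Hypothetical example: 3 stations, seasonal average rainfall in mm
stations = np.array([[0.0, 0.0], [100.0, 0.0], [50.0, 80.0]])
rain_mm = np.array([620.0, 540.0, 710.0])
grid = idw_interpolate(stations, rain_mm,
                       np.linspace(0, 100, 50), np.linspace(0, 80, 40))
classes = slice_into_classes(grid)
```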
To identify the different developmental stages of Eucalyptus plantations, Landsat TM 5 satellite images for three years (2006, 2007 and 2008, one image per year) were used (Table 1). The images were geo-referenced using the Geocover images available on the INPE Image Processing Division website (www.dpi.inpe.br/geocover) as the basis and were then inserted into the database. The average image registration error was less than 1 pixel, that is, less than 30 m.
|Table 1. Path/row and dates of passage of the Landsat/TM5 satellite (one image per year: 2006, 2007 and 2008).|

|Path/Row|2006|2007|2008|
|218/76|21 July|25 August|12 September|
|219/76|14 September|16 August|18 August|
|220/76|9 May|20 August|10 September|
|220/77|5 September|20 June|25 August|
|221/77|12 September|29 July|28 May|
In Brazil, the majority of Eucalyptus plantations are made from clones that have been genetically improved for the climate and edaphic conditions of the planting location. The Eucalyptus cultivation cycle lasts an average of 7 years, and 3 rotations are possible with the same clone. The year 2008 was adopted as the mapping basis, whereas the other two years (2006 and 2007) were used for a temporal study of the plantations. Thus, it was possible to identify three development stages: adult Eucalyptus, young Eucalyptus and new Eucalyptus and/or exposed soil (Figure 3). These three stages were selected due to the limitations of target identification via remote sensing. The map was generated through automatic image classification, and manual editing was used to correct some areas with classification mistakes. The procedure adopted for this study was that of Kronka et al., in which areas of reforestation throughout the state of São Paulo were mapped using satellite images. The areas occupied by Eucalyptus plantations in the 16 selected municipalities are presented in Table 2, as well as their percentages of the municipalities’ areas.
The presence or absence of vegetation is one of the factors that define the stability of slopes. In general, the less vegetation cover that is present on a slope, the greater its susceptibility to mass movements. Vegetation protects the soil from factors that can accelerate landslides by intercepting rainwater, reducing its kinetic energy and promoting the infiltration of water into the soil. Soil without vegetation becomes more susceptible to compaction due to the impact of raindrops and the consequent increase in runoff, which also leads to erosion. The volume of material removed and transported by rainwater is related to the density of vegetation cover and the steepness of the hillside, so that with the removal of vegetation, these processes become more intense, especially in locations with steeper slopes.
|Table 2. Quantification of areas with Eucalyptus in the studied municipalities.|

|Municipality|Area Occupied by Eucalyptus (ha)|% of Municipality Area|
|Mogi das Cruzes|6,841|9.36|
|Redenção da Serra|3,393|10.92|
|São Luís do Paraitinga|5,988|9.55|
In two of the successional stages of Eucalyptus, the soil remains vulnerable: in the early stage of the plantations, because the soil is still largely exposed, and in the stage called “young” (approximately 2 to 3 years of age), because the large quantity of leaf biomass completely blocks sunlight and prevents an understory from developing. In the adult phase of Eucalyptus, the quantity of leaf biomass decreases, allowing sunlight to reach the ground and the understory to develop, which reduces the soil’s vulnerability.
For the other land uses and land cover classes present in the state, the map generated by Vieira et al. was used. This map included other projects developed at INPE (National Institute for Space Research) such as the CANASAT (www.dsr.inpe.br/mapdsr) for sugarcane mapping purposes (in this study, the sugarcane was reclassified as agriculture) and SOS Mata Atlântica (www.sosmatatlantica.org.br), where all of the remnants of natural forest of the Mata Atlantica were mapped. Therefore, the final map of the state of São Paulo includes the following classes: forest, pasture, urban area, agriculture, adult Eucalyptus, young Eucalyptus and new Eucalyptus and/or exposed soil.
Due to the different scales associated with the various datasets considered in this work, the results refer to a scale compatible with the smallest scale of the input data, i.e., 1:1,000,000, which is equivalent to a 500 m pixel (0.5 mm of graphical accuracy at that scale corresponds to 500 m on the ground).
2.2. Generation of Weighted Maps
Before producing the susceptibility maps, the thematic maps related to landslide susceptibility must be weighted. The weights vary from 0 to 1, where 0 indicates classes with no relationship to landslide occurrence and 1 indicates classes strongly related to landslides. This weighting transforms the thematic maps into numerical grids, in which each map class receives a weight (from 0 to 1). Table 3 displays the susceptibility values for all classes present in the different themes addressed in this study.
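A minimal Python sketch of this reclassification step: each categorical cell is looked up in a weight table (here the slope weights from Table 3). The array layout and class labels are illustrative, not the study's actual data structures.

```python
import numpy as np

# Slope-class weights taken from Table 3
SLOPE_WEIGHTS = {
    "plane": 0.2,              # 0 to 3 degrees
    "smooth_undulation": 0.3,  # 3 to 8 degrees
    "undulation": 0.5,         # 8 to 20 degrees
    "heavy_undulation": 0.8,   # 20 to 45 degrees
}

def weight_map(thematic, weights):
    """Turn a categorical thematic map (2-D array of class labels)
    into a numerical grid of membership weights in [0, 1]."""
    return np.vectorize(lambda c: weights[c])(thematic)

slope_classes = np.array([["plane", "undulation"],
                          ["heavy_undulation", "smooth_undulation"]])
print(weight_map(slope_classes, SLOPE_WEIGHTS))
# [[0.2 0.5]
#  [0.8 0.3]]
```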
For the geological data, the work of Crepani et al., which evaluated the relationships of different rock types with landslides, was considered as the basis for the weighting. Igneous rocks had the lowest landslide probabilities, metamorphic rocks were intermediate, and sedimentary rocks had the lowest resistance to weathering, i.e., the greatest landslide probability.
The geomorphological units present in the study area were defined by Ponçano et al., and the assigned weights were based on the terrain, dissection and slope shapes present in each geomorphological class. For the different soil types, the weights were based on the premise that soils with a higher amount of sand tend to be more susceptible than soils with more clay. These weights were also based on the study of Crepani et al.
The topography was addressed through horizontal and vertical curvatures and the slope. The horizontal curvature refers to the divergent/convergent character of flows of matter on the ground when analyzed on a horizontal projection (Figure 4). This curvature is related to the processes of migration and accumulation of water, minerals and organic matter in soil caused by gravity, and plays an important role in the resulting water balance and pedogenesis process.
Concave areas are more susceptible to landslides than convex areas, receiving the highest weights in the susceptibility table. Terrain with convergent profiles presents a higher risk of sliding incidents than terrain with divergent profiles, thus receiving higher susceptibility weights (Table 3). The slope map was divided into 5 classes in accordance with those suggested by Binda and Bertotti and Kanungo et al., with weights attributed to each slope class.
|Table 3. Collected data and their weights in relation to landslide susceptibility.|

Geology:

|Lithology|Weight|
|Rhyolite, granite|0.37|
|Granodiorite, quartz|0.40|
|Migmatite, gneiss|0.43|
|Phonolite, syenite|0.47|
|Mylonites, quartz, muscovite, biotite|0.57|
|Conglomerate|0.83|
|Siltstones, mudstones|0.90|
|Shales|0.93|
|Limestone, dolomite|0.97|

Soils:

|Soil Type|Code|Weight|
|Melanic Gleisol|GM|1.0|
|Red Latosol|LV|0.4|
|Red Nitosol|NV|0.7|
|Mesic Organosol|OU|1.0|
|Lithic Neosol|RL|1.0|
|Quartzarenic Neosol|RQ|1.0|

Slope:

|Slope|Class|Weight|
|20 to 45°|Heavy undulation|0.8|
|8 to 20°|Undulation|0.5|
|3 to 8°|Smooth undulation|0.3|
|0 to 3°|Plane|0.2|

Land use and land cover:

|Class|Weight|
|New Eucalyptus/soil exposed|1.0|

Geomorphology:

|Relief Group|Unit|Weight|
|Reliefs of aggradation/continental|Flood plain|0.10|
|Relief of degradation in dissected plateaus/hill relief|Tabular landforms|0.31|
||Small hills with local ridges|0.34|
||Parallel small hills|0.35|
||Small isolated hills|0.33|
|Relief of degradation in dissected plateaus/relief of hills with smoothed hillsides|Elongated hills|0.42|
|Relief of degradation in dissected plateaus/small hill relief|Low small hills|0.51|
||Elongated parallel small hills|0.54|
||Elongated small hills and ridges|0.53|
|Relief of degradation in dissected plateaus/hill relief|Side slopes|0.64|
||Hills with restricted mountains|0.66|
|Relief of degradation in dissected plateaus/mountainous relief|Elongated mountains|0.71|
||Mountains with deep valleys|0.73|
|Residual relief supported by individual lithologies/sustained by massive basaltic plateaus|Basaltic tables|0.81|
||Scarp with ridges|1.00|
The volume of material removed and transported by rainwater is related to the density of vegetation cover and the slope declivity, and with vegetation removal, these processes become more intense, especially in areas with steep slopes. The weights assigned to each land use class depend on the type of vegetation coverage. The young stage of Eucalyptus plantations, approximately 3 to 4 years old, has a large amount of leaf biomass, but the soil is still susceptible due to the absence of an understory resulting from the complete blocking of sunlight. However, in the adult stage, the amount of leaf biomass decreases, allowing sunlight to reach the soil and the understory to develop, which decreases the soil’s susceptibility.
The climatic thematic maps for the rainy and dry seasons were weighted using the criteria of Crepani et al. The landslide hazard increases substantially during the rainy season because rain is both an erosive agent and a landslide trigger. Thus, the greater the rainfall intensity, the higher the weight.
2.3. Generation of Susceptibility Maps
The eight themes (geology, land use and land cover, geomorphology, soil, slope, vertical curvature, horizontal curvature and rainfall intensity) were combined to generate a final susceptibility map using the fuzzy gamma operator.
The fuzzy operator was introduced by Zadeh and allows a more realistic treatment of the imprecise and subjective data that are part of analyses of physical environments. Fuzzy logic is able to model real problems in which uncertainties and inaccuracies are present.
Fuzzy sets admit partial membership and are defined mathematically as follows: if Z denotes an object space, then the fuzzy set A in Z is the set of ordered pairs given by Equation (1):

A = {(z, μ_A(z)) : z ∈ Z} (1)
The membership function μ_A(z) is known as “the degree of membership of z in A”. The fuzzy membership value must lie in the range from 0 to 1 and reflects the degree of certainty of membership. Fuzzy theory employs membership functions to express the degree of membership with respect to some attribute, in this case landslide susceptibility.
The fuzzy gamma operator combines the fuzzy algebraic sum and the fuzzy algebraic product, each raised to a power that depends on γ. Equation (2) represents this operator:

μ_γ = (1 − ∏(1 − μ_i))^γ × (∏ μ_i)^(1−γ) (2)
where γ is a parameter in the range [0,1] and the products are taken over the n input membership values μ_i. The first term of the equation is the fuzzy algebraic sum and the second term is the fuzzy algebraic product. When γ = 0, the fuzzy combination is equal to the product, and when γ = 1, it is equal to the sum.
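A short Python sketch of Equation (2), applied cell by cell to a stack of weighted maps; the two 2 × 2 input arrays are illustrative values, not data from the study.

```python
import numpy as np

def fuzzy_gamma(membership_maps, gamma):
    """Fuzzy gamma operator of Equation (2): the fuzzy algebraic sum
    raised to gamma, times the fuzzy algebraic product raised to
    (1 - gamma), evaluated cell by cell across the weighted maps."""
    layers = np.stack(membership_maps)             # (n_maps, rows, cols)
    fuzzy_sum = 1.0 - np.prod(1.0 - layers, axis=0)
    fuzzy_product = np.prod(layers, axis=0)
    return fuzzy_sum ** gamma * fuzzy_product ** (1.0 - gamma)

# Two illustrative weighted maps (e.g., slope and rainfall weights)
slope_w = np.array([[0.2, 0.5], [0.8, 0.3]])
rain_w = np.array([[0.6, 0.6], [0.9, 0.4]])
for gamma in (0.7, 0.8):       # the two scenarios used in the study
    print(gamma, fuzzy_gamma([slope_w, rain_w], gamma))
```

Setting gamma to 0 or 1 reproduces the limiting cases described above (pure product and pure sum, respectively), which is a quick sanity check on the implementation.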
For Bonham-Carter, values in the range from 0 to 0.35 have a “diminutive” character, i.e., the output is always less than or equal to the smallest input fuzzy membership; values in the range from 0.8 to 1.0 have an “increasing” character, in which the output is equal to or greater than the largest input fuzzy membership; and the range from 0.35 to 0.8 has neither an “increasing” nor a “diminutive” character.
Susceptibility maps were generated with gamma values equal to 0.7 and 0.8 for each season (rainy and dry). These input values have neither a diminutive nor an increasing character and were used in the works of Lee, Pradhan et al. and Pradhan. After the maps were generated, they were divided into susceptibility classes, as shown in Table 4.
Table 4. Ranges adopted in the slicing process.
Susceptibility maps were intersected with the land use map so that areas of Eucalyptus could be evaluated and compared with other uses. For comparison, uncertainty maps were generated for both the dry season and the rainy season, highlighting the areas that kept the same class in the two maps (gamma values of 0.7 and 0.8) and the areas that switched classes between maps, generating areas of uncertainty, in accordance with the suggestion of Meirelles et al.
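The sketch below illustrates this comparison in Python: both gamma maps are sliced into ordinal classes, and cells that change class are flagged as uncertain. The class boundaries are placeholders, since the actual ranges of Table 4 are not given above.

```python
import numpy as np

# Placeholder class boundaries; the actual ranges of Table 4
# did not survive extraction.
BOUNDS = (0.2, 0.4, 0.6, 0.8)

def slice_classes(susceptibility, bounds=BOUNDS):
    """Slice a continuous susceptibility map into ordinal classes
    (0 = very low ... 4 = very high)."""
    return np.digitize(susceptibility, bounds)

def stable_class_map(map_g07, map_g08, bounds=BOUNDS):
    """True where the gamma = 0.7 and gamma = 0.8 maps agree on the
    class; False cells are the 'areas of uncertainty'."""
    return slice_classes(map_g07, bounds) == slice_classes(map_g08, bounds)

g07 = np.array([[0.15, 0.45], [0.70, 0.30]])   # gamma = 0.7 result
g08 = np.array([[0.25, 0.50], [0.85, 0.35]])   # gamma = 0.8 result
print(stable_class_map(g07, g08))
# [[False  True]
#  [False  True]]
```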
3. Results and Discussion
The land use and land cover map of the studied municipalities is presented below (Figure 5). Approximately 33.3% of the total area is occupied by agriculture (4,266 km2), 28.2% by pasture (3,608 km2), 19.9% by forest (2,554 km2), 7% by adult Eucalyptus (897 km2), 4.4% by young Eucalyptus (568 km2), 3.3% by new Eucalyptus and/or exposed soil (423 km2), 2% by surface water (257 km2) and 1.6% by urban areas (212 km2). These results show that almost 50% of the studied municipalities have some type of cropland (including reforestation of Eucalyptus for commercial purposes).
The maps generated for the dry period for gamma values equal to 0.7 and 0.8 are presented in Figure 6. In the map generated with a gamma value equal to 0.7, the results show that 34% and 60% of the area was classified with very low and low susceptibility, respectively, whereas 5% of the study area had moderate susceptibility. In the map generated with a gamma value equal to 0.8, only 5% of the area was classified with very low susceptibility and 55% and 42% were classified as low and moderate susceptibility, respectively. The dry period, with lower values of precipitation, tends to exhibit a lower susceptibility to landslides.
An uncertainty map was also prepared for the dry period and, by intersecting this map with the map of land use, it was possible to observe the distribution of susceptibility classes within each land use class (Figure 7). The classes that require the greatest attention are those with moderate, high and very high susceptibility. The largest areas in the moderate susceptibility class are associated with pasture and agriculture, corresponding to 1,240 and 990 km2, respectively, followed by the forest class, with an area equal to 745 km2. For the high susceptibility class, the largest areas are associated with pasture (30 km2), Eucalyptus (8 km2) and agriculture (5 km2).
The areas covered by pasture and agriculture are examples of land uses that include different phenological stages that depend on the hydrological regime. Pasture loses part of its green biomass in the dry season, leaving the soil more susceptible. Areas covered by agriculture, especially those cultivated with species that have annual cycles, undergo a harvesting period, a phase that also increases the susceptibility of the soil.
It is notable that the “forest” land use class is grouped with the other classes (pasture and agriculture) due to the location of forest remnants in the state of São Paulo, which are almost entirely located in the Serra do Mar where there is mountainous terrain. Although this land use class is associated with a low susceptibility weight, the region may have received a higher final weight due to other factors, such as slope and geology.
Although the uncertainty between the maps generated with the two gamma values (0.7 and 0.8) is high, the maps are useful for identifying which uses and land covers are contributing to increased landslide susceptibility.
Similarly, maps for the rainy season were also generated, considering gamma values equal to 0.7 and 0.8 (Figure 8).
In the map generated with a gamma value equal to 0.7 (0.8), 23% (0.19%) and 62% (42%) of the study area were classified as having very low and low susceptibility, respectively, whereas 14% (52%) of the area was classified as having moderate susceptibility and 0.2% (5%) as high susceptibility. Therefore, it appears that a gamma value equal to 0.8 increases the areas susceptible to landslides when compared with those observed in the maps generated for the dry period.
An uncertainty map was also generated for the rainy season, and the distribution of susceptibility classes in each land use class is shown in Figure 9. Again, the pasture, agriculture and forest land use classes had the largest areas in the moderate susceptibility class: 1,574, 1,407 and 931 km2, respectively. For the high susceptibility class, the same three land uses, pasture (144 km2), agriculture (47 km2) and forest (32 km2), again had the largest areas. To compare the effects of different land uses across the generated maps (dry and rainy seasons, gamma equal to 0.7 and 0.8), the relative areas of susceptibility occupied by each land use were calculated, and the results are presented in Figure 10.
With an increase in the gamma value from 0.7 to 0.8, there is a general increase in the susceptibility of all land use classes and in both periods (dry and rainy). Particularly in the areas occupied by Eucalyptus plantations, it was notable that areas with new Eucalyptus have the highest percentages of susceptibility because this stage of cultivation leaves the soil exposed and therefore more susceptible.
In the moderate and high susceptibility classes, the pasture is the land use type that presented the greatest susceptibility, followed by new Eucalyptus and urban areas.
The areas occupied by Eucalyptus in the evaluated sites are in mountainous terrain, with 22% in the smooth undulation slope class, 42% in the undulation class and 28% in the heavy undulation class, which can lead to increased susceptibility.
Ternan et al. concluded that sites with good ground cover and tree establishment had approximately 3-fold lower soil losses than sites with a degraded understory (in this case, Pinus forest). Their study concluded that reforestation should be adopted for soil conservation purposes and that the early stages of forest establishment are the ones with the greatest risk of overland flow and soil losses.
In other work, distinct differences in landslide density were found between forest cover classes in the landslide inventory, with the highest average density in recently disturbed areas (open class) and the lowest density in older forests (large class).
For some land uses, landslides are more prevalent due to the lack of the protection that deep roots provide to the ground, i.e., the slope stability achieved by the mechanical reinforcement of tree root systems, especially in terrain with steep slopes.
The results indicate that, although the evaluated areas in the state of São Paulo had more pronounced terrain, the landslide susceptibility generally stayed between low and moderate. This occurred because many factors can contribute to the higher landslide susceptibility of a region, such as the slope and the geology. In this context, the areas occupied by forest in the state of São Paulo, which are located near the Serra do Mar and Serra da Mantiqueira, have a terrain with steep slopes and therefore higher susceptibility.
Agriculture and pasture were the land use types with the largest areas susceptible to landslides, as shown on all of the generated maps.
Eucalyptus in its initial stage or at harvest time (new Eucalyptus/exposed soil) is associated with the most exposed soil. Therefore, these areas are more susceptible to landslides, and this development stage presents the greatest susceptibility. Generally, the areas occupied by Eucalyptus plantations are associated with low values of susceptibility.
The fuzzy gamma technique for map overlay proved satisfactory but requires prior user experience to assign weights to the different classes present within each of the themes used (geology, geomorphology, etc.). This technique is recommended for working with environmental data, where the information is imprecise and there are no strict limits between one class and another. Varying the gamma value allows the user to account for the error in the data during the map overlay process, generating scenarios of lower (γ = 0.7) and higher (γ = 0.8) estimated susceptibility; the rainy season likewise increases the area’s susceptibility to landslides.
The maps generated by this study could be used to assess which factor or set of factors contributes to an increased susceptibility to mass movements in the area of study.
- Associação Brasileira dos Produtores de Florestas Plantadas. Anuário Estatístico da ABRAF, 2010. Available online: www.abraflor.org.br/estatisticas.asp (accessed on 14 May 2010).
- Conti, J.B. Resgatando a “Fisiologia da Paisagem”. Revista do Departamento de Geografia da USP 2001, 14, 59–68.
- Gramani, M.F. Caracterização Geológica-Geotécnica das Corridas de Detritos (“Debris Flows”) no Brasil e Comparação com Alguns Casos Internacionais. Master Thesis, Escola Politécnica, Universidade de São Paulo, São Paulo, Brazil, 2001.
- Rossetti, L.A.F.G.; Pinto, S.A.F.; Almeida, C.M. Geotecnologias Aplicadas à caracterização das Alterações da Cobertura Vegetal Intraurbana e da Expansão Urbana da Cidade de Rio Claro, São Paulo. In Proceedings of the 13 Simpósio Brasileiro de Sensoriamento Remoto, Florianópolis, Brazil, 21–26 April 2007; pp. 5479–5486.
- Cunha, S.B.; Guerra, A.J.T. Degradação Ambiental. In Geomorfologia e Meio Ambiente, 2nd ed.; Guerra, A.J.T., Cunha, S.B., Eds.; Bertrand Brasil: Rio de Janeiro, Brazil, 1996; pp. 337–379.
- Dai, F.C.; Lee, C.F.; Ngai, Y.Y. Landslide risk assessment and management: An overview. Eng. Geol. 2002, 64, 65–87, doi:10.1016/S0013-7952(01)00093-X.
- Fernandes, N.F.; Amaral, C.P. Movimentos de Massa: Uma Abordagem Geológico-Geomorfológica. In Geomorfologia e Meio Ambiente, 4th ed.; Guerra, A.J.T., Cunha, S.B., Eds.; Bertrand Brasil: Rio de Janeiro, Brazil, 2003; pp. 123–194.
- Neary, D.G.; Hornbeck, J.W. Impacts of Harvesting and Associated Practices on Off-Site Environmental Quality. In Impacts of Forest Harvesting on Long-Term Site Productivity; Dyck, W.J., Cole, D.W., Comerford, N.B., Eds.; Chapman & Hall: London, UK, 1994; pp. 81–118.
- Parise, M. Landslide mapping techniques and their use in the assessment of the landslide hazard. Phys. Chem. Earth 2001, 26, 697–703.
- Carrara, A.; Guzzetti, F.; Cardinali, M.; Reichenbach, P. Current Limitations in Modeling Landslide Hazard. In Proceedings of the International Association for Mathematical Geology IAMG’98, Ischia, Italy, 4–9 October 1998; pp. 195–203.
- Barredo, J.I.; Benavides, A.; Hervas, J.; van Westen, C.J. Comparing heuristic landslide hazard assessment techniques using GIS in the Tirajana basin, Gran Canaria Island, Spain. Int. J. Appl. Earth Obs. Geoinf. 2000, 2, 9–23, doi:10.1016/S0303-2434(00)85022-9.
- Clerici, A.; Perego, S.; Tellini, C.; Vescovi, P. A procedure for landslide susceptibility zonation by the conditional analysis method. Geomorphology 2002, 48, 349–364, doi:10.1016/S0169-555X(02)00079-X.
- Carrara, A.; Crosta, G.; Frattini, P. Geomorphological and historical data in assessing landslide hazard. Earth Surf. Process. Landf. 2003, 28, 1125–1142, doi:10.1002/esp.545.
- Kanungo, D.P.; Arora, M.K.; Sarkar, S.; Gupta, R.P. A comparative study of conventional, ANN black box, Fuzzy and combined neural and Fuzzy weighting procedures for landslide susceptibility zonation in Darjeeling Himalayas. Eng. Geol. 2006, 85, 347–366, doi:10.1016/j.enggeo.2006.03.004.
- Ayalew, L.; Yamagishi, H.; Ugawa, N. Landslide susceptibility mapping using GIS-based weighted linear combination, the case in Tsugawa area of Agano River, Niigata Prefecture, Japan. Landslides 2004, 1, 73–81, doi:10.1007/s10346-003-0006-9.
- Aleotti, P.; Chowdhury, R. Landslide hazard assessment: Summary review and new perspectives. Bull. Eng. Geol. Environ. 1999, 58, 21–44, doi:10.1007/s100640050066.
- Ercanoglu, M.; Gokceoglu, C. Use of Fuzzy relations to produce landslide susceptibility map of a landslide prone area (West Black Sea Region, Turkey). Eng. Geol. 2004, 75, 229–250, doi:10.1016/j.enggeo.2004.06.001.
- Guidicini, G.; Nieble, C.M. Estabilidade de Taludes Naturais e de Escavação, 2nd ed.; Edgard Blücher: São Paulo, Brazil, 1984.
- Muñoz, V.; Almeida, C.M.; Valeriano, M.M.; Crepani, E.; Medeiros, J.S. Técnicas de Inferência Espacial na Identificação de Unidades de Susceptibilidade aos Movimentos de Massa na Região de São Sebastião, São Paulo, Brazil. In Proceedings of the 12 Simposio Internacional en Percepción Remota y Sistemas de Información Geográfica (SELPER), Cartagena de Indias, Colômbia, 24–29 September 2006.
- Saboya, F.; Alves, M.G.; Pinto, W.D. Assessment of failure susceptibility of soil slopes using Fuzzy logic. Eng. Geol. 2006, 86, 211–224, doi:10.1016/j.enggeo.2006.05.001.
- Wang, W.D.; Xie, C.M.; Du, X.G. Landslides susceptibility mapping in Guizhou province based on Fuzzy theory. Min. Sci. Technol. 2009, 19, 399–404.
- Vahidnia, M.H.; Alesheikh, A.A.; Alimohammadi, A.; Hosseinali, F. A GIS-based neuro-Fuzzy procedure for integrating knowledge and data in landslide susceptibility mapping. Comput. Geosci. 2010, 36, 1101–1114, doi:10.1016/j.cageo.2010.04.004.
- Camara, G.; Souza, R.C.M.; Freitas, U.M.; Garrido, J. SPRING: Integrating remote sensing and GIS by object-oriented data modeling. Comput. Graph. 1996, 20, 395–403, doi:10.1016/0097-8493(96)00008-8.
- Ponçano, W.L.; Carneiro, C.D.R.; Bistrichi, C.A.; Almeida, F.F.M.; Prandini, F.L. Mapa Geomorfológico do Estado de São Paulo; IPT: São José dos Campos, Brazil, 1981. Scale 1:1,000,000.
- Oliveira, J.B.; Camargo, M.N.; Rossi, M.; Calderano Filho, B. Mapa pedológico do Estado de São Paulo: Legenda expandida; Instituto Agronômico/EMBRAPA Solos: Campinas, Brazil, 1999. Scale 1:500,000.
- Valeriano, M.M. TOPODATA: Guia de Utilização de Dados Geomorfométricos Locais, Available online: http://mtc-m18.sid.inpe.br/col/sid.inpe.br/mtc-m18@80/2008/07.11.19.24/doc/publicacao.pdf (accessed on 24 October 2011).
- Instituto Nacional de Meteorologia, Normais Climatológicas do Brasil 1961–1990, INMET, Brasília, Brazil, 2009. (DVD).
- Image Generation Division, Instituto Nacional de Pesquisas Espaciais. Landsat 5/CCD Sensor, Path/Row: 218/76, 219/76, 220/76, 220/77, 221/77, Available online: http://www.dgi.inpe.br/CDSR/ (accessed on 23 May 2009).
- Kronka, F.J.N.; Nalon, M.A.; Matsukuma, C.K. Inventário Florestal das Áreas Reflorestadas do Estado de São Paulo; Secretaria de Estado de Meio Ambiente, Instituto Florestal: São Paulo, SP, Brazil, 2002.
- Vieira, R.M.S.P.; Alvalá, R.C.S.; Ponzoni, F.J.; Ferraz-Neto, S.; Canavesi, V. Mapeamento dos usos da Terra e da Cobertura Vegetal do Estado de São Paulo, Available online: http://urlib.net/sid.inpe.br/mtc-m19@80/2010/01.22.12.32 (accessed on 25 January 2010).
- Crepani, E.; Medeiros, J.S.; Hernadez Filho, P.; Florenzano, T.G.; Duarte, V.; Barbosa, C.C.F. Sensoriamento Remoto e Geoprocessamento Aplicados ao Zoneamento Ecológico-Econômico e ao Ordenamento Territorial, Available online: http://www.lapa.ufscar.br/bdgaam/geoprocessamento/Crepani%20et.%20al.pdf (accessed on 21 September 2011).
- Valeriano, M.M.; Carvalho Júnior, O.A. Geoprocessamento de modelos digitais de elevação para mapeamento da curvatura horizontal em microbacias. Revista Brasileira de Geomorfologia 2003, 1, 17–29.
- Binda, A.L.; Bertotti, L.G. Geoprocessamento Aplicado à Análise da Bacia Hidrográfica do Rio Cachoeirinha, Guarapuava-PR. In Proceedings of the 12 Simpósio Brasileiro de Geografia Física Aplicada, Natal, Brazil, 9–13 July 2007.
- Veloso, A.J.G. Importância do estudo das vertentes. Geographia 2002, 4, 79–83.
- Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353, doi:10.1016/S0019-9958(65)90241-X.
- Burrough, P.A.; McDonnell, R.A. Principles of Geographical Information Systems; Oxford University Press: Oxford, UK, 1998.
- Bonham-Carter, G.F. Geographic Information Systems for Geoscientists: Modelling with GIS; Pergamon: Oxford, UK, 1994.
- Lee, S. Application and verification of fuzzy algebraic operators to landslide susceptibility mapping. Environ. Geol. 2007, 52, 615–623.
- Pradhan, B.; Lee, S.; Buchoithner, M.F. Use of geospatial data and fuzzy algebraic operators to landslide-hazard mapping. Appl. Geomat. 2009, 1, 3–15, doi:10.1007/s12518-009-0001-5.
- Pradhan, B. Landslide susceptibility mapping of a catchment area using frequency ratio, fuzzy logic and multivariate logistic regression approaches. J. Indian Soc. Remote Sens. 2010, 38, 301–320, doi:10.1007/s12524-010-0020-z.
- Meirelles, M.S.P.; Moreira, F.R.; Camara, G.; Netto, A.L.C.; Carneiro, T.A.A. Métodos de Inferência Geográfica: Aplicação no Planejamento Regional, na Avaliação Ambiental e na Pesquisa Mineral. In Geomática: Modelos E Aplicações Ambientais; Meirelles, M.S.P., Camara, G., Almeida, C.M., Eds.; Embrapa Informação Tecnológica: Brasília, Brazil, 2007; pp. 381–288.
- Ternan, J.L.; Elmes, A.; Tanago, M.G.; Williams, A.G.; Blanco, R. Conversion of matorral land to Pinus forest: Some hydrological and erosional impacts. Mediterranée 1997, 12, 77–84.
- Miller, D.J.; Burnet, K.M. Effects of forest cover, topography, and sampling extent on the measured density of shallow, translational landslides. Water Resour. Res. 2007, 43, WO3433.
- O’Loughlin, C.L. Effectiveness of Introduced Forest Vegetation for Protection against Landslides and Erosion in New Zealand’s Steeplands. In Proceedings of the Symposium on Effects of Forest Land Use on Erosion and Slope Stability, Honolulu, HI, USA, 7–11 May 1984; pp. 275–280.
© 2012 by the authors; licensee MDPI, Basel, Switzerland. This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).
Part 1 - Introduction
Sean M. O’Brien, P.E., LEED AP
Michael B. Waite, LEED AP
As energy resources are diminished, energy costs increase, and the detrimental environmental effects of combustion emissions become more pronounced and better understood, optimizing building energy performance becomes essential to building owners, the global community and the planet itself. The building design and construction industry is becoming more aware of the effect of the building enclosure on energy use, particularly with the increasing prominence of “green” building rating systems, such as LEED, higher stringency in energy efficiency standards, such as ASHRAE 90.1, and stricter enforcement of building energy codes. However, with only a few exceptions, rating systems, standards and codes in the United States all but ignore air barriers. And, despite numerous studies and extensive analysis showing the benefits of air barriers, and many architects and engineers espousing the advantages of reducing air leakage, the industry has been slow to adopt air leakage control as a priority, producing buildings with admirable intentions, but marginal performance.
Enclosure air leakage can increase heating and cooling energy use of buildings. Many buildings are designed to maintain a slight positive air pressure (relative to the exterior environment), so the greater the air leakage through the enclosure, the greater the volume of ventilation air necessary to maintain the required pressure differential. Typically, this air needs to be either heated or cooled to reach the system’s supply air temperatures. In some cases, the air may need to be dehumidified even when HVAC zones demand heating, which requires the ventilation air to be cooled (and dehumidified) before it is heated.
Air infiltration due to negative building air pressure in southern climates, or during the summer throughout most of the U.S., carries both heat and moisture into buildings. This can result in moisture-related problems, but also increases the burden on the building’s cooling systems. Air conditioning systems will be required to provide more sensible and latent heat removal than would otherwise be needed with lower levels of air leakage. Consequently, the balance between latent and sensible cooling may be different than that predicted for design conditions, especially since exterior air will often contain significantly more moisture than interior air. This may result in mechanical systems being unable to maintain the required interior temperature or relative humidity levels without operating more frequently.
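To put rough numbers on these sensible and latent penalties, the sketch below uses the standard sea-level IP approximations Qs = 1.08 × CFM × ΔT and Ql = 4,840 × CFM × ΔW. The constants are textbook psychrometric values and the 500 cfm case is hypothetical, not drawn from this article.

```python
def infiltration_loads_btuh(cfm, dt_f, dw_lb_per_lb):
    """Approximate sensible and latent loads of outdoor air leakage
    using standard sea-level IP constants (1.08 and 4,840)."""
    q_sensible = 1.08 * cfm * dt_f            # BTU/h
    q_latent = 4840.0 * cfm * dw_lb_per_lb    # BTU/h
    return q_sensible, q_latent

# Hypothetical summer case: 500 cfm of infiltration, air 20 F warmer
# and 0.004 lb-water/lb-dry-air more humid than the supply condition
qs, ql = infiltration_loads_btuh(500.0, 20.0, 0.004)
print(f"sensible: {qs:,.0f} BTU/h, latent: {ql:,.0f} BTU/h")
# sensible: 10,800 BTU/h, latent: 9,680 BTU/h
```

Note that in this illustrative case the latent load is nearly as large as the sensible load, which is why the balance between latent and sensible cooling can shift as leakage grows.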
The effects described above are significant, but the most substantial impact of air leakage on building energy use is generally an increase in building heating requirements in cold climates. Verifying this effect is simple: stand by a leaky window on a cold windy day, or feel the air rush in behind you when opening the door of a tall building during the winter. The negative implications of air leakage are not limited solely to cold northern climates. A 1995 study performed by NIST (National Institute of Standards and Technology) found that 15% of the heating load in commercial buildings nationwide is caused by air leakage. The conclusions of this study are particularly noteworthy because the authors found that, perhaps counterintuitively, this percentage is higher for newer buildings than for older buildings. This does not necessarily mean that air leakage is higher in newer buildings, but it likely indicates that other enclosure characteristics (e.g., insulation levels and passive solar heating design concepts) are improving.
3. Moisture Problems
Moisture-related problems due to enclosure air leakage can be caused by humid interior air coming into contact with cold surfaces while exiting the building, or by humid exterior air finding a detrimental path into the building. As with many issues with air leakage, the problems typically occur at the details, such as transition areas between envelope components and assemblies, rather than in the field of the wall.
When buildings or portions of buildings are under negative pressure, the air flow is inward. As discussed above, exterior air can contain a significant amount of moisture. When this air is pulled into the building through wall cavities, ceiling plenums or similar spaces, moisture in the air can condense if it passes over cool surfaces, which are common in air-conditioned spaces. Water can accumulate on materials that are known to be conducive to mold growth, such as paper-faced gypsum wallboard.
When buildings or portions of buildings are at positive pressure relative to the exterior air, the air flow is outward. In most buildings, this may not be a problem, as interior humidity levels are not high enough in the winter to cause significant problems. However, in high-humidity buildings, such as museums and natatoriums (pool structures), the interior air contains ample moisture to cause condensation if it reaches cold surfaces. The most visible cases of condensation in these buildings occur where thermal bridging (high-conductivity materials bypassing low-conductivity materials) occurs, such as at window frames and structural members. In positively pressurized buildings with air leakage paths to the exterior, humid air often “finds” hidden cold surfaces within the enclosure. The air flow paths are typically within concealed spaces, allowing moisture to accumulate over time undetected and potentially causing catastrophic failures.
Over the past few decades, the building industry has developed some understanding of the causes of moisture-related problems. However, much focus has been placed on vapor retarders without adequate attention to the effects of air leakage. Vapor retarders are intended to prevent the migration of moisture through building materials by diffusion. In many cases, drawings call for vapor retarders where a continuous air barrier system is necessary. Humid air can carry up to 100 times more moisture than can be transferred by diffusion through typical porous building materials. The discrepancy between the widespread application of vapor retarders and the generally poor design of air barriers is likely due to the prevalence of vapor retarder requirements and the near-complete lack of air barrier requirements in U.S. building codes.
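The scale of that difference can be sketched with standard units: a perm is one grain of water vapor per hour per square foot per inch of mercury, and leaking air carries moisture in proportion to its flow and humidity ratio. The Python comparison below uses textbook constants; all input values (wall size, perm rating, leakage rate, humidity difference) are hypothetical, not figures from this article.

```python
GRAINS_PER_LB = 7000.0
AIR_DENSITY_LB_FT3 = 0.075   # standard air

def diffusion_grains_per_h(perms, area_sf, dp_in_hg):
    """Vapor diffusion through a material; a perm is
    1 grain/(h * ft^2 * in-Hg)."""
    return perms * area_sf * dp_in_hg

def air_leakage_grains_per_h(cfm, dw_lb_per_lb):
    """Moisture carried by leaking air with humidity-ratio
    difference dw (lb water per lb dry air)."""
    return cfm * 60.0 * AIR_DENSITY_LB_FT3 * dw_lb_per_lb * GRAINS_PER_LB

# Hypothetical 100 sf wall: 1-perm retarder vs. 10 cfm of air leakage
print(diffusion_grains_per_h(1.0, 100.0, 0.5))   # 50 grains/h
print(air_leakage_grains_per_h(10.0, 0.004))     # 1,260 grains/h
```

Even this modest leakage rate moves roughly 25 times more moisture than diffusion through the whole wall, consistent with the order-of-magnitude claim above.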
4. Codes and Standards
State building codes typically reference or adopt ASHRAE Standard 90.1 – Energy Standard for Buildings Except Low-Rise Residential Buildings, or the International Energy Conservation Code (IECC), which references ASHRAE 90.1 itself. The most widely used and referenced energy codes and standards have well-developed requirements for wall and roof insulation, and for glazing assembly thermal transmittance and solar heat gain. They contain requirements for maximum allowable conductive thermal transmission through the envelope. Every few years these values are evaluated and, often, slightly increased. These requirements may be approaching – many would argue they have reached – a point of diminishing returns for which very little additional performance is gained by adding insulation or by reducing solar heat gain through glazing.
Neither the IECC nor ASHRAE 90.1 contains quantitative requirements for air barriers. While there are requirements that seams and transition areas be sealed, these provisions are difficult to enforce and contain no performance requirements for the sealing materials. The lack of air barrier requirements in energy standards is a significant oversight, as reducing air leakage can have a greater impact on building energy use than the incremental increases in insulation or reductions in glazing SHGC typically included in new editions of codes and standards.
Voluntary guidelines are also falling short of their stated goals. LEED, a widely accepted guideline for sustainable building design, references ASHRAE 90.1 for its energy efficiency prerequisite and credits, but contains no separate provisions specific to air barriers. While some benefit can be obtained through the LEED energy performance credit, the Performance Rating Method (outlined in ASHRAE 90.1 and used by LEED to compare a proposed building design to a “baseline” building) does not allow a building to be given credit for reducing air leakage: designers can specify an air leakage rate, but the process prescribes that the air infiltration input in the energy model be the same for the proposed design and the baseline building.
Some states have begun adopting air barrier requirements, yet even the most stringent among them do not meet the code provisions implemented in some other countries (e.g. Canada and the UK), or the recommendations of enclosure design professionals and energy efficiency experts. The U.S. Army Corps of Engineers recently began requiring air barriers (and quantifiable enclosure air leakage performance) for all new and renovated buildings, and federal building design guidelines now contain air barrier provisions. However, the lack of a national standard air barrier requirement prevents the concept from taking hold on the scale we have seen with insulation, thermally-broken windows and vapor retarders.
The importance of air barriers in building design is clear and well documented. This is particularly true now that much of the “low-hanging fruit” in energy-efficient enclosure design (insulation and improved glazing performance, for example) has been picked, bringing us near the point of diminishing returns on those improvements. Sustainable building design considerations, an essential aspect of all building design, can no longer be compartmentalized. As with other building features, air barriers need to be designed and constructed as a system and integrated with all other building systems.
The next article in this series will discuss methods and materials that can be used to produce acceptable enclosure air leakage performance.
Part 2 - Design Guidelines for Air Barrier Systems
Sean M. O’Brien, P.E., LEED AP
Michael B. Waite, LEED AP
1. Air Barrier Basics
The basic function of an air barrier system is to prevent uncontrolled air leakage through the building enclosure. To this end, an air barrier must be a complete system of materials and components that work together to provide a continuous barrier to airflow. Even small discontinuities in an air barrier can significantly reduce its performance, since air will follow the path of least resistance regardless of its location. Air barriers must resist air pressure caused by wind, stack effect, or mechanical pressurization of a building, so they must also be relatively rigid or have solid backing capable of resisting moderate to high pressures.
Unfortunately, the system concept is often ignored and designers “specify an air barrier” by including a specification section for self-adhered or spray-applied membranes or spray-applied foam insulation – sometimes as an addendum or afterthought once the rest of the building has been designed. This article discusses performance criteria for air barrier systems and highlights common problems found in air barrier specifications.
2. Air Barrier or Vapor Retarder?
A major stumbling block for many designers has been appreciating the difference between air barriers and vapor retarders. This is made more difficult by the fact that many air barriers are also vapor retarders, such as the ubiquitous “peel-and-stick” membranes that are used in some way, shape, or form on nearly all new construction projects. The danger in confusing these two systems is that the proper location for a vapor retarder is dependent on both the interior and exterior environments, while an air barrier can typically be located anywhere within the building enclosure as long as it is continuous. Table 1, below, summarizes the differences between air barriers and vapor retarders.
Table 1 – Summary of Differences Between Air Barriers and Vapor Retarders

| |Vapor Retarder|Air Barrier|
|Purpose|Control of water vapor flow via diffusion through building materials.|Control of water vapor flow via air movement, primarily through gaps or cracks in the building enclosure.|
|Requirements for Continuity|Does not need to be completely continuous; can contain small gaps, holes, or unsealed laps without significant loss of performance.|Must be continuous to be effective; even small discontinuities can significantly affect performance.|
|Location|Typically installed on "warm-in-winter" side of insulation (some exceptions apply depending on climate). Improper location can exacerbate condensation problems.|Can be installed anywhere in the building envelope if vapor permeable; otherwise, follow the guidelines for vapor retarder location.|
|Structural Support|None required.|Must be continuously supported and capable of resisting forces from wind, mechanical pressurization, and stack effect.|
|Detailing|Minimal detailing required to achieve design intent.|Careful detailing of transitions and changes in material is necessary to support proper system installation and meet the design intent of an air barrier system.|
Another area of confusion is the concept of air barrier materials vs. air barrier assemblies and systems. The following definitions are presented to establish the difference between materials, assemblies, and systems:
• An air barrier material is a primary element that provides a continuous barrier to the movement of air (e.g., self-adhered membranes).
• An air barrier assembly consists of the air barrier materials and accessories that provide a continuous designated plane of resistance to the movement of air through portions of building enclosure assemblies. Air barrier assemblies typically consist of both air barrier materials and connections to adjacent materials, as well as penetrations, laps, seams, etc.
• An air barrier system is the combination of air barrier assemblies installed to provide a continuous barrier to the movement of air through building enclosures.
3. Performance Criteria
Specifying appropriate (and more importantly, achievable) performance criteria for air barrier assemblies and systems is a surprisingly challenging task. Established performance criteria exist for nearly all aspects of the building enclosure, such as windows and doors, curtain walls, and roof systems. This is the result of many years of work by designers, testing companies, and industry organizations. Over the course of several decades, unrealistic or unverifiable performance criteria such as “windows shall not leak under any conditions” have gradually been replaced by criteria such as “windows shall not experience water leakage at a test pressure of 5.5 pounds per square foot (psf) when tested according to ASTM Standard E1105”. Performance criteria for air barrier assemblies and systems are still developing; the third article in this series summarizes the current testing and performance standards for air barriers.
Although performance criteria for air barrier materials are relatively well established, problems with air barrier systems rarely develop as a result of air leakage through the field of an air barrier sheet or membrane. Further, air barrier products such as spray-applied or self-adhered membranes often have leakage rates that are orders of magnitude lower than the generally accepted criterion of 0.004 cfm/sf at 0.3 in. water for air barrier materials, making leakage through the field of the barrier unlikely to be a significant problem.
Since most air leakage occurs at details and transitions, the air permeance of the primary air barrier material(s) is often unrelated to the overall air leakage through a building. The Air Barrier Association of America (ABAA) recommends a maximum air leakage rate of 0.04 cfm/sf at 0.3 in. water for air barrier assemblies, which takes into account seams and penetrations and is more representative of real building conditions. The 2005 National Building Code of Canada recommends (but does not require) a slightly more conservative value of 0.02 cfm/sf at 0.3 in. water for buildings that maintain interior relative humidity levels between 27 and 55%, typical of most buildings with the exception of cold storage facilities and natatoriums. Since even "seamless" systems such as fluid-applied membranes will still have transitions and penetrations, such as brick ties or other cladding attachments, applying the more stringent air barrier material criterion to air barrier assemblies is unrealistic.
The effects of air leakage through windows, doors, and curtain walls (i.e., fenestration) are rarely considered when evaluating air barrier assemblies. This is a significant oversight, as most of the air leakage through a properly designed air barrier system will likely occur through these components. Established values for air leakage through fenestration range from 0.06 cfm/sf at 1.2 in. of water for glazed curtain walls to 0.4 cfm/sf at 1.2 in. of water for operable windows. Maximum air leakage rates are included in most building/energy codes as well as industry standards from organizations such as ASHRAE and AAMA. “Typical” values for air leakage through fenestration are somewhat difficult to determine as the various (local, state, and national) codes and standards attempt to reach a consensus.
Although the test procedures for air barrier assemblies (ASTM E2357) include the air barrier connections at windows, the window opening itself is “blanked off” during the test so that only the perimeter is evaluated. Consider the example of a 10 ft x 10 ft air barrier assembly containing a 4 ft x 4 ft double hung window. Specifying a maximum assembly leakage rate of 0.04 cfm/sf would result in an allowable airflow of 4 cfm through the assembly. For a typical code-compliant window meeting the performance criteria of 0.4 cfm/sf, the window leakage alone would be 6.4 cfm, exceeding the allowable value for the entire assembly without even considering leakage through other air barrier components. If a window is specified in assembly testing, a modified value for assembly leakage that considers the inherently “leakier” windows must be used.
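To make the arithmetic above explicit, here is a minimal sketch in Python (the sketch is ours; the criteria and dimensions are the ones cited in this example, and, as in the text, each criterion is kept at its own reference test pressure rather than normalized):

```python
# Leakage budget check for an air barrier assembly containing a window,
# using the criteria cited above: 0.04 cfm/sf at 0.3 in. water for the
# assembly and 0.4 cfm/sf at 1.2 in. water for an operable window. As in
# the text's example, each flow is simply evaluated at its own reference
# pressure; no pressure normalization is attempted.

ASSEMBLY_CRITERION_CFM_SF = 0.04  # cfm/sf at 0.3 in. water
WINDOW_CRITERION_CFM_SF = 0.4     # cfm/sf at 1.2 in. water

assembly_area_sf = 10 * 10  # 10 ft x 10 ft specimen
window_area_sf = 4 * 4      # 4 ft x 4 ft double-hung window

allowable_assembly_flow = ASSEMBLY_CRITERION_CFM_SF * assembly_area_sf  # 4.0 cfm
window_flow = WINDOW_CRITERION_CFM_SF * window_area_sf                  # 6.4 cfm

print(f"Allowable flow for the entire assembly:   {allowable_assembly_flow:.1f} cfm")
print(f"Code-allowed leakage of the window alone: {window_flow:.1f} cfm")
if window_flow > allowable_assembly_flow:
    print("Window leakage alone exceeds the whole-assembly budget; "
          "the assembly criterion must be adjusted if a window is included.")
```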
To account for the wide range of materials, details, and transitions in the air barrier of any particular building, it is often more useful to speak in terms of system (i.e., whole-building) air leakage than material, assembly, or component leakage. This is especially true for purposes of energy simulation or HVAC load calculation, where the global quantity of air leakage is the primary concern. Unfortunately, there are very few established standards for whole-building air leakage that designers can reference. The 2009 ASHRAE Handbook of Fundamentals, Chapter 16, notes three "levels" of air leakage for typical buildings: 0.1 cfm/sf at 0.3 in. water for "tight" buildings, 0.3 cfm/sf for "average" buildings, and 0.6 cfm/sf for "leaky" buildings. These general classes of air leakage were first presented in the results of a study of 8 commercial buildings in Canada, ranging in height from 11 to 22 stories, and clad with glazed aluminum curtain walls. Despite being based on a small sample size and very specific building types, these "classes" are frequently cited in discussions of typical building airtightness or building performance criteria. A more recent study of approximately 200 low-rise commercial and institutional buildings in the United States found an overall average leakage rate of 1.55 cfm/sf at 0.3 in. water, over 5 times greater than the "average" value of 0.3 cfm/sf noted above. Unfortunately, neither study clarifies whether the buildings were designed with continuous air barriers. Considering this limitation, the average value of 1.55 cfm/sf from the 2005 study could be seen as a maximum value for building air leakage, as a new building with a dedicated, continuous air barrier is likely to provide greatly improved performance.

For buildings designed and constructed with continuous air barriers, ABAA currently recommends an overall building air leakage rate of 0.4 cfm/sf at 0.3 in. of water. More stringently, the U.S. Army Corps of Engineers specifies a maximum leakage rate of 0.25 cfm/sf at 0.3 in. of water for some of their projects, and is considering the use of that criterion as a standard for all new buildings (although no formal design guide has been developed to include airtightness criteria at the time of this writing).
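Where a whole-building test result is in hand, comparison against these classes is a one-line normalization. The helper below is only an illustration (the function name, the binning, and the example building are ours; the thresholds are the class boundaries just cited):

```python
# Normalize a measured whole-building flow at the 0.3 in. water test
# pressure to cfm per square foot of enclosure area, then compare it to
# the "tight/average/leaky" levels cited above. Treating those levels as
# bin edges is our own simplification for illustration.

def classify_airtightness(flow_cfm: float, enclosure_area_sf: float) -> str:
    rate = flow_cfm / enclosure_area_sf  # cfm/sf at 0.3 in. water
    if rate <= 0.1:
        label = "tight (<= 0.1 cfm/sf)"
    elif rate <= 0.3:
        label = "average (<= 0.3 cfm/sf)"
    elif rate <= 0.6:
        label = "leaky (<= 0.6 cfm/sf)"
    else:
        label = "leakier than the 'leaky' class (> 0.6 cfm/sf)"
    return f"{rate:.2f} cfm/sf -> {label}"

# Hypothetical example: 12,000 cfm measured on a building with
# 30,000 sf of enclosure area gives 0.40 cfm/sf (the ABAA target).
print(classify_airtightness(12_000, 30_000))
```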
In 2002, the United Kingdom added a requirement for whole building/system air leakage to their "Building Regulations for England & Wales" for commercial buildings greater than 10,760 sf (currently 5,380 sf in the 2006 code). The established value, which is required to be verified through whole-building testing, is 0.547 cfm/sf at 0.2 in. of water. Preliminary findings have shown a marked improvement in airtightness of more "standardized" building types such as warehouses and retail stores, with many buildings exceeding the code-required value. This is a significant improvement in airtightness, as typical values for air permeability of the same building types prior to the 2002 code change were on the order of two to three times higher than values achieved in recent years. However, less standardized building types, such as offices, schools, and hospitals, have exhibited a much lower "passing" rate. This is most likely attributed to the general lack of attention to air barrier detailing at conditions for which typical practices are not well established, in contrast to less unique building types for which a large body of detailing experience exists.
Given the results of recent studies in the United States, the average commercial building significantly exceeds this target. Until additional studies of more recently constructed buildings (designed with continuous air barriers) are available, system air leakage criteria may be difficult to enforce due to the lack of knowledge about what level of air leakage is typical and achievable for new buildings. In addition, the acceptability of leakage criteria is likely to fluctuate as new data becomes available and more testing is performed.
4. Specifications of Air Barrier Systems
At present, there is a significant disconnect between the criteria contained in most specifications for air barrier systems and the actual performance achieved in the field. This is due to a combination of factors, including designers' unfamiliarity with air barrier systems, poor understanding of how air barriers function, and misunderstanding regarding test procedures and limitations.
The Air Barrier Association of America has proposed a new specification section to establish the administrative and procedural requirements necessary for the construction of a complete air barrier system in a new building. Since the air barrier system consists of multiple materials covered under several specification sections (including windows, doors, curtain walls, roofing systems, and exterior wall air barriers), this specification seeks to establish some kind of connection between the various sections and provide a means of coordinating the different trades involved in the construction of the air barrier system. As the popularity of air barriers has grown in the past few years, so too has the number of projects where air barriers were added to the scope during design (or even early construction) by the inclusion of a single specification section for a sheet or spray-applied membrane. This approach creates an air barrier in name only, and does not address the numerous connections of that material/assembly to other components in the building. Details such as window perimeters and roof-to-wall joints are critical to the performance of the overall air barrier system, and require much more coordination and planning than is likely to happen during the construction process, when trades may be running behind schedule and design/consulting budgets may have been exhausted.
The following common mistakes should be avoided when specifying air barrier systems:
• Failure to coordinate air barrier components, such as specifying windows that are difficult to integrate successfully with the air barrier, or specifying air barrier materials or assemblies with conflicting performance criteria. Since the air barrier is only as strong as the weakest component, specifying high performance windows in a building with a poor (or no) air barrier will do little for overall airtightness. The same is true of specifying an air barrier in a wall but not in the adjacent roof, while requiring the entire building to pass an airtightness test. Quantitative testing of air barrier systems installed as part of a building addition where the "base" building has no such systems is also generally of little value, unless the addition is separated by airtight interior partitions to make it a truly separate volume.
• Failure to provide sufficient details for the air barrier system, especially at critical locations such as window perimeters and roof-to-wall interfaces. Many specifications provide only general information or do not show sufficient detail on the drawings, but may include language intended to place the detailing design burden on the contractor. Air barrier systems are complex and require careful design to be effective. Just as we would not allow the contractor to design the structural system for the building “on the fly”, it is unreasonable to expect contractors to assume the role of primary designer of the air barrier details.
• Specification of impossible or unrealistic test criteria. Some specifications require that air barrier assemblies be tested in the field to verify performance, but do not take into account the numerous issues associated with quantitative field testing that may make testing impractical or unlikely to yield useful results. Some specifications contain incompatible test criteria, such as including a window in the air barrier assembly that is tested but not adjusting the assembly criteria to account for the inclusion of that window. Given the differences between criteria for windows and criteria for air barrier assemblies, it may be impossible to meet the "typical" assembly leakage of 0.04 cfm/sf due to leakage at the window.
• Specification of system performance criteria that are not backed up by research or practical experience. Given the lack of whole-building airtightness data on relatively recent buildings that include air barrier systems, specifying a system leakage rate can lead to confusion or disagreement if the building fails to achieve the test criteria. Without realistic established values for system leakage (with the exception of the 2006 United Kingdom Building Regulations, which is still in its infancy), it may be difficult to enforce compliance with a seemingly arbitrary requirement.
Next article in the Air Barrier Systems series: Field Testing of Air Barrier Systems
Part 3 - Field Testing of Air Barrier Systems
Sean M. O’Brien, P.E., LEED AP
Michael B. Waite, LEED AP
1. Why Test?
For some enclosure systems such as thermal insulation, in-place performance is generally consistent with calculated performance. However, physical testing is often the only way to accurately assess the installed performance of air barrier systems. The sensitivity of air barriers to workmanship (e.g., sealing laps, making transitions) and their potentially large impact on building energy use make in-place performance testing an important quality control measure, both for air leakage assessment of specific details and for verification on a whole-building (or "system") level.
2. Test Methods
2.1 Quantitative Testing
In a typical quantitative test, a chamber is sealed to one face of the specimen and a fan is used to induce a pressure difference; the measured airflow is then a combination of flow through the specimen and flow through the test chamber itself. To separate these flows, the test (using an interior chamber) is initially performed with the exterior of the specimen sealed off, typically with an impermeable sheet material. After this initial test, a second set of measurements is taken with the specimen unsealed. The difference in measurement between these two tests is the leakage through the specimen. Chambers are typically constructed on the interior of the specimen for practical reasons (e.g., access), although they can technically be located on either side. For typical punched windows, air leakage on the order of several cubic feet per minute (cfm) can be measured reliably with equipment designed for use in the field.
Air barrier assemblies, consisting of several components, can be tested in a similar manner, but the testing is generally more difficult for several reasons. First, air barrier assemblies are typically much larger than discrete components such as windows and doors. Second, air barrier assemblies may contain unique geometries that make construction of an air-tight test chamber difficult, such as parapets, structural members or slab edges that interrupt the test chamber (or in the case of steel studs, create so many penetrations in the chamber that it must be constructed from the exterior). If performed during construction, scaffolding or other temporary constructions may interfere with access to the assembly. Third, testing of complete assemblies may not be practical due to the installation of different materials at different times, as in the case of a wall air barrier being installed long before the roof air barrier.
Figure 2 illustrates the same assembly being tested using an exterior chamber, which resolves some, but not all, of these issues. Although the use of an exterior chamber and seal eliminates leakage through the surrounding walls, it raises a new problem – how to remove the seal following the initial test. Due to the need for accurate measurement of relatively small airflows, even a small amount of uncertainty in the testing could result in “false negative” results. Removing some or all of the chamber will disturb perimeter conditions and invalidate the initial chamber leakage measurement. Chambers with operable doors or removable panels are required to allow for removal of the initial seal. These chambers must be tested multiple times, following operation of the door/panel, to demonstrate that operation does not modify the basic chamber leakage rate. Only after this is demonstrated can the initial seal be removed and a reliable measurement of specimen leakage be made.
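Numerically, the two-step procedure reduces to a subtraction, and the uncertainty of that subtraction is what makes small specimen flows hard to resolve; hence the concern about disturbing the chamber between steps. A minimal sketch with illustrative numbers:

```python
# Specimen leakage from a two-step chamber test: measure flow with the
# specimen sealed (chamber/extraneous leakage only), then with the seal
# removed (chamber plus specimen). Both readings are taken at the same
# test pressure. All numbers here are illustrative.

sealed_flow = 3.2    # cfm: chamber and perimeter leakage, specimen sealed
unsealed_flow = 5.1  # cfm: chamber leakage plus specimen leakage

specimen_leakage = unsealed_flow - sealed_flow  # 1.9 cfm
print(f"Specimen leakage: {specimen_leakage:.1f} cfm")

# If each reading carries +/-0.3 cfm of instrument/setup uncertainty, the
# worst-case uncertainty of the difference is +/-0.6 cfm, nearly a third
# of the result. This is why disturbing the chamber between the two steps
# (e.g., to remove an exterior seal) can invalidate the test.
```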
2.2 Qualitative Testing
Since a primary goal of air barriers is the reduction in air infiltration and the corresponding reduction in heating and cooling loads, a useful value to designers is typically the system air leakage rate. Knowing the leakage rates through individual components can be useful for verifying component performance (typically windows and curtain walls) or comparing the relative performance of existing vs. replacement components in retrofit applications, but for new construction these rates are less critical to determining overall building performance than the whole-building value. At the component or assembly level, knowing where air leakage occurs is often far more useful than knowing how much leakage is actually occurring, especially in the case of high humidity buildings, such as museums and swimming pools, where even small air leaks can cause significant condensation. Qualitative testing has the advantage of providing installers with the locations of defects in the air barrier that require repairs. Providing an air barrier installer with an air leakage rate through the overall system provides little practical information (e.g., locations of major air leaks) on how the system can be improved.
The test chambers and setups used for quantitative testing can be used for qualitative testing as well. However, chambers for qualitative testing are often easier to construct, since they need only provide a level of airtightness sufficient to achieve the desired test pressure; this is in contrast to chambers for quantitative testing, which must be relatively airtight to allow for accurate measurement of the airflows in question. A basic test chamber can be constructed on the interior of the component (Photo 4), connected to a fan and differential pressure gage only. The chamber is typically necessary in new construction projects, where the building enclosure is still relatively open to the elements. For enclosed, or partially enclosed, buildings, a blower door or similar device can be used to place whole rooms or whole buildings under positive or negative pressure, eliminating the need for a specially-constructed chamber. Rather than measure the air leakage directly, visualization aids such as tracer smoke or infrared thermography (if temperature conditions allow) are used to locate leaks in the air barrier system.
Tracer smoke is a relatively simple method of locating air leaks while a specimen or area has a pressure differential applied. The smoke will quickly reveal air leakage paths as it is drawn into gaps or blown away from them (Photo 5). An alternate method is to use a large smoke generator within the test chamber so that smoke is “blown out” through any breaches in the system.
A variety of test methods are available for both quantitative and qualitative evaluation of air barrier systems. A summary of the most common methods is provided in Table 1. Understanding the special requirements for these procedures, as well as the difficulties and limitations associated with the tests, is critical for performing successful tests that produce accurate and meaningful results.
Table 1 – Common Tests for Air Barriers

| Test Standard | Type | Applies To |
| --- | --- | --- |
| ASTM E2178 - Standard Test Method for Air Permeance of Building Materials | Quantitative - Laboratory Test | Air barrier materials (membranes, etc.) |
| ASTM E2357 - Standard Test Method for Determining Air Leakage of Air Barrier Assemblies | Quantitative - Laboratory Test | Air barrier assemblies (membrane, including laps and penetrations but not including fenestration components) |
| ASTM E283 - Standard Test Method for Determining Rate of Air Leakage Through Exterior Windows, Curtain Walls, and Doors Under Specified Pressure Differences Across the Specimen | Quantitative - Laboratory Test | Fenestration components |
| ASTM E783 - Standard Test Method for Field Measurement of Air Leakage Through Installed Exterior Windows and Doors | Quantitative - Field Test | Fenestration components; can be modified to test air barrier assemblies |
| ASTM E1186 - Standard Practices for Air Leakage Site Detection in Building Envelopes and Air Barrier Systems | Qualitative - Field Test | All air barrier materials, assemblies, and components |
| ASTM E779 - Standard Test Method for Determining Air Leakage Rate by Fan Pressurization | Quantitative - Field Test | Air barrier systems (i.e., whole buildings) |
Part 4 - Energy Analysis
Sean M. O’Brien, P.E., LEED AP
Michael B. Waite, LEED AP
The first article of this series discussed the problems with excessive air leakage, detrimental air leakage paths through the enclosure, and pressurization for certain building types and environments. In subsequent articles we presented materials and design strategies to minimize air leakage, and discussed how to quantify enclosure air leakage in existing buildings, and the challenges this poses. This article addresses the difficult question of how to predict the amount of air leakage in a new design, and the implications of inaccurate quantification of air leakage in new and existing buildings.
Underestimating air leakage could result in undersized mechanical equipment, but this is not typically the case since mechanical engineers employ safety factors in the design of cooling and, especially, heating systems. With the evolving implementation of sustainable design practices and the understanding of the effects of peak building energy on the required capacity of on-site equipment and the nation's energy infrastructure (i.e., the number of power plants), more sophisticated tools have gained prominence in sizing mechanical equipment and predicting building energy use.
By employing more advanced tools, there is a potential for smaller safety factors and, thus, smaller equipment. However, smaller safety factors coupled with inaccurate assumptions about building performance may cause problems. Complicating matters, building owners and designers often expect energy analyses developed during design to be absolute predictors of building energy use. This expectation is generally unrealistic due to the wide variation in actual versus assumed building operation. In addition, mischaracterizing air leakage rates will reduce the accuracy of energy models and may adversely affect design decisions that depend on whole building energy analysis results.
A theme throughout this series has been the importance of designing and understanding an air barrier as a system. In much the same way, the building enclosure itself is a system made up of many component assemblies, such as fenestration, insulation, waterproofing and air barriers. And, as discussed previously, the overall building is a system with far too many interdependent performance characteristics to allow for compartmentalization of our design evaluations.
Whole Building Energy Analysis
The use of whole building energy analysis (often referred to as "energy modeling") to evaluate design options is becoming increasingly widespread throughout the building design community, and particularly in high performance building design. Energy modeling allows users to include many more aspects of building design and operation than traditional HVAC sizing approaches. Naturally, it is very dependent on the accuracy of those inputs. No matter how simple or complex the tool, its performance relies on the user's understanding of the capabilities and limitations of that tool. The complexity and interdependence of building systems require that engineers constructing energy models understand how the systems interact and how the inputs and assumptions for individual systems relate to other systems and affect the predicted building performance as a whole.
Some practitioners in the building industry may expect energy models to be absolute predictors of building energy use. However, software developers (and most users) make no such claims. Still, the presence of engineers (or "modelers" who are not engineers or architects) who claim the ability to predict actual energy use has provided fodder for energy modeling's detractors. Though this article does not aim to examine, and certainly not adjudicate, the current deliberations in the industry over the proper role of whole building energy analysis, we think it is important to understand that these tools have their shortcomings, but also their advantages. To evaluate design options, system control schemes and building operation considerations, whole building energy analysis is beneficial in its primary role as a comparative tool.
The assumed level of air leakage affects both the predicted energy performance of a building and what systems may appear attractive to a designer. That is, the relative performance of some energy efficiency measures is affected by the assumed air infiltration. The first item here is understood, and perhaps even intuitive, to the majority of designers. Mischaracterizing the air leakage performance of the enclosure in an energy model will likely affect the building energy use predicted by that model. The effect will be particularly pronounced in heating-dominated climates, where the effect of air leakage is more significant.
Probably less intuitive is the fact that incorrect air leakage assumptions, even if they are consistent across all analyses, can affect the predicted improvement or increase in energy use for the design option (or combination of options) being evaluated. At higher leakage rates, the heating and cooling requirements are higher. Since the absolute increase or reduction in energy use is similar under most conditions, the relative effect of a given measure will be smaller at higher leakage rates (the denominator in the savings calculation increases while the numerator remains the same). Generally, however, the opposite problem seems to present itself in many energy models we have seen: the predicted air leakage rate is much lower than that of the actual building. This can result in overstated energy savings, which can affect the economic analyses for a project.
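The denominator effect can be made concrete with a toy calculation. In the sketch below, a measure is assumed (purely for illustration) to save a fixed absolute quantity of energy, and only the assumed baseline, which grows with assumed infiltration, changes:

```python
# Illustration of the "denominator effect": an efficiency measure with a
# fixed absolute saving looks proportionally smaller in a model that
# assumes higher infiltration (and therefore higher total heating and
# cooling energy). All numbers are hypothetical.

measure_savings = 50_000  # kWh/yr, absolute saving of the measure

for label, annual_energy in [("low assumed infiltration", 500_000),
                             ("high assumed infiltration", 800_000)]:
    percent = 100 * measure_savings / annual_energy
    print(f"{label}: {annual_energy:,} kWh/yr baseline -> "
          f"{percent:.1f}% predicted savings")

# low assumed infiltration:  10.0% predicted savings
# high assumed infiltration:  6.3% predicted savings
```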
In most climates, the effect of inaccurate air leakage modeling on the evaluation of design options will not be large enough to change the decision-making process. Existing buildings, as is often the case in energy modeling, present unique challenges. On existing building projects, we are often trying to evaluate and predict the reduction in energy use associated with "tightening" the building enclosure. In these cases, improvements in thermal (or solar heat gain) performance are often coupled to reductions in air leakage in the same energy efficiency strategy (e.g., replacing windows). The best approach is to evaluate the envelope modifications with a range of air leakage rates to gauge the sensitivity of the building's predicted performance to the assumed air leakage. We have found the testing techniques discussed in the previous article to be effective in evaluating an existing building and its envelope improvements. Whole building tests establish a baseline, and component tests can be used to quantify the contribution of specific envelope areas to the overall leakage rate. By assuming a leakage rate for these components after upgrading the envelope, the whole building rate can be adjusted in the energy model.
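One way to carry such measurements into a model is to subtract the tested components' share from the measured whole-building baseline and substitute an assumed post-retrofit rate, then repeat over a range of assumptions. A sketch with hypothetical values:

```python
# Adjust a measured whole-building leakage rate for an envelope upgrade:
# component tests attribute part of the baseline leakage to the windows;
# that share is replaced with an assumed post-replacement leakage.
# All values are hypothetical.

baseline_building_flow = 9_000  # cfm at test pressure, whole-building test
window_flow_existing = 2_400    # cfm attributed to windows by component tests
window_flow_replacement = 400   # cfm assumed for the replacement windows

adjusted_flow = (baseline_building_flow
                 - window_flow_existing
                 + window_flow_replacement)  # 7,000 cfm

enclosure_area_sf = 25_000
print(f"Baseline: {baseline_building_flow / enclosure_area_sf:.2f} cfm/sf")
print(f"Adjusted: {adjusted_flow / enclosure_area_sf:.2f} cfm/sf")

# Running a range of assumed replacement rates (not just one value)
# gauges the sensitivity of the model, as recommended above.
```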
Air Barriers in High Performance Building Design Standards
The first article in this series touched upon the rise in popularity of "green building" rating systems, such as LEED, and the increasing stringency of energy efficiency codes and standards, such as ASHRAE 90.1. We discussed the lack of air barrier requirements in most codes and standards and the absence of any credit in LEED rating systems for reducing air leakage. These systems, and the codes and standards themselves, rely heavily on whole building energy analysis to show compliance or to exhibit improved energy performance. Due to the issues discussed above, the energy savings associated with various energy efficiency measures may be misrepresented if air leakage is modeled inaccurately. That said, the effect this has on certification under LEED, while worthy of discussion, is a secondary concern. A good, continuous air barrier is essential to acceptable enclosure performance in high performance buildings. However, the supporting infrastructure for projects attempting to achieve a sustainable design has not been formalized. The benefits of reduced air leakage must be properly credited in these standards and programs, and the high performance building design process must account for air leakage in whole building energy analyses.
We have now developed an understanding of the importance of good air barrier design; outlined important design considerations, materials and methods; discussed approaches to quantifying and tracking enclosure air leakage in existing buildings; and presented the implications of inaccurate quantification or expectation of air leakage rates. We have touched upon the interaction between the enclosure and other building systems. We now must understand how a good air barrier changes overall building performance and how other systems need to be designed, constructed and controlled. The next article in this series will discuss the requirements for mechanical systems in tight buildings. We will focus primarily on how "business-as-usual" is often not an option and that code requirements may not be sufficient to provide an acceptable level of performance.
About the Authors
Sean O’Brien is a Senior Project Manager in the New York City office of Simpson Gumpertz & Heger Inc. Mr. O’Brien specializes in building science and building envelope performance, including computer simulation of heat, air, and moisture migration issues. He has investigated and designed repairs for a variety of buildings, from condominiums to natatoriums and art museums, and has published extensively on building science-related matters, including moisture migration in masonry wall systems and condensation resistance of windows and curtain walls. He can be reached at [email protected].

Michael Waite is an engineer in the New York City office of Simpson Gumpertz & Heger Inc. He specializes in the interaction between mechanical systems and the building enclosure. He has designed and investigated a wide range of building types and has focused primarily on building energy performance, building enclosure design, and the thermal and hygrothermal performance of building enclosures. He is a member of ASHRAE SSPC 90.1 and its Envelope Subcommittee, as well as several other industry organizations. He can be reached at [email protected].
"date": "2013-05-20T02:16:03",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9321838617324829,
"score": 3.234375,
"token_count": 8557,
"url": "http://www.bdcnetwork.com/print/18271"
} |
Next week, the Obama administration is planning to unveil a climate action plan that it intends to implement without legislative approval. It’s a creative approach to governing, not unlike other executive actions President Obama has taken to bypass Congress.
When lawmakers refused to pass cap-and-trade legislation, Obama announced there was more than one way to skin the cat. Through climate plans, executive orders and regulatory action, he directed his agencies to find ways to curb the country’s carbon dioxide output and commit to reducing greenhouse-gas emissions.
Leading the charge, unsurprisingly, is the Environmental Protection Agency, which will release its carbon-dioxide regulations for existing power plants on Monday. The plan will drive up energy prices for American families and businesses without making a dent in global temperatures.
Our infographic explains what it means for jobs, incomes and the states hurt most.
"date": "2014-10-25T23:32:19",
"dump": "CC-MAIN-2014-42",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119651455.41/warc/CC-MAIN-20141024030051-00320-ip-10-16-133-185.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9489348530769348,
"score": 2.859375,
"token_count": 175,
"url": "http://dailysignal.com/2014/05/29/obama-bypassing-congress-time-going-cost/"
} |
Shiremark Mill, Capel

Shiremark Mill, c.1919

| Mill name | Shiremark Mill |
| Base storeys | Single-storey base |
| Smock sides | Eight sides |
| No. of sails | Four sails |
| Type of sails | Double Patent sails |
| Winding | Hand winded by wheel and chain |
| No. of pairs of millstones | Two pairs |
| Year lost | Burnt down 1972 |
| Other information | Only windmill with stones on a hurst frame south of the River Thames |
Shiremark Mill was built in 1774, incorporating some material from a demolished open trestle post mill which had stood at Clark's Green (TQ 176 398). It was so named because it stood close to the border with Sussex, and although often thought of as a Sussex mill, actually stood just within Surrey by some 20 yards (18 m).
The mill was offered for sale in 1777, described as "new-built", and in 1802 was acquired by the Stone family, who were to work it until 1919. In 1886, the mill was tailwinded and the cap and sails were blown off. Messrs Grist and Steele, the Horsham millwrights, replaced them that year. The mill worked by wind until 1919, when it was stopped on account of a defective curb.
Shiremark Mill slowly became derelict; an inspection by Rex Wailes in 1933 resulted in an estimated repair cost of £100. The cap boarding was repaired but the mill was again left to deteriorate. In 1950, Capel Parish Council approached the Society for the Protection of Ancient Buildings and the owner of the mill with a view to securing the mill's preservation. The mill had been listed as an antiquity by Surrey County Council by 1951. In 1952, a detailed inspection of the mill found that the sills and lower part of the cant posts were rotten. Thompson's, the Alford millwrights, estimated that the mill would cost £2,500 to restore. The main beams of the first floor were supported by brick piers, but no other work was done. Although the mill had all four sails in 1928, the sails fell off one by one, with the last falling in 1956. Photographs show that the cap was intact in August 1958, but by May 1966 the roof had gone, exposing the brake wheel to the weather.
Shiremark Mill was a three-storey smock mill on a single-storey base. There was no stage, earth having been thrown up against the base to form a mill mound. It last worked with four double Patent sails carried on a cast-iron windshaft. The cap was winded by a hand wheel.
The single storey octagonal brick base was 8 feet 6 inches (2.59 m) from floor level to the top of the brickwork internally. Externally it was 5 feet (1.52 m) from ground level to the top of the brickwork, earth having been embanked against the base to allow the sails to be reached for reefing, the mill having originally been built with Common sails. The brickwork tapered in thickness, being 14 inches (356 mm) thick at the top. It was 24 feet (7.32 m) across the flats. By 2006 the base was the only remaining part of the mill, although largely hidden by dense undergrowth.
The three-storey smock tower rested on oak sills of 10 inches (254 mm) by 6 inches (152 mm) in section. The eight oak cant posts were 9 inches (229 mm) square and 21 feet (6.40 m) long, and carried a circular oak curb of 14 feet (4.27 m) diameter at the top. There were two sets of 6 inches (152 mm) square oak transoms at appropriate heights which carried the joists for the internal floors. Each of the twenty-four frames was infilled with a vertical oak post 5 inches (127 mm) square and two diagonal struts 5 by 3 1⁄2 inches (127 by 89 mm) in section. On the bottom floor of the smock there were two doors on opposite sides to enable access whatever direction the sails were facing.
Internally, the bottom floor of the smock was at two levels, with a 4 feet (1.22 m) height difference. The main beams were 23 feet (7.01 m) long and 12 inches (305 mm) square on 6 feet 6 inches (1.98 m) centres. These formed the base of the Hurst Frame, a feature more commonly found in watermills than windmills. Shiremark mill is the only recorded windmill with a hurst frame south of the River Thames. A surviving windmill with a hurst frame is Chesterton Mill, Warwickshire.
The cap was 17 feet (5.18 m) by 14 feet (4.27 m) in plan, and 10 feet 6 inches (3.20 m) in height above the curb. The mill was 40 feet (12.19 m) high from the ground floor to the cap ridge, thus 36 feet 6 inches (11.13 m) from ground level to roof externally. The main cap frame consisted of two sheers, each 12 inches (305 mm) square in section and 16 feet (4.88 m), set 10 feet (3.05 m) apart. The main cross members were the breast beam, the sprattle beam and the tail beam, in order from head to tail. The cross members extended each side of the sheers to form a base for the nine pairs of roof rafters. There was no ridge board to the roof.
The cap was winded by a hand wheel of 8 feet (2.44 m) diameter housed just inside the rear of the cap. The worm wheel that engaged with the cogs set into the top of the tower was latterly a cast-iron one, replacing an earlier wooden one. It was necessary to pull about 1⁄4 mile (400 m) of chain to turn the mill through 180 degrees.
Sails and windshaft
The mill was built with four Common sails. After it was tailwinded in 1886, a new cap, windshaft and four double Patent sails were fitted. The sails were 6 feet 10 inches (2.08 m) wide and spanned 60 feet (18.29 m). Each pair of sails was carried on a stock 39 feet (11.89 m) long and of 14 inches (360 mm) by 12 inches (300 mm) section at the canister, tapering to 6 inches (150 mm) square at the tips. Each stock was strengthened by a pair of clamps, 10 feet (3.05 m) long and 8 inches (200 mm) by 5 inches (130 mm) in section.
The cast-iron windshaft is 16 feet (4.88 m) long overall, with a canister at the outer end to carry the stocks. It was 12 inches (300 mm) diameter at the neck bearing, 8 1⁄2 inches (220 mm) square at the boss for the brake wheel and 6 inches (150 mm) diameter at the tail, the tail bearing itself being 4 inches (100 mm) diameter. The windshaft carried a 9 feet (2.74 m) diameter clasp arm Brake wheel, which had been converted from compass arm construction, the original windshaft having been of wood. The brake wheel had 75 cogs. The windshaft from Shiremark Mill was used in the restoration of Ripple Mill, Ringwould, Kent in 1994.
The elm Upright Shaft was 21 feet (6.40 m) long. It carried a cast-iron Wallower of 3 feet (910 mm) diameter, cast in halves and having 26 teeth; it replaced an earlier wooden wheel. The underside of the wallower had a friction ring which drove the sack hoist. At the foot of the Upright Shaft, a wooden clasp arm Great Spur Wheel of 6 feet 6 inches (1.98 m) diameter with 70 cogs was carried. This drove the two pairs of millstones underdrift. The French Burr stones were driven by a stone nut with 20 cogs, and the Peak stones were driven by a stone nut with 18 cogs. Each pair of millstones was controlled by its own governor, missing at the time of the survey in 1952.
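The cog counts above fix the step-up ratio from the windshaft to each pair of stones; a quick check of the arithmetic (the calculation is ours, the counts are from the text):

```python
# Overall step-up from windshaft (sails) to millstones, using the cog
# counts given above: brake wheel 75 -> wallower 26, then great spur
# wheel 70 -> stone nuts of 20 (French Burr stones) and 18 (Peak stones).

brake_wheel, wallower, great_spur = 75, 26, 70

for stones, nut in [("French Burr", 20), ("Peak", 18)]:
    ratio = (brake_wheel / wallower) * (great_spur / nut)
    print(f"{stones} stones turn {ratio:.1f} times per sail revolution")

# French Burr stones turn ~10.1 times per sail revolution
# Peak stones turn ~11.2 times per sail revolution
```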
- David Southow, 1774–1777
- John Stone, 1802–
- Thomas Stone
- G. Stone
- Eliza Stone
- John Chantler, c. 1875
- William Rapley, 1886
- George Stone, 1919
"date": "2019-04-18T17:12:00",
"dump": "CC-MAIN-2019-18",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578517745.15/warc/CC-MAIN-20190418161426-20190418183426-00376.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9458961486816406,
"score": 2.890625,
"token_count": 2044,
"url": "https://en.m.wikipedia.org/wiki/Shiremark_Mill,_Capel"
} |
Nanoelectromechanics in Engineering and Biology
Michael Pycraft Hughes University of Surrey
The success, growth, and virtually limitless applications of nanotechnology depend upon our ability to manipulate nanoscale objects, which in turn depends upon developing new insights into the interactions of electric fields, nanoparticles, and the molecules that surround them. In the first book to unite and directly address particle electrokinetics and nanotechnology, Nanoelectromechanics in Engineering and Biology provides a thorough grounding in the phenomena associated with nanoscale particle manipulation.
The author delivers a wealth of application and background knowledge, from using electric fields for particle sorting in lab-on-a-chip devices to electrode fabrication, electric field simulation, and computer analysis. It also explores how electromechanics can be applied to sorting DNA molecules, examining viruses, constructing electronic devices with carbon nanotubes, and actuating nanoscale electric motors.
The field of nanotechnology is inherently multidisciplinary-in its principles, in its techniques, and in its applications-and meeting its current and future challenges will require the kind of approach reflected in this book. Unmatched in its scope, Nanoelectromechanics in Engineering and Biology offers an outstanding opportunity for people in all areas of research and technology to explore the use and precise manipulation of nanoscale structures.
Table of Contents: Introduction. Electrokinetics. Colloids and Surfaces. Analysis and Manipulation of Solid Particles. Dielectrophoresis of Complex Bioparticles. Dielectrophoresis, Molecules and Materials. Nanoengineering. Practical Dielectrophoretic Separation. Electrode Structures. Computational Applications in Electromechanics. Dielectrophoretic Response Modeling and MATLAB. Appendix.
Contributors: Hughes, Michael Pycraft (University of Surrey, Guildford, UK)
"date": "2014-04-19T02:48:42",
"dump": "CC-MAIN-2014-15",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00411-ip-10-147-4-33.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8524250984191895,
"score": 2.578125,
"token_count": 399,
"url": "http://www.nanoscienceworks.org/publications/books/imported/0849311837"
} |
Breathing difficulty while lying down is an abnormal condition in which a person must keep the head raised by sitting or standing to be able to breathe deeply or comfortably.
A type of breathing difficulty while lying down is paroxysmal nocturnal dyspnea. This condition causes a person to wake up suddenly during the night feeling short of breath.
Waking at night short of breath; Paroxysmal nocturnal dyspnea; PND; Difficulty breathing while lying down; Orthopnea
This is a common complaint in people with some types of heart or lung problems. Sometimes the problem is subtle. People may only notice it when they realize that sleep is more comfortable with lots of pillows under their head, or their head in a propped-up position.
Linda J. Vorvick, MD, Medical Director and Director of Didactic Curriculum, MEDEX Northwest Division of Physician Assistant Studies, Department of Family Medicine, UW Medicine, School of Medicine, University of Washington. Also reviewed by A.D.A.M. Health Solutions, Ebix, Inc., Editorial Team: David Zieve, MD, MHA, David R. Eltz, Stephanie Slon, and Nissi Wang.
"date": "2015-05-25T21:40:30",
"dump": "CC-MAIN-2015-22",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928715.47/warc/CC-MAIN-20150521113208-00219-ip-10-180-206-219.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8825024366378784,
"score": 2.9375,
"token_count": 323,
"url": "http://www.nortonhealthcare.com/body.cfm?id=10&action=detail&AEArticleID=003076&AEProductID=Adam2004_117&AEProjectTypeIDURL=APT_1"
} |
[EDITOR’S NOTE: Part I of this two-part series appeared in the May issue. Part II follows below and continues, without introductory comments, where the first article ended.]
The Factual Accuracy of the Bible
The Bible claims to be the inspired Word of God. Therefore, it should be accurate in whatever subject(s) it discusses, since God is not the Author of confusion and contradiction (1 Corinthians 14:33), but of truth (John 17:17). The factual accuracy of the Bible is therefore one test of that claim, and time and again the Bible's facts have withstood the test. Examples abound.
Numerous passages indicate that Moses wrote the Pentateuch (2 Chronicles 34:14; Ezra 6:18; Nehemiah 13:1; Exodus 17:14; John 5:46; Mark 12:26). Having been adopted by the royal family of Egypt, he would have had access to the finest schools, best tutors, and greatest libraries that country had to offer, thus securing for himself an impressive education (see Acts 7:22). Yet Bible critics suggested that Moses could not have written the Pentateuch because the art of writing was not developed until well after his death (c. 1451 B.C.). This criticism, however, has been blunted by a plethora of archaeological discoveries. In 1933, J.L. Starkey, who had studied under famed archaeologist W.M.F. Petrie, excavated the city of Lachish, which had figured prominently in Joshua’s conquest of Canaan (Joshua 10). Among other things, he unearthed a pottery water pitcher “inscribed with a dedication in eleven archaic letters, the earliest ‘Hebrew’ inscription known” (Cheyne, 1899, 2:1055). Pfeiffer has noted: “The Old, or palaeo-Hebrew script is the form of writing which is similar to that used by the Phoenicians. A royal inscription of King Shaphatball of Gebal (Byblos) in this alphabet dates from about 1600 B.C.” (1966, p. 33). In 1949, C.F.A. Schaeffer “found a tablet at Ras Shamra containing the thirty letters of the Ugaritic alphabet in their proper order. It was discovered that the sequence of the Ugaritic alphabet was the same as modern Hebrew, revealing that the Hebrew alphabet goes back at least 3,500 years” (Jackson, 1982, p. 32).
The Code of Hammurabi (c. 2000-1700 B.C.) was discovered by a French archaeological expedition under the direction of Jacques de Morgan in 1901-1902 at the ancient site of Susa in what is now Iran. It was written on a piece of black diorite nearly eight feet high, and contained 282 sections. Free and Vos have stated:
The Code of Hammurabi was written several hundred years before the time of Moses (c. 1500-1400 B.C.).... This code, from the period 2000-1700 B.C., contains advanced laws similar to those in the Mosaic laws.... In view of this archaeological evidence, the destructive critic can no longer insist that the laws of Moses are too advanced for his time (1992, pp. 103, 55).
The Code of Hammurabi established beyond doubt that writing was known hundreds of years before Moses. In fact, the renowned Jewish historian, Josephus, confirmed that Moses authored the Pentateuch (Against Apion, 1,8), and various non-Christian writers (Hecataeus, Manetha, Lysimachus, Eupolemus, Tacitus, Juvenal, and Longinus, to name only a few), credited Moses as having authored the first five books of the English Bible (see Rawlinson, 1877, pp. 254ff.).
In days of yore, detractors accused Isaiah of having made a historical mistake when he wrote of Sargon as king of Assyria (Isaiah 20:1). For years, this remained the sole historical reference—secular or biblical—to Sargon having been linked with the Assyrian nation. Thus, critics assumed Isaiah had erred. But in 1843, Paul Emile Botta, the French consular agent at Mosul, working with Austen Layard, unearthed historical evidence that established Sargon as having been exactly what Isaiah said he was—king of the Assyrians. At Khorsabad, Botta discovered Sargon’s palace. Apparently, from what scholars have been able to piece together from archaeological and historical records, Sargon made his capital successively at Ashur, Calah, Nineveh, and finally at Khorsabad, where his palace was constructed in the closing years of his reign (c. 706 B.C.). The walls of the palace were adorned quite intricately with ornate text that described the events of his reign. Today, an artifact from the palace—a forty-ton stone bull (slab)—is on display at the University of Chicago’s Oriental Institute (“weighty” evidence of Sargon’s existence). Isaiah had been correct all along. And the critics had been wrong—all along.
One of the most famous archaeologists of the last century was Sir William Ramsay, who disputed the accuracy of events recorded by Luke in the book of Acts. Ramsay believed those events to be little more than second-century, fictitious accounts. Yet after years of literally digging through the evidence in Asia Minor, Ramsay concluded that Luke was an exemplary historian. In the decades since Ramsay, other scholars have suggested that Luke’s historical background of the New Testament is among the best ever produced. As Wayne Jackson has noted:
In Acts, Luke mentions thirty-two countries, fifty-four cities, and nine Mediterranean islands. He also mentions ninety-five persons, sixty-two of which are not named elsewhere in the New Testament. And his references, where checkable, are always correct. This is truly remarkable, in view of the fact that the political/territorial situation of his day was in a state of almost constant change. Only inspiration can account for Luke’s precision (1991b, 27:2).
Other Bible critics have suggested that Luke misspoke when he designated Sergius Paulus as proconsul of Cyprus (Acts 13:7). Their claim was that Cyprus was governed by a propraetor (also known as a consular legate), not a proconsul. Upon further examination, such a charge can be seen to be utterly vacuous, as Thomas Eaves has documented.
As we turn to the writers of history for that period, Dia Cassius (Roman History) and Strabo (The Geography of Strabo), we learn that there were two periods of Cyprus’ history: first, it was an imperial province governed by a propraetor, and later in 22 B.C., it was made a senatorial province governed by a proconsul. Therefore, the historians support Luke in his statement that Cyprus was ruled by a proconsul, for it was between 40-50 A.D. when Paul made his first missionary journey. If we accept secular history as being true we must also accept Biblical history for they are in agreement (1980, p. 234).
The science of archaeology seems to have outdone itself in verifying the Scriptures. Famed archaeologist William F. Albright wrote: “There can be no doubt that archaeology has confirmed the substantial historicity of the Old Testament tradition” (1953, p. 176). Nelson Glueck, himself a pillar within the archaeological community, said: “It may be stated categorically that no archaeological discovery has ever controverted a Biblical reference. Scores of archaeological findings have been made which conform in clear outline or exact detail historical statements in the Bible” (1959, p. 31). Such statements, offered 30+ years ago, are as true today as the day they were made. Jerry Moffitt has observed:
Over thirty names (emperors, high priests, Roman governors, princes, etc.) are mentioned in the New Testament, and all but a handful have been verified. In every way the Bible accounts have been found accurate (though vigorously challenged). In no single case does the Bible let us down in geographical accuracy. Without one mistake, the Bible lists around forty-five countries. Each is accurately placed and named. About the same number of cities are named and no one mistake can be listed. Further, about thirty-six towns are mentioned, and most have been identified. Wherever accuracy can be checked, minute detail has been found correct—every time! (1993, p. 129).
The Hittites are mentioned over forty times in Scripture (Exodus 23:28; Joshua 1:4; et al.), and were so feared that on one occasion they caused the Syrians to flee from Israel (2 Kings 7:6). Yet critics suggested that Hittites were a figment of the Bible writers’ imaginations, since no evidence of their existence had been located. But in the late 1800s, A.H. Sayce discovered inscriptions in Syria that he designated as Hittite. Then, in 1906, Hugh Winckler excavated Boghazkoy, Turkey and discovered that the Hittite capital had been located on that very site. His find was all the more powerful because of the more than 10,000 clay tablets contained in the ancient city’s library, containing the society’s law system that eventually came to be known as the Hittite Code. Thus, Ira Price wrote of the Hittites:
The lack of extra-biblical testimony to their existence led some scholars about a half-century ago to deny their historicity. They scoffed at the idea of Israel allying herself with such an unhistorical people as the Hittites, as narrated in 2 Kings vii.6. But those utterances have vanished into thin air (1907, pp. 75-76).
In his classic text, Lands of the Bible, J.W. McGarvey remarked:
A fictitious narrative, located in a country with which the writer is not personally familiar, must either avoid local allusions or be found frequently in conflict with the peculiarities of place and of manners and customs. By this conflict the fictitious character of the narrative is exposed (1881, p. 375).
McGarvey then documented numerous instances in which the facts of the Bible can be checked, and in which it always passes the test. Are compass references accurate? Is Antioch of Syria “down” from Jerusalem, even though it lies to the north of the holy city (Acts 15:1)? Is the way from Jerusalem to Gaza “south” of Samaria (Acts 8:26)? Is Egypt “down” from Canaan (Genesis 12:10)? McGarvey noted that “in not a single instance of this kind has any of the Bible writers been found at fault” (p. 378). Further, as Wayne Jackson has commented:
In 1790, William Paley, the celebrated Anglican scholar, authored his famous volume, Horae Paulinae (Hours with Paul). In this remarkable book, Paley demonstrated an amazing array of “undesigned coincidences” between the book of Acts and the epistles of Paul, which argue for the credibility of the Christian revelation. “These coincidences,” said Paley, “which are often incorporated or intertwined in references and allusions, in which no art can be discovered, and no contrivance traced, furnish numerous proofs of the truth of both these works, and consequently that of Christianity” (1839 edition, p. xvi). In 1847, J.J. Blunt of Cambridge University released a companion volume titled, Undesigned Coincidences in the Writings of Both the Old Testament and New Testament. Professor Blunt argued that both Testaments contain numerous examples of “consistency without contrivance” which support the Scriptures’ claim of a unified origin from a supernatural source, namely God (1884, p. vii) (1991a, pp. 2-3).
A sampling from Paley’s and Blunt’s books provides startling evidence of the fact that the writers could not have “contrived” their stories. Often the writers were separated from one another by centuries, yet their stories dovetail with astounding accuracy, and provide additional proof of the Bible’s inspiration.
When Joseph was seventeen years old, he was sold into Egyptian slavery by his brothers. While serving in the house of an Egyptian named Potiphar, Joseph found himself the object of affection of Potiphar’s wife, whose advances he rejected. Her anger aroused, she fabricated a story that resulted in Joseph’s being thrown into prison where the king’s captives were “bound” (Genesis 39:20). In the context of this passage, the word “bound” is of critical importance, because hundreds of years after the fact the psalmist would state of Joseph: “His feet they hurt with fetters: He was laid in chains of iron” (Psalm 105:18). Contrivance—or consistency?
When Pharaoh stubbornly refused to release the Israelites from bondage, God rained down plagues on the Egyptian monarch and his people, including a plague of hail that destroyed the flax in the fields (Exodus 9:31). Eventually, the Israelites were released, traveled to the wilderness of Sinai, were found faithless in God’s sight, and were forced to wander for four decades while everyone over the age of twenty perished (except for the houses of Joshua and Caleb—Numbers 14:29-30). Finally, however, the Hebrews were allowed to enter the promised land of Canaan. The arrival of the younger generation was exactly forty years after Moses had led them out of Egypt (Joshua 4:19), and thus shortly before the anniversary of that seventh plague which destroyed the flax. The book of Joshua mentions that their entrance into Canaan was near harvest time (3:15). Interestingly, when spies were sent to investigate the city of Jericho, the Bible notes that they were concealed by Rahab under drying stalks of flax upon the rooftop of her house (Joshua 2:6). Coincidence—or concordance?
In Exodus 1:11, the story is told of how the Israelites were forced to build the treasure cities of Pithom and Raamses for the Egyptian ruler. Exodus 5 records that, initially, the slaves made bricks containing straw, but later were forced to use stubble because Pharaoh ordered his taskmasters not to provide any more straw. Excavations at Pithom in 1883 by Naville, and in 1908 by Kyle, discovered that the lower layers of the structures were made of bricks filled with good, chopped straw. The middle layers had less straw with some stubble. The upper layers contained bricks that were made of pure clay, with no straw whatsoever (see Pfeiffer, 1966, p. 459). Contrivance—or correctness?
The Tell el-Amarna Tablets (c. 1450 B.C.) record the custom of bowing down seven times when meeting a superior. Thus the statement in Genesis 33:3—“And he [Jacob] himself passed over before them, and bowed himself to the ground seven times, until he came near to his brother [Esau]”—is confirmed as an act of respect. Coincidence—or consistency?
In at least two places, the Old Testament speaks of the Horites (Genesis 14:6; 36:21). Until approximately 1925, no one ever had heard of the Horites. Once again, however, archaeology revealed the factual accuracy of the Bible. About 1925, archaeological discoveries helped explain the existence of this formerly unknown nation. Free and Vos have commented that “Horite” derives from the Egyptian Hurru, which is “...a general term the Egyptians applied to southern Transjordan...,” and that “...the Hebrews adopted it from the Egyptians” (1992, p. 66). Thus, both Egyptian and Hebrew cultures were intertwined with the Horites. Contrivance—or concordance?
On one occasion during His earthly ministry, Jesus miraculously provided a meal for more than 5,000 people. Mark records that the Lord seated the people upon the “green grass” (6:39). Such a statement agrees completely with John’s reference to the fact that this event occurred near the time of the Passover (6:44), which is in the spring—exactly the time in Palestine when the grass should be green. Coincidence—or correctness?
In Acts 28:20, Luke described Paul’s Roman imprisonment, and quoted the apostle as proclaiming: “...because of the hope of Israel I am bound with this chain.” During this incarceration, Paul penned four important letters (Ephesians, Philippians, Colossians, and Philemon). In his epistle to the Ephesians, Paul alluded to his “chain” (6:20). In Philippians he referred to his “bonds” (1:7,13-14,17). Similarly, see the references to his “bonds” in Colossians 4:3 and Philemon 1:13. Coincidence—or consistency?
In his second letter to Timothy, Paul admonished the young man by stating that “...from a babe thou hast known the sacred writings which are able to make thee wise unto salvation through faith which is in Christ Jesus” (2 Timothy 3:15). The reference to the “sacred writings” is an allusion to the Old Testament. Since Timothy had known those writings from his earliest days, certainly it would be safe to suggest that his background was Jewish. As a matter of fact, the book of Acts states Timothy was “the son of a Jewess that believed, but his father was a Greek” (Acts 16:1). Of further interest is the fact that when Paul commended Timothy for his strong faith (2 Timothy 1:5), he alluded to the spirituality of both the young man’s mother and grandmother, yet made no mention of Timothy’s father. Coincidence—or concordance?
In their book, A General Introduction to the Bible, Geisler and Nix wrote: “Confirmation of the Bible’s accuracy in factual matters lends credibility to its claims when speaking on other subjects” (1986, p. 195). Indeed it does! After previewing most of the above facts, and others of a similar nature, Wayne Jackson concluded:
The Bible critic is likely to trivialize these examples as they are isolated from one another. When, however, literally hundreds and hundreds of these incidental details are observed to perfectly mesh, one begins to suspect that what have been called “undesigned coincidences” (from the human vantage point) become very obvious cases of divinely designed harmony—tiny footprints that lead only to the conclusion that God was the guiding Force behind the composition of the Sacred Scriptures (1991a, 11:3).
The Prophecy of the Bible
One of the most impressive internal proofs of the Bible’s inspiration is its prophetic utterances. Rex A. Turner Sr. has suggested:
Predictive prophecy is the highest evidence of divine revelation. The one thing that mortal man cannot do is to know and report future events in the absence of a train of circumstances that naturally suggest certain possibilities... (1989, p. 12).
If the Bible is inspired of God, it should contain valid, predictive prophecy. In fact, the Bible’s prophecy—completely foretold to the minutest detail, and painstakingly fulfilled with the greatest precision—has confounded its critics for generations. The Bible contains prophecies about individuals, lands, nations, and even the predicted Messiah.
Thomas H. Horne defined predictive prophecy as “a miracle of knowledge, a declaration or representation of something future, beyond the power of human sagacity to discern or to calculate” (1970, 1:272). The Bible confirms that definition:
But the prophet, that shall speak a word presumptuously in my name, which I have not commanded him to speak, or that shall speak in the name of other gods, that same prophet shall die. And if thou say in thy heart, How shall we know the word which Jehovah hath not spoken? when a prophet speaketh in the name of Jehovah, if the thing follow not, nor come to pass, that is the thing which Jehovah hath not spoken: the prophet hath spoken it presumptuously, thou shalt not be afraid of him (Deuteronomy 18:20-22).
The prophet Isaiah based the credibility of his message on prophecy. To the promoters of idolatry in his day, he issued the following challenge: “Let them bring forth, and declare unto us what shall happen: declare ye the former things, what they are, that we may consider them, and know the latter end of them; or show us things to come” (Isaiah 41:22). His point was this: It is one thing to make the prediction; it is entirely another to see that prediction actually come true and be corroborated by subsequent history.
In order for a prophecy to be valid, it must meet certain criteria. First, it must be a specific, detailed declaration, as opposed to being nebulous, vague, or general in nature. Arthur Pierson wrote: “The particulars of the prophecy should be so many and minute that there shall be no possibility of accounting by shrewd guess-work for the accuracy of the fulfillment” (1913, pp. 75-76). Bernard Ramm has suggested: “The prophecy must be more than a good guess or a conjecture. It must possess sufficient precision as to be capable of verification by means of the fulfillment” (1971, p. 82). Second, there must be a sufficient amount of time between the prophetic statement and its fulfillment. Suggestions as to what “might” happen in the future do not qualify as prophetic pronouncements. Rather, the prophecy must precede the fulfillment in a significant fashion, and there must be no chance whatsoever of the prophet having the ability to influence the outcome.
Third, the prophecy must be stated in clear, understandable terms. Roger Dickson has noted: “Prophecies must be sufficiently clear in order for the observer to be able to link pronouncement with fulfillment. If a prophecy is not understandable enough so as to allow the observer to depict its fulfillment, then what good would the prophecy be?” (1997, p. 346). Fourth, the prophecy must not have historical overtones. In other words, true prophecy should not be based on past (or current) societal or economic conditions. Pierson amplified this point by stating that: “There should have been nothing in previous history which makes it possible to forecast a like event in the future” (1913, p. 75). Fifth, a clear, understandable, exact prophecy must have a clear, understandable, exact fulfillment. It is not enough to suggest that a certain event came true with a “high degree of probability.” The fulfillment must be unmistakable, and must match the prophecy in every detail.
Two questions, then, are in order: (1) does the Bible employ predictive prophecy; and (2) if it does, can the predictive prophecy be proven true? The answer to both questions is a resounding “yes!” Further, the Bible’s prophecy fits the above standards perfectly—each and every time. Consider just a few brief examples.
Within the Sacred Volume, numerous prophecies are presented regarding the rise, decline, and eventual fall of kings, cities, and even nations. (1) The Bible foretells the destruction of the city of Tyre with miraculous precision. Ezekiel predicted that Nebuchadnezzar, king of Babylon, would destroy the city (Ezekiel 26:7-8). Many nations were to come up against Tyre (26:3). The city would be leveled and scraped clean like a bare rock (26:4). The city’s stones, timbers, and soil would be cast into the sea (26:12). The surrounding area would become a place for the spreading of fishermen’s nets (26:5). And, finally, the city never would be rebuilt to its former glory (26:14).
History records that each of these predictions came true. Tyre, a coastal city from ancient times, had a somewhat unusual arrangement. In addition to the inland city, there was an island about three-fourths of a mile offshore. Nebuchadnezzar besieged the mainland city in 586 B.C., but when he finally was able to inhabit the city in about 573 B.C., his victory was hollow. Unbeknownst to him, the inhabitants had vacated the city and moved to the island—a situation that remained virtually unchanged for the next 241 years. Then, in 332 B.C., Alexander the Great conquered the city—but not with ease. To get to the island, he literally had his army “scrape clean” the inland city of its debris, and he then used those materials (stones, timbers, and soil) to build a causeway to the island. But even though Alexander inflicted severe damage on the city, it still remained intact. In fact, it waxed and waned for the next 1,600 years until finally, in A.D. 1291, the Muslims thoroughly crushed Tyre.
The city never regained its once-famous position of wealth and power. The prophet Ezekiel looked 1,900 years into the future and predicted that Tyre would be a bald rock where fishermen gathered to open their nets. And that is exactly what history records as having happened (see Bromling, 1994, p. 96; Major, 1996, pp. 93-95).
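Because the chronology spans several long intervals, a quick sketch of the date arithmetic may help the reader check it. The helper function below is ours, the c. 590 B.C. date for Ezekiel's oracle is an approximation, and the only convention encoded is the standard one that there is no year zero between 1 B.C. and A.D. 1.

```python
def years_between(start, end):
    """Years elapsed between two dates given as signed years
    (B.C. negative, A.D. positive); there is no year zero."""
    span = end - start
    if start < 0 < end:  # crossing from B.C. to A.D. skips year 0
        span -= 1
    return span

print(years_between(-586, -573))  # 13: length of Nebuchadnezzar's siege
print(years_between(-573, -332))  # 241: the island period, as stated above
print(years_between(-332, 1291))  # 1622: Alexander to "the next 1,600 years"
print(years_between(-590, 1291))  # 1880: roughly the "1,900 years" of the prophecy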
(2) During a time in the history of Israel in which God’s people had delved deeply into idolatry, the prophet Isaiah foretold that God would raise up the Assyrians, as His “rod of anger” in order to punish the disobedient Hebrews (Isaiah 10:5-6). But, Isaiah noted, after that had been accomplished, God would see to it that the Assyrians themselves were punished for their own wicked deeds (Isaiah 10:12,24-25).
Archaeology has revealed some impressive facts regarding this prophecy. Assyrian records discovered in recent years discuss the fact that in the reign of Hoshea, king of Israel, Shalmaneser, ruler of Assyria, assaulted Samaria, the capital city of Israel. However, he died before completing the assault, which was taken up by his successor, Sargon, who captured the city (cf. 2 Kings 18:10). An Assyrian clay prism comments on the fact that 27,290 Israelite captives were taken in the conflict. Almost twenty-five years later, the Assyrian king Sennacherib once again invaded Palestine (2 Kings 18:13ff.). Archaeological records report that 46 Judean cities were captured, and that 200,150 Israelites were taken into captivity. Jerusalem, however, was not conquered—a fact that is noteworthy, since 2 Kings 19:32-34 predicted that Sennacherib would be unable to take the holy city.
The Taylor Cylinder, discovered at Nineveh in 1830, presents the history of the Assyrians’ assault, and states that king Hezekiah of Judah was “shut up like a bird in a cage.” But was Jerusalem itself spared? It was. And were the wicked Assyrians punished? They were. The account, provided in 2 Kings 19:35, indicates that in a single night, God annihilated 185,000 Assyrian soldiers who had encircled Jerusalem. In addition, the prophecy stated that Sennacherib would return to his home, and there fall by the sword (2 Kings 19:7). Some twenty years later, he was assassinated by his own sons, who smote him with the sword while he was worshiping pagan deities (Isaiah 37:37-38).
(3) The Old Testament contains more than three hundred messianic prophecies. As Hugo McCord has said, “Testimony about Jesus was the chief purpose of prophecy. To him all the prophets gave witness (Acts 10:43)” (1979, p. 332). The Prophesied One would be born of a woman (Genesis 3:15; Galatians 4:4), of the seed of Abraham (Genesis 22:18; Luke 3:34), of the tribe of Judah (Genesis 49:10; Hebrews 7:14), of the royal lineage of David (2 Samuel 7:12; Luke 1:32), in Bethlehem (Micah 5:2; Matthew 2:1), to the virgin Mary (Isaiah 7:14; Matthew 1:22), in order to bruise the head of Satan (Genesis 3:15; Galatians 4:4; Hebrews 2:12-14).
His Galilean ministry was foretold (Isaiah 9:1-2), and it was prophesied that a forerunner would announce His arrival (Isaiah 40:3; Matthew 3:1-3). He would appear during the days of the Roman reign (Daniel 2:44; Luke 2:1), while Judah still possessed her own king (Genesis 49:10; Matthew 2:22). He would be killed some 490 years after the command to restore Jerusalem at the end of the Babylonian captivity (457 B.C.), i.e., A.D. 30 (Daniel 9:24ff.). He was to be both human and divine; though born, He was eternal (Micah 5:2; John 1:1,14); though a man, He was Jehovah’s “fellow” (Zechariah 13:7; John 10:30; Philippians 2:6). He was to be gentle and compassionate in His dealings with mankind (Isaiah 42:1-4; Matthew 12:15-21). He would submit perfectly to His heavenly Father (Psalm 40:8; Isaiah 53:11; John 8:29; 2 Corinthians 5:21; 1 Peter 2:22).
The prophecy was that He would be rejected and know grief (Isaiah 53:3), and be betrayed by a friend (Psalm 41:9) for thirty pieces of silver (Zechariah 11:12). He was (John 13:18; Matthew 26:15). He would be spit upon, and beaten (Isaiah 50:6; 53:5), and in death both His hands and His feet were to be pierced (Psalm 22:16). This is exactly what happened (Matthew 27:30; Luke 24:39). The Scriptures foretold that He would be numbered among criminals (Isaiah 53:12), which He was (Matthew 27:38). He would be mocked, not only with scornful words (Psalm 22:7-8), but with bitter wine (Psalm 69:21). So He was (Matthew 27:39,48). Although He would die and be placed in a rich man’s tomb (Isaiah 53:9; Matthew 27:57), His bones would not be broken (Psalm 34:20; John 19:33), and His flesh would not see corruption, because He would be raised from the dead (Psalm 16:10; Acts 2:22ff.), and eventually ascend into heaven (Psalm 110:1-3; 45:6; Acts 1:9-10).
Time and again biblical prophecies are presented, and fulfilled, with exacting detail. Jeremiah wrote: “...when the word of the prophet shall come to pass, then shall the prophet be known, that Jehovah hath truly sent him” (28:9).
The Scientific Foreknowledge of the Bible
Among the intriguing proofs of the Bible’s inspiration is its unique scientific foreknowledge. From anthropology to zoology, the Bible presents astonishingly accurate scientific information that the writers, on their own, simply could not have known. Jean S. Morton has observed:
Many scientific facts, which prove the infallibility of Scripture, are tucked away in its pages. These proofs are given in nonscientific language; nevertheless, they substantiate the claims of authenticity of the Holy Scriptures.... In some cases, scientific concepts have been known through the ages, but these concepts are mentioned in a unique manner in Scripture. In other cases, scientific topics have been mentioned hundreds or even thousands of years before man discovered them (1978, p. 10).
Space limitations prohibit an in-depth examination of the Bible’s scientific foreknowledge, but I would like to mention just one of the more prominent examples. For those who might desire additional information, I have dealt with this theme elsewhere in a much more extensive fashion (Thompson, 1981, 1:33-36; Thompson and Jackson, 1992, pp. 125-137).
In Genesis 17:12, God commanded Abraham to circumcise newborn males on the eighth day. But why day eight? In humans, blood clotting is dependent upon three factors: (a) platelets; (b) vitamin K; and (c) prothrombin. In 1935, professor H. Dam proposed the name “vitamin K” for the factor that helped prevent hemorrhaging in chicks. We now realize that vitamin K is responsible for the production (by the liver) of prothrombin. If vitamin K is deficient, there will be a prothrombin deficiency and hemorrhaging may occur.
Interestingly, it is only on the fifth to seventh days of a newborn’s life that vitamin K (produced by the action of bacteria in the intestinal tract) is present in adequate quantities. Vitamin K—coupled with prothrombin—causes blood coagulation, which is important in any surgical procedure. A classic medical text, Holt Pediatrics, corroborates that a newborn infant has
...peculiar susceptibility to bleeding between the second and fifth days of life.... Hemorrhages at this time, though often inconsequential, are sometimes extensive; they may produce serious damage to internal organs, especially to the brain, and cause death from shock and exsanguination (1953, pp. 125-126).
Obviously, then, if vitamin K is not produced in sufficient quantities until days five through seven, it would be wise to postpone any surgery until sometime after that. But why did God specify day eight?
On the eighth day, the amount of prothrombin present actually is elevated above 100 percent of normal. In fact, day eight is the only day in the male’s life in which this will be the case under normal conditions. If surgery is to be performed, day eight is the perfect day to do it. S.I. McMillen, the renowned medical doctor who authored None of These Diseases, wrote concerning this information:
...as we congratulate medical science for this recent finding, we can almost hear the leaves of the Bible rustling. They would like to remind us that four thousand years ago, when God initiated circumcision with Abraham, He said “And he that is eight days old shall be circumcised....” Abraham did not pick the eighth day after many centuries of trial-and-error experiments. Neither he nor any of his company from the ancient city of Ur in the Chaldees had ever been circumcised. It was a day picked by the Creator of vitamin K (1963, p. 21, emp. in orig.).
The information employed by Abraham, and confirmed in writing by Moses, was accurate scientifically then, and remains so now. No culture possessed such scientific acumen, which was years ahead of its time. How, then, did Abraham and Moses come to know the best time for circumcision, unless, of course, this fact was revealed to them by God, and recorded in His Word through inspiration?
There are numerous instances of scientific foreknowledge within the Bible. The incredible accuracy of the Bible’s science is yet another example of God’s superintending guidance, and one that provides impressive proof of its inspiration.
Those who have set their face against God have railed against the Bible for generations. King Jehoiakim took his penknife, slashed the Old Testament Scriptures to pieces, and tossed them into a fire (Jeremiah 36:22-23). During the Middle Ages, attempts were made to keep the Bible from the man on the street. In fact, those caught translating or distributing the Scriptures often were subjected to imprisonment, torture, and even death. Centuries later, the French skeptic Voltaire boasted that “within fifty years, the Bible will no longer be discussed among educated people.” His braggadocio notwithstanding, the Bible still is being discussed among educated people, while the name of Voltaire languishes in relative obscurity.
In the late 1800s, American infidel Robert Ingersoll claimed regarding the Bible: “In fifteen years, I will have this book in the morgue.” But, as history records, Ingersoll ended up in the morgue, while the Bible lives on. Like the blacksmith’s anvil—which wears out many hammers but itself remains unaffected—the Bible wears out the skeptics’ innocuous charges, all the while remaining unscathed. John Clifford (1836-1923), a Baptist minister and social reformer, once wrote:
Last eve I passed beside a blacksmith’s door,
And heard the anvil ring the vesper chime;
Then looking, I saw upon the floor,
Old hammers, worn with beating years of time
“How many anvils have you had,” said I,
“To wear and batter all these hammers so?”
“Just one,” said he, and then with twinkling eye;
“The anvil wears the hammers out, ye know.”
And so, thought I, the anvil of God’s Word,
For ages skeptic blows have beat upon;
Yet though the noise of falling blows was heard
The anvil is unharmed...the hammers gone.
Governments come and go. Nations rise and fall. People live and die. Jesus warned that “heaven and earth shall pass away” (Matthew 24:35), but went on to note that “my words shall not pass away.” Isaiah wrote: “The grass withereth, the flower fadeth; but the word of our God shall stand forever” (40:8). It is fitting that we end this study with the following statements from Kenny Barfield’s book, Why the Bible is Number 1.
We have seen how the biblical materials are unique. They are no run-of-the-mill religious writings, but—quite the contrary—reveal a remarkable understanding of the universe. ...How did the biblical writers manage to avoid the erroneous world views of their contemporaries? What made these men capable of producing painstakingly accurate scientific statements far in advance of their actual discovery? We want answers to those important questions....
One answer has been suggested by the biblical writers themselves. If their materials are so radically different from other sources, surely we must listen to their explanation. Rather than finding confusion or uncertainty in their ranks, we find calm unanimity.
They refused to be called geniuses and scorned personal glory. Even more significant, they denied having figured it out for themselves. In fact, there is reason to believe that they never really understood the far-reaching implications of the words they wrote.
Humbly, without a dissenting voice, these writers gave credit to a superior being. One of their favorite phrases was: “This is the Word of God.” They sensed a far-greater intelligence behind this universe than that of any mortal. They stood in awe before that wisdom and power. They even wrote words on their papyri and scrolls that made little earthly sense. “All Scripture is given by inspiration of God.” It was the only answer they ever gave.
It is the thesis of this study that one must simply look at the trademark, the signature of authorship.... Unless we can devise a more suitable explanation, it seems reasonable to believe that the seemingly incongruous wisdom was placed in the Bible by an intelligence far greater than that of man. That intelligence is God’s alone (1988, pp. 182,184-185).
Albright, William F. (1953), Archaeology and the Religion of Israel (Baltimore, MD: Johns Hopkins University Press).
Barfield, Kenny (1988), Why the Bible is Number 1 (Grand Rapids, MI: Baker).
Blunt, J.J. (1884), Undesigned Coincidences in the Writings of the Old and New Testaments (London: John Murray).
Bromling, Brad T. (1994), “Prophetic Precision,” Reason & Revelation, 14:96, December.
Cheyne, T.K., ed. (1899), Encyclopedia Biblica (London: A&C Black).
Dickson, Roger E. (1997), The Dawn of Belief (Winona, MS: Choate).
Eaves, Thomas F. (1980), “The Inspired Word,” Great Doctrines of the Bible, ed. M.H. Tucker (Knoxville, TN: East Tennessee School of Preaching).
Free, Joseph P. and Howard F. Vos (1992), Archaeology and Bible History (Grand Rapids, MI: Zondervan).
Geisler, Norman and William E. Nix (1986), A General Introduction to the Bible (Chicago, IL: Moody).
Glueck, Nelson (1959), Rivers in the Desert: A History of the Negev (New York: Farrar, Strauss, and Cudahy).
Holt, L.E. and R. McIntosh (1953), Holt Pediatrics (New York: Appleton-Century-Crofts), twelfth edition.
Horne, Thomas H. (1970 reprint), An Introduction to the Critical Study and Knowledge of the Holy Scriptures (Grand Rapids, MI: Baker).
Jackson, Wayne (1982), Biblical Studies in the Light of Archaeology (Montgomery, AL: Apologetics Press).
Jackson, Wayne (1991a), “Bible Unity—An Argument for Inspiration,” Reason & Revelation, 11:1, January.
Jackson, Wayne (1991b), “The Holy Bible—Inspired of God,” Christian Courier, 27:1-3, May.
Major, Trevor (1996), “The Fall of Tyre,” Reason & Revelation, 16:93-95, December.
McCord, Hugo (1979), “Internal Evidences of the Bible’s Inspiration,” The Holy Scriptures, ed. Wendell Winkler (Fort Worth, TX: Winkler Publications).
McGarvey, J.W. (1881), Lands of the Bible (Philadelphia, PA: Lippincott).
McMillen, S.I. (1963), None of These Diseases (Old Tappan, NJ: Revell).
Moffitt, Jerry (1993), “Arguments Used to Establish an Inerrant, Infallible Bible,” Biblical Inerrancy, ed. Jerry Moffitt (Portland, TX: Portland Church of Christ).
Morton, Jean S. (1978), Science in the Bible (Chicago, IL: Moody).
Paley, William (1839), The Works of William Paley (Edinburgh: Thomas Nelson).
Pfeiffer, Charles F. (1966), The Biblical World (Grand Rapids, MI: Baker).
Pierson, Arthur T. (1913), The Scriptures: God’s Living Oracles (London: Revell).
Price, Ira (1907), The Monuments and the Old Testament (Philadelphia, PA: American Baptist Publication Society).
Ramm, Bernard H. (1971), Protestant Christian Evidences (Chicago, IL: Moody).
Rawlinson, George (1877), Historical Evidences of the Truth of the Scripture Records (New York: Sheldon & Company).
Thompson, Bert (1981), “Science in the Bible,” Reason & Revelation, 1:33-36, September.
Thompson, Bert and Wayne Jackson (1992), A Study Course in Christian Evidences (Montgomery, AL: Apologetics Press).
Turner, Rex A. Sr. (1989), Systematic Theology (Montgomery, AL: Alabama Christian School of Religion).
[AUTHOR’S NOTE: I would like to thank my friend and colleague, Wayne Jackson, for graciously allowing me to use freely, during the preparation of this series of articles, various materials he has authored on the inspiration of the Bible.]
Gown of Green (I), The
DESCRIPTION: Polly agrees "to wear the gown of green." The singer leaves "to fight our relations in North America." Many are killed. Some men foolishly buy their sweethearts toys, rings and posies; "give her the gown of green to wear, and she will follow you"
EARLIEST DATE: 1818 (_The Vocal Library_, according to Kidson) [but note the 18C "answer" and the several broadsides from before 1813]
KEYWORDS: courting sex war separation death America lover soldier
FOUND IN: Britain(England(Lond,North),Scotland(Aber))
REFERENCES (3 citations):
GreigDuncan4 907, "The Gown o' Green" (4 texts plus a single verse on p. 575, 4 tunes)
Kidson-Tunes, pp. 61-63, "The Gown of Green" (1 text, 1 tune)
OShaughnessy-Yellowbelly1 19, "The Gown of Green" (1 text, 1 tune)
Bodleian, Harding B 16(106a), "The Gown of Green" ("As my love and I was walking to view the meadows round"), J. Evans (London), 1780-1812; also Harding B 25(766), Harding B 17(116b), Firth c.14(198), Harding B 11(1098), Harding B 11(2104), Harding B 25(766), "The Gown of Green"
cf. "Erin's Lovely Home" (tune, per GreigDuncan4)
NOTES: The description follows broadside Bodleian Harding B 11(1098).
GreigDuncan4 quoting Duncan: "Learnt fifty to sixty years ago in Kinardine. Noted 26th November 1908."
GreigDuncan4 versions share only the first two verses with the broadsides. The description is from the broadsides, but GreigDuncan4 versions omit the narrative dealing with separation and North America and go right to commentary on the fickleness of young women. One verse of GreigDuncan4 907B seems not connected to the tradition but is suggested by "the fickleness of young women" theme: "When Adam was created, and none on earth but he, And Eve she was his only bride, and full of modesty, No bed of down, I'm sure they had, but on a flowery plain, No wonder that her daughters love to wear the goons o' green."
There are broadsides answering "The Gown of Green"; see, for example, Bodleian, Harding B 25(767), "The Answer to The Gown of Green" ("As a soldier was walking all on the highway"), J. Grundy (Worcester), 18C; also Harding B 25(766), "Answer to The Gown of Green"; 2806 c.18(132), "Sequel to The Gown of Green" - BS
Roud assigns the same number to "The Gown of Green" (I) and (II). The two are obviously related though there is no overlap in story or evidence that they are fragments of some longer ballad; in fact, the wars are not the same. - BS
(In fact it's just possible that they are the same, though not likely. During the American Revolutionary War, Spain was fighting against Britain; if the hero was a sailor, or simply a soldier being transported in a warship, it's just possible that he could have been in a fight with a Spaniard. Alternately, if we reverse the place where he lost the limb, Our Hero could have fought in Wellington's Peninsular Campaign in Spain, then been shipped to America to fight in the War of 1812. That happened to several regiments. - RBW)
The expression "Gown of Green" predates this song. See, for example, J Woodfall Ebsworth, The Roxburghe Ballads, (Hertford: The Ballad Society, 1897 ("Digitized by Microsoft")), Vol. VIII Part 3 [Part 25], pp. 689-690, "The Shepherd's Ingenuity; or The Praise of the Green-Gown" ("Amongst the pleasant shady Bowers, as I was passing on"), printed c.1682, and which ends "Now, all you little pretty maids, that covets to go brave, Frequent the meadows, groves and shade, where you those girls may have, When Flora's coverlid she spreads, then Bridget, Kate, and Jane, May change their silly maiden-heads for curious Gowns of Green."
Also, [Thomas d'Urfey,] Wit and Mirth, or, Pills to Purge Melancholy (London: J Tonson, 1719 ("Digitized by Google")), Vol V, pp. 17-14, "Jockey's Escape from Dundee and the Parsons Daughter whom he had Mow'd" ("Where gott'st thou the Haver-Mill bonack") (1 text, 1 tune), which includes the following verse: "All Scotland ne'er afforded a lass, So bonny and blith as Jenny my dear; Ise gave her a Gown of Green on the Grass But now Ise no longer must tarry here." According to James Henry Dixon, Scottish Traditional Versions of Ancient Ballads (London: The Percy Society, 1845 ("Digitized by Google")), p. 89, "Got on the gown o' green.] A young female who has acted indiscretely, is, in Scotland, said to have put on 'the gown of green.' The expression is not confined to Scotland, but prevails in the north of England." - BS
For that matter, "Greensleeves" is sometimes taken to refer to green clothing with, shall we say, suggestive overtones. - RBW
Last updated in version 3.0
The Ballad Index Copyright 2017 by Robert B. Waltz and David G. Engle.
A 15-year-old Canadian girl has invented a flashlight that uses the temperature of the palm to light up. Ann Makosinski invented the flashlight, which works on the principle of the thermoelectric effect. The thermoelectric effect is the conversion of a temperature difference into electric voltage.
Everyone has faced a moment or two of searching for a flashlight to find something important, only to discover that its batteries are dead. She said that this invention would save a lot of money and would lead to a reduction in toxins going into the soil if we don't have to use batteries anymore.
To create the flashlight, she first measured the amount of electricity that can be generated from the warmth of the palm, which came out to be 57 milliwatts, and how much power she needed for the LED, which was around half a milliwatt. Next she got Peltier tiles, which generate electricity when one side is warm and the other is cool.
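To make the power budget behind those two measurements explicit, here is a minimal sketch. The 57 mW and 0.5 mW figures come from the article; the Seebeck coefficient and skin temperature are typical assumed values, not hers.

```python
# Figures from the article.
harvest_mw = 57.0  # electricity available from palm warmth
led_mw = 0.5       # power needed to light the LED

print(f"Power headroom: {harvest_mw / led_mw:.0f}x")  # ~114x margin

# A Peltier tile used as a generator (the Seebeck effect) gives an
# open-circuit voltage proportional to the temperature difference:
#     V = S * dT
# The coefficient and skin temperature below are assumed, typical
# values, not measurements from the article.
seebeck_v_per_k = 0.05          # ~50 mV/K for a bismuth-telluride tile
palm_c, ambient_c = 34.0, 10.0  # skin vs. the article's ~10 C ambient
print(f"Open-circuit voltage: {seebeck_v_per_k * (palm_c - ambient_c):.1f} V")
```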
The light generated by her flashlight may not be powerful, but it is enough to find something or light up the page of a book. At an ambient temperature of about 10 degrees Celsius it worked for around half an hour, but the working time depends on the temperature difference.
Makosinski said that the model she has made is just a prototype of the final product she has in mind, but the components she used are quite sturdy. If it were manufactured, she would seal off the electronic components of the light to protect them from elements like water, so that the flashlight lasts longer.
She has submitted her invention to the Google Science Fair and will visit Google headquarters in California in September for the final event.
EVERY Longhorn Is AWESOME!
Sending you weekly resources, ideas, and updates to use with not only SpEd students, but ALL students...because EVERY Longhorn is AWESOME!
"Being Blind Means Seeing Differently"
You don't want to miss our awesome Nora! Click HERE to see her video.
How To Help Kids With An Auditory Impairment
(How Awesome is This?!)
Response To Intervention (RTI) ~ What is it?
You Know You Can't Wait To Read This!
RTI is a process that every teacher is a part of. Although it takes work, the RTI process has proven beneficial to the success of students when done correctly. The next few posts will break down information about the RTI process that we hope will be helpful to all of us!
WHAT IS RTI??
-RTI, Response to Intervention, is a process of implementing high quality, scientifically validated instructional practices based on learner needs, monitoring student progress, and adjusting instruction based on the student’s response. (Response to Intervention, A Practical Guide for Every Teacher)
-RTI is a movement which shifts the responsibility for helping all students become successful from the special education teachers and curriculum, to the entire staff and curriculum. It promises a unified system of education. (Pyramid Response to Intervention)
-RTI is a general education initiative which provides systematic, data-based decision making for all students.
Practical Behavior Management Strategies
Delany did a great job teaching us how to approach behavior management with a growth mindset. No matter who we are or how long we've been teaching, we all have difficult days, but we all need to remember that our behavior is the only thing we have complete control over!
Click HERE to go to the Power Point Delany shared with us!
Instructional Strategies for Kids Who Struggle with Processing Speed
Processing Speed is the ability to fluently and automatically perform cognitive tasks, especially when under pressure to maintain focused attention and concentration.
Math Strategies for Diverse Learners
Our classrooms are full of diverse learners. It's important to help them be successful by allowing them to use the learning strategies that work best for them as students and not us as teachers. WE become more successful teachers when we work to make sure ALL learners are having their specific needs met.
Transitions, Getting Your Students' Attention, and Staying Calm
We all know that transitions can be difficult for many students. Getting your students' attention or regaining their attention is also hard sometimes. There are MANY ways to transition students and get their attention without raising your voice. Check out these tips and tricks from Dr. Jean. They work great when modeled and practiced time and time again! Her tone of voice is so important as well!
Types Of Classroom Accommodations
We often have concerns for many students who are not supported by SpEd. Using accommodations like these for those students are a great way to support them in the classroom and an excellent way to document the support that they need.
Recommendations In The Classroom For Students With Sensory and/or ADHD Needs
Click on the link to view ways you can help your students with sensory or ADHD needs be more successful in the classroom!
(The link will take you to a Google Doc, so make sure to sign in to your NISD Google account so you can view the document.)
How should you use the paraprofessional working in your classroom when you're teaching a lesson? Here's what to do...
Accommodations VS. Modifications
Check out the short article below to read about the differences between an accommodation and a modification...
An estimated 15 percent (11,799) of Cuyahoga County's children are living one step away from homelessness, according to a new report released by Case Western Reserve University's Center on Urban Poverty and Community Development at the Mandel School of Applied Social Sciences. These children live with a grandparent or someone else, and about 75 percent of them live there without their parents.
The 70-page white paper, "Family Homelessness in Cuyahoga County," looks at new factors in determining homelessness by considering the numbers of people living in doubled-up housing situations with family and nonfamily members and the impact it has on them. It is one of the first comprehensive assessments of homelessness for families in Cuyahoga County.
The paper, written with support from the Sisters of Charity Foundation, can be found online.
"We need to better understand the extent to which people are doubled up in the county," said Cyleste C. Collins, the lead author on paper.
She explains, "The doubling up phenomenon is not well understood and has implications for understanding and dealing with the area's population decline. It also has the potential for shedding light on the effects of foreclosures."
Prepared by Collins, Claudia J. Coulton, and Seok-Joo Kim from the Poverty Center, the report sheds light on the different causes and factors that play out in family and individual homelessness.
The researchers also look at various models across the country that have successfully reduced rates of family homelessness and provide information to help county offices offer improved programs and interventions to meet housing needs.
One important piece of information tells who is impacted by homelessness: African Americans are hit harder by the homeless situation and make up 85 percent of those considered to lack housing, yet are only 27.5 percent of the county's population.
The authors report that increased foreclosures and the area's poor economic situation have the potential for leading to increased homeless rates as job prospects for unskilled workers dwindle.
Contrary to the stereotype of the homeless as single males with either substance abuse or mental health concerns, homeless families in Cuyahoga County are largely composed of single mothers with their children, who have become homeless because of unemployment and the inability to pay their rent or mortgage. Some 7,747 adults, or about 7 percent of the county population, are homeless, along with some 1,211 families (3,748 individuals).
According to the National Low Income Housing Coalition, in 2007-08 the average rent for a two-bedroom apartment in Cleveland was $725. At that price, to make rent, a single mother earning minimum wage would have to work two full-time jobs. HUD estimates that rent should be about 30 percent of annual income, which for a minimum-wage earner comes to about $266 a month.
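To see where the two-full-time-jobs figure comes from, here is the affordability arithmetic worked through as a short sketch. The $725 rent and the 30 percent rule come from the paragraph above; the minimum-wage rate is our assumption (the mid-2008 federal rate), not a figure from the report.

```python
# Affordability arithmetic behind the paragraph above.
rent = 725.0             # average two-bedroom rent, Cleveland, 2007-08
affordable_share = 0.30  # HUD rule: rent should be ~30% of income

income_needed = rent / affordable_share    # ~$2,417/month to afford $725
min_wage = 6.55                            # assumed: 2008 federal minimum
fulltime_hours_per_month = 40 * 52 / 12    # ~173 hours

hours_needed = income_needed / min_wage    # ~369 hours/month
print(f"{hours_needed / fulltime_hours_per_month:.1f} full-time jobs")  # ~2.1

# Conversely, a rent of $266/month is 30% of roughly this annual income:
print(f"${266 / affordable_share * 12:,.0f} per year")  # ~$10,640
```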
The bottom line is that homelessness is a complex issue and involves a number of different options for families without a place to call home, report the authors.
Case Western Reserve University is committed to the free exchange of ideas, reasoned debate and intellectual dialogue. Speakers and scholars with a diversity of opinions and perspectives are invited to the campus to provide the community with important points of view, some of which may be deemed controversial. The views and opinions of those invited to speak on the campus do not necessarily reflect the views of the university administration or any other segment of the university community.
Browse on keywords: insect legume alfalfa weevil
Search results on 10/18/18
5048. Parks, T.H.. 1913. The alfalfa weevil.. U. of Idaho Extension Bull. #7.
The weevil was introduced into Utah from Europe; feeds principally on alfalfa and sweetclover; spreads by wind; damage done by larvae which eat first crop leaves; injure foliage from May to July; one generation per year; spring tooth harrow in early spring and after first cutting; then go over with a brush drag to crush the insects; 31 species of birds feed on the weevil, also ladybugs; cool spring inhibits damage.
7386. Wakeland, C.. 1921. Fighting alfalfa weevil.. U. of Idaho Extension Bulletin # 50.
Spray with calcium arsenate; "effective and safe"; bio-control agent imported in 1912 - ichneumon fly; it stings the larvae and deposits eggs; questionable level of control.
7395. Wakeland, C.. 1925. Seasonal variation as it affects the activity and control of the alfalfa weevil.. ID Agr. Expt. Sta. Bull #138.
Using calcium arsenate on alfalfa; monitoring effect on cows eating the hay.
Celiac disease is being diagnosed six times more often now than 20 years ago. Scottish health officials have revealed that while 1.7 in 100,000 children were diagnosed in the early 1990s, that number has risen to 11.8 in 100,000 in the past few years. The cause of this shift is difficult to determine, since celiac disease is genetic. Solving this problem is not as simple as pinpointing a single environmental or behavioral trigger. More children are being diagnosed with the disease because more children are being born with the celiac gene.
People who are born with the gene do not always develop full-blown celiac disease. The gene has to be activated by a virus that targets the intestines. Logic suggests that these viruses might be presenting themselves more often than they did in the past, and this could very well be true. There are countless studies proving the dangers posed by the foods produced by the factory farming system. Since celiac disease is usually not diagnosed until the virus does its damage and symptoms become evident, it is nearly impossible to determine exactly when and where a person may have contracted the virus. Symptoms of the disease appear in response to gluten consumption.
Though the number of diagnoses is six times higher than it used to be, this does not mean that six times more people have celiac disease. A good portion of the diagnoses can be attributed to advances in medical care and increased awareness of the illness. Now that so many more people require gluten-free foods, it is likely that restaurants and manufacturers will continue their efforts to provide safe options for people with celiac disease.
Have you ever mistaken NTUC for PAP? Well, there are legit reasons why they behave like husband and wife.
They “got to know” each other before 1961 but officially “celebrated their love” on 6 September 1961 when the non-communist unionists formed the National Trades Union Congress (NTUC).
This relationship would not have blossomed if not for the split in the People’s Action Party (PAP) in July 1961.
Here are 3 reasons why NTUC and PAP share such a strong and intimate relationship.
They enjoy close interactions
Couples in a happy marriage usually have close interactions with each other by spending quality time together and building a common space.
The same can be said for NTUC and PAP representatives.
Since 1959, trade unionists have been elected to Parliament under the banner of the PAP in every general election in Singapore. Serving in NTUC while sitting as a Member of Parliament (MP) helps unionists advocate for workers' concerns in Parliament.
When trade unionists are elected to Parliament, this gives them opportunities to interact with their colleagues in Government ministries and help them understand trade union work better.
Other PAP members in Parliament, of non-trade-union origin, have served as officials and advisors in NTUC and its affiliated trade unions and co-operatives.
For example, Mr Seah Kian Peng serves as an MP for Marine Parade GRC, but he also heads NTUC FairPrice as its CEO in his day job.
These interactions help to foster deeper understanding between the two parties.
They are willing to compromise for common goals
Happy couples acknowledge the importance of each taking a step back to achieve a broader and more positive outlook.
After the NTUC was formed, Singapore separated from Malaysia in 1965, and the British withdrew their military forces by 1971. There was widespread job uncertainty, and Singapore was forced to fight for its survival.
The country had to make itself attractive for foreign investors to come in. This meant that its workforce had to be available and flexible. Bad practices had to go.
Late Mr Lee Kuan Yew told unionists in 1968 that the labour laws had to be amended to allow management to manage workers.
“I am asking you to lick the labour movement into shape, cutting out restrictive practices which are no longer relevant and stopping abuse of fringe benefits which leads to lower productivity…Cut all these evils off, jack up productivity. Cut out abuse of privileges and create a new image of a thinking, hard-headed labour movement.”
The Industrial Relations (Amendment) Act was then introduced to prohibit the trade unions from making demands in areas such as workforce deployment, promotions and dismissal.
The unions' scope as a bargaining entity was reduced, but in return the PAP Government implemented the Employment Act.
It appeared to favour employers but was actually fair to employees.
The unions also knew that by swallowing the "bitter pill" of accepting legislation that could cause short-term pain, they would save jobs in the long run.
They don’t take each other for granted
Couples who make it far together remember that they cannot rest on their laurels and take each other for granted.
When the unions "gave up" some of their bargaining rights to pave the way for a viable economy in Singapore, the PAP Government did not conveniently forget this sacrifice.
The Government remembered the cooperation and support given by the trade unions and the workers.
To fulfill its promise of providing a better life for workers, the Government revised the CPF Act to make CPF contributions payable on gross salary instead of basic salary.
Here’s an interesting fact – the Act was also amended in 1968 to allow CPF contributions to be used to purchase HDB flats!
Now imagine if the CPF Amendment Act had not been passed: how could Singaporean workers ever have owned a space to call home?
Renewing of vows
NTUC and PAP will continue to be in tandem only if both parties are committed to working on this symbiotic relationship. Just like some old married couples, they renew their vows to strengthen their commitment to each other.
Mr Lee Kuan Yew believed that the NTUC-PAP relationship had to be strengthened to create a secure future for Singapore.
To him, it was a “perfect marriage” between NTUC and PAP.
“There was never any distinction or division between the political (PAP) and the trade union (NTUC) objectives. Both were out to abolish the old unjust colonial order and to create a society which offered everybody equal opportunities for education, health, housing, jobs and a better life.”
Kingstone and Thruxton Primary is a Values School; it is our Values that determine our thinking and behaviour. Through a Values-based Education, a positive culture for teaching and learning is created which is based upon valuing ourselves, each other and the environment through the consideration of core ethical principles that guide behaviour. Our School Values support the development of the whole child by enabling a secure sense of self, respect for others and underpin the raising of educational standards.
The ethos of our school is built on a foundation of 12 core Values which are addressed directly through lessons, assemblies and across the whole curriculum. Each month over a 2 year cycle we focus on a particular Value. We learn to understand what the Values looks like and how we can demonstrate the Value, in the way we behave, in our attitude towards each other and in our learning.
We feel that becoming a ‘Values Based School’ has been a great success. Since starting the programme, there has been a positive impact on improved pupils’ behaviour, attitudes and understanding which has been noted by parents, teachers and others outside the school.
We have 12 Values which change monthly, on a 2 year rolling programme.
Believing you have both the will and the way to accomplish your goals whatever they may be.
External peace in the world around us, internal peace with ourselves.
Wishing another person happiness and taking pleasure in their well-being. It is unconditional and unselfish.
Working as a team and recognising the unique role of every individual.
Doing your share and accepting what is required of you.
Not taking things for granted.
Treating your neighbour as yourself. A kind person is helpful and has a generous attitude.
Accepting myself and others even when we make mistakes and accepting others and appreciating differences.
If you are trustworthy, you can be relied on to do the right thing.
The only way to find a friend is to be one. Friendship is an act of giving to others.
Being able to wait contentedly.
Telling the truth, able to be trusted and not likely to steal, cheat, or lie.
Twenty Two values are covered over a two year cycle. Here is a brief overview of the cycle and each value:
[Table: the monthly schedule of values, with columns Month, Year A, and Year B, did not survive extraction; only the header remains.]
2019-20 is Year A of our two year cycle.
2020-21 is Year B of our two year cycle.
In antiquity, people believed that gemstones had a wide array of health benefits. In modern times, gemstones are mostly used for ornamental purposes. In this guide we discuss how some people use gemstones as part of their spiritual practices to restore energy fields, gain peace, and promote love and safety by wearing a piece of jewelry made with a specific gem.
According to old-time Vedic and Hindu astrology, gemstones are believed to correspond with planetary activity and to possess properties that benefit the wearer. This set of gems is called navratna, a Sanskrit word meaning "nine gems." They are the diamond, pearl, red coral, orange hessonite garnet, blue sapphire, cat's eye, yellow sapphire, emerald, and ruby. These nine gems are considered favorable in many Asian countries and by many religions. In traditional Hindu architecture, the nine auspicious gems are placed under the cornerstone at the beginning of a new building, particularly in temples. In some countries, the ancient and auspicious order of the nine gems is a royal decoration that is awarded to members of the royal family and exceptionally high-ranking officials. It is thought that wearers of such jewelry will enjoy well-being. The decoration consists of navratna rings and pendants, worn by both men and women. The diamond is placed at the center of the circle of gems in the pendant. A beautiful piece of royal jewelry known as the nine-gems belt has movable elements with what appears to be a yellow sapphire at the center. Navratna jewelry may have the nine gems set in rows of three, in a circular design, or in various other creative arrangements. It may be fashioned for either ladies or men and can take the form of earrings, necklace pendants, brooches, rings, or even belts in white or yellow precious metals. It may be traditional or modern in style and may use cut gemstones, which absorb and transmit energy better and are thus considered more beneficial.
Traditional Indian navratna jewelry has the gems in a circular design, running clockwise from the top beginning with the diamond, and with the ruby in the center. The ruby is thought to represent the sun, which is why it is placed at the center. When set into a necklace or bracelet, the ruby is usually placed at the center. According to some, the blue sapphire should point towards the body. Some brides wear one or two pieces of navratna jewelry because it is considered auspicious. One carefully chosen navratna piece, such as a ring, earring, or bangle, may help ensure the special day goes smoothly and the marriage gets off to the best start. The nine sacred gems are thought not only to protect the wearer from negative forces but also to promote positive qualities. These good and bad aspects may refer to mental attitude, power, popularity, intelligence, health, wealth, or external consequences. Just as a healthy and balanced lifestyle can help to make life good, navratna jewelry is said to ensure health, wealth, and happiness. For those who enjoy wearing or designing colored gemstone jewelry, navratna pieces are a way of balancing planetary activity with the wearer, joining wealth and protection with beauty and fashion. The nine gems are considered a way of providing protection and bringing luck to the wearer in one beautiful and colorful piece.
"date": "2019-06-17T15:58:40",
"dump": "CC-MAIN-2019-26",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998509.15/warc/CC-MAIN-20190617143050-20190617165050-00296.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9571388363838196,
"score": 2.640625,
"token_count": 682,
"url": "https://articles.indiaonline.in/need-to-know-about-navratna-jewelry-how-it-can-increase-your-profit-54756"
} |
Whiskers are also known as vibrissae, from the Latin vibrare, "to vibrate." Vibrissae are the specialized hairs on mammals and the bristlelike feathers near the mouths of many birds. Their resonant design is symbolic of the energies, good and bad, that are reverberating throughout the natural world. Every living thing is connected and, by birthright, deserves to exist.
Sometimes the best positive stories of the environment come from our own backyard. When you sum up the effects of millions of backyard naturalists, the positive impact is significant for the planet. The personal story I am sharing here will hopefully inspire, enlighten and encourage the development of even more backyard biophiliacs.
Last March, several trees were downed in my front yard by a heavy ice storm. Many other trees had significant loss of limbs. The clean up required a professional. Fortunately I am childhood friends with someone who married a certified arborist. He gave me a few options, when possible. One of the options was to either dig up and grind stumps for some pine trees that did not fully erupt from the Earth or to just saw them at the bottom and let them sink back into the Earth as much as possible. Two factors influenced my choice: the price to my wallet to dig and grind the stumps versus the price to the environment to dig and grind the stumps. The price for both was pretty steep.
Conventional wisdom always chooses to make our lawns "pretty," often with little regard to the effects of fertilizers, insecticides, and pesticides, or to the selection of native plant species instead of ornamental non-native plants. Non-native plants often compete with native plants and rob wildlife of hosting sites and food resources which can only be provided by native plants. Also, one man's yard trash can be a critter's mansion. With that in mind, I opted to keep the stumps. I can see the grove of pine trees from my home office window and enjoy watching a great variety of wildlife supporting their lives there on a daily basis. Last week, I had the joy of watching a chipmunk sunning himself on one of the stumps. Chipmunks hibernate, and the cutie had emerged from the den beneath the stump on an unseasonably warm day. Smart rodent.
Wildlife habitat was not my only motivation for keeping the stumps. If you recall the biology of photosynthesis, you know that plants absorb energy from the sun and carbon dioxide from the air around them to fuel themselves. Plants store the carbon that is obtained from the breakdown of the carbon dioxide molecule and, in most cases, release the oxygen back into the air. Those of you with lungs probably already understand how vitally important oxygen is to all non-plant life.
When vegetation, large or small, dead or alive, is made into smaller pieces through chopping, grinding, sawing, mulching or most any other type of processing, it immediately releases a large amount of carbon. Of course, vegetation naturally rots and releases carbon, but much more slowly. If you consider that deforestation is occurring on a global scale, thereby decreasing the number of trees producing oxygen, and couple that with the net carbon release from these activities, it is clearly not a sustainable practice that will support a well-oxygenated planet. When you understand this, you never look at a stump, downed tree, logging operation or old wooden furniture in the same way. In my mind, all these kinds of items have a large invisible label that reads CARBON STORAGE (open with care).
How can you help? Keep that old adage “think globally, act locally” in mind when you engage in lawn and gardening activities. Piles of limbs, old logs, even leaf litter can be used by many animals for many purposes. For more tips on how to make your lawn and garden friendly to wildlife, check out tips at the National Wildife Federation’s website. | <urn:uuid:4275843a-d8cb-4b31-aa6c-eb516e12254c> | {
"date": "2017-11-23T05:10:03",
"dump": "CC-MAIN-2017-47",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806736.55/warc/CC-MAIN-20171123050243-20171123070243-00056.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.954328179359436,
"score": 2.6875,
"token_count": 817,
"url": "https://thewhiskerchronicles.com/2015/02/17/chipmunks-and-carbon-storage-written-for-the-ecotone-exchange/"
} |
By RANDY RICHMAN
The April showers have brought May flowers, but don’t let their beauty distract you from your driving.
With everyone making the most of the warmer weather, it’s a must that we remain aware of the increased motorcycle, bicycle, and pedestrian traffic on and around our roadways.
The U.S. Department of Transportation reported 37,461 lives lost on U.S. roads in 2016, a 5.6-percent increase from the previous year, with 5,987 pedestrian, 5,286 motorcyclist, 840 bicyclist, and 3,450 distraction-related deaths.
Be sure to practice these safety tips and share the roads during the warm weather:
- Be alert and aware of your environment when driving. Don’t allow yourself to become distracted by your phone, radio, passengers, or anything else.
- Before riding a motorcycle in traffic, practice riding in a controlled environment, such as an empty parking lot, until you feel comfortable controlling your bike.
- When riding any motorized or non-motorized bike/vehicle, be sure to wear the proper safety gear. Helmets, eye protection, protective footwear, pads and clothing to protect your skin, and gloves for grip, will best protect you when riding.
- Wear brightly colored clothing so you’re easily spotted by motorists.
- Ride responsibly; follow all traffic laws; proceed cautiously at intersections; and ride with your headlight on, day and night.
- Bicyclists are to ride with traffic, while pedestrians are to walk against traffic so they can see any oncoming hazards.
- If you must walk at night, wear bright clothing, and carry a lit flashlight to make yourself more visible.
At the end of the day, we all want to go home safe and sound. Stay alert; be aware; and share the roadways. It’s the best route to summer fun!
Randy Richman is a part-time Brookfield firefighter and paramedic, firefighting instructor and regional director for hyperbaric medicine for Shared Health Service Inc. | <urn:uuid:625f8a60-a7bd-4647-a9e7-314fd7420ca6> | {
"date": "2018-12-17T12:23:30",
"dump": "CC-MAIN-2018-51",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828507.57/warc/CC-MAIN-20181217113255-20181217135255-00376.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9369966983795166,
"score": 2.625,
"token_count": 436,
"url": "http://newsonthegreen.com/2018/05/07/ride-to-live-keep-distractions-at-bay/"
} |
World Trade Organization: Why It is Necessary in International Trade Essay
International Trade can be viewed as a voluntary exchange of mutual advantage in that countries engaged in trade take and give something beneficial in return to their trading partners. While international trade is thus conducted voluntarily, the World Trade Organization has emerged in the international system raising questions as to its necessity in the global trading system. This paper is thus an effort to answer why the WTO is considered necessary in international trade given that the trade in between countries is conducted voluntarily in exchange of mutual advantage.
The World Trade Organization is considered necessary in international trade because the organization establishes rules and structures for international trade that provide security and stability for nations' commerce (Kaplan, 2005). Without the WTO, for example, countries may impose unilateral policies that can prove detrimental to other countries, as with the protectionist policy imposed by the United States during the Great Depression.
Such policies, while formulated to protect domestic products and the national interest, may destabilize the commerce of other countries and even the implementing country (Gusmorino, 1996). Without the World Trade Organization, countries would be free to impose trading practices and policies that can be detrimental to the interests of other countries. More powerful countries are more likely to devour smaller economies in international trading competitions, as powerful and influential countries would be freer to impose their will unilaterally on their smaller trading partners ("The 10 Benefits"). The situation would thus result in an imbalance in the international economy, with more of the world's wealth remaining in the hands of the already affluent countries.
The World Trade Organization is considered necessary in international trade because it protects and provides opportunity to smaller and developing economies that could otherwise be crushed in a trading competition with developed and powerful countries. Smaller economies have difficulty competing in the international market because international trade competition can be dominated by powerful and rich countries such as Japan, Canada and the
United States. Small countries have difficulty catching up with their competitors because "small countries do not have much to offer trading partners by way of market access concessions thereby limiting the extent to which they can seriously engage in, and reap benefits from, the reciprocal bargaining" (Mattoo & Subramanian, 2004). Smaller countries with smaller economies also often emerge as losers because of the difficulty in keeping pace with the advancements and innovations of highly developed countries such as the United States. This situation in the international market, where the less affluent can be easily devoured by the markets of the strong and powerful, necessitates the presence of an organization that will oversee fair treatment of all competitors ("Trade, the"). As such, the WTO is necessary because through it, the interests of smaller economies are better protected and even advanced. The organization does this by providing non-discriminatory opportunity for all member states through its most-favored-nation treatment. The WTO has also been continuously restructured so as to cater not only to the demands of the bigger players but also to the needs of the less affluent countries. The WTO also provides more opportunity to lesser and developing economies in that smaller countries can enjoy increased bargaining power, giving them the assurance that they will not be left behind in a trading competition against powerful and developed countries and allowing them to take advantage of global trading.
The World Trade Organization is considered necessary in international trade because it helps prevent, mitigate and even negotiate disputes between countries over trading issues. Without the WTO, stiff competition in the international market can turn into disputes and may even result in more serious conflicts, such as political war. The WTO has a structure of policies and regulations that entails conformity among member states, establishing acceptable norms in the conduct of international trade. This can prevent instances where countries curtail the trading rights and opportunities of another country because no policy has been set up. Through its set of policies, the WTO prevents the emergence of disputes and issues
which would have emerged from unregulated trading relations. David Ricardo proposed that worldwide wealth is maximized if states engage in international trade according to their comparative advantage. However, with the growing popularity of free trade and globalization, more countries offer similar products in which they consider themselves to have a comparative advantage, resulting in stiff competition and even friction in the international market. It is therefore inevitable that disputes will arise. Without the World Trade Organization, countries facing trading frictions may not be able to settle the problems themselves, and other means of "settlement" may be resorted to. This situation may lead to more serious disputes and conflicts. The WTO is necessary in that it provides a venue for countries to report violations; establishes sets of agreements and rules by which countries must abide; and provides an opportunity for countries to negotiate their disputes, thereby preventing more serious conflicts and war. The WTO has proven to be an effective intermediary between countries in conflict over trade issues. Since 1995, for example, around 300 disputes have been handled by the organization, and more than half of these have been settled harmoniously, where they otherwise could have led to more serious political conflict ("The 10 Benefits").
The World Trade Organization is considered necessary in international trade because it lowers trade barriers through negotiation, thereby promoting and supporting trade liberalization and preventing protectionist policies that can be disastrous to the international economy. The rules established by the WTO also ensure that trade is free and fair. With liberalized trade, more nations and even consumers can benefit. Free trade allows countries to specialize in the goods or services in which they have a comparative advantage. In a system of interdependence where each country trades the goods it is best at producing and abandons the goods it is worst at producing, specialization will occur. Specialization consequently results in lower operational costs and increased productivity. Thus, countries are able to take advantage of their own efficiencies while at the same time benefiting from the efficiencies of other countries.
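Ricardo's argument can be made concrete with a stylized two-country, two-good calculation; the countries, goods, and labor costs below are invented purely for illustration:

```python
# Hours of labor needed per unit of output in each country (hypothetical figures):
hours = {"Home": {"cloth": 2, "wine": 4},
         "Abroad": {"cloth": 6, "wine": 3}}

for country, cost in hours.items():
    # Opportunity cost of one unit of cloth, measured in units of wine forgone:
    print(country, "gives up", cost["cloth"] / cost["wine"], "wine per cloth")

# Home gives up 0.5 wine per cloth while Abroad gives up 2.0, so Home should
# specialize in cloth and Abroad in wine; both gain by trading at any
# cloth-for-wine ratio between those two opportunity costs.
```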
Free trade also improves the profitability of many local industries by allowing them to get better deals in purchasing their supplies of raw materials and other capital goods, thus helping them reduce production costs. Free trade also enhances and promotes innovation, in that equal footing in a multilateral market compels manufacturers and economic players to offer better deals and newer innovations to consumers. In addition to the benefits of free trade, history also shows the disastrous effects brought about by raising trade barriers. The enactment of the Fordney-McCumber Act in 1922, which raised the average import tax to some 40 percent, prompted retaliation from European governments but did little to improve prosperity in the United States ("Smoot-Hawley Tariff Act"). The passage of the Smoot-Hawley Tariff Act, which raised the already high tariffs on over 20,000 imported goods, also provoked a storm of foreign retaliatory measures. The Smoot-Hawley Tariff Act is blamed by many for worsening the Great Depression of 1929 ("Smoot-Hawley Tariff Act"). After its passage, many countries adopted "beggar-thy-neighbor" duties, a measure by which one country seeks to gain at the expense of its trading partners, exacerbating the situation of the world economy while reducing global trade. Through the World Trade Organization, protectionist policies of states can be prevented that would otherwise have provoked reciprocal protectionist policies from other trading nations, endangering the economic balance in the international system.
The World Trade Organization is necessary in international trade because it discourages protectionist policies that can result in conflict between nations, as exemplified by the recent problem between China and Japan. The friction started in April 2001 when Japan instituted provisional safeguard measures on three agricultural products mainly imported from China. China retaliated by imposing special tariffs on cars, mobile phones and air conditioners imported from Japan (Kwan, 2001). Such situations, which emerge because of protectionist measures, can be dealt with by the WTO through its principles on safeguard measures.
"10 Benefits of WTO Trading System". (n.d.). Retrieved November 21, 2006 from the World Trade Organization Website: http://www.wto.org/english/thewto_e/whatis_e/10ben_e/10b03_e.htm
Gusmorino, Paul Alexander (1996, May 13). “Main Causes of the Great Depression”. Retrieved November 21, 2006, from http://www.gusmorino.com/pag3/greatdepression/
Kaplan, Eben. (2005). ‘The World Trade Organization.” Retrieved November 21, 2006 from http://www.cfr.org/publication/9386/world_trade_organization.html?breadcrumb=default
Kwan, C.H. (2001). “Turning Trade Friction between Japan and China into a Win-Win Game”. Retrieved November 21, 2006 from http://www.rieti.go.jp/en/columns/a01_0003.html
Mattoo, Aaditya and Subramanian, Arvind. (2004, March). “The WTO and the Poorest Countries: The Stark Choices”. Retrieved November 21, 2006 from http://www.cgdev.org/doc/event%20docs/The_WTO_and_the_Poorest_Countries.pdf
Smoot-Hawley Tariff Act. (2006). In Encyclopædia Britannica. Retrieved November 21, 2006, from Encyclopædia Britannica Premium Service: http://www.britannica.com/eb/article?tocId=9396766
“Trade, the Doha Round and Poverty Reduction”. (2006, August). Retrieved November 21, 2006, from the World Bank Group Website: http://web.worldbank.org/WBSITE/EXTERNAL/NEWS/0,,contentMDK:20040979~menuPK:34480~pagePK:34370~theSitePK:4607,00.html | <urn:uuid:8278c1da-2746-400a-8f68-6e95dfafe9d7> | {
"date": "2019-09-19T14:56:29",
"dump": "CC-MAIN-2019-39",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573533.49/warc/CC-MAIN-20190919142838-20190919164838-00496.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9338605999946594,
"score": 2.859375,
"token_count": 2097,
"url": "https://graduateway.com/world-trade-organization-why-it-is-necessary-in-international-trade/"
} |
The Central of Georgia Railroad and Banking Company brought economic growth to Savannah by transporting cotton from the heart of the state to be shipped from the Port of Savannah.
During the Civil War, however, the role of the railroad began to change. With port shipments effectively cut off by the Union blockade, the transportation needs changed. Troops, supplies, artillery and prisoners of war were moved through Georgia by way of the railroads.
In late 1864, as Gen. William T. Sherman made his infamous “March to the Sea,” he followed the railroad from Atlanta, destroying track and facilities along the way to keep the Confederate Army from rapidly coming up behind him. The Rebel Army used the trains to evacuate Union POWs from Milledgeville and Savannah as Sherman closed in. The tracks also allowed the heavily outnumbered units scattered in Sherman’s path to consolidate for the defense of Savannah.
When under General Sherman’s personal supervision, his men took great care in destroying the railroad. Left wing commander Major Gen. Henry W. Slocum reported his men destroying 119 miles of track coming out of Atlanta and 6 miles of the Central of Georgia around Milledgeville. Right wing commander Gen. Oliver O. Howard claimed 191 miles, mostly from the Central of Georgia east of Macon to just west of Savannah.
Once they reached Savannah, the destruction was no longer necessary. All materials that could be used for repairing the railroad were destroyed on the grounds of the Central of Georgia facility in Savannah, but the buildings were left intact. With much of the rolling stock left behind by the evacuating Confederates and Savannah under Union control, there was no need to destroy the buildings as Sherman began the next phase of his campaign into the Carolinas.
The Central of Georgia was crippled to the extent that a train did not run between Savannah and Macon until 1866.
“A Walk Through Savannah’s Civil War” is an occasional series featuring photos and videos that show the visual remains of Savannah’s Civil War past. | <urn:uuid:58b906f1-d140-4bdf-9000-50792311b2c9> | {
"date": "2017-03-23T00:12:41",
"dump": "CC-MAIN-2017-13",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186530.52/warc/CC-MAIN-20170322212946-00391-ip-10-233-31-227.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9724934101104736,
"score": 3.875,
"token_count": 417,
"url": "http://savannahnow.com/accent/2012-01-15/walk-through-savannahs-civil-war-railroads"
} |
International humanitarian law sets guidelines for armed conflict to protect civilians, prisoners of war, and others from unintended harm. These principles govern how wars are fought and outline basic standards that all member nations ratify through treaties. The Geneva Convention and the Hague Convention formalize laws of war, define war crimes, and provide the framework to prosecute war criminals.
In 1864, the first Geneva Convention created a body of law that set protocols for armed conflicts. Subsequent conventions strengthened and amended the rules to regulate military conduct during war time. The Hague Convention recognized the existence of customary international law and provided for international tribunals and international courts to prosecute criminals guilty of genocide and other war crimes.
One of the basic tenets of international humanitarian law is the protection of civilians not directly involved in conflicts. It guarantees medical care for the sick or wounded and defines medical personnel and their equipment as a neutral party. The International Red Cross, for example, is recognized and respected under the law.
Protocols in these treaties also guard against collateral damage to infrastructure necessary for civilian survival. International humanitarian law forbids attacks on crops, housing, and workplaces of people not serving in the military or actively engaged in war. The law grants special protections to women and children during wartime, and sets guidelines to protect religious facilities and environmental resources.
International humanitarian law also regulates the types of weapons used in war. It prohibits chemical and bacterial warfare capable of killing innocent people and destroying food supplies. Landmines are also covered under international treaties that govern war.
Several amendments to the law offered protection to prisoners of war. These treaties permit the detention of military combatants to prevent them from fighting. Prisoners of war must be treated humanely while in custody and cannot be tortured or exposed to mental or physical cruelty. They must be given adequate housing, food, and medical care while being held. At the end of an armed conflict, prisoners of war must be released, according to provisions in international humanitarian law.
These laws also apply to refugees who flee a country or region to escape persecution. Refugees enjoy the same protection as civilians, whether they seek asylum in another country or within the borders of their home country. International humanitarian law ensures refugees are assisted with food, water, and temporary housing. Treaties between countries aim to avoid displacement whenever possible during conflict.
Customary international law covers rules not formalized in treaties. These protocols expand on the expectations of nations during conflicts within countries or between nations. Such laws cover protected zones and independent journalists working in war zones. Customary laws set standards for conduct and the protection of victims of war that formal treaties might lack. | <urn:uuid:0e82cbce-5898-4438-8152-1d97c9a4a596> | {
"date": "2015-04-02T04:44:52",
"dump": "CC-MAIN-2015-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131317541.81/warc/CC-MAIN-20150323172157-00014-ip-10-168-14-71.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9292171001434326,
"score": 3.984375,
"token_count": 538,
"url": "http://www.wisegeek.com/what-is-international-humanitarian-law.htm"
} |
DEFINITION of CNN Effect
The CNN effect is a theory that 24-hour news networks, such as CNN, influence the general political and economic climate. Because media outlets provide ongoing coverage of a particular event or subject matter, the attention of viewers is narrowly focused for potentially prolonged periods of time. The CNN effect can therefore cause individuals and organizations to react more aggressively towards the subject matter being examined. For example, regular coverage of turmoil in the banking sector may result in investors withdrawing from bank stocks or even moving their deposits out of banks being mentioned. This in turn would heighten the turmoil.
BREAKING DOWN CNN Effect
The effect that media outlets have on consumer behavior has been examined since the CNN effect came to prominence during the 1980s. For example, by focusing on natural disasters, news outlets may influence consumers and investors to react more drastically to what is unfolding. While this can be viewed as a criticism, media outlets also shed light on the inner workings of governments and businesses, which may increase accountability.
The CNN Effect Post-Television
The CNN effect is really about the speed at which cable news was able to spread information and how that news seemingly made faraway events matter to people who otherwise would not have noticed. Well-informed people prior to cable news would still experience a delay in information as a news story from Asia, for example, took time to appear in the newspaper. This information lag actually helped to prevent stock panics based on international events, as there was every reason to believe that the situation had changed since the column was written. Cable news came along and offered near real-time footage and further compounded this rapid reporting with a large dose of sensationalism. Now a typhoon in Asia could be seen making landfall, and North Americans would react more rapidly to the fears of floods or the perceived severity of power outages and the impact on companies in the region.
However, as fast as cable news is, it has been overtaken by social media. Now cable news channels spend time monitoring the same social media channels that regular people follow because there is a torrent of real-time data from all over the world. The CNN effect - the theory that real-time information and prolonged focus on a particular event has a market impact - is still valid, but it may now be more accurate to rename it the Twitter effect rather than tying it back to a cable news channel. Increasingly, we are in a world of cord-cutters, so cable news is far from the dominant medium.
"date": "2019-05-25T14:25:44",
"dump": "CC-MAIN-2019-22",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258058.61/warc/CC-MAIN-20190525124751-20190525150751-00056.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9707616567611694,
"score": 3.84375,
"token_count": 500,
"url": "https://www.investopedia.com/terms/c/cnneffect.asp"
} |
The sound of status: People know high-power voices when they hear them
Being in a position of power can fundamentally change the way you speak, altering basic acoustic properties of the voice, and other people are able to pick up on these vocal cues to know who is really in charge, according to new research published in Psychological Science, a journal of the Association for Psychological Science.
We tend to focus on our words when we want to come across as powerful to others, but these findings suggest that basic acoustic cues also play an important role:
"Our findings suggest that whether it's parents attempting to assert authority over unruly children, haggling between a car salesman and customer, or negotiations between heads of states, the sound of the voices involved may profoundly determine the outcome of those interactions," says psychological scientist and lead researcher Sei Jin Ko of San Diego State University.
HT: Marginal Revolution's Assorted Links
The researchers had long been interested in non-language-related properties of speech, but it was former UK prime minister Margaret Thatcher that inspired them to investigate the relationship between acoustic cues and power.
"It was quite well known that Thatcher had gone through extensive voice coaching to exude a more authoritative, powerful persona," explains Ko. "We wanted to explore how something so fundamental as power might elicit changes in the way a voice sounds, and how these situational vocal changes impact the way listeners perceive and behave toward the speakers."
Ko, along with Melody Sadler of San Diego State and Adam Galinsky of Columbia Business School, designed two studies to find out.
In the first experiment, they recorded 161 college students reading a passage aloud; this first recording captured baseline acoustics. The participants were then randomly assigned them to play a specific role in an ensuing negotiation exercise.
Students assigned to a "high" rank were told to go into the negotiation imagining that they either had a strong alternative offer, valuable inside information, or high status in the workplace, or they were asked to recall an experience in which they had power before the negotiation started. Low-rank students, on the other hand, were told to imagine they had either a weak offer, no inside information, or low workplace status, or they were asked to recall an experience in which they lacked power.
...MORE
Also at MR: "Best non-fiction books of 2014" | <urn:uuid:2794cd6c-34d3-4ee8-8600-59e0e7b19e55> | {
"date": "2017-08-22T01:25:50",
"dump": "CC-MAIN-2017-34",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886109803.8/warc/CC-MAIN-20170822011838-20170822031838-00376.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9723678827285767,
"score": 2.6875,
"token_count": 475,
"url": "http://climateerinvest.blogspot.com/2014/11/high-status-voices-have-different-sound.html"
} |
We receive many enquiries concerning the operation of traffic signals. A clearer understanding of how traffic signals work can improve driving and walking habits, and reduce some of your frustration while waiting for a signal to change.
Why are Traffic Signals needed?
When traffic volumes increase beyond the capacity of an all-way stop, it may be necessary to install a traffic signal. The established criteria for installing traffic signals are based on the total vehicle and pedestrian volumes, delays to side street motorists and pedestrians, and collision history at the intersection.
Are Traffic Signals the answer to solving traffic problems?
The function of a traffic signal is to assign right-of-way between two or more flows of traffic at an intersection. A properly timed traffic signal can significantly increase the traffic through an intersection and can improve the safety for both pedestrians and vehicles.
Disadvantages of Traffic Signals
Traffic signals are not a "cure-all” for traffic problems, nor will they prevent collisions. Unjustified traffic signals can cause excessive delays, a disregard for the traffic laws, diversion of traffic through residential neighbourhoods, as well as an increase in collisions.
A traffic signal is a control device, not a safety device. While many people realize that traffic signals can reduce the number of right-angle (broadside) collisions at intersections, few realize that signals can also cause a significant increase in rear-end collisions.
Traffic Signal Equipment
Traffic signals are more costly than commonly realized, even though they are a sound public investment when justified. A new signal costs approximately $150,000 to $200,000. The equipment is highly specialized and expensive to install and maintain, therefore, the decision to install signals must be carefully considered.
Left Turn Phasing
It is sometimes necessary to install protective left turn phasing at locations with high volumes of left turning vehicles, where there is excessive delay, or where turning movement collisions are common.
The rules of the road in the Highway Traffic Act require motorists to stop before entering a signalized intersection when they see a flashing red signal. You must stop at the white stop bar or crosswalk, on the near side of the intersection. You should treat a flashing red traffic signal like you would a stop sign, and enter the intersection only if the way is clear.
You may also see a flashing red light or a red beacon at a stop sign or at a multi-way stop. The red flashing signal is to supplement the stop sign. At these intersections, enter only if the way is clear.
When traffic signals are flashing amber, it means that the other approach has a flashing red display and motorists may proceed through the intersection with caution.
When there is no power at a traffic signal and all the lights are dark, you should treat the intersection as if it were an all-way stop. Enter the intersection, subject to the rules at an all-way stop.
Traffic Signal Preemption
Trains and some fire trucks are given priority at traffic signals. When they approach an intersection, the signals transfer control to a special signal operation called preemption. In preemption, the traffic controller safely provides a green signal for the approaching emergency vehicle, or prevents vehicles from crossing the railway tracks.
Traffic signals are timed to alternate right-of-way between two or more approaches. Traffic signals may be programmed to operate with fixed timing. However, at many intersections, vehicle detectors are used to assign the right-of-way, and signal timing at an intersection will vary based on changing traffic demands. The majority of our vehicle detectors are loops of wire embedded in the pavement; when they sense the metal in cars, they signal the presence of vehicles to the traffic controller. The City also uses video and thermal cameras to detect the presence of vehicles.
Coordination of Traffic Signals
Coordination of traffic signals provides the greatest benefit to motorists by reducing interruption in the flow of traffic. Coordination along a street is based on signal spacing, the volume and speed of traffic, and the traffic signal cycle length. A well-managed coordinated traffic control system saves fuel, reduces travel time, and reduces air-borne pollutants.
The goal of coordinating traffic signals is to get the greatest number of vehicles through the signal network with the fewest stops and delays, in a safe and comfortable manner. Where optimum conditions cannot be achieved, the intersection approaches with the busiest traffic movement are given priority.
Almost all the City's traffic signals along major arterials are interconnected and operate co-ordinated.
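As an illustration of how progression works, the offset between successive coordinated signals can be derived from their spacing and the travel speed; the sketch below uses made-up spacing, speed, and cycle values rather than any actual City timing plan:

```python
def progression_offset(spacing_m, speed_kmh, cycle_s):
    """Offset (seconds) so a platoon released by one green arrives at the next signal on green."""
    travel_time = spacing_m / (speed_kmh / 3.6)  # convert km/h to m/s
    return travel_time % cycle_s                 # offsets repeat every cycle

# Signals 400 m apart on a street travelled at 50 km/h, with a 90 s cycle:
print(round(progression_offset(400, 50, 90), 1))  # ~28.8 s downstream offset
```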
QUESTIONS MOST COMMONLY ASKED
Why do traffic signals on the side street change so quickly?
Many traffic signals operate in a semi-actuated mode where the green times are assigned to the side street based on the traffic volume from the side street. If all the vehicles waiting at the intersection clear before the maximum green time is reached, the green signal will change to amber. This provides more green time for the main street, which makes the overall operation more efficient.
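A minimal sketch of this "gap-out" behavior, with hypothetical timing parameters (actual controller settings vary by intersection):

```python
def side_street_green(detector_gaps, min_green=7.0, max_green=30.0, gap_limit=3.0):
    """Illustrative semi-actuated green: extend while vehicles keep arriving.

    detector_gaps: seconds between successive vehicle detections. The green
    ends ("gaps out") once no vehicle arrives within gap_limit seconds, or
    when max_green is reached, whichever comes first.
    """
    elapsed = min_green  # a minimum green is always served
    for gap in detector_gaps:
        if gap > gap_limit or elapsed >= max_green:
            break  # demand has cleared (or timed out) -> change to amber
        elapsed = min(elapsed + gap, max_green)
    return elapsed

# A short platoon clears quickly, returning green to the main street early:
print(side_street_green([1.2, 0.9, 1.5, 6.0]))  # gaps out after ~10.6 s
```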
Why do I have to wait so long for the signal to change?
At actuated traffic signals, the green phase for the side street will only occur with the detection of a vehicle or someone pushing a pedestrian push button. The length of delay before getting a green will depend on when the call for a green on the side street was received and if the main street traffic has been satisfied. In a coordinated system, the call may be further held up to maintain the main street progression. In these instances, the side street movement can only be serviced after the main street traffic stream has passed through the intersection. It can be frustrating when there is no traffic on the major street and you must wait.
How are the pedestrian crossing times calculated?
The time for pedestrian "Walk" and "Flashing Don't Walk" intervals is based on the distance a pedestrian must cross at the intersection. We calculate the pedestrian crossing time based on the average pedestrian walking speed and the roadway width. At locations where there are a lot of seniors or children, a slower walking speed is used to lengthen the pedestrian "Flashing Don't Walk" intervals.
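In other words, the clearance time reduces to distance divided by walking speed; the 1.2 m/s default below is a commonly used planning assumption, not necessarily the City's published figure:

```python
def pedestrian_clearance(roadway_width_m, walking_speed_mps=1.2):
    """Crossing time (s) = crossing distance / assumed walking speed."""
    return roadway_width_m / walking_speed_mps

print(round(pedestrian_clearance(14.0), 1))       # ~11.7 s at the 1.2 m/s default
print(round(pedestrian_clearance(14.0, 1.0), 1))  # 14.0 s where slower walkers are common
```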
Does somebody have to be hurt before a traffic signal will be installed?
Traffic signals don't always prevent collisions and don't always help traffic control. At some locations, collisions actually increased after signals were installed. The potential for pedestrian/vehicle conflict may also increase, since some motorists do not always recognize the rights of pedestrians at a crosswalk. Where traffic signals are installed without justification, motorists see them as unnecessary, resulting in non-compliance with traffic laws. When this happens, traffic signals become a liability to safety rather than an asset.
Why don't all traffic signals have left turn phasing?
There are proper applications for left turn phases. They should not be overused or implemented at signals that can function effectively and safely without protecting left turns. The requirement for left turn phasing depends on vehicle volume, the number of left turns, delay, collisions and intersection geometry. The implementation of left turn phasing reduces the amount of green time available for all other movements. In some instances, left turn phasing results in the interruption of the progressive traffic flow within a coordinated system.
"date": "2017-10-18T02:03:52",
"dump": "CC-MAIN-2017-43",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822668.19/warc/CC-MAIN-20171018013719-20171018033719-00456.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9393432140350342,
"score": 3.21875,
"token_count": 1472,
"url": "https://www.greatersudbury.ca/live/transportation-parking-and-roads/roads/traffic-and-transportation/traffic-signals/"
} |
Phone & cell phone recycling.
Deutsche Telekom is involved in the reuse and recycling of old cell phones. Together with its customers, the company makes an important contribution to conserving natural resources and protecting the climate and the environment with these activities. Deutsche Telekom considers reusing used, functional cell phones to be a key component of sustainability, because the extended lifetime of the cell phones means their carbon footprint improves significantly.
We have been monitoring the Cell Phone Recycling CR KPI since 2010 in order to emphasize the importance of cell phone recycling. This KPI compares the collected cell phones, measured in units and the equivalent in kilograms, with the number of customers at a Group company.
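The report does not spell out the formula, but the comparison could be sketched as a per-customer ratio along these lines, with an assumed average handset weight and purely hypothetical input figures:

```python
def recycling_kpi(phones_collected, customers, avg_phone_kg=0.1):
    """Collected phones per 1,000 customers, in units and kilograms (illustrative only)."""
    per_thousand_units = phones_collected / customers * 1000
    return per_thousand_units, per_thousand_units * avg_phone_kg

units, kg = recycling_kpi(762_000, 35_000_000)  # hypothetical customer base
print(f"{units:.1f} phones (~{kg:.1f} kg) per 1,000 customers")
```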
Information on the development of CR KPIs can be found under Strategy & management.
According to the WEEE (Waste Electrical and Electronic Equipment) Directive, it is mandatory for manufacturers of new electrical and electronic devices in Europe to take back old devices at no charge and dispose of them properly. Deutsche Telekom is involved in this process beyond the scope of legal requirements with targeted initiatives for collecting old devices. In order to meet our legal obligations to inform consumers of how to properly dispose of electronic products, we include an informative flyer with each new device.
Data privacy poses a particular challenge when collecting cell phones. Telekom's Data Privacy and Data Security department is involved in the collection system to guarantee data privacy and keep customer data from being abused.
In 2011, we were able to collect a total of around 762,000 cell phones within the scope of a number of initiatives. The proceeds went to the following charitable organizations:
- BILD hilft e.V. "Ein Herz für Kinder"
- Deutsche Umwelthilfe e.V. and
- German Doctors e.V.
In April 2010, we set ourselves the goal, within the scope of the Changemaker Manifest by Utopia, to collect around 1 million old cell phones within one to two years in Germany for recycling. We achieved this goal at the end of 2011.
Cell phone collection initiatives for "Ein Herz für Kinder."
As part of an extensive cell phone collection initiative, which was conducted by BILD hilft e.V. "Ein Herz für Kinder" between October and December 2011, we were able to collect an additional 500,000 old cell phones compared with the previous year.
- We kicked off Germany's largest cell phone collection campaign ever in the German TV show "Wetten, dass...?" at the beginning of October 2011. In the show, we bet that we would be able to collect 500,000 or more old cell phones within only ten weeks. At the "Ein Herz für Kinder" gala show in December, host Thomas Gottschalk congratulated us for winning the bet. We managed to collect a total of 585,758 old devices within 10 weeks. Deutsche Telekom donated EUR 2 for each collected cell phone to the "Ein Herz für Kinder" charity. 140 sales and service employees took part in the charity event where they took phone calls from people who wanted to donate their old cell phones.
- Within the scope of an extensive campaign that took place between October 18 and 25, 2011, we mailed envelopes that could be used to send in old cell phones to Telekom free of charge to a total of 22.7 million households. Additionally, we provided address labels at www.telekom.com/nachhaltig-handeln for download so that users could send back their old cell phones free of charge.
- On November 15, 2011, the Telekom Truck collected old cell phones at the German national soccer team's game against the Netherlands.
In addition to the cell phone recycling initiative, we launched more campaigns during the year. The proceeds were also included in the sum donated. At the gala event, which was broadcast on ZDF, René Obermann was able to donate a total amount of EUR 1.5 million.
Deutsche Telekom also collects old fixed-line phones and recycles them.
Around two-thirds of all German residents have a total of 83 million old or unused cell phones in their homes, a 15 percent increase compared with 2010. This figure is based on a study that was carried out by the German industry association Bitkom in December 2011. These old cell phones could be used in other areas. They also contain valuable raw materials, including metals such as gold, silver and copper that could be recycled.
Metals and ores are often extracted under conditions that are problematic for people and the environment. Because of this, it is very important to collect, reuse and recycle old communications devices. Some of the cell phones returned by customers can be reused after making a few repairs and deleting all personal data. People in Asia or Africa, for example, are then able to purchase fully functioning cell phones at lower prices. Defective devices and cell phones that require extensive repairs or data deletion are recycled.
Supporting non-profit organizations.
Deutsche Telekom's cell phone recycling project not only protects the environment but also promotes non-profit organizations. The proceeds that Deutsche Telekom generates went to different organizations including BILD hilft e.V. "Ein Herz für Kinder," Deutsche Umwelthilfe (DUH) and German Doctors.
The charity organization, Bild hilft e.V. "Ein Herz für Kinder", has been raising money since 1978 and supporting children and institutions, both on a national and international level. The organization provides fast, non-bureaucratic support when children need help. In collaboration with renowned hospitals, the non-profit organization makes sure that children who cope with serious illness and cannot be treated adequately in their home countries receive access to vital surgical and therapeutic care in Germany. The charitable organization also provides emergency aid in war zones and after natural disasters. "Ein Herz für Kinder" focuses its efforts mainly on Germany. The charity organization, which is run by the German daily BILD, promotes soup kitchens, children's hospitals and day care centers as well as sports and educational projects.
Deutsche Umwelthilfe (DUH) uses the donations to help important nature conservation projects such as maintaining natural river landscapes and forests as well as funding environmental education projects. 774 environmental and nature conservation projects were financed last year by donations from Telekom Deutschland. The majority of these projects were conducted in collaboration with groups collecting old cell phones at schools and local environmental and nature conservation groups. Registered cell-phone collecting groups receive a donation for every cell phone they send in for their own projects, which include initiatives to redesign school buildings or campuses to make them more environmentally friendly or to conduct environmental education activities. In 2011, the new Internet portal www.handysfuerdieumwelt.de was launched by DUH in collaboration with Telekom Deutschland.
The voluntary aid organization German Doctors works to improve the healthcare situation and living conditions of people in developing countries. With donations from cell phone collection activities, we are able to support mobile outpatient facilities in Mindanao (Philippines), which provide basic care to locals.
Even though T-HT Hrvatski Telekom did not specifically advertise for people to recycle their cell phones, the number of people who gave their old cell phones to the company increased. A total of 11,300 devices were handed into the company in 2011. That means that a total of 111,300 devices have been collected since the recycling activities were initiated in 2005.
T-Mobile Netherlands collected a total of 1,800 old cell phones during a campaign week from March 25 through April 1, 2011, under its "GSM return plan" program, which the company launched in 2010. The campaign week was conducted in collaboration with a local radio station's donation campaign. As usual, donations went to the War Child aid organization, which was able to help 3,000 children with the donations collected during the campaign week. The GSM return plan program collected 87,870 cell phones by late 2011.
T-Mobile Netherlands initiated the nationwide cell phone recycling program, the "GSM return plan," together with TNT Post on January 5, 2010. By the end of 2010, the cooperation partners were hoping to collect 100,000 devices. However, this goal was too ambitious. The partners were able to recycle some 40,000 devices in the first year. Profits generated from recycling the phones are donated to the War Child initiative and the World Food Program. Two other companies from the ICT industry have been participating in the "GSM return plan" since late 2010. Together, they intend to promote the important topic of cell phone recycling in the Netherlands.
T-Mobile Czech Republic has been including information on the importance of cell phone recycling as well as information on online billing in all bills sent to its customers since 2010. T-Mobile Czech Republic collected almost 8,000 used cell phones in 2011.
Recycling bins for old cell phones and accessories are available at all T-Mobile USA stores. All new cell phone packaging will be labeled with the words "Recycle your cell phone" in order to emphasize the importance of recycling. The company is also conducting a communication campaign. During the reporting period, the company collected and recycled a total of 180,000 cell phones.
T-Mobile USA also promotes recycling through its involvement in industry initiatives such as CTIA—The Wireless Association. With the participation of T-Mobile USA representatives, the "green" CTIA working group developed an industry recommendation for recycling cell phones. All contracting parties should develop a collection program and initiate measures to raise public awareness of the issue. They should also increase their collection rates by 20 percent by 2015 and use more recycled materials in production.
Cosmote group companies again helped to protect the environment and conserve resources during the reporting period. Cosmote Romania conducted a cell phone recycling campaign from July through December 2011. The company distributed 27,000 flyers throughout the country and hung posters in the stores. A total of 123 kilograms of old devices and accessories were collected.
Cosmote Greece successfully continued Join Us in Recycling, its internal recycling initiative for cell phones, accessories, portable batteries and printer cartridges, at Cosmote and Germanos. A good 8 metric tons of used cell phones and accessories as well as almost 45 metric tons of old portable batteries were recycled. Almost 1,100 printer cartridges had been properly disposed of by the end of 2011. The recycling initiative was accompanied by an extensive communication campaign.
|Sub-targets|Status of implementation/measures|
|Development of a Group-wide waste strategy|Target achieved.|
"date": "2015-08-30T09:52:01",
"dump": "CC-MAIN-2015-35",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065241.14/warc/CC-MAIN-20150827025425-00172-ip-10-171-96-226.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9527304172515869,
"score": 2.65625,
"token_count": 2229,
"url": "http://www.cr-report.telekom.com/site12/customers/phone-cell-phone-recycling"
} |
Nineteen percent of the world's reptiles are estimated to be threatened with extinction, states a paper published February 14 by the Zoological Society of London (ZSL) in conjunction with experts from the IUCN Species Survival Commission (SSC).
The study, printed in the journal Biological Conservation, is the first of its kind summarising the global conservation status of reptiles. More than 200 world-renowned experts assessed the extinction risk of 1,500 randomly selected reptiles from across the globe.
Of the estimated 19% of reptiles threatened with extinction, 12% are classified as Critically Endangered, 41% as Endangered and 47% as Vulnerable.
Three Critically Endangered species were also highlighted as possibly extinct. One of these, a jungle runner lizard Ameiva vittata, has only ever been recorded in one part of Bolivia. Levels of threat remain particularly high in tropical regions, mainly as a result of habitat conversion for agriculture and logging. With the lizard's habitat virtually destroyed, two recent searches for the species have been unsuccessful.
Dr. Monika Böhm, lead author on the paper, said: "Reptiles are often associated with extreme habitats and tough environmental conditions, so it is easy to assume that they will be fine in our changing world.
"However, many species are very highly specialised in terms of habitat use and the climatic conditions they require for day to day functioning. This makes them particularly sensitive to environmental changes," Dr. Böhm added.
Extinction risk is not evenly spread throughout this highly diverse group: freshwater turtles are at particularly high risk, mirroring greater levels of threat in freshwater biodiversity around the world. Overall, this study estimated 30% of freshwater reptiles to be close to extinction, which rises to 50% when considering freshwater turtles alone, as they are also affected by national and international trade.
Although threat remains lower in terrestrial reptiles, the often restricted ranges, specific biological and environmental requirements, and low mobility make them particularly susceptible to human pressures. In Haiti, six of the nine species of Anolis lizard included in this study have an elevated risk of extinction, due to extensive deforestation affecting the country.
Collectively referred to as 'reptiles', snakes, lizards, amphisbaenians (also known as worm lizards), crocodiles, and tuataras have had a long and complex evolutionary history, having first appeared on the planet around 300 million years ago. They play a number of vital roles in the proper functioning of the world's ecosystems, as predator as well as prey.
Head of ZSL's Indicators and Assessment Unit, Dr Ben Collen, says: "Gaps in knowledge and shortcomings in effective conservation actions need to be addressed to ensure that reptiles continue to thrive around the world. These findings provide a shortcut to allow important conservation decisions to be made as soon as possible and firmly place reptiles on the conservation map."
"This is a very important step towards assessing the conservation status of reptiles globally," says Philip Bowles, Coordinator of the Snake and Lizard Red List Authority of the IUCN Species Survival Commission. "The findings sound alarm bells about the state of these species and the growing threats that they face globally. Tackling the identified threats, which include habitat loss and harvesting, are key conservation priorities in order to reverse the declines in these reptiles."
The current study provides an indicator to assess conservation success, tracking trends in extinction risk over time and humanity's performance with regard to global biodiversity targets.
ZSL and IUCN will continue to work with collaborating organisations to ensure reptiles are considered in conservation planning alongside more charismatic mammal species.
- Monika Böhm et al. The conservation status of the world’s reptiles. Biological Conservation, 2013; 157: 372 DOI: 10.1016/j.biocon.2012.07.015
"date": "2014-09-23T22:51:48",
"dump": "CC-MAIN-2014-41",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657140379.3/warc/CC-MAIN-20140914011220-00171-ip-10-234-18-248.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.936851441860199,
"score": 3.640625,
"token_count": 784,
"url": "http://www.sciencedaily.com/releases/2013/02/130217085303.htm"
} |
Mac OS X Security Checklist
To immediately secure your Macintosh system, take the three steps below:
- Install anti-virus software
If you don't have anti-virus software installed, you may leave your
system vulnerable to viruses, Trojan horses, spam, and other intrusions.
Students, faculty and staff can download anti-virus software from the BevoWare site. You should configure your software to scan regularly and set your virus definition (DAT)
files to auto-update.
- Use the Mac OS X firewall
The built-in firewall protects your machine against Internet attacks and
random network scans. To turn on the firewall:
- In 10.4, open System Preferences and select Sharing. Click Firewall and then click Start.You can then choose to allow specific services.
- In 10.5, open System Preferences and select Security. Click Firewall and then select the level of control you want for the firewall. You can choose to allow specific services and applications.
- Run the software update application
Keeping your software up-to-date helps protect your system. Macintosh provides a Software Update application that you can use to schedule regular automatic updates (see the sketch after this list).
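For readers comfortable with Terminal, the sketch below shows how the firewall and update checks above might be automated in Python. It assumes the stock `defaults` and `softwareupdate` command-line tools; the `com.apple.alf` preference path is an assumption based on the 10.5-era application firewall, the script must run on the Mac being checked, and some commands may require administrator rights.

```python
#!/usr/bin/env python
"""Sketch: report firewall state and pending updates on Mac OS X."""
import subprocess

def firewall_state():
    # globalstate: 0 = off, 1 = on for specific services, 2 = essential only
    out = subprocess.check_output(
        ["defaults", "read", "/Library/Preferences/com.apple.alf", "globalstate"])
    return int(out.decode().strip())

def pending_updates():
    # `softwareupdate -l` lists updates that have not yet been installed.
    out = subprocess.check_output(["softwareupdate", "-l"])
    return out.decode("utf-8", "replace")

if __name__ == "__main__":
    state = firewall_state()
    if state == 0:
        print("Firewall is OFF - turn it on in System Preferences.")
    else:
        print("Firewall is on (mode %d)." % state)
    print(pending_updates())
```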
For increased security, you should also take the following steps:
Create a user account
You should not use your administrator account for everyday tasks on your computer. Your administrator account allows you to install software, but using it all the time is dangerous because viruses and Trojan horses that run from the administrator account can cause greater harm to your computer. To prevent damage to your system, you should create a user account for everyday use.
Set strong passwords on all accounts
All users on the UT network are expected to choose strong passwords and guard them well. If someone else obtains your password, they can access your private data (including e-mail), alter or destroy your files and perform illegal or inappropriate activities in your name. To learn more about choosing strong passwords, visit the Password Dos and Don'ts topic.
Disable file sharing
File sharing should be disabled unless you are purposefully using it to copy items from one computer to another, or to allow a known party to access files stored on your computer.
Be careful when using peer-to-peer file sharing applications
Although peer-to-peer (P2P) applications such as Napster, Gnutella, iMesh, Audiogalaxy Satellite, and KaZaA are a good way of sharing information, if you do not use them appropriately you may degrade the performance of the University's network, unknowingly share your personal data, inadvertently violate federal copyright law, or expose your computer to malicious code. Read What You Need to Know about Peer-to-Peer File-Sharing Applications.
Use secure file transfer
When transferring files over the Internet you should always use a secured connection. SSH and SFTP applications encrypt and protect your passwords and information. If you use Telnet or a non-secure FTP program, your information is sent in the clear for anyone to see. SSH and SFTP clients are available for download on the BevoWare site.
"date": "2014-12-21T15:47:05",
"dump": "CC-MAIN-2014-52",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802771592.49/warc/CC-MAIN-20141217075251-00027-ip-10-231-17-201.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8415318131446838,
"score": 2.796875,
"token_count": 666,
"url": "http://security.utexas.edu/personal/mac/macosx.html"
} |
CHECKLIST OF COGNITIVE DISTORTIONS
- All-or-nothing thinking: You restrict possibilities and options to only two choices: yes or no (all or nothing).
- Over generalization: You view a single, negative event as a continuing and never-ending pattern of defeat.
- Negative Mental filter: You dwell mostly on the negatives and generally ignore the positives.
- Discounting the positives: You insist your achievements or positive efforts do not count.
- Jumping to conclusions:
- Mind-reading: You assume that people are reacting negatively to you without any objective evidence.
- Fortune-Telling: You predict that things will turn out badly without any objective evidence.
- Magnification or minimization: You blow things way out of proportion or minimize their importance.
- Emotional reasoning: You base your reasoning from your feelings: "I feel like a loser, so I must be one."
- "Mustabatory thinking" or "Shoulding All Over Yourself": You criticize yourself or other people with "musts," "shoulds," "oughts," and "have tos."
- Labeling: Instead of saying "I made a mistake," you tell yourself "I'm an idiot" or "I'm a loser."
- Personalization: You blame yourself almost completely for something for which you were not entirely responsible.
Adapted from Feeling Good by David D. Burns, MD.
However, thermography is not an alternative to mammography. Mammography remains the main way of screening for early signs of breast cancer and uses low doses of X-rays.
How thermography works
Thermography, also known as thermal imaging, detects the rise in skin temperature which may occur when cancer cells are multiplying.
Increased blood flow makes skin temperature increase. This rise in skin temperature is what breast thermography is aiming to detect.
There is a debate among doctors about how useful thermography is as a way to diagnose breast cancer. The American Cancer Society has said that it will take time to see if it is better than, or equal to, current tests.
Process of a thermograph
Breast thermography is a non-invasive physical test that lasts for around 15 minutes. It is also "non-compressive," which means that it does not put force on, or squeeze, the breast, as is the case with breast mammography exams.
Some people are worried about the force put on breasts in a mammogram, so prefer the idea of a thermograph. However, people should not be put off by a mammogram as it is currently considered the gold standard for breast cancer screening.
Thermography uses digital infrared imaging to detect subtle changes in the breast based on symmetry. It looks for clear abnormalities in one breast in comparison to the other. This makes it more difficult to use on individuals who have undergone a mastectomy.
The process can be undertaken in a doctor's office. During the thermograph a person will be asked to stand around 6-8 feet away from the camera.
What thermographs detect
To understand a thermograph, it is necessary to know two things about cancerous breast tissue, compared to normal breast tissue. These are:
- there is more metabolic activity (biochemical reactions)
- there is increased blood flow
These aspects of breast cancer tissue result from the cancer cells doing all they can to sustain themselves and grow. A side effect of this activity is a rise in skin temperature.
Ultra-sensitive cameras and computers can detect this increase in temperature and produce high-resolution images.
Using thermography with other tests
Thermography may be used alongside mammograms, where the thermograph would be used as a baseline record to be checked against future mammograms.
Thermography can be used alongside other tests, such as mammographic screening.
Normally if a thermograph alone is used, the images taken will be kept on record and used for future evaluations. The idea is that an initial breast thermography test, which can be used on people as young as 18, will provide a baseline.
Future tests can then be compared to this baseline to see if there are any changes or abnormalities that develop. These might be part of a yearly physical examination.
Follow up tests
If abnormalities are detected, then follow-up procedures will be required to investigate further. This may include a mammogram.
These follow-ups can also rule cancer out, as the images could be showing a host of other breast diseases. When abnormalities are present it could be a sign of:
- fibrocystic disease
- vascular disease
A doctor will be able to plan a careful program for further diagnosis and monitoring. They can also identify if treatment is required.
After the test, the reports are divided into five categories. This is known as the TH (thermobiological) grading system. The categories are as follows:
- TH-1: Symmetrical, bilateral, nonvascular (non-suspicious, normal study)
- TH-2: Symmetrical, bilateral, vascular (non-suspicious, normal study)
- TH-3: Equivocal (low index of suspicion)
- TH-4: Abnormal (moderate index of suspicion)
- TH-5: Highly abnormal (high index of suspicion)
Follow-up exams are needed at different times for each category, as follows:
- TH-1 and TH-2: every year
- TH-3: every 6 months
- TH-4 and TH-5: every 3 months
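Were this schedule encoded in, say, a clinic recall system, it reduces to a small lookup table. The Python sketch below simply mirrors the intervals listed above; the function name and structure are invented for illustration and this is not clinical guidance.

```python
FOLLOW_UP_MONTHS = {
    "TH-1": 12,  # normal study: yearly
    "TH-2": 12,  # normal study: yearly
    "TH-3": 6,   # low index of suspicion: every 6 months
    "TH-4": 3,   # moderate index of suspicion: every 3 months
    "TH-5": 3,   # high index of suspicion: every 3 months
}

def months_until_followup(grade):
    """Return the recall interval in months for a TH grade."""
    return FOLLOW_UP_MONTHS[grade]

print(months_until_followup("TH-3"))  # -> 6
```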
These examinations will be carried out in conjunction with other medical examinations and can be done at the doctor's office. People should stay in regular contact with their doctor throughout all these tests and procedures.
People do not have to decide between breast thermography or mammography, but may use them both. Using the two together can be more effective.
Breast thermography is less effective at detecting small cancers or cancers deeper in the breast tissue.
The use of thermography, mammography, and a clinical exam is known as the "multimodal approach". Using this approach can help identify around 95 percent of early stage cancers.
Thermography uses sophisticated cameras and computers to capture infrared photographs.
A combination of sophisticated infrared cameras and computers are used to conduct thermography. Thermal imaging technicians capture an infrared photograph, or heat image, of the breast. This can then be printed in high resolution for a doctor to study, or may be sent to them electronically.
Thermography has been experimented with in medical science for many hundreds of years. However, it wasn't until 1972 that the Department of Health, Education and Welfare announced that thermography was "beyond experimental."
This announcement applied to the use of the technology for many areas, including the evaluation of the breast. The advance in technology since then has seen thermography become recognized in multiple areas of medicine, including breast health.
The cameras used in breast thermography produce ultra-sensitive, high-resolution infrared images. These images show heat patterns and identify changes in the temperature of the skin and blood flow.
Many other technologies that test for breast cancer work in other ways that do not involve looking for heat patterns. These tests include:
Breast thermal risk index
Other factors can affect the results of a breast thermography. So, often doctors will use the Breast Thermal Risk Index to help ensure more accurate results. This includes:
- age of person
- family history of breast cancer
- medication, including birth control and hormone pills
- if the person is overweight
Thermography offers the opportunity for early detection of breast cancer and has a number of specific benefits.
Earliest possible detection
A thermograph enables cancer to be detected sooner than other procedures.
Compared with other procedures being used on their own, thermography makes it possible for cancer or pre-cancerous growth to be detected up to 10 years sooner than they may otherwise have been.
Thermography means that people at potential risk of developing breast cancer can be monitored closely.
If breast cancer is caught at an early stage through the use of thermographs, this increases a person's treatment options and should ultimately lead to a more positive outlook.
However, breast thermography only has the potential to identify early warning signs. It cannot diagnose breast cancer on its own, but it may point to potential signs of a problem.
The conduct of council elections is regulated by the Local Government Act 1989 and the Local Government (Electoral) Regulations 2016. The day-to-day management of the election process is undertaken by the Victorian Electoral Commission.
All council elections are held every four years on the fourth Saturday in October. Victorian state and local government election dates are both fixed-term, but are scheduled to occur two years apart from each other.
The Local Government Act makes the Victorian Electoral Commission the statutory provider for all council elections. The Returning Officer will be the Electoral Commissioner or his or her appointee.
To be eligible to vote at a council election, people must be on the state or local council voters’ roll 57 days before election day. This is called the ‘entitlement date’.
Candidates must submit their nominations in person to the Returning Officer before the close of nominations. Nominations close at 12 noon, 32 days before the election day.
There are two ways in which people can vote, depending on which system each council has chosen – postal elections and attendance elections. The close of voting differs.
Key election dates are publicised in the lead-up to an election, enabling people to participate fully in the process. The Returning Officer, who runs an election, is also able to provide more detail of the election timeline.
Two methods of counting votes are used in council elections, depending on whether or not the election is for a single-member ward.
The preferential voting system is used where a ward is electing a single councillor. This is similar to the system of vote counting used for single member electorates in the State Legislative Assembly and the Federal House of Representatives.
The Proportional Representation method is used for counting election results for unsubdivided councils and multi-member wards. Proportional representation is designed to elect candidates in proportion to their share of votes.
Proportional representation is used for Australian Senate elections and for the State Legislative Council. However, voting in council elections does not include above-the-line voting as it does in these federal and state systems (with the exception of Melbourne City Council – see below).
In a proportional representation system, a candidate does not require absolute majority of votes to be elected. Instead they must obtain a quota of votes, which is calculated in accordance with a statutory formula.
The quota is calculated by dividing the total number of formal votes by one more than the number of vacancies to be filled in the ward or district, and then increasing the result by one vote. For example, in an unsubdivided district where there are seven councillors to be elected and 80,000 formal votes have been cast, the quota is (80,000 ÷ (7 + 1)) + 1 = 10,001.
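A worked version of that arithmetic is sketched below in Python. Integer division is assumed (any remainder is discarded before the single vote is added), which matches the worked example; this illustrates the formula only, not the full statutory counting procedure.

```python
def quota(formal_votes, vacancies):
    """Statutory quota: formal votes divided by one more than the number
    of vacancies (any remainder discarded), then increased by one vote."""
    return formal_votes // (vacancies + 1) + 1

# The unsubdivided-district example from the text:
print(quota(80000, 7))  # -> 10001
```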
The vote counting process in a proportional representation system is undertaken as follows:
The Victorian Electoral Commission has more information about the ways votes are counted.
The Returning Officer will publicly declare results after the votes have been counted and scrutineers have had time to examine the record of the count. The declaration of the election may be delayed if the Returning Officer decides to conduct a recount.
Melbourne City Council elections are different. Separate provision for the capital city council's elections is laid down in the City of Melbourne Act 2001.
The Lord Mayor and Deputy Lord Mayor nominate as a team and are elected on a single ballot paper, using preferential voting.
Candidates for the other councillor positions may nominate to run in groups and the ballot paper used is similar to that of the Australian Senate and the Victorian Legislative Council. This includes provision for above-the-line voting for group tickets. These votes are counted using proportional representation.
Following the dismissal of the elected council in April 2016, a Geelong Citizens' Jury will recommend an electoral structure for its future council. This includes the positions of mayor and deputy mayor. During the Parliamentary debate of the legislation to dismiss the council, the government committed to consult the community before a new council is elected in October 2017. Administrators have been appointed to act as the council until a new council is elected.
At the previous elections, in 2012, the Mayor of Greater Geelong City Council was directly elected by all voters in the municipal district and twelve councillors were elected to represent 12 single member Wards.
Similar to the practice with federal and state government elections, Victorian councils observe special arrangements during the period leading up to a general council election. These are commonly referred to as ‘caretaker arrangements’ and they apply during the ‘election period’.
The special caretaker arrangements that apply to Victorian councils broadly aim to avoid the use of public resources in a way that may unduly affect the election result. They also minimise councils making certain types of decisions that may unduly limit the decision-making ability of the incoming council.
The ‘election period’ is defined in the Local Government Act as the period between the last day of nominations and the election day. This is a 32-day period in Victorian local government elections.
By law, a council may not make the following types of decisions, either directly or by delegation, during an election period:
Councils may voluntarily place additional limits on their decision making during an election period to ensure they are not unduly committing an incoming council. These limits are often described in the council’s election period policy. The election period policy must be made available on the council's website and available in hardcopy for public inspection. All councils must ensure that copies are given to each councillor.
The Local Government Act prohibits a council from printing, publishing and distributing material that is electoral matter during an election period. Electoral matter is broadly defined as ‘matter that is intended or likely to affect voting in an election’. This limitation does not apply to electoral material that is only about the election process.
The Chief Executive Officer must certify all council publications during the election period to ensure they don’t contain electoral matter. Some councils describe, in a detailed policy, the way in which they apply this principle in practice.
Documents published before the election period commences (but still available after commencement, for example on the Council’s website) do not require certification and are not caught by the prohibition. Statutory documents permitted under legislation (such as rate notices, food premises registrations and parking fines) may continue to be disseminated by councils during the election period without limitation.
Occasionally, a position on council becomes vacant between general elections. This can occur if a councillor dies or resigns, or if a councillor ceases to be eligible to hold office.
Such vacancies are either filled by a by-election or by a countback, depending on how the departing councillor was elected.
A by-election is called if a vacancy occurs in a single-member ward where votes were counted using the simple preferential system.
A by-election must be held within 100 days of the vacancy occurring, but is not required if the vacancy occurs in the last six months before a general election is scheduled. When it’s necessary to avoid a clash with the Christmas/New Year holiday period, a by-election may be held up to 150 days after the vacancy occurs.
In a by-election, a complete election is conducted for the ward. This involves a new nomination process and voters casting votes in the same way as in a general election.
A countback is a method for filling vacancies in multiple-member constituencies where votes were counted using proportional representation.
The countback process involves re-using the ballot papers that were used to elect the councillor whose position has become vacant. A new preferential count is conducted using these votes. The candidate who obtains a majority of the votes after the distribution of preferences, is invited to take the vacated position on council.
In effect, this process works out which candidate the majority of voters that originally voted for the vacating councillor expressed as their next available preference.
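To make the mechanics concrete, here is a minimal Python sketch of a preferential count of the kind used in single-member wards and countbacks: each ballot counts for its highest-ranked continuing candidate, and the last-placed candidate is excluded until someone holds an absolute majority. The ballots, candidate names, and tie-breaking below are invented for illustration; the statutory procedure also covers formality rules and exhausted ballots in more detail.

```python
from collections import Counter

def preferential_count(ballots):
    """Each ballot is a list of candidate names in preference order."""
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        # Credit each ballot to its highest-ranked continuing candidate.
        tally = Counter(
            next(c for c in ballot if c in remaining)
            for ballot in ballots
            if any(c in remaining for c in ballot)
        )
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):  # absolute majority reached
            return leader
        remaining.discard(min(tally, key=tally.get))  # exclude last-placed

ballots = [["A", "B"], ["B", "A"], ["B"], ["C", "A"], ["A"]]
print(preferential_count(ballots))  # "A" wins once C's preferences flow
```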
The countback has three important benefits:
The Victorian Electoral Commission website contains more information about by-elections and countbacks.
Electoral structures and boundaries for councils need to be regularly reviewed to ensure that representation continues to be democratic and appropriate. This is particularly important in rapidly developing regions.
The Local Government Act 1989 requires every council to undergo an electoral representation review at least every 12 years. This means that the council will be reviewed before every third general election. The Victorian Electoral Commission is required to conduct the reviews.
Following each review, the Victorian Electoral Commission submits a final report to the Minister for Local Government that recommends:
In which tradition did Buddha say this?
That would be the Buddha Himself, see the story of the life of the Buddha. Even children know this.
However, asceticism is part of the practice; e.g., eating one meal a day is asceticism.
That is not asceticism in the way that the Buddha was referring to it as, or the kind he was cautioning against as being not part of a Middle Path.
The Buddha was against practices that pushed the body to the point of being potentially lethal, or making oneself ill or injured, or damaging the body. Such as the types that He Himself had practiced, and the types that the Tendai sect are now practicing.
He said they were unnecessary and unhelpful. He likened it to a string on a musical instrument being tightened to the point of snapping (see heart attacks from the Tendai practice)
Having one meal a day is fine if you are doing a more gentle practice.
Having one meal a day when you are traveling potentially more than twice the distance of an olympic marathon, with only 2 hours of sleep a night is suicidal.
The Buddha would not have endorsed a practice that caused people to have heart attacks from sheer exhaustion and malnourishment in ratio to the energy and calories expended.
Bodhidharma sat meditating facing a wall for 9 years...
I sincerely doubt that Bodhidharma spent nine years facing the wall without adequate food to keep him alive.
Sitting still in meditation requires very few calories. And unless he had an assistant to bathe him, feed him, prepare his food, and change his chamber pot, he got up sometimes to go to the bathroom, cook, and eat. He also likely slept at least some; there's a limit to how far one can deprive oneself of sleep before one's body just sleeps with one's eyes open. (In fact, according to one story, if it's accurate, we know he fell asleep: apparently he got so frustrated with himself [anger, acting on anger], probably not realizing that there was indeed a physical limit to sleep deprivation, that he cut his own eyelids off [not a smart thing to do, nor would it stop his body from going to sleep]. It wasn't until later that he realized a middle path, but he likely beat himself up quite a bit before finding it.)
"Life is full of suffering. AND Life is full of the Eternal
IT IS OUR CHOICE
We can stand in our shadow, and wallow in the darkness,
We can turn around.
It is OUR choice." -Rev. Basil Singer
" ...out of fear, even the good harm one another. " -Rev. Dazui MacPhillamy | <urn:uuid:a9af6a5d-e891-4b94-b750-f93501a1f2ab> | {
"date": "2019-01-18T17:31:17",
"dump": "CC-MAIN-2019-04",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660258.36/warc/CC-MAIN-20190118172438-20190118194438-00576.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9778343439102173,
"score": 2.53125,
"token_count": 582,
"url": "https://dharmawheel.net/viewtopic.php?p=163419&sid=350c40068753db800951deea2cc19c93"
} |
Using Goals to Improve Performance and Accountability
A goal is a simple but powerful way to motivate people and communicate priorities. Leaders in states, local governments, Federal programs, and in other countries have demonstrated the power of using specific, challenging goals (combined with frequent measurement, analysis, and follow-up) to improve performance and cut costs. These stretch goals can be effective at changing the way an organization does business. This Administration has embraced the power of goal-setting as a way to improve the Federal Government's performance and accountability to the American people. Federal agencies are using near-term and longer-term goals in a variety of ways to improve their effectiveness and efficiency.
The Federal Government operates more effectively when agency leaders, at all levels of the organization, starting at the top, set clear measurable goals aligned to achieving better outcomes. It is also vital that they regularly engage their organizations and delivery partners in critical reviews of progress on these goals. This leads to the discovery of what works and what does not. Federal agency leaders are increasingly using goals and measurement to reinforce priorities, motivate action, and illuminate paths to improvement. Agencies are also using goals in partnership efforts to improve outcomes.
Agencies establish a variety of performance goals and objectives to drive progress toward key outcomes. Agencies outline long-term goals and objectives in their strategic plans, and annual performance goals in annual performance plans. Twenty-four major Federal agencies have also identified a limited number of two-year Agency Priority Goals in the FY2015 budget, aligned with their strategic goals and objectives. Agency Priority Goals target areas where agency leaders want to achieve near-term performance acceleration through focused senior leadership attention. The Administration has also adopted a limited number of Cross-Agency Priority Goals to improve cross-agency coordination and best practice sharing.
Sunday, June 8, 2008
Keep the "skeer" on 'em
At the battle at Brice's Crossroads on June 10, 1864, General Forrest gave the Union cavalry one of the classic beatings of the Civil War. The following morning, Forrest said to his artillery commander "the way to whip an enemy is to get 'em skeered, and then keep the skeer on 'em". He then chased them all the way to Memphis.
That was said 144 years ago. The first reenactment of that battle was on June 6, 1954, and quite a few of the Bearcats that helped with the battle at that time are still around.
The "commander" of the forces, Bruce McElroy and his volunteers pictured above, put on quite a show. Movie studio cameramen were in attendance, and you can suppose footage of the skirmish was used somehow later. We don't have all of the names of the guys in the photo. Maybe someone will remember them. Some of them are...you!
This week we'll post all the photos we have of that event and the Confederate Ball. Hope you see something interesting.
Posted by Carl Houston at 3:51 AM
Name: _________________________    Period: ___________________
This test consists of 5 short answer questions and 1 (of 3) essay topics.
Short Answer Questions
1. What are the doctors questioning Jim about in Act 3, Scene 2?
2. What does Jim tell Dr. Bellman he wants to do in the last scene?
3. What is Dr. Bellman's assessment of Jim at the end of Act 2, Scene 4?
4. How many days have passed between Act 2 and Act 3?
5. What does Miss Wingate tell Jim to do at the end of Act 2, Scene 3?
Write an essay for ONE of the following topics:
Essay Topic 1
Describe Tom, his role in Ann's scheme, and 3 ways that he contributed to Jim's surrender. Why were they significant and what reason does the author have for waiting until the second to last act to make Tom appear?
Essay Topic 2
What is relevant about the time in which the play was written? How does it affect Jim's outlook on his life? Explain why Jim was likely to feel as desperate as his did with his career, life, and self-worth. Use examples from the play to provide a detailed analysis of how men were perceived during that time as well as the way the theater/entertainment industry operated differently than it does today.
Essay Topic 3
The other patients are used to contrast Jim's situation at different times during the play.
1) What are the most significant differences between Jim and the other patients? Why are those differences necessary to his position?
2) Describe the patients that you think are the most similar to Jim and the ones that are the most different. For each of them, what role do they play in the plot and the overall message?
William Waynflete (1395–1486)
Magdalen’s founder and most important benefactor, Waynflete began his long career as schoolmaster at Westminster, going on to become Provost of Eton College, bishop of Winchester, and Chancellor of England. In 1481 his gift to the college of about 800 printed and manuscript books would have ensured that Magdalen’s was then one of the best-stocked libraries in Oxford. Waynflete left the college other gifts, including land with an income of about £600, his bishop’s mitre and crozier (confiscated by Parliament in the 17th century), and his buskins and boots.
56. William Waynflete’s boots
Gift of William Waynflete (1400–86)
Recent research suggests that these boots were made in the mid-15th century. The fabric is red Italian velvet, brocaded with silver-gilt bell-flowers, and lined with felt. They have leather platform soles that show signs of wear. Waynflete left Magdalen other late medieval footwear: pontifical stockings of pink lampas silk, with flowers embroidered in polychrome silk, as well as the tawed goatskin linings of shoes that were once covered in ivory silk.
William Warham (c. 1450–1532)
William Warham, born a couple of generations after Waynflete, followed an even more illustrious career: Fellow of New College, Archbishop of Canterbury, Chancellor of England, Chancellor of Oxford University, patron and friend of Erasmus. He was forced to accept Henry VIII as supreme head of the church, and though he protested, was unable to prevent the complete submission of church to state. Warham added to the libraries of Winchester, All Souls and New College. Of the many books he gave New College, 17 manuscript and 41 printed volumes survive. He also presented a pair of stunning silk gloves.
57. Aristotle, Nicomachean Ethics and Politics
New College, MS 228
Gift of William Warham (1450?—1532)
This translation into Latin was made by the Italian humanist Leonardo Bruni (1370–1444), and the manuscript was written by the English humanist John Russell (c. 1430–94, Fellow of New College from 1449 and Sub-Warden in 1461). It was part of Warham’s gift to the college library.
58. William Warham’s gloves
New College, Chattels 1755
Gift of William Warham
These late 15th/early 16th-century gloves are made of red knitted silk. Each is decorated with a sunburst around ‘IHS’ (the monogram of Jesus Christ) at its centre, has a broad flower and green quatrefoil wrist, and a double band of silver thread on each finger. Most college account books of this time record payments for beautifully fashioned gloves (but not so elaborate as these), given as gifts to dignitaries on special occasions.
William Gray (c. 1414–78)
Gray, who came from a distinguished Northumberland family, was at Balliol by 1431. He was appointed Chancellor of Oxford University c.1440. Later he visited Cologne, Florence, Padua, Ferrara, and Rome, collecting manuscripts on the way. After serving as King’s Proctor to the papal court, he became bishop of Ely in 1454. The collection of books Gray left Balliol has been described as ‘by far the finest, as well as the largest, private collection to survive in England from the Middle Ages’. It doubled the size of the Balliol library, which required the addition of four bays to house it.
59. [Quintilian], Declamationes maiores
Balliol College, MS 139
Gift of William Gray (c. 1414–78)
Interest in the works of the rhetorician and educator Quintilian (c. 35 – c. 100) was revived by the Italian humanists. The Declamationes, considered in Gray’s time to be by Quintilian, is a set of exercises in rhetorical speech-making, legal cases argued on behalf of different parties. Gray probably commissioned this fine copy of a popular humanistic text during a stay in Italy 1444–54.
Thomas Allen (1540–1632)
Allen was a mathematician connected with Trinity College, friend of Sir Thomas Bodley and donor to the Bodleian in 1601. He began collecting manuscripts in the 1560s, at a time when monastic libraries had but recently been dispersed, and when some Oxford colleges were replacing their older manuscript collections with printed books. He left some of his books to Trinity, and the larger part to his pupil Sir Kenelm Digby, who presented his collection to the Bodleian Library in 1634. It is in great measure due to Allen that the Bodleian has one of the largest collections of manuscripts reflecting English monastic learning, including an exceptional record of medieval mathematics and science.
60. Euclid, [Kitab Tahrir Usul li-Uqlidis] Euclidis elementorum geometricorum libri tredecim (Rome, 1594)
Gift of Thomas Allen (1540–1632)
Euclid was first known in the West through Latin translations of Arabic versions. This is the first printing of the 13th-century Arabic translation by Nasir al-Din al-Tusi of Euclid’s Elements, which is among the oldest extant Greek mathematical treatises, and one of the most influential texts in the history of mathematics. It made a fitting gift to his old college from the mathematician Thomas Allen.
61. La Chanson de Roland
Bodleian Library, MS. Digby 23 (part 2)
Gift of Thomas Allen via Sir Kenelm Digby
‘The Oxford Roland’, the earliest manuscript of this, the oldest major work of French literature, was written in Anglo-Norman in the 12th century. Perhaps it was already bound with a copy of the Latin translation of Plato’s Timaeus when the latter was bequeathed to Oseney Abbey by Master Henry of Langley (died c. 1263). The volume was left by Thomas Allen to Sir Kenelm Digby, who gave it to the Bodleian in 1634.
William Laud (1573–1645)
President of St John’s, active Chancellor of Oxford University who created a new code of statutes, and Archbishop of Canterbury under the patronage of Charles I, Laud also showed himself an energetic and wide-ranging collector of books. He amassed substantial collections of manuscripts both ancient and newly copied, which he gave to the Bodleian and to his beloved St John’s, where his gifts included manuals and textbooks for educational use. He set the University Printing Press on a firm basis, and promoted Greek, Arabic and Hebrew studies. The Long Parliament of 1640 impeached Laud for treason, and he was executed in 1645.
62. Al-Sakkaki, Yusuf ibn Abi Bakr ibn Muhammad, Miftah al-‘ulum [The Key to the Sciences]
St John’s College, MS 122
Gift of William Laud
A 14th-century Arabic manuscript compendium of the linguistic sciences, acquired by Sir Kenelm Digby and presented to St John’s by Laud. This is representative of the sort of practical book – manuals and textbooks – that Laud gave to his college.
63. Laud’s cap
St John’s College
Tradition has it that this is the cap Laud wore to his execution, a tradition possibly inspired by the ghoulish idea that the cutaways were created by the executioner’s axe. However, an eyewitness, Simon Foster, stated that in fact Laud wore ‘his ordinary hat faced with taffeta’.
64. ‘Codex Laud’
Bodleian Library, MS. Laud Misc. 678
Gift of William Laud
This pre-Columbian (?15th-century) screenfold manuscript from Mexico had lost its national identity by the time Archbishop Laud obtained it from some unknown source in 1636. It came in a finely decorated leather box, perhaps Spanish or Italian, to which Laud’s secretary mistakenly added a label in Latin, ‘Book of Egyptian hieroglyphics’. Its images of Mexican deities, rituals and the native calendar bolstered Laud’s aims of extending the range of cultures to be studied at his University.
Thomas Barlow (1608/9–1691)
Barlow was successively Bodley’s Librarian, Provost of The Queen’s College, Lady Margaret Professor of Divinity and bishop of Lincoln. His knowledge of contemporary Oxford philosophical and theological studies was encyclopaedic. He was a friend and correspondent of Thomas Hobbes. After his death his library, including manuscript and printed books, and many pamphlets on contemporary political and ecclesiastical affairs, was divided between the Bodleian and The Queen’s College, where the bequest occasioned the building of the magnificent Upper Library.
65. Christine de Pisan, The Fayt of Armes and of Chyvalrye, translated into English and printed by William Caxton, 1489
The Queen’s College, Sel.a.113
Gift of Thomas Barlow (1608/9–1691)
Christine de Pisan (1364 – c. 1430) may have been the first woman professional author, a career she successfully followed to support her family in financial difficulties. She is probably best known for her poetical and persuasive rhetorical work, such as The Book of the City of Ladies, but her repertoire extended to military strategy.
66. ‘The Barlow psalter’
Bodleian Library, MS Barlow 22
Bequest of Thomas Barlow
The early 14th-century ‘Barlow psalter’ is the masterpiece of its anonymous East Anglian artist, ‘The Barlow master’. It had belonged to Walter de Rouceby, a monk of Peterborough Abbey (died 1341). The Latin psalter is preceded by a sequence of New Testament scenes, all illuminated to the highest quality, and survives in its medieval binding with painted edges.
Thomas Marshall (1621–85)
Marshall was awarded his BA in 1645. A Royalist, he went abroad, and was chaplain to the Merchant Adventurers in Holland from 1650 to 1672, where he met leading philologists. He became Rector of Lincoln College in 1672, then Dean of Gloucester. A serious student of theology, philology, Germanic and oriental studies, he was always a generous scholar, procuring manuscripts for his colleagues, assisting them in editing their research, and even searching out continental type for John Fell at the University Press. He bequeathed hundreds of printed and manuscript books to the Bodleian; his gift to his college of 1040 books and 77 volumes of Civil War tracts transformed Lincoln’s library.
67. Medieval bindings on Dutch manuscripts
Bequest of Thomas Marshall
A selection of well-preserved medieval bindings from Thomas Marshall’s bequest to Oxford University.
68. Francis Junius to Thomas Marshall, 31 December 1669
Bodleian Library, MS. Marshall 134
Gift of Thomas Marshall
Junius wrote to Marshall from The Hague, describing the visit of the bookseller Robert Scott, who was angling for a trade discount on his sales of the Gothic and Anglo-Saxon Gospels:
‘But, sayd he, if I should desire to have 50 or 100 copies to bee sent to me, some thing I hope should be abated of the price, according to the proportion of copies. You have reason to expect, answered I, and all that busines I doe referre to Dr Marshall …’
69. Gothic and Anglo-Saxon gospels (Dordrecht, 1665)
Lincoln College, SR 1
Gift of Thomas Marshall
The Anglo-Saxon gospels with facing translation in Gothic, accompanied by two volumes of commentary, formed Thomas Marshall’s most important work. He edited the gospels at the request of his fellow philologist Francis Junius (1598–1677). Marshall presented this copy to his college.
William Wake (1657–1737)
William Wake was a canon of Christ Church, bishop of Lincoln, Archbishop of Canterbury, and author of polemical pamphlets that brought him to prominence during the Catholic controversy following the accession of James II in 1686. He was also a distinguished historian – The State of the Church and Clergy of England (1703) was his most important publication – and naturally collected books and manuscripts to support that work. He and others left Christ Church such extensive collections of books and manuscripts that a new library had to be built to house them.
70. A Byzantine Evangeliary
Christ Church, MS 28
Bequest of William Wake (1657–1737)
This manuscript book of the gospels was written in the late 14th century in Constantinople by the scribe Gregorios, possibly in the famous scriptorium at the Hodegon monastery. It remained in monasteries for some centuries before Archbishop Wake acquired it in around 1735. The opening shows a miniature of St Mark and the first lines of his gospel.
71. Palm leaf Book of Common Prayer
Christ Church, MS. 233
Gift of John Sneyd (1763–1835)
Archbishop Wake was famous for his correspondence with colleagues around the world, among them the missionary Benjamin Schultze (1689–1760). This unusual Book of Common Prayer was translated into Tamil by Schultze in 1726 for his missionary work in Tranquebar, India. He had a number of projects to translate the Bible and other Christian materials into a variety of Indian languages.
72. The interior of the Codrington Library, showing marble statue of Codrington by Henry Cheere (1702–81)
All Souls College
Printed in the 1829 Oxford Almanack (with later hand-colouring), this view was drawn by C. Wild and engraved by Joseph Skelton.
Edward, fifth Baron Leigh of Stoneleigh (1742–86)
Edward was the first Leigh to go to Oriel, receiving his MA in 1764 and DCL a few years later. He devoted a great deal of his time to improving his house at Stoneleigh, collecting art, furniture, books, and music. Unfortunately Leigh died without heir, leaving a will that would create legal disputes into the 19th century. Among its provisions, the will gave his scientific instruments and his entire library to Oriel. The donation occasioned the building of a new Library designed by the architect James Wyatt and completed in 1791.
73. Henry Purcell, The Indian Queen
Oriel College, U.a.36
Gift of Edward, fifth Baron Leigh of Stoneleigh
The autograph score of The Indian Queen (composed 1695), like that of almost all Purcell’s theatre music, is lost. This early copy (c. 1700) in the hand of a London copyist known to be associated with Henry Purcell’s brother, Daniel, is therefore one of the most important extant sources of the work. It was given to Oriel by Edward Leigh, whose Warwickshire home of Stoneleigh housed a large collection of music and a number of musical instruments. The scores were in common use in the house.
Robert Mason (1782–1841)
The son of the miller of the village of Hurley in Berkshire, Mason matriculated at St Edmund Hall in 1807, and took a BA and DD as a member of The Queen’s College. He became curate of Hurley in 1812 and retired in 1824, becoming increasingly reclusive. Mason left £36,000 to the Bodleian, and his collection of antiquities and £30,000 to Queen’s, stipulating that it be spent on the library within three years. The librarian purchased modern books and the greatest editions of earlier printed books, which made Queen’s the best college library of the time. To accommodate this gift, the arcade below the Upper Library was enclosed following the design of Charles Cockerell.
74. Giovanni Boccaccio, De claris mulieribus (Ulm: Johan Zainer, 1473)
The Queen’s College, Sel.a.126
Bought from the bequest of Robert Mason (1782–1841)
This is the first printed edition of Boccaccio’s biographies, in Latin, of famous women (composed c. 1362); it is illustrated with hand-coloured woodcuts. A previous owner was Pope Pius V (1504–72).
75, 76. ‘Reliques of Antiquity’
The Queen’s College
Bequest of Robert Mason
Robert Mason also left his collection of Egyptian, Roman, Grecian, and other ‘Reliques of Antiquity’ to The Queen’s College, among which are this Egyptian stele (18th Dynasty, Thebes), and ten amulets including a winged scarab. Mason’s collection is now housed in the Ashmolean Museum.
Recording the gifts
College libraries have been recording gifts for centuries, at first simply by entering donors’ names in the books they gave, or mentioning them in account books when related items – such as book-chains to secure the new acquisitions – were purchased. From the 17th century onward colleges, perhaps following the example of the newly refounded University Library, began to record donations in special volumes, sometimes bound in fine leather, with clasps of precious metals, or beautifully illustrated, like the University College benefactors’ book shown here.
77. Hertford College Library benefactors’ book
Hertford College Archives
Begun in 1656, during the Principalship of Henry Wilkinson (1616/17–1690). The first entries are for books given by Wilkinson himself; later gifts include a sizeable library of early medical books received in 1692. Thomas Hobbes hesitated to present his works to the college, fearing that they might be considered too controversial, until the Vice-Principal invited him to send a copy (see item 15).
78. Lord Nuffield’s donations book
William Morris, Viscount Nuffield (1877-1963) began, aged sixteen, repairing and then building bicycles before moving on to the manufacture of motorcycles and motorcars. He has been described as ‘the most famous industrialist of his age’, but he is just as famous for his philanthropic achievements. These included the establishment of a medical school at Oxford in 1936, and the foundation of Nuffield College the following year. He bequeathed most of his remaining estate to the college.
79. University College Library benefactors’ book
University College, UC:BE1/MS1/3
In 1674, shortly after erecting a new library funded by donations from Old Members and friends, University College compiled this benefactors’ book recording all known gifts of books or funds for purchasing books; the earliest donation dates from 1406. Its title-page bears this splendid depiction of the new library (which was turned into student rooms in the 1860s, when the present library was built). Discoveries made in the 1950s proved that the drawing is fairly accurate.
80. William of Wykeham
Bodleian Library, Lane Poole no. 10
Oil on canvas, 1195mm × 915mm, by William Sonmans (d. 1708), based on the posthumous portrait by Sampson Strong, c. 1596, at New College. It is one of the series of imagined portraits of college founders painted by Sonmans which hang in Duke Humfrey’s Library. The arms are those of Winchester College impaling those of the sitter. The views at the top show Wykeham’s two foundations: Winchester College on the right, and New College on the left.
The 17th and 18th centuries saw gifts of magnificent collections of manuscripts, printed books, music, prints and drawings valued for their beauty, antiquity, and associations; and great library buildings were designed to house them.
Such gifts continued in the 19th and 20th centuries. As authors’ and creators’ working drafts came to be valued, colleges received gifts from poets, writers and composers, as well as contemporary papers of historical importance, and memorabilia of life in college.
In 2012 an estimated 22,500 Canadians were diagnosed with colorectal cancer and 9,100 died from it.
Colorectal cancer is the second leading cause of death from cancer both in Nova Scotia and nationally.
Also known as colon cancer, colorectal cancer is a disease in which malignant cells grow in the tissue of the colon or rectum, forming tumors. In Nova Scotia, approximately 1,000 men and women are diagnosed every year, and about 350 of them will die from the disease.
The good news is that colon cancer is one of the most highly treatable cancers if caught early. The Canadian Cancer Society reports that Canada has one of the best colorectal cancer survival rates in the world – slightly lower than the US, but better than most of Europe.
Despite the high treatability, most Canadians have not had a screening test and are confused about how and when it should happen. The key to surviving colon cancer is early detection.
From Treasury Vault to the Manhattan Project
The U.S. War Department borrowed 14,000 tons of government silver in its drive to make the world’s first atomic bomb
The U. S. government’s drive to produce enough weapons-grade uranium for the world’s first atomic bomb was an experimental pursuit. Physicist Cameron Reed of Alma College describes how the Manhattan Project built a massive secret facility in Tennessee to scale up technology invented in a University of California at Berkeley laboratory. The technology had roots in the discipline of nuclear physics rather than in warfare. Reed explains why 14,000 tons of silver borrowed from the U.S. Treasury was vital to the success of the enterprise.
Civil War era music, at the Music Library
|Cover to the sheet music for Bonnie Blue Flag, courtesy of the Laurie Music Library|
The Laurie Music Library, located on the lower level of the Douglass Library, holds a large collection of 19th century sheet music covering most of that century. While there is certainly an aesthetic value attached to the collection, it is the sheet music's historical value that is most significant, since by studying the music and lyrics much can be learned about daily life in various decades of the 1800s. One of the more fascinating parts of this collection is the music from 1860-1865, the Civil War era.
April 2011 marked the 150th anniversary of the start of the war, when Confederate forces attacked the Union soldiers stationed at Fort Sumter, South Carolina. In commemoration of this historical event, we present here two songs from our collection.
The first song is one of the most popular Confederate songs, "The Bonnie Blue Flag," (listen to the song at the Cylinder Preservation and Digitization Project here) arranged by Harry Macarthy, the Arkansas comedian, and based on a traditional Irish tune, "The Irish Jaunting Car" (as performed by Polk Miller and his Old South Quartet.) First published in New Orleans in 1861, the song celebrates the symbol of the Confederacy.
When Northern treachery attempts our rights to mar,
we hoist on high the Bonnie Blue Flag, that bears a single star.
Hurrah! Hurrah! for Southern rights hurrah!
The following verses describe the secession of the other southern states, culminating in the final verse
Then cheer, boys, cheer, raise the joyous shout,
For Arkansas and North Carolina now have both gone out;
And let another rousing cheer for Tennessee be given -
The Single Star of the Bonnie Blue Flag has grown to be Eleven.
In response, a year later, Oliver Ditson & Co. of Boston, MA published the song, "The Bonnie, Red, White, and Blue, or Our Beautiful Flag," written by J.C.J. to the same tune as "The Bonnie Blue Flag." Its lyrics are a direct response to the Confederate tune, in which the Union replies
We never wished to harm you, we'll welcome you again
When you tear down the rebel flag, as brothers and as men.
When Sumter's walls were battered, what could we, freemen, do.
But rally round our beautiful flag, of Red White and Blue.
The Northern version of the song also encourages the "southern poor man" to
Strike off the captive's chain!
And rally round the starry flag to sing in loud refrain:
Hurrah! Hurrah! for a nation's rights hurrah!
These songs, as well as other songs of the era, serve as first-hand accounts of attitudes during the war. A greater understanding of views of the North, South, soldiers, and civilians can be attained by studying the music of this period. You can contact Michelle Oswell, Music/Performing Arts Librarian, at [email protected] to arrange to see these and other items from the 19th-century sheet music collection in the Music Library.
Story written by Sara Rizzo,
A student in the Masters of Library and Information Science program
Posted April 18, 2011
Role of krill versus bottom-up factors in controlling phytoplankton biomass in the northern Antarctic waters of South Georgia
Whitehouse, M.J.; Atkinson, A.; Ward, P.; Korb, R.E.; Rothery, P.; Fielding, S. 2009. Role of krill versus bottom-up factors in controlling phytoplankton biomass in the northern Antarctic waters of South Georgia. Marine Ecology Progress Series, 393, 69-82. DOI: 10.3354/meps08288
The extent to which Antarctic phytoplankton stocks are controlled by 'bottom-up' and/or 'top-down' factors is highly variable. Here we consider data collected at South Georgia during 3 summer surveys that recorded substantial hydrographic variability. A suite of bottom-up and top-down controlling factors were measured simultaneously at the mesoscale. Sea surface temperature varied by >2°C, macronutrients ranged from near-winter concentrations to near-depleted, while mean densities of a major grazer, krill Euphausia superba, varied between near-zero and >400 g wet mass m^-2. A general linear model was used to identify the main factors implicated in the observed differences in phytoplankton biomass. Despite east-to-west and on- to off-shelf temperature gradients, temperature per se was not implicated in phytoplankton variability. Also, while there was an abundance of NO3-N in surface waters, NH4-N was the key nutrient throughout. A domed relationship between phytoplankton and krill peaked between 2 and 4 mg chlorophyll a m^-3 and 6 and 30 g krill m^-2. The positive side of this dome was represented by the west off-shelf region downstream of South Georgia. Here, an ample supply of micro- and macronutrients promoted high primary production, and low densities of krill presumably had little grazing effect. This positive relationship between krill and phytoplankton biomasses was interpreted as krill accumulating in areas of good feeding conditions. The negative side of the dome was typified by the east off-shelf region, where macronutrients remained high, primary production rates were low, and krill densities were very high. The grazing rates calculated here suggested that krill affect their food stocks severely, and the negative krill-phytoplankton relationship in this region may reflect locally high krill densities driving down their food supply.
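The 'domed' krill-phytoplankton relationship described above is the kind of pattern a general linear model with a quadratic term can capture. The Python sketch below fits such a dome to synthetic data with statsmodels; the survey data are not reproduced in this record, so every number here is invented for illustration and the model is not the authors' actual specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
krill = rng.uniform(0, 400, 200)               # g wet mass m^-2 (synthetic)
x = np.log(krill + 1)
chl = 3.5 * np.exp(-(x - 2.8) ** 2) + rng.normal(0, 0.3, 200)  # mg m^-3

# A quadratic in log krill density captures a dome: rise, peak, decline.
X = sm.add_constant(np.column_stack([x, x ** 2]))
fit = sm.OLS(chl, X).fit()
b0, b1, b2 = fit.params                        # b2 < 0 implies a dome
peak_krill = np.exp(-b1 / (2 * b2)) - 1
print("estimated peak near %.0f g krill m^-2" % peak_krill)
```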
Item Type: Publication - Article
Digital Object Identifier (DOI): 10.3354/meps08288
Programmes: CEH Topics & Objectives 2009 onwards > Biodiversity; BAS Programmes > Global Science in the Antarctic Context (2005-2009) > DISCOVERY 2010 - Integrating Southern Ocean Ecosystems into the Earth System
Additional Keywords: Antarctic phytoplankton, macronutrients, NH4, temperature, grazing effect
NORA Subject Terms: Marine Sciences; Ecology and Environment
Date made live: 18 Nov 2010 10:59
"date": "2014-04-23T19:43:10",
"dump": "CC-MAIN-2014-15",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00491-ip-10-147-4-33.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8944441676139832,
"score": 2.515625,
"token_count": 718,
"url": "http://nora.nerc.ac.uk/11376/"
} |
Getting a diagnosis for Alzheimer’s can be a huge challenge – the only tests that confirm the presence of the disease (PET scans or spinal taps) are incredibly expensive and, therefore, not widely available. Scientists believe that early diagnosis is crucial to finding treatments for the disease so they can get people enrolled in clinical trials before the damage to their brains is too significant for drugs to have an effect. According to new research presented at the conference, they might be a step closer to this goal. Researchers at Washington University in St. Louis have developed a blood test that can confirm the presence of amyloid plaques in the brain. They compared the ratios of types of beta-amyloid in 41 people’s blood with PET scans showing how much beta-amyloid had aggregated in their brains and found a clear match.
Amyloid plaques start developing 15 to 20 years before the symptoms of Alzheimer’s disease start to show so detecting the plaques early on could be crucial in treating the disease before it progresses to an advanced stage. The study was met with cautious optimism by the Alzheimer’s Association but needs to be validated with a larger sample before it can be considered viable. The researchers are currently conducting the same tests with 180 people – watch this space.
Read the full breakdown in the New Scientist here. | <urn:uuid:12ce3ed0-6b77-4b51-82e4-124f4e7ed9dd> | {
"date": "2020-01-28T08:15:36",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251776516.99/warc/CC-MAIN-20200128060946-20200128090946-00336.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9520344734191895,
"score": 3.015625,
"token_count": 272,
"url": "https://www.beingpatient.com/blood-test-alzheimers/"
} |
Wandering Planet Lacks a Star
Astronomers using ESO’s Very Large Telescope and the Canada-France-Hawaii Telescope have identified a body that is very probably a planet wandering through space without a parent star. This is the most exciting free-floating planet candidate so far and the closest such object to the Solar System at a distance of about 100 light-years. Its comparative proximity, and the absence of a bright star very close to it, has allowed the team to study its atmosphere in great detail. This object also gives astronomers a preview of the exoplanets that future instruments aim to image around stars other than the Sun.
Free-floating planets are planetary-mass objects that roam through space without any ties to a star. Possible examples of such objects have been found before, but without knowing their ages, it was not possible for astronomers to know whether they were really planets or brown dwarfs — “failed” stars that lack the bulk to trigger the reactions that make stars shine.
But astronomers have now discovered an object, labeled CFBDSIR2149, that seems to be part of a nearby stream of young stars known as the AB Doradus Moving Group. The researchers found the object in observations from the Canada-France-Hawaii Telescope and harnessed the power of ESO’s Very Large Telescope to examine its properties.
The AB Doradus Moving Group is the closest such group to the Solar System. Its stars drift through space together and are thought to have formed at the same time. If the object is associated with this moving group — and hence it is a young object — it is possible to deduce much more about it, including its temperature, mass and the composition of its atmosphere. There remains a small probability that the association with the moving group is by chance.
The link between the new object and the moving group is the vital clue that allows astronomers to find the age of the newly discovered object. This is the first isolated planetary mass object ever identified in a moving group, and the association with this group makes it the most interesting free-floating planet candidate identified so far.
“Looking for planets around their stars is akin to studying a firefly sitting one centimeter away from a distant, powerful car headlight,” says Philippe Delorme, from Institut de planétologie et d’astrophysique de Grenoble, CNRS, Université Joseph Fourier, France, lead author of the new study. “This nearby free-floating object offered the opportunity to study the firefly in detail without the dazzling lights of the car messing everything up.”
Free-floating objects like CFBDSIR2149 are thought to form either as normal planets that have been booted out of their home systems, or as lone objects like the smallest stars or brown dwarfs. In either case these objects are intriguing — either as planets without stars, or as the tiniest possible objects in a range spanning from the most massive stars to the smallest brown dwarfs.
“These objects are important, as they can either help us understand more about how planets may be ejected from planetary systems, or how very light objects can arise from the star formation process,” says Philippe Delorme. “If this little object is a planet that has been ejected from its native system, it conjures up the striking image of orphaned worlds, drifting in the emptiness of space.”
These worlds could be common — perhaps as numerous as normal stars. If CFBDSIR2149 is not associated with the AB Doradus Moving Group it is trickier to be sure of its nature and properties, and it may instead be characterized as a small brown dwarf. Both scenarios represent important questions about how planets and stars form and behave.
“Further work should confirm CFBDSIR2149 as a free-floating planet,” concludes Philippe Delorme. “This object could be used as a benchmark for understanding the physics of any similar exoplanets that are discovered by future special high-contrast imaging systems, including the SPHERE instrument that will be installed on the VLT.” | <urn:uuid:f5af7a66-56d9-45a4-b318-83e567ce54ed> | {
"date": "2014-11-26T12:32:15",
"dump": "CC-MAIN-2014-49",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931006855.76/warc/CC-MAIN-20141125155646-00228-ip-10-235-23-156.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9413803219795227,
"score": 3.46875,
"token_count": 864,
"url": "http://www.laboratoryequipment.com/news/2012/11/wandering-planet-lacks-star"
} |
Roskilde 6 was found – together with eight other ships from Viking times and the Middle Ages – during the construction of the Museum Harbour for the Viking Ship Museum in 1997. The ship had been hauled up into shallow water in Roskilde's Viking harbour area and partially broken up. Therefore, only part of the bottom was preserved. Despite this it is one of the most remarkable Viking ship finds yet seen.
The feature that makes the most immediate impression, is the length of the ship. The keel is almost intact and measures 32 m. Adding on the stem and stern gives a total length for the whole ship approaching 36 m – the longest Viking ship yet discovered. The distance between the ribs is about 80 cm, allowing the approximate number of oarsmen to be calculated. There were probably 39 pairs of oars and, accordingly, 78 oarsmen to serve them. A ship of this size was without doubt the property of a king or earl and probably one we are acquainted with from written sources. The age of the ship has not yet been established exactly. The preliminary results of the dendrochronological studies only show us that the ship was built after AD 1025.
It is not only the size of Roskilde 6 which is remarkable. Its great length forced the shipbuilders to adopt unusual measures and solutions. As a consequence, the 32 m long keel is made up of three sections. The use of keel extensions, loter, is not in itself unusual, but the way in which the central keel section and the extensions are joined together is. In Viking ships it was normal to use simple vertical through-scarfs or hooked scarfs to join the keel and the loter together. These were simple to make, but did little to counteract the up-and-down movements of the stem and stern in the waves. These scarfs are less than three times as long as the keel is wide at the joint. In Roskilde 6, the joints between the central keel section and the 4 m long loter are almost 2 m long and are ingeniously shaped to have a bracing effect in both the vertical and the horizontal plane. The intention was clearly to make the whole of the long keel behave, as far as possible, as a single piece of timber. Perhaps it gives us an indication of where the shipbuilders had to focus their attention and improve their construction methods as the warships of the Late Viking Age were built progressively longer and longer. It is therefore striking that the surviving lot from the longship in Hedeby Harbour contributed in practice just as much to the length of the ship, i.e. rather more than 2 m, as the loter from Roskilde 6 – but in the Hedeby ship it was possible to make do with a vertical hooked scarf of only 25 cm. Part of the explanation could be, however, that the two ships were built for different sailing conditions. Hedeby 1 was an extremely narrow vessel, whereas Roskilde 6's width has provisionally been reconstructed at about 3.7 m. Even so, it was still built narrower than the Irish-built Skuldelev 2.
The ship was built with great care and with carefully selected materials. This can be seen, for example, in the up to 8 m long oak planks which were used for the ship’s hull, but also in the elegant form of the ship’s internal components. The keelson was held in place by carefully carved double knees, which were spiked partly to the sides of the keelson and partly to the upper surface of the floor timbers. Between the ribs lie thin, shaped and fitted intermediate frames, which bound together the otherwise vulnerable transition between bottom and side. This is a typical longship feature which can also be seen on Hedeby 1 and Skuldelev 2, but conversely not on the smaller vessels such as Ladby and Skuldelev 5.
Roskilde 6 is today in the process of being recorded in full in the Viking Ship Museum’s Archaeological Workshop on the Museum Island. As this work progresses, it will be possible to reveal even more information about this unique find from the conclusion of the Viking Age.
Danish text: Jan Bill
Translation: Gillian Fellows-Jensen | <urn:uuid:825e6cb4-ef59-4d95-8507-16e9d5bafefa> | {
"date": "2014-10-01T10:10:55",
"dump": "CC-MAIN-2014-41",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663417.12/warc/CC-MAIN-20140930004103-00270-ip-10-234-18-248.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.973923921585083,
"score": 3.140625,
"token_count": 882,
"url": "http://www.vikingeskibsmuseet.dk/en/the-sea-stallion-past-and-present/longships-magnified/the-archaeological-sources/roskilde-6/"
} |
Those seemingly harmless plants you are growing may pose a risk to your four-legged friends. Certain plants contain toxins that, when ingested, threaten your dog’s health and can also put his life in danger. That is why it is important to grow only nontoxic, dog-safe plants. Several creeping plants, which are often used as ground cover, won’t harm your pooch should he nibble on them.
Sun-Loving Creeping Plants
Creeping zinnia and star jasmine are two nontoxic creeping plants that thrive in full sun and don’t pose a risk to your dog. Creeping zinnia (Sanvitalia spp.) is a cheery plant with bright yellow blooms and deep green foliage. This sun-loving annual grows in U.S. Department of Agriculture plant hardiness zones 9 through 11 to about 10 inches tall. Creeping zinnia works well as a ground cover or container plant. Star jasmine (Trachelospermum jasminoides) is a creeping vine that produces star shaped, highly fragrant white flowers and grows in sunny areas in USDA zones 8 through 11. It grows about 1 to 2 feet tall with twining stems that can trail for 18 to 20 feet. Star jasmine attracts birds and is tolerant of seacoast exposures.
Shade-Loving Creeping Plants
Two dog-safe creeping plants that grow best in shaded areas are baby’s tears and Oregon grape holly. Baby’s tears (Soleirolia soleirolii) is an evergreen perennial thriving in full to partial shade in USDA zones 9 through 11. This creeping, mat-forming plant reaches heights of about 1 to 2 inches tall, produces white flowers and is typically grown for its pale to lime green, round foliage. This quick grower may have an invasive tendency if not controlled. Also known as creeping mahonia, Oregon grape holly (Mahonia aquifolium) is a shade-loving broadleaf evergreen growing in USDA zones 5 through 8. It produces blue-black edible berries on a creeping form and is deer tolerant.
For year-round greenery that won’t harm your pooch, consider Swedish ivy or spider plant. Swedish ivy (Plectranthus australis) grows in partial shade in USDA zones 10 and 11. This fast-growing evergreen produces attractive glossy green foliage on a creeping form and white or pale purple blooms on tall flower stalks. Spider plant (Chlorophytum comosum) is another fast-growing evergreen that doesn’t pose a danger to your dog. It grows in shaded areas throughout USDA zones 9 through 11. Even though spider plants produce white flowers, the real attraction is the impressive white and green striped blade-like foliage that slightly arches.
Madagascar jasmine (Stephanotis floribunda) is a broadleaf tropical evergreen that grows in USDA zone 12 but works well as a houseplant. This creeping plant produces deep green foliage and white star-shaped flowers. It grows best in full sun to partial shade. Madagascar jasmine will require some type of support, such as trellis, since it can spread outward to 20 feet. Purple passion plant (Gynura aurantiaca) is another non-toxic houseplant. It grows in partial shade in USDA zones 10 through 12 and produces velvet-like leaves covered in purple hairs. This evergreen has weak stems that start out erect but -- as the stems age -- develop a sprawling form. Purple passion plant looks lovely growing in a hanging basket where its attractive foliage can creep out of the container and hang downward. | <urn:uuid:67683582-0153-4435-89f2-ad7bba63d8cf> | {
"date": "2018-09-23T06:43:56",
"dump": "CC-MAIN-2018-39",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159160.59/warc/CC-MAIN-20180923055928-20180923080328-00256.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8952614068984985,
"score": 2.8125,
"token_count": 768,
"url": "https://living.thebump.com/creeping-plants-safe-dogs-13477.html"
} |
Shale Oil and How It Changes the World
Oil shale, also known as kerogen shale, is an organic-rich fine-grained sedimentary rock containing kerogen (a solid mixture of organic chemical compounds) from which liquid hydrocarbons called shale oil can be produced. Shale oil is a substitute for conventional crude oil, and the USA has a lot of it. The global energy map is changing, with potentially far-reaching consequences for energy markets and trade. It is being redrawn by the resurgence in oil and gas production in the United States due to shale, and it could be further reshaped by a retreat from nuclear power in some countries, by continued rapid growth in the use of wind and solar technologies, and by the global spread of unconventional gas production.
Extracting shale oil from oil shale is potentially more costly than the production of conventional crude oil, both financially and in terms of environmental impact. Deposits of oil shale occur around the world, including major deposits in the United States of America. Estimates of global deposits range from 2.8 to 3.3 trillion barrels. This obviously will change the picture of global economics and politics.
Mining oil shale involves a number of environmental impacts, more pronounced in surface mining than in underground mining. They include acid drainage induced by the sudden rapid exposure and subsequent oxidation of formerly buried materials, the introduction of metals including mercury into surface water and groundwater, increased erosion, and sulfur-gas emissions.
Energy developments in the United States are profound and their effect will be felt well beyond North America — and the energy sector. The recent rebound in US oil and gas production, driven by upstream technologies that are unlocking light tight oil and shale gas resources, is spurring economic activity — with less expensive gas and electricity prices giving industry a competitive edge — and steadily changing the role of North America in global energy trade. By around 2020, the United States is projected to become the largest global oil producer (overtaking Saudi Arabia until the mid-2020s) and starts to see the impact of new fuel-efficiency measures in transport.
Taking all new developments and policies into account, the world is still failing to put the global energy system onto a more sustainable path. Global energy demand grows by more than one-third over the period to 2035 in the New Policies Scenario, with China, India and the Middle East accounting for 60% of the increase. Energy demand barely rises in OECD countries, although there is a shift away from oil, coal (and, in some countries, nuclear) towards natural gas and renewables.
Despite the growth in low carbon sources of energy, fossil fuels remain dominant in the present global energy mix.
Natural gas is the only fossil fuel for which global demand grows in all futures studied by the International Energy Agency in a 2012 report, but the outlook varies by region. Demand growth in China, India and the Middle East is strong: active policy support and regulatory reforms push China’s consumption up. In the United States, low prices and abundant supply see gas overtake oil around 2030 to become the largest fuel in the energy mix. Europe takes almost a decade to get back to 2010 levels of gas demand; the growth in Japan is similarly limited by higher gas prices and a policy emphasis on renewables and energy efficiency.
Coal has met nearly half of the rise in global energy demand since 2000, growing faster even than total renewables. Whether coal demand continues to rise strongly will depend on the strength of national policy measures that favor lower-emissions energy sources and on the deployment of more efficient coal-burning technologies. The policy decisions carrying the most weight for the global coal balance will be taken in China and India, which account for almost three-quarters of projected coal demand.
The world will be a different place by 2035 in more ways than one.
For further information see IEA Report.
Shale image via Wikipedia. | <urn:uuid:4bddc99d-9c6e-40b7-903e-ee956e953e7a> | {
"date": "2015-10-10T16:07:58",
"dump": "CC-MAIN-2015-40",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737958671.93/warc/CC-MAIN-20151001221918-00020-ip-10-137-6-227.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9440104365348816,
"score": 3.90625,
"token_count": 791,
"url": "http://www.enn.com/top_stories/article/45289"
} |
JFT2 Task 3: Presenter Notes
Slide 1: Title
Slide 2: The Utah Opera is an adhocracy culture. An adhocracy culture is one that is externally focused and values flexibility. This type of culture is adaptable, creative, and reacts to change quickly (Kreitner & Kinicki, 2010). The opera shows these qualities in its culture. The opera values flexibility and has tailored its business model to allow for adjustments in both the size of the opera and its fundraising projects. This allows it to adjust its operations in a timely manner as needed in order to meet profitability goals. The general nature of the opera fosters the creativity that the adhocracy culture requires, since it is an arts program. The opera focuses less on its budget and more on its fixed assets than the symphony does. The opera is also externally focused out of need. A majority of its income comes from ticket sales, so it must deliver enough high-quality performances to please the customers. Additionally, the opera is structured in such a way that decisions lie with a variety of directors who have the skills and knowledge to make decisions about their departments (DeLong, 2005).
The Utah Symphony is a hierarchy culture. A hierarchy culture is one that is internally focused and emphasizes stability and control. It values standardization, control, and a well-defined structure for authority and decision making. This is supported by having a Chairman of the board and a music director. The hierarchy culture also puts more emphasis on monitoring people and processes. The symphony shows these qualities in its culture. The symphony is presently classified as a Group II orchestra, but the director is attempting to turn it into a Group I symphony. The symphony is made more stable by the fact that the musicians are unionized, and the organization is slow to change. The symphony is also more budget focused than the opera is. In addition, the leadership control within the symphony lies with an...
"date": "2015-07-31T13:23:53",
"dump": "CC-MAIN-2015-32",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988305.14/warc/CC-MAIN-20150728002308-00336-ip-10-236-191-2.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9672350287437439,
"score": 2.640625,
"token_count": 399,
"url": "http://www.termpaperwarehouse.com/essay-on/Jft2-Task3/98276"
} |
When snow surveyors trek into the Sierra Thursday to measure the snow's water content, they will likely confirm dismal news already being reported by electronic sensors.
The Sierra snowpack, and the water it holds, is only 54 percent of average for this time of year.
The snow season started with some decent storms in November and December that put the Sierra above normal, but then the weather turned unseasonably dry, according to the Department of Water Resources.
The April 1 reading is considered the most important of the season.
This is when the snowpack is traditionally at its peak, before spring weather begins the melt.
The Department of Water Resources said a third of California's water comes from snow melt.
The agency estimates that it will be able to deliver 35 percent of the water requested by the 29 public agencies that distribute water through the State Water Project.
More than 4 million acre-feet of water is requested by the public agencies. The water goes to more than 25 million Californians and nearly a million acres of irrigated farmland.
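A quick back-of-envelope check of those figures, sketched in Python (the 4-million-acre-foot value is the article's "more than" lower bound, so the result is approximate):

requested_af = 4_000_000   # acre-feet requested by the 29 public agencies (lower bound)
allocation = 0.35          # DWR estimate: 35 percent of requests deliverable
print(f"{requested_af * allocation:,.0f} acre-feet deliverable")  # 1,400,000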
California reservoirs and rivers are around normal for this time of year, but water managers say those numbers will decline later in the spring as the below average snowpack is depleted. | <urn:uuid:37f169a8-15bd-4c17-b74b-41ae85f4b474> | {
"date": "2015-09-02T02:24:06",
"dump": "CC-MAIN-2015-35",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645241661.64/warc/CC-MAIN-20150827031401-00229-ip-10-171-96-226.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.962965190410614,
"score": 3.328125,
"token_count": 247,
"url": "http://m.kcra.com/news/Sierra-snowpack-half-of-seasonal-average/19502338"
} |
The June 10 issue of The New York Times Magazine scrutinized the income gap and showcased Democratic presidential hopeful John Edwards’ “War on Poverty” in a 34-page, 21,000-word article co-authored by four people.
But, buried deep within this collection of articles, in the portion written about political solutions, co-author Matt Bai admitted the middle- and lower-income classes are much better off than they were in the early part of the 20th century:
“According to the economists Thomas Piketty and Emmanuel Saez, the average income of an American taxpayer in 1929, using today’s dollars, was about $16,000 a year; the entire middle class, in other words, was poor by modern standards. It’s true that the official poverty rate, while fluctuating quite a bit, is pretty much unchanged from where it was 40 years ago (it was 14.2 percent in 1967, compared with just under 13 percent at last count), but it’s also true that what we call poverty has changed strikingly. When Johnson stepped onto that front porch in Inez, there were still rural poor who had no electricity, no running water, no primary-school education. Now most rural towns have access to satellite TV, and even the worst of the housing projects built in the 1960s — though thoroughly horrid places to live — come with solid roofs and indoor plumbing.”
So are things getting worse or better? It all depends on your perspective. And Bai attempted to explain differing perspectives on economic policy – with some wild results.
Redistribution of Wealth = Moderate ‘Center’ of Politics
A system that encourages personal initiative within a free market has long been the American tradition. Everybody is afforded the opportunity to follow the path they desire and make the most of it. Not according to Bai.
Bai described this system as a creation of the “far right” in America – where free markets are seen as “enabling some of the money flowing into Wall Street and corporate boardrooms to ‘trickle down’ to the middle class through spending and investment.”
If free markets are “far right,” what’s moderate? Redistribution of wealth.
This “center” of the economic debate, according to Bai, agrees with the “far right” that creating wealth is good – but “government has to redistribute some of the wealth by progressively taxing the affluent and giving the money back to the poor through carefully incentivized social programs and tax breaks.”
That system is eerily similar to Hillary Clinton’s philosophy of “shared prosperity.” "I prefer a 'we're all in it together' society," said Clinton to the Associated Press last month. "I believe our government can once again work for all Americans. It can promote the great American tradition of opportunity for all and special privileges for none."
Even further to the left on the spectrum, and the focal point of the article, are the “populist Democrats,” or the ones who “tend to gravitate” toward Edwards’ candidacy. The theory on this end of the spectrum is called the “predistribution” of wealth – “using the tools of government to divert money from the wealthiest Americans before they earn it” – and is romanticized by Bai.
These “Predistribution Democrats” advocate a system that requires little more than a high school diploma to achieve prosperity, according to Bai. Budget deficits are deemed a necessity in order to make “large-scale social investments in health care and job training.” The threat of inflation is ignored because “lower interest rates lead to tighter labor markets, lower unemployment and higher wages.” | <urn:uuid:6794ef35-adad-4c8d-b994-6c52b90120a9> | {
"date": "2016-05-05T11:11:02",
"dump": "CC-MAIN-2016-18",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860126502.50/warc/CC-MAIN-20160428161526-00184-ip-10-239-7-51.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9612463116645813,
"score": 2.59375,
"token_count": 791,
"url": "http://www.mrc.org/print/articles/ny-times-povertys-huge-issue-though-were-better-ever-wealth-redistribution-centrist-policy"
} |
Located above the kidneys, the two adrenal glands are part of the body's hormonal (endocrine) system. The outer layer of each gland — called the adrenal cortex — releases hormones that regulate metabolism, blood pressure, and characteristics such as hair growth and body shape. The inner region of each gland — called the adrenal medulla — makes hormones that control responses to physical and emotional stress.
Hormones produced by the adrenal glands include:
Cortisol: This hormone, which is released by the adrenal cortex, aids in metabolism and helps the body recover from physical stress due to surgery, injury, and infection.
Aldosterone: Produced in the adrenal cortex, this hormone regulates blood levels of sodium and potassium, which affect blood pressure and the balance of fluids and electrolytes.
DHEA: A precursor to male and female sex hormones (androgens and estrogens, respectively) produced by the adrenal cortex; DHEA levels typically decrease after age 30.
Catecholamines include epinephrine (adrenaline) and norepinephrine (noradrenaline), the “fight or flight” hormones released by the adrenal medulla that help the body respond to stress or fear. They can increase heart rate, blood pressure, breathing rate, muscle strength, and mental alertness.
Benign, Functional, and Malignant Tumors
Benign (noncancerous) adrenal tumors — called adenomas or nodules — are common, and are often found incidentally during diagnostic imaging for an unrelated health problem. About 5 to 10 percent of adrenal nodules are malignant (cancerous). Functional hormone testing can help distinguish between benign and malignant adrenal tumors.
Functional (hormone-producing) adrenal tumors may be found during tests to investigate hormone-related symptoms. Functional tumors cause a wide variety of symptoms, depending on which hormones they produce. Most functional tumors are benign, but a few are malignant and can spread (metastasize) beyond the adrenal gland to other parts of the body.
Malignant tumors of the adrenal glands are rare. An estimated 300 to 500 new cases of adrenal cancer occur in the United States each year. The most common type of adrenal cancer is adrenal cortical carcinoma. There are few known risk factors for adrenal cortical carcinoma, although people with certain genetic conditions may have a higher risk of developing this type of cancer.
Types of Adrenal Tumors
The treatment your doctor recommends will depend on what kind of adrenal tumor you have. The following are the most common types of adrenal tumors. Follow the links for more specific information about each tumor type, and how it is treated. | <urn:uuid:c34527d6-f829-42fc-a707-823a9ceba3dd> | {
"date": "2014-11-29T06:14:35",
"dump": "CC-MAIN-2014-49",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931013466.18/warc/CC-MAIN-20141125155653-00124-ip-10-235-23-156.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9005229473114014,
"score": 3.65625,
"token_count": 551,
"url": "http://www.mskcc.org/cancer-care/adult/adrenal-tumors/about-adrenal-tumors?glossary=on"
} |
Who Are You, Who Are We?
Through December 20, 2014
Beginning in January of 2014, a class of students from West High learned about video production, interviewing techniques, exhibit building, research skills and much more. To facilitate this process, the Community Storytelling Partnership was formed. Joining the Western Heritage Center were MontanaPBS, Billings Public Schools and the Billings Public Library. The partners share an interest in providing students with experiences that build lifelong skills.
In this year’s project, the students studied the events leading up to the formation of the Not in Our Town movement. They learned about the historical events of 1992-1994 and interviewed the local leaders involved. Throughout the process, the students collaborated in researching, writing, and problem solving, while gaining an understanding that everyone sees the story from their own perspective. As they built this exhibit they also built their critical thinking and conflict management skills. The Community Storytelling Partnership provided the tools, the training, and some guidance, but this is an exhibit from the students of Bruce Wendt’s West High class. We couldn't be more proud.
Secret Life of Artifacts: Native American Design
Through December 20, 2014
American Indian tribes developed clothing and belongings suited to a life of hunting on the Northern Plains. Even though the items were made for mobility, they were rendered with colors, shapes, and meanings that were both cultural and individual. This wonderful display of traditional beadwork, paintings, and objects highlights the finest of the Western Heritage Center’s collections.
Echoes of Eastern Montana: Stories from an Open Country
Through December 20, 2014
This interactive exhibit will share stories of the people of the Yellowstone River Valley and Northern High Plains. Visitors can watch interviews, listen to amazing stories, read personal diaries, peruse family photo albums, copy favorite recipes, learn new Crow and Northern Cheyenne words, play interactive games, and hear local music.
People in communities as diverse as Wibaux, Colstrip, Laurel, Hardin, Forsyth, Harlowton and Billings tell compelling stories of sacrifice and struggle and offer lessons about leadership, home, and family. Come laugh at outrageous tales and discover the changing world of Eastern Montana.
Billings: The Railroads Shape our Town
Billings, Montana is a railroad town. Since its inception in 1882, the history and shape of the town have been influenced by the railroads. Throughout Billings is evidence of the railroad's impact in planning, designing, and promoting the settlement of the region. This exhibit and short film illustrate how we can still see the impact of the railroad in Billings.
Dude Ranch Lobby
The museum’s lower gallery has been made over to replicate the lobby of a 1930s dude ranch lodge. Rustic western furniture, inspired by the designs of Thomas Molesworth, and a stone fireplace, provide the ideal setting to display paintings by James Kenneth Ralston, a regional artist inspired by the great stories of the West.
J.K. Ralston: History on Canvas
James Kenneth (J.K.) Ralston (1896-1987) was a noted western artist who lived in Billings for many years. In 1946, Ralston and his son built a log cabin to serve as the artist’s studio. In 2005, the cabin was moved to the Western Heritage Center and the cabin’s interior was restored to reflect his working environment. Ralston’s oil paintings and sketchbooks include scenes depicting his early years growing up on ranches and riding the range in Montana. He relied on family heirlooms and collected artifacts to help him create accurate depictions of famous western events. The Western Heritage Center merged with the J.K. Ralston Studio and now houses a significant repository of the famed artist’s letters, memorabilia and artwork.
In Voice of the Curlew (J.K. Ralston Studio, Inc.:1986) Ralston is quoted as saying:
"In looking back over the years, I must say the art game has been good to me. It has been rewarding far beyond anything I ever dreamed of as a small boy living on ranch along the Missouri River. Art was always the way I found to express myself and of the things that have meant so much to me and to my people."
I’m glad that the dice was so rolled out that to be a cowboy I was born. I saw the curtain rung down on the last of the old time range business in Montana. Like a lot of others, I hated to see it go. Now it is history and I am very, very glad that I lived in time and to see and be part of it.
I have been drawing pictures as far back as I can remember and I have made it my life’s work to try and make the old west live on canvas."
Photo: Billings Mayor Willard Fraser confers with James Kenneth Ralston in Ralston’s studio cabin, 1960s. The cabin is now located on the grounds of the Western Heritage Center.
American Indian Tribal Histories Project
The permanent American Indian Tribal Histories Project Exhibit provides visitors with an overview of Montana’s Native American tribes through maps, tribal flags and an explanation of their symbols, Crow and Northern Cheyenne tribal member oral histories and a chronology of the American Indian Tribal Histories Project, whose mission is to preserve and maintain American Indian tribal histories and culture. | <urn:uuid:f96a59d0-80a6-4fcf-89f4-1ec1dcd83c9c> | {
"date": "2015-01-29T14:17:52",
"dump": "CC-MAIN-2015-06",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115855094.38/warc/CC-MAIN-20150124161055-00039-ip-10-180-212-252.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9611377120018005,
"score": 2.59375,
"token_count": 1132,
"url": "http://www.ywhc.org/index.php?p=14"
} |
TCP/IP 40th anniversary
Three Billion People use TCP/IP
By the early 1970s, the practice of connecting once stand-alone computers together into general-purpose computer networks was barely five years old. But already there were more than half a dozen such networks: ARPANET, NPL Mark I, CYCLADES, and so on. ARPA itself was in the process of commissioning two more: the Packet Radio Network (PRNET) and the Satellite Network (SATNET).
But there was a big problem: none of these networks could talk to each other!
The solution came to several researchers in that early community around the same time. What was needed was a network of networks, a process known as "internetworking" or "internetting." By 1973, the European Informatics Network was experimentally connecting NPL in the UK and CYCLADES in France. Behind closed doors, Xerox's PARC research center was hooking up Ethernet to other local area networks that same year with its new PUP internetting protocol.
Bob Kahn at ARPA, who was funding the creation of PRNET and SATNET, had a very practical need -- to connect these military networks to each other and to the existing ARPANET. He met up with another ARPANET alum, Vint Cerf, and in one inspired session in May of 1974 they created the first specification for ARPA's own internetting protocol -- TCP, or Transmission Control Protocol.
It would be three years before they fully tested it, in a dramatic three-network international trial run from a research van in motion. It would be nearly 20 years of struggle before ARPA's protocol beat out all its rivals -- including heavyweight contenders from international standards organizations and computing giants like IBM and DEC.
And "The Internet" was born
By the early '90s Cerf and Kahn's protocol would emerge as the undisputed standard for internetting, or connecting computer networks to each other: the one we call "The Internet". The Web won a separate battle to become the dominant online system for navigating information across the Internet. Together, the Web running over the Internet beat out earlier alternatives from Minitel to CompuServe, and our familiar online world took off.
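To see what that protocol buys you today, here is a minimal sketch using Python's standard socket module: create_connection performs the TCP three-way handshake, and everything sent afterward rides on a reliable, ordered byte stream. The host and port are illustrative placeholders, not anything from the anniversary event.

import socket

def fetch_banner(host: str, port: int = 80) -> bytes:
    # create_connection resolves the name, completes the TCP handshake,
    # and returns a connected stream socket.
    with socket.create_connection((host, port), timeout=5) as sock:
        # A tiny HTTP request, carried on TCP's ordered byte stream.
        sock.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode("ascii") + b"\r\n\r\n")
        return sock.recv(4096)

print(fetch_banner("example.com").decode(errors="replace"))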
Forty years ago, Vint Cerf and Bob Kahn hammered out the rudiments of the standard that underlies today's online world and connects over 3 billion people. On May 10th, 2014, we were at Mitchell Park, Palo Alto, California, to celebrate this historic event -- the event that made the world a smaller place. | <urn:uuid:fbf06d89-bfb5-4f87-a9b0-7a15eb8f2781> | {
"date": "2016-05-01T00:28:35",
"dump": "CC-MAIN-2016-18",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860113541.87/warc/CC-MAIN-20160428161513-00062-ip-10-239-7-51.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9550782442092896,
"score": 2.984375,
"token_count": 536,
"url": "http://www.tcpip40.com/"
} |
As a favorite artist of Stalin, Alexander Gerasimov secretly painted pictures in the genre of “nude”
The name of the legendary artist Alexander Gerasimov, who lived and worked at a time when socialist realism dominated art, still provokes heated debate among critics and art historians. Many consider him a court artist who painted to please the authorities, and there is a significant element of truth in that. But there are facts with which you cannot argue… In essence, Gerasimov remained a subtle painter throughout his life, an Impressionist at heart, producing excellent still lifes, flowers, and lyrical sketches, as well as pictures in the "nude" genre.
Indeed, Alexander Mikhailovich gained particular popularity and fame as a portrait painter at the dawn of Soviet power. In those years, he created a huge number of portraits of the leaders of the revolution and their comrades, for which he was awarded titles, Stalin Prizes, and leadership positions. And it was through him that the ruling power took its most severe measures against artists who deviated from the socialist-realist direction in art.
And so it all began …
Alexander Gerasimov (1881-1963) hailed from the town of Kozlov, Tambov Gubernia, where he was born into a merchant family. This small town would remain for Alexander not only his native corner of the earth but also a refuge to which the master would escape from the capital to restore his spirits, rest, and find inspiration. There, throughout his life, he painted the canvases that moved him personally, as a man and as an artist.
Back in 1903, as a 22-year-old, he left Kozlov for Moscow to study painting. His mentors and teachers were the famous painters of the 19th century – Konstantin Korovin, Abram Arkhipov and Valentin Serov.
The start of the First World War upended the future artist's plans. In 1915 he was mobilized to the front and served for two years as a non-combatant soldier on a sanitary train evacuating the seriously wounded from combat zones. The revolution of 1917 also made its own adjustments to Gerasimov's life: he left military service and returned to Kozlov, where he worked for seven years as a decorator in the local theater.
In 1925 the artist was drawn back to the capital. He joined the ranks of revolutionary painters and produced the famous posthumous portrait of the leader, "Lenin on the podium." Needless to say, it caused a sensation among a people who had just lost their guide, and Gerasimov's reputation as a portraitist was immediately secured – although he had begun his career with still-life and landscape sketches. It should be noted that the artist had an exceptional gift for capturing a likeness without laboring over the smallest details. With broad impressionistic strokes he seemed to sculpt his subjects on canvas, achieving tremendous recognition.
Then came portraits of Joseph Vissarionovich, first from photographs and later from life, and over time the artist created the "canonical image of Stalin." He also painted portraits of the first persons of the state, and for all these services he was generously favored by the authorities. His political works were widely reproduced, bringing the artist substantial fees; for those times Gerasimov was a very wealthy man. It was he who became the first president of the USSR Academy of Arts, established in 1947.
Critics unanimously asserted that the artist's portraits were the standard of Soviet painting and that this was how the leaders of the revolution should be painted. And who in those days could argue with that? Everyone considered Gerasimov Comrade Stalin's favorite portraitist. Not a single political event in the country passed the artist by; he created picture after picture reflecting the country's life and historical events.
Yet in the early 1950s, those same critics began to present the artist in a completely new light: as a careerist and a lackey pleasing the vanity of political figures. After the death of Joseph Stalin, Gerasimov's career ladder broke down, and with the arrival of Khrushchev he fell out of favor with the new authorities. Soon the artist was gradually relieved of all his posts, his paintings were consigned to museum storerooms, and some were simply destroyed.
Nevertheless, the work of Alexander Gerasimov turned out to be far broader and more multifaceted than he is usually given credit for, and in the history of Russian painting of the Soviet era there are not many artists who left their descendants a richer and more diverse heritage. Much of what Gerasimov produced, however, was pushed into the background: the master of the ceremonial portrait could hardly advertise his private attachments.
"date": "2019-07-23T17:41:55",
"dump": "CC-MAIN-2019-30",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529481.73/warc/CC-MAIN-20190723172209-20190723194209-00536.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9808050990104675,
"score": 2.515625,
"token_count": 1010,
"url": "http://rdmonsalve.com/post66"
} |
- Submitted by: hollier1999
- Views: 1369
- Category: Science
- Date Submitted: 07/09/2010 07:02 PM
- Pages: 2
Post a 200- to 300-word response explaining why the processes of mitosis and meiosis are both important to a living organism. When would an organism need to undergo the processes of mitosis and meiosis? What would happen if meiosis did not occur?
Cells are always dying, so new cells need to replace them somehow. Mitosis is the division of existing cells to form new ones. Some cells, such as those in the hair follicles on your head, divide very fast (which is why cancer treatments such as chemotherapy and radiation can cause you to lose hair, since they target rapidly dividing cells). Meiosis involves the production of gametes, also known as eggs and sperm. If meiosis didn't happen, organisms couldn't reproduce, and if mitosis didn't occur, organisms would die because worn-out and damaged cells could never be replaced.
Mitosis is the process of the chromosomes dividing so that each new cell receives an identical set.
Steps of mitosis:
* prophase- chromosomes coil up and become visible
* metaphase- the chromosomes move toward the center
* anaphase- the chromatids are pulled apart
* telophase- a nuclear envelope forms around the chromosomes
It is essential that cells go through mitosis, so that each new cell contains the same genetic information.
Meiosis is a form of cell division that halves the number of chromosomes in the gametes (in humans, from 46 to 23). Meiosis is much like mitosis except that the cell goes through the process twice. Males end up with 4 sperm, which is spermatogenesis. Females end up with one mature egg; the other 3 products (polar bodies) die because they are too small and lack cytoplasm, which is oogenesis. It is also essential that meiosis occurs, because without it reproduction would cease.
"date": "2015-02-02T01:43:24",
"dump": "CC-MAIN-2015-06",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122220909.62/warc/CC-MAIN-20150124175700-00177-ip-10-180-212-252.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9579434394836426,
"score": 3.375,
"token_count": 406,
"url": "http://www.cyberessays.com/Term-Paper-on-Mitosis/9931/"
} |
United States Atomic Energy Commission
Seal of the United States Atomic Energy Commission
Independent agency overview
Superseding agencies: Energy Research and Development Administration (ERDA) and Nuclear Regulatory Commission (NRC)
Headquarters: Washington, D.C. (1947-1957); Germantown, Maryland (1958-1975)
The United States Atomic Energy Commission (AEC) was an agency of the United States government established after World War II by Congress to foster and control the peacetime development of atomic science and technology. President Harry S. Truman signed the McMahon/Atomic Energy Act on August 1, 1946, transferring the control of atomic energy from military to civilian hands, effective from January 1, 1947 (Public Law 585, 79th Congress). This shift gave the first members of the AEC complete control of the plants, laboratories, equipment, and personnel assembled during the war to produce the atomic bomb.
During its initial establishment and subsequent operation, the AEC played a key role in the institutional development of ecosystem ecology. Specifically, it provided crucial financial resources, allowing ecological research to take place. Perhaps even more importantly, it equipped ecologists with a wide range of groundbreaking techniques for completing their research. In the late 1950s and early 1960s, the AEC also approved funding for numerous bio-environmental projects in the Arctic and near-Arctic. These projects were designed to examine the effects of nuclear energy upon the environment and were part of the Commission’s attempt at creating peaceful applications of atomic energy.
An increasing number of critics during the 1960s charged that the AEC's regulations were insufficiently rigorous in several important areas, including radiation protection standards, nuclear reactor safety, plant siting, and environmental protection. By 1974, the AEC's regulatory programs had come under such strong attack that Congress decided to abolish the agency. The agency was abolished by the Energy Reorganization Act of 1974, which assigned its functions to two new agencies: the Energy Research and Development Administration and the Nuclear Regulatory Commission. On August 4, 1977, President Jimmy Carter signed into law The Department of Energy Organization Act of 1977, which created the Department of Energy. The new agency assumed the responsibilities of the Federal Energy Administration, the Energy Research and Development Administration, the Federal Power Commission, and programs of various other agencies.
In creating the AEC, Congress declared that atomic energy should be employed not only in the form of nuclear weapons for the nation's defense, but also to promote world peace, improve the public welfare and strengthen free competition in private enterprise. At the same time, the McMahon Act which created the AEC also gave it unprecedented powers of regulation over the entire field of nuclear science and technology. It furthermore explicitly prevented technology transfer between the United States and other countries, and required FBI investigations for all scientists or industrial contractors who wished to have access to any AEC controlled nuclear information. The signing was the culmination of long months of intensive debate among politicians, military planners and atomic scientists over the fate of this new energy source and the means by which it would be regulated. President Truman appointed David Lilienthal as the first Chairman of the AEC. Congress gave the new civilian Commission extraordinary power and considerable independence to carry out its mission. To provide the Commission exceptional freedom in hiring scientists and professionals, Commission employees were exempt from the Civil Service system. Because of the need for great security, all production facilities and nuclear reactors would be government-owned, while all technical information and research results would be under Commission control. The National Laboratory system was established from the facilities created under the Manhattan Project. Argonne National Laboratory was one of the first laboratories authorized under this legislation as a contractor-operated facility dedicated to fulfilling the new Commission's mission. The Commission's first order of business was to inspect the scattered empire of plants and laboratories to be inherited from the Army.
The AEC was furthermore in charge of developing the United States' nuclear arsenal, taking over these responsibilities from the wartime Manhattan Project. Over the course of its first decade, the AEC oversaw the operation of Los Alamos Scientific Laboratory, devoted primarily to weapons development, and, in 1952, the establishment of a second weapons laboratory in California (the Lawrence Livermore National Laboratory). It also implemented the "crash" program to develop the hydrogen bomb, and played key roles in the prosecution of the Rosenbergs for espionage. It began a program of regular nuclear testing both in the Pacific Proving Grounds and at the continental Nevada Test Site. While it also supported much basic research, the vast majority of its early budget was devoted to atomic weapons development and production.
Within the AEC, high-level scientific and technical advice was provided by the General Advisory Committee (GAC), originally headed by J. Robert Oppenheimer. In its early years, the GAC made a number of controversial decisions, notably its recommendation against building the hydrogen bomb in 1949. As a result, Senator Brien McMahon influenced the decision not to reappoint J. Robert Oppenheimer to the GAC in 1952 after his six-year statutory term expired. David Lilienthal, then AEC Chair, agreed with Oppenheimer and opposed "a 'crash' program to build the hydrogen bomb ahead of any other nation." The White House asked Lilienthal to leave the Atomic Energy Commission, and he did so on February 15, 1950. Lilienthal was one of the original members of the commission who granted Oppenheimer clearance in 1947. With Oppenheimer and Lilienthal removed, Truman announced the United States' decision to build the hydrogen bomb, and Admiral Lewis W. Strauss, a major supporter of hydrogen weapons, was later appointed AEC Chair by Eisenhower.
Regulations & Experiments
The AEC's connections with the armed services were facilitated by a Military Liaison Committee. Congressional oversight over the AEC was exercised by the Joint Committee on Atomic Energy, which had considerable power in influencing AEC decisions and policy.
The AEC's far-reaching powers and control over subject matter which had far-reaching social, public health, and military implications made it an extremely controversial organization. One of the drafters of the McMahon Act, James R. Newman, famously concluded that the bill made "the field of atomic energy [an] island of socialism in the midst of a free-enterprise economy".
Before the Nuclear Regulatory Commission (NRC) was created, nuclear regulation was the responsibility of the AEC, which Congress first established in the Atomic Energy Act of 1946. Eight years later, Congress replaced that law with the Atomic Energy Act of 1954, which for the first time made the development of commercial nuclear power possible and resolved a number of other outstanding problems in implementing the first Atomic Energy Act. The act assigned the AEC the functions of both encouraging the use of nuclear power and regulating its safety. The AEC's regulatory programs sought to ensure public health and safety from the hazards of nuclear power without imposing excessive requirements that would inhibit the growth of the industry. This was a difficult goal to achieve, especially in a new industry, and within a short time the AEC's programs stirred considerable controversy. Stephanie Cooke has written that:
"the AEC had become an oligarchy controlling all facets of the military and civilian sides of nuclear energy, promoting them and at the same time attempting to regulate them, and it had fallen down on the regulatory side ... a growing legion of critics saw too many inbuilt conflicts of interest".
The AEC had a history of involvement in experiments involving radioactive iodine. In a 1949 operation called the "Green Run," the AEC released iodine-131 and xenon-133 to the atmosphere which contaminated a 500,000-acre (2,000 km2) area containing three small towns near the Hanford site in Washington. In 1953, the AEC ran several studies on the health effects of radioactive iodine in newborns and pregnant women at the University of Iowa. Also in 1953, the AEC sponsored a study to discover if radioactive iodine affected premature babies differently from full-term babies. In the experiment, researchers from Harper Hospital in Detroit orally administered iodine-131 to 65 premature and full-term infants who weighed from 2.1 to 5.5 pounds (0.95 to 2.49 kg). In another AEC study, researchers at the University of Nebraska College of Medicine fed iodine-131 to 28 healthy infants through a gastric tube to test the concentration of iodine in the infants' thyroid glands.
Public Opinion & Abolishment of the AEC
During the 1960s and early 1970s, the Atomic Energy Commission came under fire from opposition concerned with more fundamental ecological problems such as the pollution of air and water. Environmental awareness had grown after the 1962 publication of Rachel Carson's "Silent Spring", and environmentalists began turning their attacks toward nuclear power and the government agencies that supported it. Under the Nixon Administration, environmental consciousness grew exponentially and the first Earth Day was held on April 22, 1970. Along with this rising environmental movement came a growing suspicion of the AEC and public hostility for their projects increased. In the public eye, there was a strong association between nuclear power and nuclear weapons, and even though the AEC had made a strong push in the late 1960s to portray their efforts as being geared toward peaceful uses of atomic energy, criticism grew for the agency. The AEC was still chiefly held responsible for the alleged health problems of people living near atmospheric test sites from the early 1960s, and there was a strong association of nuclear energy with the radioactive fallout from these tests. Around the same time, the AEC was also struggling with opposition to nuclear power plant siting as well as nuclear testing and experimentation. An organized push was finally made to curb the power held by the Commission, and in 1970 the AEC was forced to prepare an Environmental impact statement (EIS) for a nuclear test in northwestern Colorado as part of the initial preparation for Project Rio Blanco.
Into the mid 1970s, the United States Atomic Energy Commission and the Manhattan Project conducted other human radiation experiments. Radiation was known to be dangerous and the experiments were designed to ascertain the detailed effect of radiation on human health. In Nashville, pregnant women were given radioactive mixtures. In Cincinnati, some 200 patients were irradiated over a period of 15 years. In Chicago, 102 people received injections of strontium and cesium solutions. In Massachusetts, 74 schoolboys were fed oatmeal that contained radioactive substances. In all these cases, the subjects did not know what was going on and did not give informed consent. The government covered up most of these radiation mishaps until 1993, when President Bill Clinton ordered a change of policy. The resulting investigation was undertaken by the Advisory Committee on Human Radiation Experiments, and it uncovered much of the material included in The Plutonium Files.
In 1973, the AEC predicted that, by the turn of the century, one thousand reactors would be producing electricity for homes and businesses across the United States. But after 1973, reactor orders declined sharply as electricity demand fell and construction costs rose. Many orders and partially completed plants were cancelled.
By 1974, the AEC's regulatory programs had come under such strong attack that Congress decided to abolish the agency. Supporters and critics of nuclear power agreed that the promotional and regulatory duties of the AEC should be assigned to different agencies. The Energy Reorganization Act of 1974 put the regulatory functions of the AEC into the new NRC, which began operations on January 19, 1975 and placed the promotional functions within the Energy Research and Development Administration, which was later incorporated into the United States Department of Energy.
| Years | Chairman | President(s) served under |
|---|---|---|
| 1946–1950 | David E. Lilienthal | Harry S. Truman |
| 1950–1953 | Gordon Dean | Harry S. Truman |
| 1953–1958 | Lewis Strauss | Dwight D. Eisenhower |
| 1958–1960 | John A. McCone | Dwight D. Eisenhower |
| 1961–1971 | Glenn T. Seaborg | John F. Kennedy, Lyndon Johnson, Richard Nixon |
| 1971–1973 | James R. Schlesinger | Richard Nixon |
| 1973–1974 | Dixy Lee Ray | Richard Nixon |
Atomic Energy Commission Commissioners
- Sumner T. Pike : October 31, 1946 - December 15, 1951
- David E. Lilienthal, Chairman : November 1, 1946 - February 15, 1950
- Robert F. Bacher : November 1, 1946 - May 10, 1949
- William W. Waymack : November 5, 1946 - December 21, 1948
- Lewis L. Strauss : November 12, 1946 - April 15, 1950 ; Chairman : July 2, 1953 - June 30, 1958
- Gordon Dean : May 24, 1949 - June 30, 1953 ; Chairman : July 11, 1950 - June 30, 1953
- Henry DeWolf Smyth : May 30, 1949 - September 30, 1954
- Thomas E. Murray : May 9, 1950 - June 30, 1957
- Thomas Keith Glennan : October 2, 1950 - November 1, 1952
- Eugene M. Zuckert : February 25, 1952 - June 30, 1954
- Joseph Campbell : July 27, 1953 - November 30, 1954
- Willard F. Libby : October 5, 1954 - June 30, 1959
- John von Neumann : March 15, 1955 - February 8, 1957
- Harold S. Vance : October 31, 1955 - August 31, 1959
- John S. Graham : September 12, 1957 - June 30, 1962
- John Forrest Floberg : October 1, 1957 - June 23, 1960
- John A. McCone, Chairman : July 14, 1958 - January 20, 1961
- John H. Williams : August 13, 1959 - June 30, 1960
- Robert E. Wilson : March 22, 1960 - January 31, 1964
- Loren K. Olson : June 23, 1960 - June 30, 1962
- Glenn T. Seaborg, Chairman : March 1, 1961 - August 16, 1971
- Leland J. Haworth : April 17, 1961 - June 30, 1963
- John G. Palfrey : August 31, 1962 - June 30, 1966
- James T. Ramey : August 31, 1962 - June 30, 1973
- Gerald F. Tape : July 15, 1963 - April 30, 1969
- Mary I. Bunting : June 29, 1964 - June 30, 1965
- Wilfred E. Johnson : August 1, 1966 - June 30, 1972
- Samuel M. Nabrit : August 1, 1966 - August 1, 1967
- Francesco Costagliola : October 1, 1968 - June 30, 1969
- Theos J. Thompson : June 12, 1969 - November 25, 1970
- Clarence E. Larson : September 2, 1969 - June 30, 1974
- James R. Schlesinger, Chairman : August 17, 1971 - January 26, 1973
- William O. Doub : August 17, 1971 - August 17, 1974
- Dixy Lee Ray : August 8, 1972 ; Chairman : February 6, 1973 - January 18, 1975
- William E. Kriegsman : June 12, 1973 - January 18, 1975
- William A. Anders : August 6, 1973 - January 18, 1975
Relationship with science
For many years, the AEC provided the most conspicuous example of the benefit of atomic age technologies to biology and medicine. Shortly after the Atomic Energy Commission was established, its Division of Biology and Medicine began supporting diverse programs of research in the life sciences, mainly the fields of genetics, physiology, and ecology. Specifically concerning the AEC's relationship with the field of ecology, one of the first approved funding grants went to Eugene Odum in 1951. This grant sought to observe and document the effects of radiation emission on the environment from a recently built nuclear facility on the Savannah River in South Carolina. Odum, a professor at the University of Georgia, initially submitted a proposal requesting annual funding of $267,000, but the AEC rejected the proposal and instead offered to fund a $10,000 project to observe local animal populations and the effects of secondary succession on abandoned farmland around the nuclear plant.
In later years, the AEC began providing increased research opportunities to scientists by approving funding for ecological studies at various nuclear testing sites, most notably at Eniwetok, part of the Marshall Islands. Through its support of nuclear testing, the AEC gave ecologists a unique opportunity to study the effects of radiation on whole populations and entire ecological systems in the field. Prior to 1954, no one had investigated a complete ecosystem with the intent to measure its overall metabolism, but the AEC provided the means as well as the funding to do so. Ecological development was further spurred by environmental concerns about radioactive waste from nuclear energy and postwar atomic weapons production. In the 1950s, such concerns led the AEC to build a large ecology research group at its Oak Ridge National Laboratory, which was instrumental in the development of radioecology. A wide variety of research efforts in biology and medicine took place under the umbrella of the AEC at national laboratories and at some universities with agency sponsorship and funding. As a result of increased funding as well as the increased opportunities given to scientists and the field of ecology in general, a plethora of new techniques were developed, leading to rapid growth and expansion of the field as a whole. One of these techniques involved the use of radiation itself, namely in ecological dating and in studying the effects of stresses on the environment.
In 1969, the AEC's relationship with science and the environment was brought to the forefront of a growing public controversy that had been building since 1965. In search of an ideal location for a large-yield nuclear test, the AEC settled upon the island of Amchitka, part of the Aleutian Islands National Wildlife Refuge in Alaska. The main public concern was the choice of location, as a large colony of endangered sea otters lived in close proximity. To help defuse the issue, the AEC sought a formal agreement with the Department of the Interior and the state of Alaska to help transplant the colony of sea otters to other former habitats along the West Coast.
The AEC played a role in expanding the field of arctic ecology. From 1959 to 1962, the Commission’s interest in this type of research peaked. For the first time, extensive effort was placed by a national agency on funding bio-environmental research in the Arctic. Research took place at Cape Thompson on the northwest coast of Alaska, and was tied to an excavation proposal named Project Chariot. The excavation project was to involve a series of underground nuclear detonations that would create an artificial harbor, consisting of a channel and circular terminal basin, which would fill with water. This would have allowed for enhanced ecological research of the area in conjunction with any nuclear testing that might occur, as it essentially would have created a controlled environment where levels and patterns of radioactive fallout resulting from weapons testing could be measured. The proposal never went through, but it evidenced the AEC’s interest in Arctic research and development.
The simplicity of biotic composition and ecological processes made the arctic regions of the globe ideal locations in which to pursue ecological research, especially since at the time there was minimal human modification of the landscape. All investigations conducted by the AEC produced new data from the Arctic, but few if any of them were supported solely on that basis. While the development of ecology and other sciences was not always the primary objective of the AEC, support was often given to research in these fields indirectly, as an extension of the agency's efforts toward peaceful applications of nuclear energy.
The AEC issued a large number of technical reports through their technical information service and other channels. These had many numbering schemes, often associated with the lab from which the report was issued. AEC report numbers included AEC-AECU (unclassified), AEC-AECD (declassified), AEC-BNL (Brookhaven National Lab), AEC-HASL (Health and Safety Laboratory), AEC-HW (Hanford Works), AEC-IDO (Idaho Operations Office), AEC-LA (Los Alamos), AEC-MDCC (Manhattan District), AEC-TID, and others. Today, these reports can be found in library collections that received government documents, through the National Technical Information Service (NTIS), and through public domain digitization projects such as HathiTrust.
- Anti-nuclear movement in the United States
- Atomic bombing of Hiroshima and Nagasaki
- Harold Hodge, head of the United States Atomic Energy Commission's (AEC) Division of Pharmacology and Toxicology for the Manhattan Project, where he studied the effects of the inhalation of uranium and beryllium through the "Rochester Chamber".
- List of anti-nuclear groups in the United States
- Manhattan Project
- Kenneth Nichols (first General Manager of AEC)
- Nuclear waste
- Operation Plowshare
- Oppenheimer security hearing
- Price-Anderson Nuclear Industries Indemnity Act
- Alvin Radkowsky (Chief Scientist, Office of Naval Reactors from 1950 to 1972)
- The Cult of the Atom: The Secret Papers of the Atomic Energy Commission
- Unethical human experimentation in the United States
- United States Department of Energy
- We Almost Lost Detroit
- "U.S. Department of Energy: Germantown Site History". United States Department of Energy. Retrieved March 13, 2012.
- Moss, William; Eckhardt, Roger (1995). "The Human Plutonium Injection Experiments". Los Alamos Science. Radiation Protection and the Human Radiation Experiments (23): 177–223. Retrieved 13 November 2012.
- "The Media & Me: [The Radiation Story No One Would Touch]", Geoffrey Sea, Columbia Journalism Review, March/April 1994.
- Niehoff, Richard. 1948. "Organization and Administration of the United States Atomic Energy Commission." Public Administration Review Vol. 8, No.2, pp. 91-102.
- Hewlett, Richard G., and Oscar E. Anderson. A History of the United States Atomic Energy Commission. University Park: Pennsylvania State University Press, 1962.
- Hagen, Joel Bartholemew. An Entangled Bank: The Origins of Ecosystem Ecology. New Brunswick, N.J.: Rutgers University Press, 1992.
- Wolfe, John N. "National Agency Programs and Support of Arctic Biology in the United States: Atomic Energy Commission." BioScience 14, no. 5 (1964): 22-25.
- "Atomic Energy Commission". Nuclear Regulatory Commission. Retrieved 2009-11-16.
- Buck, Alice L. "A History of the Atomic Energy Commission" Washington, D.C.: U.S. Department of Energy, July 1983.
- Niehoff, Richard. 1948. "Organization and Administration of the United States Atomic Energy Commission." Public Administration Review Vol. 8, No.2, pp. 91-92.
- Hewlett, Richard G., and Oscar E. Anderson. The New World: A History of the United States Atomic Energy Commission. University Park: Pennsylvania State University Press, 1962.
- FBI memo, Mr. Tolson to L.B. Nichols, "Dr. J. Robert Oppenheimer, 8 Jun. 1954, FBI FOIA, http://vault.fbi.gov/rosenberg-case/robert-j.-oppenheimer/robert-j-oppenheimer-part-03-of.
- Stephanie Cooke (2009). In Mortal Hands: A Cautionary History of the Nuclear Age, Black Inc., p. 252.
- Goliszek, Andrew (2003). In The Name of Science. New York: St. Martin's Press. pp. 130–131. ISBN 978-0-312-30356-3.
- Goliszek, Andrew (2003). In The Name of Science. New York: St. Martin's Press. pp. 132–134. ISBN 978-0-312-30356-3.
- Seaborg, Glenn Theodore, and Benjamin S. Loeb. "The Atomic Energy Commission under Nixon: adjusting to troubled times." New York: St. Martin's Press, 1993. 113.
- Seaborg & Loeb. 113.
- Seaborg & Loeb. 115.
- Hacker, Barton C. Elements of Controversy: The Atomic Energy Commission and Radiation Safety in Nuclear Weapons Testing, 1947-1974. Berkeley, CA: University of California Press, 1994. 244.
- R.C. Longworth. Injected! Book review:The Plutonium Files: America's Secret Medical Experiments in the Cold War, The Bulletin of the Atomic Scientists, Nov/Dec 1999, 55(6): 58-61.
- Stephanie Cooke (2009). In Mortal Hands: A Cautionary History of the Nuclear Age, Black Inc., p. 283.
- The Atomic Energy Commission, By Alice Buck, July 1983
- Creager, Angela N.H. "Nuclear Energy in the Service of Biomedicine: The U.S. Atomic Energy Commission’s Radioisotope Program, 1946–1950." Journal of the History of Biology 39, (2006): 649-684.
- Hagen, (1992).
- Creager, 649-684.
- Hacker, 246.
- Hacker, 247.
- Wolfe, 22.
- Wolfe, 23.
- Wolfe, 25.
- HathiTrust search for "Atomic Energy Commission". Accessed May 23, 2013.
- Buck, Alice L. A History of the Atomic Energy Commission. U.S. Department of Energy, DOE/ES-0003. July 1983. PDF file
- Richard G. Hewlett; Oscar E. Anderson. The New World, 1939-1946. University Park: Pennsylvania State University Press, 1962.
- Richard G. Hewlett; Francis Duncan. Atomic Shield, 1947-1952. University Park: Pennsylvania State University Press, 1969.
- Richard G. Hewlett; Jack M. Holl. Atoms for Peace and War, 1953-1961: Eisenhower and the Atomic Energy Commission. Berkeley: University of California Press, 1989.
- U.S. Nuclear Regulatory Commission Glossary: "Atomic Energy Commission"
- Diary of T. Keith Glennan, Dwight D. Eisenhower Presidential Library
- Papers of John A. McCone, Dwight D. Eisenhower Presidential Library
- Technicalreports.org: TRAIL—Technical Report Archive and Image Library — historic technical reports from the Atomic Energy Commission (& other Federal agencies) are available here.
We all know that everyone sweats when they train or find themselves in very stressful situations. Sweating, of course, allows us to effectively regulate our body temperature. So why shouldn't we apply deodorant all over our bodies to curtail our sweat?
The reason is that there are two types of sweat glands.
What Is Sweat?
Sweat consists mainly of water (H₂O) and salt (Na⁺). That's why a sufficient level of hydration is important! If you are not properly hydrated before you train, your body will not be able to cool down and adjust its temperature appropriately. Likewise, if the fluids you lose through sweating are not replenished after training, you may become dehydrated and experience a high body temperature. To replenish lost fluids, you can drink water and home-made electrolyte drinks. You can also add salt (preferably Himalayan salt) to your meals.
The two types of sweat gland
Eccrine and apocrine are the two types of glands present in our body.
Eccrine glands are responsible for cooling us down and are found throughout the body on the skin surface. They assist in sweat evaporation, which is a very effective way of cooling us off.
Apocrine glands are found under your arms and in the groin area, where hair follicles are also concentrated. These glands are activated when your body temperature rises, but above all when you are stressed out or experience hormonal fluctuations. Sweat here is milky in color and mixes with skin bacteria to cause an unpleasant smell.
If you sweat a lot, does it mean you are training well?
The amount of sweat you produce depends on age, weight, sex, level of physical activity, environmental conditions and genetics. In addition, a fit person will often sweat more readily, because their body is more practiced at cooling itself in that manner.
How to manage sweat
1. Drink enough fluids
Many people do not drink enough fluids. Be sure to drink 30 ml per kg of body weight to start, and add 0.5 liters for light intensity, 1 liter for intermediate intensity and 1.5 liters for extreme intensity. You should drink water even when you are not thirsty. When you are thirsty, it is too late!
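As a rough illustration only (not medical advice), here is a small Python sketch of that guideline; the function name and intensity labels are mine, not from any official source:

```python
# Toy calculator for the guideline above: 30 ml per kg of body weight,
# plus an extra allowance depending on training intensity.
EXTRA_LITERS = {"light": 0.5, "intermediate": 1.0, "extreme": 1.5}

def daily_fluids_liters(weight_kg, intensity=None):
    base = 30 * weight_kg / 1000.0   # 30 ml/kg, converted to liters
    extra = EXTRA_LITERS.get(intensity, 0.0)
    return base + extra

# Example: a 70 kg person on an intermediate-intensity training day
# needs roughly 2.1 + 1.0 = 3.1 liters.
print(daily_fluids_liters(70, "intermediate"))
```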
2. Wash before and after
If possible, wash away your makeup or any lotion that might block your pores and prevent sweating. Blocking pores can cause excessive sweating in other areas.
3. Choose the right clothes
Avoid non-breathable fabrics and choose materials that dissipate sweat. If you choose suitable clothing, you will notice the difference!
What if I sweat too much?
If you think you sweat excessively, consult a doctor! It could be hyperhidrosis, and people who suffer from it sweat even though they have not done any physical activity. It is not uncommon, and up to 5% of the population suffer from it. So, don’t be ashamed!
The Integumentary System (SKIN)
The cutaneous membrane, also called the skin, coats the body externally. The skin and its derivatives, such as the sweat and oil glands, hair and nails, are collectively called the integumentary system. The cutaneous membrane is a dry membrane and is exposed to air.
Functions of the SKIN
The integument or the skin with its derivatives serves the following function:
- Covers the body.
- Protects the body from mechanical damage. This is done by insulating and cushioning the deeper body organs. Examples of mechanical damage are bumps and cuts. The cells of the uppermost layer of the skin are toughened and hardened by the keratin they contain, which helps absorb such blows. Pressure receptors in the skin send impulses to the nervous system about possible damage. These receptors alert an individual to bumps and provide a great deal of information about the external environment.
- Protects the body from chemical damage. Acids and bases, when the body is exposed to them at high levels, can cause extreme damage to the internal organs. However, because of the presence of tough keratinized cells, damage to internal organs is prevented.
- Protects the body from bacterial damage. In preventing infection, one of the most important considerations is an unbroken skin surface. When a person sweats, the skin secretes an acidic mix of urea, salt and water, inhibiting bacterial growth. Phagocytes, which ingest foreign substances and pathogens, are also located in the skin. Hence, bacterial penetration to deeper body tissues is prevented.
- Protects from ultraviolet radiation. The pigment or color of the skin depends on the presence of melanin. This melanin that is produced by the melanocytes is good at protecting the body from the damaging effects of the sunlight or UV damage.
- Protects the body from thermal damage. When the body is exposed to extreme heat or cold the heat and cold receptors located in the skin alerts the nervous system of the tissue-damaging factors. The brain, in response sends impulses to the site of damage or possible damage for the body’s compensatory mechanism.
- Protects the body from drying out. The skin’s outermost part, the epidermis, contains a waterproofing glycolipid and keratin in order to prevent water loss from the body surface.
- Regulation of heat loss and heat retention. The body must maintain a constant core temperature. Any change in environmental temperature could alter that core temperature. The skin contains a rich capillary network and sweat glands, both controlled by the nervous system. These mechanisms play an important role in regulating heat loss or retention. When the body needs to lose heat, the skin receptors alert the nervous system, which in response activates the sweat glands (sweat helps cool the body in a hot environment). Blood is also flushed into the skin capillary beds, making heat loss possible. When the body needs to retain heat, blood is NOT allowed to be flushed into the skin capillary beds. This is the main reason why, during cold weather, the palms of the hands are pale.
- Acts as a mini-excretory system. Perspiration contains urea, uric acid and salts.
- Synthesizes vitamin D. The skin produces proteins that are vital for the synthesis of vitamin D. When a person is exposed to sunlight, modified cholesterol molecules in the skin are converted to vitamin D.
"date": "2014-10-01T05:57:55",
"dump": "CC-MAIN-2014-41",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663365.9/warc/CC-MAIN-20140930004103-00272-ip-10-234-18-248.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9153469204902649,
"score": 3.84375,
"token_count": 737,
"url": "http://nursingcrib.com/anatomy-and-physiology/the-integumentary-system-skin/"
} |
In recent years environmental policymakers in the General Assembly have focused on reducing emissions of carbon dioxide (CO2) in what is, at best, a misguided attempt to thwart global warming.
In 2007 the General Assembly passed Senate Bill 3 (SB3), which requires North Carolina's electric companies to generate 7.5 percent of the electricity that they sell in the state from renewable sources, such as wind and solar energy, and to induce its customers to reduce their energy use by another 5 percent. The measures are referred to as "energy efficiency" although the term efficiency in this case simply means reduction in use.
Global warming is likely to continue as the focus of environmental policy for the foreseeable future. In 2005 the legislature created the N.C. Legislative Commission on Global Climate Change (LCGCC) to evaluate the issue and to determine what, if anything, should be done about it. From the beginning, the Commission's leaders (Rep. Joe Hackney, later replaced by Rep. Pricey Harrison, and lawyer and environmental activist John Garrou) have strongly favored imposing CO2 controls and energy use restrictions on North Carolina's industries and consumers.
The LCGCC completed its work in May 2010. Its final report recommended a laundry list of policies formulated by an environmental advocacy group, the Center for Climate Strategies, that was hired by the North Carolina Department of Environment and Natural Resources for its own commission called the Climate Action Plan Advisory Group (CAPAG).
CAPAG was formed to investigate ways to control CO2 emissions and was specifically forbidden to have any discussion regarding the science of global warming. All of its investigative approaches were developed by the Center for Climate Strategies. Most of the CAPAG recommendations are now being proposed to the legislature by the LCGCC.
Given that former LCGCC chair Rep. Joe Hackney is now Speaker of the House, it is very likely that these recommendations will be taken seriously and ultimately be introduced as legislation in the coming months and years.
"date": "2014-11-28T05:38:54",
"dump": "CC-MAIN-2014-49",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931009777.87/warc/CC-MAIN-20141125155649-00164-ip-10-235-23-156.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9658558964729309,
"score": 2.71875,
"token_count": 401,
"url": "http://www.johnlocke.org/agenda2010/climatechange.html"
} |
Terms of world history
A quiz-style application on the English abbreviations of international organizations that appear in world history (English abbreviation → Japanese full name).
Memorize the English abbreviations of international organizations that appear frequently on recent exams, quiz-style!
For each English abbreviation, just select the full name from three choices!
Preparing for university entrance exams in politics and economics, contemporary society, or world history?
Full support from the MARCH level (Meiji, Aoyama, Rikkyo, Chuo, Hosei) up to the Sophia University level!
Also useful for the general-knowledge English questions on employment exams!
Also supports students preparing for Eiken, UN Eiken and TOEIC!
Easy level ... questions frequent at the National Center Test level, and mid-level private universities
Normal level ... MARCH and other challenging private universities (recommended study level for employment exams)
Difficult level ... Sophia University level
※ Each section presents 10 or 20 questions drawn at random.
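As a rough sketch of the quiz mechanic described above (a random question with the full name chosen from three options), here is a minimal Python example; the sample entries and function are illustrative, not the app's actual data or code:

```python
# Minimal three-choice acronym quiz: pick an organization at random and
# offer its full name among three shuffled choices.
import random

ENTRIES = {  # sample data only
    "UNESCO": "United Nations Educational, Scientific and Cultural Organization",
    "IMF": "International Monetary Fund",
    "WHO": "World Health Organization",
    "NATO": "North Atlantic Treaty Organization",
}

def make_question(entries, n_choices=3):
    acronym, answer = random.choice(list(entries.items()))
    wrong = random.sample([v for v in entries.values() if v != answer],
                          n_choices - 1)
    choices = wrong + [answer]
    random.shuffle(choices)
    return acronym, choices, answer

acronym, choices, answer = make_question(ENTRIES)
print(acronym, choices)  # the player picks one; compare it to `answer`
```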
From the author
"In recent years, college admissions, total permanent problem, especially world history is becoming more and more English abbreviations of international organizations question the issue.
Also, the problem of English and only know the full name and abbreviation in English of international organizations, it is possible to solve the problem efficiently.
This application is based on the materials used in courses that total anti-Sophia we actually created a problem.
Also, even with the full name in the description. I hope this app will help your exam. "
[Organizations covered: full names, translated from the Japanese]
United Nations Educational, Scientific and Cultural Organization
International Monetary Fund
International Labour Organization
International Atomic Energy Agency
U.N. peacekeeping activity
General Agreement on Tariffs and Trade
World Trade Organization
North Atlantic Treaty Organization
Organization of Petroleum Exporting Countries
Organization of Arab Petroleum Exporting Countries
Association of Southeast Asian Nations
Asia-Pacific Economic Cooperation
Palestine Liberation Organization
Limited Test Ban Treaty
Comprehensive Nuclear-Test-Ban Treaty
Official Development Assistance
Communist Information Bureau
Council for Mutual Economic Assistance (Eastern Europe)
United Nations Children's Fund
World Health Organization
United Nations Transitional Authority in Cambodia
United Nations Conference on Trade and Development
United Nations Food and Agriculture Organization
North American Free Trade Agreement
Organization of African Unity
Newly industrialized economies
Irish Republican Army
Organisation for European Economic Cooperation
Organisation for Economic Co-operation and Development
European Coal and Steel Community
European Economic Community
European Atomic Energy Community
European Free Trade Association
Intercontinental ballistic missile
Gross domestic product
Commonwealth of Independent States
General Headquarters Supreme Commander for Allied Powers
Central Intelligence Agency
Protons are relatively large, heavy charged particles that move quickly through healthy tissue, depositing the bulk of the radiation they carry at the tumor site. This maximum energy deposit is known as the Bragg Peak. Since virtually all of a proton’s energy is focused at the tumor site, almost no radiation travels beyond the Bragg Peak, which reduces damage to adjacent tissues and organs.
Protons are most successful in treating localized cancers that have not spread to other areas. When used in the treatment of specific tumors, proton therapy may result in fewer side effects and better quality of life throughout treatment.
(Image comparison: conventional radiation therapy vs. targeted proton therapy)
Proton therapy is an extremely effective treatment option for localized tumors that have not spread to other areas. It can be used alone or in conjunction with other cancer treatments including traditional radiation, surgery, and chemotherapy. For a full list of cancers that proton therapy has been proven beneficial for, please click here.
Proton therapy at Ackerman Cancer Center is precise, accurate, and safe. The MEVION S250 system combines the most advanced proton therapy technology currently available into a compact, efficient treatment system featuring DirectDose™ beam modulating technology and Full Robotic Positioning for precise patient positioning.
Learn more about the MEVION S250 system here.
"date": "2018-06-19T21:58:26",
"dump": "CC-MAIN-2018-26",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863206.9/warc/CC-MAIN-20180619212507-20180619232507-00056.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9250751733779907,
"score": 2.96875,
"token_count": 266,
"url": "http://ackermancancercenter.com/proton-therapy/what-is-proton-therapy/"
} |
Baptism of Pocahontas
Light shines on the kneeling figure of Pocahontas, daughter of the Virginia Indian chief Powhatan, and on Reverend Alexander Whitaker, the minister officiating at her baptism into the Christian faith. It is believed that Pocahontas was the first native to convert to Christianity after the arrival of the English. The baptismal ceremony took place in Jamestown in 1613 or 1614, during which she was given the Christian name Rebecca and also revealed her secret native name, Matoaka. Her future husband, John Rolfe, is seen standing behind her. Others shown in the painting include Sir Thomas Dale, deputy governor of the colony, who stands to the left, dressed in armor. Members of Pocahontas's family are in evidence to the right: her brother Nantequaus, wearing tan robes and an elaborate headdress, turns away from the ceremony, while her uncle Opachisco, in rose-colored clothes, appears to be leaning in to listen. Another uncle, Opechancanough, remains seated and has a rather somber mien. Pocahontas's sister, wearing Indian garb and cradling her infant in her lap, watches the proceedings while sitting on the floor.
This twelve-foot by eighteen-foot oil was painted by Virginia-born artist John Gadsby Chapman. In 1837 he was commissioned to create a large historical painting for the Rotunda in the U.S. Capitol, and he chose Pocahontas and her conversion as his subject. The church in Jamestown where Pocahontas had been baptized no longer stood, so Chapman travelled in England and America to seek out similar 17th-century structures. In the midst of completing the portrait he lost two children; he also suffered under the strain of financial worries. When he finally received payment for the finished portrait he noted in a journal that the money he received was "barely equivalent to its cost" to him. The painting was installed in the Rotunda in 1840.
"date": "2015-01-28T09:11:04",
"dump": "CC-MAIN-2015-06",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122691893.34/warc/CC-MAIN-20150124180451-00153-ip-10-180-212-252.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9722104668617249,
"score": 3.765625,
"token_count": 421,
"url": "http://encyclopediavirginia.org/media_player?mets_filename=evm00002822mets.xml"
} |
"John Harrison" was a self-educated English Carpentry/carpenter and clockmaker. He invented the marine chronometer, a long-sought after device for solving the problem of establishing the East-West position or longitude of a ship at sea, thus revolutionising and extending the possibility of safe long-distance sea travel in the Age of Sail. The problem was considered so intractable, and following the Scilly naval disaster of 1707 so important, that the British Parliament offered the Longitude prize of Pound sterling/£20,000 (£/r=0}}}}). Harrison came 39th in the BBC's 2002 public poll of the 100 Greatest Britons.If you enjoy these quotes, be sure to check out other famous inventors! More John Harrison on Wikipedia. | <urn:uuid:b4351ff8-19e5-44b8-b10f-4fbd9a06e5b5> | {
"date": "2019-01-23T06:00:02",
"dump": "CC-MAIN-2019-04",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583897417.81/warc/CC-MAIN-20190123044447-20190123070447-00296.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9089934825897217,
"score": 3.234375,
"token_count": 161,
"url": "http://quotio.com/by/John-Harrison"
} |
"Our World" theme
week on "Our World" ... The health benefits of shutting down dirty
power plants ... different weight-loss diets compared ... and farmers weigh the
value of using genetically modified crops.
"And so, it was something
where some of our members would get the benefit, but everybody faced potential
risk of having customers say, 'we don't want this in wheat.'"
Better yields vs. consumer resistance ... a new drug to attack a stubborn
infection, and more. I'm Art Chimes. Welcome to VOA's science and technology
magazine, "Our World."
American and Chinese researchers have found that shutting down an old, dirty, coal-fired
power plant can reduce air pollution and significantly improve cognitive
development — the ability to think — in children.
PERERA:"This study compares children who were
exposed in-utero to pollution from coal-fired power plants with children who
were not so exposed, and it demonstrates the benefits of closing the plant on
children's development measured at age two."
Frederica Perera of Columbia University led the research team, which took advantage of a
decision by Chinese officials to close down a power plant in the city of
Tongliang, in Chongqing Municipality.
PERERA:"This power plant was shut down because
the Chinese government had ordered the closure of old, small, polluting power
plants that burned coals, and this one was on the list."
The coal-burning facility was a major source of chemicals called polycyclic
aromatic hydrocarbons, or PAHs. Earlier research has identified PAHs as toxic
materials that can cause a variety of developmental defects in young children.
To evaluate the impact of PAHs, the researchers compared two groups of children.
One group was born in 2002, before the power plant closed; the second group was
born in 2005, after it shut down.
When the babies were two years old, they were tested using the standard Gesell test
of child development. And the group born after the coal-burning plant shut down
scored higher, particularly in a measure of motor skills.
Perera says previous studies demonstrated
the researchers here were able to clearly demonstrate the benefits of reducing
pollution with compelling before-and-after data.
PERERA:"This study was unique in that it
allowed us to show the benefits of removing such a polluting source and to
demonstrate that the children in the second group actually fared better in
terms of developmental tests, particularly in the area of motor function."
China has been closing older, dirtier coal-burning power plants. But with oil prices
in record territory, coal remains a dominant fuel for generating electricity
around the world. The newest, high-tech plants do have pollution controls, but
many older plants remain in operation. As the demand for electricity continues
to increase, Dr. Frederica Perera says her research sounds a note of caution
... and, at the same time, demonstrates the benefits of cutting emissions from coal.
PERERA:"These findings do have relevance for
environmental health and energy policy worldwide since these are pollutants
that are extremely widespread from fossil fuel burning, particularly from coal,
so they are a positive message both for China and the rest of the world."
Perera's paper appeared this week in Environmental Health Perspectives.
The journal is published by the U.S. National Institute of Environmental Health Sciences.
The U.S. Senate on Wednesday passed a bill to triple U.S. spending on a program to
combat AIDS and other diseases. On Tuesday, lawmakers and AIDS experts gathered
to reflect on the successes and challenges facing PEPFAR, which is currently a
$3 billion a year program, plus the larger fight against HIV/AIDS. Eric Libby
has our report.
LIBBY:Unlike other diseases such as smallpox and
polio, HIV/AIDS is a chronic disease. Those infected with HIV may not show
symptoms of AIDS for years, allowing this killer to infiltrate a population.
Senator John Kerry, one of several speakers invited to assess the global war on
AIDS, outlined the magnitude of the crisis with some grim statistics.
KERRY:"You've got 12 million kids who have
lost one or both parents [to AIDS]. Some 30 percent of the world's orphans
today are AIDS-related orphans. The fact is that 33 million people worldwide
are still infected with HIV, and more than 2.1 million people died of AIDS last
year, more than 2.5 million will be infected this year."
LIBBY:Despite these challenges, Kerry highlighted
some of the progress made by programs like PEPFAR.
KERRY:"The good side of the story is that
we've got a program that can assist 10 million people, including five million
AIDS orphans, hopefully prevent seven million people from being infected
provide and help provide anti-retrovirals to two million people. That's a big deal."
LIBBY:Part of PEPFAR's funding supports scientific
research. Anthony Fauci, the director of the National Institute of Allergy and
Infectious Diseases, points out that there are now more FDA-approved drugs
against HIV than all other antiviral drugs combined. A study released two
months ago identified more than 30 new protein targets for anti-retroviral
therapy. At the Capitol Hill forum, Fauci described a new technology that will help:
FAUCI:"So some of the things that we have
available are now being tested in the field, such as the collection and
shipping of samples with dried blood spots — something unimaginable years ago.
Five years from now when we have even more people on therapy we are going to
want to monitor them for resistance, we want to monitor them for viral load. We
can't do it by these very complicated technologies, it has got to be simple enough
to apply in the field and that's where it's going right now."
LIBBY:Fauci said a vaccine is still a long time
away, and past failures have revealed there is still much scientists do not
know about HIV/AIDS.
While PEPFAR holds much promise and enjoys wide political support, it has been
criticized for how it allocates some of its resources, and for its promotion of
sexual abstinence as a way to slow the spread of AIDS. The congressional debate
over PEPFAR's renewal is forcing both critics and supporters of the program to
evaluate not only how important it has been in the battles against AIDS so far,
but how important it will be in the battles still to come. This is Eric Libby
Good nutrition is important for people with AIDS, or actually, pretty much everyone.
A study in this week's New England Journal of Medicine evaluates
health effects of three of the most popular diets to combat overweight and
obesity. VOA's Jessica Berman reports
that obesity has reached epidemic proportions globally and is a risk factor for
illness and death.
BERMAN:According to the World Health Organization,
1.6 billion adults are overweight or obese, putting them at risk for heart
disease, diabetes and cancer. That
number is expected to soar to 2.3 billion by 2015, owing to fast food and sedentary lifestyles.
Experts agree that getting the weight off can be life saving, but an international team
of researchers wanted to find out the long-term health effects of three of the
most popular diet plans.
They compared the standard calorie-reduction diet, the Mediterranean diet that is
high in olive oil and grains, and the popular high-protein diet in a group of
322 middle-aged, moderately obese individuals.
They found those on the high-protein diet lost the most weight at 4.7 kilograms and
kept it off, followed by those on the Mediterranean diet at 4.4 kilograms. Those on the calorie-reduction diet lost the
least amount of weight, 2.9 kilos.
Also important, according to study lead author Iris Shai of Ben Gurion University of
the Negev, were the cholesterol levels.
Shai says the high-protein dieters, who were not calorie restricted, had a 20
percent reduction in their total cholesterol levels compared to a 12 percent
reduction among low calorie dieters, whose plans included carbohydrates.
She says that could be important for a dieter with high cholesterol who has to lose weight.
SHAI:"So maybe the message here is that
carbohydrates must be much more risky than we thought and omitting them
benefits obese patients."
BERMAN:Among diabetic participants, researchers
found the Mediterranean diet did a better job in maintaining blood glucose levels.
Lawrence Cheskin runs a diet and nutrition program at Johns Hopkins University in
Baltimore, Maryland. For now, Cheskin cautions against reading too much into the results.
CHESKIN:"Even though we have studies such as
this one that we are discussing today suggesting that you can lose weight
better on a low carbohydrate diet, we have the evidence from many people in
non-Western countries that a low fat, relatively high carbohydrate diet results
in good weight control."
BERMAN:Meanwhile, Shai believes the results of her
study in the New England Journal suggest that people need to work with
their doctors to tailor a weight loss plan to their particular
medical needs. Jessica Berman, VOA News, Washington.
It's time again for our Website of the Week, when we showcase interesting and innovative online destinations. This
time, it's an animal website that shows what can happen when one person shares
his knowledge and passion with the Internet community.
HUFFMAN:"The Ultimate Ungulate website is
designed to be a resource for all the world's hoofed mammals, which are known as ungulates."
Brent Huffman is the creator of UltimateUngulate.com. It features great photos and
in-depth information about a group of animals whose scientific name may be
unfamiliar, but whose members may be living right nearby.
HUFFMAN:"Ungulates include really common
animals like cows and goats, things like deer, horses. But also rhinoceroses
and various antelopes and giraffes. They're all considered ungulates."
In fact, there are some 250 species of ungulates, and the Ultimate Ungulate
website is full of deeply-researched (and fully-referenced) information about
these often-overlooked animals ... just what you would expect from a trained zoologist.
HUFFMAN:"There's a section on taxonomy and
classification, basically where that animal fits into the general scheme of
life on earth. There's a description of that animal, including sizes. There's a
section on reproduction. Ecology and behavior. And then a bit on habitat and
distribution: where you can find the animal in the world."
Huffman has taken his interest — and his camera — on the road. He has
photographed ungulates in Africa and in zoos around North America, so the
website has lots of unique images. In fact, he says one of the reasons he
started the website was because there was so little material available about these animals.
You can learn more at UltimateUngulate.com, or get the link to this and more than 200 other
Websites of the Week from our site, voanews.com/ourworld.
MUSIC: The High Llamas — "The Goat"
You're listening to VOA's science and technology magazine, Our World. I'm Art Chimes in Washington.
Scientists predict that climate change will bring warmer temperatures, more severe storms,
and rising sea levels.
There is also expected to be more tropical disease, as tropical areas expand.
But new research published this week projects a health-related product of global
warming that we hadn't thought about:
an increase in kidney stones. Hydrologist Tom Brikowski of the University of Texas
in Dallas led the research team.
BRIKOWSKI:"And I believe we've shown pretty
sufficiently that certainly there's a significant effect, enough to make a big
difference in costs. And of course this is one of the more painful diseases
that are not fatal that are out there. So it's going to have a pretty
significant impact on the population as well."
Brikowski and his urologist colleagues looked at the link between mean annual temperature
and kidney stones. They found that in the United States, the number of people
with kidney stones could increase 30 percent in some areas, resulting in some
two million more cases a year by 2050.
So why do kidney stones increase in warmer temperatures? Brikowski explains that
kidney stones result when salts and minerals, usually calcium, solidify, or
precipitate, out of urine. That often happens when the urine is highly
concentrated. That's common when it's warm and you lose more water from your
body by perspiration than you would otherwise, and probably don't drink enough
fluids to replace it.
BRIKOWSKI:"So most likely what's going on is,
people lose additional water through perspiration and if they fail to replace
that water, then their urine volume goes down, concentration of salts goes up,
and the risk of forming kidney stones increases."
The study was an effort to quantify the increase in kidney stone disease in the
United States. Even in the U.S., information on the number of people affected
is a little sketchy because many people don't need to see a doctor. The data in
other countries is even harder to come by. But Brikowski says as other areas
get warmer, the risk is likely to increase when mean annual temperatures top 13 degrees Celsius.
BRIKOWSKI:"It seems pretty clear that this same
kind of phenomenon will take place in southern Europe, Balkan countries,
southeastern Europe, South Asia in particular, and probably will have a greater
effect just because medical care is a little more limited and more costly in
terms of GDPs of each of these countries, so a little bit of morbidity will
take place there."
"Morbidity," meaning illness. Tom Brikowski's study was published this week in the
Proceedings of the National Academy of Sciences.
Throughout the history of antibiotics, there has been a steady arms race between humans
and bacteria. We design a drug that kills them, they develop resistance to it,
and we look for another drug. Now, the microbes are fighting back with
resistance to multiple antibiotics. Scientists at a university in New York City
say they may have a new "magic bullet" ... at least for now.
TEXT:Alexander Tomasz of Rockefeller University
says we should be especially concerned about MRSA, or multi-drug resistant Staphylococcus aureus.
TOMASZ:"The mortality by MRSA infections is
surprisingly high. And also this bacteria that were formerly called the
hospital bug — you acquired them when you went to a hospital — now these same
bugs showed up in the community."
TEXT:Tomasz and colleagues found that a new
antibiotic called Ceftobiprole annihilated colonies of MRSA. Like penicillin — one of the first and still one of the most widely used antibiotic agents — Ceftobiprole binds enzymes crucial to making bacterial cell walls, ultimately
killing the bacteria. After widespread use of penicillin, the Staph bacteria
developed enzymes less likely to bind penicillin, becoming resistant to the
drug. Ceftobiprole manages to elude the bacteria's resistance and bind the
enzymes once again.
Tomasz says that Ceftobiprole proved effective even against cells that were
already highly resistant to powerful antibiotics.
TOMASZ:"A small population — let's say ten
thousand out of ten billion cells — would be really highly resistant. These may
be the hotbed from which a newer wave of resistance would come forward against
Ceftobiprole. So we deliberately went after these subpopulations and tested the
Ceftobiprole and to our delight they were wiped out. So that particular
resource which the bugs already seem to have put into reserve, they don't have anymore."
TEXT:Although Tomasz is excited about this new
potential weapon against MRSA, he cautions that bacteria are true survivors and
capable of finding a way around any drug, even Ceftobiprole. So the war against Staphylococcus aureus continues.
The research will be published in the August 2008 issue of the journal Antimicrobial
Agents and Chemotherapy and is available online now. I'm Faith Lapidus.
Food prices are rising around the world. You've probably seen that where you shop.
Basic commodities are especially hard hit: vegetable oils, grains, dairy
products and rice. Fuel costs are one big factor. Weather is another,
responsible for lower exports from some big grain-exporting countries,
including Canada and Australia.
Genetically modified wheat could help expand supply, but here in the United States, it's a
complex and controversial issue. Julie Grant has our report.
GRANT:Nearly every major U.S. crop is grown with
genetically modified seeds — corn, soybeans, cotton.
Companies take genes from other organisms and put them into corn and soybean
seeds. This alters the behavior of crops. One of the most-used traits alters crops to
withstand herbicides. So, when an herbicide is sprayed, it kills the weeds, but
the crops survive.
Wheat producers, though, said thank you, but no, to those genetically altered seeds.
Daren Coppock is chief of the National Wheat Growers Association. He says a lot of
wheat farmers didn't need the genetically altered traits being offered.
Weeds just aren't a big problem in some types of wheat.
Second, Coppock says wheat growers were worried about the export market in
Europe and Japan. In those countries, they call genetically altered crops
COPPOCK:"And so, it was something where some of
our members would get the benefit, but everybody faced potential risk of having
customers say, 'we don't want this in wheat.'"
GRANT:Since the farmers didn't want it, Coppock
says Monsanto and the other big seed companies dropped research into biotech
wheat. That was five years ago. Coppock says turning down biotech has since
proven to be a bad move for wheat growers.
Now the big biotech companies don't do as much research on how to improve wheat,
including breeding drought resistant varieties. Drought in Australia and Canada
is part of the reason there's a wheat shortage now, making prices higher.
COPPOCK:"And so the conclusion that the
industry basically has come to is, we have to do something to change the
competitiveness equation or wheat will end up being a minor crop."
GRANT:And that could mean wheat shortages in the future.
Some wheat farmers are re-considering the genetically modified seed question. They
think asking for new biotech wheat strains might kick start research on wheat.
Bakers say something needs to be done.
Lee Sanders is with the American Bakers Association.
SANDERS:"When wheat prices go up 173 percent in
one year, it certainly affects how bakers can do business. And how smaller
bakers, in particular, if they can keep their doors open."
Those rising wheat prices are being passed on to consumers.
But bakers aren't convinced biotech seeds will lower wheat prices. They're more
concerned about how their customers will respond to the idea of genetically
Shoppers in the bread aisle at this Ohio supermarket have mixed views.
3:"I don't know, it just doesn't
sound good. I mean, I don't mind paying a little bit more for bread. Everything
else is more expensive now too."
4:"If it would keep prices down,
I'd probably actually go with genetically altered wheat."
GRANT:You might not realize it, but you're already
eating lots of genetically modified foods.
The U.S. government says they're safe, so they're not labeled.
But people in many other countries are more aware and a lot more concerned about them.
Doug Gurian-Sherman is a senior scientist with the Union of Concerned Scientists. If
American wheat goes biotech, he says farmers will probably lose their export markets.
SHERMAN:"They can go elsewhere and they will go
elsewhere. They really are trying to avoid it for any kind of human food."
GRANT:Even if wheat growers can persuade Monsanto
and the others to start researching genetically modified wheat, it will take at
least five to ten years before anything is in the field.
By then, farmers say, climate change may make some places so dry that people will
need biotech wheat whether they like it or not.
For The Environment Report, I'm Julie Grant.
Support for the Environment Report comes from the Joyce Foundation, the George Gund
Foundation, and the Americana Foundation. You can get in touch with them at
"Our World" theme
That's our show for this week. If you'd like to get in touch, email us at
[email protected]. Or use the postal address —
Voice of America
Washington, DC 20237 USA
Rob Sivak edited the show. Eva Nenicka is the technical
director. Our story on a new antibiotic was written by Eric Libby. And this is
Art Chimes, inviting you to join us online at voanews.com/ourworld or on your
radio next Saturday and Sunday as we check out the latest in science and
technology ... in Our World.
"date": "2017-09-23T07:55:12",
"dump": "CC-MAIN-2017-39",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689572.75/warc/CC-MAIN-20170923070853-20170923090853-00336.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9405381679534912,
"score": 3.15625,
"token_count": 4534,
"url": "https://www.voanews.com/a/a-13-2008-07-18-voa30/403646.html"
} |
Since Peter is the only one who does not teach the Dutch language,
and Mr. Dutch does not teach any course that is taught by Karl or Mr. Painter,
it follows that Peter and Mr. Dutch are the same person and that, at the least, he teaches math.
Simon and Mr. English both teach history, and are among the three Dutch teachers.
Peter Dutch therefore has to teach chemistry in addition to math.
Because Steven is also chemistry teacher, he cannot be Mr. English or Mr. Painter,
so he must be Mr. Writer.
Since Karl and Mr. Painter are two different persons, just like Simon and Mr. English,
the names of the other two teachers are Karl English and Simon Painter.
Peter Dutch, math and chemistry
Steven Writer, Dutch and chemistry
Simon Painter, Dutch and history
Karl English, Dutch and history.
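For readers who want to double-check the deduction, here is a small Python brute-force over the surname assignments. It assumes the clues restated above are the puzzle's actual constraints, and it uses the subject sets deduced in this solution:

```python
# Brute-force check of the surname assignment, using the clues above.
from itertools import permutations

FIRST = ["Peter", "Steven", "Simon", "Karl"]
SURNAMES = ["Dutch", "Writer", "Painter", "English"]

# Subjects taught by each teacher, as deduced in the solution.
SUBJECTS = {
    "Peter":  {"math", "chemistry"},
    "Steven": {"Dutch", "chemistry"},
    "Simon":  {"Dutch", "history"},
    "Karl":   {"Dutch", "history"},
}

def consistent(assign):                  # assign: first name -> surname
    who = {s: f for f, s in assign.items()}
    # Mr. Dutch teaches no course that Karl or Mr. Painter teaches.
    if SUBJECTS[who["Dutch"]] & (SUBJECTS["Karl"] | SUBJECTS[who["Painter"]]):
        return False
    # Simon and Mr. English are different people, and Mr. English teaches history.
    if who["English"] == "Simon" or "history" not in SUBJECTS[who["English"]]:
        return False
    # Karl and Mr. Painter are different people.
    if who["Painter"] == "Karl":
        return False
    return True

for perm in permutations(SURNAMES):
    assignment = dict(zip(FIRST, perm))
    if consistent(assignment):
        print(assignment)
# Prints only: {'Peter': 'Dutch', 'Steven': 'Writer', 'Simon': 'Painter', 'Karl': 'English'}
```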
As humans, we take the care of children for granted. If you have a kid, you take care of it until it is old enough to move out and live on its own. Lots of other mammals care for their children in similar ways, teaching their offspring how to survive in the world without their parents. But this sort of parental care behavior is very rare in insects. The insects I study, the giant water bugs, have a very special form of parental care and I’ll talk about that in my next post. Today, I want to go over some of the different insects that use parental care so that you might learn a bit about the different ways that insects can care for their young.
This is a carrion beetle (also known as a burying beetle):
If you follow my blog, I’ve talked about this beetle before in my post about my mold problem in my insect collection, so it should be familiar. Carrion beetles are some of the more disgusting animals in the world, at least as far as most people are concerned, so please skip on to the next photo if you have a weak stomach.
So how exactly do carrion beetles care for their young? Let’s go through the process, keeping in mind that it works a little differently from species to species. First, the male-female pair finds a dead animal. This could be a mouse, a snake, a bird, a small opossum – anything that’s in the size range a pair of beetles can handle. Let’s say the beetle above has found a mouse. The beetle and its mate will pull all of the fur off, roll the mouse into a ball (often colorfully called “mouse balls” in entomological circles), and bury it to prepare the carrion. The pair will mate and then the female will lay her eggs near or on the carrion. When the eggs hatch, the larvae will feed on the rotting carcass and the parents often help them feed. The parents also help make the carrion last longer by eating fly larvae (maggots) that compete with their young for food. Some carrion beetles spit digestive enzymes on the carrion to keep it fresher longer and others will carry mites that provide this service for them. In fact, the parents are so busy taking care of the carrion that it requires both of them to keep molds, maggots, and other organisms from completely taking it over and depriving their children of food. If the parents are successful, the larvae will feed for several days to a few weeks and go through all of their larval instars, then drop off the carcass to pupate. At this time, the parents abandon the nest and leave their offspring to fend for themselves. So, carrion beetles care for their young from the egg stage until pupation. They are also among the very few insect species that have this sort of bi-parental (two parent) care. It is very unusual for both the father and the mother to care for the young.
A more common parental care behavior is the sort you find in the webspinners. This lovely creature is a webspinner:
Isn’t he gorgeous? These are actually rather unusual insects that aren’t common in most non-tropical (i.e. temperate) locations. Many entomologists will actually never see one of these alive in their lives! Luckily for me, Arizona just happens to be one of the places where they are very common, so I was able to get some photos of this webspinner on my back porch one afternoon.
Take a look at the forelegs and look at the tarsi, those little segments near the end of the leg. See that one big oval shaped tarsus where the arrow is pointing? This is a specialized tarsal segment. Were you curious why these are called webspinners? If so, here’s the reason: that specialized tarsal segment contains a silk gland. Webspinners are actually able to make webs! They don’t make webs like most spiders though. They make long, tube-like webs, called galleries, underground or in a food source. The galleries are where the parental care takes place. A male and female webspinner mate in a female’s gallery. The male leaves right away and the female lays her eggs in her gallery. As the nymphs hatch, they live within the gallery of their mother under her care. When they reach the adult stage, they may leave the nest to find another place to live (especially if they are males – they don’t stick around in their mother’s nest very long) or continue to live in the gallery, expanding it so that it fits more and more individuals. This sort of parental care should sound very familiar, even if you know very little about insects. If it’s not coming to you right away, I’ll give you a hint: ever see an ant farm? Webspinner galleries are a lot like ant nests and the sort of care that they exhibit is very ant-like. One female establishes a nest that can end up containing several generations of offspring. Maternal care by a single adult female is relatively common among insects, especially in the social insects like ants, bees, and wasps. But webspinners are rather different from the ants, bees, and wasps too – they don’t have one single female who produces all of the offspring in the nest. The female who establishes the gallery originally produces a second generation and might produce several more, but the other females in the nest are all able to produce their own offspring as well. So, to recap, webspinners use maternal parental care (the female parent cares for the young) and care for their offspring from the egg stage through adulthood, and even sometimes beyond! This is very different from what we saw in the carrion beetles where both parents were necessary for the survival of the offspring and care ended as soon as the larvae pupated.
Now we’ve come to the really rare parental care behavior: paternal care, or care only by the father. This sort of behavior is only known in a VERY few insects, including the golden egg bug (Phyllomorpha laciniata) and the giant water bugs. I’m going to talk about the giant water bugs in more detail in my next post, so for now, check out the photo of the golden egg bug at this link:
(I apologize for not having my own photo, but these are only found in Europe and I’ve never been there. I’m also not keen on stealing other peoples’ photos without permission.) Did you see the gold colored eggs on the back of the male in the photo? These bugs are, like SO many other insects, named after a characteristic they possess. These bugs have bright gold eggs, so they’re called golden egg bugs. So how do they care for their offspring? This species is probably just evolving their paternal care, so it’s still a bit sloppy compared to the elegant system you find in the giant water bugs, but here’s the general idea of how the system is thought to work. The eggs of these bugs have traditionally been laid on plants near the ground. However, the vast majority of the eggs left by themselves are eaten by ants. The females of this species are therefore starting to deposit their eggs on the back of other members of their own species, mostly the males, gluing them to the backs of these individuals so that they are protected from the ants until they hatch. The bugs themselves don’t like having ants on them, so they’re inclined to keep the ants away from the eggs they carry as well. Pretty neat huh? The offspring thus benefit from the selfishness of the adult that carries them. I call this a sloppy system because females basically have to ambush a mating pair to be able to lay eggs on their backs. Most golden egg bugs really don’t want to carry the eggs and will try to get away. Mating pairs have more important things going on and keep doing what they’re doing while another female lays her eggs on the male. The male bugs then carry the eggs around with them until they hatch, at which point the egg shells fall off. Golden egg bugs thus care for young only in the egg stage and then the nymphs are on their own. Unlike the carrion beetles, only one sex usually cares for the eggs, and unlike the webspinners, the males are usually the caregivers. In the giant water bugs, the other insects that use paternal parental care, the system is a little different. The male and female mate, and then the female lays her eggs in a way that ensures that the male cares for his own offspring. The females mate, lay their eggs, and leave. The male is left on his own to care for the eggs until the nymphs hatch from them.
Paternal parental care is probably the most rare form of parental care known in insects, but all of the giant water bugs observed to date use this form of parental care. Tune in next time for more information about the amazing parental care system of the giant water bugs and prepare to be dazzled and amazed!
A false-color image of a laser beam showing a superposition of entangled photons spinning in opposite directions. (Image credit: Robert Fickler / University of Vienna)
By Jesse Emspak
How fast do quantum interactions happen? Faster than light, 10,000 times faster.
That's what a team of physicists led by Juan Yin at the University of Science and Technology of China in Shanghai found in an experiment involving entangled photons, or photons that remain intimately connected, even when separated by vast distances. They wanted to see what would happen if you tried assigning a speed to what Einstein called "spooky action at a distance."
They didn't find anything unexpected, but that wasn't the point: in physics, sometimes it's good to be sure. The group published their work on arXiv.org, a preprint server for physics papers.
All tangled up
Quantum physicists have long known that after two particles — photons, for example — interact, they sometimes become "entangled." This kind of experiment has been repeated many times, and involves taking two entangled photons and sending them to different places. Perhaps photon A goes to Los Angeles and photon B goes to Boston.
When photon A is observed, it has a certain polarization, perhaps "up." The other photon in Boston is always in the opposite polarization, "down." No matter what measurement is made of photon A, photon B will always be opposite. It is impossible to tell what the polarization will be before you measure it, but the entangled photons always seem to "know" the right state to be in, instantaneously.
As Chad Orzel, assistant professor of physics at Union College, explained, "It's as though you sent two cards to two different addresses. One might be the jack of diamonds and the other the ace of hearts. When you get the card at one address you know which one went to the other. Quantum mechanics is weird because until you open the envelope, saying which card it is doesn't have any meaning; it could be either one."
Speed of quantum interaction
This is what Albert Einstein called "spooky action at a distance." And the correlation between the photons' states seems to happen instantaneously. But what does "instantaneous" really mean? That's part of what the Chinese team wanted to look at.
So the researchers entangled two photons and sent them to two different stations about 10 miles (16 kilometers) apart. In their arXiv paper, the scientists said that previous experiments had "locality loopholes," which is another way of saying that it's possible to explain the link between photons with something other than the "action at a distance."
The group measured the state of one photon and timed how long the entangled state took to show up in the other. They found that the slowest possible speed for quantum interactions is 10,000 times the speed of light — assuming your experiment is moving relatively slowly, at least relative to light beams.
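The arithmetic behind such a bound is straightforward: divide the separation between the stations by the timing uncertainty of the measurements. The Python sketch below is purely illustrative; the 16-kilometer separation is taken from this article, while the nanosecond-scale timing figures are back-calculated assumptions, not numbers quoted from the paper.

```python
# Back-of-the-envelope speed bound: v_min = separation / timing uncertainty.
C = 299_792_458.0        # speed of light, m/s
separation_m = 16_000.0  # ~10 miles between the two stations (from the article)

# Timing resolution that would yield the reported 10,000 c bound (an inference):
target_bound = 10_000 * C
dt = separation_m / target_bound
print(f"required timing resolution: {dt * 1e9:.2f} ns")  # ~5.34 ns

# Conversely, a measured timing uncertainty gives the bound directly:
measured_dt = 5e-9  # assume ~5 ns overall uncertainty (hypothetical figure)
v_min = separation_m / measured_dt
print(f"implied lower bound: {v_min / C:,.0f} times the speed of light")  # ~10,674
```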
While the result may sound like a way to send faster-than-light messages, it isn't, really, because you can't know the state of the entangled photon pair before it's measured; so there's no way to control it and make the photon at the other end take on certain states and use it like a Morse code telegraph.
This type of experiment has been done before, notably by a European team, in 2008. So why do it again? Many physics experiments are performed to check more closely the values of constants used in equations, for instance, which enable more precise measurements in other areas.
Orzel said that even if it turned out that there was some small amount of time it takes for the state of a photon to change (meaning it's not instantaneous), it isn't clear that lag would mean much for quantum physics generally. That's because there are several interpretations for why quantum phenomena happen the way they do, and all explain the experimental results equally well. Physicists aren't even certain that there's an experiment one could do to tell the difference.
He added that it is extremely unlikely that anyone will ever get an "exact" value for the speed of such quantum interactions, and, in fact, modern physics prohibits that kind of finding in principle. But it is useful to see what the limits are — to clarify what we mean when we say "instantaneous."
"There's a certain strain of physics that people that will say it has to be instantaneous – in fact, if it is faster than light it mustbe instantaneous," Orzel said. "So if you can put a limit on it that is kind of cool."
Ancient Egypt’s Images Revived
Egypt Stereoviews by Underwood & Underwood Exhibition
by Dalia Abushady, Nada Ehab and Noura Shibl
Century-old images of Ancient Egyptian monuments were displayed on Sunday at the “Egypt Stereoviews: Underwood & Underwood” event held at the American University in Cairo (AUC) and presented by the manager for Egyptology, Coptic and CEMEAC studies, Dr. Amr Omar.
Recording consists of the Director of CEMEAC studies, Mark Muehlhaeusler, introducing the event, a portion of Amr Omar’s lecture and interview, and an interview with an attendee.

Stereoviews were created in the late nineteenth century to project and display an illusion of 3D images. Elmer and Bert Underwood were among the leading publishers and distributors of the views. As Omar explained, they “developed an extensive library of images,” which had “500,000 views and stereoviews” to be observed. Photographer for Caravan Newspaper, Suhayla El-Sheikh, on the significance of images and photography, expressed: “through pictures, I can see how Egypt used to be and how it is now.”
Underwood & Underwood travelled to Egypt and took historically significant pictures, highlighting important features of Egyptian history. The lecture thus, aimed to review the images of Ancient Egyptian monuments before any modern-day excavations and reparations took place. According to AUC’s curator of university archives, photographs and cinema collections, Ola Seif, this is the first time these images are being exhibited.
Egyptologist Amr Omar pictured showing a 3D stereoview and discussing the history of stereoviews at the Underwood & Underwood exhibition.

The lecture started off with Omar handing out 3D glasses to the audience and having them look at an image of old Egypt. The point was to highlight how 3D images are viewed today, as opposed to stereographs, where people had two 2D images placed next to each other and had to look through two wooden binocular-like holes.
Images were displayed and discussed ranging from rural Egyptian views to Islamic mosques and religious heritage sites to Ancient Egyptian temples, monuments and artefacts. “The historical value of these photographs make them very, very special,” Omar stated.
Omar pointed out the role of AUC in all of this, that when Underwood & Underwood closed down in 1975, the former president of the university, Richard Pederson, bought these images and it is now available at the Rare Books and Special Collections Library.
Senior anthropology student, Dalia Habib, shared her thoughts on the display saying she enjoyed looking at the pictures and that she would like to come again to read the captions of the images. She added, on the advertising and publicity of the event, that “I think it was really well done.”
The exhibit has over 50 stereoviews displayed and will remain open until November 12, 2015.
Incest also protects royal assets. Marrying family members ensures that a king will share riches, privilege, and power only with people already his relatives. In dominant, centralized societies such as ancient Egypt or Inca Peru, this can mean limiting the mating circle to immediate family. In societies with overlapping cultures, as in second-millennium Europe, it can mean marrying extended family members from other regimes to forge alliances while keeping power among kin.
And the hazards, while real, are not absolute. Even the high rates of genetic overlap generated in the offspring of sibling unions, for instance, can create more healthy children than sick ones. And royal wealth can help offset some medical conditions; Charles II lived far better (and probably longer, dying at age 38) than he would have were he a peasant.
A king or a pharaoh can also hedge the risk of his incestuous bets by placing wagers elsewhere. He can mate, as Stanford classicist Josiah Ober notes, "with pretty much anybody he wants to." Inca ruler Huayna Capac (1493-1527), for instance, passed power not only to his son Huáscar, whose mother was Capac's wife and sister, but also to his son Atahualpa, whose mother was apparently a consort. And King Rama V of Thailand (1873-1910) sired more than 70 children—some from marriages to half sisters but most with dozens of consorts and concubines. Such a ruler could opt to funnel wealth, security, education, and even political power to many of his children, regardless of the status of the mother. A geneticist would say he was offering his genes many paths to the future.
It can all seem rather mercenary. Yet affection sometimes drives these bonds. Bingham learned that even after King Kamehameha III of Hawaii accepted Christian rule, he slept for several years with his sister, Princess Nahi'ena'ena—pleasing their elders but disturbing the missionaries. They did it, says historian Carando, because they loved each other. —David Dobbs
IPv6 Multicast in VANET
Prof. Uma Nagaraj
Ms. Deesha G. Deotale
VANET is a mobile ad-hoc network that provides communications among nearby vehicles and between vehicles and nearby fixed equipment, usually described as roadside equipment. VANET turns every participating car into a wireless router or node, allowing cars within approximately 100 to 300 meters of each other to connect and, in turn, create a network with a wide range. As cars fall out of the signal range and drop out of the network, other cars can join in, connecting vehicles to one another so that a mobile Internet is created. IPv6 support is needed in a vehicular ad hoc network (VANET) with geographical routing, but basic IPv6 protocols such as address auto-configuration assume a multicast-capable link. In this paper, the geographical information of each car within a defined geographical area is taken through the GPS system, and a graph of all the cars in the network is captured through Google Maps; the paper aims at combining IPv6 networking and C2CNet.
VANET, IPv6, Multicasting, V2V, C2C, GeoNetworking.
By now the rapid growth of the Internet and the impending shortage of Internet Protocol (IP) addresses have been well documented. Internet Protocol Version 6 (IPv6) is the next-generation protocol developed by the Internet Engineering Task Force (IETF) to replace the current addressing scheme, Internet Protocol Version 4 (IPv4). Vehicles are expected to exchange information beyond their immediate surroundings, with other vehicles and the road infrastructure. Nowadays, communications have become essential in society: everyone can get information anywhere, even while mobile, and the vehicle is another place where users stay for long periods. Intelligent Transportation Systems (ITS) are becoming more and more important technologies in our lives; they enhance safety, driving efficiency and entertainment by allowing various services such as fleet management, navigation, billing, multimedia applications and games. IPv6 is considered the most appropriate technology to support communication in ITS thanks to its extended address space, embedded security, enhanced mobility support and ease of configuration. Future vehicles will embed a number of sensors and other devices that could be IPv6 enabled.

In vehicular networks, vehicles are equipped with on-board units (OBU) to enable communication with other vehicles. Vehicle-to-vehicle ad hoc networks use multihop communication based on geographic position, which has been investigated in the GeoNet project. On the other hand, road-side units (RSU) are installed along the road. IEEE 802.11 is used to connect OBUs to each other, and OBUs to RSUs. An Application Unit (AU) is a portable or built-in device connected temporarily or permanently to the vehicle's OBU. The OBU can also be connected to the Internet with cellular networks, WiMAX, etc. These terminologies are proposed in the Car2Car Communication Consortium (C2C-CC).

For VANET, IPv6 support is needed along with geographical routing. Present IPv6 protocols (such as address auto-configuration) assume a multicast-capable link. In VANET, however, the definition of a link becomes ambiguous and it is difficult to support link-scope multicast. Artificial emulation of a multicast-capable link such as Ethernet is possible, but may cause low efficiency and high cost. Hence a new way to efficiently run IPv6 over VANET with minimal cost is needed. We present a new approach for running IPv6 in VANET with better efficiency and lower cost. Instead of emulation, we rely on geonetworking-specific features for IPv6 operation. Our solution exploits inherent location management functions to efficiently perform fundamental IPv6 protocols, i.e., Neighbor Discovery and Stateless Address Autoconfiguration. This proposed approach is implemented with the Car2Car Communication Consortium architecture as the reference system and exploits its inherent functions in order to perform IPv6 multicast operations with link-scope multicast. We first design the C2C architecture with IPv6. The main objective is to combine IPv6 networking and Car-to-Car Communication Consortium (C2C-CC) GeoNetworking capabilities into a single protocol stack for Intelligent Transportation Systems (ITS). We see in the architecture what IPv6 GeoNetworking is: what functions are to be provided, under which conditions it shall operate (e.g., communication scenarios, communication environments with or without infrastructure support) and how it shall perform (e.g., scale to a large number of vehicles).

The organization of this paper is as follows: Section II explains the design goals. Section III presents a short overview of the methodology of communication between vehicles. Section IV describes communication using IPv6 in the C2C architecture. Section V explains an IPv6 multicast overview, and Section VI explains a communication flow example. Section VII concludes the paper.
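As a concrete illustration of the Stateless Address Autoconfiguration step discussed above, the following Python sketch derives an IPv6 link-local address from an OBU's MAC address using the standard modified EUI-64 rule (RFC 4291). This is generic IPv6 behavior shown for illustration only; it is not code from the GeoNet or C2C-CC reference implementations, and the MAC address used is made up.

```python
import ipaddress

def link_local_from_mac(mac: str) -> ipaddress.IPv6Address:
    """Derive an IPv6 link-local address (fe80::/64) from a 48-bit MAC address
    via modified EUI-64: insert ff:fe in the middle and flip the U/L bit."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                          # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    interface_id = int.from_bytes(bytes(eui64), "big")
    return ipaddress.IPv6Address((0xFE80 << 112) | interface_id)

# Example with a hypothetical OBU MAC address:
print(link_local_from_mac("00:16:3e:12:34:56"))  # fe80::216:3eff:fe12:3456
```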
Islam is one of the major religions in the world. That being said, its holy book, the Quran, has been involved in various strange circumstances throughout human history. Apart from, for example, the size of the biggest Quran or how the Quran treats swearing, this article will present five more of the most unusual facts about this magnificent religious text.
Misprinting and Misinterpreting Can Cause Grave Consequences
The holy book of Islam is thought to be Allah's words. Whatever you come across in the Quran, know that it is what Allah would say and think. Misinterpreting his words is a grave sin, but misprinting the book is an even graver one. The original version is to be preserved, with no alterations here and there. However, a case of bad printing did occur, and it caused a political crisis.
In 1999, due to a badly printed edition of the Quran, the entire Kuwaiti parliament came under political pressure and was eventually dissolved. The minister for justice and Islamic affairs was accused of deliberate misprinting, of "trying to disfigure the faith of Muslims".
How to Calculate Your Heart Rate (and Why It Matters)
Measuring your heart rate lets you see how hard you’re exercising, or your exercise intensity. The weekly recommended dose of exercise is 150 minutes of moderate aerobic activity or 75 minutes of vigorous activity. If you know your heart rate while at rest and while exercising, you’ll be better able to ensure you’re meeting the recommendations for the specific intensity levels.
Calculating Heart Rate
Your heart rate measurement is the same as your pulse, which is measured in beats per minute (bpm). You can measure your heart rate manually by lightly placing your fingertips on the pulse site either on your inner wrist or neck, just below the outer edge of your jawbone.
Count the number of beats you feel for 10 seconds, and then multiply the number by six to get the rate for 60 seconds, or a full minute. Doing this before you exercise will give you your resting heart rate, which is generally between 60 and 100 bpm for the average adult.
Target Heart Rate
While exercising, you can measure your heart rate the same way you did while at rest, although a number of stationary bikes, treadmills and other equipment have gauges that can do it for you.
For exercise to be effective, you’ll want your heart rate to increase to what is known as your target heart rate as you’re exercising. Your target heart rate is between 50 percent and 85 percent of your maximum heart rate.
Maximum Heart Rate
You can find your maximum heart rate by subtracting your age from the number 220. For instance, if you’re 30 years old, your maximum heart rate would be 220 – 30, or 190. Your target heart rate, therefore, would be between 50 and 85 percent of 190.
- • 50 percent of 190 = .50 x 190 = 95 bpm
- • 85 percent of 190 = .85 x 190 = 161.5 bpm, or about 162 bpm
Moderate exercise would be at the lower end of the scale, around 95 bpm, while vigorous exercise would fall at the higher end of the scale, or around 162 bpm. It’s also important to know your heart rate during exercise so you don’t exceed your target heart rate, which could cause too much strain on your body over a prolonged period.
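For readers who prefer the same arithmetic in code, here is a small sketch. The formulas are exactly the ones described above; note that 220 minus age is a common population-level estimate, not an individualized measurement.

```python
def max_heart_rate(age):
    """Common estimate of maximum heart rate: 220 minus your age."""
    return 220 - age

def target_zone(age, low=0.50, high=0.85):
    """Target heart-rate zone: 50 to 85 percent of the estimated maximum."""
    mhr = max_heart_rate(age)
    return low * mhr, high * mhr

def bpm_from_ten_second_count(beats):
    """Manual pulse check: count beats for 10 seconds, then multiply by six."""
    return beats * 6

low, high = target_zone(30)
print(f"Age 30: max ~{max_heart_rate(30)} bpm, target zone ~{low:.0f}-{high:.0f} bpm")
# Age 30: max ~190 bpm, target zone ~95-162 bpm
print(f"12 beats in 10 seconds = {bpm_from_ten_second_count(12)} bpm")  # 72 bpm
```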
CRESS (Lepidium sativum)
Garden cress (Lepidium sativum) is a fast-growing, edible plant botanically related to watercress and mustard and sharing their peppery, tangy flavor and aroma.
In some regions garden cress is known as garden pepper cress, pepper grass or pepperwort. Garden cress is a fast-growing annual plant most typically used as a garnish or as a leaf vegetable.
Growing cress is remarkably easy. Soak cotton wool or peat moss in water and stuff it into a small pot. Sprinkle seeds on top, and keep them well watered until they start to sprout.
Keep the cress in a light area, but not in direct sunlight, and keep watering.
As it matures, you can harvest the whole young cress or let it grow to a larger size so that it will develop big, peppery leaves. The cress will be usable within about five days of planting.
Garden Cress basics
Vegetable (Cool Season) - Salad Greens
Also known as peppergrass, pepper cress or mustard cress, it is the easiest of the cresses to grow. Garden cress can be harvested in as little as two weeks after sowing as a micro-green. Its peppery taste adds zing to salads, but hot weather makes this cool-season crop bitter and inedible.
Full sun to part shade. Prefers part shade during hot summer weather.
Requires well-drained soil. Prefers moist, fertile soil with high organic matter and pH 6.0 to 6.7
How to plant:
Propagate by seed. Germination temperature: 55 F to 75 F. Days to emergence: 2 to 7 - In early spring when soils are cold (~45 F), germination may take two weeks. Seed can be saved 5 years.
Plant in early spring as soon as you can work the soil. Broadcast seed and cover very lightly with soil or compost. A small patch (1- to 2-feet square) provides plenty of cress. Make succession plantings every 2 to 3 weeks until weather warms. Start planting fall crops when weather cools in late summer.
To produce an optimum crop Cress should be planted when the moon is in the 2nd Quarter (i.e. waxing) and in one of the following Zodiac Signs: Cancer, Scorpio, Pisces, Libra, Taurus
The Cress species
Cress is a blanket name for a number of related peppery greens in the mustard family. These greens are used in herb mixes, salads, and compound butter, among other things. Many cress species are very easy to grow, and they make great decorative garden plants in addition to a source of food.
Greengrocers and some markets may also carry cress, although it can be extremely perishable, so it should be purchased only when it is needed.
Several different plants are considered cress, including watercress and penny cress. One species, Lepidium sativum, is more heavily cultivated than others. This cress species is also called garden cress, pepper cress, pepperwort, or garden pepper cress. As the names imply, this plant has a biting, sharp flavor which is quite distinctive and rather piquant. Some people also call Nasturtiums cress.
Nutrition and culinary uses of Cress
As a general rule, all of the parts of a cress plant are edible. Most people use the leaves, since they are packed with iron, calcium, folic acid, and vitamins C and A.
The stems, flowers, and seeds of the plants are also edible, however. In some cases, cooks use entire immature seedlings for a unique flavor, look, and texture. Typically, cress is used in relatively small amounts, because the peppery flavor can get overwhelming.
Especially in the Old World, cress is a very common inclusion in salads and sandwiches, since the unique and zesty flavor makes a dish more lively.
Cress as a medicine
Garden cress is also used as a medicine in India in the system of ayurveda. It is used to prevent postnatal complications; the seeds of this plant perform as an abarsipent (i.e. prevention of post-natal complications) when boiled with milk.
Vitamins and minerals provided by Cress
Garden cress is found to contain significant amounts of iron, calcium and folic acid, in addition to vitamins A and C.
Garden cress produces an orange flower suitable for decorative use and also produces fruits which, when immature, are very much like caper berries.
Preservation and care of Cress
When selecting cress, look for firm, evenly colored, rich green specimens. Avoid cress with any signs of slime, wilting, or discoloration. The cress can be stored under refrigeration in plastic for up to five days.
To prolong the life of the cress a bit, stick the stems in a water filled glass and bag the glass, refrigerating the cress until it is needed. Leave the greens on the stems until they are ready to be used and wash the cress before use to remove residual dirt and other materials.
You can intercrop cress with carrots or radishes, or mix it with other salad green crops. Keep cress well watered and provide shade when weather warms. Cover with fabric row covers if flea beetles or other pests are a problem.
Flea beetles - Use row covers to help protect plants from early damage. Put in place at planting and remove when temperatures get too hot. Control weeds.
In the following test question, which version is the most correct?
I miss her very much, almost every minute of the day I think of her,
or I think I ___ her.
A) am hearing
B) hear
C) have heard
D) will hear
I think the most likely answers are either (A) or (B). In this example, what kind of meaning of the verb "hear" is expressed: state or action? Can both (A) and (B) be correct answers?
Thanks a lot...
Making Government Better Through Open Science: Real-life Examples of Truly Smarter Cities
The growth of the "Smart City" movement promises improvements in efficiency and quality of life for people who live and visit cities. It also raises questions on the ethical uses of data, privacy protection, and responsible uses of technology. Tom Schenk Jr. will discuss how adopting open science principles can help smart cities thrive by providing transparency and also leveraging a large community of researchers and citizen scientists. Schenk is a researcher and author on applying technology, data, and analytics to make better decisions. He’s currently the director of analytics at KPMG, where he leads the smart city and government analytics practice. He served previously as Chief Data Officer for the City of Chicago, led education research for the State of Iowa, and has held a variety of positions within academia.
Tom Schenk Jr is the co-founder of the Civic Analytics Network at Harvard University’s Ash Center for Democratic Governance and Innovation. He is also the current co-organizer of the Chicago Data Visualization Group.
He has authored several publications, including a book on data visualization, book chapters on education research, and academic articles on a variety of subjects. His work has been featured in The Economist and the Wall Street Journal, and he has appeared in television programs on PBS NewsHour and the National Geographic Channel.
Crafts are not meant only for adults. Even kids can express their imagination and creativity through crafts. There are special crafts intended for young ones; such crafts are easier to make and more enjoyable for children to create. There are several ideas you can go for when choosing materials for babysitter crafts. Animal crafts, including mammals, birds and fishes, are great craft ideas for little children. Most children are fascinated by animals, so they will surely be interested in making animal art. Kids can try out different things as long as they have the materials to do so. Here are some useful tips that parents should bear in mind when helping their kids make creative crafts.
When buying paints, you must make sure that they are lead-free. Lead can be dangerous to your children's health if they unintentionally swallow it in some way. Even if paints with lead are not immediately harmful, it is still much better to stay away from them, especially if your children are still very young. It is also wise to choose paints that do not spill easily; you would have a hard time cleaning the area if paint residues were everywhere. Keep your kids' craft materials clean and safe at all times. If you do not yet know your kids' interests, it would be better to let them join you in picking the craft materials. Your children might not be interested in all sorts of things, so it is more practical to let them choose the materials themselves. The best thing you can do as a parent is encourage your kids to create art that they really like. Help them unleash the creativity in them. Children may express their feelings and imaginations through paintings, crafts and many other forms of artwork. You can even teach them to create items that can be profitable, including bracelets and treats.
It is always best for parents to accompany their kids in creating their crafts. Kids become more confident in expressing their ideas when they have their parents with them. Parents can initiate the craft idea and let their children finish the entire thing. An interesting idea you can try with your kids is to create a family photo: after getting a happy family photo with your kids, you can purchase items to decorate the picture frame. You can hang the portrait on your wall or anywhere in the house; your kids will surely be proud of what they made. If you have enough money, you can teach your children to create artworks and pay them for the results. This will provide motivation for your children to do better and to make more meaningful artworks. Your kids can use the profit to buy additional materials for future crafts. Not only will your kids improve at creating their own masterpieces, but it will also help them improve their skills in business and negotiation.
Kids' crafts are definitely a good starting point for young artists. Any child out there can become a future artist, and it is the parents' responsibility to hone that skill during the childhood years.
Spinal Cord Injury
Spinal-cord injuries result from external trauma causing damage to the cord. The following symptoms vary, depending on the site and extent of the spinal-cord damage:
- Loss of movement and sensation in affected arms or legs.
- Loss of urinary and bowel control.
- Loss of normal blood pressure.
- Loss of body-temperature control.
- Impaired sexual function.
Spinal cord injury patients are classified according to the level of their spinal injury.
Aiming to instill a love for the Arabic language in our students' hearts, at Land of Learning we offer Arabic as part of our varied education.
Learning another language provides many beneficial experiences for children. These range from improving their literacy skills to developing self-esteem and widening cultural awareness. It develops speaking and listening skills and helps to lay the foundation for future study.
From Foundation Stage 2 to Year 2, our students focus on writing and reading Arabic and acquiring relevant vocabulary in a fun and engaging manner. Students in Year 3 are introduced to grammar and additional vocabulary. Students focus on using the language rather than simple memorisation. In all four areas (reading, writing, listening and speaking) students will begin to form simple sentences and phrases.
From Years 4 to 6, students will continue to work on grammar and extend their vocabulary, giving them the tools to engage in conversations, ask and answer questions, speak in sentences, read and show understanding of words, phrases and simple writing. They will write phrases from memory, and adapt these to create new sentences, to express ideas clearly. The students will describe people, places, things and actions orally and in writing. They will understand basic grammar appropriate to the language being studied, including (where relevant): feminine, masculine and neuter forms and the conjugation of high-frequency verbs; key features and patterns of the language; how to apply these, for instance, to build sentences; and how these differ from or are similar to English. (Based on the MFL curriculum.)
- F2: Recognising the Arabic letters in both their isolated and joined forms, and recognising them in written and oral form.
- Year 1: Basic reading and writing, simple vocabulary e.g. numbers 1-10, colours, days of the week
- Year 2: Forming simple phrases, greetings and sentences orally and written e.g. asking and answering simple questions
- Year 3: Forming more complex sentences in Arabic. To read, write and understand simple texts incorporating:
- Pronouns (Personal & Possessive)
- Adjectives e.g. Describing, people and animals.
- Year 4: Reading and writing more advanced texts incorporating all of the above and conjugation of verbs in the past tenses. E.g. ‘Going to school’, ‘My daily routine’
- Year 5: Texts using conjugation of verbs in all three tenses are studied. Pupils build on their writing to write their own passages to include greater use of connectives and topic variation. E.g. Around town, Going shopping, At home etc.
- Year 6: Expand on vocabulary and sentence structure for further dialogues, conversations and complex texts e.g. At the restaurant, At the hospital
To support our students and ensure they learn and enjoy Arabic, we use many different resources, from dual-language stories, Arabic-only stories and picture books to CDs, the internet, and educational games and activities. Lessons are planned taking into consideration that each student will be at a different level. We conduct assessments to track that students are making progress.
A map of South America showing Argentina.
country in S South America: 1,073,518 sq mi (2,780,400 sq km); pop. 32,616,000; cap. Buenos Aires
A country of southeast South America stretching about 3,700 km (2,300 mi) from its border with Bolivia to southern Tierra del Fuego. The region was sparsely populated by indigenous peoples before the Spanish founded settlements there in the early 1500s. In 1776 Spain established a viceroyalty in present-day Argentina, Uruguay, Paraguay, and Bolivia. Argentina achieved its independence from Spain in 1816. Buenos Aires is the capital and the largest city.
- Ar′gen·tine′, Ar′gen·tin′e·an
- A country in South America, called officially The Argentine Republic. Capital: Buenos Aires.
From Latin argentum (“silver”) + the feminine of the adjectival suffix -īnus, in reference to the Río de la Plata.
LiDAR mapping technology utilizes airborne lasers to measure distance between the laser scanner and the ground and/or objects on the ground. With this technology, a laser scanner “paints” surfaces below with millions of laser beams and measures the time of the “bounce back” to determine distance from the aircraft to the ground or the object.
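The range calculation underneath is simple time-of-flight: the pulse travels to the surface and back, so the distance is half the round-trip time multiplied by the speed of light. A minimal sketch of that relationship (generic physics, not vendor-specific processing):

```python
C = 299_792_458.0  # speed of light, m/s

def lidar_range(round_trip_seconds):
    """Time-of-flight ranging: the pulse covers the distance twice,
    so range = c * t / 2."""
    return C * round_trip_seconds / 2.0

# A return detected 6.67 microseconds after emission is roughly 1 km away:
print(f"{lidar_range(6.67e-6):.1f} m")  # ~999.9 m
```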
Tuck Mapping Solutions has aerial photography from multiple sources. For our aerial mapping projects, we will collect the color or black and white imagery using one of our Leica RC30 cameras. When we collect imagery from our helicopters, we will use the Leica RCD30 4-band digital camera or one of our DSS digital cameras.
All aerial mapping projects are planned with ground control layouts. This layout considers the scale and contour interval of the project, the use for the mapping, the terrain to be controlled, access to the location of the control points, the availability of photo identification points, and the control that is already available in the project area.
Lesson 2 of 7
Objective: SWBAT identify and draw rotations of polygons on the coordinate plane.
Do Now: In this Do Now, students are asked to first try to identify a rotation using a Yankee symbol. Teachers who do not live in New York may want to change this to another sports symbol. The second question asks students to review reflecting about the x- and y-axes. This is a great time for teachers to review key vocabulary from the previous lesson, like pre-image, image and transformation.
Teachers can also review the agenda and objective for this lesson.
Introduction to New Vocabulary:
Before starting the middle of the lesson, teachers can review key vocabulary for this lesson, counterclockwise and clockwise as well as a movement (or rotation) of 90 degrees. The graphics of the cats will help to review this topic with students.
Lesson End + Homework
After completing in class examples, students should be encouraged to work in pairs or small groups on practice questions and a worksheet with examples, which can be found in student notes. Teachers can circulate and answer student questions. After giving students 15-20 minutes to work on this assignment, teachers can ask students to put their work on the board and then review these questions with the entire class. This is a great opportunity for teachers to reinforce vocabulary and also ask students some important summary questions like,
- How are rotations different from reflections?
- How do rotations change a pre-image?
- What transformations have we covered so far? Which have been the most challenging for you? Why?
- What is the definition of transformation?
The Exit Ticket for this lesson reviews how to complete a rotation for CCW 90 and CCW 180. Please find the enclosed resources.
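For teachers who want to generate answer keys quickly, the rotation rules in this lesson reduce to coordinate swaps with sign changes: CCW 90 sends (x, y) to (-y, x) and CCW 180 sends (x, y) to (-x, -y). Below is a small Python sketch (rotations about the origin only; the triangle is an example, not one of the worksheet problems):

```python
def rotate_point(x, y, quarter_turns_ccw):
    """Rotate (x, y) about the origin in 90-degree counterclockwise steps:
    1 turn: (x, y) -> (-y, x); 2 turns: (-x, -y); 3 turns: (y, -x)."""
    for _ in range(quarter_turns_ccw % 4):
        x, y = -y, x
    return x, y

def rotate_polygon(vertices, quarter_turns_ccw):
    """Rotate every vertex of a polygon given as a list of (x, y) points."""
    return [rotate_point(x, y, quarter_turns_ccw) for x, y in vertices]

triangle = [(1, 2), (4, 2), (4, 5)]  # pre-image
print(rotate_polygon(triangle, 1))   # CCW 90:  [(-2, 1), (-2, 4), (-5, 4)]
print(rotate_polygon(triangle, 2))   # CCW 180: [(-1, -2), (-4, -2), (-4, -5)]
```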
Feature Image Credits: Wikipedia
History marks countless events, including wars, discoveries, political developments and many others. Let's find out a few of the important events that occurred between December 21st and December 27th.
Here’s a synopsis of historical events for this week beginning December 21st.
Here are a few important historical events, listed chronologically, that occurred on December 21.
1832: Egyptian forces defeat Ottoman troops at the Battle of Konya during ‘Egyptian – Ottoman War’.
1846: Anesthesia is used for the first time in Britain during an operation at University College Hospital in London by Robert Liston.
1898: Scientists Pierre and Marie Curie discover radium.
1911: Establishment of Central Bank of India.
1923: United Kingdom and Nepal sign the Nepal – Britain Treaty, an agreement of friendship.
1941: World War II- A formal treaty of alliance is signed between Thailand and Japan.
1952: Saifuddin Kitchlu receives the USSR's Lenin Peace Prize. He was the first Indian to receive this award.
1968: Apollo 8, the first crewed mission to orbit the Moon, is launched.
1972: East and West Germany establish diplomatic ties after almost two decades of Cold War.
1988: Pan Am Flight 103 from London to New York explodes midair over Lockerbie, Scotland, killing all 243 passengers, 16 crew members and 11 residents on the ground.
1995: The city of Bethlehem passes from Israeli to Palestinian control.
1998: The Viswabharati University in Calcutta (now Kolkata) presents the ‘Deshikottama’ award to Nobel Laureate Professor Amartya Sen.
Here are a few important historical events, listed chronologically, that occurred on December 22.
1135: Stephen of Blois becomes King of England.
1769: The Sino – Burmese War ends with an uneasy truce.
1807: US Congress passes the Embargo Act, which halts all trading completely.
1851: India’s first freight train is operated in Roorkee (a city in Haridwar district, Uttarakhand), India.
1885: Ito Hirobumi became the first Prime Minister of Japan.
1945: The United States recognises Tito’s Government in Yugoslavia.
1998: The Patents Bill gets Rajya Sabha approval.
2010: US President Barack Obama signs a law officially repealing the policy of ‘Don’t ask, don’t tell’; the new law permits homosexuals to serve openly in the US military.
Here are a few important historical events, listed chronologically, that occurred on December 23.
Kisan Diwas is observed on December 23 in India to celebrate the birthday of Chaudhary Charan Singh, the Prime Minister of the Republic of India from 1979 to 1980.
1913: The US Congress passed the Federal Reserve Act establishing the Federal Reserve System to serve as the nation’s central bank.
1919: Great Britain institutes a new Constitution for India.
1947: The transistor was invented by John Bardeen, Walter Brattain and William Shockley at Bell Laboratories. They shared the Nobel Prize for their invention.
1964: India and Ceylon hit by cyclone, about 4,850 killed.
1986: Dick Rutan and Jeana Yeager set a new world record of 216 hours of continuous flight around the world without refuelling. Their aircraft Voyager traveled 24,986 miles at a speed of about 115 miles per hour.
2000: The Central Government of India gives a green signal to the West Bengal Government's proposal to rename Calcutta as Kolkata.
Here are a few important historical events, listed chronologically, that occurred on December 24.
1814: The Treaty of Ghent is signed ending the War of 1812.
The War of 1812 was a military conflict between the United States of America and the United Kingdom of Great Britain and Ireland, its North American colonies and its Native American allies. The war lasted for two and a half years.
1894: The first medical conference was held in Calcutta (now Kolkata), India.
1906: Reginald Fessenden transmits the first radio broadcast consisting of a poetry reading, a violin solo and a speech.
1921: Rabindranath Tagore established ‘Visva Bharati’ at Santiniketan, West Bengal.
1924: Albania becomes a republic.
1951: Libya becomes independent from Italy. Idris I is proclaimed King of Libya.
1968: The crew of Apollo 8 enters into orbit around the Moon, becoming the first humans to do so.
1969: The oil company Philips Petroleum made the first oil discovery in the Norwegian sector of the North Sea.
1973: District of Columbia Home Rule Act is passed. This act allowed residents of Washington DC to elect their own local government.
1974: Cyclone Tracy devastates Darwin, Northern Territory, Australia. The cyclone destroyed more than 70 per cent of the city’s buildings.
1999: Indian Airlines Flight 814 hijacked in Indian airspace between Kathmandu, Nepal and Delhi. The aircraft eventually landed at Kandahar, Afghanistan. The ordeal ended on December 31 with the release of 190 survivors.
2000: Viswanathan Anand becomes the first Asian to win a world chess title in the world chess championship in Tehran (city in Iran).
Here are a few important historical events, listed chronologically, that occurred on December 25.
December 25th is celebrated as Christmas Day, commemorating the birth of Jesus Christ.
1066: William the Conqueror was crowned the King of England after he had invaded England from France, defeated and killed King Harold at the Battle of Hastings, then marched on London.
1926: Hirohito became the Emperor of Japan.
1944: Winston Churchill, then Prime Minister of Britain, goes to Athens to seek an end to the Greek Civil War.
Here are a few important historical events, listed chronologically, that occurred on December 26.
1805: Austria and France sign the Treaty of Pressburg. The Treaty was signed as a consequence of the French victories over the Austrians at Ulm and Austerlitz.
1862: American Civil War - The Battle of Chickasaw Bayou, also called the Battle of Walnut Hills, started.
1953: The United States announces the withdrawal of two divisions from Korea.
1976: The Communist Party of Nepal was founded.
1986: The Consumer Rights Act was passed in India.
1991: The Supreme Soviet of the Soviet Union formally dissolved the Soviet Union.
2004: Around 230,000 people lost their lives and 1.5 million were left homeless after a 9.3-magnitude earthquake on the seafloor of the Indian Ocean set off a series of giant tsunami waves that smashed into the shorelines of countries like Indonesia, Sri Lanka, Thailand and Somalia.
Here are a few important historical events, listed chronologically, that occurred on December 27.
1831: Charles Darwin set out from Plymouth, England on his five-year global scientific expedition.
1945: The International Monetary Fund was established in Washington DC.
1947: The new Italian Constitution is promulgated in Rome.
1949: The Dutch transferred sovereignty of Indonesia to the new United States of Indonesia.
1950: The United States and Spain resume relations for the first time since the Spanish Civil War of the 1930s.
1968: Apollo 8 returns to Earth.
2001: China received permanent normal trade relations with the US.
2007: Former Pakistani Prime Minister Benazir Bhutto was assassinated.
Muses and their role in the progress of art!
Muses have always been a huge influence in the history of art. It all started with Greek myth, in which the Muses are the daughters of Zeus and the Titan Mnemosyne, born after the couple slept together for nine consecutive nights. In Greek mythology, the nine Muses are goddesses of the various arts such as music, dance, and poetry, blessed not only with wonderful artistic talents themselves but also with great beauty, grace, and allure. Their gifts of song, dance, and joy helped the gods and mankind to forget their troubles and inspired musicians and writers to reach ever greater artistic and intellectual heights.
That is the history; if we talk about the modern muse, her influence on art has been even stronger. For the sake of art, and for the revival of both historical and modern muses, the muse remains deeply important to the modern artist. An artist who becomes deeply involved with a modern muse often ends up falling in love with her. This love is certainly not sexual; rather, it is adoration. The artist must adore his muses to bring out that other dimension and show them in a new form of ultimate beauty.
One such artist is Shuska, who is highly inspired by muses and loves to portray them with his own creativity. Shuska has achieved this with collaborations, his inventions and his secrets. Concealed deep within his work are explosive ideas, words and energy wrapped in mystery. Shuska sets his sights on artistic immortality. If you look at Shuska's muse-based work, you will certainly be impressed by the artist's talent and skill. Here are some examples.
You can see more of Shuska’s art here. | <urn:uuid:7de07d9c-2c1a-410d-b671-e76c37f16c72> | {
"date": "2018-01-17T19:56:14",
"dump": "CC-MAIN-2018-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886964.22/warc/CC-MAIN-20180117193009-20180117213009-00656.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9735047221183777,
"score": 2.65625,
"token_count": 367,
"url": "http://www.pmlngroup.com/muses-and-their-role-in-the-progress-of-art/"
} |
New England Historic Genealogical Society - Founded 1845
DNA Banking for Medical Information
Edwin M. Knights Jr., M.D.
Although we are grouping mental and neurological diseases together, the fact that there’s overlapping symptomatology is deceptive because we’re dealing with a diversity of diseases, many of which are very common. Just how common mental illness is depends upon who did the survey, when it was done and what criteria are used. Some of the results tax credibility. In 1962 it was reported that 81.5 percent of Manhattan’s residents showed signs of mental illness. Were the epidemiologists from New Jersey or Connecticut? The figures seem somewhat skewed!
Some studies have suggested that one person out of four has significant mental disease. A suggested test: Think of your three best friends. If all of them are perfectly normal -- well, how about you?!
Tracing mental disease in your family can be difficult, unless your relatives were institutionalized. Brief episodes of balmy behavior are so common that investigators tend to confirm the diagnosis by looking for clustered signs and symptoms accompanied by inability to function. Ignore data collected from all teenagers and avid sports fans, especially basketball fans in March. Symptomatology varies with the sport. Baseball fans can undergo wide mood swings on a daily basis. With football fans it's a seasonal weekly phenomenon, and whether it is bipolar or unipolar has a positive geographic correlation. Devotees of curling exhibit remarkable mental stability, rarely demonstrating any emotion except for repetitive sweeping with brooms.
Temporary aberrations in mental stability overshadow the reality that around 15 to 20 percent of the overall population suffers from a recognized mental illness at any given time, though these figures do include a significant number of addictive problems. These mental illnesses are not only common but serious enough to impair thinking, cause emotional instability, stifle motivation, and pose major obstacles to normal social interaction for many children and adults. We will later look at mental retardation, the product of over 350 inborn errors of metabolism which lead to conditions such as Down syndrome and Fragile X.
Molecular Genetics of Mental Disorders
The U.S. Public Health Service set a high priority on studying the molecular genetics of mental disorders, and the National Institutes of Mental Health (NIMH) are sponsoring research projects employing the latest available diagnostic and analytic procedures on samples large enough to yield statistically significant data, in order to identify genomic regions which might harbor loci increasing susceptibility to schizophrenia, bipolar disorder, or early-onset recurrent unipolar depression. They also are conducting research to find the chromosomal locations of these loci. Their goal is to use this genetic information to improve diagnosis, treatment and prevention of diseases. It's not easy, because of the complex interrelationship between genetic and non-genetic contributions to these illnesses. Although familial occurrence of some of the diseases has been recognized for some time, the transmission is usually not of simple Mendelian type, and identification of loci conferring susceptibility or otherwise influencing the clinical course hasn't advanced rapidly. After identifying the appropriate genetic loci, it will still be necessary to devise means of modifying or neutralizing them to achieve the desired effects. We will review some of the common entities and see what progress is being made. Studies involve finding out how many loci are involved in heritability, how dissimilar they are, and how they all interact. Already, some promising sites have turned out to have little influence, inconsistent effects, and even false positive results.
There are about seven types of anxiety disorders and many varieties of mood disorders, including manic-depressive (now called unipolar and bipolar), schizophrenia, anorexia and others. Among children and adolescents aged 9 to 17, the distribution is: 13 percent with anxiety, 6.2 percent with mood disorders, 10.3 percent with disruptive disorders, and perhaps 2 percent with associated substance abuse. Direct costs in the U.S. in 1990 were estimated at $78.6 billion. The NIMH are focusing DNA research on these mental diseases. Diagnostic information and blood samples are sought from affected persons and their relatives. Where possible, cell lines are established with transformed cells so that unlimited quantities of DNA can be made available for research. The research protocols are of three types:
Linkage studies are designed to find genes which control susceptibility to mental disorders.
Linkage-disequilibrium studies are done on isolated populations with genealogical evidence that one or more founding members transmitted the gene. Concentrating efforts on a limited group makes success in locating susceptibility genes much more likely.
Association studies are done, in which the investigator seeks a version of a gene which varies between affected and non-affected individuals. After locating the genetic anomalies, they can be further tested and identified, with the eventual hope of using them for early diagnosis and perhaps "tailored" treatment of a disorder.
Because of its strong hereditary influence, one of the conditions already studied extensively is attention-deficit hyperactivity disorder (ADHD). Appearing early in childhood, symptoms include inattention, impulsive behavior and hyperactivity. Because other conditions can mimic this disorder, it can be difficult to diagnose. There is an increased rate of occurrence in first degree relatives. The genetic pattern in ADHD suggests the dominant single major locus type of transmission of classical Mendelian inheritance. Confusion results because the symptoms are not always full-blown. Geneticists explain this as representing "an incompletely penetrant dominant or additive autosomal single major locus." As a result, only 46 percent of boys and 31 percent of girls with the ADHD gene will appear to have this disorder.
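The arithmetic behind that incomplete penetrance is easy to make concrete. Below is a minimal illustrative sketch in Python; the 50 percent transmission probability is the textbook expectation for a heterozygous parent carrying a dominant allele, and the penetrance figures are the ones quoted above (everything else is assumed purely for illustration).

```python
# Chance that a child of a parent carrying a dominant ADHD-susceptibility
# allele actually shows the disorder: the child must inherit the allele
# (probability 0.5 from a heterozygous parent) AND the allele must be
# expressed (penetrance), which differs by sex.
P_INHERIT = 0.5                              # Mendelian transmission
PENETRANCE = {"boy": 0.46, "girl": 0.31}     # figures quoted above

def risk_of_showing_adhd(sex: str) -> float:
    """Probability that a child of a carrier parent shows ADHD symptoms."""
    return P_INHERIT * PENETRANCE[sex]

for sex in ("boy", "girl"):
    print(f"{sex}: {risk_of_showing_adhd(sex):.1%}")
# boy: 23.0%, girl: 15.5% -- carriers who inherit the allele but never show
# symptoms are exactly what makes the pattern look non-Mendelian.
```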
Autism is a childhood disease presenting with mild to severe symptoms, which include impaired ability to communicate, either verbally or nonverbally, and impaired social interaction, in which the child seems unaware of the feelings of others and has no appropriate social responses. Body movements may be repetitive, and there is inappropriate preoccupation with objects or routines. Males seem more involved than females. Inheritance of autism is not sharply defined because it's hard to separate genetic from environmental causes. The complex transmission probably includes many genes, each having a relatively weak effect.
Asperger syndrome is considered a form of childhood autism, with many similar features. The patients are of normal or above-normal intelligence but exhibit impaired social interaction, repetitive behavior patterns, inflexible adherence to routines and preoccupation with certain activities. Links have been found to specific chromosomes, and X-linked varieties have been described. A study of 38 Finnish families found 1/3 of probands had related conditions in first degree relatives.
Long known as "manic-depressive" disorder, it is now divided into two varieties: bipolar, with mood swings between manic and depressive states, and unipolar, when the sole symptom is depression. The first case I ever saw, as a medical student, was a highly successful stock broker, which horrified me at the time, and I wondered how this behavior went undetected on Wall Street. Years later I realized he blended right in with many of his cohorts.
The risk of having bipolar disorder in the U.S. is about 0.8 percent, according to the NIMH. International studies report lifetime risks of 0.3 to 1.5 percent, equal in men and women. Bipolar disorder usually starts in adolescence or early adult life. Overuse of drugs or alcohol occurs in over 50 percent of affected individuals.
About half of the patients with bipolar illness have a family history of the disorder, and there are "multiplex families" in which it occurs in many members across several generations. There is no Mendelian pattern of inheritance; if you have bipolar disorder but your spouse does not, nor do other family members, there is about a 10 percent chance that your child will develop it. Familial risk is said to be higher with bipolar than unipolar disorders. Complex inheritance patterns point to multiple interacting genes. While markers have been found on chromosomes 18 and 22, no single one has been replicated. Not all of the susceptibility loci, recurrence risks, or interactions are yet known. An excellent review of the subject is on the NIMH web site. Konradi et al. have found molecular evidence for mitochondrial dysfunction in bipolar disorder.
The many reported epidemiological studies show considerable ethnic variability, with higher risks in males in England, Sweden and Iceland. In the United States, women have a 21 percent chance of a major depressive episode vs. a 13 percent rate in men. There was a progressive increase in the rates of depression for all ages between 1960 and 1975, with the risk of depression consistently 2 to 3 times higher among women than men, and this trend has continued. Family studies have found the age-adjusted risk for unipolar depression in a first degree relative in the range of 5 to 25 percent. The risks seem to be higher when early-onset cases are involved. Depression is a major cause of suicide in teen-aged children and is not easily diagnosed. Symptoms include a depressed mood, little interest in sports or other social activities, insomnia, fatigue and feelings of guilt with low self-esteem.
As in many other mental states, depression has a complex genetic pattern and a significant environmental impact. Multilocus genetic effects, rather than shared environmental ones, seem to be more important risk factors.
So far, the pattern seems to be one of multiple genes having individually weak effects.
Down syndrome is the most common cause of mild to moderate mental retardation, occurring in 1/800 live births. From 3,000 to 5,000 babies are born each year with this disorder, involving about 250,000 families in the United States. It has benefited from intensive medical research.
For some time it has been recognized that Down syndrome is the result of abnormalities affecting chromosome 21. There are three variations that are responsible. In 92 percent of the cases there is an extra chromosome 21 in all cells of the individual, having originated during the development of either the sperm or the egg. This condition is called trisomy 21. In about 2 to 4 percent of cases, the extra chromosome 21 is present in some, but not all cells, because an early error during chromosome division has resulted in some of the cells acquiring this extra chromosome. The result is known as mosaic trisomy 21. In each of these situations, then, some or all of the body's cells have 47 instead of the usual 46 chromosomes. There is a third type of Down syndrome, in 3 to 4 percent of cases, in which the body has the usual 46 chromosomes in each cell. Material from a chromosome 21 becomes adherent or translocated to another chromosome, resulting in an excess of chromosome 21 material. This type of Down syndrome is translocation trisomy 21.
Environmental factors and behavioral activity of the parents haven't been implicated, but maternal age is important -- so much so, that many physicians recommend that women becoming pregnant at age 35 or older undergo prenatal testing for Down syndrome. Although at the present time only 9 percent of pregnancies occur in women aged 35 or older, about 1/4 of the babies with Down syndrome come from this age group. The table created by E. G. Hook and A. Lindsjo gives a dramatic view of the increased chance of Down syndrome in progressively older mothers. (Ref.: Table 1)
RELATIONSHIP OF DOWN SYNDROME TO MOTHER'S AGE
Mother's Age -- Incidence of Down Syndrome
Under 30 -- Less than 1 in 1,000
30 -- 1 in 900
35 -- 1 in 400
36 -- 1 in 300
37 -- 1 in 230
38 -- 1 in 180
39 -- 1 in 135
40 -- 1 in 105
42 -- 1 in 60
44 -- 1 in 35
46 -- 1 in 20
48 -- 1 in 16
49 -- 1 in 12
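Read as probabilities, the table's "1 in N" odds can be compared across ages directly. Here is a minimal illustrative sketch; the age column follows the standard published rendition of the Hook and Lindsjo data as restored above, and the "under 30" row is approximated as 1 in 1,000.

```python
# Down syndrome incidence by maternal age, entered as "1 in N" odds
# from the table above (ages per the standard Hook and Lindsjo rendition).
ODDS_BY_AGE = {30: 900, 35: 400, 36: 300, 37: 230, 38: 180, 39: 135,
               40: 105, 42: 60, 44: 35, 46: 20, 48: 16, 49: 12}

def risk(age: int) -> float:
    """Incidence as a probability, using the nearest tabulated age at or below."""
    eligible = [a for a in ODDS_BY_AGE if a <= age]
    if not eligible:
        return 1 / 1000      # "less than 1 in 1,000" under age 30 (upper bound)
    return 1 / ODDS_BY_AGE[max(eligible)]

print(f"age 35 vs. under 30: {risk(35) / risk(29):.1f}x")   # 2.5x
print(f"age 45 vs. under 30: {risk(45) / risk(29):.1f}x")   # 28.6x (age-44 row)
```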
Children with Down syndrome tend to be small, with slower physical and mental development. Mild to moderate mental retardation occurs in most, but some have no mental retardation and others are severely retarded. Physical abnormalities can include flattening of the back of the head, slanted eyelids with skin folds at the inner corners of the eyes, nasal bridge depression, decreased muscle tone, and small ears, mouth, hands and feet. Other features which may or may not be present are hearing deficits in one or both ears (in 66 to 89 percent of children with Down syndrome), congenital heart disease, visual disorders and hypothyroidism.
Prenatal screening can help identify pregnant women whose babies might be at risk for hereditary birth defects, including Down syndrome. These tests are not diagnostic, but positive results can be confirmed with other studies.
They are also advocated by many obstetricians even if there is no apparent risk. If any hidden abnormalities are found, genetic counseling and discussion of the family medical history may be indicated. Often three of these procedures are done at once, as a "triple test," on a small sample of the mother's blood. The three tests are maternal serum alpha fetoprotein, chorionic gonadotropin and unconjugated estriol. Results may suggest Down syndrome in the fetus, in which case additional diagnostic testing may be indicated to confirm the finding and to rule out other congenital defects. A recent study in the U.S. shows that improved medical care has doubled the life span of people with Down syndrome since 1983.
Specific Prenatal Testing
Prenatal tests that can be performed to find evidence of Down syndrome include amniocentesis, chorionic villus sampling (CVS) and percutaneous umbilical blood sampling (PUBS). As none of these is without risks, as well as benefits, genetic counseling may be of value. The umbilical blood sampling is the most accurate of the three and is sometimes used as a follow-up procedure to confirm the results of the other two. It can't be done until late in pregnancy (18th to 22nd weeks) and has the greatest risk of miscarriage.
Newer testing methods under study include analyzing fetal cells which normally circulate in the mother's blood. Another type of diagnostic approach is preimplantation diagnosis, or blastomere analysis before implantation (BABI), permitting detection of chromosome imbalances before an embryo is implanted during in vitro fertilization. The method enables a genetic diagnosis to be made prior to implantation and has been used successfully in cases of cystic fibrosis and Tay Sachs disease, offering a possible alternative to prenatal testing.
A Strategy for X-Linked Disorders
Costa, Benachi and Gautier recommend a much safer strategy for the prenatal diagnosis of X-linked disorders. Cell-free DNA circulating in maternal plasma offers the possibility of a noninvasive approach, enabling the determination of the sex of the fetus with 100 percent accuracy when maternal serum is analyzed during the first trimester of pregnancy. They determined the sex of the fetus in 131 pregnant women by analysis of maternal serum between 10 and 13 weeks of gestation, followed by chorionic-villus sampling only if the fetus was identified as a male. Chorionic-villus sampling was not performed if the fetus was a female. Fetal sex was confirmed later in the pregnancy by ultrasonography. All women received genetic counseling and gave written informed consent.
In two cases, the sex of the fetus couldn't be determined because of spontaneous miscarriage, but in all other cases, the laboratory diagnosis was confirmed by the actual fetal sex. Identification of all 70 male fetuses was confirmed by karyotyping of chorionic villi, while the sex of the female fetuses was confirmed by ultrasonography, averting the potential hazard of loss of female fetuses. If ultrasonography happens to reveal a misdiagnosis of sex, prenatal diagnosis is still possible by means of amniocentesis.
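The size of the savings can be read off these numbers. A quick illustrative calculation, assuming (as the strategy dictates) that chorionic-villus sampling is performed only when the fetus is male:

```python
total = 131        # pregnancies with maternal serum analyzed at 10-13 weeks
miscarried = 2     # fetal sex never determined (spontaneous miscarriage)
males = 70         # male fetuses confirmed by karyotyping of chorionic villi

# Old protocol: every ongoing pregnancy gets invasive sampling.
# New protocol: only pregnancies with a male fetus do.
females = total - miscarried - males
ongoing = total - miscarried
print(f"invasive procedures avoided: {females} of {ongoing} "
      f"({females / ongoing:.0%})")
# -> invasive procedures avoided: 59 of 129 (46%)
```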
The proposed strategy results in a substantial decrease in the use of risky, unnecessary and costly diagnostic tests such as karyotyping and molecular analysis of chorionic villi in the case of female fetuses. The authors listed the X-linked genetic diseases that were included in their series. (Ref. Table 2)
Table 2. X-Linked Genetic Diseases Studied with New, Safer Strategy
(Costa J-M, Benachi A, Gautier E: N Engl J Med 2002; 346:1502)
No. of Cases
X-linked mental retardation
X-linked severe immunodeficiency
Anhidrotic ectodermal dysplasia
Studies of the Parents
It is important to study the chromosomes of the parents of a child who has the translocation type of Down syndrome. Usually this extra chromosome 21 is attached to chromosome 14, 21 or 22. At least 1/3 of the parents of such children will be balanced carriers of the translocation. If neither of the parents is a balanced carrier, there is no increased risk of having a Down syndrome baby in future pregnancies. Chromosome 21 research has found that the extra chromosome is derived from the mother in 88 percent of the cases, from the father in 8 percent, and arises from mitotic mutations in cell division in the other cases. Down syndrome research continues, including a mouse model for studying the developmental aspects. These mice have genes similar to those on human chromosome 21 on their number 16 chromosomes. It is hoped that it will be possible to find means of intervention and specific treatments.
Eating disorders usually begin in adolescence or early adulthood and may present as anorexia nervosa or bulimia nervosa. Anorexia nervosa patients refuse to maintain the minimum recommended body weight for their age and height, with an intense fear of gaining weight, a distorted body image and, in women, amenorrhea. In bulimia nervosa there is persistent overconcern with body shape and weight, frequent binge eating during which there is uncontrolled consumption of food, and self-induced vomiting. The individual may use fasting or strict dieting, laxatives, diuretics or repeated vigorous exercise to prevent weight gain.
Both types of eating disorders are more prevalent in females. Some family studies showed a slightly increased rate of mood disorders among relatives. High heritability rates have been reported for both conditions, with those for anorexia nervosa being slightly higher. A multi-centered international team identified a genetic link to anorexia nervosa on chromosome 1 in 2002. The disease is almost certain to have multiple genetic influences.
FRAGILE X SYNDROME
Fragile X is the most common form of intellectual disability, more commonly known as mental retardation. It is found throughout the world, affecting one in 1,500 males and one in 2,500 females; one in 260 females are carriers. Carrier females have a 30 to 40 percent chance of giving birth to a retarded male and a 15 to 20 percent chance of bearing a retarded female. There will be a maternal family history of a relative with mental retardation or developmental and learning disabilities.
This is a genetic syndrome carried by the X chromosome. Females have two X chromosomes, one from each parent. Males have one, inherited from the mother. If the single X chromosome in a male is affected, the male will have the fragile X syndrome. Females having one of the two X chromosomes involved are somewhat less affected. The discovery of the fragile X gene (FRAXA) occurred in 1991.
Most males having a full mutation are mentally retarded, with typical features of fragile X. Of females with full mutations, 1/3 have normal intelligence, 1/3 are borderline, and 1/3 are mentally retarded. There is associated gene inactivation in severe cases, which causes an important part of the syndrome. In a few fragile X patients there is a different genetic mechanism responsible. The American College of Medical Genetics makes the following recommendations for diagnostic testing:
* Individuals of either sex with mental retardation, developmental delay or autism, especially if they have (a) any physical or behavioral characteristics of fragile X syndrome, (b) a family history of fragile X syndrome, or (c) male or female relatives with undiagnosed mental retardation.
* Individuals seeking reproductive counselling who have (a) a family history of fragile X syndrome or (b) a family history of undiagnosed mental retardation.
* Fetuses of known carrier mothers.
* Patients who have a cytogenetic fragile X test that is discordant with their phenotype. These include patients who have a strong clinical indication (including risk of being a carrier) and who have had a negative or ambiguous test result, and patients with an atypical phenotype who have had a positive test result.
PANIC DISORDER, ANXIETY DISORDERS, AGORAPHOBIA, OBSESSIVE-COMPULSIVE DISORDERS
The onset of panic disorder is most often between ages 15 and 37, marked by recurrent, unanticipated panic attacks during a sharply defined interval of minutes to hours. These uncomfortable episodes of intense fear or discomfort are accompanied by trembling, accelerated heartbeat, sweating, chest pain, dizziness, shortness of breath and other related symptoms, including fear of dying. The affected individual becomes persistently apprehensive about recurrence of similar experiences. Some substances, such as caffeine, carbon dioxide, sodium lactate and cholecystokinin, can bring on an attack.
Panic attacks are about twice as common in females. Studies have found a risk for panic disorder of about 14 percent in first-degree relatives and over 95 percent in second-degree relatives. A review with meta-analysis of panic disorder, generalized anxiety disorder, phobias and obsessive-compulsive disorders in 2001 found all had significant familial aggregations. The role of non-shared environment was significant; the role of family environment uncertain. There are multiple recent reports of various genetic loci being linked to panic disorder and related conditions.
Schizophrenic patients have abnormal thoughts, perception of self and of others, along with strained social relationships. Psychotic symptoms are variable, but can include persistent paranoid obsessions, delusional disorders and inappropriate moods or behavior. Worldwide the incidence is about 1 percent. There is considerable heritability, especially in an early-onset variant. One study found heritability of 89 percent with no environmental contributions. Another showed 74 percent, also with no environmental relationship. But the NIH states, "The modes of inheritance of schizophrenia and mood disorders are complex and likely involve environmental factors and multiple genes in interaction."
The mode of inheritance is complex and appears to involve numerous interacting genes, and scientists are still debating whether the condition is dominant or recessive. Although genetic evidence itself so far in schizophrenia seems to be rather schizophrenic, the NIH has confidence that fast new genotyping and mapping of polymorphisms will soon bear fruit. Prior research depended largely upon three types of procedures:
localizing key zones on chromosomes by linkage analysis with family data
study of haplotypes for linkage disequilibrium mapping
direct detection of functional variants in affected individuals by means of association analysis.
New methods of large-scale data collection and analysis will enable whole-genome studies and offer opportunities for major progress in understanding the genetic role in mental disorders. Improved, more stringent diagnostic criteria are urgently needed in order to provide precise linkage evidence for the molecular genetic research. Statistics are no better than the data provided -- as the old saying goes, "garbage in, garbage out." World-wide uniformity of diagnosis, perhaps based upon newly created molecular pathology tests, would enhance the value of all research and place scientists on the fast track to understanding and dealing successfully with mental diseases.
Molecular Genetics and Neurodegenerative Disorders
Some neurodegenerative diseases are commanding increasing attention by afflicting our burgeoning burden of aging humans. Chronic and progressive, they place ever-increasing psychological and financial loads on society as they relentlessly expand the numbers of individuals who are no longer self-sufficient. The pathologic features of these diseases have long been recognized, as they selectively and symmetrically disrupt and destroy motor, sensory and cognitive functions.
In contrast to the mental disorders, the inheritance patterns are clearly apparent in many of these diseases. (Table 3) Family history is present in nearly every case of Huntington's disease, and autosomal dominant traits are demonstrable in up to 10 percent of amyotrophic lateral sclerosis (ALS), Alzheimer disease and Parkinson's disease cases. Mutant genes have been demonstrated in over 50 nervous system diseases. These few include some you might encounter in your medical pedigree.
Alzheimer Disease (AD), affecting over 4 million people in the United States, is the leading cause of cognitive impairment and the fourth leading cause of death in adults, killing over 100,000 annually and costing more than $60 billion. There is an early-onset familial form of Alzheimer disease, but the risk of AD rises sharply with age, with most cases appearing in the 7th to 9th decades of life, where it causes senile dementia of the Alzheimer type.
The two main types of AD are familial and sporadic. It is now recognized that at least four genes can cause Alzheimer disease. Offspring in the same generation have a strong chance of developing AD if one parent had the disease. The sporadic, or late-onset, type (although it occurs as young as age 35) is much more common and is related to the APOE gene on chromosome 19. This gene exists in several forms, or alleles, and one of these alleles greatly increases the risk for AD. The presence of this APOE allele increases the risk but doesn't define the degree of risk, so testing for the allele is currently more valuable for predictive screening.
Because of the physical, psychological and social impacts, adults having one parent afflicted with AD are becoming very concerned about their own risk for the disease, and some seek information from clinicians about their own APOE. Meanwhile, some groups have recommended against disclosing this information to unaffected individuals. As a result, a multi-institutional study, Risk Evaluation and Education for Alzheimer's Disease (REVEAL), was established in 1999 by the National Human Genome Research Institute to evaluate the risks of providing this information to offspring of an AD patient. Resultant research has provided predictive risk curve graphs for first degree relatives of various ages, but there are many aspects of this problem, such as appropriate education and genetic counseling, which remain to be resolved.
Before having this test performed, a subject should be aware of its possible implications if it becomes a part of the individual's medical records. One can only speculate on how employers, insurance companies and health care providers would react to this information. We hope that this dilemma will be resolved by the discovery of efficient means of slowing the progression or curing AD. Table 4 shows some genetic factors which link other neurodegenerative disorders to Alzheimer disease.
Table 4. Genetic Factors Linking Other Diseases to Alzheimer Disease
(Modified from Martin JB: Mechanisms of Disease: Molecular basis of the neurodegenerative disorders. N Engl J Med 1999; 340: 1970-1980)
Genetic Factor Chromosome Involved
Down syndrome 21
Amyloid precursor protein mutation 21
Presenilin 1 mutation 14
Presenilin 2 mutation 1
Alpha2-macroglobulin mutation 12
Apolipoprotein E (APOE) e-4 allele 19
Huntington's disease (HD) was first reported by George Huntington in 1872, and for many years the disease was known as "Huntington's Chorea" because of the involuntary, irregular movements he described in his cases. It has long been recognized as being familial, an autosomal dominant state with high penetrance, passed from parent to children and equally affecting either sex. The disorder does not skip generations, and often it appears earlier in succeeding generations. The defective gene was identified on chromosome 4 back in 1983, which is almost a prehistoric era in the world of molecular genetics! Further studies found the source of mutation in the gene and determined it to consist of a series of repeated units of information, known as CAG repeats. Only about 10 percent of HD cases appear before age 20, with the peak occurrence during the 4th and 5th decades. The younger-aged patients usually inherited HD from their father, and the older-aged from the mother. Cases becoming symptomatic under age 20 are known as juvenile HD and are usually also accompanied by progressive Parkinsonism, dementia, ataxia and seizures. Adult cases more often present with clumsiness, slowed movement and rigidity. In juvenile HD, death usually occurs in 8 to 10 years; the illness lasts about 15 years in adult HD. The clinical course is complicated by progressive motor dysfunction, dementia, dysphagia and incontinence.
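The diagnostic idea behind the repeat expansion is simple enough to sketch. The following minimal Python example counts the longest uninterrupted run of CAG triplets in a sequence; the interpretation bands in the comment are commonly cited clinical cut-offs quoted for illustration only, not the laboratory standard discussed below.

```python
def longest_cag_run(seq: str) -> int:
    """Longest uninterrupted run of 'CAG' triplets, scanned at every offset."""
    best = 0
    for start in range(len(seq)):
        run, i = 0, start
        while seq[i:i + 3] == "CAG":
            run, i = run + 1, i + 3
        best = max(best, run)
    return best

# Commonly cited interpretation bands for the huntingtin repeat (illustrative):
#   <= 26 normal, 27-35 intermediate, 36-39 reduced penetrance, >= 40 HD range
n = longest_cag_run("GGC" + "CAG" * 42 + "TTA")
print(n, "repeats ->", "expanded (HD range)" if n >= 40 else "not expanded")
```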
The HD gene is called the huntingtin gene and can be used to confirm a diagnosis where HD is suspected in adults or juvenile HD in minors, plus it is a predictive test for asymptomatic relatives at 50 percent risk. It is also used for prenatal diagnosis in high risk families.
Because of the complexity of the DNA testing for Huntington disease, the American College of Medical Genetics prepared updated technical standards and guidelines for testing laboratories in 2004. These are not intended to establish fixed diagnostic criteria but to act as a helpful guide for using those procedures which are currently available and providing some uniformity in the methodology.
Several drugs have been used to treat HD, but these only help control emotional and movement problems. In February 2002, a drug called cystamine was reported by L. Steinman, M. Karpuj and associates to alleviate tremors and prolong life in mice with the gene mutation for HD. It seems to stop the formation of huntingtin clumps in the brain, and scientists are hopeful it might prevent them in humans. Research efforts are being expanded to evaluate this new treatment.
Multiple sclerosis (MS) is one of the most common neurological diseases of young adults, with between 250,000 and 300,000 patients in the United States and 200 new cases occurring each week. The highest prevalence rates are in Iceland, Scandinavia, the British Isles and the countries settled by their emigrants. An underlying genetic susceptibility is clearly apparent in its etiology, with familial clustering, and a strong case was made by Dr. Charles M. Poser in 1994 that Vikings were instrumental in spreading this susceptibility in those areas and other parts of the world. He felt that the custom of capturing and keeping or selling women and children, plus the flourishing slave trade in men, were important factors in this genetic dissemination.
As MS strikes seemingly healthy young adults with normal life expectancies and has a median duration of over 30 years, it creates an enormous economic impact, with medical and supportive care costs of $2.5 billion each year.
MS is an autoimmune neurological disorder, and its etiology is complex. Numerous studies have shown that genes play a significant role in MS susceptibility, and the major histocompatibility complex (MHC) is an important linkage component, but as many as 50 other loci need further identification.
Genetic analysis has largely focused on candidate genes, comparing the frequencies of marker alleles in groups of patients vs. healthy controls and statistically analyzing the results. The resulting figure gives the relative risk of an individual developing the disease if he or she carries the particular allele. More productive research has utilized family-based studies, permitting identification of specific haplotypes and better statistical analyses. Modern genetic techniques should make it possible to support or disprove Dr. Poser's Viking theories.
Family-Based Studies for Genealogists?
Family-based studies would appear to be of particular value to the genealogist because they are more adaptable to combining more specific genetic analyses with multiple statistical approaches. All we have to do is become experts in both genetics and statistics! Using this approach it could be possible to create family collections consisting of extended multigenerational pedigrees, affected sib pairs, alone or with parents and other affected sibs. Family-based studies, properly analyzed and interpreted, appear to offer the best opportunities to evaluate linkage and association to a particular marker.
There is some evidence for an environmental role in MS. The disease is much more common in northern Europe than in southern Europe. An epidemic of MS in the Faeroe Islands was suggestive of a viral agent, but toxins are also under suspicion. The highest reported incidence of cases, at 250/100,000, is in the Orkney Islands off the coast of Scotland. MS is uncommon in Japan (2/100,000), Asia, Africa and native populations of Oceania and the Americas. Gypsies in Hungary, a high risk area, seem resistant to the disease. If a person's mother, father, brother or sister has MS, the person's risk of developing MS is 20-50 times higher; if an identical twin has MS, the risk is 300 times higher. In spite of these statistics, anyone can develop MS, and over 80 percent of the patients have no immediate family history of the disease.
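Relative risks of this size are easier to judge once converted to absolute risks. A minimal illustrative sketch, assuming a general-population lifetime risk of roughly 0.1 percent (the exact baseline is an assumption here; quoted lifetime risks in high-prevalence regions run around 0.1 to 0.2 percent):

```python
BASELINE = 0.001   # assumed ~0.1% lifetime risk in the general population

# Relative risks quoted in the paragraph above.
for relation, rel_risk in [("first-degree relative", 35),   # midpoint of 20-50x
                           ("identical twin", 300)]:
    print(f"{relation}: about {BASELINE * rel_risk:.1%} absolute lifetime risk")
# first-degree relative: about 3.5%; identical twin: about 30.0% -- large
# multipliers, yet most people with an affected relative never develop MS.
```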
The genetic findings represent a real stew, and it seems that there are multiple genetic influences but none are particularly strong. Area 6p21 on chromosome 6 is definitely linked to MS, as combinations of alleles from this area render persons susceptible to the disease. Another region that appears in several studies is within chromosome 19q13. A unique Finnish familial connection exists with a gene encoded for myelin basic protein on chromosome 18. DNA studies may open up the possibilities of new treatments or more effective uses of ones already available.
Parkinson disease (PD) is the second most common neurodegenerative disorder, second only to Alzheimer disease, affecting 2 percent of people over age 65. About 50,000 cases are reported each year. It is the result of degeneration of motor neurons in the brain, nerve cells which control muscle movements. The loss of these cells causes a shortage of dopamine, a neurotransmitting chemical, so that body movements are impaired. Symptoms start with a tremor of a limb when the body is at rest; then movements become slow and difficult, with rigidity, a stooped posture and a shuffling gait. Facial expressions are reduced, and the disease can also be associated with personality changes, dementia, sleep disturbances, and a soft voice with slurred speech.
There is increasing evidence of a genetic role in PD, especially in the early-onset variety where there is continual reporting of new linkage data and most cases occur in specific family groups. Defective genes that regulate the molecules alpha synuclein and parkin occur in many cases, with alpha synuclein involved in both genetic and sporadic types. Genetic mutations of the gene encoding parkin cause autosomal recessive PD. To date, 8 defective genetic markers are known to be associated with dominant or recessive forms of PD. There is also evidence that mitochondrial abnormalities, which affect cellular energy, can contribute to PD pathogenesis.
There are other genetic factors which point to an increased risk for PD, such as mutations found in some patients with Gaucher's disease, a recessively inherited disorder affecting the storage of glycolipids. Influenza and other viruses have long been implicated in the development of Parkinsonism; environmental toxins and infections are also being investigated. But the most promising research appears to be genetically related.
Tourette's syndrome is a neurological disorder that usually appears before the age of 18, occurs in all ethnic groups and affects males three to four times more often than females. It is characterized by repeated, involuntary body movements (tics) and uncontrollable vocal sounds (vocal tics), not necessarily presenting concurrently. Tics can occur frequently during the day and over a period of a year. Once thought to be rare, the condition is common enough that the American Academy of Family Physicians states that up to 20 percent of children have at least a transient tic disorder at some point, but the diagnosis of Tourette's syndrome is usually reserved for the more complex and severe cases. The tics can be accompanied by attention deficit/hyperactivity disorder, obsessive-compulsive behavior and learning disabilities.
The syndrome is genetically inherited in an autosomal dominant pattern of genetic vulnerability for variable types, from mothers or fathers to sons or daughters. Ninety-nine percent of males with Tourette’s syndrome are symptomatic, but only 70 percent of females. The genetic degree of expression of symptoms is referred to by molecular geneticists as penetrance. Although researchers have been seeking the Tourette genetic locus for over 20 years, they are still “on the threshold” of finding it. Diagnosis at this time is the old-fashioned way, by history and physical examination, as there are no prenatal, biochemical or genetic tests specific for the syndrome. There is one potential genetic tool, but it is to protect the patient, rather than for diagnosis. Rare children are genetically incapable of metabolizing fluoxetine, one of the drugs that is effective in controlling tics. A nine year-old boy died from very high blood levels of this drug. It is now possible to identify this type of genetic polymorphism in patients, but unfortunately it is not yet practical because of the time and difficulties involved trying to get such a procedure performed. Withholding medication is not always an option because many of the affected children vocalize socially inappropriate insulting words and phrases, known as coprolalia, which can be quite upsetting to their peers. This could result in the Timex syndrome for the unfortunate child, who takes a licking but keeps on ticking.
Genetics and Mental Disorders: Report of NIMH Genetics Workshop
Molecular Genetics of Mental Disorders, Nat. Inst. of Mental Health 1998
Fragile X Syndrome: Diagnostic and Carrier Testing Policy Statement: Am Coll Med Genet.
Hagerman RJ, Silverman AC: Fragile X syndrome: Diagnosis, treatment, and research. Baltimore: Johns Hopkins U. Press, 1991
Genetic Causes of Mental Retardation
Fragile X Syndrome - An Introduction
Warren ST, Nelson DL: Advances in molecular analysis of fragile X syndrome. JAMA 1994; 271: 536-42
Stevenson J: Evidence for a genetic etiology in hyperactivity in children. In: Fulker DW, Driscoll P, et al., eds: Behavior Genetics. New York, Plenum Press, 1991
Cantwell DP: Genetics of hyperactivity. J Child Psychol & Psych 1975; 16: 261-64
Smalley SL et al: Autism and genetics: a decade of research. Arch Gen Psych 1988; 45: 953-961
Bailey A et al: Autism as a strongly genetic disorder: evidence from a British twin study. Psycholog Med 1995; 25: 63-77
Happe F, Firth, U: The neuropsychology of autism. Brain, 1996; 119: 1377-1400
Bipolar Disorder: Gene Hunting
Schizophrenia and bipolar disorder: http://www.hopkinsmedicine.org/epigen
Belmaker RH: Medical Progress: Bipolar disorder. N Engl J Med 2004; 351: 476-486
Merikangas K, Yu K: Genetic epidemiology of bipolar disorder. Clin Neurosci Res 2002; 2: 127-141
Konradi C, Eaton M et al: Molecular evidence for mitochondrial dysfunction in bipolar disorder. Arch Gen Psychiatry 2004 Mar; 61(3): 300-8
Expert Consensus Treatment Guidelines for Bipolar Disorder: A Guide for Patients and Families.
Gurling H: Candidate genes and favored loci: strategies for molecular genetic research into schizophrenia, manic depression, autism, alcoholism and Alzheimer disease. Psych Develop 1986; 4: 289-309
Alzheimer Disease: OMIM
Alzheimer Disease, NCBI
Levy-Lahad et al: Alzheimer disease genetics. Science 1995; 269: 973-977
Alzheimer Disease Genetics. 1997
Roses AD: Genetic testing for Alzheimer disease: practical and ethical issues. Arch Neurol 1997; 54: 1226-1229
Green RC: Genetic testing for Alzheimer’s disease: has the moment arrived? Alz Care Quarterly 2002; 208-214
Cupples LA, Farrer LA, et al.: Estimating risk curves for first-degree relatives of patients with Alzheimer's disease: The REVEAL study. Genet in Med 2004 July/Aug; 6: 192-196
Brown RH: Amyotrophic lateral sclerosis and the inherited motor neuron diseases. In: Martin JB, ed.: Molecular Neurology. New York: Scientific American, 1998: 35-54
Jankovic J: Clinical Presentations of Huntington’s Disease
Shapira SK: Clinical Genetics of Huntington’s Disease
Potter NT, Spector EB & Prior, TW: Technical standards and guidelines for Huntington disease testing. Genet in Med 2004; Jan/Feb 6: 61-65
Karpuj MV et al.: Prolonged survival and decreased abnormal movements in transgenic model of Huntington's disease, with administration of the transglutaminase inhibitor cystamine. Nature Med 2002; 8: 143-49
NINDS Huntington’s Disease Information Page
Multiple Sclerosis as a Genetic Disease
NIH Guide: Multiple Sclerosis. grants1.nih.gov/grants/guide
Poser CM: The dissemination of multiple sclerosis: A Viking saga? Ann Neurol 1994; 36(S2): S231-243
Multiple Sclerosis as a Genetic Disease. 1/31/00 UCSF Research Papers
Pericak-Vance MA et al: Linkage and association analysis of chromosome 19q13 in multiple sclerosis. Springer-Verlag 2001: Combined study: Duke, Vanderbilt, U. Calif.
Hogancamp WE, Rodriguez M, Weinshenker BG: The epidemiology of multiple sclerosis. Mayo Clin Proc 1997; 72: 871-78
Sawcer W, Goodfellow PN: Inheritance of susceptibility to multiple sclerosis. Curr Opin in Immunol 1998; 10: 697-703
Lublin FD, Rheingold SC: Defining the clinical course of multiple sclerosis. Neurology 1996; 46: 907-911
Multiple Sclerosis - Hope Through Research. Nat. Inst. Neurol. Disorders & Stroke.
Oksenberg JR, Seboun E, Hauser SL: Genetics of demyelinating diseases. Brain Pathol 1996; 6: 289-302
Compston A: Genetic epidemiology of multiple sclerosis. J Neurol Neurosurg Psych 1997; 62: 553-561
Feany MB: New genetic insights into Parkinson’s disease. N Engl J Med 2004; 351: 1937-1940
Vila M, Przedborski S: Genetic clues to the pathogenesis of Parkinson's disease. Nat Med 2004; 10 Suppl: S58-62
Wong K, Sidransky E et al: Neuropathology provides clues to the pathophysiology of Gaucher disease. Molec Genet Metab 2004; 82: 192-207
Hardy J, Gwinn-Hardy K: Genetic classification of primary neurodegenerative disease. Science 1998; 282: 1075-9
Bagheri MM, Kerbeshian J, Burd L: Recognition and Management of Tourette’s Syndrome and Tic Disorders. Amer Fam Phys,
Tourette’s Syndrome: Genetics and Mental Disorders
EXAMPLES OF INHERITED NEURODEGENERATIVE DISORDERS
Disease (by site) -- Inheritance
Cerebral Cortex
Alzheimer disease -- Autosomal dominant (3-5%)
Basal Ganglia
Parkinson's disease -- Autosomal dominant (rarely)
Lewy body dementia
Brain Stem and Cerebellum
Multiple system atrophy
Spinal Cord
Amyotrophic lateral sclerosis -- Sporadic or autosomal dominant (1-10% of cases)
Spinal and bulbar muscular atrophy
Spinal muscular atrophy
Familial spinal paraparesis -- Autosomal dominant or recessive
Adapted from Martin, Joseph B: MECHANISMS OF DISEASE: Molecular Basis of the Neurodegenerative Disorders. N Engl J Med 1999; 340:(25), p. 1971
2013 Findings on the Worst Forms of Child Labor
In 2013, Zambia made a moderate advancement in efforts to eliminate the worst forms of child labor. The Government hired 55 new labor inspectors and created a new District Child Labor Committee in Kaoma District. The Government also eliminated examination fees for grades seven and nine and expanded implementation of the social cash transfer program in some provinces. However, children in Zambia continue to engage in child labor in agriculture and mining. The Government has yet to adopt into law the draft statute on hazardous forms of child labor. Gaps also remain in the current legal framework related to children; for instance, the Education Act does not include the specific age to which education is compulsory and the Government has not defined school-going age as required in the law, which may leave children under the legal working age vulnerable to the worst forms of child labor.
Children in Zambia are engaged in child labor in agriculture and mining. Table 1 provides key indicators on children's work and education in Zambia.
|Working children, ages 7 to 14 (% and population):||28.1 (992,722)|
|Working children by sector, ages 7 to 14 (%)|
|School attendance, ages 5 to 14 (%):||65.2|
|Children combining work and school, ages 7 to 14 (%):||27.6|
|Primary completion rate (%):||91.3|
Source for primary completion rate: Data from 2012, published by UNESCO Institute for Statistics, 2013. (1)
Source for all other data: Understanding Children's Work Project's analysis of statistics from the LFS Survey, 2008. (2)
Based on a review of available information, Table 2 provides an overview of children's work by sector and activity.
|Agriculture||Production of maize,* coffee,* and tea* (3)|
|Production of tobacco and cotton (3)|
|Raising cattle (3-5)|
|Fishing,* activities unknown (3)|
|Industry||Mining gemstones (3, 6)|
|Extracting amethysts* and emeralds* (3)|
|Mining and processing lead,* zinc,* iron ore,* and copper* (3, 6)|
|Quarrying rock,* conducting rudimentary mine drilling, and scavenging mine dump sites (3, 6)|
|Crushing stones (7)|
|Construction, activities unknown (3, 6)|
|Producing charcoal* (3)|
|Services||Domestic service (3, 6)|
|Street work, including begging (3, 8)|
|Categorical Worst Forms of Child Labor‡||Commercial sexual exploitation sometimes as a result of human trafficking (3, 8)|
|Agriculture activities such as plowing, weeding, harvesting, and transporting water and supplies and domestic service as a result of human trafficking (3)|
*Evidence of this activity is limited and/or the extent of the problem is unknown.
‡Child labor understood as the worst forms of child labor per se under Article 3(a) - (c) of ILO C. 182.
Children who were trafficked for agricultural work were primarily trafficked from the Democratic Republic of the Congo or neighboring countries, while most children trafficked for domestic service were trafficked internally.(3) Some children in Zambia are forced by jerabo gangs, which are illegal mining syndicates, to load trucks with stolen copper ore in the Copperbelt Province.(8) In addition, the Government has yet to release information on child labor from the 2008 Labor Force Survey, although the general Labor Force Survey results were released in 2011.(9, 10)
Zambia has ratified most key international conventions concerning child labor (Table 3).
|ILO C. 138, Minimum Age||✅|
|ILO C. 182, Worst Forms of Child Labor||✅|
|UN CRC Optional Protocol on Armed Conflict|
|UN CRC Optional Protocol on the Sale of Children, Child Prostitution and Child Pornography|
|Palermo Protocol on Trafficking in Persons||✅|
Zambia has not ratified the UN CRC Optional Protocol on the Sale of Children, Child Prostitution, and Child Pornography, while commercial sexual exploitation of children continues to be a problem in Zambia.
The Government has established relevant laws and regulations related to child labor, including in its worst forms (Table 4).
|Minimum Age for Work||Yes||15||Constitution, Employment Act (11, 12)|
|Minimum Age for Hazardous Work||Yes||18||Employment of Young Persons and Children Act (13, 14)|
|List of Hazardous Occupations Prohibited for Children||No|
|Prohibition of Forced Labor||Yes||Constitution;Penal Code; Anti-Human Trafficking Act of 2008 (3, 11)|
|Prohibition of Child Trafficking||Yes||Constitution;Penal Code; Anti-Human Trafficking Act of 2008 (3, 11)|
|Prohibition of Commercial Sexual Exploitation of Children||Yes||Penal Code; Employment of Young Persons and Children Act (3, 13)|
|Prohibition of Using Children in Illicit Activities||Yes||Employment of Young Persons and Children Act (13)|
|Minimum Age for Compulsory Military Recruitment||N/A*|
|Minimum Age for Voluntary Military Service||Yes||18||Defence Act (15)|
|Compulsory Education Age||No||Education Act (16)|
|Free Public Education||Yes||Education Act (16)|
*No conscription or no standing military.
Zambia has not enacted into law a list of hazardous occupations prohibited for children, although the draft Statutory Instrument on Hazardous Forms of Child Labor is pending Parliamentary adoption.(3) In addition, the penalties for child prostitution violations in the Employment of Young Persons and Children Act differ from those in the Penal Code.(17) While the Penal Code treats child prostitution as a felony with a minimum 20-year jail sentence, the Employment of Young Persons and Children Act treats violations as civil matters punishable by a fine of $35 to $165. In practice, the Penal Code would be applied; however, research did not discover any such prosecutions in recent years.(18) The Education Act of 2011 requires the Government to provide free education up to the seventh grade, and stipulates that education is compulsory for children of school-going age.(3, 16, 19) However, the Act does not provide a specific age or definition of "school-going age," which may allow children to leave school before they are legally able to work.(16) The lack of standards in this area may increase the risk of children's involvement in the worst forms of child labor. Furthermore, the Government of Zambia does not provide public schools in every village, so some communities must contribute their own labor and resources to fill this gap. While government primary schools are free, schools are understaffed, and parent-teacher association and other associated fees prohibit some students from attending.(20) In 2013, the Government undertook new efforts to promote female education and eliminated examination fees for grades seven and nine to increase school retention.(3)
The Government has established institutional mechanisms for the enforcement of laws and regulations on child labor, including in its worst forms (Table 5).
|Ministry of Labor and Social Security (MLSS) Child Labor Unit (CLU)||Implement and enforce child labor laws.(3)|
|Zambia Police Service Child Protection Unit (CPU)||Work with MLSS to identify and remove vulnerable children from the streets. Work with 72 District Street Children Committees to rescue street children from child labor, including the worst forms, and place them with families, in foster care, or in children's homes.(9, 17) Work with immigration officials to combat child trafficking, with local officials regarding crimes against children, and with schools to educate and sensitize children about abuse; collaborate with the Ministry of Justice to investigate and prosecute child labor cases.(9, 21)|
|Zambia Police Service (ZPS) Victim Support Unit||Handle the enforcement of laws against trafficking, commercial sexual exploitation, and/or use of children in illicit activities.(3)|
|Ministry of Justice||Investigate and prosecute child labor cases.(9, 21)|
Law enforcement agencies in Zambia took actions to combat child labor, including in its worst forms.
Labor Law Enforcement
In 2013, the MLSS recruited and trained 55 new labor inspectors, increasing the number to 108. Despite the addition of 55 new labor inspectors, the MLSS believes that the number is inadequate to conduct inspections country-wide and plans to continue to seek an increase in the number of inspectors.(3) The new labor inspectors received a month-long training on child labor in 2013. The CLU was allocated $36,000 for 2013, which is the same as the budget allocation for 2012.(3) The MLSS stated that the budget and transportation were inadequate to conduct inspections.(3) No child labor cases or prosecutions were recorded in 2013; the MLSS conducted labor inspections in public institutions only and did not conduct any in the private sector where child labor is more likely to be found.(3)
Criminal Law Enforcement
In 2013, 7 potential child trafficking cases were identified by the Government.(8)
The Government has established mechanisms to coordinate its efforts to address child labor, including in its worst forms (Table 6).
| Coordinating Body | Role & Description |
|---|---|
| MLSS | Coordinate government efforts on issues of child labor, including the worst forms.(3) |
| MLSS-CLU | Coordinate with District Child Labor Committees (DCLCs) in 24 of Zambia's 102 districts to increase local awareness of child labor. Mobilize communities against child labor, including its worst forms.(3) |
| Zambia Police Service Child Protection Unit | Coordinate with the Ministry of Community Development, Mother, and Child Health (MCDMCH) to protect children from general abuse, including the worst forms of child labor.(3) |
| DCLCs | Respond to child labor complaints at the local level and file complaints to the MLSS. Composed of ZPS, MLSS, MCDMCH, and civil society stakeholders.(3) |
In 2013, the Government created a DCLC in Kaoma District in the Western Province. Kaoma District was targeted due to the high prevalence of child labor on tobacco farms.(3) The Government intends to establish DCLCs in all districts but lacks the resources. DCLCs serve as the main referral mechanism for social welfare services.(3) Due to overlapping responsibilities and communication lapses, individual agency mandates may not be carried out effectively in some cases and a lack of DCLCs may lead to inadequate referral mechanisms.(9)
The Government of Zambia has established policies related to child labor, including in its worst forms (Table 7).
| Policy | Description |
|---|---|
| National Child Labor Policy | Establishes an action plan and designates responsible agencies to address child labor issues.(3, 19) |
| National Action Plan for the Elimination of the Worst Forms of Child Labor | Identifies five specific priorities for Government focus: improve and enforce existing laws and policies on child labor, protect all children from hazardous labor, strengthen institutional capacity, raise awareness, and establish monitoring and evaluation systems.(3, 19) |
| Poverty Reduction Strategy Paper* | Includes the eradication of the worst forms of child labor as a goal.(9) |
| Sixth National Development Plan (2011-2016)* | Includes the eradication of the worst forms of child labor as a goal.(3, 22) |
| Education Policy and Education Act of 2011* | Includes rights of children, including the right to free education, and provides for the re-entry of teen mothers into school.(3) |
| National Employment and Labor Market Policy* | Proposes interventions to eliminate the worst forms of child labor through services provided in the agriculture, health, and education sectors. Provides skills and education to prepare young people for decent and productive work.(9, 14) |
| UN Development Assistance Framework for Zambia (2011-2015)* | Includes the prevention, protection, and rehabilitation from the worst forms of child labor as a policy outcome in accordance with the Sixth National Development Plan.(23) |
*The impact of this policy on child labor does not appear to have been studied.
Efforts to implement the Child Labor Policy have been restricted due to inadequate funding.(19)
In 2013, the Government of Zambia funded and participated in programs that include the goal of eliminating or preventing child labor, including in its worst forms (Table 8).
| Program | Description |
|---|---|
| Tackling Child Labor through Education (TACKLE) project | Jointly launched by the European Commission and the ILO to combat child labor through education in 12 African, Caribbean, and Pacific (ACP) states.(24) Aims to strengthen the capacity of national and local authorities to implement and enforce child labor laws and policies in Zambia.(9, 19, 25, 26) Extended until August 2013; included ILO training on child labor issues for government officials and teachers; implementation of four Action Programs to assist children exposed to or at risk of child labor, especially those living in vulnerable communities; and awareness-raising on child labor through education initiatives.(9, 19, 25, 26) |
| Global Action Program on Child Labor Issues Project | USDOL-funded project implemented by the ILO in approximately 40 countries to support the priorities of the Roadmap for Achieving the Elimination of the Worst Forms of Child Labor by 2016 established by the Hague Global Child Labor Conference in 2010. In Zambia, the project aims to improve the evidence base on child labor through data collection and research.(27) |
| Pilot social cash transfer program*‡ | Government program that provides funds on the condition that parents send their children to school rather than to work.(19) |
| Government child labor sensitization efforts‡ | Government programs to sensitize the public on child labor at the national and district levels through implementing partners.(3) |
| Zambia National Service skills training camps*‡ | Government program that provides camps for life skills training to at-risk youth, including victims of the worst forms of child labor and children living and working in the streets.(9, 19) |
| Youth Empowerment Fund*‡ | Government program that provides start-up capital for youth to start businesses based on their skills.(3) |
| School Feeding Program*‡ | Government program that provides meals for children that attend school.(3) |
*The impact of this program on child labor does not appear to have been studied.
‡Program is funded by the Government of Zambia.
In 2013, the Government expanded the implementation of the social cash transfer program in various provinces.(3) Although Zambia has programs that target child labor, the scope of these programs is insufficient to fully address the extent of the problem, especially in some of the most common worst forms of child labor, particularly children in the agriculture and mining sectors and those working on the streets.
Based on the reporting above, suggested actions are identified that would advance the elimination of child labor, including in its worst forms, in Zambia (Table 9).
| Area | Suggested Action | Year(s) Suggested |
|---|---|---|
| Laws | Ratify the CRC Optional Protocol on the Sale of Children, Child Prostitution, and Child Pornography. | 2013 |
| | Adopt the draft statutory instrument that enumerates the hazardous occupations prohibited for children. | 2009 - 2013 |
| | Determine through statutory instrument the school-going age for compulsory education. | 2012, 2013 |
| | Harmonize legislation to ensure that penalties for child commercial sexual exploitation are consistent. | 2009 - 2013 |
| Enforcement | Provide transportation, staffing, and other appropriate resources for conducting child labor inspections and child trafficking investigations and ensure that inspections cover all areas where children work, including both public and private sectors. | 2010 - 2013 |
| | Provide free education as required by the Education Act of 2011. | 2012, 2013 |
| Coordination | Establish DCLCs in remaining districts. | 2011 - 2013 |
| | Improve lines of communication and clarify responsibilities among agencies to improve effectiveness and referrals to social services. | 2011 - 2013 |
| Government Policies | Provide adequate funding to implement the National Child Labor Policy. | 2012, 2013 |
| | Assess the impact that existing policies may have on addressing child labor. | 2013 |
| Social Programs | Conduct research to determine the activities carried out by children working in construction to inform policies and programs. | 2013 |
| | Assess the impact that existing social programs may have on addressing child labor. | 2013 |
| | Institute and implement programs to address the worst forms of child labor in Zambia, particularly for street children and those working in the agriculture and mining sectors. | 2011 - 2013 |
| | Publish the data on child labor from the 2008 Labor Force Survey. | 2011 - 2013 |
1. UNESCO Institute for Statistics. Gross intake ratio to the last grade of primary. Total. [accessed February 10, 2014]; http://www.uis.unesco.org/Pages/default.aspx?SPSLanguage=EN . Data provided is the gross intake ratio to the last grade of primary school. This measure is a proxy measure for primary completion. For more information, please see the "Children's Work and Education Statistics: Sources and Definitions" section of this report.
2. UCW. Analysis of Child Economic Activity and School Attendance Statistics from National Household or Child Labor Surveys. Original data from Labor Force Survey, 2008. Analysis received February 13, 2014. Reliable statistical data on the worst forms of child labor are especially difficult to collect given the often hidden or illegal nature of the worst forms. As a result, statistics on children's work in general are reported in this chart, which may or may not include the worst forms of child labor. For more information on sources used, the definition of working children and other indicators used in this report, please see the "Children's Work and Education Statistics: Sources and Definitions" section of this report.
24. ILO-IPEC. Tackling child labour through education in African, Caribbean and the Pacific (ACP) States (TACKLE), ILO-IPEC, [online] n.d. [cited February 27, 2014]; http://www.ilo.org/ipec/projects/global/tackle/lang--en/index.htm.
- The nose pads of both dogs and cats are unique – ridged in a pattern just like fingerprints of humans.
- Among the survivors from the doomed Titanic were two dogs – a Pekingese and a Pomeranian.
- A cat’s heart beats twice as fast as a human heart – 110 to 140 beats per minute.
- Cats knead with their paws when they’re happy.
- Your cat loves you and can read your moods. If you’re sad or under stress, you may also notice a difference in your cat’s behavior.
- The oldest known breed of dog native to North America is the Chihuahua and the oldest known breed is the Saluki, an Arabic word meaning “noble one.”
- An average cat has 1-8 kittens per litter and 2-3 litters per year.
- During her productive life, one female cat could have more than 100 kittens.
- A single pair of cats and their kittens can produce as many as 420,000 kittens in just 7 years.
- The U.S. has the highest dog population in the world, followed by France.
- People who own pets live longer, have less stress, and have fewer heart attacks.
- “Sociable” cats will follow you from room to room to monitor your activities throughout the day.
- Give your cat a quality scratching post to deter her from scratching your furniture. Still scratching? Try putting lemon scent or orange scent on the area. Cats hate these smells.
- You've probably heard that one year of a dog's life is equal to seven in human years. Here's another way of calculating a dog's age: at one year, a dog is the equivalent of 16 human years; at two dog years, 24 human years; at three dog years, 30 human years; and for every dog year after that, add 4 human years (a quick sketch of this rule follows the list).
- Cats are partially colorblind. They have the equivalency of human red/green color blindness. (Reds appear green and greens appear red – or shades thereof.)
- At birth, kittens can’t see or hear. Kittens open their eyes after five days and begin to develop their eyesight and hearing at approximately 2 weeks. They begin to walk at 20 days.
- Cats are the sleepiest of all mammals. They spend 16 hours of each day sleeping. With that in mind, a seven year old cat has only been awake for two years of its life!
- The Doberman breed was created in the 1860′s by Louis Doberman, a German tax collector who created the dog to protect him while he worked.
- Cats spend 30% of their waking hours grooming themselves.
- 95% of all cat owners admit they talk to their cats.
- 32% of those who own their own home also own at least one cat.
- Egyptians shaved their eyebrows as a sign of mourning when they lost a beloved cat.
- A cat has five more vertebrae in its spinal column than a human does.
- The weirdest cat on record was a female called Mincho who went up a tree in Argentina and didn’t come down again until she died six years later. While treed, she managed to have three litters with equally ambitious dads!
- Dogs have lived with humans for over 14,000 years. Cats have lived with people for only 7,000 years.
- Cats can see up to 120 feet away. Their peripheral vision is about 285 degrees.
- Dogs can hear low sounds about as well as humans (40 Hz, compared to our 20 Hz), and can hear sounds that are a quite a bit higher (60,000 Hz, compared to our 20,000 Hz). They are more sensitive to loud sounds – loud noises that humans can tolerate may be painful to dogs. The flipside of this is that they can hear sounds that are 4 times farther away.
- Because cat-eye pupils are vertical slits, they get narrower in bright light. The neat trick: Cats can lower or raise their eyelids to hide more or less of the slit, just like a window shade. This gives a cat more precise control than nearly any other animal over the amount of light entering his eyes.
- In absolute darkness with no light at all, cats can’t see any better than humans can.
- Scientists estimate that cats can see clearly in one-sixth the amount of light we humans would need.
- The fastest dog in the world is a greyhound which can reach up to an amazing 45mph!
- Contrary to popular belief, a Pit Bull doesn’t have the strongest bite. In a scientific test measuring bite pressure, the Rottweiler won with 328 lbs., followed by a German Shepherd at 238 lbs., and the Pit Bull at 235 lbs. What makes Pit Bulls dangerous biters is their ability to lock their jaws.
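For readers who want to play with the dog-age rule mentioned above, here is a minimal sketch in Python; the numbers come straight from the list item, and the function name is just for illustration:

```python
def dog_to_human_years(dog_years: int) -> int:
    """Dog-age rule from the list: 16 human years at one, 24 at two,
    30 at three, then add 4 human years for every dog year after that."""
    if dog_years < 1:
        raise ValueError("expects a whole dog year of 1 or more")
    first_years = {1: 16, 2: 24, 3: 30}
    return first_years.get(dog_years, 30 + 4 * (dog_years - 3))

print(dog_to_human_years(7))  # 46 -- close to the old "multiply by seven" rule
```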
You should know when to call the vet immediately in case of emergency.
Desert Veterinary Clinic provides "first contact" pet veterinary care. Our services include vaccinations and routine pet care, surgical care, nutritional counseling, behavior counseling, and grooming.
Izabela, a professional groomer, takes care of cleaning and pampering your pet.
We give your pet the best care
Our commitment is to treat our clients (owners and pets alike) with dignity, honesty, integrity and respect, but most of all compassion and a strong commitment to quality of life.
Assessment Strategies for the "New" Kindergarten
How to use assessments to help kindergartners reach the new standards for achievement
- Grades: PreK–K, 1–2
With years of teaching kindergarten, 1st grade, and 3rd grade under my belt, I’m often asked if kindergarten is the easiest to teach. My reply? "No!" Teaching kindergarten requires crafting a program that is equal parts academics and social/self-care skills. While later grades still need to address the social and self-care skills, more focus tends to be placed on the academics. Now that isn’t to say I don't need to stay on top of the academic piece. With individual states and the federal government focused on standardized testing, kindergarten is the new 1st grade, meaning there is more content that my young students need to learn.
Among the recent curriculum shifts is the move to start literacy instruction in kindergarten rather than 1st grade. The expectation is that each student will reach a certain level by the time they leave my class, but they are coming in with a wide range of skills and abilities. In order to help them all achieve the level of learning expected from today's kindergarteners, I need to have a clear picture of where each one is starting out from: that’s where assessment comes into play.
In the fall, around the fourth week of school, I begin with three formal assessments:
- Developmental Indicators for the Assessment of Learning (DIAL)
- Dynamic Indicators of Basic Early Literacy Skills (DIBELS)
- A rhyming assessment (Fountas and Pinnell)
After administering the DIAL for articulation and expressive and receptive language, the DIBELS for initial sound fluency and letter naming fluency, and Fountas and Pinnell’s rhyming assessment, I follow up with several informal evaluations to round out my profile of each student:
- Hearing and Recording Sounds in Words by Marie Clay
- a writing sample
- an alphabet identification inventory
Conditions for Screening
These assessments comprise part of my school's initial screening for students entering kindergarten. To prepare, I make sure that everyone is aware that it's assessment time. This helps us avoid schedule conflicts and attendance problems. A notice is sent home to parents informing them when the screening will take place. Colleagues are informed of the space being used through notes in their mailboxes and a sign goes up on the door on the days of the screening. The library, cafeteria or other quiet place is reserved to administer the assessments.
For the screening, it’s important to have materials on-hand. Being organized and prepared myself helps the students relax. I stock up on sharpened pencils, scrap paper, timers or stop watches and clip boards and make sure there is an accurate clock in the room. I also like to have paper and markers and books available for students who are in between assessments.
I keep Post-it notes on the files containing an individual student’s paperwork, allowing teachers to check off assessments as we each do our piece. It allows me to see at a glance what a student has been tested on.
During the screening, the occupational and physical therapists, speech teacher, early childhood coordinator, and curriculum coordinator help me to administer. I have a substitute teacher working with the aide in my classroom, allowing me to concentrate on the task at hand.
Focusing on Students' Individual Needs
Although I’m trying to balance timed assessments while calculating and recording results, it’s important for me to pay attention to cues that show students are feeling overwhelmed or stressed. I want to be able to pick up on the frown or puckered lip that foreshadows frustration. When I do, I’m not beyond responding with praise or humor. A tried and true response: “I’m noticing this question is hard for you. You’re doing great. We’re almost done.” A quick joke or a funny voice might do the trick, or even a pat on the shoulder.
When the screening is complete, the data is entered into a color-coded Excel spreadsheet. The team that administered the screening meets, along with the principal, and decides what interventions might be appropriate. Some of these include Lexia, Great Leaps and small group direct instruction. I also refer to Susan Hall’s I’ve DIBEL’d, Now What?, Phonemic Awareness in Young Children, Jo Fitzpatrick’s Phonemic Awareness: Playing with Sounds to Strengthen Beginning Reading Skills, and Pinnell and Fountas’ Phonics Lessons. These great books are packed with literacy ideas to address specific areas.
Letters are then sent home to parents, either informing them that their child passed the screening without any concerns or that a rescreening would be appropriate at a later date to look at an area of concern. Another letter is enclosed, including tips for games and activities that will address and help strengthen that skill. A rescreening date is then scheduled and the team is ready to hit the ground running.
I think one of the reasons my school is so successful at identifying the needs of students is that we take an organized approach to assessment. With portfolio assessment and work sampling supplementing the screening assessments, it would be difficult for a student to fall through the cracks. And with all the wonderful data gathered, I’m able to plan curriculum that is relevant, challenging and fun.
Hunger The Physiology of Hunger The Psychology of Hunger Obesity and Weight Control
Hunger When are we hungry? When do we eat? When there is no food in our stomach. When we are hungry. How do we know when our stomach is empty? Our stomach growls. These are also called hunger pangs.
The Physiology of Hunger Stomach contractions (pangs) send signals to the brain making us aware of our hunger.
Stomachs Removed Tsang (1938) removed rat stomachs, connected the esophagus to the small intestines, and the rats still felt hungry (and ate food).
Body Chemistry & the Brain Levels of glucose in the blood are monitored by receptors (neurons) in the stomach, liver, and intestines. They send signals to the hypothalamus in the brain. Rat Hypothalamus
Hypothalamic Centers The lateral hypothalamus (LH) brings on hunger (stimulation). Destroy the LH, and the animal has no interest in eating. The reduction of blood glucose stimulates orexin in the LH, which leads rats to eat ravenously.
Hypothalamic Centers The ventromedial hypothalamus (VMH) depresses hunger (stimulation). Destroy the VMH, and the animal eats excessively.
Hypothalamus & Hormones The hypothalamus monitors a number of hormones that are related to hunger.

| Hormone | Tissue | Response |
|---|---|---|
| Orexin increase | Hypothalamus | Increases hunger |
| Ghrelin increase | Stomach | Increases hunger |
| Insulin increase | Pancreas | Increases hunger |
| Leptin increase | Fat cells | Decreases hunger |
| PYY increase | Digestive tract | Decreases hunger |
Set Point Manipulating the lateral and the ventromedial hypothalamus alters the body’s “weight thermostat.” Heredity influences set point and body type. If weight is lost, food intake increases and energy expenditure decreases. If weight is gained, the opposite takes place.
The Psychology of Hunger Memory plays an important role in hunger. Due to difficulties with retention, amnesia patients eat frequently if given food (Rozin et al., 1998).
Taste Preference: Biology or Culture? Body chemistry and environmental factors influence not only when we feel hunger but what we feel hungry for!
Hot Cultures like Hot Spices Countries with hot climates use more bacteria- inhibiting spices in meat dishes.
Eating Disorders Anorexia Nervosa: A condition in which a normal-weight person (usually an adolescent woman) continuously loses weight but still feels overweight.
Eating Disorders Bulimia Nervosa: A disorder characterized by episodes of overeating, usually high-calorie foods, followed by vomiting, using laxatives, fasting, or excessive exercise.
Reasons for Eating Disorders 1.Sexual Abuse: Childhood sexual abuse does not cause eating disorders. 2.Family: Younger generations develop eating disorders when raised in families in which weight is an excessive concern. 3.Genetics: Twin studies show that eating disorders are more likely to occur in identical twins rather than fraternal twins.
Obesity and Weight Control Fat is an ideal form of stored energy and is readily available. In times of famine, an overweight body was a sign of affluence.
Obesity A disorder characterized by being excessively overweight. Obesity increases the risk for health issues like cardiovascular diseases, diabetes, hypertension, arthritis, and back problems.
Body Mass Index (BMI) Obesity in children increases their risk of diabetes, high blood pressure, heart disease, gallstones, arthritis, and certain types of cancer, thus shortening their life- expectancy.
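The slides reference BMI without giving the formula. As an aside not taken from the slides themselves, BMI is conventionally computed as weight in kilograms divided by the square of height in meters; a minimal sketch with arbitrary example values:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

print(round(bmi(70.0, 1.75), 1))  # 22.9
```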
Obesity and Mortality The death rate is high among very overweight men.
Social Effects of Obesity When women applicants were made to look overweight, subjects were less willing to hire them.
Physiology of Obesity Fat Cells: There are roughly 30-40 billion fat cells in the body. These cells can increase in size (2-3 times their normal size) and number (75 billion) in an obese individual (Sjöström, 1980).
Set Point and Metabolism When food intake was reduced from 3,500 calories to 450 calories, weight loss was a minimal 6% and the metabolic rate dropped a mere 15%. The obese defend their weight by conserving energy.
The Genetic Factor Identical twin studies reveal that body weight has a genetic basis. The obese mouse on the left has a defective gene for the hormone leptin. The mouse on the right sheds 40% of its weight when injected with leptin.
Activity Lack of exercise is a major contributor to obesity. Just watching TV for two hours resulted in a 23% increase of weight when other factors were controlled (Hu & others, 2003).
Food Consumption Over the past 40 years, average weight gain has increased. Health professionals are pleading with US citizens to limit their food intake.
Losing Weight In the US, two-thirds of the women and half of the men say they want to lose weight. The majority of them lose money on diet programs.
Plan to Lose Weight When you are motivated to lose weight, begin a weight-loss program, minimize your exposure to tempting foods, exercise, and forgive yourself for lapses.
July 10, 2013
Person to Person
The anchoring effect
Your teen is in desperate need of a new wardrobe. You set a day for a shopping trip. Lucky you. It’s not long until your daughter finds the perfect pair of jeans. Great, you tell her — until you check the price tag, $149.95. “Sorry honey, no deal. Too expensive. I’m sure you can find another pair of nice jeans that’s less expensive.” “No, I love this one; I have to have it.” Her voice has become a screech when a saleswoman approaches. “Do you know that these jeans are on sale, this week only, marked down 25 percent?” “Mom, that’s perfect. If we get four pairs of jeans, that’s like getting one free.”
Daughter’s delighted. Mom feels conned. What’s happening here? Is it just that daughter’s a spoiled brat and mom’s a tightwad? Sorry, it’s not that simple. To understand what’s going on here, you need to appreciate the power of the “anchoring effect.”
How do you know how much you should pay for something? How do you know what’s a deal and what’s a rip-off? You need some sort of reference point. A cue to help you evaluate. For your daughter, the reference point is $149.95. The discount makes it a real bargain so why is mom still giving me a hard time?
Your reference point, however, is quite different. You remember, when you were a kid, a great pair of jeans cost no more than $50. Sure, prices have gone up but three times the price? Crazy! No, in your mind, these jeans are way too expensive.
The anchoring effect is a cognitive bias that influences you to rely too heavily on the first piece of information you receive. And it’s not just a factor between the generations. Stores use it all the time to convince you to buy.
The manufacturer’s suggested retail price for a new Lexus is $39,465. You negotiated a price of $35,250. You feel terrific. You believe you got a great deal. The anchoring effect has worked!
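The arithmetic behind both anecdotes checks out; a quick sketch, with values taken from the column above:

```python
jeans = 149.95
sale = jeans * (1 - 0.25)                  # 25% off
print(f"sale price: ${sale:.2f}")          # sale price: $112.46

# "Four pairs is like getting one free": four at 25% off cost the same
# as three at full price.
print(abs(4 * sale - 3 * jeans) < 1e-9)    # True

msrp, paid = 39465, 35250                  # the Lexus example
print(f"${msrp - paid} below the anchor")  # $4215 below the anchor
```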
You paid $80,000 less for your home than the initial price offering. Were you a great negotiator or is this one more example of the anchoring effect?
How can instructors teach students a new language when they themselves are not fluent in it? That was the challenge faced by educators in the Indian state of Bihar as they attempted to teach their students English. But with a new EDC radio program, both students and teachers alike are beginning to master English.
The program, called English is Fun, is broadcast daily across the state and reaches close to 7 million children and 140,000 teachers. The 30-minute broadcasts target learners in grades 1 and 2 but have also proved popular among older students.
“Bihar has some of the lowest education indicators in the country, and teachers have poor English language skills,” says EDC’s Nadya Karim-Shaw. “The government provides teachers 20 days of training a year, but that is not sufficient to learn a language. Radio is a great way to reach out and provide teachers with daily support.”
The radio programs are based on Bihar’s current English curriculum; they include songs, games, and other interactive features to engage students in learning English. According to Karim-Shaw, “The programs feature vocabulary related to greetings, numbers, shapes, and colors. We focus on speaking and listening, so that children are able to comprehend and respond in English.”
Says Victor Paul, also of EDC, “The broadcasts motivate both students and teachers, and it is very encouraging to see the teachers use the active learning methods in other classes.”
English is Fun is funded by the government of Bihar and the U.S. Agency for International Development.
Originally published on January 21, 2009
Near-equatorial peak diversities are a prominent first-order feature of today’s latitudinal diversity gradient (LDG), but were not a persistent pattern throughout geological time. In an analysis of Ordovician (485–444 Ma) fossil occurrences, an equatorward shift of the latitudinal diversity peak can be detected. A modern-type LDG and out-of-the-tropics range shift pattern were synchronously established during emerging icehouse conditions at the climax of the Great Ordovician Biodiversity Event. The changes in the LDG pattern and range shift trends can be best explained as a consequence of global cooling during the Middle Ordovician and of diversification in the tropical realm following a greenhouse period with temperatures too hot to support diverse tropical marine life. These results substantiate a fundamental role of temperature changes in establishing global first-order diversity patterns.
Use arrow keys to move
HOW THIS CODE WORKS
I made this program so others could make their own pacman and other maze games and artistic expressions. The way this program works is that whenever you push an arrow key, pacman points in the direction you push (if "left arrow key" pressed point "left"). Then pacman moves forward as long as his whiskers (a purple dot in front of his mouth) are touching the color green. You can see this in the pacman sprite "if (purple touching green)". If pacman touches red, the maze switches.
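For readers who would rather study the logic outside Scratch, here is a minimal text-grid sketch in Python (an illustrative translation, not the actual Scratch blocks): the cell one step ahead plays the role of the purple whisker, and '.' path cells play the role of green.

```python
MAZE = [
    "#######",
    "#.....#",
    "#.###.#",
    "#.....#",
    "#######",
]
DIRS = {"left": (0, -1), "right": (0, 1), "up": (-1, 0), "down": (1, 0)}

def step(pos, heading):
    """Point in the pressed direction, then move only if the 'whisker' cell
    one step ahead is path -- like Scratch's "if purple touching green"."""
    dr, dc = DIRS[heading]
    ahead = (pos[0] + dr, pos[1] + dc)
    return ahead if MAZE[ahead[0]][ahead[1]] == "." else pos

pos = (1, 1)                           # starting cell
for key in ["down", "down", "right"]:  # simulated arrow-key presses
    pos = step(pos, key)
print(pos)                             # (3, 2)
```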
To change it, just draw any maze/path you want using the same color green I used (in the Scratch paint editor, that's the 4th square down in the green column). Then change the pacman costume.
Motor control relays: Workhorses of the control world
Motor control relays are heavy-duty relays used to control motor starters and other industrial components. More specifically, they are typically used to energize the coil of a motor starter or contactor, which in turn starts a motor. A motor protective relay is a type of motor control relay used to prevent the coil of a motor starter or contactor from being energized. These relays prevent equipment damage by detecting overload, over- and under-voltage, over-current and phase-loss conditions.
Motor control relay benefits
The main advantage motor control relays offer over general purpose relays is the ability to add accessories and additional poles. They also offer the benefit of selecting motor control relays with 600 Vac coils. The ruggedness of motor control relays make them preferable in manufacturing applications.
Motor control relays allow for a variety of accessories including:
Transient surge suppression
Pneumatic and solid-state timers
Mechanical and permanent magnet latching controls
To protect sensitive instruments and solid-state devices, transient surge suppression directly mounts to coil terminals to limit high transient voltages that result from de-energizing relay coils. Pneumatic timers mount directly to the motor control relay in place of auxiliary contacts, and are convertible from on- to off-delay or the other way around.
More reliable than pneumatic timers and with similar functionality, solid-state timers improve upon the overall accuracy of the timing function. Latches are important to keep the motor control relay contacts closed during a loss and return of power. Convertible contacts can be changed from normally-closed to normally-open or vice versa. By adding auxiliary contacts mounted directly to the top or side of the motor control relay, users are able to add additional poles.
Motor control relays are part of the control circuit. For example, an application could include two motor starters, where the second motor is started and stopped after a time delay. The second motor could be a cooling fan or pump in this application. Other applications include priming pumps, conveyor systems, machine jogging, manufacturing processes, safety circuits, surge and backspin protection for pumps and float controls. Motor control relays can also be used to sequentially start motors to prevent excessive starting loads due to motors starting simultaneously.
To select the appropriate control relay, it is important to determine the system voltage, the load currents, number of poles required and the expected life before replacement. The motor control relay coil should be selected based on the system voltage that energizes the coil. The coil ranges offered typically go to 600 Vac, which is useful for legacy systems. The motor control relay contact rating should be high enough to make and break the coil load of the motor contactor or starter it is controlling. Since coils are inductive loads, the designer must be sure the contacts can handle the inrush currents present when energizing the motor starter coil.
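A minimal sketch of that screening logic; the 50% inductive derating factor and the catalog numbers below are illustrative assumptions, not figures from the article or any specific product line:

```python
def relay_ok_for_coil(system_voltage: float, coil_inrush_a: float,
                      relay_coil_voltage: float, contact_rating_a: float,
                      inductive_derate: float = 0.5) -> bool:
    """Screen a control relay against a motor starter coil: the relay coil
    voltage must match the system voltage, and the contact rating -- derated
    because the starter coil is an inductive load -- must exceed the coil's
    inrush current."""
    usable_a = contact_rating_a * inductive_derate
    return relay_coil_voltage == system_voltage and coil_inrush_a < usable_a

print(relay_ok_for_coil(120.0, 0.9, 120.0, 10.0))  # True  (0.9 A inrush < 5 A derated)
print(relay_ok_for_coil(120.0, 6.0, 120.0, 10.0))  # False (6 A inrush > 5 A derated)
```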
As with all electro-mechanical devices, motor control relays have electrical and mechanical lives. The mechanical life is based on opening and closing the contacts of the relay under no-load conditions; the electrical life is based on duty cycle and making and breaking currents.
Alternative motor starter control methods
Additional methods of controlling motor starters and contactors include:

Using general-purpose relays

Using intelligent relays

Direct control using a programmable logic controller

It is possible to use a general-purpose relay with a control power transformer or power supply. The control power transformer steps the voltage down to a level usable by the general-purpose relay coil, while the power supply steps the voltage down and rectifies it to dc. The general-purpose relay's contacts are typically rated for a resistive load. Since motor starter coils are an inductive load, the general-purpose relay's contacts must be de-rated for use in this application.

Intelligent relays combine the features of a general-purpose relay in a programmable electronic device. Common features include timers, counters, real-time clocks and displays. Intelligent relays are programmable from the front of the unit, which makes it easy to make changes on the plant floor. Importantly, the designer must ensure suppression is installed on the motor starter coil to protect the relay from the collapsing field of the inductive load. Additional advantages of intelligent relays include reduced labor and assembly costs, less troubleshooting time due to fewer components, and the ease of modifying relay logic. Intelligent relays can be cost-effective when replacing two time-delay relays.

With advances in electronic coils in motor starters, the motor control relay can be eliminated from the circuit design and the motor starter can be switched directly from a PLC. Electronic coils in motor starters are sometimes more efficient than motor control relays, especially for the smaller motor starters (less than 40 hp). The electronic coils can be switched from a low-current dry circuit. This can be accomplished from a transistor or relay output. For relay outputs, the designer must take into consideration the expected life of the relay and the number of relays on the PLC output module. If one relay fails, will the entire relay module need to be replaced? To determine whether the PLC relay can switch the motor starter coil, the designer must ensure the motor starter inrush coil current is less than the PLC relay rated switching current.

Motor control relays are the heavy-duty workhorses of the control world. Engineers must weigh voltage/current handling capability, reliability, endurance, assembly, cost, component size and maintenance issues. With proper selection and application, the motor control circuit can be as reliable as your trusty old stable horse.

Author Information: David Brandt is a product specialist at Eaton Corp.
Generosity vs. Stinginess
Definition: Carefully managing my resources so I can freely give to those in need.
* Share what I have with others
* Not expect anything in return for my generosity
* Give of my time and talents
* Praise the good I see in others
Week One Activities
(Practicing ~ Sharing what I have with others)
Encourage siblings to share within the home, which will enable sharing among friends and others.
Week Two Activities
(Practicing ~ Recycling)
Teach and practice recycling in your home. This applies not only to garbage; you can also practice it by giving away clothes and toys as well.
Week Three Activities
(Practicing ~ Not expecting anything in return for your generosity)
Locate a need that your family can fill, above and beyond what you typically would do. Possibly choose one that you would usually be paid for, but do it for nothing, teaching your family generosity by practicing it.
Week Four Activities
(Practicing ~ Giving of my time and talents)
During family time, discuss and determine each family member's talents and how they can give their time and talents generously this week.
Week Five Activities
(Practicing ~ Praising the good I see in others)
- Encouragement is always a blessing ~ be sure to praise generously!
OK, the question is: a charge Q1=2.00*10^6 C (mass=1.22*10^-19) is at the origin. Charge Q2=4.00*10^-6 (mass=2.50*10^-19) is held at rest at the location x=2.00.
a. Find the magnitude and direction of the electric field at the point x=1.5.
b. If Q2 is let go, what will be its speed infinitely far away?
For part a, do you use E1=kq/r^2 and E2=kq/r^2 then add the two, and is the direction in the negative x direction?
And for part b, how do you solve it? What formula do you use?
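A hedged sketch of both parts, assuming SI units throughout and that Q1's exponent was meant to be negative (2.00*10^-6 C). For part a, use superposition: at x=1.5 the field of Q1 points in +x and the field of Q2 points in -x, so you subtract the magnitudes rather than add them; because Q2 is closer, the net field does point in the -x direction. For part b, use energy conservation: the initial potential energy U = kQ1Q2/r all becomes kinetic energy as Q2 moves infinitely far away, so (1/2)mv^2 = kQ1Q2/r.

```python
import math

K = 8.99e9                      # Coulomb constant, N*m^2/C^2
q1, q2 = 2.00e-6, 4.00e-6       # C (Q1's exponent assumed to be -6)
m2 = 2.50e-19                   # kg, as given in the post
x1, x2, xp = 0.0, 2.00, 1.5     # charge positions and field point, m

# Part a: superposition -- each field points away from its positive source.
E1 = K * q1 / (xp - x1) ** 2    # +x direction
E2 = K * q2 / (x2 - xp) ** 2    # -x direction
E_net = E1 - E2                 # signed, +x positive
print(f"E_net = {E_net:.3e} N/C ({'-x' if E_net < 0 else '+x'} direction)")

# Part b: (1/2) m v^2 = k q1 q2 / r
U = K * q1 * q2 / (x2 - x1)
v = math.sqrt(2 * U / m2)
print(f"v = {v:.3e} m/s")       # huge, because the given mass is unphysically tiny
```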
The RWE npower Energy Challenge is an international competition which tasks students with devising a solution to the biggest challenge facing the energy sector: "How can an energy company respond to climate change, maintain a secure supply and, more importantly, ensure that energy bills are affordable?"
The Birmingham-based team, calling themselves Green Grid Consultancy, won the competition for their innovative idea to use a hybrid wind and wave generator. The project involved combining two forms of generation at the same site, which could reduce operating costs, improve power output, and produce no carbon emissions. Each member of the winning team was awarded £1,250, and £5,000 went to Birmingham University.
The RWE npower Energy Challenge has been running for five years. With hundreds of students across the UK taking part, this year the competition had an international feel with two teams from Eindhoven University in Holland making the final.
Ayekame Tseja, captain of Birmingham's Green Grid Consultancy team, said: "We're thrilled to be winners of the RWE npower Energy Challenge, and that our ideas stood out against such a strong field of competitors. Taking part in a competition that's challenged us on such an important topic has been a great experience, and we're thrilled that our hard work has paid off."
The judges, comprising senior board members from RWE npower including Volker Beckers, CEO of RWE npower, were impressed with the unique nature of the team's solutions.
Volker said: "The Green Grid Consultancy presentation was excellent and stood out due to the original way the team tackled the challenge of cutting emissions, whilst ensuring clean energy provision for the future. We’d like to congratulate them on winning the competition.”
"Among the goals of the Energy Challenge is encouraging and rewarding young people studying engineering and science, and attracting new students to these important topics. We’ve been impressed with all the entries to this year's challenge, but special congratulations must go to our winners."
Recent research from the Science for Careers Expert Group report has indicated that there could be a major shortage in science-skilled workers in the UK workforce, with a shortfall of 324,000 workers with the relevant skills by 2014. npower believes competitions like the Energy Challenge can play an important role in helping science undergraduates into industrial jobs and filling this skills gap.
About RWE npower:
RWE npower is an integrated energy business, generating electricity (http://www.npower.com/).
For more information contact:
0845 070 2807
Note from the editors for readers from outside of the United States: In the US, educational rights for students with disabilities are covered by the federal Individuals with Disabilities Education Act (IDEA). Another piece of legislation, the Americans with Disabilities Act (ADA) also has educational implications. A third piece of legislation, the Family Education Rights and Protection Act (FERPA) transfers privacy rights to students when they reach the age of 18, unless the student grants rights to parents.
In the US, the preferred term for substantial limitations in cognitive and adaptive functioning is "intellectual disability", while "learning disability" is reserved for unexpected difficulty in acquiring specific academic skills. Elsewhere in the English-speaking world, "learning disability" is used in referring to people who have substantial limitations in adaptive and cognitive function.
There is widely understood to be a continuum of disability (both in physical and cognitive areas): mild, moderate, and severe and profound. There are varying definitions at the US state level, nationally, and internationally.
Note from the author: This article is part of a multi-part series about Individual Education Plans (IEPs) and the IEP process in which I go over each part of the IEP in-depth and describe the process from both a teacher perspective and a parent perspective. The complete list of is at the bottom of this post.
The transition plan is a required component of IEPs for students 16 years old or older, although most systems begin building a transition plan at age 14, presumably before the student begins high school.
For all the anxiety over all the other parts of the IEP, this section probably involves a great deal more fear over the uncertainty of the future. And it should.
The end goal of all educational efforts (regular and special education) is for the student to have a successful transition into adulthood and post-school life. Ideally, the student will be able to live and work independently. But for many students with special needs, this is often not a likely scenario. I am a parent of special-needs students, and I teach students with severe and profound disabilities. In some ways, I see myself as an "educational undertaker", because my students have severe and profound disabilities, and are unlikely to be able to live and work without significant, around-the-clock support.
For the parents and care-givers of students like mine, contemplating life beyond the protections of IDEA can be frightening. Time passes all too quickly. This was a realization that struck me when my son was in 1st grade and I first published this article; now he is in 5th grade, his voice is changing, and he is taller than his mother. I still have a few years, but some other parents do not have that luxury.
The transition plan is known in some districts as an ITP (individual transition plan). The purpose of it is to plan for what that student will do after high school and then map out a course for getting them there.
What parents have to know is: planning for transition should take place, in a parent’s mind, right now. No matter the age of the child, parents are the ones who have to take the much longer view. While teachers, schools and programs come and go, the parent is the one constant in that child’s life. Along with the child, the parents have to live with the consequences of decisions made today.
What parents should be thinking about, and including in the transition plan, starting in middle school (if not before)
- Is the student working towards a regular education diploma?
- If the student will earn a regular education diploma, will he or she go to college?
- If the student will earn a regular education diploma, but college is not in the plan, what additional training will be required for employment?
- After graduating from high school, will the student require services from vocational rehabilitation services?
- What is the plan to transition the student to services from vocational rehabilitation services?
- After graduation from high school, where will the student live?
- If group home/congregate care is in the student's future, what needs to be done to transition the student from living with the parent to a group home?
As part of this plan, the student is informed in the year he or she turns 17 that, at the age of 18, all rights transfer to them. In other words, the student is then the one who has rights, and the student is the one who accepts or rejects service options. As a courtesy, schools keep parents involved. But they don't have to, under the law.
My students have severe and profound disabilities, but they are not excepted from the passage of rights from parents to the student at the age of 18. Rights transfer to them unless the parent or legal guardian applies for and obtains guardianship of that student. This is a process that parents or guardians should initiate soon after the student turns 17. The process takes away rights from the student and grants them to the parent or legal guardian, by declaring the student incompetent. But it is a necessary process, because it protects the student from being abused and taken advantage of by the system. For students with less severe disabilities, partial guardianship may be obtained to protect their interests.
Why should parents start thinking about this transition process in middle school? One reason is that teachers tend to pass problems along like a proverbial hot potato up the line. As a high school teacher, I am the end of the line. There's no place left to toss them except out the door. And out there is a very cold and bleak world for our kids. There is no IDEA, there is no due process, and there are no procedural safe-guards. No one has to take your kids once they are out. Parents, at that point, you are stuck with whatever happened during the previous 21 years. Planning today for the outcome you wish to see later can pay dividends or at least minimize some of the later headaches.
As my son approaches middle school, we are concentrating mostly on making sure he is keeping up academically. At the present time he is able to pass the regular curriculum but with the increased emphasis on state standardized tests, the rigor and pressure also increases. But it is important to persist in efforts to stay on that regular track because once a student falls off and gets behind, it is almost impossible to catch up again. A student who is not in a regular diploma track will have an increased emphasis on job-related skills, while continuing to be served in the academic subjects. For students with developmental disabilities, it is important to attempt to make the transition as seamless as possible since difficulties with transition is often such a defining characteristic for these individuals.
Here is Daniel's entire IEP series:
- IEP Preparation: School Staff
- IEP Preparation: Parents
- Present Level of Performance
- Behavior Intervention Plan
- Accommodations and Modifications
- Goals and Objectives
- Transition Plan and IHP
- Service Options and Placement
- IEP Process: Functional Behavior Assessment
- Manifestation Determination Part 1
- Manifestation Determination Part 2
A simple self-guided activity to practice conversions between fractions and decimals. Can be used as a warm up or interactive class/group activity. These sheets will help your students organize their work/thinking.
The license is intended for one teacher and all of their classes. Please purchase additional licenses for each additional teacher planning on using these materials in your building. Thanks!
If you like what you see, please follow the Actis Standard store. New materials are posted frequently.
What to do with a fish skeleton: a 3D-scan library of all fish species.
To understand how humans and non-humans move around in their world, knowledge of the skeleton is essential. It's easier to understand how a bird flies if we know the internal structure of its flying apparatus: the bones, tendons, joints and muscles. This biological knowledge can be useful for engineers, for example to design aircraft that are not only lighter but also have better aerodynamics. Biologically inspired technical design is called biomimetics. A fascinating application is biorobots: robots that mimic the way real animals move and interact with the environment (see insert, left, showing Pleurobot: a salamander-like robot that mimics its biological counterpart*).
For creatures that live in the sea, the structure of the internal skeleton is important for understanding how certain species move, make swift turns, burrow, prefer to stay in shallow water or in the deep, etc. This knowledge can then be used to mimic the movements of fishes, for example in filmed animations of fish or in modelling a robotic sea slug. Adam Summers (Adam P. Summers – [email protected]) is a biologist at the University of Washington in the Biology department and School of Aquatic and Fisheries Sciences. He has set himself the goal of creating a digital library of 3-D images, made with a CT scanner, of all 33,000 species of fish in the world (see insert, right part, for an example).** Summers says it can be done in about three years by scanning multiple fish at the same time. His fish odyssey began 15 or more years ago with a question: 'Why are sharks and rays able to move about like other fish even though their skeletons are composed of cartilage and not bone?'
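A quick back-of-envelope check makes the three-year claim concrete. The arithmetic below is our own sketch: only the 33,000-species count and the three-year target come from the text, and the batch size of ten fish per scan is an assumed illustrative number.

```python
# Rough throughput estimate for scanning all fish species in ~3 years.
# Only the species count and timeframe come from the article; the
# batch size is a made-up illustrative assumption.
species = 33_000
scan_days = 3 * 365                      # ~1,095 scanning days

species_per_day = species / scan_days    # ~30 species per day
print(f"~{species_per_day:.0f} species per day")

fish_per_scan = 10                       # hypothetical batch size
ct_runs_per_day = species_per_day / fish_per_scan
print(f"~{ct_runs_per_day:.0f} CT runs per day at {fish_per_scan} fish per scan")
```

At roughly 30 species a day, batching is what makes the timeline plausible: a handful of CT runs per day instead of dozens.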
Summers has a background in engineering and mathematics and an interest in the evolution of non-human animals and how they move in their environment. 'I love the idea of getting all this stuff up on the Web for anyone to access for any purpose.' 'To allow the general public and every scientist out there to just download these data is fabulous,' Summers says.*** He even uses 3D printers to make physical models of these skeletons (just think of a transparent model of a stingray on your desk!). His mission is to 'use the natural world and the sea for inspiration for new materials and new ways of doing things.' For example, he shares his experience working with animation studios, where he was asked by animators to judge separate frames and distinguish real from computer-created fish. He also supervised animators on how a certain fish would behave and move around in films such as 'Finding Nemo' and Disney's new 'Finding Dory'.****
"date": "2018-08-22T01:06:24",
"dump": "CC-MAIN-2018-34",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221219242.93/warc/CC-MAIN-20180822010128-20180822030128-00416.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9385991096496582,
"score": 3.25,
"token_count": 569,
"url": "http://www.albertkokuw.nl/424978583/3994332/posting/what-to-do-with-a-fish-skeleton-a-3d-scan-library-of-all-fish-species"
} |
Early History of Old Orange County
The source of this information is Orange County - 1752-1952 edited by Hugh Lefler and Paul Wager,
published in 1953, copied with permission.
Before Orange County - The Indians
When the curtain rose for the drama of history to begin, the land that is now Orange county was occupied by small tribes of Siouan origin. The great Trading Path from Virginia to the Catawba nation led through the region of present Hillsboro and Mebane to Haw river. The first description of this famous Indian trail was given by John LEDERER, a German doctor, in June, 1670. He told of his visit to the Eno Indians along the Eno river near present Hillsboro. His narrative read:
****Dr. Lederer's comments****
The country here, by the industry of these Indians, is very open and clear of wood. Their town is built round a field where in their sports they exercise with so much labour and violence, and in so great numbers that I have seen the ground wet with sweat that dropped from their bodies: their chief recreation is slinging of stones. They are of mean stature and courage, covetous and thievish, industrious to earn a penny; and therefore hire themselves out to their neighbours, who employ them as carryers or porters. They plant abundance of grain, reap three crops in a summer, and out of their granary supply all the adjacent parts. These and the mountain-Indians build not their houses of bark, but of watling and plaister. . . . Some houses they have of reed or bark; they build them generally round: to each house belongs a little hovel made like an oven, where they lay up their corn and mast, and keep it dry. They parch their nuts and acorns over a fire, to take away their rank oyliness; which afterwards pressed, yield a milky liquor, and the acorns an amber-colour'd oyl. In these, mingled together, they dip their cakes at great entertainments, and so serve them up to their guests as an extraordinary dainty. Their government is democratick; and the sentences of their old men are received as laws, or rather oracles, by them.
****end of Dr. Lederer's comments****
The author goes on to say that more than two centuries later the following comment was written. He does not say who wrote this comment.
Not far from Eno Town the young braves of North Carolina and Duke universities still carry on their ball play with much labour and violence, the government of the county is still democratic, and the three crops a year are possible for farmers who space their corn plantings properly.
Fourteen miles west-southwest from his visit to the Eno village, Lederer found the Shackory Indians dwelling upon a rich soil. These seem to tally with the Shakori (Shoccoree), or Saxapahaw, sometimes called Sissipihaw, dwelling along Haw river in the neighborhood of Haw fields.
Another traveler, John LAWSON, came along the trading path from the south in 1701. The trail was followed across "three Great Rivers", identified as Little and Big Alamance rivers and Haw river. The Haw river ford, which was crossed "with great Difficulty, (by God's Assistance)," was in the neighborhood of the present village of Swepsonville, and bordered lands which Lawson described as "extraordinary Rich".
As he traveled through Haw fields, he met a trading caravan of thirty horses led by several horsemen. The leader, a man named MASSEY, from Leeds in Yorkshire, England, advised Lawson to secure ENO-WILL, a faithful Indian guide, who was to be found at one of the villages in the Occoneechee neighborhood. This Indian was a Shakori by birth, whose people had been met by Lederer at Haw river and who had since joined the Eno and another tribe known as Adshusheer.
The Occoneechee Indians had fled from their island home at the confluence of Dan and Staunton rivers and were then in the region of the Eno river, where they left their name in the "Occoneechee Hills", not far from present Hillsboro.
The Occoneechee (Occaneechee) Indians provided Lawson with a feast of "good fat Bear, and Venison." The Indians' cabins, or lodges, were festooned with dried bear and deer meat, "a good sort of Tapestry," which caused Lawson to declare that the Indians possessed "the Flower of Carolina; the English enjoying only the Fag-end of that fine Country."
Eno-Will agreed to guide Lawson to eastern Carolina. A halt was made at Eno town, located on a "Pretty Rivulet", fourteen miles east of the Occoneechee, and northwest of the present city of Durham. Here Lawson wrote this character sketch of his Indian friend:
Our Guide and Landlord, Enoe-Will, was of the best and most agreeable Temper that I ever met with in an Indian, being always ready to serve the English, not out of Gain, but real Affection; which makes him apprehensive of being poisoned by some wicked Indians, and was therefore very earnest with me, to promise him to avenge his Death if it should so happen. He brought some of his chief Men into his Cabin, and two of them having a Drum and Rattle, sung by us as we lay in Bed, and struck up their Music to serenade and welcome us into their Town. And though at last, we fell asleep, yet they continued their Concert till Morning.
****End of Lawson's sketch****
Soon after this visit of John Lawson, the Siouan tribes of the Piedmont departed for eastern Carolina. Apparently all of the Indians in the region later included in Orange county had disappeared by the time that the white settlement of the area began.
The First Settlers
There were few white families in the 1740's in the area that was to become Orange County. But, by 1751 Governor Gabriel JOHNSTON reported that settlers were flocking in, mostly from PA. At the time it was formed Orange County had an estimated population of 4,000. By 1767 it had the largest population of any county in NC.
The migration along the "Great Wagon Road" from PA through Shenandoah valley to Carolina was made up largely of Scotch-Irish and German immigrants. "Scotch-Irish" is the term used in the reference book. German refers to the area that was later to become Germany.
The most distinctly Scotch-Irish settlement in the county was Eno, about 7 miles north of Hillsborough. They also settled in the area east of the Haw river and in the Little river and New Hope creek sections. The Scotch-Irish, in what is now Guilford County, organized Buffalo Presbyterian Church in 1756. The Scotch were said to have been most prevelant in Cumberland County, but there were some that settled in southern Orange in the area that is now Chatham County.
Germans held the land west of the Haw River. There were Lutherans and German Reformed. Ludwig CLAPP had a grant of 640 acres on the Alamance. Michael HOLT had large acreage along the Great and Little Alamance. John FAUST had land on Cain Creek. Adam TROLINGER had land on the west bank of the Haw River, near the present railroad crossing. Other German pioneers were Christian FAUST, Jacob ALBRIGHT, Peter SHARP, Philip SNOTHERLY and David EFLAND. Quoting from the book: "By 1773 there were so many Germans in western Orange that J.F.D. SMYTHE, an English traveler, experienced difficulty in finding anyone who understood his language in some areas west of Hillsboro."
Some of the names of these early German settlers include: ALBRECHT/ALBRIGHT, BASON, KLAPP/CLAPP, EPHLAND/EFLAND, FAUST/FOUST, GERHARD, GOERTNER/COURTNER/CURTNER, GRAFF/GRAVES, HOLT/HOLD, KIMBRO/KIMBROUGH, LEINBERGER/LINEBERRY, LONG, LOY, MAY, MOSER, NEASE/NEESE, RICH/RIDGE, SCHADE/SHADDIE, SCHEAFER/SHAVER/SHEPHERD, SCHWENCK/SWING, SHARP/SHAEBE, TROLLINGER, STEINER/STONER, WEITZEL/WHITESELL
English immigrants from VA settled in northern Orange along the Hico River and County Line Creek. There was a settlement of Irish near Stoney Creek in what is now Alamance County. The Welsh, including Thomas LLOYD settled between Hillsborough and what is now Chatham County.
Quakers were very prominent in early Orange County. There were some north of Hillsborough. There were more in the Cain Creek and Stinking Quarter Creek areas that are now part of Alamance, Chatham and Randolph. Two prominent Quaker pioneers were Jonathan LINDLEY of the Cain Creek section and William COURTNEY of Hillsborough.
Land Ownership in Orange County
From its beginning Orange County was the home of farmers. It has been said that in 18th-century Orange County more than 75% of the landowners owned between 100 and 500 acres. This was at a time when large land grants were common, but only 5% of the landowners had 1,000 acres or more.
The three largest landowners in 1800 were William CAIN who had 4,417 acres, Richard BENNEHAN with 4,065 acres, and William STRUDWICK with 4,000. By 1860 77% of the land owners had 100 acres or less with only about 1% having 1,000 acres or more.
Slavery in Orange County
Slavery was well established in the colony of North Carolina long before Orange County came into being. Slavery was not as important an institution in Orange County as in other places. At no time did slaves constitute more than 31 percent of the total population of the county.
In 1755 (3 years after its founding) only 8 percent of the families owned slaves. The largest slaveholder at that time, Mark MORGAN, had only 6 slaves. By 1780, however, 3 percent of Orange Co slaveholders had more than 20 slaves.
The 1790 census showed 10,055 whites, 2,060 Negro slaves, and 101 other free persons. At that time there were 14 slaveholders who had 10 slaves or more. 4 of these 14 lived in Hillsboro. William COOPER was the largest slaveholder in Hillsboro with 22 and Richard BENNEHAN, a planter, was the largest slaveholder in the county with 24. Others who had 10 or more were George ALLEN, John TAYLOR, Matthew McCAULEY, John HOGAN, Thomas H PERKINS with 10 each; Walter ALVES with 11; William SHEPPARD and William O'NEAL with 12 each; Hardy MORGAN with 14; Alexander MEBANE with 16; and a person whose name is not known with 20.
In 1860 less than half of all landowners in the county had slaves.
Over 40 percent of those had only one slave. The following is a direct quote: "Most slaveholders owned a small number of slaves, hence the relationship between master and slave was very close. The master knew his slaves by name, took a personal interest in them individually, and looked upon them almost as members of his family".
In 1860 the 3 largest slaveholders were I. N. PATTERSON with 106, Paul CAMERON with 98, and Henry WHITTED with 78. | <urn:uuid:7524d4dd-d439-4e52-b980-55bd1ae0ec08> | {
"date": "2017-02-20T11:32:20",
"dump": "CC-MAIN-2017-09",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00536-ip-10-171-10-108.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.980343222618103,
"score": 3.03125,
"token_count": 2485,
"url": "http://www.rootsweb.ancestry.com/~ncalaman/early.html"
} |
Demand Labeling to Prevent Superbug Crisis
Posted by Jennifer Molidor, ALDF Staff Writer on July 14, 2014
Did you know that nearly 80% of the antibiotics sold in the U.S. go to animals on factory farms before they’re even sick? Crowding and filth in factory farms creates conditions where animals need medicine to keep them alive before slaughter. This mass usage of antibiotics in animal agriculture means antibiotic resistant “superbugs”—bacteria that can’t be killed by our common medicines. These superbugs could be the number one public health crisis facing us today.
Consumers who eat (and feed their children) meat and poultry have no way of knowing if it comes from an animal who was given pre-emptive antibiotics, and that’s why ALDF is demanding the government place labels on meat products. Consumers have a right to make informed decisions—and animals have rights to live in conditions that don’t require mass doses of medicine simply to survive. A Change.org petition to require labeling on meat from animals fed antibiotics is approaching 150,000 signatures—help it get there by signing here. This petition supports ALDF’s request to the USDA to protect the public from a “superbug” crisis from factory farms by labeling meat and poultry.
Tightly packed spaces and unsanitary conditions on factory farms mean mass usage of antibiotics. Intensive confinement like this weakens immune systems and causes immense physiological stress. Instead of making conditions better, producers give drugs to animals to prevent the outbreak of disease and to speed an animal’s hormonal growth to “slaughter weight” more cheaply by using less food. That meat may end up in school lunches or family suppers. Studies show that antibiotic-resistant epidemics of E. coli and salmonella may start from just this origin. Yet the ag industry isn’t required to disclose its use of antibiotics to consumers!
Labeling is how consumers understand what they are purchasing and feeding their families. Food safety information helps consumers make better choices, and it’s up to the USDA to empower us with the ability to make these decisions. Carter Dillard, ALDF’s director of litigation, explains:
Millions of animals are being harmed and consumers misled by this failure to accurately label food products—and this is a matter critical to public and animal health.
Write or call your Congressional representative and ask them to support ALDF’s petition by urging the USDA to protect the public with clear food labels. | <urn:uuid:6de4d6c0-dc2e-4be4-814c-fb599786c4f8> | {
"date": "2016-02-11T19:10:24",
"dump": "CC-MAIN-2016-07",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701162648.4/warc/CC-MAIN-20160205193922-00069-ip-10-236-182-209.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9282984733581543,
"score": 2.90625,
"token_count": 525,
"url": "http://aldf.org/blog/demand-labeling-to-prevent-superbug-crisis/"
} |
GMOs at the Polls: 7 Things to Tell Your Friends Before Election Day
Farmers and eaters around the country and the world are watching the November 6 election with a very important question at the forefront of their minds: Will California’s Proposition 37—requiring labeling of GMOs—pass?
Sixty-one countries already require such labeling. But here in the U.S., GMOs took off in the 1990s with no public debate, and today they're in most processed foods, making Americans the world’s GMO guinea pigs.
We know it’s easy to get sunk by "information overload" and agribusiness advertising. So far the largest GMO maker, Monsanto, and other industry giants have plowed at least $35 million into killing Prop 37.
To help us think straight, we’ve prepared seven points—backed by peer-reviewed studies, a physicians’ 10-year investigation, and UN data—to consider and share with your friends. Here’s what they reveal:
1. GMOs have never undergone standard testing or regulation for human safety.
2. But we know that GMOs have proven harmful in animal studies.
A 2009 review of 19 studies found mammals fed GM corn or soy developed “liver and kidney problems” that could mark the “onset of chronic diseases.”[ii] Most were 90-day studies. In a new two-year study, rats fed genetically modified (GM) corn developed 2-3 times more tumors—some bigger than a quarter of their total body weight—and these tumors appeared much earlier than in rats fed non-GM corn. Among scientists, the study has its defenders and critics, but even the critics underscore that we need more long-term studies.
3. And the most widely used GMOs are paired with an herbicide linked to serious reproductive problems and disease.
The GM crops Roundup Ready soy and corn are treated with the herbicide glyphosate. A physicians' study found people exposed to glyphosate had increased risk of miscarriages, birth defects, cancer, and neurological problems in children. Neurologists report that herbicides, especially glyphosate, "have been recognized as the main environmental factor associated with ... Parkinson's disease."[iii]
4. The consequences of GMO technology are inherently unpredictable.
Inserting a single gene can result in multiple, unintended DNA changes and mutations. "Unintended effects are common in all cases where GE [genetic engineering] techniques are used," warn scientists. One such environmental consequence—genetic contamination of other plants—is already documented. Note that unlike food, once released into the environment, seeds can't be "recalled"![iv]
5. GMO-makers intimidate and silence farmers and scientists.
GMO corporations use patents and intellectual property rights to sue farmers, block research, and threaten investigators. “For a decade,” protested Scientific American editors in 2009, GMO companies “have explicitly forbidden the use of the seeds for any independent research,” so “it is impossible to verify that genetically modified crops perform as advertised.”[v]
6. GMOs undermine our food security.
Within the biotechnology market, Monsanto alone controls 90 percent of GE crops worldwide. And Monsanto is one of three GMO companies, including DuPont and Syngenta, that control 70 percent of the global seed market, reinforcing monopoly power over our food. GMO seeds are costly and must be purchased every year, so they worsen farmers’ indebtedness, dependency, and vulnerability to hunger.[vi]
7. GMOs aren't needed in the first place, so why would we take on risks and harms?
Studies show that safe, sustainable farming practices applied worldwide could increase our food supply by as much as 50 percent. And keep in mind that the world is already producing 2,800 calories for every person on earth every day—more than enough. And that's just with what's left over: half the world's grain goes not to people directly but to feed, fuel, and other purposes. Plus, one-third of all food is wasted. So the urgent question isn't about "more" anyway. It is: how can all of the world's people gain the power to secure healthy food? And a good start is knowing what's in our food.[vii]
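The paragraph's numbers are easy to sanity-check. The sketch below is our own arithmetic: the 2,800 kcal figure and the 50% potential increase come from the text, while the ~2,100 kcal daily requirement is an assumed round reference value, not a figure from the article.

```python
# Sanity check of the "more than enough" claim; only the 2,800 kcal and
# 50% figures come from the article, the 2,100 kcal need is an assumption.
per_capita_supply = 2800        # kcal/person/day, from the text
reference_need = 2100           # assumed typical daily requirement

surplus = per_capita_supply / reference_need - 1
print(f"current supply exceeds the reference need by ~{surplus:.0%}")  # ~33%

potential_supply = per_capita_supply * 1.5   # the cited 50% increase
print(f"potential supply: {potential_supply:.0f} kcal/person/day")     # 4200
```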
Just Label It: Let Us Know It's GMO
Video: Wouldn’t it be nice if they had to tell you what’s in your food?
Shopping in the know (not GMO)
Avoid processed foods! It's a simple way to reduce exposure to the four most common GM ingredients: non-organic forms of soy, canola, cottonseed, and corn, including high-fructose corn syrup.
- Look for the voluntary “non-GMO” label.
- Buy “certified organic,” which ensures that no GMO ingredients were used.
- Visit www.NonGMOShoppingGuide.com for a list of thousands of GMO products and brands.
See below for full citations
Frances Moore Lappé and Anna Lappé wrote this article for YES! Magazine, a national, nonprofit media organization that fuses powerful ideas and practical actions. Frances is author of the legendary best seller Diet for a Small Planet, and many other books. She is co-founder of the Small Planet Institute and is a contributing editor for YES! Magazine. This article draws on material from her latest book, Eco-Mind, Nation Books, 2011.
Anna is the author of Diet for a Hot Planet: The Climate Crisis at the End of Your Fork and co-author of Grub: Ideas for an Urban Organic Kitchen and Hope’s Edge. She is a founding principal of the Small Planet Institute.
To sort more food myths from facts, visit the new Food MythBusters: the Real Story About What We Eat website at FoodMyths.Org.
[i] Source for GMOs in 70% processed foods: California Department of Food and Agriculture, "A Food Foresight Analysis of Agricultural Biotechnology: A Report to the Legislature," Jan. 1, 2003. www.cdfa.ca.gov/files/pdf/ag_biotech_report_03.pdf. Source for safety testing: Freese, W. & Schubert, D., "Safety testing and regulation of genetically engineered foods," Biotechnol Genet Eng Rev, 2004: 299-324. www.saveourseeds.org/downloads/schubert_safety_reg_us_11_2004.pdf.
[ii] Source for GM corn study: Séralini, G.-E., et al., "Long term toxicity of Roundup herbicide and a Roundup-tolerant genetically modified maize," Food and Chemical Toxicology, http://dx.doi.org/10.1016/j.fct.2012.08.005. Source for 19 GM corn and soy studies: Séralini G.-E. et al., "Genetically modified crops safety assessments: Present limits and possible improvements," Environmental Sciences Europe, 2011; 23(10). www.enveurope.com/content/23/1/10.
[iii] Sources for miscarriages, birth defects, cancer: Report from the 1st NATIONAL MEETING OF PHYSICIANS IN THE CROP-SPRAYED TOWNS, Faculty of Medical Sciences, National U. of Cordoba. Aug 2010, University Campus, Cordoba Coordinators: Dr. Medardo Ávila Vazquez, Prof. Dr. Carlos Nota. www.permaculturenews.org/files/INGLES-Report-from-the-1st-National-Meeting-Of-Physicians-In-The-Crop-Sprayed-Towns.pdf . Also see: Eriksson, M. et al, “Pesticide exposure as risk factor for non-Hodgkin lymphoma including histopathological subgroup analysis.” Int J Cancer. Oct 1, 2008; 123(7): 1657-1663. http://www.ncbi.nlm.nih.gov/pubmed/18623080. Source for cell toxicity: Gasnier C. et al., “Glyphosate-based herbicides are toxic and endocrine disruptors in human cell lines,” Toxicology. Aug 21, 2009; 262(3): 184-191. www.barnstablecounty.org/wp-content/uploads/2010/09/gasnier-toxicology-elsevier-262-184-191-glyphostae-ed-human-cell-lines2.pdf .
[iv] Source for unpredictability: Wilson, A.K. et al., “Transformation-induced mutations in transgenic plants: Analysis and biosafety implications,” Biotechnol Genet Eng Rev, 2006; 23: 209–238.www.somloquesembrem.org/img_editor/file/Wilson%2006%20BGER.pdf.Source for 1st quote: Freese, W. & Schubert, D., “Safety Testing and Regulation of Genetically Engineered Foods,” Biotechnology and Genetic Engineering Reviews, Nov 2004; Vol. 21. http://www.saveourseeds.org/downloads/schubert_safety_reg_us_11_2004.pdf . Source for genetic contamination: Quist, D. & Chapela, I., “Transgenic DNA introgressed into traditional maize landraces in Oaxaca, Mexico,” Nature, Nov 29, 2001; 414: 541-543.http://www.nature.com/nature/journal/v414/n6863/full/414541a.html. Source for 2nd quote: Cummings, C. H., “Trespass: Genetic Engineering as the Final Conquest,” WorldWatch Institute, World Watch Magazine, Jan/Feb 2005: 18(1), http://www.worldwatch.org/node/568
[v] Center for Food Safety. “Monsanto vs. US farmers: Nov. 2007 Update.” Washington, DC & San Francisco, CA. Nov. 2007. www.centerforfoodsafety.org/pubs/Monsanto%20November%202007%20update.pdf . See also: Waltz, E. “Under wraps – Are the crop industry’s strong-arm tactics and close-fisted attitude to sharing seeds holding back independent research and undermining public acceptance of transgenic crops?” Nat Biotechnol, Oct 2009; 27(10): 880–882.www.nature.com/nbt/journal/v27/n10/abs/nbt1009-880.html. For Scientific American editors: The Editors, “Do Seed Companies Control GM Crop Research?” Scientific American, Aug 13, 2009. www.scientificamerican.com/article.cfm?id=do-seed-companies-control-gm-crop-research&print=true
[vi] Source for Monsanto 90% statistic: Marie-Monique Robin, 2010, “The World According to Monsanto: pollution, corruption and the control of our food supply,” The New Press, http://thenewpress.com/index.php?option=com_title&task=view_title&metaproductid=1755
Greenpeace, Monsanto: Get out of our food, accessed 19 December 2011: www.greenpeace.org.uk/gm/monsanto-get-out-of-our-food
Center for Food Safety, Monsanto vs. US Farmers, 2005: www.centerforfoodsafety.org/pubs/CFSMOnsantovsFarmerReport1.13.05.pdf Source for GMO monopoly statistic: GRAIN, “Global agribusiness: two decades of plunder,” July 13, 2010. www.grain.org/article/entries/4055-global-agribusiness-two-decades-of-plunder .
[vii] Source for producing plenty: Badgley, C. et al., "Organic Agriculture and the Global Food Supply," Renewable Agriculture and Food Systems, 22 (2007): 86-108. http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=1091304. Statistics calculated from: FAOSTAT, Agricultural Production Indices, Net Per Capita. Index 100 = 2004-2006. faostat.fao.org/site/612/DesktopDefault.aspx?PageID=612#ancor. And: FAOSTAT, Food Balance Sheets, Commodity Balances, 2009. faostat.fao.org/site/368/DesktopDefault.aspx?PageID=368#ancor
"date": "2014-10-24T13:17:24",
"dump": "CC-MAIN-2014-42",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119645920.6/warc/CC-MAIN-20141024030045-00240-ip-10-16-133-185.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8522563576698303,
"score": 2.6875,
"token_count": 2912,
"url": "http://www.yesmagazine.org/planet/7-things-to-tell-your-friends-about-gmos?amp;amp;amp;amp;amp;amp;amp;amp&amp;amp;amp;amp;amp;amp;amp&amp;amp;amp;amp;amp;amp;amp"
} |