Hypogonadism is a condition in which the ovaries in females or the testes in males do not function properly. As a result, normal sexual development does not take place, or gonadal function regresses over time.

What is going on in the body?
The gonads are the glands that secrete sex hormones. In men these are the testes. In women, they are the ovaries. The sex hormones promote a person's physical and sexual development. Individuals who suffer from hypogonadism may not have normal growth or sexual development.

What are the causes and risks of the condition?
Causes of hypogonadism include:
- inherited conditions
- certain types of tumors
- severe nutritional problems
- serious diseases such as kidney failure or cancer
The risks the individual faces are primarily related to the underlying condition. Hypogonadism itself causes few problems and can usually be treated.

What can be done to prevent the condition?
Most of the time, nothing can be done to prevent the condition.

How is the condition diagnosed?
The diagnosis of hypogonadism is sometimes apparent from the medical history and physical examination. Often, further tests are needed. These may include blood tests or special x-rays.

What are the long-term effects of the condition?
Children who do not go through puberty like their peers often experience psychological distress. Most long-term effects of the condition are related to the underlying cause of the hypogonadism.

What are the risks to others?
This is not a contagious condition, so there are no risks to others.

What are the treatments for the condition?
Treatment may include steps to address the underlying cause. This may mean removing a tumor. Hormone replacement is necessary to treat hypogonadism itself. These medications can be delivered by pill or by injection. Symptoms may improve considerably after treatment, and some individuals may then be able to have children.

What are the side effects of the treatments?
Hormone preparations may cause allergic reactions, stomach upset, or other side effects. Surgery can be complicated by infection, bleeding, or reactions to anesthesia.

What happens after treatment for the condition?
Once the underlying cause is corrected, the individual may return to normal. If the affected person is a child, he or she may begin to mature sexually. Often, however, the underlying cause cannot be corrected. In these cases, lifelong hormone replacement is necessary.

How is the condition monitored?
Tracking a person's symptoms and physical appearance may be all that is necessary to monitor hypogonadism. Periodic blood tests may be needed to monitor hormone levels. Any new or worsening symptoms should be reported to the healthcare professional.

Sources: Endocrinology, 1989, DeGroot et al.; Harrison's Principles of Internal Medicine, 1998, Fauci et al.
For decades, scientists have been trying to figure out how to replenish vital insulin-producing beta cells that are missing in type 1 diabetes, which affects an estimated 1.25 million children and adults in the United States. Indeed, major inroads have been made in engineering functional insulin cells in the lab that could be used as cell replacement therapies for treating diabetes. But one of the limitations of these experimental therapies is that, in type 1 diabetes, the body continues to inflict damage on native and transplanted insulin cells. So, some scientists believe a more desirable approach would involve a treatment in which beta cells could be produced in a renewable fashion to counteract their loss in the body.

Now, researchers may have found a way to replace these lost beta cells – by reprogramming stomach tissue. The researchers took samples of tissue from the lower stomach in mice and grew this tissue into mini-organs that, when transplanted back into animals, functioned as insulin-producing cells. While the research is still in early stages, the findings highlight the potential for development of engineered stomach tissue as a renewable source of functional beta cells to treat diabetes. The results appear in the journal Cell Stem Cell.

Located in the pancreas, beta cells store and release insulin, the hormone responsible for regulating glucose levels in the blood. In type 1 diabetes, the body's own immune system mounts an attack against beta cells and destroys them. Without the ability to produce the amount of insulin the body needs, glucose builds up in the bloodstream. This rise in blood glucose levels leads to symptoms of type 1 diabetes.

In the study, senior author Qiao Zhou, of the Department of Stem Cell and Regenerative Biology at Harvard University, and his colleagues genetically engineered mice to express three genes that have the ability to convert other cell types into beta cells. This allowed the researchers to spot which cells in the body have the greatest insulin-producing potential. After flipping on these gene switches in the mice, the team observed that some of the cells in the lower stomach – a region called the pylorus, which connects the stomach to the small intestine – appeared to be most amenable to conversion to beta cells. When the researchers tried reprogramming different cells in the mice to behave like beta cells, they found that cells in this area were most responsive to high glucose levels in the blood and were able to generate insulin to stabilize the blood sugar.

To test how effective these cells might be at churning out insulin, the researchers wiped out the pancreatic beta cells in one group of mice, which made their bodies completely dependent on the reprogrammed stomach cells for insulin. A group of control mice that did not undergo tissue reprogramming died within eight weeks. Meanwhile, the experimental mice's reprogrammed cells maintained insulin production and were able to regulate glucose levels for up to six months – the amount of time the animals were tracked.

One feature that makes the pylorus advantageous to insulin production is that stem cells naturally replenish the gut tissue on a continuous basis. In the experimental mice, the stem cells in this lower stomach region were able to regenerate the insulin-producing cell population after the first set of reprogrammed cells were destroyed. The findings were promising, but the team couldn't transfer the same experiment to humans.
To get closer to a clinical application, Zhou and his colleagues instead removed stomach tissue from mice and engineered it to express the same beta-cell reprogramming factors in the lab. They then coaxed the cells to grow into a tiny ball of tissue resembling a stomach. The researchers hoped the mini-organs would be able to produce insulin as well as refresh themselves with stem cells. To test this, they implanted the mini-stomachs in the membrane that covers the inside of the mice's abdominal cavities and destroyed the mice's pancreatic cells to see if the mini-organs would compensate. Out of the 22 mice in the experimental group, five continued to have normal glucose levels after the transplant and destruction of pancreatic cells. While that number seems like a low success rate, it was what the team expected, and the results could pave the way for new models of therapies that could eventually replace missing beta cells in diabetic people. "What is potentially really great about this approach is that one can biopsy from an individual person, grow the cells in vitro and reprogram them to beta cells, and then transplant them to create a patient-specific therapy," Zhou said in a statement. "That's what we're working on now."
Limited resources are available to address the world's growing environmental problems, requiring conservationists to identify priority sites for action. Using new distribution maps for all of the world's forest-dependent birds (60.6% of all bird species), we quantify the contribution of remaining forest to conserving global avian biodiversity. For each of the world's partly or wholly forested 5-km cells, we estimated an impact score reflecting its contribution to the distributions of all the forest bird species estimated to occur within it; this score is therefore proportional to the impact on the conservation status of the world's forest-dependent birds were the forest it contains lost. The distribution of scores was highly skewed, a very small proportion of cells having scores several orders of magnitude above the global mean. Ecoregions containing the highest values of this score included relatively species-poor islands such as Hawaii and Palau, the relatively species-rich islands of Indonesia and the Philippines, and the megadiverse Atlantic Forests and northern Andes of South America. Ecoregions with high impact scores and high deforestation rates (2000–2005) included montane forests in Cameroon and the Eastern Arc of Tanzania, although deforestation data were not available for all ecoregions. Ecoregions with high impact scores, high rates of recent deforestation and low coverage by the protected area network included Indonesia's Seram rain forests and the moist forests of Trinidad and Tobago. Key sites in these ecoregions represent some of the most urgent priorities for expansion of the global protected areas network to meet Convention on Biological Diversity targets to increase the proportion of land formally protected to 17% by 2020. Areas with high impact scores, rapid deforestation, low protection and high carbon storage values may represent significant opportunities for both biodiversity conservation and climate change mitigation, for example through Reducing Emissions from Deforestation and Forest Degradation (REDD+) initiatives.

Citation: Buchanan GM, Donald PF, Butchart SHM (2011) Identifying Priority Areas for Conservation: A Global Assessment for Forest-Dependent Birds. PLoS ONE 6(12): e29080. https://doi.org/10.1371/journal.pone.0029080
Editor: Justin David Brown, University of Georgia, United States of America
Received: July 19, 2011; Accepted: November 21, 2011; Published: December 19, 2011
Copyright: © 2011 Buchanan et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: The authors have no support or funding to report.
Competing interests: The authors have declared that no competing interests exist.

Enormous and growing environmental problems and a chronic shortage of resources to tackle them require conservationists to set priorities for investment. Several global conservation prioritisation exercises have been undertaken, using a range of different criteria, primarily relating to biological importance and levels of threat. They range in scale from large regions such as Biodiversity Hotspots, to discrete sites such as Alliance for Zero Extinction sites, Important Bird Areas and other Key Biodiversity Areas. The ultimate goal of prioritisation exercises is to facilitate the safeguarding of the most important sites.
This is often achieved through legislative means by designation as protected areas. However, the protected area network is far from complete, captures poorly the ranges of threatened species, and is uneven in its coverage of different habitats, including different types of forests. Identification of priority areas has hitherto resulted in binary classifications (each point on the planet's surface falls either inside or outside a particular set of sites), although a continuous score could be more informative in setting priorities and making comparisons within and outside such areas.

We used a newly available dataset on the distributions of all bird species and maps of forest extent and loss to develop a continuous spatial score of conservation importance in order to help expand and augment existing conservation and protected area networks. The score for each cell is calculated as the sum (across the species mapped as present within the cell) of the inverse of the number of cells each of those species' distribution covers, and represents a measure of the contribution of that cell to the distributions of the species it contains. This measure is repeatable over time, and would permit the relative spatial comparison of scores with values from other taxa assessed in a similar manner. We focused on this class of organisms because distribution maps are available for all bird species and because birds are useful indicators for broader biodiversity. We focused on forest because most of the planet's terrestrial biodiversity, especially its threatened biodiversity, is found in this habitat, including well over half of all bird species, and because the distribution and loss of forests are readily and precisely derived from remote sensing imagery.

Threat is an important consideration in conservation planning, so to assess which of the areas of highest importance for the world's forest bird species are particularly threatened we intersected the impact scores with spatial data on rates of recent deforestation and the distribution of protected areas. We then identified regions of high importance and threat that have least protection. Important Bird Areas (IBAs) are a global network of c.10,000 sites for the conservation of birds and other biodiversity identified using globally standardised quantitative criteria. Intersecting these with the impact score highlighted a priority suite of clearly demarcated sites that are amenable to management for conservation. This assessment is timely in light of the commitment made in 2010 by the world's governments to expand the protected area network from 12% to 17% of land area by 2020, covering ‘especially areas of particular importance for biodiversity’.

Finally, we considered the relevance of these results to the emerging REDD (Reducing Emissions from Deforestation and Forest Degradation) initiative, which aims to use market incentives to reduce greenhouse gas emissions by paying for avoided deforestation. We overlaid a global map of carbon stocks onto the impact scores, deforestation data and protected areas data to identify those areas of highest importance for forest birds that are most threatened, least protected and have high carbon stocks. These areas are arguably among the most urgent priorities for REDD+ and will deliver the greatest benefits to biodiversity.
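To make the scoring rule concrete before turning to the results, here is a minimal illustrative sketch; it is not the authors' code (their analyses used ArcMap 9.2 and R), and the toy presence matrix and variable names are assumptions made purely for illustration. Each species contributes the inverse of its range size, in 5-km cells, to every cell it occupies.

```python
import numpy as np

# Toy presence matrix: rows = forest-dependent species, columns = forested 5-km cells.
# presence[i, j] = 1 if cell j falls within species i's estimated habitat (ESH).
presence = np.array([
    [1, 1, 1, 1, 1, 1],   # wide-ranging species (r_i = 6)
    [0, 0, 1, 1, 0, 0],   # moderate range (r_i = 2)
    [0, 0, 0, 1, 0, 0],   # restricted-range species (r_i = 1)
])

# r_i: number of 5-km cells in each species' estimated distribution.
range_sizes = presence.sum(axis=1)

# Each species contributes 1/r_i to every cell it occupies, so its
# contributions sum to exactly 1 across its whole range.
per_cell_contribution = presence / range_sizes[:, None]

# Impact score s for each cell: sum of 1/r_i over the species present there.
impact_scores = per_cell_contribution.sum(axis=0)
print(impact_scores)  # the cell holding the restricted-range species scores highest
```

Run on the toy data, the cell occupied by all three species scores 1/6 + 1/2 + 1 ≈ 1.67, while cells holding only the wide-ranging species score 1/6, which is the intended behaviour of the metric.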
The distribution of impact scores
Across the world's 2.2 million forested 5-km cells, impact scores ranged from just over zero in boreal tundra to a maximum of 4.01 in Hawaii's tropical moist forests. The average score was 0.0026±0.0000076 but the distribution was strongly skewed (Figure 1). The 1% of 5-km cells with the highest impact scores accounted for 27.2% of the sum of impact scores across all cells. Among the regions containing the highest scores were the Hawaiian islands, São Tomé and Príncipe, the islands of Indonesia, the Philippines and New Guinea, and the Atlantic Forests and northern Andes of South America, while the lowest values were in arctic and arid ecoregions that contained small areas of forest (Figures 2a and S1, Tables 1 and S1). In contrast, the areas of highest bird species richness fell predominantly within the Amazon and Congo Basins (Figure 2b). The highest impact scores were associated with species-poor areas containing species with small ranges (Figure S2) and there was no simple relationship across all cells between impact score and bird species richness (Figures 2a, 2b, 3a, S3a). The 20 ecoregions with the highest impact scores within each biogeographic realm were generally islands, coastal areas and mountainous areas, although in the Afrotropics and the Neotropics extensive inland lowlands were also included (e.g. Cerrado in the Neotropics, Miombo in the Afrotropics) (Figure 2a, Table S1).
(Plot smoothed to aid visual interpretation.)

Recent forest loss
The 18-km squares assessed for forest loss in 2000–2005 by Hansen et al. overlapped with 2,083,034 (94.6%) of the 5-km forested squares, but forest loss data were not available for some ecoregions containing high impact scores (e.g. Palau tropical moist forests and Pernambuco coastal forests; Tables 1, S2). For those ecoregions with data on forest loss, mean deforestation between 2000 and 2005 was 1.3%±0.0022. Ecoregions with the highest maximum impact scores did not suffer disproportionately high rates of recent deforestation (Figure 3b), a pattern consistent across biogeographic realms (Figure S3b). However, a number of ecoregions had both high impact scores and high rates of recent deforestation, including: Cordillera La Costa montane forests in the Neotropics, Halmahera rain forests in the Indo-Malayan realm, Mount Cameroon and Bioko montane forests, and the Eastern Arc forests in the Afrotropics (Figure 4a).

Protected areas and Important Bird Areas (IBAs)
The protected area network (which covers approximately 13% of the planet's land surface) encompasses 9% of forested 5-km cells. At a global scale there was a weak tendency towards greater protected area coverage in ecoregions with higher impact scores (Figures 3c, 4b), although patterns differed between biogeographic realms (Figure S3c). Forested 5-km cells falling within protected areas had impact scores approximately twice as large as those of squares outside protected areas (Figure S4). However, seven of the 20 ecoregions with the highest impact scores did not contain any protected forest (Table 1), and the 9% of forested 5-km cells that fell partly or wholly within protected areas captured only 15.5% of the global sum of impact scores, compared to a possible maximum of 63.7%.
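Figures of this kind (the share of the global score sum captured under a given level of protection, and the maximum that could be captured) can be reproduced by a simple ranking exercise: sort all forested cells by impact score, assume the top-scoring fraction is protected, and ask what share of the total those cells hold. The sketch below is a hedged illustration of that logic using synthetic, heavy-tailed scores rather than the real dataset; the function name and Pareto-distributed example are assumptions, not the authors' workflow.

```python
import numpy as np

def max_capture(impact_scores, protect_fraction=0.17):
    """Share of the global impact-score sum captured if the top-scoring
    fraction of forested 5-km cells were protected."""
    scores = np.sort(np.asarray(impact_scores))[::-1]          # highest scores first
    n_protected = int(round(protect_fraction * scores.size))   # e.g. the 17% CBD target
    return scores[:n_protected].sum() / scores.sum()

# Example with a skewed, synthetic score distribution (stand-in for Figure 1):
rng = np.random.default_rng(0)
scores = rng.pareto(1.5, size=100_000)
print(f"Top 17% of cells capture {max_capture(scores):.1%} of the total score")
```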
If the proportion of 5-km forest squares that are protected were increased to 17% in line with the target set by the CBD in 2010, the global capture of impact scores could rise from the current 15.5% to a theoretical maximum of 69.5% if new protected areas were sited only in areas with the highest impact scores. The 20 IBAs with the highest impact scores are given in Table S2.

There was a weak positive association across 5-km cells between impact scores and carbon stocks (Figure 3d), although the pattern varied between biogeographical realms (Figure S3d). Ecoregions with high values (in the upper quartiles for each parameter) of impact score, recent deforestation and carbon stocks include Seram rain forests in Indonesia, Borneo and Sumatra lowland rain forests and Sumatran peat swamp forests in Indo-Malaya, Niger Delta swamp forests (Afrotropics), Madeira-Tapajos moist forest (Palearctic) and Isthmian-Atlantic moist forest in the Neotropics (Figure 5).

Our impact score is a simple metric of conservation value that has been estimated across the globe and is relevant to IUCN Red List criteria A, B and C. It can therefore contribute to the identification of those areas of remaining forest whose loss is likely to have the greatest impact on the conservation status of the world's forest birds. Unlike methods that classify priority sites for conservation in a binary way, the impact score is a spatially explicit continuous variable that can provide insights into variation in conservation importance at a high spatial resolution. As with previous global site prioritisation exercises, it does not incorporate the cost of management (not least because there are no global data on land values), nor is it an analysis of complementarity. At the national scale, which is where practical decisions about delineating and prioritising sites for conservation are made, our score could be incorporated into prioritisation analyses along with data on costs, opportunity and complementarity. The impact score can be recalculated as new data become available on the extent of habitats, as better assessments of species' distributions and altitudinal ranges become available and as taxonomic boundaries change, and can be recalculated at regional or country levels. It could also be used to make absolute comparisons over time within cells and other defined spatial units (e.g. ecoregions), and relative spatial comparisons with similarly derived scores for other taxa.

The reliability of the results depends on the accuracy of the input data (as with all such prioritisation exercises). Ideally our analysis would have been based upon data on the Area of Occupancy (AOO) of species, but such fine-scale distribution data are available globally for a tiny proportion of forest species. Therefore we took a pragmatic approach and estimated the potential Extent of Suitable Habitat (ESH) for species, which reduced commission errors relative to the Extent of Occurrence (EOO). Species are unlikely to be distributed evenly across their entire range, meaning that the ESH will usually exceed the AOO. For example, our forest map includes forests that have been degraded to some unknown degree, resulting in the ESH exceeding the AOO for species that cannot tolerate degradation. Other determinants of occupancy and abundance such as hunting pressure cannot be mapped from remote sensing, further increasing discrepancies between ESH and AOO.
However, because of the scale at which we report our results and because our score is based on multiple species, we do not think this biases our results, although we acknowledge these limitations. Thus, while our results should prove useful for identifying priority areas for new or expanded IBAs and protected areas or for investing REDD+ resources, defining boundaries of specific sites and prioritising among sites will require local-scale validation. The degree to which areas with high impact scores for birds capture those of high importance for other taxa cannot yet be assessed, since the extent to which areas of rarity, endemism and risk overlap between major groups is unclear. However, repeating this analysis for other well-mapped taxa (e.g. mammals and amphibians) would be straightforward.

The distribution of impact scores
The highly skewed frequency distribution of the impact score suggests that protecting a relatively small number of the world's forested areas would yield disproportionate benefits for birds. Tropical islands and mountains often had high scores, with most of the 20 ecoregions containing the highest impact scores falling into one or both of these categories (Table 1). In these areas the high scores are generally a consequence of the importance of the cells for a relatively small number of species with restricted ranges (Figure S2). Previous studies have shown that such areas are often important for restricted-range species, and that areas with many rare or endemic species often have low species richness. The latter is consistent with the weak correlation we found between impact scores and species richness.

Using the impact score in conservation planning
Most of the world's governments have committed to increase terrestrial protected area coverage to 17% by 2020. There is therefore a pressing need to identify sites that are the most urgent priorities for protection. This will require consideration of both biodiversity importance (irreplaceability) and degree of threat (vulnerability). Our impact score is informative for assessing the former, although we recognise that areas with low impact scores may still have high importance for biodiversity (for non-forest species, highly threatened species, significant aggregations of individuals or for non-avian taxa), or for the delivery of ecosystem services. Even though there was no evidence that areas with high impact scores were systematically more or less threatened than other areas (i.e. there was no correlation between impact scores and rates of recent deforestation), overlaying the impact score with recent rates of deforestation and current levels of protection can help identify areas whose addition to the protected area network would yield the greatest benefits to forest birds. Within areas identified as being of high priority, identification of specific sites for new or expanded protected areas will need to take into account political and socioeconomic realities on the ground. Since IBAs are identified nationally through multi-stakeholder processes as discrete sites that are actual or potential conservation management units, they provide an existing network of sites whose boundaries incorporate such practical considerations.
Unprotected (or incompletely protected) IBAs for which formal protection is appropriate and that lie within areas of high irreplaceability (impact score) and high vulnerability (recent deforestation rate) represent some of the most urgent priorities for protected area network expansion if governments are to meet their CBD targets. These include, for example, the Western Ridge and Middle Ridge IBAs in Palau, the Príncipe forest IBA, the São Tomé lowland forest IBA, the Blue Mountains IBA in Jamaica and the Parque Nacional Península de Paria IBA in Venezuela (Table S2). Protection of such IBAs will provide much broader biodiversity benefits, as the IBA network covers about 80% of the extent of Key Biodiversity Area networks in those countries in which Key Biodiversity Areas for non-avian species have been identified (BirdLife International unpublished data). As well as helping to inform expansion of formal protected area networks, our results should also help to set priorities for other approaches for safeguarding priority sites, including, for example, community management.

Furthermore, our results have relevance for REDD+, as this market-based mechanism for mitigating climate change could provide substantial opportunities for biodiversity conservation through the protection of intact forests. Colombia, Indonesia and Panama are the only UN REDD Programme or Partner countries among the 13 countries that contain the 20 IBAs with the highest impact scores. However, REDD+ projects in any of the forests that we have identified as highly threatened and poorly protected, as well as having both high carbon values and high impact scores, are most likely to deliver benefits for both climate-change mitigation and biodiversity conservation. While Strassburg et al. also found a positive association between regions of high conservation value and high carbon stocks, we pinpoint the most urgent priorities among potential REDD+ opportunities by incorporating additional data on levels of protection and threat (Table 1). Implementation of REDD+ projects in such places has great potential to help safeguard the future of the world's forest bird species and other biodiversity.

Bird data and forest cover
Digital distribution maps of the extent of occurrence (EOO) of all bird species were extracted from a recently completed library. These maps were derived from a variety of sources: specimen localities obtained from museum data; 587,000 point localities for 6,800 species in BirdLife's Point Locality Database; 5.02 million records for 8,600 species in the Global Biodiversity Information Facility (GBIF), many of which relate to specimen records; observer records documented in BirdLife International's Red Data Books and species factsheets, published literature, survey reports and other unpublished sources; 304,073 records for 7,506 species of documented occurrences in 10,367 Important Bird Areas (extracted from BirdLife's World Bird Database); distribution atlases derived from systematic surveys; distribution maps in field guides and other handbooks; and expert opinion. The digital distribution maps represent the best current estimates of the EOO of all bird species. We analysed the subset of 6,077 species (representing 60.6% of all extant bird species) that are scored in BirdLife International's World Bird Database as having high or medium forest-dependence.
Species with high forest-dependence are forest specialists that are characteristic of the interior of undisturbed forest, rarely occupy non-forest habitats, and almost invariably breed within forest; while they may persist in secondary forest and forest patches if their particular ecological requirements are met, they are usually less common in such situations. Species with medium forest-dependence are forest generalists that breed in undisturbed forest but are also regularly found in forest strips, edges and gaps and secondary forest, where they may be commoner than in the interior of intact forest. For each of these forest-dependent species, altitudinal limits were also extracted from the same source and the EOO maps were clipped by forest cover and altitude to produce maps of the extent of potentially suitable habitat (ESH) within the EOO of each species. The baseline forest cover map used to clip these maps was extracted from GLC2000, and included Global Land Cover Classes 1 to 10. This included all forested land, from boreal taiga to tropical rainforests. Forest cover mapped from GLC2000 is very similar in extent to that mapped from MODIS. Altitudinal data were extracted from a 30 arc-second digital elevation model (DEM) produced from the Shuttle Radar Topography Mission data.

We adopted a 5-km grid square resolution as a trade-off between spatial explicitness and processing speed, although a spatial resolution of 1 km could be achieved with the data available. A 5-km square was classed as forested if any of its 25 constituent 1-km squares was forested in GLC2000. Nearly two-thirds (62%) of the 5-km cells thereby selected contained at least 50% forest cover. We examined the effects of varying the threshold of forest cover within a cell from 4% (i.e. a single 1-km square) up to 100% (i.e. all 25 1-km squares) for a subset of the data (species endemic to Africa). All species had ESH estimates >0 when a 4% threshold was used (i.e. at least one 5-km square classed as forest), but the proportion of species with ESH estimates of zero increased to 6% with a 20% threshold, 14% with a 60% threshold and 21% with a 100% threshold. Using these data, we then examined the effect on the impact score for each 5-km cell of using different thresholds (4, 20, 40, 60, 80 and 100%). For each threshold, we log-normalised the impact scores and then regressed them against the scores obtained with a 4% threshold. Although the absolute value of the impact scores increased with increasing threshold (due to fewer cells being used to calculate the values, and despite the loss of 21% of species), the very strong correlations (R² ≥ 0.99 in all cases) indicated that there was very little relative change in cell importance (Table S3).

The minimum and maximum altitudes of each 5-km square were assessed from the DEM and the square was considered to lie within the altitudinal distribution of each species if any part of it fell within the altitudinal limits of that species. Because the majority of 5-km cells contained at least 50% forest cover and altitudinal variation within individual 5-km cells was generally low, the probability that only the non-forested part of a particular square fell within the requisite altitudinal limits was slight, although this might have resulted in a marginal overestimation of ESH.
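A schematic version of this cell-selection rule is sketched below. It is not the published workflow (which used GLC2000 grids, the SRTM DEM and ArcMap); the function name, inputs and the simple boolean grids are assumptions for illustration only.

```python
import numpy as np

def esh_cell_count(eoo_mask, forest_frac, cell_min_alt, cell_max_alt,
                   species_min_alt, species_max_alt, forest_threshold=0.04):
    """Count 5-km cells of potentially suitable habitat (ESH) for one species.

    eoo_mask         : boolean grid, True where the cell lies inside the mapped EOO
    forest_frac      : fraction of each 5-km cell classed as forest (0-1)
    cell_min_alt,
    cell_max_alt     : minimum / maximum elevation of each cell from the DEM
    species_min_alt,
    species_max_alt  : the species' published altitudinal limits
    forest_threshold : a cell counts as forested if at least this fraction is forest
                       (0.04 ~ one of its 25 constituent 1-km squares)
    """
    forested = forest_frac >= forest_threshold
    # Altitude test: any part of the cell falls within the species' limits.
    altitude_ok = (cell_max_alt >= species_min_alt) & (cell_min_alt <= species_max_alt)
    esh = eoo_mask & forested & altitude_ok
    return int(esh.sum())

# Example with a 2 x 3 grid of 5-km cells (all values illustrative):
eoo = np.array([[True, True, False], [True, True, False]])
forest = np.array([[0.0, 0.5, 0.9], [0.2, 0.04, 0.6]])
lo = np.array([[0, 100, 50], [200, 900, 10]])
hi = np.array([[300, 700, 400], [800, 1600, 90]])
print(esh_cell_count(eoo, forest, lo, hi, species_min_alt=500, species_max_alt=1500))  # prints 3
```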
The resulting maps of ESH therefore included, for each forest species, all the 5-km cells within that species' EOO that had partial or complete forest cover in the year 2000 and that fell at least partly within the altitudinal limits of that species. The ESH maps reduced the EOO extents by 48.2±0.4% but the two were strongly correlated across species (r = 0.84). Although there was a significant difference in this reduction between species with high (n = 2609) and medium (n = 3468) forest dependency (χ² = 52.5, d.f. = 1, P<0.001), the effect size equated to just a 5.3±0.7% difference in EOO reduction. We assessed whether ESH maps reduced the number of omission and commission errors compared to EOO maps, using data on the occurrence of globally threatened species at IBAs and the location of these IBAs. Compared to EOO, ESH estimates reduced errors of commission by 19.7% for all species (20.8% and 18.3% for the 317 high and 173 medium forest dependent globally threatened species respectively), while omissions only decreased by 9.5% (9.0% and 10.6% for high and medium dependency respectively). This is consistent with previous studies showing that ESH has fewer commission errors than EOO.

Calculating the impact score
For each 5-km cell, we estimated an impact score, s, as

s = Σi=1…n (1/ri)

where ri is the total number of 5-km cells within the estimated distribution (ESH) of the ith species and n is the total number of species whose ESH includes that 5-km square. Thus, species with restricted ranges contribute more to the overall impact score of each square they occupy than do species with extensive ranges, but all species predicted to occur within a square contribute to its impact score, and, importantly, all species contribute equally at a global scale (i.e. each species' contributions of 1/ri, summed across the ri cells of its ESH, total 1). The rationale for this approach is that distribution size is one of the factors identified by IUCN as contributing to extinction risk. Species with restricted ranges are considered to be at high risk of extinction, and thresholds of absolute distribution size and rates of decline in distribution are incorporated into the Red List criteria. Furthermore, distribution size is closely correlated with population size across species, another factor incorporated into the IUCN Red List criteria. Therefore, the loss of forest in a 5-km square with a high value of s will lead to a greater increase in aggregate extinction risk of forest birds globally than loss of forest in a square with a small value of s. Our value of s can therefore be linked directly to IUCN extinction-risk criteria A, B and C and is comparable between 5-km cells anywhere in the world.

Recent forest loss
In order to assess the level of threat to ecoregions of high conservation importance, we intersected impact scores with deforestation rates during the period 2000–2005 estimated by Hansen et al. These data are available at a spatial resolution of 18-km squares, so each forested 5-km square was assigned a value based on the % loss of forest between 2000 and 2005 of the 18-km square within which it wholly or largely fell. Because of differences in the spatial coverage of forests given by GLC2000 and the areas assessed for forest loss by Hansen et al., data on forest loss in 2000–2005 were not available for all forested or partly forested ecoregions, particularly those comprising small islands.
Consequently, results of analyses relating to forest loss were applicable only to the areas of overlap between GLC2000 and the areas assessed by Hansen et al., which equated to 95% of all forested 5-km cells but excluded many with high impact scores.

Protected areas and Important Bird Areas (IBAs)
In order to assess the degree of overlap between ecoregions with high biological importance and current conservation investment in the form of protected areas, we intersected impact scores with the global distribution of protected areas from the World Database on Protected Areas. We considered only nationally designated protected areas for which a polygon was included in the database. If any part of the cell overlapped a protected area it was considered protected. We also assessed the overlap of impact scores with Important Bird Areas (IBAs). IBAs are sites representing actual or potential conservation management units, their designation taking into account habitat extent, land use and land ownership. Some 5,198 IBAs (50.7%) for which polygons were available at the time of analysis overlapped with forested areas.

In order to identify areas with potential for safeguarding both carbon and biodiversity, we overlaid impact scores for 5-km cells on a resampled global map of carbon storage derived from Kapos et al., which combines estimates of above- and below-ground biomass and soil carbon storage. We report some of our results at the scale of the world's 731 forested or partly forested ecoregions, which form biologically distinctive large-scale spatial units. To avoid averaging out small areas of particularly high importance within ecoregions, we report the maximum impact score of any 5-km cell within each ecoregion, although there was a tight correlation across ecoregions between maximum and mean scores (Figure S5). Ecoregion-level forest bird species richness was calculated cumulatively across each ecoregion, rather than simply averaged across all cells within that ecoregion. Generalised additive models were used to assess the relationship between the maximum impact score recorded in each ecoregion, the ecoregion-level coverage of protected areas, and (for those ecoregions with data available) recent forest loss. Analysis was undertaken at a 5-km cell scale for carbon storage, owing to the variation in carbon storage across ecoregions. Because global relationships between extinction risk and environmental variables might show strong regional variation, analyses were replicated at the level of major biogeographic realms. All spatial data manipulations were undertaken in ArcMap 9.2 (ESRI 2006) and used equal area projections. Statistical analyses were undertaken in R 2.12.1. Means are presented ±1 standard error.

Supporting information legends:
Tif file of the 5-km resolution version of Figure 2a.
Smoothed relationship between impact score in each 5-km cell (vertical axis), estimated bird species richness within the cell (x axis) and the mean log ESH across species in each cell (y axis).
Realm-level scatterplots between impact score and (a) overall forest bird species richness, (b) forest loss, 2000–2005, (c) coverage by the protected areas network and (d) carbon stocks. Fitted lines show GAMs. Data on forest loss and carbon stocks were missing for a number of ecoregions in Oceania, which are omitted from those graphs. Points indicate ecoregion means except in the case of carbon, which is averaged across only forested cells within each ecoregion.
Mean ± SE impact score in 5-km cells within and outside protected areas.
Relationship between mean and maximum values of score in each ecoregion. Excel file of summary data by ecoregions, showing realm, ranking within realm for maximum impact score, maximum and mean impact scores, species richness of forest birds, recent (2000–2005) deforestation and the percentage of forest within protected areas. The 20 IBAs with the highest maximum impact scores and rates of forest loss (% loss, 2000–2005), with protected area status. We thank Mark Balman, Ian May and the many BirdLife staff who contributed to compilation of distribution maps for birds, including Jemma Able, Jez Bird, Gill Bunting, Richard Johnson, Simon Mahood, Phil Martin, Simon Mitchell, Andy Symes, Joe Taylor and the late Dan Omolo. We are grateful to Alison Beresford for help with figure preparation, and for helpful early discussions we thank Leon Bennun and Richard Grimmett. For helpful comments on previous drafts we thank Lincoln Fishpool, Roger Safford, Jörn Scharlemann and Tim Stowe. We are also very grateful for the comments of two anonymous reviewers. Conceived and designed the experiments: GMB PFD SHMB. Analyzed the data: GMB PFD. Contributed reagents/materials/analysis tools: GMB SHMB. Wrote the paper: GMB PFD SHMB. - 1. Margules CR, Pressey RL, Williams PH (2002) Representing biodiversity: data and procedures for identifying priority areas for conservation. Journal of Biosciences 27: 309–326. - 2. Olson DM, Dinerstein E, Wikramanayake ED, Burgess ND, Powell GVN, et al. (2001) Terrestrial ecoregions of the world: a new map of life on Earth. Bioscience 51: 933–938. - 3. Wilson KA, McBride MF, Bode M, Possingham HP (2006) Prioritizing global conservation efforts. Nature 440: 337–340. - 4. Brooks TM, Mittermeier RA, da Fonseca GAB, Gerlach J, Hoffmann M, et al. (2006) Global biodiversity conservation priorities. Science 313: 58–61. - 5. Funk SM, Fa JE (2010) Ecoregion prioritization suggests an armoury not a silver bullet for conservation planning. Plos One 5: e8923. - 6. Murdoch W, Bode M, Hoekstra J, Kareiva P, Polasky S, et al. (2010) Trade-offs in identifying global conservation priority areas. In: Leader-Williams N, Adams WM, Smith RJ, editors. Trade-offs in conservation: deciding what to save. Oxford, UK: Wiley-Blackwell. pp. 35–55. - 7. Mittermeier RA, Myers N, Thomsen JB, da Fonseca GAB, Olivieri S (1998) Biodiversity hotspots and major tropical wilderness areas: Approaches to setting conservation priorities. Conservation Biology 12: 516–520. - 8. Myers N, Mittermeier RA, Mittermeier CG, da Fonseca GAB, Kent J (2000) Biodiversity hotspots for conservation priorities. Nature 403: 853–858. - 9. Ricketts TH, Dinerstein E, Boucher T, Brooks TM, Butchart SHM, et al. (2005) Pinpointing and preventing imminent extinctions. Proceedings of the National Academy of Sciences of the United States of America 102: 18497–18501. - 10. BirdLife International (2011) State of the World's Birds. http://www.birdlife.org/datazone/sowb, Accessed 1 March 2011. - 11. Eken G, Bennun L, Brooks TM, Darwall W, Fishpool LDC, et al. (2004) Key biodiversity areas as site conservation targets. Bioscience 54: 1110–1118. - 12. Brooks TM, Bakarr MI, Boucher T, da Fonseca GAB, Hilton-Taylor C, et al. (2004) Coverage provided by the global protected-area system: is it enough? Bioscience 54: 1081–1091. - 13. Beresford AE, Buchanan GM, Donald PF, Butchart SHM, Fishpool LDC, et al. (2011) Poor overlap between the distribution of Protected Areas and globally threatened birds in Africa. Animal Conservation 14: 99–107. - 14. 
Rodrigues ASL, Andelman SJ, Bakarr MI, Boitani L, Brooks TM, et al. (2004) Effectiveness of the global protected area network in representing species diversity. Nature 428: 640–643. - 15. Schmitt CB, Burgess ND, Coad L, Belokurov A, Besancon C, et al. (2009) Global analysis of the protection status of the world's forests. Biological Conservation 142: 2122–2130. - 16. Bird JP, Buchanan GM, Lees AC, Clay RP, Develey PF, et al. (2011) Integrating spatially explicit habitat projections into extinction risk assessments: a reassessment of Amazonian avifauna incorporating projected deforestation. Diversity and Distributions 17: DOI:https://doi.org/10.1111/j.1472-4642.2011.00843.x. - 17. Kier G, Barthlott W (2001) Measuring and mapping endemism and species richness: a new methodological approach and its application on the flora of Africa. Biodiversity and Conservation 10: 1513–1529. - 18. Hilton-Taylor C, Pollock CM, Chanson JS, Butchart SHM, Oldfield TEE, et al. (2009) State of the world's species. In: Vié J-C, Hilton-Taylor C, Stuart SN, editors. Wildlife in a changing world – an analysis of the 2008 IUCN Red List of Threatened Species. Gland, Switzerland: IUCN. pp. 15–41. - 19. Visconti P, Pressey RL, Bode M, Segan DB (2010) Habitat vulnerability in conservation planning-when it matters and how much. Conservation Letters 3: 404–414. - 20. CBD (2010) COP Decision X/2. Strategic plan for biodiversity 2011–2020. http://www.cbd.int/decision/cop/?id=12268, Accessed 1 March 2011. - 21. Sandker M, Nyame SK, Forster J, Collier N, Shepherd G, et al. (2010) REDD payments as incentive for reducing forest loss. Conservation Letters 3: 114–121. - 22. Scharlemann JPW, Kapos V, Campbell A, Lysenko I, Burgess ND, et al. (2010) Securing tropical forest carbon: the contribution of protected areas to REDD. Oryx 44: 352–357. - 23. Harvey CA, Dickson B, Kormos C (2010) Opportunities for achieving biodiversity conservation through REDD. Conservation Letters 3: 53–61. - 24. Hansen MC, Stehman SV, Potapov PV (2010) Quantification of global gross forest cover loss. Proceedings of the National Academy of Sciences of the United States of America 107: 8650–8655. - 25. Grenyer R, Orme CDL, Jackson SF, Thomas GH, Davies RG, et al. (2006) Global distribution and conservation of rare and threatened vertebrates. Nature 444: 93–96. - 26. Moritz C, Richardson KS, Ferrier S, Monteith GB, Stanisic J, et al. (2001) Biogeographical concordance and efficiency of taxon indicators for establishing conservation priority in a tropical rainforest biota. Proceedings of the Royal Society of London Series B-Biological Sciences 268: 1875–1881. - 27. Lamoreux JF, Morrison JC, Ricketts TH, Olson DM, Dinerstein E, et al. (2006) Global tests of biodiversity concordance and the importance of endemism. Nature 440: 212–214. - 28. Orme CDL, Davies RG, Olson VA, Thomas GH, Ding TS, et al. (2006) Global patterns of geographic range size in birds. PLoS Biology 4: 1276–1283. - 29. Prendergast JR, Quinn RM, Lawton JH, Eversham BC, Gibbons DW (1993) Rare species, the coincidence of diversity hotspots and conservation strategies. Nature 365: 335–337. - 30. Orme CDL, Davies RG, Burgess M, Eigenbrod F, Pickup N, et al. (2005) Global hotspots of species richness are not congruent with endemism or threat. Nature 436: 1016–1019. - 31. Joppa LN, Pfaff A (2009) High and far: biases in the location of Protected Areas. PLoS ONE 4: - 32. Berkes F (2003) Rethinking community-based conservation. Conservation Biology 18: 621–630. - 33. 
Strassburg BBN, Kelly A, Balmford A, Davies RG, Gibbs HK, et al. (2010) Global congruence of carbon storage and biodiversity in terrestrial ecosystems. Conservation Letters 3: 98–105. - 34. BirdLife International, NatureServe (2011) Bird species distribution maps of the world. Cambridge, UK & Arlington, USA. - 35. Buchanan GM, Butchart SHM, Dutson G, Pilgrim JD, Steininger MK, et al. (2008) Using remote sensing to inform conservation status assessment: Estimates of recent deforestation rates on New Britain and the impacts upon endemic birds. Biological Conservation 141: 56–66. - 36. Bartholomé E, Belward AS (2005) GLC2000: a new approach to global land cover mapping from Earth observation data. International Journal of Remote Sensing 26: 1959–1977. - 37. Giri C, Zhu ZL, Reed B (2005) A comparative analysis of the Global Land Cover 2000 and MODIS land cover data sets. Remote Sensing of Environment 94: 123–132. - 38. USGS (2004) Shuttle Radar Topography Mission. Maryland: Global Land Cover Facility, University of Maryland. - 39. Beresford AE, Buchanan GM, Donald PF, Butchart SHM, Fishpool LDC, et al. (2011) Minding the protection gap: estimates of species' range sizes and holes in the protected area network. Animal Conservation 14: 114–116. - 40. Harris G, Pimm SL (2008) Range size and extinction risk in forest birds. Conservation Biology 22: 163–171. - 41. IUCN (2001) IUCN Red List categories and criteria Version 3.1. Gland, Switzerland and Cambridge, UK: IUCN Species Survival Commission. - 42. Blackburn TM, Gaston KJ, Quinn RM, Arnold H, Gregory RD (1997) Of mice and wrens: The relation between abundance and geographic range size in British mammals and birds. Philosophical Transactions of the Royal Society of London Series B-Biological Sciences 352: 419–427. - 43. UNEP I (2009) World Database on Protected Areas (WDPA). Cambridge, UK: UNEP-WCMC. - 44. Jenkins CN, Joppa L (2009) Expansion of the global terrestrial protected area system. Biological Conservation 142: 2166–2174. - 45. Kapos V, Ravilious C, Campbell A, Dickson B, Gibbs H, et al. (2008) Carbon and biodiversity: a demonstration atlas. Cambridge, UK: UNEP-WCMC. - 46. Ruesch A, Gibbs HK (2008) New IPCC Tier-1 Global Biomass Carbon Map for the year 2000. Oak Ridge, Tennessee: Oak Ridge National Laboratory. - 47. Global Soil Data Task Group (2000) Global Gridded Surfaces of Selected Soil Characteristics (IGBP-DIS). Oak Ridge, Tennessee: Oak Ridge National Laboratory Distributed Active Archive Center. - 48. Davies RG, Orme CDL, Olson V, Thomas GH, Ross SG, et al. (2006) Human impacts and the global distribution of extinction risk. Proceedings of the Royal Society B-Biological Sciences 273: 2127–2133. - 49. R Development Core Team (2010) R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing.
The Big Bang Theory Challenged
All the models of physics and mathematics seem to break down when scientists reach the moment of the big bang. So what happened before the great expansion?

The Big Bang is thought of as the start of our universe. In the current model, the universe was a small, dense inferno about 14 billion years ago, until everything as we know it – the stars, galaxies, and planets – exploded outward, expanding exponentially in size at a rate faster than the speed of light. But what happened before the big bang? What led to the singularity in the first place? Long story short: we're not sure, but there are a few ideas.

All the math and science we know about the early universe breaks down as we delve deeper into the first few moments of our universe. What the math tells us is that we had a point of infinite density at the start of the Big Bang, with everything crammed into a tiny point. This seemingly irrational result is an indication that we desperately need new math and physics to overcome the problem – our current understanding and tools are just not enough. We need a model of physics that can handle gravity, along with the universe's other quirks, at these extreme scales. This is where string theory comes in: it might be able to describe the earliest moments of the cosmos. Theoretical physicists have many interesting ideas about what happened before the Big Bang; here are just three of them.

1. The Big Bounce
Some new theoretical research suggests that the Big Bang may not have been a one-time occurrence after all. Instead, it could be the latest iteration of a 'big bang' cycle that repeats forever, or at least more than once. Critics of the theory argue that the rules of entropy do not allow the universe to shrink into an infinitesimally small space and expand again. String theory, however, gives the bounce idea some mathematical grounding: in cyclic models, the universe repeatedly bounces between big bangs and big crunches, going back and forth between expansion and contraction an infinite number of times. One paper posted to the arXiv preprint server in January worked through the mathematics and found that the idea could, in principle, be tested observationally.

2. The Hibernating Universe Theory
This theory states that the universe before the big bang was a small, flat space that sat in a nearly stable state. It stayed in this state until something disrupted it and the universe was forced into expansion. This theory doesn't violate the laws of entropy, but it doesn't resolve the problems with the current model of the universe very well either.

3. There Never Was a Singularity
Inflation theory states that fluctuations in the 'inflaton' field led to a burst of energy in one region of the field, driving a rapid expansion. This exponential growth is predicted to have left evidence in the cosmic microwave background in the form of primordial gravitational waves. Most cosmologists believe this is the best way to explain the universe's low entropy.

In order to test these ideas, theoretical physicists have a lot of work to do. But their work is challenging because, for the most part, they don't even know what they're looking for.
Hydrogen is the most abundant element in the universe. With the "green-energy" craze and talk of powering our future oil-free economy on hydrogen, it has gotten much attention in the last few years. Learning about this potential fuel of the future is important and interesting. Besides, hydrogen is a powerful fuel, and blowing stuff up in the name of science is fun.

Step 1: Electrolysis of Water - An Explanation

2H2O(l) → 2H2(g) + O2(g)

As everyone knows, a water molecule is formed from two elements: two positively charged hydrogen ions and one negatively charged oxygen ion. The water molecule is held together by the electromagnetic attraction between these ions. When electricity is introduced to water through two electrodes, a cathode (negative) and an anode (positive), the ions are attracted to the oppositely charged electrode. Therefore the positively charged hydrogen ions collect at the cathode and the negatively charged oxygen ions collect at the anode.

When these ions come into contact with their respective electrodes they either gain or lose electrons, depending on their ionic charge (in this case the hydrogen gains electrons and the oxygen loses them). In doing so, the ions balance their charges and become real, electrically neutral, bona fide atoms – or, more precisely, molecules of H2 and O2 gas. The reason this system isn't very efficient is that some of the electrical energy is converted into heat during the process. There have been reports of 50%-70% efficiency, but I doubt that is possible in a home environment. Anyway, enough with the boring stuff, let's go make some gas!
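Before you build anything, here is a rough sense of how much gas to expect (this calculation isn't part of the original write-up): Faraday's law of electrolysis relates the charge pushed through the cell to the amount of gas liberated. The sketch below assumes ideal-gas behaviour at about 25 °C and 100% current efficiency, so a real homemade cell will produce noticeably less.

```python
# Rough hydrogen yield estimate from Faraday's law (idealised; a real homemade
# cell loses energy as heat, so expect less gas than this).
FARADAY = 96485          # coulombs per mole of electrons
ELECTRONS_PER_H2 = 2     # 2 H+ + 2 e- -> H2 at the cathode
MOLAR_VOLUME_L = 24.5    # litres per mole of ideal gas at ~25 degC, 1 atm

def hydrogen_litres(current_amps, minutes):
    charge = current_amps * minutes * 60              # total charge in coulombs
    moles_h2 = charge / (ELECTRONS_PER_H2 * FARADAY)  # moles of H2 released
    return moles_h2 * MOLAR_VOLUME_L

# Example: a 2 A supply running for 30 minutes (~0.46 L of hydrogen, ideally)
print(f"{hydrogen_litres(2, 30):.2f} L of H2 (plus half that volume of O2)")
```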
In the Middle Ages, Western Europe and Japan operated under feudal systems. Similarities between Japanese and European feudalism include the division of the classes and the relationships of the people living within each social class.

Feudalism is a political and social structure in which social classes define the lives and work of the people living in a town or country. Classes are structured in such a way to provide little chance of a lower-class peasant rising to become a lord, so there is no mobility between these classes over a person's lifetime. This system developed as the result of a weak central government. In the absence of the king's rule, local landowners gained control by offering protection to the lower classes of people in exchange for allowing them to live and raise food on their land. The landowners performed the duties of the king, which included paying warriors to defend the land, collecting taxes, building infrastructure and settling disputes between people.

Structured Life in Japan
Although separated by thousands of miles, Japan's tiered social structure was similar to the feudal system in Europe. In Japanese feudal society, the shogun military leaders represented the emperor and ruled the people through the feudal lords, who were called daimyo. The daimyo owned tracts of land and allowed peasants to live and work on it. Peasants paid taxes to the daimyo, who then paid samurai warriors to protect the property. However, following Confucian principles and unlike in Europe, Japanese peasants were considered an honored class because they produced all the food everyone needed to survive. In Europe, the peasants gave a portion of their crops to the upper classes in exchange for protection. Both European and Japanese systems excluded members of the clergy from these class structures.

Loyalty and Skill Were Valued
Both systems placed considerable value on loyalty and military skill, drawing upon philosophy and religion to create the framework for society. The Japanese followed the principles of the philosopher Confucius, and the Europeans used the beliefs of the Roman Catholic church. According to these belief systems, daimyo, samurai, nobles and knights all had a moral obligation to protect the peasants. The peasants had a similar obligation to express complete allegiance to their lords, and in Japan a samurai soldier was allowed to kill any peasant who failed to bow in his presence. Japanese samurai and European knights also followed moral codes, called bushido in Japan and chivalry in Europe. These codes required them to show courage in battle and complete loyalty to the lords who paid them.

Prestige and Prosperity
In Japanese and European feudal systems, the warriors enjoyed great prestige and prosperity within their communities. When they went into battle against enemies, both groups rode horses, carried swords and wore armor for protection. They also valued honor more than any other principle, but they had different definitions of the concept. Surrendering to an opponent was so dishonorable to the samurai that suicide in battle was preferred to surrender. In contrast, European knights believed their lives belonged to God and did not have the option of suicide; they had to surrender or die in battle. Similarly, a defeated knight hoped for mercy from his conqueror, but a samurai would rather die than surrender to foes.
FRIDAY, Dec. 7, 2018 (HealthDay News) -- The numbers are alarming. According to U.S. health officials, more than 200,000 children aged 14 or under are treated each year in emergency departments for playground-related injuries, about 10 percent of which involve "TBIs" -- or traumatic brain injuries. Modern playground designs help reduce the risk of injury from falls, but they're not implemented in every playground in the country. So parents need to be vigilant to keep kids safe.

Standards for safer playground surfaces were established in 1999, yet emergency visits for traumatic brain injuries have gone up significantly in the past decade. While some of this rise could stem from more parents realizing the danger of head injuries and seeking medical attention when a child gets hurt, strategies to reduce the number of playground mishaps are needed.

First, know where the greatest dangers are. Monkey bars, playground gyms and swings are the pieces of equipment most frequently associated with cases of traumatic brain injury. Kids between the ages of 5 and 9, and boys in general, have higher injury rates and may need more supervision when using the equipment.

When choosing a playground to take your kids to, pick those that are surfaced with shock-absorbing material, such as hardwood mulch or sand. Make sure your kids use only equipment appropriate for their age, with guardrails to help prevent falls, and that the equipment is in good condition. Also, check the play area for any tripping hazards, like rocks or tree stumps.

Since accidents do happen, also know the signs of a traumatic brain injury.

TBI Signs in Kids:
- Appears confused or dazed.
- Moves clumsily.
- Is slow to respond to questions.
- Loses consciousness, even briefly.
- Shows mood, behavior or personality changes.
- Can't recall what happened before or after the fall.

See how your state's playgrounds stack up from a safety point-of-view by checking out the playground scorecard from the National Program for Playground Safety.
You find them in classrooms across the nation — buckets of pattern blocks; trays of tiles and cubes; and collections of geoboards, tangrams, counters, and spinners. They've been touted as a way to help students learn math more easily. But many teachers still ask: Are manipulatives a fad? How do I fit them into my instruction? How often should I use them? How do I make sure students see them as learning tools, not toys? How can I communicate their value to parents? Are they useful for upper-grade students, too? I've used manipulative materials at all levels for 30 years, and I'm convinced I can't — and shouldn't — teach without them. Here are my strategies: - I talk with students about why manipulatives help them learn math. These discussions are essential for first-time users and useful refreshers to refocus from time to time. I precede discussions by giving children time to explore a manipulative. Then we talk about what students noticed and I introduce the concepts they'll learn with the material. - From day one, I set ground rules for using materials. We talk about the similarities and differences between using manipulatives in class and playing with toys or games. With toys or games, children can make up their own rules. With manipulatives, they are given specific problems and activities. I do make clear, however, that they're free to make discoveries and explore new ideas. - It's also important for students not to interfere with one another. I step in when I hear a howl of protest as a student who needs one more yellow tile takes it from another group's table. Sometimes I open up the discussion to the entire class. These impromptu reminders help keep students on track. - I set up a system for storing materials and familiarize students with it. It's important for students to know where and how to store materials. A clear system makes the materials more accessible. Some teachers designate and label space on bookshelves. Others use zip-top plastic bags and portion materials into quantities useful for pairs or groups. Still others place a supply of each material at students' tables so they're always within reach. - Time for free exploration is worth the investment. Whenever I introduce a new material, I allot at least one math period for this. Teacher demonstrations alone are like eating a papaya in front of the class and expecting children to know how it tastes. Free exploration time also allows students to satisfy their curiosity so they don't become distracted from the assigned tasks. Expect children to see if tiles can fall like dominoes; build tall towers with rods; or construct rockets out of cubes. After children have explored a material, I ask what they've discovered and record their observations on a chart so their classmates can get insights from their ideas. Then I assign a specific task. - For easy reference, I post class charts about manipulative materials. Charts not only send students the message that I value manipulatives, but also help students learn materials' names and how to spell them. In September I post a chart that lists all the materials we'll use during the year. For some materials, I post separate charts to list their shapes and colors. And I leave posted charts of students' discoveries about materials. - Manipulatives are a natural for writing assignments. They provide concrete objects for children to describe. I let parents get their hands on manipulatives, too. It's important for parents to understand why their children are using materials. 
Follow up by having children take home materials and activities to do with their families. (Hint: I wait until students have had some experience.) Marilyn Burns, a household name to elementary teachers across the country, is the creator of Math Solutions in-service programs, offered nationwide. She is also the author of numerous books and articles.
A Summary of Key Events in Mexican History
The history of Mexico is intense, dramatic and fascinating! Unravel a summary of important historic events, from the emergence of early civilizations including the Olmecs, Toltecs and Mayans to the rise and fall of the Aztec Empire and the Spanish Conquest that followed. When you consider the iconic events that occurred within the history of Mexico, you could easily mistake the subject for a Hollywood film script. In the pre-Columbian period, many advanced Mesoamerican civilizations dominated the confines of the map of Mexico. These early communities made major developments and progressed to shape the nature of Mexican history as we know it today. After the rise and fall of the early groups, we can consider the widespread range of consequential events. The arrival of Hernan Cortes and the Spanish Conquest, the War of Independence and the Mexican Revolution stand out as the most memorable and culturally significant developments within the progression of the country. Three of the most prominent and notable early civilizations to be found in the country were the Olmecs (around 1400 BC), the Mayans (around 250 AD) and the Aztecs (around 1320 AD). However, the Toltecs, Zapotecs, Mixtecs and Tlaxcalteca people also all had a major part to play in the early foundations of the history of Mexico. The Olmecs dominated the landscape for a transitional period between the years of 1400-400 BC. The early Olmec population influenced developments with their cultural style, religious traditions and early forms of architecture. It is thought that the first evidence of recorded writing, stone carvings and artistic remains in the country dates back to the time of the Olmecs. The Mayans excelled in mathematics. It is said that the Maya calendar which they devised was still in use until the beginning of this century. The early cultural communities practiced strong religious customs. Between the 9th and 12th centuries, the Mayans would progress to develop and build El Castillo in Chichen Itza. They built many iconic structures in Quintana Roo and the Yucatan Peninsula. You can still visit the Mayan ruins of Tulum, Coba and Muyil today. Toltec influences and architectural styles have been found at many Mayan archaeological sites. Archaeologists can only speculate as to the importance and relevance of this correlation. The Toltec civilization was at its peak between the years of 800-1000 AD. The Toltecs were the rulers and inhabitants of the famous site of Tula in the Hidalgo region of Mexico. The Aztecs are one of the most famous and regularly referenced civilizations throughout the world. The sheer speed and power with which the Aztec Empire evolved accounts in part for their notoriety. The military skill and planning devised by the Aztecs combined to develop their empire from coast to coast. The Aztecs introduced a taxation scheme on goods and trading to increase their ever-expanding power. The history of Mexico records, however, that both the Maya and the Aztecs are remembered for their extreme religious beliefs and practices. There was a common belief that in order for the universe to function continually, human sacrifice and blood were required. The Aztecs would normally acquire their victims through battle and then offer them up through sacrifice. The most commonly spoken language in Mexico today is Spanish, and its inhabitants are predominantly Catholic Christians. That is a huge transition from the time of the early religious practices of the Aztecs.
The course of Mexican history changed forever with the arrival of the Spanish and the subsequent Spanish Conquest. At the start of the 16th century, Spanish expeditions began to arrive and descend upon the map of Mexico. It is thought that they initially explored the Yucatan Peninsula. Hernan Cortes was in charge of many of the early expeditions. Cortes developed many enemies, some even from his own fleet, and he made a series of tactical and military decisions to survive. He formed alliances with Mesoamerican communities, most notably the Tlaxcalteca, who were rivals of the powerful Aztec Empire. Cortes used these alliances to negotiate himself into a place of power and influence. Significant battles and prolonged warfare followed. The struggle that became the Mexican War of Independence lasted for eleven years. Between the years of 1810 and 1821, the war was waged to establish independence from Spain. In 1824, after a few more years of revolt and protest, Guadalupe Victoria became the first Mexican president. Things were not easy for the president, however, because large sections of the population initially ignored the laws and measures that were put in place. From this point on, various governments were formed but would subsequently fail. The main groups to hold power were the Conservatives and the Liberals. Conflict with the United States dominated Mexican history from around 1836. One of the key elements in the Mexican-American conflict was the US-Mexico border: the two countries disagreed over where the border lay and who owned the land around it. The events of the Mexican Revolution and its aftermath lasted for around twenty years, between 1910 and 1929. It is thought that the conflict and disagreements started when the 80-year-old president Porfirio Diaz held an election in 1910. His main rival in the election was Francisco Madero, who was very popular with voters at the time. Diaz ultimately won the election by a huge margin. Confusion followed, as supporters of the popular Madero widely suggested that the unexpected election results could have been rigged. People rioted and protested against what they believed was a corrupt government. Substantial conflicts and political unrest commenced as various power struggles took place during the following twenty-year period. So, as I said at the start of this page, some of these events could easily have come directly from a film script. They are, however, significant milestones in the development of modern-day culture and remain important events in the history of Mexico.
Bromine Explosions Being Driven By Climate Change
According to a new study, Arctic sea ice reductions may be intensifying the chemical release of bromine into the atmosphere, causing ground-level ozone depletion and the deposit of mercury in the Arctic. A team of scientists combined data from six NASA, European Space Agency and Canadian Space Agency satellites to develop a model of how air moves in the atmosphere to link Arctic sea ice changes to bromine explosions over the Beaufort Sea. “Shrinking summer sea ice has drawn much attention to exploiting Arctic resources and improving maritime trading routes,” Son Nghiem of NASA’s Jet Propulsion Laboratory said in a press release. “But the change in sea ice composition also has impacts on the environment. Changing conditions in the Arctic might increase bromine explosions in the future.” Bromine explosions occur when salt in the sea ice, frigid temperatures and sunlight combine. Under these conditions, the salty ice releases bromine into the air and starts a cascade of chemical reactions called “bromine explosions.” The scientists wanted to find out whether the explosions observed two decades ago in the Canadian Arctic occurred in the troposphere or higher in the stratosphere. The team used the topography of mountain ranges in Alaska and Canada as a “ruler” to measure the altitude at which the explosions took place. Satellites detected increased concentrations of bromine in the spring of 2008, which were associated with a decrease of gaseous mercury and ozone. The scientists verified the satellite observations with field measurements, then used an atmospheric model to study how the wind transported the bromine plumes across the Arctic. The model showed the Alaskan Brooks Range and the Canadian Richardson and Mackenzie mountains stopped bromine from moving into Alaska’s interior. The researchers determined that since these mountains are lower than 6,560 feet, the bromine explosions were confined to the lower troposphere. “If the bromine explosion had been in the stratosphere, 5 miles or higher above the ground, the mountains would not have been able to stop it and the bromine would have been transported inland,” Nghiem said in a press release. Once the researchers determined that bromine explosions occur in the lowest level of the atmosphere, they were able to relate their origin to sources on the surface. Nghiem said if sea ice continues to be dominated by younger, saltier ice, and Arctic extreme cold spells occur more often, bromine explosions are likely to increase in the future. The research was published in the Journal of Geophysical Research: Atmospheres. Image 1: Bromine explosion on March 13, 2008 across the western Northwest Territories in Canada looking toward the Mackenzie Mountains at the horizon, which prevented the bromine from crossing over into Alaska. The bromine explosion is depicted in the foreground by the red-orange areas, while the green shades at high altitudes on the mountains represent areas where there was no increase in bromine. Image credit: NASA/JPL-Caltech/University of Bremen Image 2: Bromine explosion on March 13, 2008 across the Alaskan North Slope looking south toward the Brooks Range at the horizon, which blocked the bromine from going further south into the Alaskan interior. The bromine explosion is depicted in the foreground by the red-orange areas, while the green shades at high altitudes on the Brooks Range represent areas where there was no increase in bromine.
Image credit: NASA/JPL-Caltech/University of Bremen Image 3: The upper panel shows a bromine explosion observed by scientists at the University of Bremen on March 14, 2008 over Alaska and the Beaufort Sea. The lower panel shows sea ice cover at the time, as measured by NASA’s QuikScat spacecraft. Image credit: NASA-JPL/Caltech/University of Bremen/University of Washington
(PhysOrg.com) -- Collective, or coordinated, behavior is routine in liquids, where waves can occur as atoms act together. In a milliliter (mL) of liquid water, 10²² molecules bob around, colliding. When a breeze passes by, waves can form across the surface. These waves are not present in the same volume of air, where only 10¹⁹ gas molecules randomly move about. Do such waves occur in plasmas, the most prevalent state of matter in the universe? Like gases, they are made of particles bumping around in a shapeless glob. However, plasma densities can range from 10²⁶ atoms/mL all the way down to much less than 1 atom/mL. Wave-like behavior that occurs even at such minuscule densities is one key feature of plasmas. Unlike in liquids, these "waves" happen in plasmas because the particles are charged, thus exerting strong forces on each other, even at large distances. But not all seas of charged particles are plasmas. What makes a plasma a plasma is the organized behavior of the charged particles. Plasmas are found inside the sun, gas-giant planets like Jupiter, the aurora borealis, and those compact fluorescent lights we see everywhere. These plasmas are all hot, with the sun's plasma reaching a temperature of 10⁷ Kelvin. Plasmas can also exist at the other extreme in temperature, near absolute zero. Recently, JQI researchers devised a way to directly probe the collective motion of electrons found in these ultracold plasmas (UCPs).* In this experiment, researchers create ultracold plasmas using laser-cooled atoms held in magneto-optical traps (MOTs). Xenon atoms are cooled to around ten millionths of a degree above absolute zero. The ultracold cloud of neutral atoms is then delicately blasted with an energetic laser pulse. The pulse of light strips electrons from the neutral atoms, leaving behind positively charged ions. The energy of the pulse is chosen to ensure a gentle ionizing process, thus preserving the cloud's cold temperature. The liberated electrons begin to migrate away from the ions. Some get away, but it is their freedom from the cloud that sabotages the escape of the remaining electrons. The ions are heavy and cold, and thus move slowly. Because more electrons than ions have left the cloud, there is an overall electrical charge imbalance. The positive ions left behind begin to exert an attractive force on the outwardly diffusing negative electrons. These electrons cannot escape and swarm back around the ions, forming a UCP with a density that ranges between 10⁵ and 10¹⁰ atoms/mL. Although plasmas are commonplace in nature, studying them can be challenging. Like other ultracold atomic physics experiments that are analogs for condensed matter systems, UCPs can be a platform for investigating plasma physics. While not all plasmas exhibit the same universal properties, UCPs share characteristics with other important plasmas. The physics of UCPs, for example, overlaps with that of laser-created plasmas, such as those generated in fusion-reaction research at the National Ignition Facility. This facility, located at Lawrence Livermore National Laboratory, is home to the most powerful laser system in the world. During laser fusion experiments, plasmas form with densities and temperatures comparable to or exceeding those of the sun's plasma. Scientists also expect that UCPs share dynamics with astrophysical systems like globular star clusters. Ultracold plasmas are isolated in a vacuum and relatively simple to create. Yet they are small and fragile, lasting only hundreds of microseconds.
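Before turning to how these properties are measured, a rough back-of-the-envelope sketch (my own illustration, not part of the original article) puts the density figures above in context. It assumes the standard cold-electron plasma frequency formula f_p = (1/2π)·√(n·e²/(ε₀·m_e)) and simply evaluates it at the endpoints of the quoted ultracold-plasma density range; the constants and the frequency scale are textbook values, not measurements from the JQI experiment.

```python
import math

# Physical constants (SI units)
E_CHARGE = 1.602e-19   # electron charge, C
E_MASS = 9.109e-31     # electron mass, kg
EPSILON_0 = 8.854e-12  # vacuum permittivity, F/m

def plasma_frequency_hz(density_per_ml):
    """Cold-electron plasma frequency for a given electron density.

    density_per_ml: electrons per milliliter (i.e., per cm^3).
    Uses f_p = (1 / 2*pi) * sqrt(n e^2 / (eps0 m_e)) and returns hertz.
    """
    n_per_m3 = density_per_ml * 1e6  # convert cm^-3 to m^-3
    omega_p = math.sqrt(n_per_m3 * E_CHARGE**2 / (EPSILON_0 * E_MASS))
    return omega_p / (2 * math.pi)

# Endpoints of the ultracold-plasma density range quoted above
for n in (1e5, 1e10):
    print(f"n = {n:.0e} cm^-3  ->  f_p ~ {plasma_frequency_hz(n) / 1e6:.1f} MHz")
```

For those densities the estimate lands in the megahertz-to-sub-gigahertz range, which is consistent with the radiofrequency probing described next.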
Scientists typically measure their properties indirectly, by looking at electrons that depart the plasma. JQI researcher Kevin Twedt and Fellow Steven Rolston have recently devised a simple way to directly probe the electrons in these plasmas.* An important collective behavior of the plasma comes in the form of electron oscillations that occur at a specific resonant frequency. These ultracold plasmas as a whole are mostly neutral (the ion charge cancels the electron charge), but the countless electrons within this matter conduct electricity. In other words, plasmas respond to electric fields. The electrons will move in concert when subjected to an electric field that oscillates at particular frequencies. In this experiment, the xenon plasma is situated (in free space) between two metal grids. The scientists apply an oscillating radiofrequency field to one of the grids. If there were no plasma, the oscillating field would produce no change in the signal measured at the other grid. When the plasma is present, the oscillating field excites the collective electron motion, which is like creating a wave. This electron motion induces a small current in the opposing grid. The tiny current can be extracted using sensitive electronics. In this way the researchers can probe the collective behavior of the plasma while it is expanding and changing. The measurements reveal how fast expansion dramatically changes the resonant frequency and also how the electrons spatially arrange themselves in the plasma. Research in UCPs has also led to other applications outside of plasma physics. Scientists at the National Institute of Standards and Technology (NIST) create a beam of charged particles by ionizing neutral atoms held in a MOT. This beam, in turn, could be used to improve nanofabrication and imaging of biological systems. UCPs also offer a way to study Rydberg atoms. These atoms have an outer electron that is so excited it is nearly removed entirely from the atom. Within the plasma, electrons and ions can recombine to form Rydberg atoms. Alternatively, scientists can purposely create Rydberg atoms by tuning the photoionization laser just below the threshold necessary for making a plasma. The electrons are barely attached to their parent atoms. These systems will spontaneously decay into a plasma as the atoms lose their tenuously held outer electrons. Kevin Twedt explains, "Understanding the back and forth between Rydberg atoms and a UCP may help researchers trying to use Rydberg atoms in quantum information and for the study of quantum many-body physics. These processes also highlight the fascinating aspects of a system literally on the border between atomic physics, with electrons and ions just barely bound together as Rydberg atoms, and plasma physics, with electrons and ions just barely separated in an ultracold plasma." More information: *Electronic detection of collective modes of an ultracold plasma, K. A. Twedt and S. L. Rolston, forthcoming in Physical Review Letters. Ultracold neutral plasmas, Thomas C. Killian and Steven L. Rolston, Physics Today 63, 46 (2010). Trend: Ultracold neutral plasmas, Steven L. Rolston, Physics 1, 2 (2008).
Common Name: Box Turtle, Box tortoise – The retraction of the legs and head into the box-like fortress of the shell yields the descriptive nomenclature. Scientific Name: Terrapene carolina – In the Algonquian Indian language group, the name for turtle was a homophone of the Anglicized word terrapin, which, in addition to providing the genus name for the box turtle, is the common name for any of several edible turtles living in fresh or brackish water (the Delaware or Lenape Indians called the turtle a torope). The species name indicates that the box turtle was first assigned a taxonomic designation based on observations in the Carolina colony. Potpourri: The ponderous, virtually indestructible box turtle is the epitome of slowness and steadfastness; a metaphor for the self-sufficiency of carrying one’s house on one’s back. The shell, formed of the upper domed section called the carapace coupled by bony side bridges to the flat lower section called the plastron, is a marvel of evolutionary complexity in providing a bastion against predators. The success of the box design is evident in survival statistics; absent accident or disease, box turtles live an average of 50 years while not an insignificant number achieve the century mark, a distinction held by few other animals, and one that denotes longevity in Homo sapiens. The shell is the sine qua non of the turtle; one must have one to be one. It is at the same time a prison and a Palladium, affording protection at the expense of freedom. The turtle cannot escape from its shell in an act of reptilian contortion, as the shell is literally its skin and bones. The dorsal domed carapace is formed by a fusion of the spinal vertebrae and the costal rib bones and the ventral plastron by the fusion of the ribs and the clavicles, the whole encasing the shoulder and pelvic girdle; altogether about 60 bones comprise the shell, a virtual if not actual exoskeleton. The outer portion of the bone-shell structure is covered with large horny scales called scutes (from the Latin scutum meaning shield) that are essentially transmogrified epidermal skin segments. They are made from the same proteinaceous fiber called keratin that is the primary component of the scales of their brethren reptilian snakes and lizards. The shell is the refuge of last resort; it is the water turtles that can totally enclose the head, arms and legs, whereas the land tortoises cannot. The box turtle is, then, an evolved water turtle and not a uniquely talented tortoise. This is consistent with the fossil record of box turtles, which date from about 15 million years ago and have taxonomic similitude with aquatic species of the same Miocene epoch. Box turtles belong to the same family as the aquatic turtles, including the painted turtle and the diamondback terrapin. Turtles are monophyletic; their evolutionary history from a single common ancestor has never been seriously questioned. Their highly adapted physiology is testimony to a linear successor progression. The only thing that seems to be lacking in agreement is their scientific name, which can be found as either Chelonia (the Greek word for tortoise is chelys), or more frequently as Testudinata (the Latin word for tortoise is testudo). The testudines have their origins in the Triassic Period 225 million years ago when the first dinosaurs appear in the fossil record, though the precise reptilian ancestry is still subject to legitimate taxonomic conjecture.
Turtles have no cranial fenestration (holes in the temple area of the skull) and are hence thought to have evolved from reptiles of the Carboniferous Period called anapsids that also lacked the temporal penetrations. An origin among the diapsids, which include snakes and lizards, is a hypothesis that has become more widely accepted due in part to molecular biology. While the diapsids, as their name implies, have two holes in their skulls in the area of their temples, it is thought that the testudines may have reverted to the anapsid-type skull through an involution, or retrograde evolution. The life cycle of the box turtle is not particularly robust: there is a paucity of offspring combined with essentially no parental oversight of the hatchling turtles beyond the selection of a concealed nest location. Courtship between male and female box turtles takes place early in the spring and involves some foreplay in the form of the male nipping at the female’s shell in the course of circling and nudging. Once there is agreement as to intent, the male mounts the female by gripping the back of her shell with his claws so as to extend slightly beyond the vertical. This, of course, is necessary because the shell is not configured to facilitate intercourse. The male box turtle can be distinguished from the female box turtle by the slight concavity of the lower shell plastron, which is an adaptation to facilitate the mounting of the female. The female plastron is flat. Other aspects of sexual dimorphism are the color of the eyes, the length of the tail and the shape of the shell. Males have bright orange or red eyes (the females have light orange eyes), wider and longer tails and flatter shells. The photograph depicts a male box turtle (right) in the early stages of courtship with a female (left). In that the sexual act is cumbersome at best and nearly impossible at worst, evolution has provided an answer: the female can store sperm that is still viable for egg fertilization for up to four years after mating. Between May and July, the female will excavate a flask-shaped hole in sandy soil and lay a relatively small number of eggs (estimates range from 3 to 11), which are thereafter on their own – parenting is not a reptilian attribute. One of the more interesting observations that have recently been made is that the sex of the baby turtles is not determined by the genetics of meiosis, but by environmental factors of the nest such as temperature and humidity. It is not yet known why (or how) environmental sex selection occurs. After an incubation period of about 75 days, one-inch-long hatchlings emerge to face an unforgiving world of predation to which the vast majority will succumb. It is estimated that only one in a hundred box turtles reach the sexually mature age of 10 years with a fully formed and protective 6-inch shell. One can determine the age of a juvenile box turtle by counting the number of annual growth rings on the epidermal scutes that cover the shell, though this is no longer possible after growth slows to near stasis at the age of 20 years. As the adult box turtle is not constrained to a protective habitat on account of its keratin aegis, it can and will live in a wide variety of environments. Mesic woodlands near a source of water are the preferred location, as heating and cooling alternatives are available – a perennial concern for a cold-blooded reptile.
In the case of the encased turtle, keeping cool is generally of greater concern, except in the winter months when hibernation beneath the frost line is practiced to maintain body temperature. The matter of cooling in the summer has led to some behavioral anomalies; box turtles will spread saliva on their legs and their head and will urinate on their back legs to take advantage of the temperature drop associated with the latent heat of evaporation, an ersatz sweat. However, there are subspecies of the box turtle that have adapted to live in grasslands and there is even a desert box turtle that lives in semiarid conditions. Once a habitat is chosen by an adult box turtle, it will generally not venture outside a circular area with an approximate 100 meter radius. If relocated by human intervention, they are inexorably drawn to their natal grounds without regard to any obstacles like roads that must be traversed, which is one of the many reasons why turtles should never be taken home to your garden. This also answers the question of what you should do if you see a turtle crossing the road – you should move it to the far side and not move it back to the start, as it will just try again. Habitat diversity is supported by the dietary practices of box turtles; they are omnivorous and will eat just about anything that they run into. While the hatchling turtles are thought to be primarily carnivorous in their mostly insect and worm diet, adults are primarily herbivorous and subsist on leaves, grass, and fruits. Interestingly, adult box turtles are also mycophagists: they eat fungi. It is reported in several sources that they consume mushrooms toxic to humans without apparent distress, which is not too surprising as mushroom toxins are very species-specific. The physiology of the box turtle is also strongly affected by the bastion afforded by the carapace; they have never faced the evolutionary selection pressure to improve on their virtually non-existent defenses or their somewhat meager sensory perceptions to enhance their survival. Unlike most other reptiles, box turtles have no teeth – just a beak with strong jaws. While this is adequate for vegetation and soft-bodied animal prey, it would hardly afford a defense. Their feet have claws for digging and for holding on while mating, but these are again functional and not defensive. Most other reptiles use their tongues as either a sensor or a weapon, or both; the tongue of the box turtle cannot be extended. The auditory capability of turtles is limited to vibrations without significant directionality, as they essentially have no external ears. To some extent the deficient hearing is offset by what is characterized as their superior sight and olfactory senses. Breathing inside the rigid external ribcage of the shell requires some unique adaptations, as the diaphragmatic mechanism of most other animals will not work. The lungs of the box turtle are manually inflated by muscles in the leg region and manually deflated by muscles on the top and bottom of the lung. As this is not all that efficient, the cloaca, which is the digestive, urinary and reproductive opening in the tail, can absorb oxygen directly in some species. So, it is a tradeoff; turtles who survive the gauntlet run of early life to achieve the mobile fortress of adulthood into which they can withdraw for protection are set for a long life of wandering around in the forest, eating, sleeping and mating as the seasons unfold for years on end. Not a bad evolutionary dead-end.
The victory of the turtle over the hare according to the moral probity of Aesop may be allegorical in terms of speed, but it is categorical in terms of longevity.
The new system, called a thermal resonator, could enable continuous power for remote sensing systems without using batteries. The thermal resonator does not need direct sunlight and so is unaffected by short-term changes in cloud cover, wind conditions, or other environmental conditions, and can be located in the shadow under a solar panel. The team needed a material that is optimised for thermal effusivity -- how readily the material can draw heat from its surroundings or release it. This balances thermal conduction and capacity, which tend to be contradictory: if one is high, the other tends to be low. Ceramics, for example, have high thermal capacity but low conduction. The basic structure is a metal foam, made of copper or nickel, which is then coated with a layer of graphene to provide even greater thermal conductivity. Then, the foam is infused with a wax called octadecane, a phase-change material, which changes between solid and liquid within a particular range of temperatures chosen for a given application. Essentially, one side of the device captures heat, which then slowly radiates through to the other side. One side always lags behind the other as the system tries to reach equilibrium. This perpetual difference between the two sides can then be harvested through conventional thermoelectrics. "We basically invented this concept out of whole cloth," said Michael Strano, Carbon P. Dubbs Professor of Chemical Engineering at MIT. "It's something that can sit on a desk and generate energy out of what seems like nothing. We are surrounded by temperature fluctuations of all different frequencies all of the time. These are an untapped source of energy." This combination of the three materials makes it the highest thermal effusivity material in the literature to date, he says. A sample of the material made to test the concept produced 1.3 mW of power at 350 mV from a 10°C temperature difference between night and day. This outperforms an identically sized, commercial pyroelectric material --
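To make the effusivity trade-off described above concrete, here is an illustrative calculation (my own sketch, not from the article) using the usual definition e = √(k·ρ·c_p), where k is thermal conductivity, ρ is density and c_p is specific heat capacity. The property values below are rough, order-of-magnitude textbook figures chosen only to show why pairing a highly conductive metal foam with a heat-storing phase-change wax is attractive; they are assumptions, not measurements from the MIT device.

```python
import math

def thermal_effusivity(k, rho, cp):
    """Thermal effusivity e = sqrt(k * rho * cp), in W*s^0.5/(m^2*K).

    k   : thermal conductivity, W/(m*K)
    rho : density, kg/m^3
    cp  : specific heat capacity, J/(kg*K)
    """
    return math.sqrt(k * rho * cp)

# Rough, illustrative textbook values (assumptions, not data from the article)
materials = {
    "copper (high conduction, modest capacity)": (400.0, 8960.0, 385.0),
    "octadecane wax (low conduction, large heat storage)": (0.35, 800.0, 2200.0),
    "generic dense ceramic (in between)": (3.0, 3000.0, 900.0),
}

for name, (k, rho, cp) in materials.items():
    e = thermal_effusivity(k, rho, cp)
    print(f"{name}: e ~ {e:,.0f} W*s^0.5/(m^2*K)")
```

The point of the composite is to push both factors up at once: the graphene-coated metal foam supplies the conduction, while the wax's solid-liquid phase change acts as a very large effective heat capacity over the temperature swing of interest (the simple c_p term above does not even capture that latent-heat contribution).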
On May 11, 1792, Robert Gray, the first American to circumnavigate the world (1787-1790), sailed the Columbia Rediviva into the Columbia River, the first documented ship to anchor in the river’s broad estuary. He named the river “Columbia’s river” after his ship and drew a sketch map of the river mouth. With Gray’s entry into the river, the United States had an arguable claim to discovery in the deliberations with Great Britain that led to the Oregon Treaty of 1846. Even though Gray’s accomplishment played no material role in the consummation of the treaty, he nonetheless became among the most famous Americans to establish a national claim on Oregon Country. Born on May 17, 1755, in Tiverton, Rhode Island, Gray apparently went to sea at a young age. His family claimed that he served in the Continental Navy, although if he did, it is undocumented. He became a successful commercial mariner during the 1780s before Boston investors chose him to captain the Lady Washington in a fur-trading voyage to the Northwest Coast in 1787. Gray returned to Boston as captain of the Washington’s companion ship the Columbia and sailed to the Northwest in the ship on a second trading voyage in 1790. Gray was a no-nonsense trader who forcefully pursued acquiring pelagic furs from Natives, often driving hard bargains and entertaining little equivocation. On two occasions, one in present-day Tillamook Bay (his men named it Murder’s Harbor) and another in present-day Gray’s Harbor, Washington, Gray fired on recalcitrant Native traders, killing several. On May 9, 1792, Fifth Mate John Boit recorded the Gray’s Harbor incident in his log: “I am sorry we was oblidg’d to kill the poor Divells, but it cou’d not with safety be avoided.” As a mariner, Gray displayed an impatience that led him to sail too close to dangerous coastlines, a practice that resulted in damage to the ships he captained. But it was that aggressive attitude that led to his sailing boldly into the Columbia River in May 1792. As a commercial mariner, Gray played no role as an emissary for his country, so he willingly passed on his sketch chart of the Columbia to British Capt. George Vancouver, who had told Gray that he did not believe the river existed. Realizing his error, in October Vancouver sent his tender ship, the Chatham, captained by Lt. William Broughton, into the Columbia and ordered a hundred-mile-long survey of the lower river. The voyage produced a detailed map, published in 1798, that gave Britain legitimate claim to the river. After his return to Boston in 1793, Gray continued in merchant shipping and married Martha Howland Atkins in 1794; they had four daughters who survived to adulthood and one son who died by age seven. During the Quasi-War with France in 1798-1800, Gray commanded the Lucy, an American privateer. Most of his commercial voyages from Boston took him to Atlantic coastal ports and the Caribbean. Although it is undocumented, he likely died in 1806 from yellow fever in South Carolina. There are no documented images of Gray, although the one often identified as his portrait may be a reasonable likeness, save that it does not reveal that Gray had only one eye and wore a patch during most of his career. His name is memorialized in Gray’s Harbor and Grays River in Washington State and a middle school in Portland.
U.S. imperialism at the turn of the century, namely the annexation of Puerto Rico, the Philippines, and Hawaii, and the Platt Amendment, was not a violation of past U.S. policy or precedent; however, it was a violation of past U.S. principle. There had been many precedents of imperialism before the turn of the century. However, some underlying U.S. principles, including equality, were definitely ignored and violated by imperialism. The Monroe Doctrine, issued December 2, 1823, was a piece of U.S. foreign policy that U.S. imperialism did not violate: “But with the Governments [in the Western Hemisphere] who have declared their independence and maintained it, and whose independence we have, on great consideration and on just principles, acknowledged, we could not view any interposition for the purpose of oppressing them, or controlling in any other manner their destiny, by any European power in any other light than as the manifestation of an unfriendly disposition toward the United States” (James Monroe). The doctrine says that Europe should not interfere with countries in the Western Hemisphere that had claimed their independence. However, first of all, at the time neither the Philippines nor Hawaii was considered to be in the Western Hemisphere. Second of all, the United States was not part of Europe – and therefore, according to past policy, had no obligation to leave any of the regions in question alone – and should have been free to annex them. In fact, the Monroe Doctrine paved the road for U.S. imperialism by making sure that Europe would stay out of Latin American business, enabling the U.S. to potentially expand and take over these territories. Although the doctrine did not give the U.S. permission for imperialism, imperialism was consistent with the doctrine. Similarly, U.S. precedent was very imperialistic by the early 1900s. A prime example of this was the Native Americans being pushed off of their territory when the U.S. was a new country, still expanding, and did not yet extend from coast to coast. In 1830, Congress passed the Indian Removal Act, which called for the government to negotiate treaties to move the Native Americans off of American land and onto Indian Territory, in Oklahoma. Many Cherokees, led by John Ross, opposed the removal treaty. In 1838 General Winfield Scott and his federal troops moved in and rounded up over 16,000 Cherokees. Over the fall and winter of that year and the next, the Cherokees were forced to set out on a journey west, on which one fourth of them died due to the harsh climate and a lack of adequate supplies. This march became known as the Trail of Tears. Without the consent of the Native Americans, the United States federal government forcibly took over the land, and afterwards tried to force U.S. culture upon the Indians using the Dawes Act (which completely failed) – America tried to Americanize them. This, indeed, is the very definition of imperialism, and so it cannot be said that the imperialism at the end of the nineteenth century violated precedent – it was, in fact, in accordance with precedent. Finally, on the other end of the spectrum, is principle. Principle is one thing that imperialism definitely did violate – and most of all, it violated the most important underlying U.S. principles of all – those of the Declaration of Independence.
Among the most famous lines of this important document is: “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness. — That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, — That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government” (the Declaration of Independence). If the Philippines were taken over, nobody honestly thought that the islands would eventually become a state – and therefore, the Filipinos would never be equal to American citizens. This in itself violates the principle that governments derive their powers from the consent of the governed – the Filipinos in no way wanted to be governed and therefore, according to the Declaration, the United States had no right to govern them. Imperialism clearly violated this document, which outlines many U.S. principles. Even though imperialism violated past principles such as those of the Declaration of Independence, no laws were being broken; both policy, such as the Monroe Doctrine, and precedent, such as the removal of the Native Americans, were being followed at the turn of the century – and thus, imperialism continued.
More than 40 years before women achieved the vote in the U.S. in 1920, Emily Howland (1827-1929), a Quaker reformer, educator and philanthropist, was petitioning the New York legislature to act equitably. In an 1876 letter just added to our collections, Howland reminds the Honorable A.S. Russell that under the Constitution as written, the legislature has the power to give the women of New York the right to vote. To encourage him, she suggests that grateful women would vote for those who empower them, and, conversely, refers to the historical outcome of “taxation without representation”: peril to a government that disallows the vote to women. By the time this letter was written in 1876, Howland had already accomplished a great deal — as a teacher in a school for African American girls, as an organizer of the Freedom Village for refugee slaves during the Civil War, and as an advocate for women’s rights alongside Susan B. Anthony; she would later also become a champion of world peace. The letter takes its place alongside Haverford’s other Howland materials, including the Emily Howland Papers, which illustrate her interest in African American education.
EMPHASIS / EMPHATIC FORMS (updated 06/02/2013)
Ways to emphasize a situation or element (forum.wordreference.com):
- One must highlight / One must emphasize / One must underline / One must bring attention to / I must draw your attention to
- in particular... / especially
Emphatic forms, sometimes called the emphatic tenses or emphatic mood (Using English), are made with the auxiliary verb do in the present or past tense + the base form of the verb: "He doesn't work very hard." "I don't agree with you - he does work very hard." In the second sentence, the speaker uses the emphatic form does work as a way of contradicting the first speaker. "The dummy auxiliary do is used for emphasis in positive statements: I do like this beer!"
Emphatic tenses (iscribe.org): "I do take medicine for an allergy." (present emphatic) "I did take medicine for an allergy." (past emphatic) "I will take allergy medicine." (future emphatic) "The emphatic form of the verb infers the speaker's degree of determination. The construction of the verb changes when the emphatic form is used. However, the sense of time does not change when the emphatic verb form is used in place of the less emphatic form. The emphatic tense is used in a popular ceremony. Question: Do you take this (person) to be your lawful wedded (spouse)? Answer: I do. (Emphatically, I do!)" You shall return.
"In spoken English, words can be emphasized by being pronounced with a heavier stress than usual. This type of emphasis is usually indicated in written English by means of italics or underlining."
"Emphatic statements are often used in conversation; for instance, when one speaker is contradicting another." e.g. "I don't believe he works very hard." "Yes, he does work hard." Emphatic statements can be formed in all of the present and past tenses. "Sometimes it is desired to emphasize a negative statement containing the word not."
Emphasizers (grammar.ccc.commnet.edu): I really don't believe him. He literally wrecked his mother's car. She simply ignored me. They're going to be late, for sure.
Inversion of the verb after certain adverbs (visualesl.com): "Such adverbs (adverb phrases) can be placed first in a sentence or clause for emphasis. They are then followed by the interrogative (i.e. inverted) form of the verb." See the examples.
“To transform our culture by creating a world where science and technology are celebrated and where young people dream of becoming science and technology leaders.” -Dean Kamen
FIRST (For Inspiration and Recognition of Science and Technology) was founded by inventor Dean Kamen in 1989 with the mission of inspiring young students to become science and technology leaders. FIRST does this by engaging students in mentor-based programs that build science, engineering and technology skills and foster self-confidence, communication and leadership skills. Two important values behind FIRST are Gracious Professionalism and Coopertition. Gracious Professionalism is a way of competing while still treating others with respect and kindness. Coopertition is a term coined from cooperation and competition; it encourages sportsmanship and encourages teams to help each other even in the face of competition. FIRST inspires students of all ages in the STEM fields. It consists of four main programs:
- Junior FIRST LEGO League (Jr. FLL, ages 6-9): Students in teams of 2-6 are given a challenge and must develop a proposed solution. Students then build a Lego model of their solution and present their findings through a poster. More information about Jr. FLL can be found here.
- FIRST LEGO League (FLL, ages 9-14): Students in teams of 2-10 are given a challenge and must build and program a Lego robot to complete different missions. Students also must research and present their solutions. More information about FLL can be found here.
- FIRST Tech Challenge (FTC, Grades 7-12): Students are given a game challenge and must design, build, and program a robot that will compete with other teams. FTC aims to provide the same challenge as FRC (below), but in a more affordable format. More information about FTC can be found here.
- FIRST Robotics Competition (FRC, Grades 9-12): Students work with mentors to design, build, and program a 120-pound robot, in six weeks, that is built to meet the demands of a competition game challenge. More information about FRC can be found here.
Dehydration in children
What is dehydration? Dehydration is a decrease in your body's fluid levels, which occurs when you lose more fluid than you take in. Without the appropriate levels of fluids and salts, normal bodily functions can be affected. When you lose fluids, they need to be replaced to ensure dehydration does not worsen. Dehydration in children is common, as children have a higher turnover of fluids than adults and therefore need proportionally larger volumes of water to maintain a healthy fluid level. Dehydration in children can be caused by excessive physical activity, hot weather, illness, vomiting, diarrhoea, fever and extreme sweating. It is best to prevent dehydration from occurring in the first place; this can be done by teaching children about the importance of hydration and encouraging them to drink plenty of water during hot weather and exercise.
Dehydration in children occurs when fluids are lost from the body faster than they are replaced. This can occur quickly in babies and young children in hot weather, particularly if there is increased sweating and a reduced intake of fluids. It can also occur if children spend too much time in the direct sun, or are in a hot room or a hot car. Illnesses that involve diarrhoea, vomiting and fever are a common cause, as is excessive physical activity.
Diarrhoea, vomiting and fever
Diarrhoea that occurs suddenly and severely can lead to a large loss of fluids and electrolytes in a short period of time. This may be caused by a viral or bacterial infection, or food sensitivity. If vomiting also occurs at this time, even more fluids can be lost. A fever, which causes sweating, can also worsen dehydration.
Excessive physical activity
Excessive physical activity can lead to dehydration. Many children like to play sport, which increases sweating. If they do not keep up their fluids during physical activity, dehydration can occur. Dehydration occurs more often in children due to their low body weight and because they may not identify the signs of dehydration. Any child can become dehydrated; however, some children are at increased risk.
Dehydration in children can be categorised into mild, moderate and severe stages, based on the loss of body weight. Mild dehydration occurs when there is a loss of body weight of 5-6%. Moderate dehydration occurs when there is a loss of body weight of 7-10%. Severe dehydration occurs when there is a loss of body weight of 10-15%.
Signs and symptoms
The signs and symptoms associated with each stage can vary. In mild dehydration, the only signs and symptoms may be thirst and restlessness. Some symptoms of moderate dehydration in children can include restlessness, irritability, inactivity, dry mouth, sunken eyes, and reduced urine output (fewer wet nappies in infants). Some symptoms of severe dehydration in children can include extreme thirst, extremely dry mouth, cracked lips, little or no urine output, confusion and unconsciousness, rapid breathing and heart rate, and low blood pressure.
Methods for diagnosis
A diagnosis of dehydration can usually be based upon a child's physical appearance. A physical examination by your doctor can reveal a lack of elasticity in the skin, sunken eyes, a rapid heart rate, and low blood pressure. To identify the severity of dehydration, blood tests can be performed to check electrolyte levels.
Urine tests can also be performed to check the concentration of urine.
Types of treatment
It is important to begin treatment of dehydration in children as soon as it is recognised. Treatment focuses on replacing the electrolytes and fluids that have been lost and is based on the severity of the condition. In children with mild dehydration, fluid losses can be replaced through drinking fluids until hydration is restored, generally over a period of 3-4 hours. After rehydration, it is also important for children to eat to replace any lost calories. If the dehydration has occurred in a baby, breastfeeding can be continued to rehydrate. If children are suffering from vomiting and diarrhoea, an oral rehydration solution (ORS) containing sugar (glucose) and electrolytes (sodium, potassium and chloride) can be given.
Moderate and severe dehydration
Children suffering from moderate and severe dehydration will require treatment at a medical facility. Fluids are generally given intravenously (IV). This involves inserting a small needle into a vein, usually in the arm, and administering fluids. This method helps to quicken the recovery time.
Children with dehydration may experience complications if dehydration reaches a severe stage. One complication that can occur is hypovolemic shock, in which there is insufficient fluid volume for the heart to adequately pump blood around the body. This complication is characterised by pale, clammy skin that is cool to touch, a rapid heartbeat and shallow breathing. Blood pressure can also drop to an unreadable level. Unconsciousness and death will follow if severe dehydration is not treated. If a child is experiencing severe dehydration, emergency services must be called.
Dehydration in children is a treatable condition. It is important to identify the symptoms early and begin treatment as soon as possible. Treatment is focused on replenishing the electrolytes and fluid that have been lost during dehydration.
Dehydration in children is a preventable condition. A healthy fluid balance can be maintained in children if they drink water or other fluids before they feel thirsty. It is also important for them to drink extra fluids before physical activity or during hot weather. A good rule is for children to have 6-8 cups of fluid each day, preferably water. Fruits and vegetables have a very high water content, so adopting healthy eating is a great way to stay hydrated. It is also possible to reduce the risks of dehydration by replacing fluids lost during diarrhoea and vomiting as they occur.
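As a simple illustration of the weight-based severity bands described above, here is a short sketch (my own example, not a clinical tool and not medical guidance). The mild (5-6%), moderate (7-10%) and severe (10-15%) bands are taken from the text; how values falling between or beyond those bands are labelled is my own assumption.

```python
def percent_weight_loss(baseline_kg, current_kg):
    """Percentage of body weight lost relative to the pre-illness baseline."""
    return (baseline_kg - current_kg) / baseline_kg * 100.0

def dehydration_category(loss_pct):
    """Map percentage weight loss to the bands quoted in the text.

    Boundary handling (values outside or between the quoted bands) is an
    illustrative assumption, not medical guidance.
    """
    if loss_pct < 5:
        return "below the mild threshold"
    elif loss_pct <= 6:
        return "mild (5-6%)"
    elif loss_pct < 10:
        return "moderate (7-10%)"
    else:
        return "severe (10-15% or more)"

# Example: a 20 kg child who now weighs 18.5 kg
loss = percent_weight_loss(20.0, 18.5)
print(f"{loss:.1f}% of body weight lost -> {dehydration_category(loss)}")
# Prints: 7.5% of body weight lost -> moderate (7-10%)
```

In practice, of course, severity is judged from the signs and symptoms listed above alongside any weight change, not from a calculation alone.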
Crossing Over 100 Lesson 12 of 14 Objective: SWBAT count across decades and centuries when working with numbers between 100 and 1000 Students have counted by 1s, 5s and 10s within one hundred. We practice this again in a game form to review counting skills. Students stand in a circle and pass a ball, randomly, around the circle. As each child gets the ball they say the next number. Children must listen and track the count because they don’t know when the ball will come to them. (I help those who may struggle with counting across decades and centuries by providing support for them in the form of hints and questions, such as the person before you said 39, you know what comes after 3 so after 39 comes what....and hopefully they can say 40). We practice with 1’s, 5’s and 10’s going to 300 by 10s. Counting continues to be important practice in second grade, as it is a critical skill for addition and subtraction. How students count is important as well, because if they always start in the same place and don't practice counting backwards as often (if not more) as counting up, counting won't become an addition/subtraction skill. Teaching the Lesson I want children to think about what happens when they count across decades and centuries. I am helping them to visualize a structure for counting across decades (MP7) I put the number 130 on the board and say if we are counting by 10 what would come next? After a student says 140, I write 140 below 130. I ask students to help me continue. I place the numbers under one another, curving them into a river shape. Next I draw a line along each side of the numbers, making a river. I tell students that they will be making their own rivers for us to cross. I give each child a large paper and ask them to start at 150 and counting by tens, writing the numbers to make their own rivers. I ask them to make sure that they get at least to 250. I circulate around to see how students are doing in counting by tens across the decades, and from 200 to 300. When everyone is done, I ask them to trace along the edges of their numbers to make a river. Next I ask them to color their rivers with colored pencil so they can still see the river. Next I hand each child a small piece of origami paper to make a canoe. We work together to fold the canoe. (directions can be found at origaminstructions.com). The origami is fun but takes time and patience (especially if you are not familiar with origami yourself, make the canoe a few times so you are comfortable with it.) If you'd rather not take the time for this, give students a small piece of paper and let them quickly draw and cut out a small canoe. Now I tell students that we are going to cross our tens river with our canoes and as we do we are going to put in the stepping stones that help us cross the river. On my river I put 129 on one side of 130 and 131 on the other side. I circle each one and say these are my stepping stones to launch my canoe across the tens river. I ask them if they can put the stepping stones on their rivers. Again, we are looking at the structure of number patterns and also modeling with math as we gain a clearer understanding of how numbers work (MP4). When we all have the stepping stones, we talk about launching our canoe to cross at a particular number (such as 180) if we step off the left (lower) side what number do we launch on. What if we step off on the right (higher) side. We launch our canoes in several places. I ask a student to pick a stepping stone to launch from. 
He/she can ask a friend what numbers they cross. I ask students to look at the lower side of the river. What number is always in the ones place? (The 9.) "The nine is our cue that we are about to cross the tens river. It means we are moving to a new group of tens when we are counting up. Be sure to be on the lookout for that 9 clue when you are counting up! Remember it is a warning sign that we are entering the river!" What number is always in the ones place on the higher side? (The 1.) The one tells us we are safely on the other side of the tens river and can rest for a little while. I tell students that when we cross the river at a number ending in zero, the nine is always the last number before we move up to a new group of tens, and the zero is the last number before we move down to a new group of tens. This activity helps students to concretely see what is happening when they need to count up or down. It is a skill they worked on in Kindergarten and First Grade, but they may still not have a strong understanding of what is really happening when they cross a decade. Students use counting up and down to become more fluent with adding and subtracting, so this understanding is important. For students who are ready, this same activity could be done with hundreds. The hundreds could also be done on a different day, or once children have a clear understanding of the tens. The idea is to reinforce counting skills that will help with later addition and subtraction strategies. Counting should be fluent, and the changes that reflect place value (across decades and centuries) are worth taking time to reinforce before moving on to applications.
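If you would like to pre-print the river numbers and their stepping stones before the lesson, a few lines of code can generate them. This is only an optional sketch of the number pattern described above (multiples of ten from 150 to 250, with the "one less" and "one more" stones on either side); the function name and range are my own choices, not part of the lesson.

def tens_river(start=150, end=250):
    # Return (lower stone, river number, upper stone) for each multiple of ten.
    rows = []
    for n in range(start, end + 1, 10):
        rows.append((n - 1, n, n + 1))  # e.g. 149, 150, 151
    return rows

for lower, river, upper in tens_river():
    # The lower stone always ends in 9 (the warning digit);
    # the upper stone always ends in 1 (safely across the river).
    print(lower, "->", river, "->", upper)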
Student Learning Outcomes
- Demonstrate skills required for proper pruning of various species of trees and shrubs.
- Plant trees and shrubs.
Description
Horticultural principles and practices for management of plants and gardens. Proper selection and maintenance of trees, shrubs, and ground covers. Fine gardening techniques used by landscape gardeners. Transplanting and planting containerized and boxed plant material. Preparation of planting areas and post-planting care of landscape plants. Techniques for pruning of various species. Operation of equipment and tools used in gardening.
Course Objectives
The student should be able to:
- evaluate the health and value of nursery stock to be planted in the landscape.
- demonstrate skills required for proper pruning of various species of trees and shrubs.
- recognize and evaluate various planting and post-planting techniques in order to select the most appropriate method for care of a given plant species.
- practice and achieve proficiency in the safe handling of various landscape maintenance equipment, including hand tools and power equipment.
- identify routine maintenance requirements and special problems in several different landscape settings.
- employ methods necessary to carry out proper maintenance procedures on plant material.
- prepare planting areas, including soil modification.
- transplant trees, shrubs, and perennials.
- plant trees, shrubs, and perennials.
- recognize the variety of planting techniques used in different cultures around the world.
Special Facilities and/or Equipment
Horticultural laboratory, greenhouse, and related horticultural facilities and equipment. Students provide pruning shears with sheath, work boots, leather gloves and clothing for fieldwork.
Course Content (Body of knowledge)
- Study of natural and directed plant growth and form
- Pruning procedures, including selection and safe handling of tools used for specific jobs
- Process of proper plant selection for specific landscape sites
- Determination of climate
- Climate modification techniques
- Planting site evaluation
- Techniques and procedures involved in:
- Plant material installation
- Routine maintenance procedures and practices
- Soil management
- Chemical growth control
- Diagnosis and correction of plant problems
- Plant area preparation
- Planting site preparation
- Tilling and digging beds
- Soil modification
- Planting techniques utilized by different cultures
Methods of Evaluation
Methods of evaluation used for this course will include:
- Lab exercises
- Written plant report
- Final examination
- Participation through attendance
Representative Text(s)
Ortho. Ortho's All About Pruning. Ortho, 1999.
Disciplines
Environmental Horticulture & Design
Method of Instruction
Methods of instruction will include:
- Participate in discussions.
- Engage in lab activities.
- Guest speakers.
- Engage in small group discussions.
- Read assigned reading activities.
- Perform self-guided research.
Lab Content
Lab content will include:
- Plant anatomy and physiology review.
- Pruning basics.
- Drop crotch
- Decorative techniques
- Planting and post-planting care activities.
- Plant preparation
- Plant hole preparation
- Planting bare root stock
- Planting containerized stock
- Planting boxed stock
- Deep watering
Types and/or Examples of Required Reading, Writing and Outside of Class Assignments
- Reading assignments will include reading approximately 30-50 pages per week from the assigned text. Supplemental reading will be provided in hand-out form or through reference to on-line resources.
- Lectures will address reading topics and experiences of the instructor. Classroom discussion and demonstrations in support of lecture topics will be provided.
- Guest speakers from industry will provide supplemental lecture and demonstration.
- Writing assignments include:
- topical white papers
Discuss the importance of monuments around the world, and assess students' familiarity with American monuments, such as the Washington Monument, the Lincoln Memorial, the Vietnam Veterans War Memorial, or the Oklahoma City National Memorial. Students might use the Internet to learn more information about the purposes served by our nation's monuments and memorials. A discussion of U.S. monuments might lead to a discussion about monuments from other cultures, such as the Egyptian pyramids, or to the recent destruction of ancient Buddhist statues by the Taliban in Afghanistan. After the students have done some research about existing memorials and discussed the purposes served by those monuments, it is time to introduce a special project -- creating designs and/or models of monuments related to the September 11 attacks. The activity gives students an opportunity to do something constructive with their feelings about these events. Students design memorials and compose oral presentations in which they present their designs, and the ideas that led to the designs, to the class. Use a rubric or criteria lists designed by the class to evaluate students' designs or models (to include evaluations of creativity and critical-thinking skills) and oral presentations (to include evaluations of presentation content and delivery). Kathie Marshall, Mulholland Middle School, Van Nuys, California
a - 2 Initial Sounds A short video to prompt learning the 'a' phoneme and the A grapheme. Words that start with the letter a are shown on slides along with pictures to match the words as the narrator says the words. Solving Equations - Pre-Algebra The instructor demonstrates how to solve one-step, two-step, and multi-step equations. Several examples are modeled, gradually getting more complex with each one. Video is good quality and good for all students as review or initial learning of the concept. Part 1 - Books, Writing and Creativity An Australian author explains how to encourage children to be creative and help them enjoy books and writing. He gives tips and ideas for learning activities for children. This is part 1 of 6 parts in the series. The web site is located at: www.writing-for-children.com Learn the Greek language by reading the English-Greek translations as actors walk through each lesson in the latest Greek textbook by Papaloizos Publications, "Modern Greek." Greek videos offer learning solutions similar to Rosetta Stone, with modern accounts of activities and words played out by screen actors, and translated in both Greek and English. Written by Greek professor and historian Dr Theodore Papaloizos, the book "Modern Greek" teaches the language in easy, comprehensible terms. Sun Dog Reflections This video shows some sun dogs from the fall of 2006. A sun dog, meaning "beside the sun," is also called a mock sun. It is a particular type of ice halo: a colored patch of light to the left or right of the sun, 22 or more degrees distant and at the same distance above the horizon as the sun. This video is set to music, but the pictures are wonderful. Video is good quality and appropriate for any age student. Run time 01:21. Math Tutoring - Geometry - Quadrilaterals This introductory video shows students the basic principles of quadrilaterals. Squares, rectangles, and parallel sides are all important concepts that students should know when learning about quadrilaterals. Commutative Property of Addition Students learn the commutative property of addition, which states that a + b = b + a. Video is good quality and good for all students as review or initial learning of the concept. This video discusses the properties of addition. The video starts with solving a simple addition problem using a number line; they then change the order of the problem, which shows the commutative property for addition. Then they show the commutative property using negative numbers. Then other examples are shown. Then they show problems with parentheses, which tell the student to do that part first; this shows the associative property for addition. More examples are given. Video is good quality and good for all students as review or initial learning of the concept. Properties of Multiplication This video indicates it is for pre-algebra; however, it would be good for anyone who is reviewing multiplication and its properties. This video begins by showing the Commutative Property for Multiplication, which states that when two numbers are switched they will have the same product. An example is given with three numbers, worked from left to right (order of operations). Then they switch which numbers are multiplied first and the product is the same; this is the Associative Property for Multiplication. The Distributive Property Students learn the distributive property, which states that a(b + c) = ab + ac. In other words, the number or variable that is outside the set of parentheses "distributes" through the parentheses, multiplying by each of the numbers inside.
Note that a negative sign outside a set of parentheses can be thought of as a negative 1, so a negative 1 distributes through the parentheses. Video is good quality and good for all students as review or initial learning of the concept. Using the Distributive Property The distributive property is one of the most important and fundamental parts of algebra. In this video the math teacher shows how to break down an equation into its simpler components. Negative numbers are also shown. This is a great beginning video. Video is good quality and good for all students as review or initial learning of the concept. Animals in the Alaskan Waters This video from Education 2000's Dive Travel shows you various animals in the Alaskan waters. The video shows humpback whales playing and jumping in the distance while a narrator (being interviewed) explains various characteristics and behaviors of the whale along with other animals in the Alaskan waters. The video also addresses sea otters and orcas. Run time 09:07. Short A Sound - Fruity ABC This video is a snippet from the "Fruity ABC" learning DVD. It focuses on the letter a. It tells the name of the letter a, makes the short a sound a few times, then shows a picture of an apple, which begins with short a. The video ends with the word "apple" being spelled out on the screen and sounded out phonetically. (:52) Rational vs Irrational Numbers Students learn that the following number sets represent rational numbers: natural numbers, whole numbers, integers, fractions, terminating decimals, and repeating decimals. For example, -2, 7, 3/4, 0.0006, and 0.191919... are all rational numbers. However, a decimal that is both non-terminating and non-repeating is an irrational number. For example, 0.12579835781... and 39.779778776775... are irrational numbers. Video is good quality and good for all students as a review or initial learning of the topic. Discovery Education Video. Video discusses natural numbers, whole numbers, fractions, and integers. Then they discuss how rational numbers are used in recipes. Video is good quality and good for all students as a review or initial learning of the topic. Discovery Education Video. Video discusses irrational numbers, such as the number pi, whose decimal expansion neither terminates nor repeats. Then it discusses how irrational numbers relate to rational numbers. Video is good quality and good for all students as a review or initial learning of the topic. Students learn that probability is the likelihood that a given event will happen, and probability can be found using the following ratio: (number of favorable outcomes) / (number of total outcomes). For example, the probability that the flip of a coin will come up heads is: (1 favorable outcome) / (2 possible outcomes), or 1/2. Note that probability can be written as a fraction (1/2), a decimal (0.5) or a percent (50%). Video is good quality and good for all students as a review or initial learning of the concept. Probability of Dependent Events Students learn that two events are dependent if the outcome of the first event affects the outcome of the second event. For example, taking a block out of a jar, then taking a second block out of the jar. The probability of dependent events can be found by multiplying the probability of the first event times the probability of the second event. For example, if there are 4 blue blocks and 4 yellow blocks in a jar, the probability of taking a blue block out of the jar and then a yellow block is found by multiplying those two probabilities (the arithmetic is finished in the short sketch after these video notes). Main Idea Song This video is about finding the main idea in a passage; it is in song form.
This video discusses what a paragraph is, the topic sentence, detail sentences, and conclusion. Video is good quality and good for all students as review or initial learning of the concept. Conjunctions (part 2) Lord Harold Syntax, the world's foremost authority on the English language, takes a trip to lovely little Syntaxylvania, where he and his assistant Nemesis reacquaint themselves with several members of the Syntax lineage. Video is good for elementary students to use as review or learning of concept.
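The arithmetic behind several of the clips described above (the addition and multiplication properties, and the dependent-events example with 4 blue and 4 yellow blocks) can be checked in a few lines. The sketch below is only an illustration using the numbers mentioned in those summaries; it is not taken from any of the videos.

from fractions import Fraction

# Commutative, associative and distributive properties with sample numbers,
# including a negative number as in the clips above.
a, b, c = 3, -5, 7
assert a + b == b + a                  # commutative property of addition
assert (a + b) + c == a + (b + c)      # associative property of addition
assert a * b == b * a                  # commutative property of multiplication
assert a * (b + c) == a * b + a * c    # distributive property

# Dependent events: draw a blue block, then a yellow block, from a jar
# holding 4 blue and 4 yellow blocks (no replacement).
p_blue_first = Fraction(4, 8)          # 4 favorable outcomes out of 8 blocks
p_yellow_second = Fraction(4, 7)       # 4 yellow blocks out of the 7 that remain
print(p_blue_first * p_yellow_second)  # 2/7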
Persistent headaches that will not go away may prompt you to make an appointment with a specialist. The doctor will usually prescribe a number of diagnostic procedures for examining the brain, among which you may see an unfamiliar acronym: REG. Naturally, a person unfamiliar with medical terminology immediately starts to wonder, "REG - what is it?" The abbreviation REG stands for rheoencephalography, a method for examining the blood vessels of the brain. Do not be afraid if you have to go through a REG examination. During this diagnostic procedure, small electrical pulses are passed through the head, and with them it is possible to build up a general picture of the blood vessels of the brain. The procedure is regarded as safe and harmless to health; before it was developed and put into practice, research was carried out to confirm the safety of the technique, although of course any patient will still have questions. This method makes it possible to assess the condition of the vessels of the brain, such as their elasticity under load and their tone. Most often, this method of investigation is used for severe headaches: one of the first causes of severe head pain can simply be poor circulation in the brain. REG allows the doctor to get a clear and complete picture of the flow of blood to the head and of how it is distributed among the vessels. Indications for REG Now let us consider in more detail the reasons for which a specialist may order this examination. It would be wrong to assume that severe headaches are the only indication. REG may also be ordered when: - the viscosity of the blood needs to be determined; - the speed of blood flow needs to be measured; - a predisposition to stroke or ischemia needs to be checked; - the performance of the cerebral vessels needs to be confirmed after a severe traumatic brain injury; - strange noises appear in the ears; - there is a predisposition to epilepsy. REG of a child's head The procedure is painless and can therefore be ordered even for children. However, REG performed on children has one major drawback: it requires complete immobility, and children, because of their age, do not understand this, so the results obtained may be greatly distorted. It is therefore advisable to stay near the child during the procedure and, together with the staff, try to keep the child still while it is carried out. The results of the study can be obtained about 10 minutes after the procedure. This makes the method quite popular, since with severe headaches it is important to find the cause quickly and begin treatment. Other methods of examining the brain Progress does not stand still, in everyday life or in medicine. Today, examination of the vessels of the head using REG is gradually fading, giving way to a newer diagnostic method, the EEG. Why is the newer method considered better? It provides a more complete picture of the state of the blood vessels of the brain. In addition, the diagnostics can be carried out in several modes, giving both general and very detailed information about the blood supply. The information is still read by means of electrical pulses, but they do not pass through the entire body.
For general information, an EEG is performed in the normal mode. A more detailed study may take approximately six hours. A variety of factors can affect the results, such as medication and emotional state. Therefore, at the appointment you must tell the specialist what medicines or drugs you are taking. Immediately before the procedure, try to relax as much as possible and calm your emotions. This will make the final results of the study more accurate. Now, if you see names such as EEG or REG on a referral, you know what they are. You are also aware of how and why this diagnosis is performed and what steps should be taken in order to get the most reliable results.
The heart, colored red in this section of a fifteen-day-old mouse embryo, is located at the center of the body in many animals. In the fourth century B.C., the Greek philosopher Aristotle described the heart as “the center of vitality in the body” and “the seat of intelligence, motion, and sensation.” He hypothesized that other organs, such as the brain and lungs, existed simply to cool the heart. It was not until 1628 that English physician William Harvey accurately described the function of the heart for “the transmission of the blood, and its propulsion.” Today, although our understanding of the heart’s biology has advanced, we still hold on to romantic interpretations of its function based on those early philosophical writings — Happy Valentine’s Day! February is Heart Month A fifteen-day-old mouse embryo was dissected, fixed, sectioned, stained with a dye, and viewed with a wide-field light microscope. The heart in this digital image was then colored red using a computer. Gerald W Dorn II, MD, Center for Pharmacogenomics, Washington University, St. Louis, MO
Doesn't Carbon Dating Prove The Earth Is Old? by John D. Morris, Ph.D. Perhaps no concept in science is as misunderstood as "carbon dating." Almost everyone thinks carbon dating speaks of millions or billions of years. But, carbon dating can't be used to date either rocks or fossils. It is only useful for once-living things which still contain carbon, like flesh or bone or wood. Rocks and fossils, consisting only of inorganic minerals, cannot be dated by this scheme. Carbon normally occurs as Carbon-12, but radioactive Carbon-14 may sometimes be formed in the outer atmosphere as Nitrogen-14 undergoes cosmic ray bombardment. The resulting C-14 is unstable and decays back to N-14 with a measured half-life of approximately 5,730 years. Thus the ratio of stable C-12 to unstable C-14, which is known in today's open environment, changes over time in an isolated specimen. Consider the dating of a piece of wood. As long as the tree lives, it absorbs carbon from the atmosphere in the form of carbon dioxide, both C-12 and C-14. Once the tree dies, it ceases to take in new carbon, and any C-14 present begins to decay. The changing ratio of C-12 to C-14 indicates the length of time since the tree stopped absorbing carbon, i.e., the time of its death. Obviously, if half the C-14 decays in 5,730 years, and half more decays in another 5,730 years, by ten half-lives (57,300 years) there would be essentially no C-14 left. Thus, no one even considers using carbon dating for dates in this range. In theory, it might be useful to archaeology, but not to geology or paleontology. Furthermore, the assumptions on which it is based and the conditions which must be satisfied are questionable, and in practice, no one trusts it beyond about 3,000 or 4,000 years, and then only if it can be checked by some historical means. The method assumes, among other things, that the earth's age exceeds the time it would take for C-14 production to be in equilibrium with C-14 decay. Since it would take less than 50,000 years to reach equilibrium from a world with no C-14 at the start, this always seemed like a good assumption. That is, until careful measurements revealed a significant disequilibrium. The production rate still exceeds decay by 30%. All the present C-14 would accumulate, at present rates of production and build-up, in less than 30,000 years! Thus the earth's atmosphere couldn't be any older than this. Efforts to salvage carbon dating are many and varied, with calibration curves attempting to bring the C-14 "dates" in line with historical dates, but these produce predictably unreliable results. A "Back to Genesis" way of thinking insists that the Flood of Noah's day would have removed a great deal of the world's carbon from the atmosphere and oceans, particularly as limestone (calcium carbonate) was precipitated. Once the Flood processes ceased, C-14 began a slow build-up to equilibrium with C-12—a build-up not yet complete. Thus carbon dating says nothing at all about millions of years, and often lacks accuracy even with historical specimens, denying as it does the truth of the great Flood. In reality, its measured disequilibrium points to just such a world-altering event, not many years ago. * Dr. John Morris is President of ICR. Cite this article: Morris, J. 1998. Doesn't Carbon Dating Prove The Earth Is Old? Acts & Facts. 27 (6).
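For readers who want to see the half-life arithmetic the article quotes (a 5,730-year half-life, with essentially no C-14 left after ten half-lives), here is a small sketch of the standard decay formula. It only illustrates that calculation; it is not drawn from the article itself and takes no position on its wider argument.

HALF_LIFE_YEARS = 5730  # measured half-life of carbon-14 quoted above

def c14_fraction_remaining(years):
    # Fraction of the original C-14 still present after the given time.
    return 0.5 ** (years / HALF_LIFE_YEARS)

for half_lives in (1, 2, 10):
    years = half_lives * HALF_LIFE_YEARS
    print(years, "years:", round(c14_fraction_remaining(years), 6))
# After ten half-lives (57,300 years) less than 0.1% of the original C-14
# remains, which is why the method is not applied beyond that range.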
European Badger (Meles meles) The general hue of its fur is grey above and black on the under parts, with a distinctive black and white striped face and white-tipped ears. European badgers are around 70 cm long with a tail of about 20 cm and weigh 10 kg on average, but weights can vary enormously. Badgers do not hibernate, although in areas with cold winter climates they may become torpid for two or so days at a time, having put on fat in the autumn to help them get through the winter months. The badger is a stocky animal, being about 750mm in length (from head to tail), with a 150mm tail, once fully grown. A badger can stand up to about 300mm high at the shoulder. The weight of an adult badger varies throughout the year, depending on how much fat it has laid down for the winter months. In spring an adult badger will have an average weight of 8 to 9 kg, rising to 11 to 12 kg in autumn. Occasionally individual specimens do weigh more than this, but these are generally the exception rather than the rule. Also, in territories which provide a poor food supply for the badgers, weights may be less than this. In addition, adult males will generally tend to be about 1 kg heavier than females of the same age, and lactating females will be as much as 1 kg lighter than non-lactating females. Most badgers have a characteristic black and white striped face with small white-tipped ears and a grey body, though their fur can become stained by the local soil. The body appears grey, with black fur on its legs. In windy conditions, the fur may blow around, revealing the lighter underfur on the body. However, the colour of each hair varies on close inspection, and is not always grey. A few individuals are albino (creamy or off-white), and there are small populations of reddish/ginger (erythristic) badgers in certain areas of Britain. Albino and erythristic badgers have a harmless genetic difference from other badgers, but are otherwise exactly the same type of badger. Badgers live for up to 15 years (average 3 years) in the wild, and up to 19 years in captivity. If they survive their first year, the most common cause of death is road traffic accidents. Badgers are nocturnal and spend the day in their setts, or extensive networks of tunnels dug in well-drained ground (or sometimes beneath buildings or roads). Setts give shelter from the weather and predators. The badger is also a very tidy animal and spends a lot of time transporting grass, straw, moss or bracken to and from its sleeping chamber deep in the sett. Setts are handed down like family houses from generation to generation, and the badger uses the same sett year after year. Badgers prefer grazed pasture and woodland, which have high numbers of earthworms exposed, and dislike clay soil, which is difficult to dig even with their powerful claws. In urban areas, some badgers scavenge food from bins and gardens.
Badgers are omnivorous and insectivorous; most of their diet consists of earthworms, although they also eat insects, spiders, scorpions, small mammals, eggs, young birds, reptiles, berries, roots, bulbs, nuts and fruit. Badgers also dig up the nests of wasps and bumblebees in order to eat the larvae. Badgers will eat carrion. In Japanese folklore, the badger is a wild creature that sometimes appears as a mischievous being, able to turn itself into different shapes, including that of humans. In one favorite tale, a badger visits a Buddhist temple and then tries to hide himself by turning into a teakettle. In this tale, the badger helps the temple priest; badgers in other stories are sometimes evil. A badger may cover as much as ten miles travelling in a night. You can tell by its appearance that the badger is a digger. The body is wedge-shaped and is carried on short but immensely strong legs - excellent for working in confined spaces. The muscles of the forelimbs and neck are particularly well developed. Digging is targeted at enlarging and improving its sett (this consists of several chambers where the badger sleeps and breeds). When enlarging a tunnel a badger will loosen the earth with rapid strokes of its forelimbs, and then use its claws as rakes. Earth and stones may be ejected forcefully from the exit hole of a sett when a badger is digging! Indeed some of these stones may be quite large, and there may even be claw marks apparent on the surface of softer stones, such as some sandstones and chalks.
Your kidneys perform many functions to keep you alive. They - filter wastes and extra fluid from your blood - keep the proper balance of minerals like sodium, phosphorus, calcium, and potassium in your blood - help maintain a healthy blood pressure - make hormones that keep your blood and bones healthy Most people have two kidneys, one on each side of the spinal column in the back just below the rib cage. Each kidney is about the size of a fist and contains about 1 million nephrons. The nephrons are microscopic filtering "baskets" that transfer wastes from the blood to the collecting tubules of the urinary system. A person may have only one kidney for one of three main reasons. A person may be born with only one kidney, a condition known as renal agenesis. Renal dysplasia, another birth defect, makes one kidney unable to function. Many people with renal agenesis or renal dysplasia lead normal, healthy lives and only discover that they have one kidney-or one working kidney-when they have an x ray, sonogram, or surgery for some unrelated condition. Some people must have one kidney removed to treat cancer or other diseases or injuries. The operation to remove a kidney is called a nephrectomy. A growing number of people are donating a kidney to be transplanted into a family member or friend whose kidneys have failed. Most people can live a normal, healthy life with one kidney. Taking precautions is wise to protect the kidney function you do have. What are the possible effects of solitary kidney? If having a single kidney does affect your health, the changes are likely to be so small and happen so slowly that you won't notice them. Over long periods of time, however, these gradual changes may require specific measures or treatments. Changes that may result from a single kidney include the following: High blood pressure. Kidneys help maintain a healthy blood pressure by regulating how much fluid flows through the bloodstream and by making a hormone called renin that works with other hormones to expand or contract blood vessels. Many people who lose or donate a kidney are found to have slightly higher blood pressure after several years. Proteinuria. Excessive protein in the urine, a condition known as proteinuria, can be a sign of kidney damage. People are often found to have higher-than-normal levels of protein in their urine after they have lived with one kidney for several years. Reduced GFR. The glomerular filtration rate (GFR) shows how efficiently your kidneys are removing wastes from your bloodstream. People have a reduced GFR if they have only one kidney. You can have high blood pressure, proteinuria, and reduced GFR and still feel fine. As long as these conditions are under control, they will probably not affect your health or longevity. Schedule regular checkups with your doctor to monitor these conditions. How can you protect your kidneys? Your doctor should monitor your kidney function by checking your blood pressure and testing your urine and blood once a year. Normal blood pressure is considered to be 120/80 or lower. You have high blood pressure if it is over 140/90. People with kidney disease or one kidney should keep their blood pressure below 130/80. Controlling blood pressure is especially important because high blood pressure can damage kidneys.
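The blood pressure thresholds quoted above (120/80 or lower is normal, over 140/90 is high, and below 130/80 is the target for people with kidney disease or a single kidney) can be expressed as a simple check. The function below is a hypothetical illustration of those numbers only, not a clinical tool, and its name and messages are my own.

def bp_category(systolic, diastolic, kidney_disease_or_one_kidney=False):
    # Classify a reading against the thresholds given in the text above.
    if kidney_disease_or_one_kidney and (systolic >= 130 or diastolic >= 80):
        return "above the 130/80 target for people with kidney disease or one kidney"
    if systolic > 140 or diastolic > 90:
        return "high blood pressure (over 140/90)"
    if systolic <= 120 and diastolic <= 80:
        return "normal (120/80 or lower)"
    return "between the normal and high thresholds"

print(bp_category(118, 76))
print(bp_category(135, 82, kidney_disease_or_one_kidney=True))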
Your doctor may use a strip of special paper dipped into a little cup of your urine to test for protein. The color of the "dipstick" indicates the presence or absence of protein. A more sensitive test for proteinuria involves laboratory measurement and calculation of the protein-to-creatinine ratio. A high protein-to-creatinine ratio in urine (greater than 30 milligrams of albumin per 1 gram of creatinine) shows that kidneys are leaking protein that should be kept in the blood. Measuring GFR used to require an injection of a contrast medium like iothalamate into the bloodstream followed by a 24-hour urine collection to see how much of the medium is filtered through the kidneys in that time. In recent years, however, scientists have discovered that they can estimate a person's GFR based on the amount of creatinine in a small blood sample. The new GFR calculation uses the patient's creatinine measurement along with weight, age, and values assigned for sex and race. Some medical laboratories may calculate GFR at the same time they measure and report creatinine values. If your GFR stays consistently below 60, you are considered to have chronic kidney disease. Controlling Blood Pressure If your blood pressure is above normal, you should work with your doctor to keep it below 130/80. Great care should be taken in selecting blood pressure medicines for people with a solitary kidney. Angiotensin-converting enzyme (ACE) inhibitors and angiotensin receptor blockers (ARBs) are two classes of blood pressure medicine that protect kidney function and reduce proteinuria. But these medicines may be harmful to someone with renal artery stenosis (RAS), which is the narrowing of the arteries that enter the kidneys. Diuretics can help control blood pressure by removing excess fluid in the body. Controlling your blood pressure may require a combination of two or more medicines, plus changes in diet and activity level. Having a single kidney does not mean that you have to follow a special diet. You simply need to make healthy choices, including fruits, vegetables, grains, and low-fat dairy foods. Limit your daily salt (sodium) intake to 2,000 milligrams or less if you already have high blood pressure. Reading nutrition labels on packaged foods to learn how much sodium is in one serving and keeping a sodium diary can help. Limit alcohol and caffeine intake as well. Avoid high-protein diets. Protein breaks down into the waste materials that the kidneys must remove, so excessive protein puts an extra burden on the kidneys. Eating moderate amounts of protein is still important for proper nutrition. A dietitian can help you find the right amount of protein in your diet. Some doctors may advise patients with a solitary kidney to avoid contact sports like boxing, football, and hockey. One study indicated that motor vehicle collisions and bike riding accidents were more likely than sports injuries to seriously damage the kidneys. In recent years, athletes with a single working kidney have participated in sports competition at the highest levels. Having a solitary kidney should not automatically disqualify you from sports participation. Children should be encouraged to engage in some form of physical activity, even if contact sports are ruled out. Protective gear such as padded vests worn under a uniform can make limited contact sports like basketball or soccer safe. 
Doctors, parents, and patients should consider the risks of any activity and decide whether the benefits outweigh those risks. Hope through Research In recent years, researchers have learned much about kidney disease. The National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) sponsors several programs aimed at understanding kidney failure and finding treatments to stop its progression. The NIDDK's Division of Kidney, Urologic, and Hematologic Diseases supports basic research into normal kidney development in the embryo and the genetic causes of birth defects that may result in a solitary kidney. New imaging techniques can help to diagnose solitary kidney before birth. For More Information Life Options Rehabilitation Resource Center c/o Medical Education Institute Inc. 414 D'Onofrio Drive Madison, WI 53719 National Kidney Foundation 30 East 33rd Street New York, NY 10016 Phone: 1-800-622-9010 or 212-889-2210 National Kidney and Urologic Diseases Information Clearinghouse The National Kidney and Urologic Diseases Information Clearinghouse (NKUDIC) is a service of the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK). The NIDDK is part of the National Institutes of Health of the U.S. Department of Health and Human Services. Established in 1987, the Clearinghouse provides information about diseases of the kidneys and urologic system to people with kidney and urologic disorders and to their families, health care professionals, and the public. The NKUDIC answers inquiries, develops and distributes publications, and works closely with professional and patient organizations and Government agencies to coordinate resources about kidney and urologic diseases. Publications produced by the Clearinghouse are carefully reviewed by both NIDDK scientists and outside experts. This publication was reviewed by Akinlolu Ojo, M.D., Ph.D., University of Michigan. This publication is not copyrighted. The Clearinghouse encourages users of this publication to duplicate and distribute as many copies as desired. NIH Publication No. 07-5390
Burmese Python – Python bivittatus One of the larger species of snakes in the world is the Burmese Python. It is ranked as the #6 largest of all snakes in the world. They are used all over the world for their skins, so they are frequently hunted in the wild. They are also kept as pets by those who enjoy exotic creatures. They don't have venom, but that doesn't mean a bite won't hurt! They average 12 feet long, but they can end up being much longer than that – up to 23 feet. They can weigh up to 200 pounds. They have a light coloring with black and brown ranging up and down their back. They have a very attractive coloring that works well in their natural environment. However, it has also made them a target for the leather industry. They also have prehensile tails and are very good climbers. It is a very fascinating trait! They tend to stick to tropical areas and they enjoy being around water. They are considered to be semi-aquatic. They are also found in trees. Southeast Asia, Eastern India, Cambodia, and China are just some of the common areas where this particular snake has been found. They have a need for water, so they must be close to it. They will live in the jungle, the forest, the savannahs, and even in rocky regions as long as they have access to food and to a source of water. The Florida Everglades are a place where they have been found after Hurricane Andrew. It is believed they escaped from homes or zoos and are now mating. The ecosystem there includes endangered birds which these snakes are consuming. They are mainly nocturnal creatures. They tend to spend lots of time in the trees when they are young. However, as they get larger and heavier they will have to spend more and more of their time on the ground. That is also when they will start to spend more of their time in the water. When it comes to feeding, there is no limit to what they will try. There are photos out there of the Burmese Python trying to kill a huge alligator. They have very sharp teeth and they will wrap their body around their prey to constrict movement. The prey will die soon after due to suffocation. They will consume rodents and various forms of vermin. The males will seek out the females for mating. If she agrees then it will take place, but then they will go their separate ways. The eggs will hatch from March to April. She may lay about 12 to 36 of them in the weeks before. She will wait around for them to emerge and then she leaves them behind. The young will stay in the remains of their eggs until their first molting occurs. Then they will instinctively begin to hunt. Venomous Bite / Danger to Humans Too often, people who love to keep the Burmese Python grow tired of them. This can also become a problem when the snakes become too long. Then they release them into the wild, where they can be a problem. They also seem to be able to thrive, and that means upsetting the natural balance. Even though they don't release venom, they will bite and they aren't about to let go. They can also wrap their bodies around a human and crush them so that they can't breathe. It is important to seek immediate medical care should a bite occur. That way there is no risk of infection spreading throughout the body.
A good way to structure many computer programs is to store the key information you currently know in some data structure and then have each iteration of the main loop take a step towards your destination by making a simple change to this data. - 1 Iterative Algorithms: Measures of Progress and Loop Invariants - from How to Think About Algorithms - Publisher: Cambridge University Press - Released: May 2008 Design a data structure that's a minimal version of the final output, and design the process so that each loop iteration grows the structure towards the final output.
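A minimal, concrete instance of this idea is insertion sort, where the sorted prefix of the list is exactly the kind of "minimal version of the final output" that each loop iteration extends. The sketch below is my own illustration in that spirit, not an example taken from the book.

def insertion_sort(items):
    # Loop invariant: before iteration i, data[:i] is a sorted version of the
    # first i inputs - a small but growing version of the final output.
    data = list(items)
    for i in range(1, len(data)):
        current = data[i]
        j = i - 1
        # Shift larger elements of the sorted prefix one slot to the right.
        while j >= 0 and data[j] > current:
            data[j + 1] = data[j]
            j -= 1
        data[j + 1] = current  # invariant restored: data[:i+1] is sorted
    return data

print(insertion_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]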
Colonizing Outer Space/Colonization/Space Living in Space Compared to other locations, orbit has substantial advantages and one major, but solvable, problem. Orbits close to Earth can be reached in hours, whereas the Moon is days away and trips to Mars take months. There is ample continuous solar power in high Earth orbits, whereas all planets lose sunlight at least half the time. Weightlessness makes construction of large colonies considerably easier than in a gravity environment. Astronauts have demonstrated moving multi-ton satellites by hand. 0g recreation is available on orbital colonies, but not on the Moon or Mars. Finally, the level of (pseudo-)gravity can be controlled at any desired level by rotating an orbital colony. Thus, the main living areas can be kept at 1g, whereas the Moon has 1/6g and Mars 1/3g. 1g is critical, at least for early colonies, to ensure that children grow up with strong bones and muscles. Several design groups have examined orbital colony feasibility. They have determined that there are ample quantities of all the necessary materials on the Moon and Near Earth Asteroids, that solar energy is readily available in very large quantities, and that no new scientific breakthroughs are necessary, although a great deal of engineering would be required. Remote research stations in inhospitable climates, such as the Amundsen-Scott South Pole Station or Devon Island Mars Arctic Research Station, can also provide some practice for off-world outpost construction and operation. The Mars Desert Research Station has a habitat for similar reasons, but the surrounding climate is not strictly inhospitable. A space habitat, also called a space colony or orbital colony, is a space station which is intended as a permanent settlement rather than as a simple way-station or other specialized facility. Space habitats would be literal "cities" in space, where people would live and work and raise families. No space habitats have yet been constructed. We do not classify all space stations as space habitats, since existing stations do not replicate the natural environment needed to sustain a population; they are by definition artificially maintained and temporary. Nevertheless, many design proposals have been made, with varying degrees of realism, by both science fiction authors and engineers. A space habitat could serve as a proving ground for how well a generation ship would function as a home for hundreds or thousands of people; this concept is also referred to as the Ark model. A colony ship would be similar to a space habitat, except with major propulsion capabilities and independent power generation. Such a space habitat could be isolated from the rest of humanity for a century, yet remain near enough to Earth for help. This would test whether thousands of humans can survive a century on their own before sending them beyond the reach of any help. The Earth is an open system: it constantly receives input from external sources, from energy to matter. In a generation ship (or a long-term habitat) a subset of these functions needs to be mimicked as a self-sustained closed system (depending on the mission and location, energy may be available at no cost for a long period while the ship remains within a solar system). Much has been learned from attempts made on Earth to simulate isolated living systems (useful for the production of food and the reprocessing of gases and water in space).
Biosphere 2 was originally built to be an artificial, materially closed ecological system and is now a center dedicated to research, outreach, and teaching about living systems. There is also BIOS-3, dedicated to the study of an algaculture-based closed system. The generation ship concept is proposed in several hard science fiction works and includes: - Generation ship, a hypothetical starship that would travel much slower than light between stars, with the crew going through multiple generations before the journey is complete - Sleeper ship, a hypothetical spaceship in which most or all of the crew spend the journey in some form of hibernation or suspended animation - Embryo-carrying Interstellar Starship (EIS), a hypothetical starship much smaller than a generation ship or sleeper ship, transporting human embryos in a frozen state to an exoplanet The main disadvantage of orbital colonies in relation to a colony ship is that they cannot move anywhere on their own; this is of course compensated for by lower costs (no engines or propellant) and reduced risks. Building cities in space will require materials, energy, transportation, communications, life support, and radiation protection. These could be imported from the Moon, which has ample metals, silicon, and oxygen, or from Near Earth Asteroids, which have all the materials needed with the possible exception of nitrogen. Transportation is then the key to any space endeavor. Present launch costs are very high per kilogram from Earth to Low Earth Orbit (LEO). To settle space we need much better launch vehicles and must avoid serious damage to the atmosphere from the thousands, perhaps millions, of launches required. Transportation of millions of tons of materials from the Moon and asteroids to orbital settlement construction sites is also necessary. One well-studied possibility is to build electric catapults on the Moon to launch bulk materials to waiting settlements, but these types of solutions run into the "highest ground" problem, since they can also be used as a weapon. The issue of energy can be easily addressed by the use of solar energy, which is abundant, reliable, and commonly used to power satellites today. Massive structures will be needed to convert sunlight into large amounts of electrical power for settlement use. Energy may even be an export item for space settlements, using microwave beams to send power to Earth. Some small asteroids have the advantage that they may pass closer to Earth than the Moon does several times per decade. In between these close approaches to home, the asteroid may travel out to a furthest distance of some 350,000,000 kilometers from the Sun (its aphelion) and 500,000,000 kilometers from Earth. A small asteroid could serve functions equal to those of a space station, with the benefit that some building material would already be present. Most of the disadvantages are similar to those of an artificially created space station. The lack of significant gravity is one such issue, and a population of more than ten people, as well as self-sufficiency, may be far in the future on or in very small asteroids. Unmanned supply craft should be practical with little technological advance, even crossing half a billion kilometers of cold vacuum. The colonists would have a strong interest in ensuring their asteroid did not hit Earth or anything else of significant mass. New Measuring Standards Life off Earth is going to be different enough that a number of "standards" we take for granted on Earth will also need modification.
Even very basic physical measurements like time and distance will have to be adjusted to fit experiences on Mars, as those measurements are largely tied to physical aspects of the Earth.
- Units of Time - Even if a target planet and the Earth rotate at nearly the same rate, there can be subtle differences that make measuring local time quite different from terrestrial experience.
- Distance - While standard units of measure developed on Earth can be used elsewhere, it is likely that some new measurement units will result from activities around an orbitally static object (for example, measurements to the Sun and to other important locations in that solar system, which matter for estimating travel time and costs).
- Mass and Weight - The difference in gravity between the Earth and the target planet is going to have an impact on how things are built and how people live (a short sketch follows below). Some things stay the same, while there are some important differences as well.
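As a small illustration of the Mass and Weight point, the surface-gravity fractions quoted earlier in this text (about 1/6 g on the Moon and 1/3 g on Mars) make the weight difference a one-line calculation. The figures and function names below are only for illustration.

# Gravity fractions are the approximate figures used earlier in this text.
GRAVITY_FRACTION = {"Earth": 1.0, "Moon": 1 / 6, "Mars": 1 / 3}

def weight_newtons(mass_kg, body, g_earth=9.81):
    # Weight scales with local surface gravity; mass does not change.
    return mass_kg * g_earth * GRAVITY_FRACTION[body]

for body in ("Earth", "Moon", "Mars"):
    print(body, round(weight_newtons(70, body)), "N for a 70 kg person")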
Logical Name of a Module: The NAME directive is used to assign a name to an assembly language program module. The module can then be referred to by its declared name. The names, if chosen to be suggestive, may point out the functions of the different modules and hence can help with documentation. OFFSET: Offset of a Label: When the assembler comes across the OFFSET operator with a label, it first computes the 16-bit displacement (also called the offset) of that label, and replaces the 'OFFSET LABEL' string with the computed displacement. This operator is used with strings, arrays, labels and procedures to determine their offsets in their default segments. The segment may be determined by another operator of the same type, namely SEG. Its most common use is with indirect, indexed, based-indexed or other similar addressing techniques, which refer to memory indirectly. An example of this operator is as follows:
DATA SEGMENT
LIST DB 10H          ; LIST is a byte defined in the DATA segment
DATA ENDS
MOV SI, OFFSET LIST  ; SI now holds the 16-bit offset of LIST within its segment
Design & Technology Design and Technology: Graphic Products (4550) AQA GCSE Design and Technology: Graphic Products enables students to design and make products with creativity and originality, using a range of graphic and modelling materials. Students will be enthused and challenged by the range of practical activities possible. They will be encouraged to learn to use, understand and apply colour and design through images, to develop spatial concepts, and to understand graphic materials and their manipulation. They will design and make product(s) using graphic media and new technologies to prepare them for the world of work. This course has 60 per cent controlled assessment in order to recognise the importance of practical work within this subject. Design and Technology is a practical subject area which requires the application of knowledge and understanding when developing ideas, planning, producing products and evaluating them. The distinction between Designing and Making is a convenient one to make, but in practice the two often merge. For example, research can involve not only investigating printed matter and people’s opinions, but also investigating e.g. proportions, adhesives, colour, structures and materials through practical work. Candidates should be taught to: - be creative and innovative when designing; - design products to meet the needs of clients and consumers; - understand the design principles of form, function and fitness for purpose; - understand the role that designers and product developers have, and the impact and responsibility they have on and to society; - analyse and evaluate existing products, including those from professional designers; - develop and use design briefs and specifications for product development; - consider the conflicting demands that moral, cultural, economic, and social values and needs can make in the planning and in the designing of products; - consider environmental and sustainability issues in designing products; - consider health and safety in all its aspects; - anticipate and design for product maintenance where appropriate; - design for manufacturing in quantity and be aware of current commercial/industrial processes; - generate design proposals against a stated design criteria, and to modify their proposals in the light of on-going analysis, evaluation and product development; - reflect critically when evaluating and modifying their design ideas and proposals in order to improve the products throughout inception and manufacture; - use, where appropriate, a range of graphic techniques and ICT (including digital media), including CAD, to generate, develop, model and communicate design proposals; - investigate and select appropriate materials and components; - plan and organise activities which involve the use of materials and components when developing or manufacturing; - devise and apply test procedures to check the quality of their work at critical/key points during development, and to indicate ways of modifying and improving it when necessary; - communicate the design proposal in an appropriate manner; - be flexible and adaptable when designing; - test and evaluate the final design proposal against the design specification; - evaluate the work of other designers to inform their own practice; - understand the advantages of working collaboratively as a member of a design team; - understand the need to protect design ideas. 
Candidates should be taught to: - select and use tools/equipment and processes to produce quality products; - consider solutions to technical problems in the design and manufacture process; - use tools and equipment safely with regard to themselves and others; - work accurately and efficiently in terms of time, materials and components; - manufacture products applying quality control procedures; - have knowledge of Computer Aided Manufacture (CAM) and use it as appropriate; - ensure, through testing, modification and evaluation, that the quality of their products is suitable for intended users and devise modifications where necessary that would improve the outcome(s); - recognise the advantages of working as part of a team when designing and making products. Aims and Learning Outcomes This specification in Design and Technology: Graphic Products encourages candidates to be inspired, moved and challenged by following a broad, coherent, satisfying and worthwhile course of study and to gain an insight into related sectors, such as manufacturing and engineering. It prepares candidates to make informed decisions about further learning opportunities and career choices. GCSE specifications in design and technology enable candidates to: - actively engage in the processes of design and technology to develop as effective and independent learners - make decisions, consider sustainability and combine skills with knowledge and understanding in order to design and make quality products - explore ways in which aesthetic, technical, economic, environmental, ethical and social dimensions interact to shape designing and making - analyse existing products and produce practical solutions to needs, wants and opportunities, recognising their impact on quality of life - develop decision-making skills through individual and collaborative working - understand that designing and making reflect and influence cultures and societies, and that products have an impact on lifestyle - develop skills of creativity and critical analysis through making links between the principles of good design, existing solutions and technological knowledge. OCR GCSE (9-1) Food Preparation and Nutrition? Whether it’s training students to give them careers in the food industry or teaching them how to grow and cook food from scratch, our GCSE Food Preparation and Nutrition shows that simple choices can make a big difference. Our new GCSE in Food Preparation and Nutrition will be supported with resources produced by one of the world’s most renowned chefs, Heston Blumenthal®. His natural curiosity and scientific approach to cooking make this an ideal collaboration that will enthuse learners as they discover the essentials of food science, build strong practical cookery skills and a good understanding of nutrition. Exciting and contemporary – It’s designed to motivate students to develop the high level of knowledge, understanding and skills to cook and apply the principles of food science, nutrition and healthy eating. Keeps the subject meaningful – Students learn about improving lives through better knowledge of food, where it comes from and how it affects our bodies. Inspiration from around the world – Explore a range of ingredients and processes from different culinary traditions (traditional British and international) to inspire new ideas or modify existing recipes. Skills for the future – Progression into higher education through general or vocational qualifications and into a career.
Women’s Rights Movement Series, part 1 The first week of our Women’s History Series focuses on the Women’s Rights Movement from 1848-1920. Beginning with the Seneca Falls convention of 1848 and culminating in the 19th Amendment to the Constitution in 1920, women fought publicly for increased rights in the public and private sphere. Abigail Adams foreshadowed the beginning of the movement in 1776 in a letter to her husband John Adams, then serving in the Continental Congress: Do not put such unlimited power into the hands of the Husbands. Remember all Men would be tyrants if they could. If perticuliar care and attention is not paid to the Laidies we are determined to foment a Rebelion, and will not hold ourselves bound by any Laws in which we have no voice, or Representation. 72 years later at Seneca Falls, NY, a coalition of women gathered to craft the “Declaration of Sentiments.” This document proclaimed that “all men and women are equal.” The 18 “repeated injuries and usurpations on the part of man toward woman” listed by the authors of the Declaration began by highlighting the lack of civic participation and ended with accusations that “He has endeavored, in every way that he could to destroy her confidence in her own powers, to lessen her self-respect, and to make her willing to lead a dependent and abject life.” This document clearly defines the demands of women’s rights advocates and highlights the areas they would come to fight for well into the twentieth century: voting rights, marriage equality, employment opportunities, access to education, and the ability to lead an independent life. Stay tuned for the rest of this week for more documents, cartoons, and images to help students understand the early Women’s Rights Movement. An excellent source for women’s history in the US is Ellen DuBois and Lynn Dumenil’s Through Women’s Eyes: An American History with Documents. Check out the remainder of the posts in this series: “Bloomers” part 2 “Anti-Suffrage Cartoons” part 3 “The 19th Amendment” part 4 “Teaching the Women’s Rights Movement” part 5
A rectangular prism is a prism because it has the same cross section along its length; it is also called a cuboid. It can be defined as a solid (3-dimensional) object made up of six faces. A face is a plane surface of a solid object, whether on the top, bottom, or side. In a rectangular prism these faces are rectangular in shape, so a cuboid looks like a box-shaped object. An edge of a rectangular prism is a line segment where two faces intersect. In other words, we can define a rectangular prism as a 3D object bounded by six rectangular faces, two of which are considered the bases of the prism. Cardboard boxes are everyday examples of rectangular prisms, such as a box with a slot cut as a handle or a box for a model train. Let’s find how many edges a rectangular prism has using the following steps. 1) Euler’s formula for a convex polyhedron is used in the form V + F - E = 2, where 'V' is the number of vertices, 'F' is the number of faces and 'E' is the number of edges. 2) Solving for the unknown 'E' gives E = V + F - 2. Now, to calculate the number of edges in a rectangular prism, use this formula with V = 8 and F = 6: E = V + F - 2 = 8 + 6 - 2 = 12. Thus the number of edges in a rectangular prism is 12.
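The same bookkeeping can be written as a tiny Python check (a minimal sketch; the function name and the second example are just for illustration):

```python
def edges_from_euler(vertices: int, faces: int) -> int:
    """Number of edges of a convex polyhedron, from Euler's formula V + F - E = 2."""
    return vertices + faces - 2

# Rectangular prism (cuboid): 8 vertices and 6 faces give 12 edges.
print(edges_from_euler(vertices=8, faces=6))  # 12

# The same formula works for other convex solids, e.g. a triangular pyramid.
print(edges_from_euler(vertices=4, faces=4))  # 6
```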
NCERT Solutions for Class 8th Science Chapter 18 Pollution of Air and Water National Council of Educational Research and Training (NCERT) Book Solutions for Class 8th Chapter: Chapter 18 – Pollution of Air and Water Class 8th Science Chapter 18 Pollution of Air and Water NCERT Solution is given below. What are the different ways in which water gets contaminated? Water gets contaminated by the addition of: (i) Agricultural chemicals: Farmers use excessive amounts of pesticides and fertilizers to increase crop production. These chemicals get carried away to the water bodies due to rains and floods which lead to water pollution. (ii) Industrial wastes: Industries release harmful chemical wastes into water sources, thereby polluting them. (iii) Sewage wastes: Waste materials from kitchens, toilets, and laundry sources are also responsible for contaminating water. At an individual level, how can you help reduce air pollution? An individual can reduce air pollution by: (i) Avoiding the use of cars as much as possible and by using public transport whenever possible. (ii) By not using vehicles for short distances. (iii) By using clean fuels such as LPG and CNG instead of diesel and petrol. (iv) Always disposing the garbage properly and not burning it. (v) Controlling the emissions from vehicles and household chimneys. Clear, transparent water is always fit for drinking. Comment. No. Clear and transparent water is not always fit for drinking. Water might appear clean, but it may contain some disease causing micro-organisms and other dissolved impurities. Hence, it is advised to purify water before drinking. Purification can be done by water purifying systems or by boiling the water. You are a member of the municipal body of your town. Make a list of measures that would help your town to ensure the supply of clean water to all its residents. To ensure the supply of clean water to all residents the following steps must be taken: (i) The main water source must be built in clean surroundings and should be maintained properly. (ii) Chemical methods such as chlorination must be used for purifying water. (iii) The area around water pipes must also be clean. Explain the differences between pure air and polluted air. Pure air contains around 78% nitrogen, 21% oxygen, and 0.03% carbon dioxide. Other gases such as argon, methane, ozone, and water vapours are also present in small quantities. When this composition of air is altered by the addition of harmful substances or gases such as nitrogen dioxide, sulphur dioxide, carbon monoxide, and particulate matter, then the air is said to be polluted. Explain circumstances leading to acid rain. How does acid rain affect us? Burning of fossil fuels such as coal and diesel releases a variety of pollutants such as sulphur dioxide and nitrogen dioxide into the atmosphere. These pollutants react with water vapours present in the atmosphere to form sulphuric acid and nitric acid respectively. These acids come down with the rain, thereby resulting in acid rain. Effects of acid rain: (i) Acid rains damage crops. (ii) Acid rains corrode buildings and structures especially those made of marble such as Taj Mahal. Which of the following is not a greenhouse gas? (a) Carbon dioxide (b) Sulphur dioxide Describe the ‘Greenhouse Effect’ in your own words. Greenhouse effect may lead to global warming, i.e., an overall increase in the average temperature of the Earth. Greenhouse effect is caused by greenhouse gases. 
Examples of greenhouse gases include carbon dioxide, methane, and water vapour. When solar radiations reach the Earth, some of these radiations are absorbed by earth and then released back to the atmosphere. Greenhouse gases present in the atmosphere trap these radiations and do not allow heat to leave. This helps in keeping our planet warm and thus, helps in human survival. However, an indiscriminate increase in the amount of greenhouse gases can lead to excessive increase in the Earth’s temperature leading to global warming. Prepare a brief speech on global warming. You have to deliver the speech in your class. Global warming is an increase in the average temperature of the Earth’s surface. It occurs as a result of an increased concentration of greenhouse gases in the atmosphere. The greenhouse gases include carbon dioxide, methane, and water vapour. These gases trap solar radiations released back by the Earth. This helps in keeping our planet warm and thus, helps in human survival. However, an increase in the amount of greenhouse gases can lead to an increase in the Earth’s temperature leading to global warming. Describe the threat to the beauty of the Taj Mahal. Acid rain is a major threat to the beauty of the Taj Mahal. When acid rain falls on the monument (that is completely made of marble), it reacts with marble to form a powder-like substance that is then washed away by the rain. This phenomenon is known as marble cancer. Also, the soot particles emitted from the Mathura oil refinery located near Agra are leading to the yellowing of the marble. Why does the increased level of nutrients in the water affect the survival of aquatic organisms? An increase in the level of nutrients in a water body leads to an excessive increase in the population of algae in the water body. When these algae die, they serve as food for decomposers. A lot of oxygen is utilised in this process, consequently leading to a decrease in the level of oxygen dissolved in the water body. This in turn causes fishes and other aquatic organisms to die.
Gerunds and Infinitives Fill in the Blanks For this language arts worksheet, students read the definitions and usage of gerunds and infinitives. They fill in the blanks in 10 exercises that show their understanding of gerunds and infinitives. 4 Views 11 Downloads Phrases and Clauses Worksheet There are many different types of clauses and phrases, and your class can practice them while reading and adding to a story about Grammar Man and his battle with the Fragmenter, the Cell Phony, and other grammar villains! For the first... 7th - 12th English Language Arts CCSS: Adaptable Gerund or Infinitive – Fill in the Correct Form Middle schoolers love listening to music, and they also love to listen to music. So what's the difference? Spell out the nuanced ways to use gerunds and infinitives with a 50-question grammar exercise. Given short sentences and verbs,... 4th - 8th English Language Arts CCSS: Adaptable Vocabulary Study: A Wrinkle in Time by Madeleine L’Engle Build vocabulary while reading A Wrinkle in Time by Madeleine L'Engle. Provided here is a great resource to use as a companion piece to your literature instruction. The packet is made up of 50 words separated into five lists of ten. Each... 6th - 8th English Language Arts Gerunds and Infinitives Learning proper grammar rules for a middle school student can be difficult, especially in a texting world, but this resource demonstrates how the verb changes by adding a gerund or infinitive. Keep up the texting, but use this to... 5th - 8th English Language Arts CCSS: Adaptable The Learning Network: Fill In 2011 Commencement Speeches Meant to be used with the article "Words of Wisdom" also available on the New York Times website, this resource contains a fill in the blank exercise where learners complete the article by supplying missing words. Use words from the word... 7th - 12th English Language Arts The Learning Network: Alligators Everywhere Fill-In Meant to be used with the article, "In Florida, the Natives Are Restless" (included here), this is a great source of high-interest, nonfiction reading. A fill in the blank vocabulary activity and an activity focusing on reading... 4th - 10th English Language Arts
Introduction to the Importance of Soil Analysis Soil analysis is an essential process for farmers, gardeners, landscapers and anyone who wants to grow healthy plants. Soil analysis involves testing the chemical, physical and biological properties of soil to provide information that can help you make informed decisions about how best to manage your land. The results of a soil test will give you insight into how much nutrients are present in your soil and whether there is any need for supplementation. Nutrients such as nitrogen, phosphorus and potassium are essential for plant growth; however, too much or too little of these nutrients can have negative effects on plant health. Additionally, soil analysis can tell you about the pH level of your soil. The pH level is a measure of acidity or alkalinity in the soil. Different plants thrive under different pH conditions; therefore it’s important to know what type of plants are suitable for your particular type of soil. Apart from nutrient levels and pH levels, other factors such as organic matter content also play a significant role in determining plant health. Organic matter serves as a source of nutrients for plants while also improving water retention capacity in soils with poor drainage. In summary, by analyzing your soils regularly using reliable methods like those provided by modern technology through innovative products like “soil testers,” you will be able to improve plant yields while reducing costs associated with fertilization programs or other corrective measures needed due to poor quality soils. Investing time into understanding more about how this valuable resource works could lead growers towards success! Traditional Methods of Soil Analysis Soil analysis has been around for centuries, and traditional methods of analyzing soil include chemical tests, visual observations, and physical measurements. These methods are still used today in combination with newer technology to provide a full picture of soil properties. Chemical tests involve using reagents to determine the presence and concentration of various elements in the soil. The most common tests measure pH, nutrients such as nitrogen, phosphorus, and potassium, organic matter content, cation exchange capacity (CEC), salinity levels, and heavy metals. These tests can be performed on-site or at a laboratory. Visual observations involve assessing the color, texture, structure, and consistency of the soil. Color can indicate mineral content or organic matter levels; texture refers to the size distribution of mineral particles in the soil; structure describes how those particles are arranged; consistency refers to how easily the soil crumbles or forms clumps. Physical measurements include determining water-holding capacity (WHC), porosity (the amount of open space between particles), bulk density (the weight per unit volume), permeability (how easily water moves through soil), compaction level (how tightly packed it is), erosion potential (likelihood that topsoil will wash away during rain events). All these traditional methods have their limitations: they require specialized knowledge to perform correctly; they may not account for variations within small areas; results may take days or weeks to obtain; samples must be taken from multiple locations within each field to get an accurate average value for each parameter measured. 
Despite these shortcomings compared with modern digital technologies such as spectrometry or isotopic analysis, which offer faster results without destroying samples, traditional techniques remain vital tools when combined with the new instruments available today. Portable X-ray fluorescence spectrometers, for example, provide rapid analyses not only of major elements like calcium but also of trace minerals including cadmium, manganese, zinc, copper, nickel, chromium, molybdenum, and cobalt. Limitations of Traditional Methods Traditional soil testing methods have been in use for many years, but they are not without limitations. Here are some of the most significant drawbacks: 1. Time-consuming process Traditional soil testing requires collecting samples from various locations, transporting them to a laboratory, preparing the samples for analysis, and conducting multiple tests on each sample. This process can take several weeks or even months to complete. 2. Limited sample size and coverage Traditional soil testing is typically conducted on a small number of samples collected from random locations within a larger area. This limited sample size and coverage can result in inaccurate assessments of overall soil health. 3. Expensive equipment Many traditional soil testing methods require expensive equipment such as spectrophotometers or atomic absorption spectrometers, which can be costly to purchase and maintain. 4. Invasive sampling techniques Collecting samples using traditional methods often involves digging up large amounts of soil, which can damage the surrounding ecosystem and disturb plant roots. 5. Lack of real-time data collection In many cases, results from traditional soil tests must be sent back to a lab for analysis before any conclusions can be drawn about the state of the land being tested. Overall, while traditional methods have provided valuable insight into soil health over time, there remains ample room for improvement with advanced technologies like electronic sensors that provide real-time data without invasive sampling techniques or lengthy wait times between taking measurements and receiving results! Introduction to the Soil Tester A soil tester is a device used to measure various properties of soil, including pH, moisture content, and nutrient levels. This information can be helpful for gardeners, farmers, and landscapers who want to ensure that their plants receive the proper nutrients and environment to grow. Soil testers come in several different types. Some are handheld devices that you insert into the soil and read the results on a display screen. Others require you to take a sample of soil and send it off to a lab for analysis. There are also digital probes that connect with your smartphone or tablet via Bluetooth technology. One important property of soil that can be measured by a tester is its pH level. The pH scale ranges from 0-14, with 7 being neutral. Most plants prefer a slightly acidic soil with a pH between 6 and 7. If your soil is too alkaline (pH above 7) or too acidic (pH below 6), it can affect plant growth and nutrient absorption. Another important measurement provided by some testers is moisture content. Plants need water to survive, but overwatering can lead to root rot and other problems. By using a moisture meter, you can determine when it’s time to water your plants without risking damage from overwatering.
Finally, some advanced testers also measure nutrient levels in the soil such as nitrogen (N), phosphorus (P), potassium (K), calcium (Ca), magnesium (Mg) etc., which are crucial for plant growth at different stages of development. Overall, investing in a good quality soil tester can save time, money & effort while helping improve plant health & yield – whether one has an indoor garden bed or acres of farmland! How the Soil Tester Works A soil tester is a handy tool for any gardener or farmer who wants to determine the quality of their soil. The device works by measuring various factors in the soil, including pH levels, moisture content, and nutrient concentrations. To use a soil tester, simply insert the probe into the ground at the desired location. Most testers have probes that are several inches long so that you can get an accurate reading from deeper within the soil. The probe will measure different aspects of your soil and display them on a screen or dial. One important factor that a soil tester measures is pH levels. A healthy pH level for most plants falls between 6 and 7 on a scale of 0-14, with slightly acidic soils being more desirable than alkaline ones. If your pH level falls outside of this range, you may need to adjust it using additives like lime or sulfur. Another critical aspect measured by a soil tester is moisture content. Different plants have varying water requirements depending on their species and stage of growth. A good rule of thumb is to keep your garden’s topsoil consistently moist without becoming waterlogged. Finally, nutrient concentration readings from your soil tester help indicate whether your garden has adequate amounts of essential nutrients such as nitrogen (N), phosphorus (P), potassium (K), calcium (Ca), magnesium(Mg) etc.. Plants require these nutrients in varying proportions at different stages in their life cycles; if they’re lacking one or more elements it could negatively affect plant growth & crop yields.. Overall, knowing how to properly use and interpret results from a reliable Soil Tester can be invaluable for any gardener looking to optimize plant health & yield productivity while minimizing unnecessary expenses associated with suboptimal conditions or mismanagement strategies! Benefits of the Soil Tester A soil tester is an essential tool for every gardener, farmer, or landscaper. It provides accurate information about the soil’s pH level, moisture content, and nutrient levels. Here are some benefits of using a soil tester: Determine the Soil pH Level: The pH level of your soil is crucial to growing healthy plants. A soil tester helps you determine whether your soil is acidic, neutral or alkaline. Most plants grow well in slightly acidic to neutral soils with a pH range of 6 to 7. Measure Moisture Content: An important factor that affects plant growth is how often you need water them. A soil tester can help you measure the moisture content in your garden beds and pots so that you know when it’s time to water them again. Determine Nutrient Levels: Nutrients like nitrogen (N), phosphorus (P), and potassium (K) are critical for plant growth and development. A lack of these nutrients can cause stunted growth or poor yields from crops grown in that area. By using a soil tester, one can learn which nutrients are deficient and add fertilizers accordingly. 
Saves Money & Time: A soil tester means less trial-and-error planting, since it shows which plants are best suited to the land based on its composition. This saves time spent researching crop varieties and money spent on unnecessary inputs, such as fertilisers a particular crop does not need, and it helps avoid costly losses from incorrect watering schedules that would otherwise reduce yields and profits at harvest. Overall, owning a soil tester is an investment that pays off in the long run. It provides the necessary information to adjust and improve soil conditions for optimal plant growth, resulting in healthier plants and higher yields. Case Studies and Results The soil tester has been extensively tested across various agricultural settings, with promising results. Here are some case studies showcasing the effectiveness of the device: Case Study 1: Large-Scale Farming Operation in Iowa Agricultural experts conducted an experiment on a large-scale farming operation in Iowa to test the efficacy of soil testers. They used a combination of traditional testing methods and soil testers to compare the accuracy and reliability of both techniques. The results showed that while traditional testing methods were accurate, they took significantly longer than using a soil tester. In addition, because the soil tester provided real-time data, farmers were able to make adjustments immediately, which improved crop yield. Case Study 2: Small Family Farm in California A small family farm in California was struggling with poor crop yields due to insufficient nutrient levels in their soil. After using a soil tester, they discovered that their potassium levels were incredibly low. By adjusting their fertilization practices based on this information, they saw a significant improvement in crop yields during harvest season. Overall, tests have shown that using a soil tester can lead to more efficient use of resources such as fertilizer and water by providing valuable insights into the health of crops before any significant damage occurs. This technology provides farmers with critical data at their fingertips, which helps them make informed decisions about managing crops for optimal growth potential. In conclusion, incorporating this technology into agricultural practices benefits not only individual farms but also global food security: it increases efficiency and reduces resource use by supporting management strategies based on reliable, data-driven decision making. Future Implications and Advancements The development of soil testers has opened up new possibilities for agriculture and environmental management. With the help of these devices, farmers can now easily determine the nutrient levels and pH values in their soil, enabling them to make informed decisions about fertilization and crop selection. This will not only increase agricultural productivity but also improve sustainability by reducing the use of synthetic fertilizers. In addition to farming applications, soil testers have an important role to play in environmental management.
By identifying areas with high levels of contaminants such as heavy metals or pesticides, scientists can take action to prevent further contamination or remediate existing pollution. The ability to accurately measure soil properties also opens up opportunities for monitoring changes over time due to climate change or land-use practices. There are several advancements that could be made in the field of soil testing technology. One area where significant improvements could be made is in increasing the accuracy and reliability of sensors used in current devices. This would result in more precise measurements that could provide even greater insight into soil health. Another potential advancement is the integration of artificial intelligence (AI) algorithms into data analysis software used with these devices. AI could help process vast amounts of data quickly and efficiently, allowing researchers to identify trends or anomalies that may not be immediately apparent using traditional methods. Finally, there is much room for innovation when it comes to developing new types of sensors that can measure a wider range of physical properties such as temperature, moisture content or particle size distribution. New sensor technologies could provide even more detailed information about soils than currently possible with existing equipment. Overall, the future looks bright for those working on improving soil testing technology. As our understanding grows of how different factors – from climate change impacts to land-use practices – affect soil health and productivity, there will continue to be a need for innovative solutions like advanced sensor systems combined with AI-assisted analytics, so we can manage our resources more sustainably. In conclusion, a soil tester is an essential tool for anyone who wants to maintain healthy soil in their garden or farm. It helps you determine the pH level, nutrient content and overall health of your soil. With accurate measurements, you can make informed decisions on what amendments to add in order to improve the quality of your plants and crops. There are various types of soil testers available on the market today – from simple handheld devices to more advanced digital ones that provide detailed information about your soil. Choosing the right type will depend on your needs, budget and gardening goals. Regardless of which one you choose, using a soil tester can save you time and money by preventing over-fertilization or under-fertilization. By maintaining optimum conditions for plant growth through regular testing and adjustment of nutrient levels in the soil, you can be assured of bumper harvests year after year. Call to Action Investing in a good quality soil tester is an investment towards healthy plants and bountiful yields. If you haven’t already done so, consider purchasing one today! Check online reviews before making a purchase as this will help ensure that the product meets your specific requirements. Remember that not all soils are created equal – each has its own unique characteristics that require different levels of attention when it comes to tending them properly. Regularly testing your soils with a reliable device will give you insights into what’s happening below ground level – allowing for better decision-making regarding fertilizers or other additives needed for optimal plant growth. Soil testers come at varying price points; however, it’s important to weigh not only cost but also reliability when selecting one.
Investing wisely now could pay dividends later through increased crop yields, making it a worthwhile investment long-term. Ben is one of the founders and editor of Structured Living HUB. His interests are automotive and architecture. For over 10 years he worked as a modular house contractor in the United States.
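To make the soil-tester guidance above concrete, here is a minimal Python sketch of how readings from a handheld tester might be turned into simple recommendations. The pH rules follow the 6-7 range and the lime/sulfur adjustments mentioned earlier; the moisture thresholds are illustrative assumptions, not agronomic advice.

```python
def interpret_soil(ph: float, moisture_pct: float) -> list:
    """Turn raw tester readings into rough, human-readable suggestions."""
    advice = []

    # pH: most plants prefer slightly acidic to neutral soil (about 6-7).
    if ph < 6.0:
        advice.append("acidic soil: consider adding lime to raise the pH")
    elif ph > 7.0:
        advice.append("alkaline soil: consider adding sulfur to lower the pH")
    else:
        advice.append("pH is in the 6-7 range most plants prefer")

    # Moisture: thresholds are placeholders; suitable values depend on
    # the plant and the soil type.
    if moisture_pct < 20:
        advice.append("soil reads dry: water soon")
    elif moisture_pct > 60:
        advice.append("soil reads very wet: hold off watering to avoid root rot")
    else:
        advice.append("moisture looks adequate")

    return advice

print(interpret_soil(ph=5.4, moisture_pct=15))
```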
The terms carbon offset and carbon offset credit (or simply “offset credit”) are used interchangeably, though they can mean slightly different things. A carbon offset broadly refers to a reduction in GHG emissions – or an increase in carbon storage (e.g., through land restoration or the planting of trees) – that is used to compensate for emissions that occur elsewhere. A carbon offset credit is a transferable instrument certified by governments or independent certification bodies to represent an emission reduction of one metric tonne of CO2, or an equivalent amount of other GHGs. The purchaser of an offset credit can “retire” it to claim the underlying reduction towards their own GHG reduction goals. Establishing a common denomination for different greenhouse gases CO2 is the most abundant GHG produced by human activities, and the most important pollutant to address for limiting dangerous climate change. However, human beings create and emit numerous other GHGs, most of which have a far greater heat-trapping effect, pound for pound, than CO2. The most prevalent of these gases are methane (CH4), nitrous oxide (N2O), hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), nitrogen trifluoride (NF3), and sulfur hexafluoride (SF6). Fully addressing climate change will require reducing emissions of all GHGs. Scientists and policymakers have established “global warming potentials” (GWPs) to express the heat-trapping effects of all GHGs in terms of CO2-equivalents (annotated as “CO2e”). This makes it easier to compare the effects of different GHGs and to denominate carbon offset credits in units of CO2-equivalent emission reductions.
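As an illustration of how CO2-equivalence works in practice, here is a small Python sketch. The GWP-100 values are representative figures roughly in line with recent IPCC assessment reports, but programs differ in which report they use, so treat the numbers as placeholders rather than authoritative values.

```python
# Illustrative 100-year global warming potentials (GWP-100).
GWP_100 = {
    "CO2": 1,
    "CH4": 28,     # methane
    "N2O": 265,    # nitrous oxide
    "SF6": 23500,  # sulfur hexafluoride
}

def to_co2e(tonnes: float, gas: str) -> float:
    """Convert tonnes of a greenhouse gas into tonnes of CO2-equivalent."""
    return tonnes * GWP_100[gas]

# Example: a project that avoids 10 tonnes of methane emissions could be
# credited with 10 * 28 = 280 offset credits of 1 tCO2e each.
print(to_co2e(10, "CH4"))  # 280.0
```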
Hydrogen is the most abundant element in the universe – rough estimates put it at 74% of all “standard” matter. Needless to say, a shortage on earth is unlikely. Production: Hydrogen is present in many materials, but is found in the greatest quantities in water. Use: Skai uses Hydrogen produced via electrolysis: an electrochemical process that extracts hydrogen from water using renewable electricity. Clean, Safe Fuel Cells Hydrogen Fuel cells, like batteries, produce electricity. However, batteries store their energy internally, while a fuel cell converts a fuel, such as hydrogen, into electricity. It’s a simple process, invented back in 1838. In basic terms, hydrogen and oxygen are introduced into the fuel cell on opposite ends. An electrochemical reaction strips the hydrogen molecules of electrons, creating electricity which travels into the vehicle. Meanwhile, the "ionized" hydrogen molecules pass through a membrane in the fuel cell. The electrical circuit is completed by the electrons returning from the vehicle, which combine with the oxygen and ionized hydrogen molecules to create the system's only emission – pure water. NASA's Gemini and Apollo space missions relied on fuel cells to power the command module and provide drinking water for the astronauts. NASA continued to use them in the space shuttle missions. Hydrogen has been used across industry and aerospace for more than 80 years. Over that time, rigorous safety standards have been established, just as they have been for other fuels. In fact, hydrogen is considered to be safer than gasoline. This is due to the fact that hydrogen is 14x lighter than air – it rises rapidly and dissipates quickly. The bottom line: with any fuel, proper safeguards must be implemented to reduce risks to their lowest possible levels. Skai has gone a step further, with double-walled stainless steel fuel tanks that can stop a .45 caliber bullet, and an array of sensors and safety components to mitigate any potential risks. The Cleanest End-to-End Solution Hydrogen fuel cells have the lowest ecological footprint of any practical energy system on the planet. That's because they're more than just non-polluting during use like batteries – their "cradle-to-grave" performance is far more environmentally friendly. Materials & Production: Skai's fuel cell production uses environmentally neutral materials in addition to micro-level use of platinum. Its hydrogen fuel is sourced using renewable energy in a non-polluting process. Conversely, the mining and production of lithium, nickel and cobalt for batteries is energy-intensive, polluting and depletes natural resources. Hydrogen fuel cells offer a dramatically cleaner solution. Operation: Skai’s hydrogen can be produced locally using solar, wind or hydroelectric power. As fuel for Skai's vehicle's fuel cells, it can fly 4-5 people for up to 4 hours. Batteries are typically charged using grid power, while today's grid is still largely fossil-fuel powered. Battery-powered air mobility systems are projected to fly 2 people for less than 30 minutes. End Of Life: Skai’s fuel cells have a long continuity of use. 95% of the precious metals in fuel cells can be recycled and a majority of the other components can be reused. At end-of-life, batteries perform poorly in terms of environmental impact, with safe disposal issues, low recycling rates, and ecological toxicity that can affect human health. 
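The electrochemical process described in the fuel-cell section above can be summarized by the standard proton-exchange-membrane half-reactions; these are textbook chemistry shown here for reference, not anything specific to Skai's design:

```latex
\begin{align*}
\text{Anode:}   &\quad \mathrm{H_2 \;\rightarrow\; 2\,H^+ + 2\,e^-} \\
\text{Cathode:} &\quad \mathrm{\tfrac{1}{2}\,O_2 + 2\,H^+ + 2\,e^- \;\rightarrow\; H_2O} \\
\text{Overall:} &\quad \mathrm{H_2 + \tfrac{1}{2}\,O_2 \;\rightarrow\; H_2O} + \text{electrical energy} + \text{heat}
\end{align*}
```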
Pound for Pound In addition to the environmentally positive qualities of hydrogen fuel, it’s also the ideal energy for air mobility. Hydrogen fuel cell systems provide 3 to 6 times greater energy density by mass than lithium-ion batteries. Batteries are a great solution for some applications, but their heavy weight and lower relative energy make them less than ideal for flight. Skai's current configuration gives it the ability to fly continuously for up to 4 hours, and with auxiliary tanks it can remain airborne longer for specialized applications. Hydrogen is gaining momentum as a fuel system around the world. The combination of abundance, energy capability and ecobalance makes it an attractive option. Today, hydrogen fuel cells are powering more than 20,000 forklifts in the US, typically replacing battery-powered systems that require long charging times and use large amounts of space. In addition, more than 240 megawatts of backup power will be generated in the US – enough to power 240,000 homes. For the first time, fuel cell mobility is the #1 trend among auto executives. Germany plans to have 200 hydrogen refueling stations by 2023, and Japan plans to have 320 by 2025. This global move towards hydrogen means that infrastructure will continue to develop and costs will become increasingly competitive with other, less environmentally-positive energy types. In addition to hydrogen’s positive environmental and energy performance, it offers an additional promise: increased energy independence. The benefits are far-reaching: reduced dependence on fossil fuels as well as raw materials necessary for battery production, leading to improved energy security. Increased domestic employment and economic growth with an enhanced clean-energy economy. We believe that Skai will help accelerate this positive movement.
Children will participate in a series of science experiments that have them creating various types of crystals while learning the importance of following instructions and piquing curiosity by looking for differences in similar objects. - No materials are needed for this activity. Tell the participants that together you are going to tell a story that has a beginning, middle, and end. - Have the participants sit in a circle. Begin the story by saying, “Once upon a time…” and move on to the next child in the circle to continue the story by contributing one word (ie. “Once upon a time there…”). - Move on to the next child in the circle to also contribute one word (ie. “Once upon a time there was…”). - Continue around the circle until the story is completed. You may need to go around a few times in order to complete it. - Instead of each child contributing one word, have them share two or three words or a whole sentence. This is great for a shorter activity or a more complex story. - Challenge your group by instructing them to complete the story in one round so that by the time they reach the last person in the circle, the story is complete. This works best with larger or older groups. - Have them work together to tell a story by giving them conditions like, introduce a problem and solution, include a villain, or set the story under water. - Chances are some participants will be motivated to throw off the direction of the story. Try to encourage your group to be creative but to also contribute words that make sense to the direction of the story. As well, encourage the participants to accept the direction of the story even if it does not align with how they envisioned the story going. Ask the participants a few questions that will help them to reflect upon the story they just created and address things regarding plot, sentence structure, characters, etc.: - What was the story about? How did it make you feel? - Who are the characters? - Did the story make sense? - What was the problem and how was it solved? Helmenstine, Anne Marie. “Crystal Growing Recipes: Recipes for Common Crystal Growing Solutions.” ThoughtCo. Science, Tech, Math: Science, last updated March 6, 2017. Retrieved May 23, 2018, online from: https://www.thoughtco.com/crystal-growing-recipes-606222
Fort Niagara, although no longer a Canadian fort, still holds a vital piece of Canadian history. From Queens Royal Park in Niagara-on-the-Lake, it sits like a sentinel, majestically overseeing everything that enters and leaves the Niagara River. From this vantage point it appears as if time has never passed since the days when Indian chiefs traded furs with the French, and the land west of the mighty Niagara River was a vast unspoiled wilderness. Fort Niagara was built by the French in 1726-1727 to facilitate trade with the Aboriginal people. The French explorer La Salle left his mark by building a small fort in 1670, less than a mile north of where Youngstown now stands. The French gained control of the Great Lakes area and by 1727 built the "Castle" which became the centerpiece of Old Fort Niagara. In 1759, a large force of British soldiers under the command of General Prideaux was sent up the Mohawk River and along Lake Ontario to lay siege to the French fort. During the battle, Prideaux was killed and Sir William Johnson took command. Through the efforts of Sir William Johnson, the British acquired Fort Niagara in 1759. Like their French predecessors, the British would continue to operate the fort mainly as a trading post and a rendezvous destination for expeditions setting off into the interior. Under the supervision of Sir William Johnson, the British would foster an allegiance with the First Nations that would hold steadfast in the turbulent years that were about to unfold.
1.1 Regular -ar Verbs in the Present Tense: Worksheet Answers The present tense describes what is happening now or what happens habitually. To conjugate a regular -ar verb in the present tense, drop the -ar from the infinitive to get the stem, then add the ending that matches the subject: -o, -as, -a, -amos, -áis, -an. For example, hablar (to speak) becomes hablo, hablas, habla, hablamos, habláis, hablan. Because the ending already shows who performs the action, subject pronouns such as ella can often be left out. Most verbs are regular, but some are irregular, so check each verb before conjugating. Do not confuse the present with the preterite (simple past), which uses the same stem with a different set of endings: -é, -aste, -ó, -amos, -asteis, -aron. The written accent matters: hablo means “I speak” (present), while habló means “he/she spoke” (preterite), so watch the accent marks when checking your answers. To complete the worksheet, either click to select the -ar verbs and drop them onto the grid, or swap them for the regular -ar verbs you learned earlier using the drop-down menus, then add the correct present-tense ending for each subject.
MMR (measles, mumps, rubella) Vaccine - Issued as a single vaccine in 1971 in response to high rates of disease - Booster dose added for older children in 1990 in response to rising cases of measles - Available for children 1 year or older and adults - To protect babies too young to receive the vaccine (< 12 months) from the illness, our community as a whole needs to be vaccinated at 95%. - Current MMR vaccine rate has dropped to 92% due to vaccine refusal. - Side effects from the vaccine: 1-3 days after the vaccine: redness, swelling or local skin reaction (common), allergic reaction (rare) 1-10 days after: fever (15%), fatigue (10%), febrile seizure (<< 1%) 5-14 days after: rash (5% of children), joint pain (< 1% of children 4yo) 2-6 weeks after the vaccine: ITP (low platelets – rare and self-limited) - It is very clear that MMR vaccine does NOT cause autism. Measles, the Disease New Study Shows Why Anti-Vaccination Thinking is Deadly 1960s: almost everyone caught measles as a child and while most improved without long term consequences, of the 3-4 million people infected each year in the US, approximately 500 died and approximately 4,000 developed encephalitis (swelling of the brain). 1980s: MMR vaccine was introduced in the 1970s and the measles rate dropped by 80% 1990s: Outbreak of measles occurred prompting a second booster dose of the MMR vaccine. 2000s: Measles was declared eliminated within the USA, but still endemic in other countries. 2010s: Measles outbreaks have occurred in at least 27 different US states and in 2014 there were 667 reported cases. The cases have occurred in communities of poor vaccination rates and the virus has been introduced by persons traveling outside the USA. - Incubation Period of 1-2 weeks without symptoms following exposure/infection - 1-3 days of the illness: The 3 C’s: cough, coryza (runny nose) and conjunctivitis (pink eye) along with fever and fatigue. - 3-7 days: Koplik spots (in the mouth) and Rash develop along with high fever spikes - 1-3 weeks: complications may occur including ear infection (10%), pneumonia (5%), encephalitis (1 out of 1000 cases, swelling of the brain that leads to seizures, coma, deafness and intellectual disability), Death (1 out of 1000 cases).
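The ~95% community vaccination figure cited above can be related to the classic herd-immunity threshold, 1 - 1/R0, where R0 is the number of people one case infects in a fully susceptible population. A minimal Python sketch follows; the R0 values used are commonly cited estimates for measles, not exact figures.

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to stop sustained spread."""
    return 1 - 1 / r0

for r0 in (12, 15, 18):  # measles R0 is often estimated in this range
    print(f"R0 = {r0}: {herd_immunity_threshold(r0):.0%} immunity needed")
# R0 = 12 gives ~92%, R0 = 18 gives ~94% -- close to the 95% coverage goal above
```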
When you're trying to figure out all the possibilities from different options, it can be helpful to make a tree diagram. In this tutorial, you'll see how to use a tree diagram to figure out how many different outfits can be created from the possible shirts, bottoms, and shoes given. Check it out! Calculating probabilities? Take a look at this tutorial and see how to figure out the probability of independently drawing certain cards from a deck! Sometimes probabilities depend on the outcomes of other events. Check out this tutorial to see probabilities of dependent events in action! In this word problem, you'll see how to use the Fundamental Counting Principle to find the number of possible lunch combinations! Take a look! Simulators are a great way to model an experiment without actually performing the experiment in real life. This tutorial looks at using a simulator to figure out what might happen if you randomly guessed on a true/false quiz. When you're conducting an experiment, the outcome is a very important part. The outcome of an experiment is any possible result of the experiment. Learn about outcomes by watching this tutorial! In an experiment, it's good to know your sample space. The sample space is the set of all possible outcomes of an experiment. Watch this tutorial to get a look at the sample space of an experiment! Organization is a big part of math. In this tutorial, you'll see how organizing information given in a word problem can help you solve the problem and find the answer! When you perform an experiment, how do you figure out all the possible outcomes? Follow along with this tutorial to see!
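The outfit example can also be worked out in a few lines of Python using the same idea as the tree diagram: each complete branch is one element of a Cartesian product. The item names below are made up for illustration.

```python
from itertools import product

shirts = ["red shirt", "blue shirt", "green shirt"]
bottoms = ["jeans", "shorts"]
shoes = ["sneakers", "sandals"]

# Every (shirt, bottom, shoe) combination is one branch of the tree diagram.
outfits = list(product(shirts, bottoms, shoes))

print(len(outfits))  # 3 * 2 * 2 = 12, the Fundamental Counting Principle
print(outfits[0])    # ('red shirt', 'jeans', 'sneakers')
```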
Definition of Close To Metal “Close to metal” is a technology term referring to a software or programming approach that involves working directly with a computer’s hardware components or an operating system’s low-level functionality. By working with these low-level components, a software or system is able to achieve higher levels of performance and efficiency. However, this approach often requires greater technical expertise and flexibility to handle hardware specificities. The phonetic alphabet representation of the keyword “Close To Metal” is:Charlie Lima Oscar Sierra Echo / Tango Oscar / Mike Echo Tango Alpha Lima - Close to Metal refers to programming or designing approaches that involve direct communication with the hardware of a system, resulting in improved performance and efficiency. - Developers working close to the metal can leverage low-level programming languages such as Assembly, C, or C++ to create highly optimized solutions for specific hardware platforms or operating systems. - While working close to the metal might increase the possibilities for optimizations, it can often be more challenging due to the increased complexity, more difficult debugging, and reduced portability between different systems. Importance of Close To Metal The technology term “Close to Metal” is important because it indicates that a particular software, programming language, or system operates with minimal abstraction layers, allowing it to directly access and manipulate a computer’s hardware resources. This close proximity to the physical components enables developers to achieve optimal performance, faster execution speeds, and considerable fine-tuning of their applications or systems. Furthermore, by being closer to the metal, programmers can fully understand and exploit the capabilities of the hardware, resulting in more efficient problem-solving, innovative solutions, and the potential to push the limits of technological development. Close to Metal (CTM) refers to the programming practice where developers write code that interacts directly with a computer’s hardware or at a low-level layer within a system. The purpose of working with such low-level access is to harness increased control and performance compared to traditional high-level programming languages. In pursuit of higher efficiency, CTM developers gain fine-grained control over the hardware resources, which enables them to optimize their software by eliminating unnecessary overheads and minimizing latency. This technique is particularly beneficial in systems where performance and resource utilization are critical, such as in video games, high-frequency trading, and embedded systems. The ability to interact closely with the hardware allows developers to make the most of their system’s capabilities, as it eliminates the need for additional layers of abstraction that can impede a program’s speed and efficiency. As a result, Close to Metal programming not only enables the execution of more complex tasks in a shorter amount of time, but it can also reduce power consumption, making it ideal for battery-operated devices. While CTM-based development can be more challenging due to its complexities and the need for platform-specific knowledge, the trade-off is the potential for remarkable improvements in performance and resource management. 
Examples of Close To Metal “Close To Metal” (CTM) refers to creating software or using techniques that allow programmers to work directly with a computer’s hardware, bypassing abstraction layers like operating systems and APIs. This approach can enhance performance, reduce overhead, and provide more accurate control over how the system operates. Embedded Systems: In the field of embedded systems, such as those in microcontrollers, CTM programming is often essential due to resource constraints like limited memory or processing power. Programmers in this domain need to optimize performance by manually interacting with hardware components like GPIO pins, serial ports, or pulse-width modulation (PWM) signals. Example: A real-time control system for an industrial robot arm, where precise movement and minimal latency are crucial, requires CTM techniques to achieve the necessary performance. Video Game Consoles: In the gaming industry, especially during earlier console generations, developers often had to resort to CTM programming techniques to maximize performance and take full advantage of the available hardware resources. This can still be seen in modern gaming where performance optimizations at the hardware level are crucial to deliver outstanding experiences. Example: Developers of games for the original Sony PlayStation used CTM techniques to optimize performance and get the most out of the console’s limited hardware capabilities. High-Performance Computing: In HPC, working close to the metal can help achieve efficiency and performance gains, as more generic software solutions might not take advantage of specific hardware features or may introduce performance overheads. This is particularly important in supercomputing and scientific simulations, where achieving maximum performance can significantly impact research results. Example: Supercomputers like Summit at Oak Ridge National Laboratory or Fugaku in Japan often utilize CTM techniques to optimize the performance of complex simulations and artificial intelligence workloads. Close To Metal FAQ What does “Close To Metal” mean in computer programming? In computer programming, “Close To Metal” refers to programming languages, techniques, or tools that provide a high level of control over hardware resources, such as memory and CPU, without much abstraction. This term is used to describe programming closer to the computer’s hardware level, allowing developers to optimize performance, implement low-level operations, and interact directly with the hardware. What are some common examples of “Close To Metal” programming languages? Examples of “Close To Metal” programming languages include C, C++, Assembly, and even Fortran. These languages allow programmers to write code that closely interacts with the hardware and manage resources more efficiently. Why would a programmer choose a “Close To Metal” programming language? A programmer might choose a “Close To Metal” programming language to achieve better performance, tighter control over system resources, or to work with specific hardware features. It can be beneficial in situations where the programmer needs to optimize code for speed, memory usage, or low-level operations, such as systems programming, embedded systems, or high-performance computing. Are there any drawbacks to “Close To Metal” programming languages? While “Close To Metal” programming languages offer increased control and performance, they also have some drawbacks.
They can be more difficult to learn and write due to the low-level nature of the languages, and often have less abstraction. This could lead to longer development times, increased likelihood of errors, and reduced portability of the code. Related Technology Terms - Low-level programming - Hardware optimization - Assembly language - System programming - Bare-metal development
How to Draw a Samurai Samurai were the military nobility and officer caste of medieval and early-modern Japan. In Japanese, they are usually referred to as bushi, meaning ‘warrior’, or buke. In this quick tutorial you’ll learn how to draw a samurai in nine easy steps – great for kids and beginners.
Creating a Low Cost Ground Penetrating Radar with Two HackRFs A ground penetrating radar (GPR) is a system that uses RF pulses between 10 MHz and 2.6 GHz to image up to a few meters below the ground. A typical GPR system consists of a transmitting radio and antenna that generates the radar pulse aimed towards the ground, and a receiving radio that receives the reflected pulse. GPR is typically used for detecting buried objects, determining transitions in ground material and detecting voids and cracks. For example, in construction it can be used to determine rebar locations in concrete, and in the military it can be used to detect non-metallic landmines and hidden underground areas. These GPR devices are usually very expensive; however, researchers Jacek JENDO & Mateusz PASTERNAK from the Faculty of Electronics, Military University of Technology, Poland have released a paper detailing how two low cost HackRF software defined radios can be used to create a simple GPR. Their system uses a step-frequency continuous waveform (SFCW) signal which scans over multiple frequencies over time, and the software was written in GNU Radio. In their tests they were able to detect a dry block of sand buried 6 cm below the ground, and a wet block 20 cm below.
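For readers curious what SFCW processing looks like, below is a minimal NumPy sketch of the core idea: measure the complex response at a series of stepped frequencies, then take an inverse FFT to obtain a range (depth) profile. This is only a conceptual illustration with made-up parameters; it is not the authors' GNU Radio implementation, and it simulates the received data rather than driving real HackRF hardware.

```python
import numpy as np

# Assumed SFCW sweep parameters (illustrative, not taken from the paper).
f_start = 400e6    # Hz, first tone
f_step = 10e6      # Hz, spacing between tones
n_steps = 201      # number of tones in the sweep
c0 = 3e8           # free-space speed of light, m/s
eps_r = 4.0        # assumed relative permittivity of dry sand
v = c0 / np.sqrt(eps_r)   # wave speed in the ground, m/s

freqs = f_start + f_step * np.arange(n_steps)

# Simulate what the receiver would measure for one reflector 6 cm deep:
# each tone picks up a phase delay equal to the two-way travel time.
depth = 0.06                      # m
tau = 2 * depth / v               # two-way delay, s
measured = np.exp(-1j * 2 * np.pi * freqs * tau)

# Window to reduce sidelobes, zero-pad for a smoother profile, then IFFT
# to convert the frequency sweep into a range profile.
n_fft = 4096
profile = np.abs(np.fft.ifft(measured * np.hanning(n_steps), n=n_fft))

# IFFT bin k corresponds to a two-way delay of k / (n_fft * f_step),
# i.e. a depth of k * v / (2 * n_fft * f_step).
depths = np.arange(n_fft) * v / (2 * n_fft * f_step)
print(f"peak at ~{depths[np.argmax(profile)] * 100:.1f} cm depth")  # ~6 cm
```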
Every year, foodborne illness affects around 600 million people worldwide, including 48 million Americans. While there are many causes of foodborne illness, one major cause is bacterial contamination. In most cases, bacterial contamination is preventable and usually caused by poor food safety practices, such as eating undercooked poultry. If you leave food out in temperatures from 40–140°F (4–60°C), bacteria on it can double in number in as little as 20 minutes and continue to multiply exponentially (3). Fortunately, you can do a lot to prevent this to protect yourself and others. This article shares what you need to know about bacterial contamination, how quickly it spreads, and how you can prevent it. Bacterial contamination is the main cause of foodborne illness, which is when a person becomes ill from eating food. Food poisoning is another term for foodborne illness. Bacterial contamination occurs when bacteria multiply on food and cause it to spoil. Eating that food can make you sick, either directly from the bacteria or from the toxins they release. There are three main types of foodborne illness from bacterial contamination: - Food intoxication or poisoning. Bacteria multiply on a food and release toxins that make you ill if you eat them. Bacterial strains that cause this include Clostridium perfringens, Staphylococcus aureus, and Clostridium botulinum. - Food infection. Bacteria grow on a food and continue to grow in your intestines after you eat them. Bacteria that can cause this include Salmonella, Listeria monocytogenes, and Shigella. - Toxin-mediated infection. Bacteria from food reproduce and release toxins in your intestinal tract after you eat them. Bacteria that can cause this include Escherichia coli (E. coli), Campylobacter jejuni, and Vibrio. The top bacteria that cause foodborne illness in the United States include: - Clostridium perfringens - Campylobacter jejuni - Staphylococcus aureus Common side effects of foodborne illness from bacterial contamination include: - upset stomach - loss of appetite These symptoms usually occur within 24 hours of eating contaminated food, but they can sometimes appear days to weeks later, depending on the type of bacteria. Norovirus is a virus commonly called the "stomach flu" or "stomach bug," and it can also lead to foodborne illness. Bacterial contamination happens when bacteria multiply on a food, leading to food spoilage. You can get food poisoning, or foodborne illness, if you eat this contaminated food. While all foods can be at risk of bacterial contamination, certain foods are more prone to it. Foods that have a high water, starch, or protein content provide optimal breeding grounds for bacteria and are therefore at a higher risk of causing foodborne illness. Here are some common high-risk foods: - fresh and prepared salads, such as pasta salad, potato salad, coleslaw, and fruit salad - rice, pasta, and potato dishes - casseroles and lasagne - unwashed fruits and vegetables - leafy greens - melons, cantaloupe, and other fruits with thick, firm flesh - meat, poultry, fish, eggs - deli meats - dairy products, especially unpasteurized milk and cheese - soft cheeses - unpasteurized apple cider - gravies, sauces, and marinades - bean sprouts By cooking and storing foods at proper temperatures and practicing safe food handling, you can reduce the risk of bacterial contamination in these and other foods. Foods with a high water, starch, or protein content provide optimal breeding grounds for bacteria.
Knowing how to safely handle these foods can reduce your risk of foodborne illness.

Bacteria can replicate at an exponential rate when they're in a temperature range known as the danger zone, which is 40–140°F (4–60°C) (3). Your kitchen counter is a prime example. If you leave food out on your kitchen counter or elsewhere in the danger zone, bacteria can double in number in as little as 20 minutes and continue to double at this rate for many hours. This leaves food highly susceptible to bacterial overgrowth that can result in illness (3). On the other hand, when you store food at temperatures below 40°F (4°C), bacteria cannot replicate quickly. At 0°F (-18°C), bacteria become dormant — sometimes referred to as "sleeping" — and will not replicate (3). When food is heated to temperatures over 140°F (60°C), bacteria are unable to survive and begin to die. This is why properly cooking and reheating food to correct temperatures is essential for reducing your risk of foodborne illness (3). To find out safe minimum cooking temperatures for various contamination-prone foods, visit FoodSafety.gov.

To prevent the rapid growth of bacteria, it's crucial to keep some foods out of the danger zone temperature range as much as possible. If contamination-prone foods have been left in the danger zone for more than 2 hours, it's best to throw them out. Note that putting contaminated food back in the fridge or freezer won't kill the bacteria, and the food will remain unsafe to eat. However, some foods are safe to store on the counter or in the pantry for a limited time. To look up food safety recommendations for particular foods, check out the FoodKeeper App from FoodSafety.gov.

When you leave foods that are prone to contamination in the danger zone temperature range (40–140°F or 4–60°C), the number of bacteria on them can double in as little as 20 minutes. After 2 hours, the food is likely unsafe to eat.

Between when a food is produced and when you eat it, there are many opportunities for bacterial contamination. These include:
- food production, such as during farming, harvesting, slaughtering, food processing, and manufacturing
- food transportation
- food storage, including during refrigeration or while food is in storage rooms or pantries
- food distribution, such as in grocery stores or farmers markets
- food preparation and serving, including in restaurants, food service operations, or at home
Typically, food becomes contaminated with bacteria due to cross contamination, which is the transfer of bacteria or other microorganisms from one substance to another. This can happen at any stage of food production. Bacteria can be transferred to food in various ways, such as:
- from contaminated equipment, like utensils, cutting boards, countertops, or machinery
- from people, like through handling or sneezing
- from other food, like raw chicken touching raw vegetables
That said, bacterial contamination can also occur without cross contamination. Bacteria naturally exist on raw meat, poultry, and fish, which means you must cook these to proper temperatures to destroy potentially harmful bacteria. Finally, bacteria can grow on food that's left in the danger zone for too long, for example food left on the counter or food that isn't stored at a low enough temperature, such as in a noninsulated lunch bag (3). Bacterial contamination can occur at any stage of food production.
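As a rough illustration of why the 2-hour guideline matters, here is a small, hypothetical Python calculation (not from the cited sources) showing how quickly a bacterial population grows if it doubles every 20 minutes while food sits in the danger zone. The starting count and the fixed 20-minute doubling time are simplifying assumptions used only for illustration; real growth rates vary by bacterium, food, and temperature.

```python
# Illustrative only: assumes a constant 20-minute doubling time in the danger zone.
start_count = 1_000          # hypothetical starting number of bacteria on a portion of food
doubling_minutes = 20

for hours in (1, 2, 4):
    doublings = (hours * 60) // doubling_minutes
    count = start_count * 2 ** doublings
    print(f"After {hours} h: {doublings} doublings -> about {count:,} bacteria")

# Expected output:
# After 1 h: 3 doublings -> about 8,000 bacteria
# After 2 h: 6 doublings -> about 64,000 bacteria
# After 4 h: 12 doublings -> about 4,096,000 bacteria
```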
Most commonly, bacterial contamination happens due to cross contamination, leaving food in the danger zone too long, or other unsafe food handling practices. Since bacterial contamination can occur at any stage of food production, it's difficult to make sure everyone in the chain from the farm to your table has used safe food handling practices. That said, there are things you can do to reduce your risk of foodborne illness from bacterial contamination, including the following tips.

Tips for buying food safely
- Carefully read expiration dates and avoid buying foods that are close to their expiration date unless you plan to eat them right away.
- Place raw meats and poultry in separate grocery bags from the rest of your groceries.
- Clean and sanitize your reusable grocery bags before and after grocery shopping.
- Avoid snacking on raw produce that has not been washed.
- Grab perishable foods last when grocery shopping to reduce the time they spend in the danger zone. These foods might include eggs, milk, meat, poultry, and pasta salad.
- Make grocery shopping your last errand to prevent groceries from sitting in the car for too long.
- Put food away immediately once you get home.
- Discard any cans or packages that are dented or have a broken seal.
- Avoid buying fresh produce that's bruised, as bruises are entry points for bacteria.

Tips for storing food safely
- Ensure your refrigerator is set to 40°F (4°C) or lower and your freezer is set to 0°F (-18°C) or lower.
- Store raw meat and poultry in a sealed container or plastic bag on the bottom shelf of the fridge to prevent their juices from contaminating other foods.
- Use refrigerated leftovers within 2–3 days and reheat them to proper temperatures.
- Cut leftover whole roasts into smaller servings and store them in the refrigerator.
- Refrigerate leftovers within 2 hours of cooking. If food has been left out for more than 2 hours, discard it.
- Place leftover food, especially high-risk foods such as cooked rice, pasta, soups, and gravies, in shallow containers so it cools quickly.
- Avoid overpacking your refrigerator with food, as this can prevent food from cooling properly.

Tips for preparing food safely
- Wash your hands with soap and water for at least 20 seconds after touching raw meat or poultry, using the washroom, sneezing or coughing, petting an animal, taking out the garbage, using your phone, and other activities during which your hands might have become contaminated.
- Clean your utensils, cutting boards, countertops, and other surfaces with soap and warm water, especially after handling raw meat or poultry.
- Use separate cutting boards for vegetables and for meat or poultry.
- Only use clean dishcloths and sponges.
- Use a food thermometer to ensure the food you're cooking reaches a high enough temperature.
- Keep ingredients in the refrigerator until you're ready to use them.
- Wash fresh produce thoroughly before peeling or cutting it. Under running water, gently rub the produce with your hand, or use a vegetable brush for tougher produce such as melons.
- Discard the outer leaves of a head of cabbage or lettuce.
- Keep up to date with local and regional food recalls.
- Ensure take-out food is warm, and reheat it to safe temperatures before you eat it if it has been sitting out for more than 2 hours.
- Use insulated lunch bags and cold packs to keep food out of the danger zone.

You can reduce the risk of bacterial contamination and keep yourself and others safe by practicing safe food handling from purchase to consumption.
Bacterial contamination is one of the top causes of foodborne illness and can happen at any stage of food production. Fortunately, there are many things you can do to prevent it. When food sits out in a temperature range called the danger zone, which is 40–140°F (4–60°C), the bacteria on it can double within 20 minutes. If you leave food out too long, this can greatly increase the risk of bacterial contamination and lead to illness if you eat the food. Make sure you're following safe food handling practices, such as cooking foods to correct temperatures, discarding leftovers after 2–3 days, and keeping food out of the danger zone as much as possible. If you're unsure whether a food is safe, it's best to throw it out. With these tips, you can do a lot to protect yourself and others from foodborne illness.

Just one thing
Try this today: If you don't own a food thermometer, consider purchasing one. It's a great tool to ensure you're cooking and reheating your foods to temperatures that will kill harmful bacteria and make the food safer to eat.
There are many cervical (neck) lymph nodes, such as the submandibular nodes, that can become swollen or enlarged. Enlargement often signals the presence of an infection or disease. Lymphatic fluid drains from different regions of the head and neck, and when these lymph nodes become overwhelmed with debris from an illness or infection, they can become swollen and tender. Only a medical professional can diagnose the cause of the swelling.

The submandibular lymph nodes are located along the underside of the jaw on either side. They are responsible for lymphatic drainage of the tongue, submaxillary (salivary) gland, lips, mouth, and conjunctiva (the mucous membrane that covers the eyeball and the under surface of the eyelid). Infections in these areas can cause the submandibular nodes to swell. Some of the most common causes of an enlarged or swollen submandibular lymph node are cytomegalovirus (a human herpesvirus), tuberculosis, sexually transmitted disease, bacterial infection, Epstein-Barr virus, and cat-scratch disease.

Causes and risk factors
When a submandibular lymph node swells, it is called submandibular lymphadenopathy. Infection and cancer can cause lymphadenopathy. Other causes include drugs such as phenytoin sodium (Dilantin), collagen vascular disorders, and sarcoidosis (a condition in which immune system cells form lumps called granulomas). The hundreds of lymph nodes in the body filter the lymphatic fluid. They also produce white blood cells that fight infections and diseases. They are part of the lymphatic system and a crucial part of the immune system. They can become enlarged when there is an infection or illness in the body.

Additional information – lymph node locations
Other lymph nodes in the neck include the posterior cervical nodes. The clear, ultra-fine fluid that collects within the lymphatic channels is called lymphatic fluid, or lymph. It slowly circulates throughout the body and passes through approximately 600 lymph nodes. It is part of the lymphatic system, which carries cells that help fight infection and other diseases, and it carries away waste and debris from the cells. Sometimes people refer to swollen lymph nodes as swollen glands, especially for lymph nodes in the neck. Some of the common reasons that lymph nodes swell are infection, an abscess, cancer, or inflammatory conditions. The most common cause is infection in the area around the lymph nodes that are enlarged and painful. An injury can also cause a lymph node to swell. Infection or injury will usually cause a lymph node to swell and become sore suddenly, while other illnesses or diseases, such as cancer, may cause a lymph node to enlarge gradually and painlessly. A medical professional should check any lymph node that is causing concern.
The painted lady in soybean production

Life cycle
The painted lady is a migratory species originating from Africa and the Mediterranean. It migrates from North Africa to northern Europe in May and June. Its size and shape are similar to those of other butterflies. The wings are variegated reddish-brown and covered with black and white spots. The light green, oval eggs are laid on leaves. Fully grown caterpillars are 40 mm long, hairy and dark brown in colour, with two yellow lines on the sides. The pupa is 20 mm long and silver-brown in colour with a coppery sheen. Pupae are found on the injured leaves. The whole migration is made by a succession of generations, up to six in a year. The settling of adults in a location depends on weather conditions, such as wind direction, that affect the migration path and length. The first arriving butterflies can be seen in early spring. After mating, the females lay around 500 eggs on the leaves of a wide range of plants. Various species of thistle are the best-known hosts, providing nectar for the adults and leaves for the caterpillars. The wider range of hosts includes soybean. After arrival in May and June, two generations can result in sporadic infestations of soybean. The highest abundance of caterpillars occurs during June and July.

Only the caterpillars are harmful to soybean. They eat the leaf tissue between the leaf veins, and large infestations may cause complete defoliation. Damaged leaves are tied together in web-forming larval nests, from which the young butterflies later emerge from pupae. Infestation in crops is usually patchy and localised. Soybean is only one of many hosts, and it is often the presence of wild hosts in the field that triggers infestation. It is therefore important that other host species, thistles in particular, are removed within soybean crops if infestation is expected from migrating adults.

[Photo: Painted lady butterfly]

Control is rarely necessary in practice. The need for control measures can be assessed about one week in advance of an infestation by the presence of adult butterflies that are settling in a location to mate and lay eggs. This provides time to plan treatments, which might involve obtaining special permission to use insecticides. The economic threshold is an average of two or more recently hatched caterpillars per plant, or 20 caterpillars per row metre of soybean, or the observation of two nests of infestation within 100 m². The condition of the canopy and the stage of development of the caterpillars should also be considered: plants with an already developed canopy are more tolerant of damage, while younger caterpillar instars are more susceptible to insecticides and most of the damage is yet to be done. Sometimes control can be confined to crop margins or to patches within the crop. Predicting infestation from the presence of recently arrived and settling adults at the local level is important. Only a few insecticides are approved for control. As this pest occurs only occasionally, no products are registered in several countries for this purpose. In these cases, exceptional use may be permitted on request (e.g., for Bacillus thuringiensis in Germany). This request should be organised by a plant protection service or a cooperative in advance, as treatment is worthwhile only if the caterpillars are still young.
Treatment of large caterpillars is ineffective, as they will soon stop feeding and the damage has already been done.

[Photo: Damage on a leaf made by the painted lady.]

Key practice points
- Fields should be scouted regularly and systematically for the presence of adult butterflies, eggs and caterpillars.
- Control measures should only be taken where a caterpillar population approaches the economic threshold. Treatment is not justified in the case of most infestations (where the number of caterpillars is below the economic threshold).
- When chemical control is needed, apply the lowest effective amount of the pesticide using equipment that is properly calibrated. Sometimes it is possible to localise treatment to only the infested parts of the crop.

Further information
Bundesanstalt für Landwirtschaft und Ernährung (BLE), Ökolandbau - Distelfalter (Vanessa cardui), website: www.oekolandbau.de/landwirtschaft/pflanze/grundlagen-pflanzenbau/pflanzenschutz/schaderreger/schadorganismen-im-ackerbau/distelfalter-vanessa-cardui/
Butterfly Conservation. Painted Lady, website: www.butterfly-conservation.org/butterflies/painted-lady

Sampling and measurement protocols for field experiments assessing the performance of legume-supported cropping systems
This guide does not seek to define a common methodology for all variables. It provides guidance and support for those who may be new to some of these measurements. It has taken guidance from protocols followed in other European projects such as NitroEurope. This guide is used in Legume Futures to support partners in developing standard operating procedures for all measurements carried out on sites. It was written as an internal project document and is published here as part of the project's efforts to provide full access to methods and to support other researchers in this area.

Impacts of legume-related policy scenarios

Latest news & upcoming events
The three-day virtual conference 'Advances in Legume Science and Practice 2', organised by the Association of Applied Biologists 'Cropping And The Environment (CATE)' specialist group, will take place 1 - 3 September 2021.
The World Soybean Research Conference 11 has been postponed to 4 - 9 September 2022. It will be held in Novi Sad, Serbia. The event is organized by the Institute of Field and Vegetable Crops (IFVC), one of the 18 project partners of the Legumes Translated project.

Scientia potentia est: knowledge is power. Understanding empowers. The Hub is about free access to knowledge, insights and understanding to support growing and using legumes. It is about empowering everyone interested in legume development and use with knowledge. The Legume Hub is also a community for developing and sharing knowledge in which experts from science and practice work together to support the sustainable development of our food systems. The Legume Hub provides timely, science-based information for practitioners and everybody with an interest in legumes, their propagation, processing and use.
These include farmers as growers and users of legumes, processors for feed and food purposes, and all other stakeholders involved in the legume value chain. The Hub’s registered expert users and authors form the core of the European Legume Hub Community. They own and govern the Legume Hub.
Phasianidae is a diverse group comprising more than 50 genera and over 214 species. Phasianid galliforms are commonly known as grouse, turkeys, pheasants, partridges, francolins, and Old World quail. Phasianids are small to large, blunt-winged terrestrial birds. Some species are noted for elaborate courtship displays in which males strut about, displaying colorful plumage and wattles, sometimes accompanied by an expansive spreading of the tail feathers. Some members of this group are important game birds and others, like domestic chickens (derived from Gallus gallus), are bred and reared for human consumption. (Johnsgard, 1999; Madge and McGowan, 2002; Sibley and Ahlquist, 1990; Sibley and Monroe, 1993)

Phasianids are distributed globally except for polar regions and some oceanic islands. (Johnsgard, 1999)

Phasianids inhabit a diversity of habitats including rainforests, scrub forests, deserts, woodlands, bamboo thickets, cultivated lands, alpine meadows, tundra and forest edges. Some species may be found up to 5000 m above sea level, sometimes more. (Johnsgard, 1983; Johnsgard, 1999; Madge and McGowan, 2002)

Phasianids are small to large, ranging from 500 g to 9.5 kg in weight. Phasianids have short, rounded wings. Tail length varies by species, from almost tailless in some to up to one meter long in others. Plumage coloration ranges from cryptic to dark to brightly patterned. The legs are sturdy and one or more spurs may be present on the tarsus. Toes are short with blunt claws and the hallux is raised. Phasianids may have crests, bare skin on the head or neck, or wattles. Physical characteristics may be sexually monomorphic or dimorphic depending on species. Some phasianid males are larger, more brightly colored, or have longer tails or more elaborate ornamentation than females. (Campbell and Lack, 1985; Dickson, 1992; Johnsgard, 1999; Madge and McGowan, 2002)

Phasianid mating systems are variable depending upon species. Some taxa are described as monogamous, with the pair bond lasting the duration of the breeding season. Generally, monogamous species are sexually monomorphic in plumage coloration and size, or slightly dimorphic. Some taxa are polygynous, with a pair bond evident until incubation of the eggs. Males of these taxa are often brighter or larger than females. Polygynandry has also been observed in some taxa, with pair bonds evident until copulation. In these taxa males are generally more brightly colored and often somewhat larger than females. In some species males gather on leks to display for females. Courtship behaviors may include tid-bitting (food-showing), strutting, waltzing, and wing-lowering. Sometimes elaborate lateral or frontal displays take place, in which males expose the most colorful parts of their plumage, which may include tail spreading and displaying of swollen wattles. Socially dominant males may copulate more frequently and more successfully than males lower in the social hierarchy. Status in the male hierarchy may be related to size, coloration and relative display characteristics. (Campbell and Lack, 1985; Dickson, 1992; Johnsgard, 1983; Madge and McGowan, 2002)

Many phasianids breed seasonally, usually coinciding with springtime for temperate species and the wet season for tropical species. Courtship in some species entails elaborate visual displays in which males may strut about displaying brightly colored plumage or wattles. Sometimes males congregate on leks to display for females. Females appear to select the nest site and likely construct the nest.
Nests are usually shallow, often lined with grass and leaves. Nests are often located on the ground, but some species use tussocks or trees. Female nest building behavior entails picking up material and tossing it backwards. Egg coloration varies, and may be white, olive, brown or spotted. Clutch size varies by species, ranging from 2 to 20 eggs. In some species egg-dumping may occur. Incubation begins with the last egg laid and is variable by species, lasting from 18 to 29 days. Chicks are precocial and are covered with down and first primaries or secondaries upon hatching. Chicks can walk, run and forage shortly after hatching, yet stay close to the female during the first week or two. Within two weeks chicks may begin to fly and to disperse, but will still brood with the female. Depending on the species, broods may dissolve sometime between six to sixteen weeks. Adult plumage may be attained at one to two years and sexual maturity from one to five years. (Campbell and Lack, 1985; Johnsgard, 1983; Johnsgard, 1999; Madge and McGowan, 2002) In phasianids, it appears that females alone incubate, beginning with the last egg laid and continuing for 19 to 29 days. Females may brood chicks for as long as 16 weeks. In some species males help rear young by providing defense of nest or brood. In other species males appear to provide no parental care. Parents and offspring of some species join coveys or flocks at the end of the breeding season. (Campbell and Lack, 1985; Johnsgard, 1983; Johnsgard, 1999; Madge and McGowan, 2002) Phasianids are generally sedentary although a few species migrate long distances in large flocks. Phasianids are mainly terrestrial ground dwellers that move about mostly by walking, and may fly only short distances. Phasianids forage by digging and scratching the ground. When disturbed some phasianids fly straight up into the air, then fly horizontally away from the source of the disturbance. Other species will move quietly into cover when disturbed. Many species are often seen dust-bathing. Some quail and partridges live in social groups from 4 to 40 individuals. They do not appear to defend territories and monogamous pair bonds may persist year round. Old World quail may be solitary or live in coveys. These taxa are sometimes polygynous, with males defending territories and singing to attract females to nest. During migration Old World quail may travel in large flocks. Pheasant social organization varies. Some species may gather into flocks, which break up into breeding pairs during the breeding season. Others may be found in single sex groups of bachelor males or groups of females defended by one male. Males may defend territories and attract one or more females to breed. Still others may live primarily solitarily, with males defending territories and attracting females to display grounds. (Campbell and Lack, 1985; Johnsgard, 1999; Madge and McGowan, 2002) In some pheasants dominance hierarchies play an important role in organizing social structure. The hierarchy consists of individualized dominant-subordinate relationships. Males generally dominate females. Male and female hierarchies are established via intra-sexual interactions. Higher rank may be associated with greater body and comb size, and success in threat posturing. High-ranking males achieve high mating success relative to lower ranking males. Dominant females appear less sexually receptive. 
Behavioral displays used to establish hierarchies include: waltzing, wing-flapping, tid-bitting, feather ruffling, head shaking, tail spreading, frontal or bilateral wing lowering, wattle engorgement, or crouching. (Campbell and Lack, 1985; Johnsgard, 1983; Johnsgard, 1999; Madge and McGowan, 2002)

Visual signaling may occur through morphological features or behavioral interactions. Some phasianids have brightly colored skin on the face or neck, wattles or elaborately structured and brightly colored plumage. Males appear to display these features during courtship and during agonistic male-male interactions. Posturing during threat displays may entail upright lateral or frontal positioning while submission may involve a lowering of the body to the substrate. Phasianid vocalizations range from the familiar crowing of the domestic fowl to loud screams to clucking or hissing. Crowing may be individually identifiable signals for territory defense or mate attraction. Sustained raucous screams may be given in response to alarm. Threat vocalizations are low in frequency and submission appears to be accompanied by hissing. Clucking may serve as a brood gathering vocalization. Phasianids may also produce acoustic signals by rattling tail feathers or by drumming in flight as known from some grouse. (Johnsgard, 1983; Johnsgard, 1999)

Food habits of phasianids are varied, consisting of a mixture of plant and animal material. Plant materials include: grains, seeds, roots, tubers, nuts, fruits, berries and foliage. Animal materials include: arthropods (Ephemerida, Orthoptera, Trichoptera, Lepidoptera, Coleoptera), mollusks, worms, lizards, and snakes. (Campbell and Lack, 1985; Johnsgard, 1983; Johnsgard, 1999)

Mammalian predators of phasianids include: foxes, dogs, cats, opossums, raccoons, skunks, rodents, fishers, and mongooses. Avian predators include raptors and corvids. Reptilian predators are largely snakes. (Dickson, 1992; Johnsgard, 1999)

Phasianids may serve an ecosystem role as seed dispersers or seed predators. Phasianids are economically important to humans. Phasianids such as grouse, quail, partridges, pheasants and turkeys are important game birds that are hunted regularly in all parts of the world. Some phasianids, such as common fowl (derived from Gallus gallus), have been domesticated and are reared for human consumption of meat and eggs and for "fancy". Most species are hunted primarily for food, although feathers of some species have been collected for ornamentation and clothing manufacture. Sometimes bones have been used in the manufacture of various tools. Phasianids may cause damage to some agricultural crops (maize, barley, wheat, millet) by foraging for seeds and shoots on cultivated lands. (Campbell and Lack, 1985)

The IUCN Red List of Threatened Species includes 68 phasianid species. Two species are listed as extinct: double-banded argus (Argusianus bipunctatus) and New Zealand quail (Coturnix novaezelandiae). Habitat loss and hunting are among the major threats identified for this group. (Collar, et al., 1994; IUCN, 2007)

Laura Howard (author), Animal Diversity Web; Tanya Dewey (editor), Animal Diversity Web.
References
Campbell, B., E. Lack. 1985. A Dictionary of Birds. Vermilion: Buteo Books.
Collar, N., M. Crosby, A. Stattersfield. 1994. Birds to Watch 2, The World List of Threatened Birds. Washington, D.C.: Smithsonian Institution Press.
Dickson, J. 1992. The Wild Turkey: Biology and Management. Harrisburg, PA: Stackpole Books.
Dyke, G., B. Gulas, T. Crowe. 2003. Suprageneric relationships of galliform birds (Aves, Galliformes): a cladistic analysis of morphological characters. Zoological Journal of the Linnean Society, 137: 227-244.
Haaramo, M. 2004. "Mikko's Phylogeny Archive" (On-line). Finnish Museum of Natural History, Helsinki, Finland. Accessed May 03, 2007 at http://www.fmnh.helsinki.fi/users/haaramo/Metazoa/Deuterostoma/Chordata/Archosauria/Aves/Galliformes/Galliformes.htm.
IUCN, 2007. "IUCN Red List of Threatened Species" (On-line). Accessed May 03, 2007 at http://www.iucnredlist.org/.
Johnsgard, P. 1983. The Grouse of the World. Lincoln, NE: University of Nebraska Press.
Johnsgard, P. 1999. The Pheasants of the World: Biology and Natural History. Washington, DC: Smithsonian Institution Press.
Livezey, B., R. Zusi. 2001. Higher-order phylogenetics of modern Aves based on comparative anatomy. Netherlands Journal of Zoology, 51(2): 179-205.
Madge, S., P. McGowan. 2002. Pheasants, Partridges and Grouse: A Guide to the Pheasants, Partridges, Quails, Grouse, Guineafowl, Buttonquails and Sandgrouse of the World. London: Christopher Helm.
Sibley, C., B. Monroe. 1993. A World Checklist of Birds. Ann Arbor, MI: Edwards Brothers Inc.
Sibley, C., J. Ahlquist. 1990. Phylogeny and Classification of Birds: A Study in Molecular Evolution. New Haven, CT: Yale University Press.
Sorenson, M., E. O'Neal, J. Garcia-Moreno, D. Mindell. 2003. More taxa, more characters: the Hoatzin problem is still unresolved. Molecular Biology and Evolution, 20(9): 1484-1499.
The harshness of ancient Roman military discipline became proverbial. On joining the army, a Roman citizen left the protection of ordinary law and came under the power of a commander. Order in the army was imposed with iron and blood, and death awaited those who violated discipline. There were no indulgences, even when the culprit was bound to the commander by ties of kinship. This deliberate cruelty was one of the main foundations of the Roman army's high fighting ability.

Civil and military power
Historians have long stressed the fundamental distinction the Romans drew between civil and military law. The former was in force only on the territory of the city and was defined by the letter of the law. The latter began beyond the first milestone from the city walls, and there the will of the commander reigned supreme: he had absolute power over the bodies and souls of his subordinates. It is no wonder that the Romans considered the fasces, a belted bundle of wooden rods that lictors solemnly carried in front of a commander, to be one of the signs of this power. Outside the city walls an axe was placed inside the fasces, symbolizing the commander's right to impose the death sentence. On the territory of Rome, capital punishment was limited by the right of provocatio, that is, by the possibility for a condemned man to appeal to the people's assembly. But when the army marched out, that right ceased to apply to those who had become soldiers.

The ordinary norms of civil life were suspended in Rome after a declaration of war. The courts closed, the people's assembly ceased to gather, trade stopped. Even the civil status of soldiers was, in a sense, set aside: once enlisted in the army, citizens were deprived of basic rights, and on a commander's order they could be imprisoned, flogged, or even executed. The Roman Senate willingly exploited this peculiarity of civil and legal life: during plebeian unrest it would introduce martial law under the first convenient pretext, mobilize all the discontented into the army, and thus force them to submit. Any gatherings or meetings in the army were strictly forbidden on pain of death. On the other hand, the same peculiarity meant that the famous plebeian secessions of the 5th and 4th centuries BC, as a rule, began with an uprising in the army, which refused obedience to its commander and then withdrew in good order to the Sacred Mount, three miles from the City.

The state of emergency and the commander's unlimited power ended when the army returned from a campaign and entered the city walls. If the campaign had been victorious, the return took the form of a triumphal procession: the commander entered the city in a chariot and ascended to the Capitol, where he made a solemn sacrifice to Jupiter. He was followed in the procession by the soldiers who had served under him, and they underwent the rite of purification of their weapons. The formal address to citizens as "Quirites" served in Rome as the sign of the transition from military to civil status. It was held that this single word from a commander's mouth released soldiers from their oath. It was with precisely this address that Caesar subdued the soldiers who mutinied against him in the autumn of 47 BC and demanded their discharge. "At the beginning of his speech," Appian wrote, "he addressed them as 'Quirites' rather than 'fellow soldiers'; the address was a sign that the soldiers had already been discharged and were now ordinary citizens."
The soldiers, unable to bear this any longer, shouted that they repented and begged him to continue the war together with them.

The harshness of the fathers' ways
The Romans regarded the so-called "Manlian discipline" as the symbol of the severe military discipline inherited from their ancestors. The term goes back to 340 BC, when the consul Titus Manlius Imperiosus ordered the lictors to behead his own son, who had violated the ban on leaving the ranks to fight the enemy. A similar case is said to have occurred in 432 BC, when the Roman dictator Aulus Postumius executed his own son under identical circumstances, although Livy, in telling the story, expressed cautious doubt about its reality. In 325 BC the celebrated commander Lucius Papirius Cursor, then serving as dictator, sentenced to death his master of the horse, Quintus Fabius Rullianus, who, like the young men before him, had joined battle with the enemy against orders and defeated them. Fabius managed to escape from the lictors' hands and flee from the camp to Rome, where he appealed to the people's assembly. In the heated debate that followed, Papirius succeeded in upholding his right, but at the unanimous request of the assembly, which strongly favoured Fabius, he at last agreed to soften his anger and pardon the offence.

The example of the "Manlian discipline" was particularly illustrative because of the conflict between the commander's power and paternal right: "Since you, Titus Manlius, respect neither the consular authority nor the paternal one…" — such are the words Livy puts into the mouth of the commander enraged by his son's misdeed. His harshness, already legendary in antiquity, was meant to serve as an example of unquestioning submission to discipline, standing above all other considerations, including a father's affection for his children. The tradition of blind obedience to orders was so unconditional that any unauthorized personal action, however successful at first sight, was considered in the Roman army to be as much a violation of discipline as failure to carry out an order. The example clearly served as an important landmark for many generations of Romans, against which they measured their actions both in private and in public life. "The Republic of the Romans is strong and powerful through the customs of its ancestors," wrote the Roman poet Ennius.

Cruelty and intimidation
Polybius and other Greek writers described the severe order that was in force in the Roman army. Rarely in any other state, Polybius noted, were those guilty of breaches of discipline or of cowardice treated with such cruelty and contempt as among the Romans. It is known that the Senate repeatedly refused to ransom soldiers who had been captured by the enemy. When King Pyrrhus freed his Roman captives without any ransom, the senators ordered those who had previously served in the cavalry to be transferred to the infantry, and the former infantrymen to auxiliary slinger units. None of them had the right to bivouac inside a camp for the night or to sleep in tents, and they could return to their former service only by bringing back armour taken from two slain enemies. The legionaries defeated in the Battle of Cannae were sent to Sicily, where they were to remain until the end of the war under the same ban on camping within the walls, and their wheat bread was replaced with barley bread. The Romans applied especially severe punishments for abandoning one's post and for desertion.
Death was the penalty for such crimes, and quite often even the extermination of entire detachments did not deter the Romans. A well-known case occurred during the war against Pyrrhus, when a Campanian legion mutinied and seized the city of Rhegium in southern Italy. Most of the Campanians perished, and no more than 300 men were taken prisoner. All of them were sent to Rome in chains, flogged on the Forum, and then beheaded one by one. The Senate's resolution required that the bodies of the executed be left unburied, and even their relatives were forbidden to mourn them. Sometimes decimation was applied to units that had retreated under enemy pressure. In that case the troops were drawn up in general formation, and every tenth man among those who had fled was chosen by lot and executed on the spot. The unit itself was disbanded and its soldiers were distributed among other units of the army.

Camp justice
It was not only the commander who held disciplinary power, but also the tribunes, the prefects of the allies and the centurions. They had the right to impose penalties, arrest the guilty and have them flogged. As for the penalties for particular offences, Cato the Elder wrote that those caught stealing from their comrades had their right hand cut off, and if a lighter punishment was to be imposed, they were whipped in front of the assembled ranks. Polybius listed the following misdeeds: "The Romans regard as cowardice and shame deeds of the following kind: if someone falsely ascribes a feat of valour to himself before the tribunes, if somebody out of cowardice abandons the post to which he was assigned, or likewise out of cowardice throws away any of his weapons in the heat of battle (…) Also subject to punishment are those who steal anything from the camp, those who have been punished three times for one and the same offence, and a young man guilty of sodomy. Such are the deeds that the Romans punish as crimes."

Those guilty of these crimes were subjected to the punishment of the clubs (fustuarium), which was carried out by the soldiers themselves, often the comrades and messmates of the condemned. After pronouncing the verdict, a tribune took a club and touched the offender with it, whereupon the soldiers beat him with clubs and stones, in most cases to death. If he nevertheless survived, he was exiled and had no right to return home. According to Polybius, the fear of this inescapable punishment made Roman soldiers stand firm at their posts even in the face of a vastly more numerous enemy. If they lost a shield or a sword in the heat of battle, they threw themselves at the enemy ranks as if berserk, in order to recover what they had lost or to die a glorious death. In their minds, only death on the battlefield could in such a case save a soldier from inescapable disgrace and the resentment of his fellow soldiers.

Disgraceful punishments
Roman authors of the imperial era traditionally continued to praise the old-fashioned severity of such military leaders as Domitius Corbulo or Avidius Cassius. The latter was distinguished by the legendary harshness of his ways. In the words of his biographer (who lived, however, in a much later epoch), "He flogged in the square or beheaded in the middle of the camp those who deserved it (…) He cut off the hands of deserters; to others he broke the shins and kneecaps, saying that a criminal left alive as a cripple was a better example than one who had been killed."
According to the general belief of the time, the idleness and inactivity of peacetime had a fatal effect on military discipline, undermining the moral and physical condition of the soldiers. Hard labour, constant physical training and long service in remote garrisons, side by side with fierce enemies, were believed to be excellent means of keeping in check the most troublesome units, which otherwise could not be restrained from mutiny or crime. At the same time, and especially in comparison with their predecessors, writers began to value the moderation of those commanders who, rather than resorting to repression of any kind, tried to solve problems by personal example and persuasion. Camp justice became much more flexible. For a minor offence a culprit was given extra duties, left outside the camp walls for the night, or placed under arrest in the guardhouse. Those who showed cowardice or disobedience could be made to dig trenches, chop straw, carry bricks or perform other hard and dirty work. Disgraceful punishments became a widespread form of penalty at that time. Offenders were exposed in humiliating ways, being ordered, for instance, to stand in the centre of the camp barefoot, unbelted, or dressed in clothing slit from top to bottom, holding a measuring pole or a piece of turf in their hands. These punishments were carried out before the eyes of their fellow legionaries and appealed to their sense of honour; at the same time they were an effective means of maintaining military discipline in the ranks of a professional army.
induction
1 The action or process of inducting someone to a post or organization: induction into membership of a Masonic brotherhood. [usually as modifier] A formal introduction to a new job or position: an induction course. US enlistment into military service.
2 The process or action of bringing about or giving rise to something: the induction of malformations by radiation. Medicine The process of bringing on the birth of a baby by artificial means, typically by the use of drugs.
3 Logic The inference of a general law from particular instances: the admission that laws of nature cannot be established by induction. Often contrasted with deduction. The production of facts to prove a general statement. (also mathematical induction) Mathematics A means of proving a theorem by showing that if it is true of any particular case it is true of the next case in a series, and then showing that it is indeed true in one particular case.
4 The production of an electric or magnetic state by the proximity (without contact) of an electrified or magnetized body. See also magnetic induction. The production of an electric current in a conductor by varying the magnetic field applied to the conductor.
5 The stage of the working cycle of an internal-combustion engine in which the fuel mixture is drawn into the cylinders.
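As an illustration of sense 3 (mathematical induction), here is a short worked example in LaTeX. It is a standard textbook proof added for clarity, not part of the dictionary entry, and it shows the two steps the definition describes: a base case and an inductive step.

```latex
\textbf{Claim.} For every integer $n \ge 1$,
\[
  1 + 2 + \dots + n = \frac{n(n+1)}{2}.
\]

\textbf{Base case.} For $n = 1$ the left side is $1$ and the right side is
$\tfrac{1 \cdot 2}{2} = 1$, so the claim holds.

\textbf{Inductive step.} Assume the claim holds for some $n = k$, i.e.
$1 + 2 + \dots + k = \tfrac{k(k+1)}{2}$. Then
\[
  1 + 2 + \dots + k + (k+1)
    = \frac{k(k+1)}{2} + (k+1)
    = \frac{(k+1)(k+2)}{2},
\]
which is the claim for $n = k + 1$. By induction, the formula holds for all $n \ge 1$.
```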
This interactive book is a learning tool designed for those on the autism spectrum, but is useful for anyone who wishes to sharpen their abilities in reading facial expressions. It features twenty-eight video clips of real emotions being displayed. Each is accompanied by still images and a detailed description. The examples have been chosen because they are recognisable and not too difficult to interpret. Students are asked to watch the video – first in real time, then in slow motion – and to identify three actions which occur in the few seconds in which the emotion is being displayed. This may be movements of the head, or a crinkling around the eyes, or a tightly closed mouth. Students are given six options and asked to identify three key actions. In completing the quizzes in this ebook, students look at each video clip and think about what changes are happening in the face and body as the person shows emotions such as sadness, excitement, approval, or affection. How does the face tell us when someone is experiencing enjoyment, worry, or embarrassment? How does it show in the body? The twenty-eight examples help to build an overall understanding of the general signs of negative and positive emotions.
Animals like us
- 'Olivia' by Ian Falconer, published by Simon & Schuster Australia, 2000
- 'My uncle's donkey' by Tohby Riddle, published by Penguin Books Australia, 2010

Engaging personally with texts – both texts:
- concentrate on everyday activities. Most students will be able to relate personally to the types of experiences in the texts.
- have family (in different forms) at the centre of the story.
- use iconic references to create humour and rely on the reader's background knowledge to access the humour.

Understand and apply language forms and features – both texts:
- have a narrative structure similar to that of a child's daily routine. Once Olivia has been introduced to the reader, the text then takes on a day-to-night sequence. My Uncle's Donkey starts in the morning and ends after the night-time routine.
- are visually similar. The illustrations depict objects and characters in the foreground. The backgrounds are white, which makes the reader pay more attention to the expressions on the characters' faces.
- use anthropomorphic animals to tell the story.

Resources for making connections:
- Action Jackson (9:24) by Jan Greenberg, Sandra Jordan and Robert Andrew Parker (2007)
- Jackson Pollock – artist referred to in Olivia
- Edgar Degas – artist referred to in Olivia
- Excerpt from The Kid with Charlie Chaplin (2:54) (film reference from My Uncle's Donkey)

A study of either text could lead to a study of representations in texts, such as:
- How are pigs and donkeys represented in texts? Think of fairy tales, rhymes and animated movies.

Learning experiences and question approaches
- What are the different visual representations of pigs and donkeys in these texts?
- How is the family represented in Olivia and My Uncle's Donkey? What other texts represent families in different ways?
- Why might composers choose to give animals human features to represent a family?
- Look at the sibling relationship in the text. What features of the text (language and visual qualities) convey to the reader the relationship between Olivia and her brother?
- What other texts portray sibling relationships? Are they represented in the same way? Consider 'I wish I had a pirate suit' by Pamela Allen or any version of the Cinderella story.
- The donkey in 'My Uncle's Donkey' could be in the narrator's imagination, as the donkey and the uncle never make eye contact. How is imagination represented in other texts? Consider Tiddler by Julia Donaldson and Axel Scheffler, Where the Wild Things Are by Maurice Sendak, and When Henry Caught Imaginitis by Nick Bland.

Example of representation
- The sheep-pig by Dick King-Smith
- Babe film (1995)
- Charlotte's Web films (2006 and 1973)
- Miss Piggy from the Muppets
- Rhyme: 'This little piggy went to market …'
- Peppa Pig - screened on the ABC
- Charlotte's Web, a novel by E.B. White (1952)
- Shrek film (2001)
- The Silver Donkey by Sonya Hartnett
- The Donkey who carried the Wounded by Jackie French
- The Wonky Donkey by Craig Smith
- Rhyme: 'Donkey, donkey, old and grey …'
For the first two centuries of its existence, Rome was overwhelmingly powerful, and its political institutions were strong enough to survive even prolonged periods of incompetent rule. Trouble was afoot on Rome's borders, however, as barbarian groups became more populous and better-organized, and as the meritocratic system of the "Five Good Emperors" gave way to infighting, assassination, and civil war. At the same time, what began as a cult born in the Roman territory of Palestine was making significant inroads, especially in the eastern half of the Empire: Christianity.

Image citations (Wikimedia Commons): Augustus Caesar - Till Niermann; Colosseum - Andreas Ribbefjord; Empire 117 CE - Eleassar; Roman Legion - Ursus.
The Framers' Debates on Religion
The First Amendment and the Utah Constitution

Lesson II. Amending the U.S. Constitution with a Bill of Rights

Step 1. Prepare for Discussion
Congratulations on completing the first lesson. The videos and interactives in Lesson II will help you prepare for your assignments and class discussion. Before you begin, take a look at the questions below. Bring your answers to the class discussion.

Discussion Questions for Lessons I and II
- What was your proposed religion amendment to the First Federal Congress at the beginning of the Bill of Rights debates? Explain your reasons for your proposal. (Lesson II, Step 3)
- How does your proposed amendment respond to questions about the government's relationship with religion at the time?
- What do you think are the most significant changes to the proposed amendment about religion during the debates? (Lesson II, Step 8)

Discussion Questions for Lesson III
- Did any of the parts about religion in the Utah state constitution surprise you? If so, why? (Lesson III, Step 3)
- What are some ways you might apply the 3Rs Framework to how you interact with others in your community? (Lesson III, Step 5)
- How can a decision-maker's mindset about history inform how you engage in your community today? (Lesson III, Step 5)
- How did the decisions of the congressmen in the First Federal Congress influence the protection of the rights of conscience today?
- How did the decisions of the delegates to the Utah constitutional convention influence protections of the rights of conscience today?
- How could it have turned out differently?
Behavior of wild capuchin monkeys can be identified by marks left on their tools
A group of researchers including Tiago Falótico, a Brazilian primatologist at the University of São Paulo's School of Arts, Sciences and Humanities (EACH-USP), archeologists at Spain's Catalan Institute of Human Paleoecology and Social Evolution (IPHES) and University College London in the UK, and an anthropologist at the Max Planck Institute for Evolutionary Anthropology in Germany, have published an article in the Journal of Archaeological Science: Reports describing an analysis of stone tools used by bearded capuchin monkeys (Sapajus libidinosus) that inhabit Serra da Capivara National Park in Piauí State, Brazil. It is the first systematic study to characterize the tools used by capuchin monkeys living in the wild. The animals use the tools for digging, seed pounding, nut cracking, and stone-on-stone percussion. The ultimate aim of the study was to find out whether these different activities created use-wear marks that pointed to the purpose for which the tools were used.
"Archeologists in the field analyze the tools found in a dig and the use-wear marks they bear," Falótico said. "In our case, we had both the tools used by these monkeys and the chance to observe their behavior, to see how they used the tools. This is the first comparative analysis of the different tools used by wild capuchin monkeys for different purposes. We concluded that the tools displayed different patterns of use and wear in accordance with the activities involved and that these use-wear marks served to identify the activities performed by each type of tool and by the individuals that used the tools."
The animals concerned inhabit the Caatinga, Brazil's semi-arid shrubland and thorn forest biome. To crack open encapsulated seeds or fruits, such as locust fruit or jatoba (Hymenaea courbaril) and cashew nut (Anacardium occidentale), they pound them with a stone on another that serves as an anvil. They also use stones to dig or scrape the soil in search of tubers, roots, and spiders.
"They also hammer stones with other stones. The purpose of this stone-on-stone percussion, in the case of the groups we studied in Serra da Capivara, is to crush quartzite cobbles so that they can lick the powder and smear it on their bodies," Falótico said. "We've only ever observed this behavior by the animals inhabiting the study site. We have a few theories to explain it, such as the use of quartz dust to combat internal parasites by eating it, or ectoparasites such as lice by rubbing themselves with it. We have yet to test these hypotheses. The behavior isn't seen all the time, but it's commonplace in the population concerned."
The research is supported by the São Paulo Research Foundation (FAPESP) via a Young Investigator Grant for the project "Cultural variation in robust capuchin monkeys (Sapajus spp.)".
The capuchin monkeys found in the Caatinga and the Cerrado, Brazil's savanna biome, are more terrestrial than those in the Amazon or Atlantic Rainforest. "The latter don't use stone tools. They're arboreal and rarely seen on the ground. These tools are used on the ground," Falótico said. As an evolutionary environment, he added, Serra da Capivara is very similar to that of the first hominins. The term hominin is now defined as the group consisting of modern humans, extinct human species, and all our immediate ancestors (including members of the genera Homo, Australopithecus, Paranthropus, and Ardipithecus).
As these ancestors evolved, they too began spending more time on the ground and using stone tools. "Capuchin monkeys can serve as a model to help us understand which factors led to the use of tools by the first hominins," Falótico explained.
Individuals may use the same tool in more than one activity, but this is unusual. "It also depends on the environment. In Serra da Capivara, there are lots of rocks and stones, so they can easily switch between tools," he said. "In places with less stone available, they may use the same tool for different purposes. We have sightings of monkeys using a stone to dig and then pound a tuber they've found by digging."
The capuchin monkeys of Serra da Capivara also use twigs, sticks, and other kinds of wood as tools. "In this case, the tools may be used off the ground, and they modify the shape and size by removing leaves and branches, for example. They may understand the physical properties of these tools," he said. "We expected to observe this behavior in other less terrestrial populations, but it appears not to be the case. We have reports that it occurs occasionally but not habitually, as in Serra da Capivara."
The monkeys may also use different tools in the same activity. "They may use a stone to enlarge a rock crevice and then use a twig to probe the hole for food, for example," he said. As a rule, males handle objects more than females, but skill does not vary by sex. "Males and females are good at manipulation once they've become adult and acquired the skill," he said.
Primate tool library
Primate archeology, Falótico explained, is a relatively new field. Among non-human primates, only chimpanzees, capuchin monkeys, and long-tailed or crab-eating macaques use stone tools. "We now know that when capuchin monkeys bang stones together, they create flakes that closely resemble those made by the first humans," Falótico said. "The same goes for the simpler percussive tools—stones used for hammering and pounding—which can be confused with tools used by humans for the same purposes. In short, we provide more data for archeologists, who often come across these remains."
Creating a primate tool library is one of the aims of the Young Investigator project. "If the tools are described, it will be easier for archeologists and anthropologists to know at a later stage which groups used them and for what purpose," he said.
In this study specifically, the sample comprised 29 tools: 16 were used solely for pounding, 12 for digging, and one for stone-on-stone percussion. The technological analysis was based on a classification into active elements (hammers) and passive elements (anvils). The scientists set out to establish use-wear patterns, and to this end analyzed attributes such as general tool metrics, raw material, and surface traces such as fractures, impact points, battered areas formed by superimposed impacts, and percussive mark location.
The digging tools had fewer conspicuous use-wear marks on their surfaces when analyzed microscopically. Tools used to crush quartz most frequently had perceptible use-wear traces. Soft fruit and cashew nut processing tools displayed a wider spatial distribution of pounding marks than digging tools, although they also displayed a low degree of physical modification.
The researchers looked for traces of pollen among the residues found on the tools, in order to discover which plant species the monkeys preferred.
"We identified starch grains and other non-pollen palynomorphs, such as fungal spores, algae and other organic elements found alongside pollen in palynology, the subdiscipline of botany in which pollen grains are examined and identified," Falótico said. "We experienced some difficulty for lack of a reference library to identify the origin of the pollens and starches occurring in this part of the Caatinga."
5 hundredths could be used to describe time, distance, money, and many other things. 5 hundredths means that if you divide something into one hundred equal parts, 5 hundredths is 5 of those parts that you just divided up. We converted 5 hundredths into different things below to explain further:
5 hundredths as a Fraction
Since 5 hundredths is 5 over one hundred, 5 hundredths as a fraction is 5/100, which simplifies to 1/20.
5 hundredths as a Decimal
If you divide 5 by one hundred, you get 5 hundredths as a decimal, which is 0.05.
5 hundredths as a Percent
To get 5 hundredths as a percent, you multiply the decimal by 100 to get the answer of 5 percent.
5 hundredths of a dollar
First, we divide a dollar into one hundred parts, where each part is 1 cent. Then, we multiply 1 cent by 5 and get 5 cents, or 0 dollars and 5 cents.
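To double-check these conversions, here is a minimal Python sketch (the variable names are ours, purely for illustration) that reproduces the fraction, decimal, percent, and money values for 5 hundredths:

    from fractions import Fraction

    hundredths = 5                            # "5 hundredths" = 5 parts out of 100
    as_fraction = Fraction(hundredths, 100)   # 5/100, which reduces to 1/20
    as_decimal = hundredths / 100             # 5 divided by 100 = 0.05
    as_percent = as_decimal * 100             # 0.05 times 100 = 5 percent
    as_cents = hundredths * 1                 # each hundredth of a dollar is 1 cent -> 5 cents

    print(as_fraction, as_decimal, as_percent, as_cents)   # 1/20 0.05 5.0 5

The same pattern works for any number of hundredths: replace 5 with another value and the fraction, decimal, percent, and cent amounts all follow.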
Hedgehog (Erinaceus europaeus) The European hedgehog (Erinaceus europaeus: Linnaeus, 1758), or common hedgehog is a hedgehog species found in western Europe. It is a generally common and widely distributed species that can survive across a wide range of habitat types. It is a well-known species, and a favourite in European gardens, both for its endearing appearance and its preference for eating a range of garden pests. While populations are currently stable across much of its range, it is thought to be declining severely in the UK. The animal appears brownish with most of its body covered by up to 6,000 brown and white spines. Leucistic, or ‘blonde’ hedgehogs occasionally occur. Such specimens are believed to have a pair of rare recessive genes, giving rise to their black eyes and creamy-coloured spines; however, they are not strictly speaking albino. They are extremely rare, except on North Ronaldsay and the Channel Island of Alderney where around 25% of the population is thought to be blonde. True albino forms of the hedgehog do also occur infrequently. This species is largely nocturnal. It has a hesitant gait, frequently stopping to smell the air. Unlike the smaller, warmer-climate species, the European hedgehog may hibernate in the winter. However, most wake at least once to move their nests. They are solitary in nature with mature males behaving aggressively towards each other. Occasionally a male and female may share a hibernating spot. The European hedgehog is omnivorous, feeding mainly on invertebrates. Its diet includes slugs, earthworms, beetles, caterpillars and other insects. The preferred arthropods are the millipedes Glomeris marginata and Tachypodoiulus niger as well as the ground beetle Carabus nemoralis. It also eats grass snakes, vipers, frogs, fish, small rodents, young birds and birds’ eggs. Some fruits and mushrooms may supplement the diet. The breeding season commences after hibernation. Pregnancies peak between May and July, though they have been recorded as late as September. Gestation is 31 to 35 days. The female alone raises the litter which typically numbers between four and six, though can range from two to ten. Studies have indicated that litter size may increase in more northern climes. The young are born blind with a covering of small spines. By the time they are 36 hours old, the second, outer coat of spines begins to sprout. By 11 days they can roll into a ball. Weaning occurs at four to six weeks of age. Longevity and mortality European hedgehogs may live to ten years of age, although the average life expectancy is three years. Starvation is the most common cause of death, usually occurring during hibernation. If alarmed, the animal will roll into a ball to protect itself. Many potential predators are repelled by its spines, but predation does occur. Remains of hedgehogs have been found in the stomachs of European badgers (Meles meles), red foxes (Vulpes vulpes) and pine martens (Martes martes). A large portion of these may be from hedgehog carcasses, especially road-kill.
- Plankton nets (one for each pair of students).
- Magnifying glasses (one for each student)
- Science Notebooks
- Microscopes (one/pair or group of students)
- Plankton sketchbooks or white paper
- Chart paper and markers
- Water droppers (one/student)
- Slides - gridded slides are preferred (one/student)
- Colored pencils
- Reference materials
NGSS Performance Expectations
Students who demonstrate understanding can:
- Support an argument that plants get the materials they need for growth chiefly from air and water. 5-LS1-1
- Use models to describe that energy in animals’ food (used for body repair, growth, motion, and to maintain body warmth) was once energy from the sun. 5-PS3-1
- Develop a model to describe the movement of matter among plants, animals, decomposers, and the environment. 5-LS2-1
- food provides animals with the materials they need for body repair and growth and the energy they need to maintain body warmth and for motion.
- plants acquire their material for growth chiefly from air and water.
- the food of almost any kind of animal can be traced back to plants.
- use a diagram to explain the flow of energy from the sun through a food web.
- support an argument with evidence about how plants (i.e., producers) in the food web obtain the matter they need to make their own food.
- define the health of an ecosystem in terms of multiple species of different types able to meet their needs in a relatively stable web of life.
- Alaskans harvest and eat mussels, clams and cockles, scallops, chitons, sea cucumbers, sea urchins, octopus, crabs, and seaweeds.
- Limpets can be used as an emergency food because they are not filter-feeders like clams, so they don’t accumulate shellfish toxins.
- Shells are used for jewelry and decoration and were traditionally used as money in some places in Alaska.
- Seaweeds are used in many manufactured products as filler and as fertilizer for gardens. The seaweed aquaculture industry is growing in several Alaskan coastal communities.
- A compound found in sea cucumbers is being used as medicine to treat cancer.
- Because the “glue” made by barnacles and byssal threads made by mussels are strong adhesives even under water, the compounds are being imitated and used in medical and surgical applications.
Review the teacher background section and the Alaskan intertidal and ocean food web models (links in the Resources section). NOTE: Not all of these models include the recycling of nutrients through decomposers.
Plankton Tows and Observation
- Decide on a water source for collecting plankton samples. (You can do this as part of your beach field trip or make another trip to the beach, to a stream or river near the school, to a harbor with a dock you can use, or to a private dock you can get permission to use.)
- Gather field trip supplies and equipment. You can make your own plankton nets (see link to instructions in the Resources section).
- Print out copies of plankton identification guides.
- To extend the learning, decide on a plankton sampling schedule that will allow your students to return to the same water source and compare their results.
Beach Field Trip
Make field trip arrangements - buses, lunches, permission slips, life jackets (if applicable), volunteers, etc. Gather nonfiction books, web links, and other resources for student research and/or schedule library time. See the Resources section below and the Field Trip Resources page for links to online sources for student research.
- Show students a plankton net and ask them what they think it is.
- Allow them time to think and share their ideas.
- Explain that it is a net to catch tiny plants and animals in water that you may not be able to see with the naked eye.
- Ask: What kind of tiny plants and animals do you think you might catch in a plankton net?
- Allow them time to think and share with a partner.
- Ask partners to share with the group and write their ideas on chart paper.
- Watch the YouTube video The Power of Plankton. (5 ¼ mins.) The end of the video shows students how plankton are monitored continuously from ships.
Ask the students: Where do plankton get their food? Guide them in a discussion about the differences between phytoplankton and zooplankton. (Phytoplankton use the sun to make their own food through photosynthesis. Zooplankton eat phytoplankton.) You can also review the differences between meroplankton and holoplankton that were described in the video, using barnacle and crab larvae as examples of meroplankton that eventually settle on the beach.
On a large piece of chart paper or a whiteboard, draw the sun and plankton and use arrows to show the flow of energy from the sun to the phytoplankton, to the zooplankton. (The arrows that show the flow of energy should begin at the sun and move out in order to describe the correct flow of energy in the web. The arrows that also show the movement of matter should point in the direction that matter is moving rather than in the direction from “who” is eating a plant or animal to the “whom” they are eating. Students often draw the arrows the other way in food webs.)
Add seaweeds as producers to the base of the food web. Ask students: What other types of matter are needed for phytoplankton and seaweeds to make their own food? (air in the form of gas dissolved into the water, plus light to supply the energy) Discuss the reason why seaweeds are only found in shallow water (they have to be able to grow tall enough that their leaf-like blades can float near or at the surface of the water and capture enough light for photosynthesis) and why phytoplankton are only found in the top layer of the ocean (where there’s enough light for photosynthesis at least part of the year in northern waters).
Ask students what eats plankton and seaweeds. Distinguish between animals that eat phytoplankton and those that eat zooplankton or both. Allow students time to think and share their ideas. Remind them that many types of plankton are one-celled organisms that can be filtered out of the water as a way to capture them for food. Guide this discussion and add the names of fish, marine birds, and marine animals to the chart. Distinguish between animals that eat plants (herbivores), animals that eat other animals (carnivores), and animals that eat both animals and plants (omnivores). Have students tell you where to add the arrows to show the direction of the flow of energy and the recycling of matter. Include humans in the food web if they don’t think of it. Always start them back at the sun to describe the origin of energy in food webs.
Add dead plankton, seaweeds, and animals, at least one animal that eats dead things (a detrivore – examples: crabs, ravens, gulls) and a decomposer (example: bacteria). Show the flow of matter into the phytoplankton and seaweeds after decomposition. (Students often leave the flow of energy and matter that happens through the process of decomposition out of food webs. This way of doing it demonstrates the cycling of matter that occurs after decomposition is completed.)
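Before drawing the class chart, it can help to double-check the direction of every arrow. The short Python sketch below (the species listed are just examples drawn from the discussion above, not a required set) stores the chart as a simple directed graph and prints each link in the direction energy, or recycled matter, moves, always starting from the sun:

    # Keys are sources of energy (or, for decomposers, recycled matter);
    # values list who receives it. Arrows always point away from the source.
    flows = {
        "sun": ["phytoplankton", "seaweeds"],
        "phytoplankton": ["zooplankton", "mussels"],
        "seaweeds": ["sea urchins"],
        "zooplankton": ["juvenile fish"],
        "juvenile fish": ["marine birds", "humans"],
        "dead plants and animals": ["bacteria (decomposers)"],
        # Decomposition returns matter (nutrients), not energy, to the producers.
        "bacteria (decomposers)": ["phytoplankton", "seaweeds"],
    }

    for source, receivers in flows.items():
        for receiver in receivers:
            print(f"{source} -> {receiver}")

If every printed line reads correctly as "energy (or recycled matter) moves from ... to ...", the arrows on the class chart are pointing the right way.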
After you have provided students an orientation to the field trip site and reviewed beach etiquette and safety rules, provide them with 20-30 minutes to explore within boundaries. Their focus questions are: Who eats whom? What evidence can you find of animals feeding or being eaten? Where on the beach and in the water do you find seaweeds? In what types of places are seaweeds absent? Chaperones can help the students find evidence that might include: - Holes drilled through shells by boring snails - Sea stars “humped up” and feeding on a clam or mussel - Animals captured by anemones and being digested - Sea stars regenerating arms or with chunks removed by gulls Bring the group back together and have them share their evidence. EXPLAIN (Structured Investigations) Intertidal/Beach Food Chains: Divide the class into groups of three or four. Their task is to move from the lower intertidal zone to the upper intertidal zone (Use a band of mussels or rockweed as an indicator of the middle intertidal if there is one on the beach or kelps as an indicator of the lower zone.) and try to construct at least one food chain in each zone. The food chain needs to include a producer, an herbivore, a carnivore, and a detrivore (that feeds on dead things). They should use the species name if they know it, but if not, they can make a drawing and write a description (e.g., a green film on a rock, a brown kelp with holes in the blades). Plankton Tow: Plankton sample can be collected in shallow water offshore of the beach, from a dock, or at another predetermined water source. - Demonstrate how to use the plankton net properly and what to do with it when they are done. Tow the net just below the surface of the water as you walk back and forth along a dock; do this while wading through the water, or drop a net with a rope from a bridge or pier into a fast-moving current. Be sure to tow the net against a current so that the current moves the plankton into the net rather than washing them out. - If there is a lot of plankton in the water, you may need to hold the net upright after a tow and wash down the sides of the net (from the outside) by pouring water from a bucket or a small spray bottle. Do this slowly enough that the plankton are all collected in the jar at the bottom of the net and then release the jar from the net. - Have students break into pairs to go and collect their samples. - Students can look at their samples with their magnifying glasses and share their observations. They may also notice differences in the color of the water that is a result of the amount of plankton in the water at different times of year and at different places. Make sure the samples are put into a container with a lid for safe transport back to the classroom. If your net is constructed with a bottle that has a cap, you can simply put the cap on the bottle. If there is no cap, students can pour contents into a clean plastic container with a lid. Back in the Classroom EXPLORE: Observing and Sketching Plankton - Back in the classroom, ask students to guess how many different types of plankton are in the samples they collected. Have them share their answers. Tell them: We are now going to take a close look at what we collected using microscopes. - Model to students how to prepare slides and use the microscopes. - Have them work in pairs to prepare slides and view their samples under the microscope. - Model with a photograph how you might sketch the plankton. - First, think about the following questions. What shape is it? 
Is it one organism, or many? What color is it? Can I see any organs inside the body? How large is this organism? - Now begin modeling a sketch. Remind students not to be intimidated. Show them how to: - Pay attention to detail and draw what they see. - Start with the outline first and then move on to the details. - Sketch lightly and darken it when you add details. - Have students sketch what they can see under their microscopes in their sketchbooks, reminding them that scientific sketching takes practice and they should just do their best. - Ask: How many organisms can you see in your water dropper sample? Do you think our estimate was reasonable? Provide website links or book for students to use to identify their plankton to a species group (Identification to species is difficult.). (See the Resources section for links.) Making a technical drawing of plankton Start with a practice sketch of a familiar item. This lets students practice the detail in which they are sketching. Bring in a fish, apple, plant, soccer ball, or class pet and model the detail in which the students should sketch. This is also a great time to introduce scale as a math extension. It may be a challenge for some students to find and focus in on live, moving plankton. You can complete this activity in two days; the first day, students can sketch plankton from examples in Zoom Gallery (species shown are pond plankton) and the second day, they can sketch live plankton. Select four different varieties of plankton and do the activity again. If you are feeling ambitious, have students sketch ALL the varieties of plankton in the Zoom Gallery and arrange them from the smallest to largest! (Adapted from Catch & Sketch Plankton, Arizona State University School of Life Sciences https://askabiologist.asu.edu/experiments/sketch-plankton/for-teachers) Harmful Algal Bloom Monitoring Involve your students in learning about harmful algal blooms and becoming citizen scientists who monitor phytoplankton in either fresh water or salt water. For more information and training, see https://products.coastalscience.noaa.gov/pmn/. This website also has a downloadable mobile app for identifying the most common saltwater phytoplankton and pronouncing their Latin species names correctly: https://coastalscience.noaa.gov/exit/?url=http://www.gano.name/shawn/phyto/ Other lesson plan and worksheets are available at It’s a Plankton Eat Plankton World Packet. ELABORATE: Food Webs Refer back to the food web the class constructed before the field trip. Ask: What evidence did you find at the beach about who eats whom? Compile the answers on a whiteboard. Ask: How do we know what animals eat if we can’t observe them often enough to know everything they eat? Call on students to share their answers and lead the discussion to how scientists look in the stomachs to determine diet or help them recall what they learned about the feeding structures and digestion systems of specific animals they may have observed in animals on previous field trips (e.g., filter-feeders, feeding structure of a sea urchin, clam siphon). Have students select one local animal they observed on the beach to find out what they eat and what eats them using the evidence listed on the whiteboard, books and/or the Internet to create a food chain poster for their animal. Students present their food chain posters to the class. When students are finished presenting, facilitate making the creation of a diagram or mural of a food web involving more than one food chain. 
After the food webs are completed, ask the students: Where do all of the food chains start? Are there other ways some of these organisms are connected? What is the difference between a food chain and a web?
Adaptations and Variations:
- Students can cut and paste pictures, rather than draw.
- Students can create a Google Slide or Web Poster instead of a drawn/colored poster.
- Students can play a food web game, using the information they have collected to make connections with other species using yarn or string.
EXTENSION: The Alaska Seas and Watersheds unit The Case of the Missing Sea Otters provides an opportunity to explore what happens when a species is removed from the ecosystem and when people and other predators compete for the same prey species.
Use the student posters, class presentations, and participation in the development of a food web diagram to assess student understanding of the following concepts:
- All energy in a food chain or web ultimately comes from the sun.
- Food chains include decomposers that break down matter into what can be used by producers to make their own food.
- Food webs are more complex than a food chain relationship between one predator and one prey species or between one consumer and one producer.
Resources:
- Building a Plankton Net
- The Invisible Watery World of Plankton
- The Power of Plankton
- The Secret Life of Plankton
- Images of Alaska plankton and other marine species
- Phytoplankton Line Drawings
- Zooplankton Line Drawings
- The Kachemak Bay Research Reserve Plankton Coloring Book
- Images of marine organisms including fish and marine invertebrate larvae (NOAA Auke Bay Laboratory, NOAA Fisheries)
- Photographs of phytoplankton
- Plankton Identification Cards and Dichotomous Key (Alaska SeaLife Center)
- Images of Alaskan species with information about food web connections
- Food Web Cards: Rocky Intertidal Habitat
- Food Web Cards: Mudflats
- Mudflat food web (Kachemak Bay. Dennis Lees.)
- Ocean Food Web Cards
- Arctic Ocean Food Web Cards
- Ocean plastic smells like food to marine animals
- Species Profiles on Alaska Department of Fish and Game
- Invertebrates - Species Profile
- Fish - Species Profile
- Birds - Species Profile
- Mammals - Species Profile
- Alaska Intertidal and Ocean Food Web Models
- North Pacific Food Web
- Gulf of Alaska Food Web
- Marine Food Web
- Arctic Ocean Food Web (includes phytoplankton, zooplankton, and sea ice algae as producers)
Nonfiction Children’s Books:
- Ocean Sunlight: How Tiny Plants Feed the Sea, by Molly Bang and Shirley Chisholm, with an accompanying science/ELA lesson plan
- Sea Soup: Phytoplankton, by Mary M. Cerullo
- Sea Soup: Zooplankton, by Mary M. Cerullo
- This is the Sea that Feeds Us, by Robert F. Baldwin
- A Whale’s Tale from the Supper Sea, by C.J. and Ba Rea (written about the Kenai Fjords area)
Resources for Extensions:
- Catch & Sketch Plankton (ASU School of Life Sciences)
- PLOSABLE: Are Plankton Super Stars
- Whale Jenga Food Web Game
- Meroplankton vs. Holoplankton (video)
Food Web Concepts
In 5th grade, the emphasis in the NGSS Life Science disciplinary core ideas related to food webs is on tracing the food of animals back to plants and tracing the movement of energy in the food web back to the sun.
Students should gain the understanding that plants take in matter that is not food (water, air, and what’s left at the end of the decomposition process) and turn it into food, and extend their understanding about “who eats whom” to interconnections within food webs that facilitate the cyclical pattern of movement of matter through ecosystems.
For a brief (approx. 2 minute-long) overview of how scientific thinking has changed about food webs over the last 50 years and the importance of understanding ocean food webs and the role of humans in them, watch this video about marine food webs.
Beach field trips provide opportunities to watch food webs in action, with sea stars feeding on mussels or clams, submerged barnacles in tide pools using their cirri to filter the water like the baleen plates of whales, and clams squirting out their waste water after filtering sea water through their body. Many consumers in the intertidal zone graze on producers in the form of seaweeds and algal films on rocks; others filter sea water for both live prey and detritus. Many intertidal invertebrate and fish species are, in turn, prey for larger, more pelagic predators such as larger fish and marine birds and mammals, which can also often be observed feeding nearshore.
Plankton: the Base of the Food Web
While plankton are too small to be observed with the naked eye as other than stationary or moving specks, net tows can be taken in shallow water near shore or from docks to provide students the opportunity to view plankton under a field microscope or using microscopes back in the classroom. Green or brown chloroplasts can often be observed within transparent phytoplankton, as well as movements of zooplankton seeking prey and moving their food through their digestive system.
The word “plankton” is derived from the Greek word “planktos” and means to drift or wander. Phytoplankton, microscopic, one-celled plankton that are capable of photosynthesis, are the base of the ocean food web. They use the energy from the sun to make their own food and, in the process, produce most of Earth’s oxygen in the ocean, the largest ecosystem on Earth, covering 70% of the planet’s surface. One-celled zooplankton eat phytoplankton; multi-cellular drifting animals like jellyfishes eat a variety of zooplankton. Zooplankton can be divided into two categories - meroplankton and holoplankton. Holoplankton spend their whole lives drifting, and meroplankton, which include many marine invertebrates, spend the first part of their lives drifting and settle down on the bottom of the ocean or in the intertidal zone when they mature.
Plankton nets are used to collect plankton samples in either coastal waters or aquatic field trip sites like streams and ponds, but plankton can also be sampled in the ocean using either nets or continuous plankton recording devices mounted on moving ships.
See the examples of Alaskan intertidal and ocean food web models. (Links in the Resources section.)
Prior Student Knowledge:
Students should have a basic understanding that plants use the energy from the sun and need air and water to make their own food through the process of photosynthesis. They should also know some intertidal ecosystem food chain connections if they have gone on beach field trips in previous years.
The emphasis in 3rd grade is on relationships between external and internal structures and their functions so students should have some knowledge of how marine invertebrates capture prey and feed as a basis for thinking about which species they graze on or prey on and which species scavenge dead matter or feed on detritus on the beach. Possible learner preconceptions, misconceptions and instructional clarifications: Learner Preconception/Misconception: The producers in food webs are always plants. Plankton are plants at the base of marine food webs. Instructional Clarification: There are many different kinds of plankton because the word plankton means that the organism is of a size, shape, and density that it drifts with ocean or stream currents. At this grade level, students can think of phytoplankton as “plant plankton,” although algae and cyanobacteria are both considered phytoplankton. Many zooplankton are one-celled but since the only criterion for being plankton is one of drifting in the currents, large multi-celled animals like jellies are also considered plankton. The main producers in marine and intertidal food webs are phytoplankton and seaweeds. Their classification is confusing. Even scientists don’t always agree on which kingdom or kingdoms to put them into. Because phytoplankton (phyto – plant-like) are one-celled, they are often classified with one-celled zooplankton (zoo = animal) and other types of one-celled animals as protists in the Kingdom Protista. The instructional emphasis at this grade should be on where different planktonic organisms fit into food webs in terms of making their own food or consuming other plankton. Learner Preconception/Misconception: Seaweeds are plants. Instructional Clarification: As fifth graders explore marine food webs which have at their base producers that aren’t classified as members of the Plant Kingdom, it’s a good time to help them understand more about the complexity of grouping organisms and build on their knowledge about what plants need to survive and aspects of plant structure and function from earlier grades. Seaweeds are macroalgae (macro = big, but there is no agreed-upon definition for algae.) All seaweeds are photosynthesizers and they lack structures that other members of the plant kingdom have. Still, some algologists (scientists who study seaweeds and other algae) place green and red seaweeds in the plant kingdom while they place brown seaweeds, which include the kelps, in a separate kingdom with one-celled diatoms and dinoflagellates. Given the complexity of the science and the lack of agreement, it’s probably best in upper elementary to allow your students to think of seaweeds as “the plants of the sea,” and phytoplankton as the “plant plankton.” Learner Preconception/Misconception: Seaweeds aren’t plants because they don’t grow in soil. Instructional Clarification: While it’s true that seaweeds lack roots which means they can grow on the beach or on the ocean bottom in shallow water, that doesn’t mean plants need soil to get what they need to photosynthesize. Plant matter actually comes mostly from air and water, not soil, and phytoplankton and seaweeds are also able to obtain what they need for photosynthesis from water and air. They can also obtain some of the carbon dioxide they need in the form of gas dissolved into the water. You can ask students about their experience with hydroponic gardening as evidence that plants can grow in water. 
Learner Preconception/Misconception: All animals that eat dead plants and animals or detritus are decomposers.
Instructional Clarification: Animals like crabs, ravens, and bottom fish are scavengers. They are consumers of organic matter in large lumps which aren’t broken down into inorganic minerals and nutrients that plants can recycle in photosynthesis. Detrivores like some marine worms and sea slugs consume much smaller clumps of detritus, much as earthworms do on land. All of these animals speed up the decay process. Decomposers like bacteria and fungi complete the process by metabolizing detritus on a microscopic scale. Detritus feeders and decomposers can thus both be considered decomposers in marine food webs, but scavengers are not decomposers.
The assessment probe Ecosystem Cycles developed by Page Keeley can be used as a pre-assessment of student understanding of these concepts.
Components of Next Generation Science Standards Addressed
Developing and Using Models: Use models to describe phenomena. (5-PS3-1)
Engaging in Argument from Evidence: Support an argument with evidence, data, or a model. (5-LS1-1)
Developing and Using Models: Develop a model to describe phenomena. (5-LS2-1)
PS3.D: Energy in Chemical Processes and Everyday Life: The energy released [from] food was once energy from the sun that was captured by plants in the chemical process that forms plant matter (from air and water). (5-PS3-1)
LS1.C: Organization for Matter and Energy Flow in Organisms: Food provides animals with the materials they need for body repair and growth and the energy they need to maintain body warmth and for motion. (secondary to 5-PS3-1) Plants acquire their material for growth chiefly from air and water. (5-LS1-1)
LS2.A: Interdependent Relationships in Ecosystems: The food of almost any kind of animal can be traced back to plants. Organisms are related in food webs in which some animals eat plants for food and other animals eat the animals that eat plants. Some organisms, such as fungi and bacteria, break down dead organisms (both plants or plant parts and animals) and therefore operate as “decomposers.” Decomposition eventually restores (recycles) some materials back to the soil. Organisms can survive only in environments in which their particular needs are met. A healthy ecosystem is one in which multiple species of different types are each able to meet their needs in a relatively stable web of life. Newly introduced species can damage the balance of an ecosystem. (5-LS2-1)
Draw on information from multiple print or digital sources, demonstrating the ability to locate an answer to a question quickly or to solve a problem efficiently. (5-LS2-1)
Include multimedia components (e.g., graphics, sound) and visual displays in presentations when appropriate to enhance the development of main ideas or themes. (5-LS2-1)
In Geometry, reflection is one of the four types of transformations. The four basic transformations are:
- Translation
- Rotation
- Reflection
- Dilation or Resizing
In this article, let’s discuss the meaning of Reflection in Maths, reflections in the coordinate plane and examples in detail.
In Geometry, a reflection is known as a flip. A reflection is a mirror image of the shape. An image will reflect through a line, known as the line of reflection. If a figure is a reflection of another figure, then every point in the figure and its corresponding point in the other figure are equidistant from the line of reflection. The reflected image should have the same shape and size, but the image faces in the opposite direction. Because the position of the figure changes, a reflection may also appear to involve a translation. Here, the original image is called the pre-image, and its reflection is called the image. The pre-image and image are commonly labelled ABC and A’B’C’ respectively. The reflection transformation may be in reference to the coordinate system (X and Y-axis).
Reflections in the Coordinate Plane
The reflection transformation may be in reference to the X and Y-axis.
Reflection over X-axis
When a point is reflected across the X-axis, the x-coordinates remain the same, but the y-coordinates are transformed into their opposite signs. Therefore, the reflection of the point (x, y) across the X-axis is (x, -y).
Reflection over Y-axis
When a point is reflected across the Y-axis, the y-coordinates remain the same, but the x-coordinates are transformed into their opposite signs. Therefore, the reflection of the point (x, y) across the Y-axis is (-x, y).
Reflection over Y = X
When a point is reflected across the line y = x, the x-coordinates and y-coordinates change places. Similarly, when a point is reflected across the line y = -x, the x-coordinates and y-coordinates change places and are negated. The reflection of the point (x, y) across the line y = x is (y, x). The reflection of the point (x, y) across the line y = -x is (-y, -x).
Reflection in a Point
A point reflection occurs when a figure is constructed around a single point known as the point of reflection or centre of the figure. For every point in the figure, another point is found directly opposite to it on the other side of the centre. Under a point reflection, the figure does not change its size or shape.
Reflection in origin (0, 0)
In the coordinate plane, we can use any point as the point of reflection. The most commonly used point is the origin. Let ABC be the triangle with coordinates A(1, 4), B(1, 1), and C(5, 1). After reflection in the origin, the pre-image ABC is transformed into A’B’C’. When you draw a line segment connecting the points A and A’, the origin should be the midpoint of the line. For reflection in the origin (0, 0), the image of the point (x, y) is (-x, -y). Hence, the coordinates of the triangle A’B’C’ are A’(-1, -4), B’(-1, -1), and C’(-5, -1).
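The coordinate rules above are easy to verify numerically. Here is a minimal Python sketch (the function names are ours, purely for illustration) that applies each rule and checks the triangle example A(1, 4), B(1, 1), C(5, 1) reflected in the origin:

    def reflect_x_axis(p):            # (x, y) -> (x, -y)
        x, y = p
        return (x, -y)

    def reflect_y_axis(p):            # (x, y) -> (-x, y)
        x, y = p
        return (-x, y)

    def reflect_y_equals_x(p):        # (x, y) -> (y, x)
        x, y = p
        return (y, x)

    def reflect_y_equals_minus_x(p):  # (x, y) -> (-y, -x)
        x, y = p
        return (-y, -x)

    def reflect_origin(p):            # (x, y) -> (-x, -y)
        x, y = p
        return (-x, -y)

    triangle = [(1, 4), (1, 1), (5, 1)]                 # A, B, C
    print([reflect_origin(p) for p in triangle])        # [(-1, -4), (-1, -1), (-5, -1)]
    print(reflect_x_axis((2, 3)), reflect_y_axis((2, 3)))                 # (2, -3) (-2, 3)
    print(reflect_y_equals_x((2, 3)), reflect_y_equals_minus_x((2, 3)))   # (3, 2) (-3, -2)

Each function is just the corresponding coordinate rule written out, so the printed results match the reflected triangle and points described above.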
Let's plunge into the history of Ukrainian musical instruments today! The most popular instruments in Ukraine have been the bandura, sopilka, violin, and tsymbaly. Folk musical instruments were used primarily at dances and for marching - for example the wedding march, as accompaniment to popular plays - like the koza, or vertep, or for simple listening enjoyment.
Musicians who play the bandura are referred to as bandurists. In the 19th to early 20th century, traditional bandura players, often blind, were referred to as kobzars. It is suggested that the instrument developed as a hybrid of gusli (psaltery) and kobza (lute).
The sopilka most commonly refers to a fife made of a variety of materials (but traditionally out of wood) and has six to ten finger holes. Sopilkas are used by a variety of Ukrainian folkloric ensembles recreating the traditional music of the various sub-ethnicities in western Ukraine, most notably that of the Hutsuls of the Carpathian mountains. Often employing several sopilkas in concert, a skilled performer can mimic a variety of sounds found in nature, including bird-calls and insects.
The skrypka (violin) is a bowed string instrument you're probably most familiar with. Older types of this instrument existed in Ukraine as early as the 9th century. The modern violin, developed in Italy, was introduced into Ukraine at the beginning of the 17th century and became extremely popular as a folk instrument in ensembles of troisti muzyky (three musicians).
The tsymbaly (cimbalom or hammered dulcimer) is a folk musical instrument whose strings are struck with two small padded sticks. It consists of a shallow rectangular or trapezoidal wooden sound box with 16–35 clusters of gut or wire strings stretched lengthwise across the deck. Tsymbaly playing is popular in western Canada among the ethnic Ukrainian diaspora there. Numerous music competitions exist and the instrument defines what "Ukrainian-ness" is in the local music scene.
We have helped a number of people find and ship musical instruments to their home in distant parts of the world. Looking for something spectacular? Get in touch - [email protected]
Building a working brain depends on complex interactions between nerve cells and their environment. Now, cutting-edge tools from both biology and physics are helping us understand how physical factors shape brain development. How do you grow a brain? We need to ask this question to understand how our brains work in normal life, and also how to combat disease when things go wrong. The answer lies right at the beginning of life, in the developing embryo, which is where nerve cells – the building blocks of our brain and nervous system – are first born. To build a brain from scratch, the newly born nerve cells send out long protrusions, called axons, which grow towards cells elsewhere in the body. The axons are the body’s electrical cables, helping cells within the brain and nervous system communicate. As the axons grow, they follow well-defined paths and encounter different environments. This is like us following a route through a city: sometimes we have to go around roadblocks, and sometimes we walk on hard pavements or soft grass. The difference between us and an axon, though, is that we can see where we are going; but how does a small part of a growing cell manage to find its way through the body? The answer is a sensitive structure called the growth cone. This is found at the end of each growing axon and resembles a microscopic hand, which moves around constantly as it samples its environment. Neuroscientists already know that growth cones sense chemicals produced by other cells; however, we also know that they temporarily stick to the surface they grow along and ‘grasp’ it as they move forward. It’s now thought that growth cones pull themselves along, just as you or I might pull ourselves up with our hands when climbing, which in turn allows axons to lengthen. If the axon’s ‘hand’ pulls on its substrate, it makes sense that the substrate’s mechanical properties – such as stiffness or how rough the surface is – could affect how axons navigate towards their targets. (Think of climbing a mountain made of soft jelly: this would be impossible, but rock is no problem because we can work up enough force on a hard surface to move upwards.) Biologists are now starting to understand how this works, using sophisticated experimental tools that combine the best of biology and physics. For example, we can now calculate exactly how strongly growth cones pull, simply by putting nerve cells on a flexible surface with tiny labelled beads embedded in it and imaging the lot with a powerful microscope. As the growth cones pull, they distort the surface and displace the beads. How far the beads move depends on how much force is applied to the surface. Another useful tool, recently borrowed from nanotechnology and materials science, is the atomic force microscope. This is basically an extension of our sense of touch: it consists of a nano-scale probe, which acts like a ‘fingertip’ that can poke single cells. A 2009 study, published in the Biophysical Journal, used the atomic force microscope to prod the ends of growing axons. When the axons hit this ‘roadblock’, they promptly drew back and grew in a different direction, showing that nerve cells adjust their behaviour to suit their physical environment. Thanks to these advances, we now know more than ever before about the factors and processes that shape brain development. We are still a long way from growing brain tissue in a dish, but we can definitely say that brains are wired on physics as well as on chemistry. Growth cones pulling – Betz et al. 
(2011) Growth cones as soft and weak force generators, PNAS 108, 13420–13425 – open access Poking growth cones – Franze et al. (2009) Neurite Branch Retraction Is Caused by a Threshold-Dependent Mechanical Impact, Biophysical Journal 97, 1883–1890 – behind paywall
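To get a rough feel for how the bead-displacement measurement described above turns into a force estimate, here is a deliberately oversimplified toy model in Python. Real traction force microscopy inverts the full elasticity equations of the gel, so both the linear-spring assumption and the numbers below are illustrative assumptions, not values taken from the cited studies:

    # Toy model: treat the patch of gel under one growth cone as a linear spring,
    # so force is approximately effective stiffness times bead displacement.
    # This ignores the real continuum mechanics of the substrate and is only
    # an order-of-magnitude sketch.

    effective_stiffness_n_per_m = 1e-3   # assumed effective stiffness of a soft gel patch (N/m)
    bead_displacement_m = 2e-7           # assumed bead displacement of 0.2 micrometres (m)

    force_n = effective_stiffness_n_per_m * bead_displacement_m
    print(f"Estimated pulling force: {force_n:.1e} N")   # 2.0e-10 N, a fraction of a nanonewton

The point of the sketch is simply the proportionality: on a softer gel the same pull moves the beads further, which is exactly why flexible substrates make these tiny forces visible at all.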
Human beings weren’t the only species selected to survive a nuclear attack in Nebraska. In 1963, Roberts Dairy Company, outside of Omaha, conducted a two-week survival test for 35 cows, one bull and two student cowhands. They built a concrete shelter under the dairy at Elkhorn that was big enough to house over 200 Golden Guernsey cows and a couple of bulls. Milk is especially susceptible to contamination by radioactive elements, and so Roberts and the Office of Civil Defense wanted to see if they could protect the cows and still produce milk. A special storage space was created for cattle feed, and water for the occupants of the shelter came from a 10,000 gallon tank buried under five feet of dirt next to a shelter wall. An auxiliary generator was available if electrical power was interrupted by an atomic blast. Five fans provided ventilation. The air was cleaned by a series of dust filters. The shelter was built at a cost of $35,000, and included living quarters for employees who were to care for the herd. This shelter was separated from the cow shelter and was stocked with supplies for 15 people. It had a separate fan that brought in fresh, filtered air and kept possible offensive odors from the cattle out of the room. There were also bunks near the cows. Two students, Dennis DeFrain and “Ike” Anderson, were involved in the experiment and reported that they missed cold foods, and especially missed cold milk. Otherwise, they and the cows were fine. They had a two-way radio loaned to them by the Omaha Civil Defense Office that enabled them to communicate with the outside world. Dairy President, J. Gordon Roberts, heard about their concerns and at the conclusion of the testing, met them at the door with a large pitcher of cold milk, paper cups, and a supply of sweet rolls. The men also complained about boredom and the monotonous food. They felt regular meals should be planned to differentiate morning from evening and reduce the problems they faced of keeping track of time. They did admit that boredom was reduced by the fact they had specific duties to perform, such as feeding the cattle. The problems with keeping track of normal day-night rhythms was a part of the experiment. Electric lights in the shelter were used to reverse the biological clocks of the cattle and reduce the amount of heat and humidity present. Lights were turned off during the hottest part of the day outside to reduce cattle activity. Then during the cooler part of the day, they were turned on to permit the cattle to feed, drink, and move around their confined quarters. Noise seemed to become a factor toward the end of men’s stay. This may have suggested that they experienced some irritability and anxiousness for the experiment to end. The two young men were asked after the completion of the project if they would go through another shelter test later on. Their response was “Well, we don’t want to look at another cow for a couple of weeks.” The cows and one lone bull (named Aristocrat) showed little signs of discomfort after their confinement. They lost a little weight, but adjusted quickly to the topsy-turvy life underground. The organizers of the shelter were convinced that the test showed cattle could live in a shelter and continue to produce. They proved that such protected existence was feasible, and could prove to be economically sound, even in peacetime, as a way of housing dairy cattle. Roberts Dairy President J. Gordon Roberts said, “In a Democracy, the citizens ultimately determine what foreign policy should be. 
If we have our people prepared for the worst, and tests like this one can contribute a great deal to that preparedness, then both Russia and China must think twice about using atomic weapons on our country.”
Scholars have been analyzing the structure of drama for nearly as long as it’s been written or performed. One of the more notable studies belongs to nineteenth-century German playwright and novelist, Gustav Freytag and his “Die Technik des Dramas” (Technique of the Drama). He didn’t originate the concept, mind you, Aristotle introduced the idea of the protasis, epitasis, and catastrophe—beginning, middle, and ending—three-act plot structure, which was later replaced with drama critic Horace’s five-act structure. But creators are never satisfied with the status quo, so when playwrights began toying around with three and four-act plays, Freytag wrote a definitive structure study—referred to as Freytag Pyramid—that explained the necessity of dividing a standard drama into the following five acts: Stage 1: Exposition—as discussed in an earlier post—introduces the setting of the story, the characters, their situation, atmosphere, theme, and the circumstances of the conflict. Traditionally, exposition occurs during the opening scenes of a story, and when expertly executed background information is only gradually revealed through dialogue between major and minor characters. Stage 2: Rising action—sometimes called complication and development—begins with the point of attack that sets a chain of actions in motion by either initiating or accelerating conflict. Difficulties arise, which intensifies the conflict while narrowing the possible outcomes at the same time. Complications usually come in the form of the discovery of new information, the unexpected opposition to a plan, the necessity of making a choice, characters acting out of ignorance or from outside sources such as war or natural disasters. In this stage, the related series of incidents always build toward the point of greatest interest. Stage 3: Climax—is the turning point, where the protagonist’s journey is changed, for the better or the worse. In comedies, the protagonist’s luck changes from bad to good, due to their drawing on hidden inner strengths. Drama is the other side of the coin, where things take a turn for the worse and reveal the protagonist’s hidden weaknesses. Stage 4: Falling action—during this stage, the conflict unravels and the protagonist either wins or loses against the antagonist. This is also where a moment of final suspense might be found, in which the final outcome of the conflict is in doubt. Stage 5: Dénouement—also known as resolution, or catastrophe— in drama, brings the events from the end of the falling action stage to the actual closing scene. Conflicts are resolved in a manner that either creates normality and a sense of catharsis for the characters, or release of tension and anxiety for the audience. In comedy, the protagonist is always better off than they were at the beginning of the story. And in tragedy, the protagonist is worse off in the end—hence the alternate title for this stage, catastrophe. As I’m sure you’re well aware, Freytag’s analysis wasn’t meant for modern drama. For starters, front-loading your story with exposition is usually the kiss of death for your audience’s declining attention span. If exposition is truly needed, it should occur naturally within your story in the smallest fragments possible. Also, modern storytellers tend to use falling action to raise the stakes of the climax for dramatic impact, having the protagonist fall short of their goal—–encountering their greatest fear of losing something or someone important to them. 
And when they’re at their lowest point, they’re struck with an epiphany, giving the protagonist the courage to take on the final obstacle, resulting in the classic climax. And there you have it. Now, sally forth and writeful… and enjoy your weekend.
During the period 1969-1972, six human expeditions to the furthest reaches of the Earth’s frontier were conducted during the Apollo Program – to the Earth’s Moon, 246,000 miles into the vastness of space. Based on current studies and analyses, as well as the current state of relevant science, technology, policies, and culture, it is doubtful that such expeditions will be conducted successfully in the foreseeable future, at least 30 years, or longer. Therefore, the basic objective of the Apollo Learning Hub is to assemble, preserve, and make available primary source records of Apollo for research, education, history, and an example of a unique human endeavor.
Over the past 10 years or so, we have seen a degradation of Apollo history, education, and research, including the loss or destruction of irreplaceable documents and records. During this period, the Apollo Program has commenced a recession from living memory into deep history, both of which have, unfortunately, inspired a multitude of re-evaluations and revisionist history. Sources of accurate information are essential to ensure the Apollo event remains as clearly defined and thoroughly recorded as possible. As an example, the “Flight Plans” used onboard the spacecraft during Apollo missions contain handwritten notes by the crew made during the mission, many of which were not recorded on the ground or elsewhere, and some of which are essential to the understanding of the mission and especially research into the results.
On the Moon's Surface
Last year, Professor Jim Head (at Brown University) and I were contacted by a European scientist, Dr. Erik Kuulkers, requesting a source of information regarding the precise pointing of the Apollo 15 X-Ray Spectrometer, one of several very large scientific instruments carried for the first time on the up-rated Apollo 15 “J” type missions. Dr. Kuulkers had been unable to find the information in any archives whatsoever, including NASA, other government archives and repositories, or private collectors. After considerable searching, we discovered that during the mission, this information had been read up from Mission Control in real time and recorded by hand in the A-15 Flight Plan.
Return to Earth
Fortunately, the appropriate page in the Flight Plan was available in my personal archives, and Dr. Kuulkers was able to complete his research. The basic story is unique and quite interesting in that the Apollo 15 X-Ray Spectrometer provided the first confirmation of the existence of “Black Holes” in space.
Welcome to the lesson on responding to hypovolemic shock. In this video, we'll discuss the means of responding to hypovolemic shock. The primary means of responding to hypovolemic shock is to provide additional volume. For children, an isotonic crystalloid, such as normal saline or lactated Ringer's, is the preferred fluid for volume resuscitation. While volume repletion is somewhat straightforward in adults, great care must be taken when administering intravenous fluids to children and infants. Careful estimates should be made concerning the amount of volume lost (for example, blood loss), the size of the individual, and the degree of deficit. Current recommendations are to administer 20 milliliters per kilogram of fluid as a bolus over five to 10 minutes and repeat as needed. In hypovolemic shock, administer three milliliters of fluid for every one milliliter of estimated blood loss, that is, a three-to-one ratio. If fluid boluses do not improve the signs of hypovolemic shock, consider administration of packed red blood cells without delay. Albumin can also be considered for additional intravenous volume for shock, trauma, and burns as a plasma expander. If fluid boluses do not improve the signs of hypovolemic shock, reevaluation of the diagnosis and consideration of occult blood loss, for example, into the GI tract, should also occur. The remaining interventions are aimed at correcting electrolyte and metabolic imbalances, for example, acid-base status, glucose, and more. This concludes our lesson on responding to hypovolemic shock. Next, we'll review responding to distributive shock.
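The dosing arithmetic above lends itself to a small worked example. The following Python sketch (not part of the lesson) simply encodes the 20 mL/kg bolus and the three-to-one crystalloid-to-blood-loss ratio stated in this lesson; the function names and the example patient values are hypothetical and are for illustration only, not clinical guidance.

```python
# Illustrative sketch of the fluid-resuscitation arithmetic described in the lesson.
# The 20 mL/kg bolus and the 3:1 crystalloid-to-blood-loss ratio come from the text;
# the function names and example numbers are hypothetical, not clinical guidance.

def isotonic_bolus_ml(weight_kg: float, dose_ml_per_kg: float = 20.0) -> float:
    """Volume of an isotonic crystalloid bolus, given over five to 10 minutes."""
    return weight_kg * dose_ml_per_kg

def crystalloid_for_blood_loss_ml(estimated_blood_loss_ml: float, ratio: float = 3.0) -> float:
    """Crystalloid volume to replace estimated blood loss at a three-to-one ratio."""
    return estimated_blood_loss_ml * ratio

if __name__ == "__main__":
    weight_kg = 15.0       # hypothetical pediatric patient
    blood_loss_ml = 200.0  # hypothetical estimated blood loss
    print(f"Initial bolus: {isotonic_bolus_ml(weight_kg):.0f} mL, repeat as needed")
    print(f"Crystalloid for blood loss: {crystalloid_for_blood_loss_ml(blood_loss_ml):.0f} mL")
```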
A roughly 3.5-mile high Martian mound that scientists suspect preserves evidence of a massive lake might actually have formed as a result of the Red Planet's famously dusty atmosphere, an analysis of the mound's features suggests. If correct, the research could dilute expectations that the mound holds evidence of a large body of water, which would have important implications for understanding Mars' past habitability. Researchers based at Princeton University and the California Institute of Technology suggest that the mound, known as Mount Sharp, most likely emerged as strong winds carried dust and sand into the 96-mile-wide crater in which the mound sits. They report in the journal Geology that air likely rises out of the massive Gale Crater when the Martian surface warms during the day, then sweeps back down its steep walls at night. Though strong along the Gale Crater walls, these "slope winds" would have died down at the crater's center where the fine dust in the air settled and accumulated to eventually form Mount Sharp, which is close in size to Alaska's Mt. McKinley. This dynamic counters the prevailing theory that Mount Sharp formed from layers of lakebed silt — and could mean that the mound contains less evidence of a past, Earth-like Martian climate than most scientists currently expect. Evidence that Gale Crater once contained a lake in part determined the landing site for the NASA Mars rover Curiosity. The rover touched down near Mount Sharp in August with the purpose of uncovering evidence of a habitable environment, and in December Curiosity found traces of clay, water molecules and organic compounds. Determining the origin of these elements and how they relate to Mount Sharp will be a focus for Curiosity in the coming months. But the mound itself was likely never under water, though a body of water could have existed in the moat around the base of Mount Sharp, said study co-author Kevin Lewis, a Princeton associate research scholar in geosciences and a participating scientist on the Curiosity rover mission, Mars Science Laboratory. The quest to determine whether Mars could have at one time supported life might be better directed elsewhere, he said. "Our work doesn't preclude the existence of lakes in Gale Crater, but suggests that the bulk of the material in Mount Sharp was deposited largely by the wind," said Lewis, who worked with first author Edwin Kite, a planetary science postdoctoral scholar at Caltech; Michael Lamb, an assistant professor of geology at Caltech; and Claire Newman and Mark Richardson of California-based research company Ashima Research. "Every day and night you have these strong winds that flow up and down the steep topographic slopes. It turns out that a mound like this would be a natural thing to form in a crater like Gale," Lewis said. "Contrary to our expectations, Mount Sharp could have essentially formed as a free-standing pile of sediment that never filled the crater." Even if Mount Sharp were born of wind, it and similar mounds likely overflow with a valuable geological — if not biological — history of Mars that can help unravel the climate history of Mars and guide future missions, Lewis said. "These sedimentary mounds could still record millions of years of Martian climate history," Lewis said.
"This is how we learn about Earth's history, by finding the most complete sedimentary records we can and going through layer by layer. One way or another, we're going to get an incredible history book of all the events going on while that sediment was being deposited. I think Mount Sharp will still provide an incredible story to read. It just might not have been a lake." Dawn Sumner, a geology professor at the University of California-Davis and a Mars Science Laboratory team member, said that the specificity of the researchers' model makes it a valuable attempt to explain Mount Sharp's origin. While the work alone is not yet enough to rethink the distribution of water on Mars, it does propose a unique wind dynamic for Gale Crater and then models it in enough detail for the hypothesis to actually be tested as more samples are analyzed on Mars, Sumner said. "To my knowledge, their model is novel both in terms of invoking katabatic [cool, downward-moving] winds to form Mount Sharp and in quantitatively modeling how the winds would do this," said Sumner, who is familiar with the work but had no role in it. "The big contribution here is that they provide new ideas that are specific enough that we can start to test them," she said. "This paper provides a new model for Mount Sharp that makes specific predictions about the characteristics of the rocks within the mountain. Observations by Curiosity at the base of Mount Sharp can test the model by looking for evidence of wind deposition of sediment." The researchers used pairs of satellite images of Gale Crater taken in preparation for the rover landing by the High-Resolution Imaging Science Experiment (HiRISE) camera onboard the Mars Reconnaissance Orbiter satellite managed by Caltech for NASA. Software tools extracted the topographical details of Mount Sharp and the surrounding terrain. The researchers found that the various layers in the mound did not form more-or-less flat-lying stacks as sediments deposited from a lake would. Instead, the layers fanned outward from the mound's center in an unusual radial pattern, Lewis said. Kite developed a computer model to test how wind circulation patterns would affect the deposition and erosion of wind-blown sediment within a crater like Gale. The researchers found that slope winds that constantly exited and reentered Gale Crater could limit the deposition of sediments near the crater rim, while building up a mound in the center of the crater, even if the ground were bare from the start, Lewis said. The researchers' results lend weight to recent questions about Mount Sharp's watery origins, Lewis said. Satellite observations had previously detected water-related mineral signatures within the lower portion of Mount Sharp. While this suggested that the lower portion might have been a series of lakebeds, portions of the upper mound were more ambiguous, Lewis said. First of all, the upper layers of the mound are higher than the crater walls in several places. Also, Gale Crater sits on the edge of Mars' northern lowlands. If it had been filled with water to near the height of Mount Sharp, then the entire northern hemisphere would have been flooded. Soil analyses carried out by Curiosity — the rover's primary mission is two years, but could be extended — will help determine the nature of Mount Sharp and the Martian climate in general, Lewis said.
Wind erosion relies on specific factors such as the size of individual soil grains, so information gleaned from the Curiosity mission will help determine Martian characteristics such as wind speed. On Earth, sediments need some amount of moisture to become cemented into rock. It will be interesting to know, Lewis said, how the rock layers of Mount Sharp are held together and how water might be involved. "If the mechanism we describe is correct, it would tell us a lot about Mars and how it operates because Mount Sharp is only one of a class of enigmatic sedimentary mounds observed on Mars," Lewis said. The paper, "Growth and form of the mound in Gale Crater, Mars: Slope wind enhanced erosion and transport," was published in the May 2013 issue of the journal Geology. The work was supported by grants from NASA, Caltech and the Princeton Department of Geosciences' Harry Hess fellowship. Publication: Edwin S. Kite, et al., "Growth and form of the mound in Gale Crater, Mars: Slope wind enhanced erosion and transport," 2013, Geology, v. 41, no. 5, p. 543-546; DOI: 10.1130/G33909.1
How Did St. Clair's Defeat Happen? On November 4, 1791, on the banks of the Wabash River in what is now western Ohio, the United States Army suffered its worst defeat of the entire U.S.-Indian Wars. The battle, alternatively known as St. Clair's Defeat, the Battle of the Wabash, the Battle of the Wabash River or the Battle of the Thousand Slain, remains little known among most Americans and has been somewhat ignored by academia. Although three times more Americans lost their lives in this battle than at Little Bighorn, the engagement is remembered, when it is remembered at all, simply as "St. Clair's Defeat." Some academics attribute the lack of interest in the battle to the American commander, General Arthur St. Clair, who as governor of the Northwest Territory was more of a politician than a general. Others point to the apparent anonymity of the Indian leaders – modern scholars believe they know which chiefs led the warriors in battle, but are not sure about their roles. Whatever the reasons for the lack of interest in the battle, all scholars agree that it played a pivotal role in the Northwest Indian War (1785-1795), setting the stage for future American-Indian conflicts. As significant as St. Clair's Defeat may have been in the broader geopolitical situation between the Northwest Indian tribes, the infant United States, and Great Britain, the immediate question that many ask is: how was the United States Army beaten so soundly by an Indian army? The answer is, of course, a bit complex and involves an examination of both armies. For the Americans, their leadership was sub-par, which was especially pronounced in poor planning, intelligence, and logistics. On the other hand, the Indians were a unified and committed army who were led by able and seasoned leaders. The Western Indian Confederacy Many of the eastern Indian tribes supported the British during the American Revolution because they were told the British would stop or severely limit white migration and settlement west of the Appalachian Mountains and south of the Great Lakes. Whether that would have actually happened is irrelevant because the Americans won and the British-allied tribes were forced to move farther west. The Miami, Shawnee, and Delaware tribes ended up in what are now the states of Ohio and Indiana, but what was in the late eighteenth century the Northwest Territory. The tribes formed several semi-permanent communities, one of which was composed of seven villages at the headwaters of the Maumee River near what is today the city of Fort Wayne, Indiana. Farther up the Maumee River, where it meets the Auglaize River near the modern city of Defiance, Ohio, there was another Indian community known as "The Glaize." The Glaize was established in 1789, becoming a home and meeting place for the different western tribes. The Glaize was also home to British and French trading posts and became the headquarters of the Western Indian Confederacy. The Americans viewed this strong Indian presence in the Northwest Territory as an impediment to their long-term goals. With that said, many members of President Washington's administration believed the tribes could be dealt with peacefully, as long as the British were not in the picture. Continued British Influence Under the Treaty of Paris of 1783, which recognized American independence, the British were to cede the land west of the Appalachians to the Mississippi River and south of the Great Lakes to the United States. The British were also to close their forts within that area and end their support of the Indian tribes.
The British neglected to close their forts in the Northwest Territory and continued to trade with the Indians at the Glaize and other locations. When the Western Indian Confederacy formed in 1786, the British helped arm the new army and offered it logistical support. The Shawnee chief, Blue Jacket, in particular, was known to work and trade with the British, but most of the Western Indian Confederacy tribal leaders maintained working relations with the British. President Washington was not happy with the British-Indian situation in the Northwest, but in the late eighteenth century it would be the Indians who suffered the brunt of American anger. The U.S. Goes to War against the Western Indian Confederacy As the Northwest Indian War raged, Secretary of War Henry Knox ordered the construction of Fort Washington in 1789 on the banks of the Ohio River in the location of what is today Cincinnati. Fort Washington was to be the first in a series of forts throughout the Northwest Territory extending to the headwaters of the Maumee River, which would eventually either lead to the total defeat of the Western Confederacy or its banishment west of the Mississippi. The military began mustering a large force of both regular army and militia in 1790 in anticipation of this major campaign, which was led by General Josiah Harmar. The campaign was ill-advised and began with Knox telegraphing its moves by warning the British, who in turn warned their Indian allies. Although Harmar was able to burn about 300 Indian villages, he lost 200 men in battle and returned to Fort Washington defeated. Harmar's loss meant that Knox would attempt to use diplomacy with the Indians one last time. The gulf between the desires of the Americans and the Western Indian Confederacy was too great, though, so the talks ended in the spring of 1791. Both sides began mobilizing for war, and it soon became apparent which side was better prepared. Although General Arthur St. Clair had acquitted himself quite well in the French and Indian War and the American Revolution, he was fifty-four years old in 1791 and past his prime as a soldier. St. Clair had for the most part moved on past his military career and was at the time a civilian leader. He was the governor of the Northwest Territory, but he still had a longing for the military and wanted to lead a force to defeat the Western Indian Confederacy personally. A force of regular army and militia began mustering at Fort Washington in the summer of 1791 to finish what Harmar had started by destroying the Miami, Shawnee, and Delaware villages and erecting U.S. Army forts. The more than 2,000-man army left Fort Washington and quickly built two forts in October before coming to the headwaters of the Wabash River in early November. It would be at this location where St. Clair would be handed his humiliating defeat. A good commander will do everything in his power to win a battle before a shot is even fired. This is accomplished through proper intelligence of the enemy, planning, and ensuring proper logistics. General St. Clair failed on all of these counts. The intelligence that St. Clair's forces gathered about their enemy and the terrain they were in was woefully inadequate and totally lacking in some respects. They were not sure which chief was in charge of the Indian army they faced and, even worse, they did not have any idea of their enemy's numbers; St. Clair was not even sure about the name of the river. Logistical problems compounded the lack of intelligence.
Although St. Clair brought cannons on the campaign that were enough to give him an edge in any battle, they were misused from the start. He had the cannons placed on a bluff overlooking the battlefield, but they were aimed too high and did no damage. The cannons could have given the Americans a major advantage in the battle, but instead they became a hindrance. Another logistical mistake St. Clair made was dividing, or allowing his camp to divide, into two separate camps. The militia camped on the opposite side of the river from the army regulars, which proved to be a great advantage for the Indian forces once the battle began. The camp division between the regular army and the militia was indicative of more significant divisions and problems within the American force. About half of the men on the battlefield were backcountry militia, many from Kentucky, who were poorly trained and had problems with the authority of the regular army officers. Not long after the force left Fort Washington, the lack of discipline among the militia members was exacerbated by illnesses that afflicted the troops, including St. Clair. Many of the militia members deserted and St. Clair was forced to send regular army soldiers to retrieve them, decreasing the size of the American force to about 1,000 men on the eve of the battle. The American force was beset by numerous problems, while on the other hand the Indians had many advantages going into the battle. Western Indian Confederacy Strengths Militarily speaking, the tribes of the Western Confederacy valued individualism and generally eschewed dictatorial-type leadership on the battlefield. This philosophy generally cost them against the Americans, even when they had British-supplied guns, but they were able to change their outlook temporarily when they faced St. Clair. The leaders of the Western Indian Confederacy all came together to designate their campaign against the Americans as a tribally mandated one, which made it a national or even a racial war. It was, therefore, a war that transcended individual glory and one where victory was more important than taking scalps or booty. It is believed that Little Turtle of the Miami tribe was the primary war chief and he was supported by Blue Jacket of the Shawnee tribe and Buckongahelas of the Delaware. The chiefs relinquished some of their authority over their warriors to Little Turtle, who in turn listened to their counsel before and during the battle. Little Turtle and the other chiefs probably shadowed St. Clair's army for some time, gauging its strengths and weaknesses in the process and picking the time and place to strike. Finally, on the evening of November 4, 1,000 warriors from the Western Indian Confederacy emerged from the forests at the headwaters of the Wabash River and attacked the Kentucky militia camp. The militia was caught off guard and quickly retreated to the other side of the river, which caused even more confusion among the Americans. The Americans responded with their own charge, but the Indian warriors feigned a retreat to the woods, where more warriors were waiting.
While some Indian warriors were fighting the front line of the Americans, others were using the cover to take focused sniper shots at American officers and the artillery batteries. Eventually, the Indian warriors formed a crescent that outflanked the American artillery batteries, forcing the Americans to spike the cannons. After three hours the battle was lost, forcing St. Clair to take what was left of his forces and limp back to Fort Washington. The Americans lost more than 600 men, while the Western Indian Confederacy lost only twenty-one of their warriors. Depending on one's perspective, St. Clair's defeat was the greatest victory by an Indian force against the Americans or the worst defeat of the American military at the hands of an Indian adversary. Although the outcome of the battle shocked many Americans at the time, an examination reveals that the factors that led to the Indian rout were apparent. The American force had severe problems with a lack of intelligence, logistical problems, and morale issues. On the other hand, the Indian force was better prepared and much more motivated. Although the Western Indian Confederacy would lose the Northwest Indian War, their victory over General Arthur St. Clair became one of the most important battles in early American history.
- Tanner, Helen Hornbeck. "The Glaize in 1792: A Composite Indian Community." Ethnohistory 25 (1978), p. 16
- Tanner, pp. 16-18
- Williams, Samuel C. "The Southwest Territory to the Aid of the Northwest Territory." Indiana Magazine of History 37 (1941), p. 152
- Tanner, p. 16
- Tanner, p. 31
- Williams, p. 153
- Eid, Leroy V. "American Indian Military Leadership: St. Clair's 1791 Defeat." Journal of Military History 57 (1993), pp. 76-77
- Eid, p. 73
- Williams, p. 154
- Williams, p. 156
- Williams, p. 154
- Eid, p. 82
- Tanner, p. 20
- Eid, p. 83
Within the UK there are many laws and acts that are important for inclusion in education. These include The Equality Act (2010), The Human Rights Act (1998) and The SEN Code of Practice (2015). In Australia, there is The Disability Discrimination Act (1992). In Finland, there is The Basic Education Act (1998). In Kenya, there is The Basic Education Act (2013). The list could go on and on, but it's important to know that in multiple countries around the world there is legislation to support SEN inclusion. The Universal Declaration of Human Rights 1948 is "a milestone document in the history of human rights" (UN.org). This was devised when representatives from all over the world came together to set out a list of fundamental human rights. It is made up of 30 Articles, each representing a human right. Some of these human rights include: the right to life, liberty and security of person; that no one shall be held in slavery or servitude; the right to freedom of opinion and expression; the right to work; and that everyone is equal before the law. The Universal Declaration of Human Rights was proclaimed by the United Nations. The majority of countries observe these human rights. However, countries such as Libya, North Korea and Sudan are deemed abusers of human rights. None of these three countries has equality in educational rights, which goes against Article 26 (everyone has the right to education) and Article 1 (all human beings are born free and equal in dignity and rights). The Universal Declaration of Human Rights is important within education and inclusion as the declaration states that "Everyone has the right to education" (Article 26). This is important as it shows that anybody, no matter their race, gender, or physical and mental health, has the right to be in education. Another important article within The Universal Declaration of Human Rights is: "All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood" (Article 1). These two articles state that everyone is equal and everyone is entitled to an education. Therefore, it can be interpreted that SEN inclusion in mainstream schools is highly important. In 1994, UNESCO along with the Government of Spain held the World Conference on Special Needs Education. Within this conference, there were 300 participants who were representatives of 92 governments and 25 international organisations. This is where they devised The Salamanca Statement. The Salamanca Statement was formed over the agreement that every child with a disability or special educational needs should have the opportunity to be in education, and that inclusion should be the "norm". They believed that every child is unique in his or her learning needs and that education systems should be designed to cater for all of these needs. It was important to them that children with special educational needs would have the opportunity to go to mainstream schools if they wished and that mainstream schools would be able to meet the needs that any child had. This was where it was decided that all schools should have the provisions for all children, special educational needs or not. The Salamanca Statement recognises how important it is for those with SEN to be within the mainstream educational system.
It declares that: "The education system should be designed and educational programmes implemented to take into account the wide diversity of these characteristics and needs." The Salamanca Statement states that mainstream schools that introduce SEN inclusion are the most likely way to overcome discrimination. The statement said that the priority areas were early childhood education, girls' education, preparation for adult life, and adult and continuing education. These four areas were prioritised because they are often overlooked, despite being extremely important. In 2015, PISA conducted a survey of over 540,000 students (15 years of age) from 72 countries. This survey focused on science, along with reading, mathematics and collaborative problem solving. PISA is a programme that every few years conducts a worldwide study to assess student acquisition of key skills and knowledge obtained throughout their "compulsory education". From the results of the 2015 PISA, it was found that in performance in science, reading and mathematics, South Korea had a mean performance and share of top performers above the OECD average. However, in the results for science beliefs, engagement and motivation, it was found that South Korea had values below the OECD average. In the results for equity in education, it was found that South Korea was above the OECD average.
In these Coping Skills Worksheets, students will sort coping strategies into “smart” and “not smart” columns. When I need to calm down I go to the boxing gym, maybe have some wine, shirk my responsibilities and watch TV all afternoon, call my best friend, etc. As an adult, I’ve learned plenty of coping strategies that work for me when my emotions get too big. Kids often don’t have these options, as their schedules and choices are usually dictated by teachers or parents. But there are some accessible strategies that students can use to calm down no matter where they are. They just need to know what they are and when to use them. Since kids don’t have a lot of freedom at school, make a safe space in your room to cool down and make it clear to students that they are free to use it or use other strategies like walking away when they feel they are losing control of their emotions. This printable will introduce students to smart coping skills for kids and help them identify which behaviors are not a smart way to calm down. Bonus: for your kinesthetic learners, turn this into an active lesson! Label two buckets or waste bins SMART and NOT SMART, and write the coping skills for kids on pieces of paper. After students read the coping strategy, they should ball up the paper and shoot it into the corresponding bucket.
Two recent Canadian studies may seem disheartening at first glance: Babies as young as 6 months show a preference for their own race and bias against people who are not their race. But, say the researchers responsible, determining the start of racial bias could be crucial to its prevention. In one study, researchers found that babies linked faces from their own races to happy music, and faces from other races to sad music. This association, however, did not emerge until babies were 6 months old (the study tested babies 3 to 10 months old). The second study looked at how willing babies were to learn from other races, by using a series of videos and tracking babies’ responses to reliable and unreliable gazes. Once the babies were 6 to 8 months old, the researchers found, they preferred to gain information — about where to look, in this case — from members of their own race. “These findings thus point to the possibility that aspects of racial bias later in life may arise from our lack of exposure to other-race individuals in infancy,” Dr. Kang Lee, a professor at the University of Toronto’s Dr. Eric Jackman Institute of Child Study and lead author on both studies, said in a statement. “If we can pinpoint the starting point of racial bias, which we may have done here, we can start to find ways to prevent racial biases from happening.” “An important finding is that infants will learn from people they are most exposed to,” added Dr. Naiqi Xiao, a professor at Princeton and author on both studies. One way to prevent racially biased adults, then, may be for parents to make sure they introduce their kids to people of all races from the time they’re very, very young.
Despite the record number of Latino high school graduates enrolling in college, literacy rates among Latinos continue to be dismally low. How can that be? It's certainly not because Latino parents don't want their children to succeed. But frequently they don't realize the important role they themselves play in their children's literacy development. Here are tips from birth through elementary school. Literacy is a skill that is developed from birth. Those infant coos and babbles are a child's first attempt at communicating. To speak, babies must learn phonemic awareness, which is the ability to hear, identify, and manipulate individual sounds in spoken words. This process is developed within the first eight to 10 months of life. A baby who is frequently spoken to and read to will develop this skill better than one who receives little interaction. And the ability to identify individual sounds is one of the basic skills necessary for reading and writing. Read. Your child is never too young to be read to – in fact, reading to a child increases their vocabulary, which makes learning to read easier. Read with expression, not in a dull, monotonous voice. Colorful books with animals or other babies are especially attractive. But it doesn't matter what you read, just pick something up and start reading. Talk to your child and encourage him to respond. Interacting one-on-one with your child promotes healthy brain development, so make sure they can see you speaking, not just hear you. During the toddler years, children continue to develop their preliteracy skills by learning to speak, increasing their vocabulary, and developing the fine-motor skills that are necessary for writing. Letter recognition is a must and should begin at home before they even enter preschool. Sing. Singing the alphabet song, rhymes, poems, and fingerplays is a great way to develop all of your child's preliteracy skills. Color. Give your child plenty of coloring books, paper, pencils, crayons, and other art supplies. All that creativity helps them learn how to hold and control a pencil, and lets them practice "writing." Write. Teach your child to write his or her name. This is easier for those with shorter names, but it is important for a child to recognize his or her own name. This activity also helps them begin to learn that sounds are associated with specific letters. Hang it up. Look for wall art and other decorations that include words. A print-rich environment supports literacy instruction and helps children get a jump on reading. You might hang a sign that says "Ana's Bed" or "Juan's Room." Or you could write the word "Books" or "Read!" above their bookshelves. There are many possibilities. In elementary school, your child actually begins learning to read and write. Supplement their schoolwork by helping your child read at home. Look for emergent readers at your local library or bookstore, then take time each day to read together. Ask questions. Read books together with your child and take the time to pause and ask your child questions about the story to measure their comprehension. Why do you think that happened? What do you think will happen next? What would you have done differently? All these questions and more require your child to think about the story. Retell it. After reading a story, ask your child to retell what the story was about in his own words. Variations of this include having your child draw the story out on a sheet of paper, or even act it out. By fourth grade your child should be reading fluently.
They should be able to decipher long, complicated words and understand the main idea of a story. This is not an impossible task, and can be achieved relatively easily with parental involvement and steady practice. Monica Olivera Hazelton is an NBC Latino contributor and the founder and publisher of MommyMaestra.com, a site for Latino families that homeschool, as well as for families with children in a traditional school setting who want to take a more active role in their children's education. She is the 2011 winner of the "Best Latina Education Blogger" award by LATISM.
Parallel axis theorem

The parallel axis theorem, also known as the Huygens–Steiner theorem, or just as Steiner's theorem, named after Christiaan Huygens and Jakob Steiner, can be used to determine the mass moment of inertia or the second moment of area of a rigid body about any axis, given the body's moment of inertia about a parallel axis through the object's center of gravity and the perpendicular distance between the axes.

Mass moment of inertia

Suppose a body of mass m is made to rotate about an axis z passing through the body's centre of gravity. The body has a moment of inertia Icm with respect to this axis. The parallel axis theorem states that if the body is made to rotate instead about a new axis z′ which is parallel to the first axis and displaced from it by a distance d, then the moment of inertia I with respect to the new axis is related to Icm by

I = Icm + md².

Explicitly, d is the perpendicular distance between the axes z and z′. We may assume, without loss of generality, that in a Cartesian coordinate system the perpendicular distance between the axes lies along the x-axis and that the center of mass lies at the origin. The moment of inertia relative to the z-axis is

Icm = ∫ (x² + y²) dm.

The moment of inertia relative to the axis z′, which is a perpendicular distance d along the x-axis from the centre of mass, is

I = ∫ ((x − d)² + y²) dm.

Expanding the brackets yields

I = ∫ (x² + y²) dm + d² ∫ dm − 2d ∫ x dm.

The first term is Icm and the second term becomes md². The integral in the final term is a multiple of the x-coordinate of the center of mass, which is zero since the center of mass lies at the origin. So, the equation becomes:

I = Icm + md².

The parallel axis theorem can be generalized to calculations involving the inertia tensor. Let Iij denote the inertia tensor of a body as calculated at the centre of mass. Then the inertia tensor Jij as calculated relative to a new point is

Jij = Iij + m(|R|² δij − Ri Rj),

where R is the displacement vector from the centre of mass to the new point, and δij is the Kronecker delta. For diagonal elements (when i = j), displacements perpendicular to the axis of rotation result in the above simplified version of the parallel axis theorem. The generalized version of the parallel axis theorem can be expressed in the form of coordinate-free notation as

J = I + m [(R · R) E3 − R ⊗ R],

where E3 is the 3 × 3 identity matrix and ⊗ is the outer product. Further generalization of the parallel axis theorem gives the inertia tensor about any set of orthogonal axes parallel to the reference set of axes x, y and z, associated with the reference inertia tensor, whether or not they pass through the center of mass.

Area moment of inertia

The parallel axes rule also applies to the second moment of area (area moment of inertia) for a plane region D:

Iz = Ix + A r²,

where Iz is the area moment of inertia of D relative to the parallel axis, Ix is the area moment of inertia of D relative to its centroid, A is the area of the plane region D, and r is the distance from the new axis z to the centroid of the plane region D. The centroid of D coincides with the centre of gravity of a physical plate with the same shape that has uniform density.

Polar moment of inertia for planar dynamics

The mass properties of a rigid body that is constrained to move parallel to a plane are defined by its center of mass R = (x, y) in this plane, and its polar moment of inertia IR around an axis through R that is perpendicular to the plane. The parallel axis theorem provides a convenient relationship between the moment of inertia IS around an arbitrary point S and the moment of inertia IR about the center of mass R.
Recall that the center of mass R has the property

∫ ρ(r) (r − R) dV = 0,

where r is integrated over the volume V of the body. The polar moment of inertia of a body undergoing planar movement can be computed relative to any reference point S,

IS = ∫ ρ(r) |r − S|² dV,

where S is constant and r is integrated over the volume V. In order to obtain the moment of inertia IS in terms of the moment of inertia IR, introduce the vector d from S to the center of mass R, so that r − S = (r − R) + d and

IS = ∫ ρ(r) |r − R|² dV + 2 d · ∫ ρ(r) (r − R) dV + (d · d) ∫ ρ(r) dV.

The first term is the moment of inertia IR, the second term is zero by definition of the center of mass, and the last term is the total mass of the body times the square magnitude of the vector d. Thus,

IS = IR + M d²,

which is known as the parallel axis theorem.

Moment of inertia matrix

The inertia matrix of a rigid system of particles depends on the choice of the reference point. There is a useful relationship between the inertia matrix relative to the center of mass R and the inertia matrix relative to another point S. This relationship is called the parallel axis theorem. Consider the inertia matrix [IS] obtained for a rigid system of particles measured relative to a reference point S, given by

[IS] = −Σ mi [ri − S][ri − S],

where ri defines the position of particle Pi, i = 1, ..., n. Recall that [ri − S] is the skew-symmetric matrix that performs the cross product,

[ri − S] y = (ri − S) × y,

for an arbitrary vector y. Let R be the center of mass of the rigid system, so that

ri − S = (ri − R) + d,

where d is the vector from the reference point S to the center of mass R. Use this equation to compute the inertia matrix,

[IS] = −Σ mi [(ri − R) + d][(ri − R) + d].

Expand this equation to obtain

[IS] = −Σ mi [ri − R][ri − R] − (Σ mi [ri − R])[d] − [d](Σ mi [ri − R]) − (Σ mi)[d][d].

The first term is the inertia matrix [IR] relative to the center of mass. The second and third terms are zero by definition of the center of mass R, since Σ mi (ri − R) = 0. And the last term is the total mass of the system multiplied by the square of the skew-symmetric matrix [d] constructed from d. The result is the parallel axis theorem,

[IS] = [IR] − M [d]²,

where d is the vector from the reference point S to the center of mass R.

Identities for a skew-symmetric matrix

In order to compare formulations of the parallel axis theorem using skew-symmetric matrices and the tensor formulation, the following identities are useful. Let [R] be the skew-symmetric matrix associated with the position vector R = (x, y, z). Then the product that appears in the inertia matrix can be computed using the matrix formed by the outer product [R RT], using the identity

−[R]² = |R|² [E3] − [R RT],

where [E3] is the 3 × 3 identity matrix. Also notice that

|R|² = R · R = tr([R RT]),

where tr denotes the sum of the diagonal elements of the outer product matrix, known as its trace.

References
- Arthur Erich Haas (1928). Introduction to Theoretical Physics.
- A. R. Abdulghany, American Journal of Physics 85, 791 (2017); doi:10.1119/1.4994835.
- Paul, Burton (1979), Kinematics and Dynamics of Planar Machinery, Prentice Hall, ISBN 978-0-13-516062-6.
- T. R. Kane and D. A. Levinson, Dynamics, Theory and Applications, McGraw-Hill, NY, 2005.
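As a quick numerical check of the results above, here is a short Python sketch (not part of the original article) that verifies both the tensor form and the scalar z-axis form of the parallel axis theorem for an arbitrary collection of point masses. The masses, positions, and reference point are made-up values chosen only for illustration.

```python
# Numerical check of the parallel axis theorem for a system of point masses.
# Illustrative sketch only; masses, positions, and the reference point S are arbitrary.
import numpy as np

def inertia_tensor(masses, positions, origin):
    """Inertia tensor of point masses about the given origin: sum m (|r|^2 E3 - r r^T)."""
    I = np.zeros((3, 3))
    for m, p in zip(masses, positions):
        r = p - origin
        I += m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))
    return I

masses = np.array([1.0, 2.0, 3.0, 4.0])
positions = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 2.0, 0.0],
                      [0.0, 0.0, 3.0]])

M = masses.sum()
com = (masses[:, None] * positions).sum(axis=0) / M   # center of mass R

S = np.array([1.0, -2.0, 0.5])   # arbitrary new reference point
d = com - S                      # displacement between S and the center of mass

I_cm = inertia_tensor(masses, positions, com)
I_S_direct = inertia_tensor(masses, positions, S)

# Tensor form: J = Icm + M (|d|^2 E3 - d d^T); the sign of d does not matter here.
I_S_theorem = I_cm + M * (np.dot(d, d) * np.eye(3) - np.outer(d, d))
assert np.allclose(I_S_direct, I_S_theorem)

# Scalar form about the z-axis: I = Icm + M * (perpendicular distance)^2,
# where the perpendicular distance is measured in the x-y plane.
d_perp_sq = d[0]**2 + d[1]**2
assert np.isclose(I_S_direct[2, 2], I_cm[2, 2] + M * d_perp_sq)

print("Parallel axis theorem verified:\n", I_S_theorem)
```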
Oobleck
1 ½ cups of corn starch
1 cup water
Food color, if desired
Slowly stir the water into the cornstarch until it is the consistency of pancake batter. Add food color if desired, keeping in mind some food coloring will stain clothing and skin. Store the Oobleck in an airtight container.

Flubber
¾ cup warm water
1 cup Elmer's glue
Food coloring, if desired
2 teaspoons Borax
½ cup warm water
2 mixing bowls
Stir the ¾ cup warm water into the Elmer's glue. Add food coloring as desired. In a separate container mix the Borax with ½ cup of warm water. Pour the glue mixture into the Borax mixture. Reach into the bowl and knead the substance together for about 2 minutes.

Oobleck and Flubber can be used to teach students about states of matter and how appearances can be deceiving. Students should understand the basic states of matter: solid, liquid and gas. Water can be used to demonstrate these states easily by freezing it or boiling it to demonstrate ice and steam. Discuss some of the characteristics of solids and liquids. Make a list of these characteristics where students can review it throughout the lesson. Use the following activity to help students understand that some substances that may appear to be solids are actually liquids. Divide the students into small groups and give each group a recipe to follow, either for Oobleck or for Flubber. Have the students make the recipe and then decide if it is a solid or a liquid. Provide each group with a small pitcher, a piece of craft wire, a small plastic glass and an open area of floor. Give each group a piece of paper with the following questions:
- What happens when I put it into the pitcher?
- Can I form a baseball shape? What happens if I slap the baseball with my hand? What happens to that shape if I set it on a desk?
- What happens to the baseball shape if I drop it on the ground?
- Can I cut through the baseball shape with the wire? What happens?
- If I put the "stuff" into the glass, turn the glass over on top of a desk and take the glass off, what happens to the "stuff"?
Students should predict what they think will happen for each experiment before they perform it.

The students should be given enough time to perform these simple experiments and record their observations. Once all of the observations have been made, bring the class back together to discuss them. The following should be the observations made for Oobleck and Flubber:
- The Oobleck should relax in the pitcher and cover the bottom, no matter how it was put into the pitcher. The Flubber will remain in the shape it was in when the students put it into the pitcher.
- Both can be used to form a baseball shape. Both will retain their shape when slapped. The Oobleck will flatten as soon as it is placed on a hard surface. The Flubber will maintain its shape.
- The Flubber will bounce and the Oobleck will not.
- Both can be cut with a wire.
- The Oobleck will lose its shape immediately and the Flubber will maintain its shape.
If enough is made, you can have the students who made Oobleck combine theirs into a shallow baking dish and roll a marble across the top. The marble should not sink into the Oobleck.

Conclusion and Additional Information
Both substances appear to be solids; however, Oobleck is actually still a liquid. Glass is often given as another example of a substance that appears to be a solid but is described as a liquid, though this is a common misconception; old windows that are thicker at the bottom owe this to how the glass was manufactured rather than to slow flow.
Students can put their Flubber or Oobleck into plastic bags to take home.
World AIDS Day celebrates its 30th anniversary this year with the theme of 'Know Your Status'. Great progress has been made since the first World AIDS Day in 1988 – 3 in 4 people living with HIV today know their status. However, the work is not yet done – especially for women. Women account for more than half of the people living with HIV worldwide. In particular, adolescent girls (10-19 years) and young women (15-24 years) are significantly affected by HIV and have high prevalence rates. In Eastern and Southern Africa, young women account for 26% of new HIV infections despite making up only 10% of the population. Statistically, young women will acquire HIV five to seven years earlier than their male counterparts. Why are women and girls at high risk of infection? HIV disproportionately affects young women and girls because of their unequal social, cultural and economic status in society. These challenges include gender-based violence, laws and policies that undermine women, and harmful cultural and traditional practices that reinforce stigma and the dynamic of male dominance. Here are some other reasons why gender inequality leaves women vulnerable to HIV:
- Lack of access to healthcare services – women encounter barriers to health services on individual, interpersonal, community and societal levels.
- Lack of access to education – studies show that educated girls and women are more likely to make safer decisions regarding sexual and reproductive health and have a lower risk of partner violence.
- Poverty – an existing and overarching factor that increases the impact of HIV.
- Gender-based violence and intimate partner violence – these types of violence prevent young women from protecting themselves from HIV.
- 'Blesser/Sugar Daddy' culture and transactional sex – sex with older men for monetary or material benefits exposes young women and girls to low condom use, unsafe sexual practices and increased rates of STIs.
- Child marriage – girls who marry as children are likely to be abused by their husbands and forced into sexual practices.
- Biological factors – adolescent girls are susceptible to higher rates of genital inflammation, which may increase the risk of HIV infection through vaginal intercourse.
Importance of HIV testing HIV testing in young women and girls is essential. Many receive access to treatment and care services after testing. Some important determinants of testing are:
- Going through antenatal care
- Being married
- Having primary and secondary education
We need to aim for more young women and girls to be tested so that they know their status and can access adequate care and treatment services. HIV testing is necessary for expanding treatment and ensuring that people with HIV have healthy, productive lives. Addressing the Impact To address the impact of HIV on young women and girls we need to have approaches and interventions that incorporate the diverse perspectives of women and girls. This is needed on all platforms, from campaigning and policy-making to program design. As the World Health Organization recommends, a woman-centred approach that includes women as participants is required, so that our needs, rights and preferences are considered. Better strategies are needed across all health systems to improve accessibility, acceptability, affordability, uptake, equitable coverage, quality, effectiveness and efficiency of services, particularly for adolescent girls worldwide.
Edit 19 June 2016: the contents of this post, like the others in the series, do not necessarily reflect my current views. While writing my paper, I have come to a better understanding of the systems involved. Please refer to this soon-to-be-published paper for an updated analysis. In this third installment, we will expand our list of the constellations that are represented on folios f76v, f79r, f79v, f80r and f80v. As I've explained in posts (1) and (2), I think these five folios contain anthropomorphic depictions of the classical constellations, structured in the mnemonic form of Greco-Roman myth. Before we continue, I will first mention some new insights about these pages in general. These posts are a reflection of my ongoing research, so changes happen:
- Given the fact that visual parallels are found in Greco-Roman artifacts and early manuscripts based on the same tradition (like a Revised Aratus Latinus or an Aratea, both early 9thC) and the fact that reference is made to the relative position on the Great Circles, it seems safe to assume that the underlying system is astronomical. This replaces the previous hypothesis that these five folios were based on a lesser-known system for navigation.
- While the Farnese Atlas statue offered an attractive visual starting point, I have explored a large number of other sources since. It appears that especially early works based on Aratus often provide a better match. That does not yet mean that the Voynich system is directly based on Aratus, though – we must identify more constellations first.
- It is possible that vertical water flows represent the Ecliptic rather than just any celestial Circle. The Ecliptic is the line described by the Sun, and it stands at an angle compared to the celestial Equator, Tropics and Poles. Note how in the example below, the constellation of Gemini is divided by three parallel lines (i.e. the ecliptic) on the Farnese Atlas, and three (!) lines in the Voynich. In that case, interruptions in the line might indicate the segments of the Zodiac, though at this stage it is too early to fully confirm this.
- I have counted the figures on these five pages that might represent a constellation, and, depending on what counts and what doesn't, the count ranges from 43 to 54. So the number of constellations that are depicted will lie somewhere between those values. The numbers of constellations listed by early Greek astronomers all lie within this range, with the 48 listed by Ptolemy as the most famous example. Hence, purely based on the number of nymphs (and animals), it is possible that every classical constellation is depicted here. Of course there can be deviations. The Farnese Atlas, for example, omits a handful of constellations that are mentioned in its most likely sources, and adds one unknown rectangular structure. There's also the matter of the wandering stars, which might account for up to five nymphs.
So, let's get down to business. First, here's an overview of the ancient constellations as listed by Ptolemy, including their original Greek names. In green are those constellations we have identified in the previous posts. Items marked in blue are ones which I still consider uncertain. Those may get shuffled around in the end, if a better candidate turns up. If there is one thing one learns quickly when studying the Voynich, it is that things are always less certain than they appear.
Ursa Minor, Ἄρκτος μικρά (Arktos Mikra) Ursa Major, Ἄρκτος μεγάλη (Arktos Megale) Draco, Δράκων (Drakon) Virgo, Παρθένος (Parthenos) Libra, Χηλαί (Chelae) Corona Borealis, Στέφανος (Stephanos) Sagittarius, Τοξότης (Toxotes) Hercules, Ἐνγόνασι (Engonasi) Capricornus, Αἰγόκερως (Aigokeros) Aquarius, Ὑδροχόος (Hydrochoös) Cygnus, Ὄρνις (Ornis) Pisces, Ἰχθύες (Ichthyes) Auriga, Ἡνίοχος (Heniochos) Eridanus, Ποταμός (Potamos) Lepus, Λαγωός (Lagoös) Serpens, Ὄφις (Ophis) Canis Major, Κύων (Kyon) Sagitta, Ὀιστός (Oistos) Canis Minor, Προκύων (Prokyon) Aquila, Ἀετός (Aetos) Delphinus, Δελφίν (Delphin) Hydra, Ὕδρος (Hydros) Equuleus, Ἵππου προτομή (Hippou protome) Pegasus, Ἵππος (Hippos) Corvus, Κόραξ (Korax) Triangulum, Τρίγωνον (Trigonon) Lupus, Θηρίον (Therion) Aries, Κριός (Krios) Ara, Θυμιατήριον (Thymiaterion) Corona Australis, Στέφανος νότιος (Stephanos notios) Gemini, Δίδυμοι (Didymoi) Piscis Austrinus, Ἰχθύς νότιος (Ichthys notios) As you can see, we still have quite a way to go, so let’s add some more color to the list. First, a rather certain identification, the constellation of Ara, the altar. The mythology around it is not as expansive as is the case with some other constellations. In most cases it is an altar, upon which offers are burned. Depictions of Ara vary, ranging from a relatively simple altar to a flame-bearing tower, to some kind of chapel. I believe it is the tower type which is alluded to in the Voynich. These Altars are taken from early revised versions of Aratus‘ work. We also see the tower mentioned in the description by Manilius: Ara ferens turris, stellis imitantibus ignem – A tower bearing the Altar, stars imitating the fire. This fire or its smoke was the Milky Way. In the Voynich most constellations are portrayed by nymphs, who simultaneously play a part in a mythological scene. The nymph in the middle above is king Tereus’ son, about to be killed by Procne (his mother) and Philomela (his aunt), as I discussed in this post. They will cook him and serve his flesh to the king to get their revenge. There lies the first mnemonic link between myth and constellation: in both cases there is a sacrifice, and in both cases it is put on the fire. Ovid describes the scene in much the same way as one would describe the slaughter of an animal. “Philomela opened his throat with the knife. While the limbs were still warm, and retained some life, they tore them to pieces. Part bubble in bronze cauldrons, part hiss on the spit: and the distant rooms drip with grease” – Ovid’s Metamorphoses. However, there is a second mythological link between this scene from the story of Philomela and the Ara constellation. This is where the original creator of these images really shines as a master of synthesis. First of all, remember that this scene is about a mother cooking her son to feed him to his father. The Ara constellation, being an altar, was also associated with the goddess Vesta (Gr. Estia), to the extent that Vesta was one of the constellation’s titles (source). And apart from the goddess of sacrifice, Vesta was also the goddess of fire and the hearth. “Every hearth had its Vesta, and she presided over the preparation of meals“(source). The goddess closely associated with this constellation, was also the goddess of the family hearth, watching over the preparation of meals. The Altar coincides with a scene where a boy is about to be slaughtered, cooked and served. And that is how mnemonics work. And now for something completely different. 
We will move on to the most exciting constellation of all: the Triangle. Behold its glorious depiction in the Paris Aratea (Nouv. ac. lat. 1614): Unfortunately, this constellation was omitted from a number of sources, including the Farnese Atlas, despite its ancient origin. Manilius describes it as follows: "There follows, with two equal sides parted by one unequal, a sign seen flashing with three stars and named Deltoton, called after its likeness". [Manilius, Astronomica, 1st century A.D., p. 33] The ancient Greeks called the constellation Deltoton, like the capital letter, D, delta (Δ), in the Greek alphabet. It was associated with the fertile delta of the river Nile since ancient times. Herodotus (5thC BCE) called it "the gift of the Nile" (source). And that's about all we can say about it: three stars, represented either with three equal sides or one side of different length, and an ancient association with the Nile Delta. So here's the Voynich Triangulum/Deltoton. I included some of the area below it for context – it's located above the nymph I analyzed as the constellation Cetus, just like in real life. As you can see, there is a unique pipe formation here, with three smaller pipes gathered around the main one for no apparent reason. Their openings represent the three stars of Triangulum, and they form the shape of a triangle. It wouldn't be the Master of Synthesis at work here if there wasn't an extra hint added, though. Take a look at the bottom of the picture, where the water falls from the "pipe" into the green water. This is not the typical "three parallel lines" depiction of the water flow that might represent the Ecliptic, like we saw in the Gemini picture. Remember the association with the Nile delta, and now look again at the flow beneath the Triangle. Note how the lines branch out in the shape of a… Delta. Also know that maps were often drawn with North and South reversed before "North=up" became the standard. So if we adjust the image to what we are used to, it would look like this: That's a decent Nile Delta, for a mnemonic at least. Given the fact that the lore originally surrounding Deltoton was relatively limited, one cannot help but marvel at the artists' creativity.
Early sociological studies were thought to be similar to the natural sciences due to their use of empiricism and the scientific method.

Contrast positivist sociology with "verstehen"-oriented sociological approaches:
- Early sociological approaches were primarily positivist—they treated sensory data as the sole source of authentic knowledge, and they tried to predict human behavior.
- Max Weber and Wilhelm Dilthey introduced the idea of verstehen, which is an attempt to understand and interpret meanings behind social behavior.
- The difference between positivism and verstehen has often been understood as the difference between quantitative and qualitative sociology.
- Quantitative sociology seeks to answer a question using numerical analysis of patterns, while qualitative sociology seeks to arrive at a deeper understanding based on how people talk about and interpret their actions.

- positivism: A doctrine that states that the only authentic knowledge is scientific knowledge, and that such knowledge can only come from positive affirmation of theories through strict scientific method, refusing every form of metaphysics.
- Verstehen: A systematic interpretive process of understanding the meaning of action from the actor's point of view; in the context of German philosophy and social sciences in general, the special sense of "interpretive or participatory examination" of social phenomena.
- empirical: Pertaining to, derived from, or testable by observations made using the physical senses or using instruments which extend the senses.

Early sociological studies considered the field of sociology to be similar to the natural sciences, like physics or biology. As a result, many researchers argued that the methodology used in the natural sciences was perfectly suited for use in the social sciences. The effect of employing the scientific method and stressing empiricism was the distinction of sociology from theology, philosophy, and metaphysics. This also resulted in sociology being recognized as an empirical science.

Positivism and Verstehen

This early sociological approach, supported by Auguste Comte, led to positivism, the idea that data derived from sensory experience, together with logical and mathematical treatments of such data, are the exclusive source of all authentic knowledge. The goal of positivism, like that of the natural sciences, is prediction. But in the case of sociology, positivism's goal is prediction of human behavior, which is a complicated proposition.

The goal of predicting human behavior was quickly realized to be a bit lofty. Scientists like Wilhelm Dilthey and Heinrich Rickert argued that the natural world differs from the social world; human society has culture, unlike the societies of most other animals. The behavior of ants and wolves, for example, is primarily based on genetic instructions and is not passed from generation to generation through socialization. As a result, an additional goal was proposed for sociology. Max Weber and Wilhelm Dilthey introduced the concept of verstehen. The goal of verstehen is less to predict behavior than it is to understand behavior. Weber said that he was after meaningful social action, not simply statistical or mathematical knowledge about society. Arriving at a verstehen-like understanding of society thus involves not only quantitative approaches, but more interpretive, qualitative approaches.
The inability of sociology and other social sciences to perfectly predict the behavior of humans or to fully comprehend a different culture has led to the social sciences being labeled "soft sciences." While some might consider this label derogatory, in a sense it can be seen as an admission of the remarkable complexity of humans as social animals. Any animal as complex as humans is bound to be difficult to fully comprehend. Humans, human society, and human culture are all constantly changing, which means the social sciences will constantly be works in progress.

Quantitative and Qualitative Sociology

The contrast between positivist sociology and the verstehen approach has been reformulated in modern sociology as a distinction between quantitative and qualitative methodological approaches. Quantitative sociology is generally a numerical approach to understanding human behavior. Surveys with large numbers of participants are aggregated into data sets and analyzed using statistics, allowing researchers to discern patterns in human behavior. Qualitative sociology generally opts for depth over breadth. The qualitative approach uses in-depth interviews, focus groups, or the analysis of content sources (books, magazines, journals, TV shows, etc.) as data sources. These sources are then analyzed systematically to discern patterns and to arrive at a better understanding of human behavior.

Drawing a hard and fast distinction between quantitative and qualitative sociology is a bit misleading, however. Both share a similar approach in that the first step in all sciences is the development of a theory and the generation of testable hypotheses. While there are some individuals who begin analyzing data without a theoretical orientation to guide their analysis, most begin with a theoretical idea or question and gather data to test that theory. The second step is the collection of data, and this is really where the two approaches differ. Quantitative sociology focuses on numerical representations of the research subjects, while qualitative sociology focuses on the ideas found within the discourse and rhetoric of the research subjects.

Max Weber: Max Weber and Wilhelm Dilthey introduced verstehen—understanding behaviors—as a goal of sociology.
Nikhil Pal Singh. Examines special topics in American history.

How Race Matters: Racial Norms in US Culture. Is US history best seen as a story of progress and ever-increasing tolerance and inclusion, or a highly conflictual story about confronting and reconsolidating racial dominance? How have legacies of slavery, migration, Indian removal and imperial power defined and redefined US national identity? How do forms of racial exclusion impact upon, or intersect with, other forms of socially significant identity such as class, gender and sexuality? Have the struggles to include formerly excluded peoples within the nation, fought by abolitionists, civil rights activists, and modern progressives and liberals, enabled the nation to effectively transcend its racial past? These are some of the major questions we will address in this broad survey of the ideas and practices that have informed US "racial" history from the American Revolution to the contemporary, post-civil rights era. THIS COURSE FULFILLS THE REQUIREMENTS OF THE UNIVERSITY OF WASHINGTON DIVERSITY MINOR INITIATIVE.
The sago palm (Cycas revoluta) is a popular pick of residential gardeners and commercial landscapers in Solano County. Drought tolerant, it survives temperature extremes from 15 to 110 degrees and grows in sun or shade. With dark, semi-glossy green leaves and a shaggy trunk, this plant grows slowly, reaching a height of only 20 feet in 50 to 100 years. But few people realize that the sago palm belongs to the cycads, among the most primitive of seed-bearing plants, and is a relative of the now-extinct cycadeoids.

More than 150 million years ago, during the Mesozoic era, cycads stood their ground against the dinosaurs. Providing plentiful food for the humongous herbivorous Stegosaurus, these plants flourished in abundance during the Jurassic period, when ancient oceans distributed more of the sun's heat from the equatorial regions to the polar areas, resulting in a more uniform world climate without high-latitude glaciers. (Source: www.fossilnews.com/1996/livingfossils.html) Geologists the world over have discovered petrified cycads next to dinosaur bones. In 1994, juvenile remains of a Stegosaurus were found in Wyoming measuring 15 feet long and 7 feet high, with a weight estimate when alive of 2.6 tons. A large fossil forest of cycads once existed in the Black Hills of South Dakota near Minnekahta and was designated "Fossil Cycad National Monument" before the find was depleted by vandals and collectors and the area closed.

Here are some fascinating facts about cycads and sagos: Cycads are called "living fossils" because they have remained unchanged through millions of years. Their habitat varies from tropical to temperate and subtropical regions and includes dense forests and semidesert areas on several continents. Many of these cycads are endangered, protected by global import and export regulations. True cycads comprise approximately 185 species in three families — Cycadaceae, Stangeriaceae and Zamiaceae. The sago palm's genus, Cycas, is the only genus recognized in the Cycadaceae family. The reason cycads and sagos are often confused with tree ferns or palm trees is that the whorl of leaves without side branches perched atop the plant's central trunk resembles a fern or palm. Yet the cone-bearing cycad is not related to the spore-bearing fern or the flower-producing palm. This gymnosperm's closest cousins are conifers (firs, pines and spruce trees), along with the Chinese ginkgo, though research suggests a cycad's exact relationship to these other living gymnosperms remains unclear. Gymnosperm means "naked seed"; these plants produce no fruit and no true flower. Cycad pollination is via insects, small animals or wind. Seed sizes vary from as small as a pea to as large as a goose egg. Cycads generally form seed cones, whereas the male sago palm forms a pollen cone. The female sago palm does not form a seed cone but instead megasporophylls, a group of leaf-like structures containing seeds. Pollination occurs through air movement. The stiff, narrow, feather-like leaves of sago palms are often used in funeral wreaths or floral arrangements; they remain green long after cutting. The Latin name revoluta means "curled back," a specific reference to the leaves, which grow outward in a circular pattern with new leaves emerging all at once periodically. Italy's Etruscan civilization placed fossilized cycad trunks atop their tombs as funeral monuments. In the United States, coal miners who found fossilized trunks took them home for doorstops.
This summer, if you’re looking for historically fascinating, drought-tolerant, long-lived plants for your garden, consider sago palms. Five years ago I purchased one in a quart-sized pot for $10 at a local big box store and the plant is now over 4 feet tall and almost 4 feet wide. In no time at all, I was imagining the perilous life of the ancient cycads in the real “Jurassic Park.” Launa Herrmann is a Master Gardener with the University of California Cooperative Extension office in Fairfield. If you have gardening questions, call the Master Gardener’s office at 784-1322.
Late Twentieth-Century Japan: An Introductory Essay by William M. Tsutsui, University of Kansas Until recently, Japan’s history since World War II was told as an inspiring fable of success. According to this story, Japan pulled itself out of the physical devastation, spiritual bankruptcy, and abject defeat of 1945 through wise leadership, hard work, and a partnership with the United States. It became one of the world’s richest, most stable, and most widely respected industrial democracies. Historians now realize that this narrative does not capture Japan’s postwar experience. What once seemed like a buoyant path of growth and recovery, termed a “miracle” by some patronizing Western observers, is now seen as a more nuanced story. Japan experienced downturns as well as booms, discord as well as consensus, and serious problems as well as elegant solutions. This short essay charts the history of Japan from the end of the war to the present day. It sketches the rapid and profound changes the Japanese people have enjoyed, endured, and embraced over the past 60 years. The Occupation of Japan (1945-1952) The occupation of Japan was called a joint allied operation. In fact, the United States dominated, as its forces had borne the brunt of the fighting in the Pacific theater during World War II. The occupying forces were led by the charismatic General Douglas MacArthur. MacArthur was styled the Supreme Commander for the Allied Powers (or SCAP, a term also applied to the occupation administration as a whole). The American occupiers wanted to remake Japan as a peaceful, democratic, and modern nation. They wanted to ensure it would never again threaten the world militarily. The occupiers approached their former foes not with bitterness or a thirst for revenge, but with compassion and generosity. The Japanese public, meanwhile, had suffered from extreme wartime deprivation as well as crippling allied air attacks. Exhausted, stunned, and defeated, Japan’s hungry masses showed no hostility to the Americans. Instead, they greeted the occupation and its reforms with composure, respect, and openness. Thus, victors and vanquished worked together in the remaking and rebuilding of Japan. The occupation is often compared to the Meiji Period (1868-1912), which was a time of rapid modernization. Both are seen as watersheds in Japanese history. U.S. forces entered Japan with grand visions of change. They sought to root out militarism. They also wanted to build democratic institutions and tutor the Japanese in the superiority of the American way of life. The occupation agenda was often naïve, heavy-handed, and condescending. Democratization was equated with Americanization. Japan was perceived (as MacArthur himself once put it) as a nation of 12-year-olds. Scholars have debated the extent of the change actually accomplished by the occupiers. Many have noted that SCAP’s fervor for reform cooled after only a couple years. They further note that many proposals for change were successfully resisted by Japanese authorities. Nevertheless, the occupation had a formative impact on postwar Japanese society, the conduct of political and economic life, and Japan’s place in the Cold War world. Demilitarization was the occupiers’ first priority. It was accomplished quickly. Imperial army and navy units were disarmed. The far-flung Japanese empire was dismantled. People important to the war effort, especially military officers, were barred from positions of public responsibility. 
The top leaders of the wartime Japanese state, including former prime minister Tōjō Hideki, were tried by an international military tribunal. Seven of those convicted were executed. Significantly, the Shōwa Emperor (Hirohito) was not charged. American officials opted to use the monarch as a tool to influence public opinion rather than punishing him as a part of the wartime regime. Democratization was the occupation’s other major goal. This goal was far more complicated and difficult to accomplish. Political reform was considered essential to Japan’s recasting as a peaceful member of the community of nations. After a Japanese commission failed to produce a new national constitution sufficiently progressive for the occupation, SCAP staff wrote a new draft in just a week’s time. This new draft was presented to the Japanese government for translation and enactment. The Japanese had no choice but to comply. The new constitution was promulgated on November 3, 1946. Many scholars have noted the irony of SCAP installing democratic political institutions in Japan through authoritarian means. The Japanese, it has been said, were “forced to be free.” Still, the Japanese people embraced the “MacArthur constitution” (as it is often known). It has endured (with not a single amendment) as a sound basis for Japan’s postwar democracy. Unlike the old Meiji constitution of 1889, the new document gave sovereignty to the Japanese people, not the emperor. The imperial institution was limited to a symbolic role. Shintō, which had been used to support wartime mobilization, was disestablished as a state religion. Both houses of the Diet, Japan’s parliament, were to be elected democratically. The prime minister and cabinet were to be selected on the standard British parliamentary model. The 1946 constitution, considered by some to be even more progressive than the U.S. constitution, guaranteed the Japanese people a wide range of civil rights. These rights included freedoms of assembly, “thought and conscience,” and the press. The equality of women was established explicitly. Indeed, in the April 1946 general election, the first in Japan to allow women the vote, 39 women were elected to the lower house of the Diet. That total has not been exceeded since. The occupation tried to extend reform deeply into Japanese society and the economy. Land reform, one of SCAP’s most popular and successful efforts, broke the hold of large landlords. Tenant farmers could purchase the property they worked. SCAP believed organized labor was a necessary counterweight to business and a fundamental part of a healthy democracy. They therefore encouraged unionization in industry. The educational system and police force were also reformed. Structures were decentralized, made more responsive to local communities, and recast on American models. An ambitious antitrust program was also launched. The occupiers feared that the zaibatsu, Japan’s financial and industrial conglomerates, would inhibit competition and stifle new business. Thus, sweeping antitrust legislation was passed. SCAP laid plans for the break-up of several hundred large corporations. The reformist flurry of the early occupation proved short-lived, however. SCAP’s sweeping programs threatened many influential groups in Japan. They were also worrisome to many U.S. policymakers. Japanese corporate interests complained that occupation policy was crippling the nation’s economic recovery. 
Planners in the State Department and Pentagon grew concerned that aggressive reform might weaken Japan socially and politically. They believed it needed to be built up as a stable Cold War ally in Asia. Wall Street bankers and Congress, anxious that Japan might become a long-term financial drain on America, argued for a quick end to SCAP’s experiments. In 1947, General MacArthur canceled a general strike organized by militant unions. This act marked the beginning of the occupation’s moderation of its reformist agenda. It even rolled back some of its high-profile policies. The change in policy became known as the “reverse course.” Democratization took a back seat to stabilization, economic recovery, and rehabilitation of Japan as America’s dependable partner in East Asia. Antitrust programs were muted. Occupation support for labor evaporated. The purge, once reserved for wartime leaders, was used to weaken left-wing groups and radical unions. The occupation also began to pressure the Japanese government to begin remilitarizing. Article 9 of the 1946 constitution rejected any military capability for Japan. But American planners were soon eager for the Japanese to take up arms in the Cold War defense of the Free World. Although pacifist feelings in Japan ran deep after Hiroshima and Nagasaki, the first step toward remilitarization was taken not long after the Korean War began in June 1950. That first step was creation of a 75,000-man National Police Reserve. The Reserve was well supplied with American arms. The popular image of the occupation, at least in the United States, has generally been very positive. In that view, MacArthur and his forces benevolently led the transformation of a former enemy into a modern, peaceful democracy. Skeptics argue that the role of the occupiers has been exaggerated. They say that SCAP’s reforms were only successful because they built on existing trends in Japan. Some in Japan (especially those on the political left) take a much more negative view. They assert that the occupation betrayed the Japanese people. The Americans promised thoroughgoing reform and true freedom. But SCAP compromised its ideals in the “reverse course.” According to this view, the occupation bolstered the conservative status quo in Japan. By not trying the emperor as a war criminal, by backtracking on labor and antitrust policy, and by working to rehabilitate Japan as a Cold War ally, the occupation confirmed existing power relationships in Japan. Whether or not one believes that the occupiers delivered on their promise of democratization, MacArthur and the SCAP staff clearly had a profound role in establishing the foundations of postwar Japan’s social stability, democratic political institutions, and dynamic capitalist economy. The High-Growth Era With the signing of the San Francisco Peace Treaty on September 8, 1951, Japan regained its sovereignty. The occupation came to an end. Still, a significant American presence remained in Japan. Hand-in-hand with the peace agreement went a new U.S.-Japan Security Treaty. This document allowed America to station troops in Japan. The stated purpose of the troops was to defend the islands. But, as many Japanese suspected, another purpose was to ensure internal stability. Some pundits tagged Japan’s status as “subordinate independence.” Japan rejoined the community of nations in 1952. Yet it remained reliant on America for its security. In a way, it was still occupied by the U.S. military. 
To the conservative elites in Japan, this was hardly a bad thing. Tying the nation’s fate to the United States had many potential benefits. Sheltering under the American nuclear umbrella during the Cold War allowed Japan to evade many of the costs and controversies of full-scale remilitarization. Japan did develop “Self Defense Forces.” This military establishment was vaguely named in order to tiptoe around Article 9. In addition, Japan hosted American operations during the Korean and Vietnam Wars. But MacArthur’s pacifist constitution spared Japan from active participation in these conflicts. In general, in the decades following the occupation, Japanese governments were content to maintain a low political and military profile internationally. They deferred to the United States in most matters of policy. More highly prioritized were economic development and the expansion of overseas trade. Domestically, the 1950s were a time of divided and contentious politics. The left-wing parties challenged the conservatives. The conservatives enjoyed covert support from the United States and a steady plurality in the Diet. In 1955, the two leading conservative parties merged, forming the Liberal Democratic Party (LDP). According to its detractors, this party was neither very liberal nor very democratic. The LDP would be a dominant electoral force for almost four decades. It held a majority in the lower house of the Diet and a lock on the prime ministership from 1955 until 1993. Critics complained that Japan had become a one-party state. They claimed that the rise of the Liberal Democrats stifled political debate and gave voters no real choice at the polls. To the majority of Japanese, however, the political stability the LDP offered was welcome. In addition, the economic benefits that LDP rule seemed to deliver were appealing. A number of factors contributed to the long-term success of the Liberal Democratic Party. First was an imbalance in the allocation of electoral districts for the lower house of the Diet. Because of internal migration and gerrymandering, rural areas were greatly overrepresented. Support for the LDP was strong in the rural areas. Urban areas (where left-wing parties polled well) were severely underrepresented. The LDP consolidated its base of power in the countryside by pursuing policies that helped farmers. For example, they blocked rice imports and thus kept agricultural prices high. They also targeted pork-barrel projects to small towns and villages. In addition, the LDP remained pragmatic and flexible ideologically. It was conservative in general outlook. Yet it never became doctrinaire like its left-wing rivals. Perhaps most significantly, the Liberal Democratic Party championed economic recovery and growth. This issue resonated with all Japanese in the wake of World War II. The LDP prioritized industrial and financial development. Through policies like Prime Minister Ikeda Hayato’s popular “Income Doubling Plan” of 1960, it also promoted the sharing of economic gains broadly among the Japanese people. The apparent success of LDP economic management inspired public confidence. It also created the widespread impression that the Liberal Democrats were the only party with the experience and qualifications necessary to govern the nation. Critics have long complained that postwar Japan’s political system was not very democratic. Single-party dominance was one issue. 
Beyond that, detractors claim that political decision-making has been shaped less by the democratic process than by backroom deals among unelected elites. Scholars have written of an “iron triangle” in Japan. This triangle is a loose coalition of three groups—LDP politicians, big business leaders, and central government bureaucrats. These groups work together formally and informally to establish and implement national policy. Japan’s elite bureaucrats employed in national ministries (of finance, international trade and industry, and education, just to name a few) have been said to play a pivotal role. These professional civil servants enjoyed substantial power and independence, especially in the postwar decades of high-speed growth. The bureaucracy was little altered by the occupation’s reforms. Its influence was retained (and even enhanced) after the war. Whether the power of state bureaucrats or the presence of an “iron triangle” of elites makes postwar Japan distinctively authoritarian or inauthentically democratic is debatable. Skeptics note that similar coalitions have also been common in the democracies of Western Europe and North America. From the end of the occupation to the early 1970s, one-party rule, bureaucratic elitism, and “iron triangle” governance seemed to bother few Japanese. The people seemed content with the status quo. The 1950s were, on the whole, a very good decade for Japan economically. The Korean War was an important catalyst. U.S. military purchasing pulled the Japanese economy out of its postwar funk and gave much-needed impetus to the manufacturing sector. Japan’s reentry into international trade proceeded smoothly, largely under American sponsorship. Many of the overseas markets lost during World War II were regained. Investment in new productive capacity and introduction of the latest industrial technology from the West proceeded briskly. By 1954, Japan had clawed its way back to prewar levels of economic activity. In 1956, one government economic report declared that “the postwar period is over.” In the latter half of the 1950s, Japanese national income grew at an average rate of 9.1 percent a year. By the 1960s, annual growth averaged well over 10 percent. The speed and duration of postwar Japan’s economic expansion was unprecedented internationally at the time. Many elements contributed to Japan’s high-growth economy. State “industrial policy” was charted largely within the Ministry of International Trade and Industry. It was carried out cooperatively with major corporations. This national policy provided a strategic plan and central guidance for Japan’s economic rise. Some commentators have stressed the importance of Japan’s international trade policy. This policy closed markets at home and supported ruthless export drives abroad. Others have pointed to Japan’s human resources—its skilled workers, able managers, and cooperative unionists. A few have accused the Japanese of getting a “free ride” to prosperity. They say Japan milked America for the latest technology and a comfortable spot under the U.S. nuclear umbrella during the Cold War. In recent years, however, many scholars have acknowledged what may have been the most important factor in Japan’s economic boom. While the Japanese are usually depicted as the world’s greatest savers, they have also proven to be some of the world’s foremost spenders. This was never truer than in the decades after World War II. 
In those years, Japan’s consumers, apparently compensating for the hardships of the war years, bought at unprecedented levels. The rise of consumption and soaring standard of living were captured well by a series of catchy slogans, made popular in the Japanese media. These slogans revolved around consumer desire and the intense social pressure in middle-class Japan to “keep up with the Tanakas.” In the late 1950s, the acquisitive dream of the average Japanese family was the “three S’s”: senpūki, sentaku, suihanki (electric fan, washing machine, and rice cooker). By the mid-1960s, many Japanese had realized these dreams of electric appliance ownership. Expectations had to be redefined. Hence the “three C’s”: kā, kūrā, karā terebi (car, air conditioner, and color television) became the goal. By the 1970s, only the “three J’s” would suffice: jūeru, jetto, jūtaku (jewelry, overseas vacations, and a house of one’s own). Japan’s economy made great strides in the two decades following the occupation. Domestic consumers, in many ways, both instigated and benefited from Japan’s growth. Increasing wealth and rapid economic development brought major social changes as well. Japan, an agrarian society through World War II, urbanized in the high-growth era. In 1950, only one-third of the population lived in cities. By 1975, over 75 percent did. Japan’s increasing postwar wealth was distributed remarkably evenly. Thanks to progressive tax policies and government programs to keep rural incomes rising as steadily as urban ones, income distribution was relatively egalitarian. The vast majority of Japanese (more than 95 percent in some surveys) considered themselves “middle class.” Medical care and public health standards improved rapidly after the war. Average life expectancy increased steadily, eventually becoming the highest in the world. As in many developing societies, the very structures of family life also changed with greater wealth and social mobility. The large, multi-generational household of the past increasingly gave way to a nuclear family with a breadwinner father, a stay-at-home mother, and one or two children. By the 1970s, social scientists had begun to comment on the unusual stability and order of Japanese society in a time of sweeping change. Much of the credit for this resilience went to the core institutions of Japanese society. The family, for instance, was hailed as a model of strength. Divorce rates in Japan were among the world’s lowest. The educational system was widely praised for demanding high levels of literacy and numeracy from all students. Discipline was the rule both in the schools and in society at large. Juvenile delinquency and overall crime rates were extremely low. The police were renowned for their efficient, community-based methods. Some protests did flare during the high-growth era. People rioted against the renewal of the U.S.-Japan Security Treaty in 1960. That same year, the violent Miike coal mine strike occurred. University students demonstrated against the Vietnam War. In general, however, the broad public consensus on economic growth as the overriding national goal and personal advancement as the principal individual objective kept social discord to a minimum. The stereotype of Japanese society as safe, polite, orderly, middle class, well educated, healthy, and still traditional despite the rapid modernization took shape and began to be embraced globally in the optimistic postwar decades of high-speed growth. 
From the Oil Shock through the Bubble Economy Pride ran high in Japan when the nation’s postwar achievements, from the rebuilding of war-scarred cities to technological marvels like the Shinkansen bullet train, were showcased at the 1964 Tokyo Olympics and Expo ’70, held in Osaka. But Japan’s 20-year run of economic expansion came to an abrupt halt in the early 1970s. The OPEC oil embargo of 1973-74, known in Japan as the “oil shock,” brought the high-flying but resource-poor Japanese economy back to earth. Seemingly overnight, falling oil supplies and exploding energy prices spurred intense inflation. Economic growth ended. Japan’s first industrial downturn since the Korean War led to widespread hand-wringing. The nation felt a heightened sense of its own vulnerability. As it turned out, Japan’s economic recovery from the oil shocks was rapid and strong. The engine of Japanese resurgence was exports. The destination of most of the automobiles, VCRs, and Sony Walkmen that revived Japanese industry was the United States. U.S. consumers, also reeling from the oil crisis, clamored for fuel-efficient Japanese cars. Buyers worldwide came to appreciate the high quality, sophisticated design, and good prices of Japanese electronic goods. In 1974, Japanese-U.S. trade was more-or-less in balance. By 1976, America’s trade deficit with Japan was about $4 billion. By 1978, the deficit was $10 billion. By 1985, it was more than $40 billion. The annual growth rate of Japan’s national income slowed in the late 1970s, yet hovered consistently around 5 percent, a more than respectable figure. Japan’s economic rebound thus proceeded briskly. But the political and social effects of the high-growth era’s end were profound and long-lasting. Many new concerns, interests, and agendas rose to the surface in 1970s Japan. Mass protests and new social movements became common. Many protests expressed outrage at the long-overlooked costs of Japan’s high-speed growth and the government’s feeble response to mounting social and environmental problems. Young people protested pollution and corporate irresponsibility. Urbanites frustrated by poor housing conditions and public infrastructure also spoke out, as did disgruntled farmers displaced from their land for the construction of Narita Airport. All challenged the establishment. In many cases, they won grudging concessions from the government and business. The oil shock and this rising chorus of discontent also took the shine off LDP political rule. The party faced declining electoral results from the 1960s. Still, it managed to cling to power. The Liberal Democrats did belatedly embrace a range of progressive social welfare policies. But many Japanese continued to see them as unresponsive, out of touch, and corrupt. Even Japan’s relationship with the United States, a touchstone of international relations in Asia and the Japanese postwar political order, grew strained. The so-called Nixon shocks (the floating of the dollar on global currency markets in 1971 and the opening of diplomatic relations with China in 1972) caught Tokyo off-guard. Japan’s export successes later in the decade prompted intense pressure from Washington to open Japanese markets to more U.S. goods. After the tensions of the 1970s, the 1980s were exhilarating times in Japan. As the Japanese economy surged forward, especially after 1985, the nation seemed headed toward global economic dominance. The Japanese, it seemed, were the world’s wealthiest, best educated, and longest-lived people. 
Many commentators heralded the end of the pax Americana. They foresaw the start of the “Pacific Century,” with Japan in the lead. As the Berlin Wall fell and the former superpowers took stock of decades of military spending, pundits declared that Japan had, in fact, won the Cold War. Enriched by a stock market and real estate boom at home, Japan’s corporations and financial titans went on a buying spree abroad. Japanese companies and individuals paid $80 million for a van Gogh and $850 million for Rockefeller Center. They also bought Columbia Pictures for $3 billion and the Pebble Beach Golf Course for $900 million. Japan’s banks were the largest in the world. Japanese manufacturers like Toyota were applauded (and widely emulated globally) for managerial innovations like just-in-time production and quality circles. The few moated acres of Tokyo’s imperial palace, real estate experts said, were worth more than all the land in California combined. The late 1980s were a dazzling and exuberant moment in Japanese history. To some, the affluence of the time led to excess. Critics bemoaned the conspicuous consumption and luxurious lifestyles of the urban elite. They pointed out the corrosive effects such wealth was having on Japanese youth. Social polarization also became an issue for the first time since the end of the war. Fortunes made overnight on the stock market or in real estate speculation meant that Japan was no longer the relatively egalitarian, middle-class society of the high-growth era. Japan’s stature on the world stage seemed to rise as quickly as the skyscrapers being built in Tokyo. Japan was criticized during the First Gulf War of 1990-91 for its “checkbook diplomacy,” contributing money rather than troops. But Japan’s economic might and its generosity with aid funds in the developing world earned it increasing global clout. This unaccustomed international influence and the nation’s mounting wealth seemed to go to the heads of some Japanese commentators. In widely read books like The Japan that Can Say No (1989), by Sony founder Morita Akio and conservative politician Ishihara Shintarō, opinion-leaders celebrated Japan’s cultural heritage, championed a foreign policy independent of the United States, and stoked nationalist sentiments. Morita and Ishihara were not alone in encouraging the Japanese to flaunt their nation’s success. Japan, it seemed, was on top of the world. As would only later become apparent, the prosperity of those times was built on the shakiest of financial foundations. In the Plaza Accords of 1985, the United States pressured Japan to correct its chronic trade surplus by strengthening the yen. In the wake of that agreement, the Bank of Japan pursued an expansionary monetary policy. This policy led to a speculative boom in real estate and equities, which gave rise to fierce competition in the banking sector. That competition, in turn, fueled reckless lending practices. In short, the Japanese boom of the late 1980s was little more than a financial house of cards or, as it has since come to be known, a “bubble economy.” When the bust came, and Japan’s bubble popped, the impact on Japanese politics, society, and culture, not to mention its economy, was little short of devastating. The Lost Decade and Millennial Japan Hirohito, Japan’s Shōwa emperor, died on January 7, 1989. His death set off a wave of remembrance and reflection in the Japanese media. Hirohito first assumed the throne in 1926. 
Thus, he had overseen Japan’s rise as an imperial power, its defeat in 1945, the occupation and ongoing domination by a foreign power, and Japan’s economic resurgence. In the emperor’s last years, Japan enjoyed unprecedented wealth and international esteem. Hirohito did not live to see his country humbled once again. Starting in 1990, Japan’s overheated economy, as well as the fabric of Japanese political and social life, rapidly began to unravel and fail. The inevitable collapse of the “bubble economy” was sudden and stunning. On the Tokyo Stock Exchange, the Nikkei index had soared to 39,000 in the last heady days of the 1980s. By 1991, it had withered to 14,000. It bottomed out at 8,000 just over a decade later. Real estate prices traced a similar path. In the early 1990s, over the span of just 30 months, Japanese investors and landowners saw the value of their assets shrink by $2.5 trillion. Commentators described the 1990s as Japan’s age of “vanishing wealth.” The crisis, which began in the financial and real estate markets, quickly sent shock waves through the entire economy. The growth rate fell sharply. It dropped from 3.1 percent in 1991 to 0.4 percent in 1992 and 0.2 percent in 1993. In 1998 and 2001, Japan actually experienced negative growth. Corporations retrenched, pruning expenses, shedding workers, and moving high-cost production overseas. (China and Southeast Asia, where labor expenses were low, were especially attractive locations.) The ranks of unemployed workers swelled, something unheard of in Japan since the tough days of the occupation. The official unemployment rate topped 5.5 percent in 2003. Economists estimated that the actual rate was closer to 9 percent. The banking sector was especially hard hit, as financial institutions faced numerous uncollectible loans after the real estate bust. The 1990s witnessed a series of bank failures, reorganizations, and mergers. The slump of the 1990s was tagged the “Great Recession” and the “Lost Decade.” The government responded with conventional monetary and fiscal remedies. Interest rates were slashed in the hopes of encouraging consumption and investment. There was little response from either individuals or corporations. At the same time, the state pumped money into the economy. For example, the government ramped up public works spending with the construction of hundreds of new bridges, dams, and highways. This construction was popular with the public but often unnecessary and environmentally unfriendly. Such fiscal stimulus did have positive short-term effects. It did not, however, jump-start the economy. Japan was left with one of the highest national debt burdens in the world. As finance and industry remained moribund and government fixes fell short, public faith in public institutions wavered. Much blame fell on the elite bureaucrats of Tokyo. Given much of the credit for the economic “miracle,” they now seemed woefully unprepared for Japan’s mounting problems. The LDP, racked by scandal, internally fragmented, and vacillating on recovery policy, finally imploded in 1993. It lost the prime ministership for the first time since 1955, ushering in a period of political instability just when Japan needed a firm hand on the helm of state. Between 1989 and 2001, Japan had 11 different prime ministers. Though the LDP limped back into power after two-and-a-half years, the political terrain was profoundly altered. Many Japanese found it disturbingly volatile. The “Lost Decade” also brought social worries. 
Japan suddenly was beset by unfamiliar problems. Suicide rates, historically relatively high in Japan, soared in the 1990s. Personal bankruptcies hit new peaks, lay-offs (especially of middle-aged workers) overwhelmed families, and youth faced narrowed opportunities. Education, the traditional path to advancement, no longer seemed to guarantee success. Increasing numbers of young people grew alienated and cynical. Analysts predicted the collapse of the Japanese family as divorce rates increased, delinquency spiked, and the media reported disturbing stories of schoolyard murderers and teenaged prostitutes. 1995 was a particularly trying year. The doomsday cult Aum Shinrikyō released deadly sarin gas on the Tokyo subway, revealing the depths of discontent in Japanese society. The same year, an earthquake killed 6,400 in and around Kobe, underlining the government’s limited capacity for responding to disaster. Amid the crises of the 1990s, basic questions of national identity seemed distressingly fluid. What qualities define being Japanese? The answer was no longer clear. The fundamental truths of postwar Japanese life—the passion for economic growth, the trust in elected and bureaucratic elites, the faith in social institutions—appeared to crumble. Would Japan ever again be fired by a sense of national purpose or united by a sense of common identity? With the new millennium, Japan seemed poised to break free of its “Lost Decade” gloom. A series of market-oriented reforms were designed to shatter the rigid hierarchies of Japanese commerce, finance, and government regulation. These reforms finally seemed to bear fruit after 2003. The economy began to rebound. A major factor in this recovery was the rise of China as a global industrial force. Japan benefited from its huge neighbor’s surging economic growth. The LDP was able to reconsolidate much of its political dominance, thanks in large part to the leadership of Koizumi Jun’ichirō, prime minister from 2001 to 2006. The media continued to obsess about the decline of traditional values and the failings of contemporary youth. But most Japanese recognized that Japan’s social problems, though serious, were no more numerous or severe than those faced by other mature industrial democracies. Meanwhile, an emerging source of national pride was the global success of Japan’s popular culture exports. International audiences have enjoyed the products of the postwar Japanese entertainment industry since the first Godzilla film arrived in America in 1956. In the closing years of the twentieth century, however, Japanese forms like manga (comic books), anime (animation), and character goods (Hello Kitty) became full-fledged global phenomena. Consumers around the world discovered Japanese creations (from Iron Chef and sushi to Super Mario and Pokémon) to be imaginative and refreshing alternatives to America's globalized pop culture of Hollywood, Walt Disney, and the golden arches. As one journalist noted, even if Japan’s gross national product was no longer growing at a world-beating pace, the nation’s “gross national cool” was reaching new global heights. Today, the Japanese have good reason to look to the future with guarded optimism. Yet they face many issues of lasting significance. In Northeast Asia, China is ascending and North Korea is an unpredictable wild card. The changing balance of economic, political, and military power provides challenges for Japanese policy in the region and in its relationship with the United States. 
Rising nationalist sentiments in Japan, which have become increasingly mainstream over the past three decades, seem certain to cause ongoing friction with Japan’s Asian neighbors. They are also likely to worsen divisions within domestic society. The question of remilitarization (including revision of Article 9 of the constitution) continues to cause debate. With Japan’s birth rate very low and immigration negligible, the costs of a long-lived and rapidly aging population will fall on a shrinking workforce of young Japanese in the years ahead. This coming demographic crisis has focused attention on the role of women in Japanese society. Career options for women have increased in recent decades. While female participation in the Japanese workforce is comparable to rates in Western Europe and the United States, barriers to advancement in fields like business and politics remain high. Government policies are contradictory. The government promotes motherhood to boost the sagging birthrate while simultaneously encouraging women to stay in the dwindling work force. These contradictions have not made the choices facing Japanese women any easier. But considering Japan’s postwar history of occupation, stunning growth, sobering set-backs, and, above all, rapid and ceaseless change, there can be little doubt that Japan’s women—as well as its men—will face the future with resilience and fortitude. Copyright © 2008 Program for Teaching East Asia, University of Colorado. Permission is given to reproduce this essay for classroom use only. Other reproduction is prohibited without written permission from the Program for Teaching East Asia.
Confucianism is an ethical religion practiced within China. Its founder, Confucius (known in Chinese as Kong Qiu), was born in 551 BCE in what is now modern-day Qufu, China. During his lifetime Confucius witnessed the deterioration of traditional Chinese principles and sought to correct them through his philosophy. Others before him had tried to strengthen Chinese principles with their own philosophies; Confucius drew on these earlier philosophies and added them to his own, creating the teachings later collected in the Analects, which would form the basis for Confucianism, founded around 500 BCE. In its formative years Confucianism spread by hierarchical diffusion through the leaders of the Chinese empire, along with relocation diffusion after Confucius' students began to relocate across China, eventually spreading to parts of Korea and Japan.

There are five basic beliefs within Confucianism. One belief that is still upheld in modern times is the emphasis on manners, politeness, and education. Along with respect for one's elders, there is a teaching that one should worship ancestors in order to ascertain their guidance. The belief that humans are inherently good also plays a key role within the philosophy. Confucius placed all of his philosophies within five texts: "The Book of History," "The Book of Rites," "The Book of Changes," "The Book of Poetry," and the "Spring and Autumn Annals."

Symbols associated with Confucianism include the Chinese characters for water and for scholar; despite having no direct connection to Confucianism, the yin-yang symbol is also commonly associated with this set of philosophies. Confucianism is commonly practiced at home or within holy temples in the presence of burning incense. There are no specific holy or sacred places within Confucianism; many followers consider the human journey of life itself to be sacred. Some followers regard as sacred Mount Tai in Shandong Province (the region of Confucius' birth), certain temples and institutes built in commemoration of Confucius, and the family home.
Field Crickets, Gryllus sp.

Distribution and Hosts: Various species of field crickets occur over most of the United States and all of Oklahoma. We believe there are five species in eastern Oklahoma, and at least two of these occur in the western part of the state. Field crickets will feed on almost anything. They occasionally damage cultivated crops such as alfalfa, cotton, and strawberries. They can also damage vegetables and ornamentals when they are numerous.

Damage: Their major importance is as a nuisance pest when they come to lights in homes and urban areas during periods of high abundance. Their chirping or mere presence is a nuisance to some. Also, they will sometimes damage fabrics, especially if soiled, and may chew on wood, plastic, rubber, or leather goods. The most serious outbreak in recent years occurred in 1953, when large numbers of crickets invaded cities and towns in many parts of Oklahoma and surrounding states. One report stated, "During warm nights, the streets beneath bright lights were black with crickets, sides of buildings were completely covered with tremendous numbers of the pests, and some streets were hazardous for driving due to the slipperiness caused by the crushed crickets."

Life Cycle: The eggs are usually laid in the soil. The newly hatched nymphs burrow to the surface. They will molt 8 to 10 times over a period of 2 to 3 months before becoming adults. One species overwinters as nymphs, and the adults are present in the spring and early summer. Others overwinter as eggs, and adults are present during the summer and fall. In certain years, field crickets appear in very large numbers during August and September. These outbreaks seem to occur after periods of prolonged dry weather in the spring and early summer followed by rainfall in July and August. Extensive soil cracking may be an important factor. Good sites for egg deposition, an abundance of favorable food, vegetation for shelter, and a scarcity of parasites and predators may also be involved.

Description: Field crickets are black or dark brown insects about 1 inch long as adults. They have large hind legs (for jumping) and most have well-developed wings. Nymphs are similar but are smaller and lack wings. Both have long, slender antennae.

Control: Crickets commonly spend the daylight hours hiding in dark, damp areas. The elimination of piles of bricks, stones, wood, or other debris around the home will help reduce numbers. Weeds and dense vegetation around the foundations of homes are other good hiding places. Nearby trash dumps, which provide both food and shelter, should be cleaned out. Since crickets are attracted to lights, the elimination of light sources at night will reduce the numbers attracted to the home area. Measures such as caulking, weather stripping, and making sure all screens and doors are tight fitting will help reduce the numbers that can enter your home or business. Adult crickets can be difficult to control. Inside homes or buildings, ready-to-use sprays or aerosols applied to baseboards, door thresholds, and cracks and crevices where crickets hide will normally control them. Also, it is frequently helpful to spray outside around the foundation, in ornamental beds, the patio, the area surrounding stacked firewood, etc. The outside treatment will help prevent crickets from moving into a building. Please contact your local county extension office for current information.
The potentiometer is used to measure voltages by comparison to a known reference voltage. The L&N K-3 universal potentiometer was the top-of-the-line, research-grade instrument used in industrial and university labs for best-quality work. Prior to the development of digital electronics, potentiometers were the standard for the sensitive and precise voltage measurements required for measuring temperature, pH, conductivity, etc. The instrument is composed of a series of precision resistors which may be placed in series with a precision slide-wire resistor to create a resistance of known value. In use, the instrument is connected to known and unknown voltage sources, and the resistors are adjusted until the voltages are matched, as indicated by a null reading on a sensitive galvanometer. The instrument is also set up so that the known source may be readily calibrated against a reference voltage source (generally a Weston standard cell). The potentiometer made up the heart of a great number of instruments for measuring such quantities as pH, conductivity, and light intensity. Some contemporary/early descriptions of the potentiometer and its use are provided below. The instrument on display is in excellent, like-new condition. It is described in detail in the Cenco catalog description. The instrument is accompanied by an operating manual that is not on display, but may be viewed by clicking here.
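To make the comparison principle concrete, here is a minimal sketch of the arithmetic behind a null-balance measurement on a simple slide-wire potentiometer. The function name, the balance lengths, and the dry-cell example are illustrative assumptions rather than details of the K-3 itself; only the nominal Weston-cell EMF of about 1.0183 V is a standard figure.

```python
# Minimal sketch of the null-balance arithmetic behind a slide-wire potentiometer.
# Assumptions (not from the exhibit text): a uniform slide-wire carrying a constant
# working current, standardized against a Weston cell. All numbers are illustrative.

WESTON_CELL_EMF = 1.0183  # volts, nominal EMF of the reference (standard) cell


def unknown_emf(balance_len_standard_cm: float, balance_len_unknown_cm: float) -> float:
    """Return the unknown EMF inferred from the two null-balance lengths.

    With a constant working current I and wire resistance r per cm, the voltage
    dropped across a length L is I*r*L, so at the two null points:
        E_std = I*r*L_std   and   E_x = I*r*L_x
    Dividing the two equations eliminates I and r, which is the whole point of
    the comparison method: only the standard cell and a ratio of lengths remain.
    """
    return WESTON_CELL_EMF * balance_len_unknown_cm / balance_len_standard_cm


if __name__ == "__main__":
    # Example: the standard cell nulls at 33.94 cm and an unknown dry cell at 50.00 cm.
    print(f"Unknown EMF ~ {unknown_emf(33.94, 50.00):.4f} V")
```

Dial-type instruments such as the K-3 replace most of the continuous wire with calibrated resistance steps, but the comparison logic stays the same.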
This large, flat rock in Estabrook Park, with its two deep oval-shaped hollows, was said to have been used to grind corn by early Native Americans, Milwaukee's earliest settlers some 2,000 years ago. Once this was an important historic attraction for the park, but it now seems to have been forgotten.

In 1587, 110 settlers established the first English colony on Roanoke Island, in present-day North Carolina. Shortly after, they would all disappear, and their fate remained a mystery for centuries. Now, in the present day, Ben uncovers a startling secret: he and his family are descendants of the "lost" colony. To unlock the mystery, he will undertake a perilous journey through dark family secrets and American history itself. Reading Literature Strands: RL.9-10.1, 2, 3, 4, 5; RL.11-12.1, 2, 3, 4, 5, 6.

Before Jamestown and Plymouth, the English attempted to forge a colony at Roanoke. Within three years, it had disappeared, leaving a mysterious clue behind. What really happened to the Roanoke settlers?

Easter Island, also known as Rapa Nui, is remotely located about 2,000 miles from Tahiti. The original settlers of the island were Polynesians who migrated to the far-off land between 400 and 600 CE. They built many shrines and statues, called moai, from stones quarried throughout the island, including a volcanic crater site. Researchers still question exactly how the large stones were moved.

The 1607 site of the first permanent English settlement in North America has given up new secrets and an intriguing mystery, as the 400-year-old graves of four "high status" settlers have been found inside the remains of the colony's church in James Fort.

In 1690, French traders unexpectedly came across a mysterious settlement in southern Appalachia. They reported that the people there lived in log cabins and had unusual olive skin and facial features reminiscent of Europeans. Since they resembled the North African merchants that the French had done business with in Europe, they assumed they had stumbled on a colony of Moors...

Paiute tribes told of an ancient giant people, as tall as 12 feet, with red hair. These huge warriors were fierce and powerful. The Paiutes told settlers that when there were only a few of the giants left, they chased them into a cave, where they filled the opening with brush and burned it. At some point, an earthquake filled the opening of the cave. In 1911, the Lovelock Cave in Nevada was excavated. It contained two skeletons with red hair; the female was 6.5 feet tall and the male was 8 feet…
A radiant barrier is a surface that uses its reflectivity to reflect radiant heat, as opposed to ordinary surfaces, which absorb it. This effect reduces the overall heat gain and, correspondingly, reduces cooling expenses. Basically, a radiant barrier reflects the sun’s thermal energy away from a building, helping to negate its heating effect. Expanding on this concept: radiant barriers specialize in inhibiting heating by reducing heat transfer through thermal radiation. There are three main ways heat can be transferred: thermal radiation, convection, and conduction. Thermal radiation is electromagnetic energy emitted by matter because of the motion of its particles; a radiant barrier reflects this radiation instead of absorbing it. Conduction is heat transfer due to particle collisions, and convection is heat transfer due to fluid movement. By definition, a radiant barrier reflects thermal radiation; to mitigate the other two forms, insulation can be added behind the barrier. The result is a highly effective assembly, since it addresses all three major forms of heat transfer. To achieve this setup, a few guidelines should be followed. First, a radiant barrier should be installed facing outwards into an open air space. Then, a conventional insulation such as fiberglass can be fitted into place behind the barrier. Regarding materials for the barrier, the main requirement is high reflectivity, or equivalently low emissivity. An example that meets this condition is aluminum foil. The foil is usually attached to cardboard or some other backing to increase its durability and allow for easy installation. Particularly in hot, humid climates such as Houston’s, an insulated radiant barrier can be very successful in reducing heat transfer and, therefore, cooling costs. One of the premier companies capable of installing radiant barriers is Triple Seal Insulation. With installation available quickly, there is virtually no reason not to make use of radiant barriers.
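To put rough numbers on the effect, the sketch below applies the Stefan-Boltzmann relation to compare the radiant heat given off by an ordinary high-emissivity surface with that of a foil-faced, low-emissivity one. The emissivities, the temperatures, and the simplified single-surface treatment are illustrative assumptions, not measurements of any particular product.

```python
# Rough comparison of radiant heat transfer with and without a foil barrier.
# Emissivities, temperatures, and the one-surface simplification are assumptions.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiant_flux(emissivity, t_hot_c, t_cold_c):
    """Approximate net radiative flux (W/m^2) from a hot surface toward a cooler
    one, treating the hot surface's emissivity as the controlling factor."""
    t_hot, t_cold = t_hot_c + 273.15, t_cold_c + 273.15
    return emissivity * SIGMA * (t_hot**4 - t_cold**4)

# A 65 C roof deck radiating toward a 30 C attic floor:
plain_deck = radiant_flux(emissivity=0.90, t_hot_c=65, t_cold_c=30)  # ordinary sheathing
foil_faced = radiant_flux(emissivity=0.05, t_hot_c=65, t_cold_c=30)  # aluminum foil barrier

print(f"plain deck: {plain_deck:.0f} W/m^2, foil-faced: {foil_faced:.0f} W/m^2")
# The low-emissivity surface radiates only a few percent as much heat.
```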
Use "?" for one missing letter: pu?zle. Use "*" for any number of letters: p*zle. Or combine: cros?w*d Select number of letters in the word, enter letters you have, and find words! Labyrinthitis is usually caused by a virus, but it can also arise from bacterial infection, head injury, extreme stress, an allergy or as a reaction to a particular medication. Both bacterial and viral labyrinthitis can cause permanent hearing loss, although this is rare. — “Labyrinthitis - Wikipedia, the free encyclopedia”, en.wikipedia.org Otitis interna (Internal otitis) is an inflammation of the inner ear and is usually considered synonymous with labyrinthitis. Hearing loss rarely accompanies the vertigo in labyrinthitis. — “Otitis interna - Wikipedia, the free encyclopedia”, en.wikipedia.org Otitis interna or labyrinthitis involves the inner ear. The inner ear includes sensory Chorioretinitis, Blepharitis, Conjunctivitis, Iritis, Uveitis) · ear (Otitis, Labyrinthitis, Mastoiditis). — “Otitis - Wikipedia, the free encyclopedia”, en.wikipedia.org
Although many of the severe health consequences of STDs manifest themselves among adults, these complications usually result from infections acquired or health behaviors initiated during adolescence. By the twelfth grade, nearly 70 percent of adolescents have had sexual intercourse, and approximately one-quarter of all students have had sex with four or more partners. Therefore, a national strategy to prevent STDs needs to focus on adolescents. The committee believes that adolescents should be strongly encouraged to delay sexual intercourse until they are emotionally mature enough to take responsibility for this activity. However, most individuals will initiate sexual intercourse during adolescence, and they should have access to information and instruction regarding STDs (including HIV infection) and unintended pregnancy and methods for preventing them. Many school-based programs and mass media campaigns are effective in improving knowledge regarding STDs and in promoting healthy sexual behaviors, and these two interventions should be major components of an STD prevention strategy. The committee believes that there is strong scientific evidence in support of school-based programs for STD prevention, that adolescence is the critical period for adopting healthy behaviors, and that schools are one of the few venues available to reach adolescents. Given the high rates of sexual intercourse among adolescents and the significant barriers that hinder the ability of adolescents to purchase and use condoms, condoms should be available in schools as part of a comprehensive STD prevention program. There is no evidence that condom availability or school-based programs for sexuality or STD education promote sexual activity. STD-related clinical services for adolescents, including hepatitis B immunization, should be expanded through school and student health clinics, because adolescents are less likely than adults to have health insurance and they infrequently use regular health care facilities. Adolescents who are not enrolled in school also need access to clinical services. Because confidentiality is a major concern for adolescents, they should be able to consent to STD-related services without parental knowledge. With respect to the above issues, the committee makes the following recommendations:
Vision impairment, also known as visual impairment or vision loss, is a decreased ability to see to a degree that causes problems not fixable by usual means, such as glasses. Some also include those who have a decreased ability to see because they do not have access to glasses or contact lenses. Visual impairment is often defined as a best corrected visual acuity of worse than either 20/40 or 20/60. The term blindness is used for complete or nearly complete vision loss. Vision impairment may cause people difficulties with normal daily activities such as driving, reading, socialising, and walking. The most common causes of visual impairment globally are uncorrected refractive errors (43%), cataracts (33%), and glaucoma (2%). Refractive errors include near-sightedness, far-sightedness, presbyopia, and astigmatism. Cataracts are the most common cause of blindness. Other disorders that may cause visual problems include age-related macular degeneration, diabetic retinopathy, corneal clouding, childhood blindness, and a number of infections. Visual impairment can also be caused by problems in the brain due to stroke, prematurity, or trauma, among others. These cases are known as cortical visual impairment. Screening for vision problems in children may improve future vision and educational achievement. Screening adults may also be beneficial. Diagnosis is by an eye exam. Source: Wikipedia Vision loss in Australia Visual impairment and its causes are strongly related to age. Prevalence rates for both visual impairment and blindness are markedly greater among older age groups, as are rates of major sight-threatening eye conditions. With the ageing of the population, the number of older people with vision problems will increase over future decades if prevalence rates remain constant. Among Australians aged 40 and older in 2009, the major causes of vision impairment were age-related macular degeneration (AMD), cataracts, diabetic retinopathy, and glaucoma. The major cause of blindness is AMD. AMD affects around 500,000 Australians, of whom 100,000 have significant vision loss. In the absence of treatment and prevention efforts, the number of people with late-stage macular degeneration (vision loss) could double from 167,000 to 330,000 by the year 2030. In 2010, the total economic cost of vision loss associated with AMD was in excess of $5 billion. This includes health system costs, other costs to individuals and the community, and loss of wellbeing. For every $1 invested in the current treatment for wet AMD, there has been a $2 saving in social benefit costs. Vision loss - is it preventable? The World Health Organisation estimates that 80% of visual impairment is either preventable or curable with treatment. This includes cataracts, the infections river blindness and trachoma, glaucoma, diabetic retinopathy, uncorrected refractive errors, and some cases of childhood blindness. Many people with significant visual impairment benefit from vision rehabilitation, changes to their environment, and assistive devices. It is estimated that half of visual impairment and blindness can be prevented through early diagnosis and timely treatment. Despite cost-effective treatment and eye-preservation interventions, the number of potentially blinding eye diseases continues to escalate. Increased awareness can help - remind family members and friends at higher risk for eye diseases and vision loss to have their eyes examined regularly. Detecting signs of low vision Below are some signs of low vision.
Even when wearing your glasses or contact lenses, do you still have difficulty with: - Recognising the faces of family and friends? - Reading, cooking, sewing, or fixing things around the house? - Selecting and matching the colour of your clothes? - Seeing clearly with the lights on, or feeling like they are dimmer than normal? - Reading traffic signs or the names of stores? These could all be early warning signs of vision loss or eye disease. The sooner vision loss or eye disease is detected by an eye care professional, the greater your chances of keeping your remaining vision. Early detection can save your eyesight. Impact of vision loss Visual impairment can diminish the health and wellbeing of older people in many ways, for example by affecting their mobility and contributing to their risk of falls and injury. Their ability to perform everyday activities such as reading or watching television can be affected, as can their ability to drive and to interact socially. Visual impairment can significantly reduce quality of life and contribute to depression in older people. Preventing and treating visual impairment can increase the prospect of enjoying life as a healthy, productive older person. Depression rates amongst carers aged 65 years and older of someone with wet AMD are more than triple those in the general population, while carers aged under 70 report even higher rates of depression, with one in nine suffering from the condition. The facts on vision impairment and loss In Australia in 2009, the total cost of all vision loss, including uncorrected refractive error, was $16.6 billion, a cost of $16,360 per person aged over 40. Loss of wellbeing was responsible for 57 per cent of costs, with health system and productivity costs contributing 18 and 14 per cent respectively. Lack of national services Australia has no national services for people 65 and older who are blind or vision impaired. In spite of exciting advances in treatment such as bionic eye implants, nanosecond lasers and vision regeneration, older Australians continue to miss out on integrated models of care and prevention that could halt or reduce their vision loss. Older Australians with severe vision loss face significant out-of-pocket costs for specialists, allied health services and technologies. In contrast, the Hearing Services Program provides free testing and hearing aids for concession card holders, and Australians aged under 65 years with vision loss are supported through disability packages. Inconsistent access to support and services Inconsistent access to Home and Community Care (HACC) and Home Care services and technologies leaves many older Australians unable to access visual supports, in spite of significant improvements in daily activities and functions associated with their use. Although non-government organisations attempt to fill this void, most require co-contributions and are unable to meet all requests for assistance. Reading, vision and orientation technologies such as computer screen scanners, text readers, smart phones and tablets are expensive for people on fixed incomes, including pensioners. However, the use of such technologies delivers cost-effective benefits for independent living and reduces health co-morbidities. Subsidised assistive technologies are urgently required for people at risk of, and living with, severe vision loss.
When you're writing on religion, morality, philosophy or even history, citing the Bible can enhance your paper by providing a spiritual, cultural or historical perspective. Because you must choose one of the Bible's numerous versions, editions or publications, figuring out how to cite the Scriptures calls for a few more guidelines than citing a simple book or article. The Modern Language Association, or MLA, has established specific rules for referencing the Bible. In the Body of Your Paper Cite, in a parenthetical reference, the title of the specific edition or publication that you are using, such as the New Jerusalem Bible or the Thompson Chain-Reference Bible, followed by a period. Italicize this title. However, if you mention the Bible in the text as a work of literature, rather than citing a specific volume, don't italicize the word "Bible." Specify the book of the Bible you are referencing, and don't italicize it. Add a comma after the title of the Bible version and then write the name of the book. If you mention the book in the text of a sentence, write out the full name, but if you place the book in the parenthetical citation, abbreviate the book's title. For example, abbreviate "Revelation" to "Rev." and abbreviate 1 Chronicles to "1 Chron." inside the parentheses. Cite the chapter and verse after citing the book of the Bible. In MLA format, separate the chapter and verse with a period, not a colon. For example, to cite the sixth verse of the 34th chapter of Exodus, write "Exodus 34.6" in the text of a sentence or write "Exod. 34.6" in a parenthetical citation. In the Works Cited List Begin the entry with the title of the specific Bible -- for example, the New Jerusalem Bible or the Thompson Chain-Reference Bible -- and italicize it. Follow the title with a period. Cite the version's general editor, or the person who wrote the introduction and notes for that version or edition. Clarify what the person did for the volume. For example, write "Introd. and notes by John Doe," "Ed. Robert Smith" or "Jane Brown, gen. ed." Follow the entry with a period. Cite the publisher's location, publisher's name, and date of publication, as you would for a normal book in MLA style. First write the city where the Bible's publisher is located, then add a colon. Next write the publisher's exact and full name, followed by a comma. Then write the Bible's year of publication, followed by a period. Write the word "Print," capitalized and followed by a period. Specify the translation or version of the Bible, such as King James Version or New American Standard Bible. If the version's title includes the word "version," then abbreviate it to "Vers." Add a period at the end, unless the title ends with the word "Vers." and hence already has a period.
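Putting the steps together, a complete reference might look like the following. The editor, publisher, city and year here are placeholder details of the same kind used in the examples above, not a real edition, and the Bible title would be italicized in a formatted paper. In the text: (New Jerusalem Bible, Exod. 34.6). In the Works Cited list: New Jerusalem Bible. Ed. Jane Brown. New York: Example Press, 2010. Print.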
Militaries have been trying to attack one another with unmanned aircraft for more than 150 years. It all started in July 1849 when the Austrian army, after laying siege to Venice, tied bombs to balloons and floated them over the city. A timed fuse was supposed to release the bomb over the City of Canals, but, ironically, strong winds blew many of the balloons past the city and above Austrian encampments on the other side [source: Overy]. Both the Union and Confederate armies tried similar attacks during the American Civil War, but like the Austrians, their attempts were usually way off target [source: Garamone]. The Wright brothers' invention of piloted, powered flight in 1903 pushed drone experiments away from balloons and toward airplanes. The earliest prototypes, developed by the American military during World War I, were simply modified airplanes that could be pre-programmed to hit enemy targets. Despite some limited success, these early drones could not be recovered after an attack, and tests showed them to be too unreliable and imprecise for combat duty. Shortly after the war, advances in radio control allowed unmanned aircraft to be guided in real time, and on Sept. 15, 1924, an American-designed Curtiss F-5L became the first aircraft to take off, maneuver and land by remote control [source: Keane and Carr]. Similar technology powered the U.S. Navy's remotely piloted Curtiss TG-2, which conducted the first successful remote torpedo attack during an April 1942 test strike on a practice warship [source: Grossnick]. Drones got even more effective during the Cold War. In the early 1960s, the Ryan Aeronautical Company developed the Lightning Bug, a reconnaissance drone that could be recovered by parachute. Later, the company adapted the design for a new weapon known as the BGM-34A. During a test flight on Dec. 14, 1971, this drone became the first to strike a target with air-to-surface guided missiles, earning its place in history as the first modern UCAV. While the Israelis successfully used the new drone against Egyptian armored vehicles and missile sites during the 1973 Yom Kippur War, it never saw action in Vietnam because the Americans felt it wasn't as good as manned technology [source: Clark]. The military continued to use drones throughout the end of the 20th century, but they were mostly reserved for reconnaissance missions. That's how the Predator drone got its start in 1995, but by Feb. 16, 2001, it was outfitted with Hellfire missiles — just in time for the U.S. response to the Sept. 11 terrorist attacks [source: Matthews].
Here's a kinesthetic activity to help students formulate sentences. To do this ESL activity: - Using content from your lesson, have your students come up and draw pictures of simple statements on the board. For example, "He can't drive a car." You'll need a number of them to make it successful. - Divide your class into two teams and have two students come forward to the board. - Say one of the sentences on the board. - Have your students race to touch the picture and say the sentence. - Give a point to the first to say it correctly.
Learning styles refer to various approaches to learning, or the ways a person learns. There are various types of learning styles, each with different strategies a person can use for their personal style of learning. If you are unsure of which learning style is yours, you could take a learning styles quiz or learning styles test to find out and then apply the appropriate strategies. Continue reading to better understand the different learning styles. Visual Learning Styles If you've taken a learning styles test and your results say you are a visual learner, this means that you learn best visually, through sight. Visual learners must be able to see facial expressions and body language in order to completely understand what is being presented to them. Visual types of learning styles mean the person would much rather sit in the front so as to avoid any obstructions such as another person's head. They tend to think visually, or in pictures, and tend to learn best through hand-outs, videos, overhead projectors, diagrams, flowcharts, and textbooks with illustrations. Those with visual types of learning styles typically prefer to absorb information by taking notes in great detail. Auditory Learning Styles Next in the learning styles inventory is auditory learning. If you've taken a learning styles quiz and found that you were an auditory learner, you learn best through listening. Auditory learning is one of the different learning styles in which people learn better through listening to others, verbal lectures, talking about things, and discussions. They interpret the meaning of what is being said by listening to the pitch, speed, and tone of a person's voice, and various other nuances. What is written may mean nothing unless the auditory learner actually hears it. Auditory learners tend to benefit by using a voice recording device and reading things aloud while recording them for playback later. Learning Styles Inventory: Kinesthetic Learning If your learning styles test says that you are a kinesthetic learner, this means you learn best through touching. Kinesthetic learning is one of the different learning styles in which a person prefers a hands-on approach. They would much rather be actively involved in exploring than simply listening to someone talk or show them how things are done. These learners may find it difficult to sit still for any given length of time and may find that they are easily distracted. They have a strong need to explore and experience things. Also in the Learning Styles Inventory is Multiple Intelligence There are multiple ways a person demonstrates their intellectual abilities, known as multiple intelligences. After you have learned which learning style you have by taking a learning styles quiz, you may also find that your intellectual abilities fall into one of the following: spatial/visual intelligence, linguistic/verbal intelligence, mathematical/logical intelligence, kinesthetic/bodily intelligence, rhythmic/musical intelligence, interpersonal intelligence, and intrapersonal intelligence. Below you will find a brief explanation of each. Spatial/Visual Intelligence – Just as with the visual learning style, individuals in this category enjoy videos, movies, charts, maps, and pictures. They have the ability to perceive things visually by thinking in pictures and creating vivid mental images as a way of retaining information. Linguistic/Verbal Intelligence – These individuals are highly developed in their auditory skills and make great public speakers.
Rather than thinking in pictures like those with visual intelligence, these individuals think in words. Mathematical/Logical Intelligence – These individuals tend to think in numerical and logical patterns, making connections between various pieces of information. They have the ability to use numbers, logic, and reason and have a high curiosity about the world surrounding them. They enjoy experiments and asking questions. Kinesthetic/Bodily Intelligence – As with the kinesthetic learning style, these individuals tend to express themselves through action and movement. They have great eye-hand coordination, and they process and remember information by interacting with what surrounds them. Rhythmic/Musical Intelligence – These individuals have a strong appreciation for, and the ability to produce, music. They think in patterns, sounds, and rhythms. They tend to be sensitive to the sounds of their environment as well. Interpersonal Intelligence – Those who have interpersonal intelligence have the ability to understand and relate to other people. They focus on seeing things from another person's perspective in order to better understand how they feel and think. They can sense motivations, feelings, and intentions, as well as work to maintain peace. Intrapersonal Intelligence – Individuals with intrapersonal intelligence have a strong awareness of their own inner person: the capacity for self-reflection. They tend to understand their relationships, feelings, and dreams, as well as their other weaknesses and strengths. To learn more about various learning styles and multiple intelligences, visit the following links.
The successive-approximation ADC is the most commonly used design (figure 8.6). This design requires only a single comparator and will be only as good as the DAC used in the circuit. Figure 8.6: The block diagram of an 8-bit successive-approximation ADC. The analog output of a high-speed DAC is compared against the analog input signal. The digital result of the comparison is used to control the contents of a digital buffer that both drives the DAC and provides the digital output word. The successive-approximation ADC uses fast control logic which requires only n comparisons for an n-bit binary result (figure 8.7). Figure 8.7: The bit-testing sequence used in the successive approximation method.
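The bit-testing sequence of figure 8.7 maps directly onto a short routine. The sketch below is a software model only, assuming an ideal DAC and comparator and an arbitrary 5 V reference rather than any particular converter; each bit is tried from the MSB down and kept only if the trial DAC output does not exceed the input, so an n-bit result takes exactly n comparisons.

```python
# Software model of an n-bit successive-approximation conversion.
# Assumes an ideal DAC and comparator; the 5 V reference is illustrative.

def sar_adc(v_in, n_bits=8, v_ref=5.0):
    """Return the n-bit code for v_in using successive approximation."""
    code = 0
    for bit in range(n_bits - 1, -1, -1):      # test the MSB first
        trial = code | (1 << bit)              # tentatively set this bit
        v_dac = trial * v_ref / (1 << n_bits)  # ideal DAC output for the trial code
        if v_dac <= v_in:                      # comparator: keep the bit if not too high
            code = trial
    return code

print(sar_adc(3.3))  # 8-bit code for 3.3 V with a 5 V reference -> 168
```

The digital buffer in figure 8.6 plays the role of the `code` variable here: it both drives the DAC during the search and holds the final output word.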
The US had to do a few things to prepare for entry into WWI. Among them were: - Persuasion. The government needed to persuade the people that going to war was the right thing to do. This was done through the Committee on Public Information. - Coordination with business. The US government could not or did not want to simply tell businesses what to make to help the war effort. Instead, the government tried to coordinate its needs with those of business. It offered good prices for the things it wanted made, and it tried to persuade businesses to act out of patriotism to make what was needed. - Training the military. The US had to draft soldiers to fight in the war and it had to train them to do so effectively. In these ways, the US prepared for entry into WWI politically, economically, and militarily. Earlier on, as the war raged on in Europe, the general public in America was against direct participation in the war. However, as time went by, public opinion changed and the war was seen as a fight between bad governance, represented by the Central Powers, and democracy, represented by the Allied Powers. The shift in opinion was fueled further by the deaths of 123 Americans after a British liner they were traveling on was torpedoed by Germany. Further, Germany sought support from Mexico, pushing America to join the war. The U.S. prepared for war by first building up the military in readiness for deployment by initiating a draft. President Woodrow Wilson sought Congress's support in the declaration of war against Germany. The president further issued Liberty Bonds and pushed for public participation to raise money for the war effort. The government raised income taxes to sustain financial support for the war. Businesses and citizens were strongly urged to support the war through government and media communications. The act of entering any kind of war is not an action taken lightly. The U.S. government faced many obstacles at the prospect of entering WWI. Leonard Wood and Theodore Roosevelt led a campaign to strengthen the U.S. military when the war first began. In 1915, a movement began that argued the U.S. needed to build strong naval and land forces, thinking the U.S. would eventually be in the war. There was strong opposition to this. It moved quickly through Protestant churches and women's groups. The Democratic party also saw this as a threat, since Theodore Roosevelt was a strong candidate for the presidency. The problem was that the U.S. military was, in fact, not ready or in shape to enter a war. In 1917 the Germans sank one of the U.S. naval ships, and this was the event that pushed us into war. The U.S. had to make the American people understand that war was the right answer. They had to convince businesses that they would succeed and not fail during the war. The war itself still faced opposition, but for the most part Americans supported our military in their efforts. The entry into the war was not taken lightly. The U.S. went to great lengths to make sure our military was as prepared as it could be, and that the rest of Americans knew the risks and benefits of going to war with Germany; of course this wouldn't be the last war we had with Germany. This war prepared the U.S. for the horrors they were soon to face. They got their military up to date with the war and they fought.
Researchers have found that Streptococcus pneumoniae, the bacterium that causes most pneumonia cases, secretes a toxin to help it move from one body to the next with the help of the host's immune defenses. The study, "Host-to-Host Transmission of Streptococcus pneumoniae Is Driven by Its Inflammatory Toxin, Pneumolysin," was published in the journal Cell Host & Microbe. The study explains a microbial strategy used by S. pneumoniae to spread to new hosts and why an organism expresses a toxin that damages the host upon which it depends. S. pneumoniae resides asymptomatically in the nasopharynx of healthy carriers. The respiratory tract, sinuses, and nasal cavity are the parts of the body that are usually infected. But in susceptible individuals, such as the elderly, immunocompromised people, and children, the bacterium may become pathogenic, spreading to other locations and causing disease. S. pneumoniae is the main cause of community-acquired pneumonia and meningitis in children and older people, and of septicemia in HIV-infected people. The methods of transmission include sneezing, coughing, and direct contact with an infected person. Using mouse models of S. pneumoniae in their experiments, researchers from NYU Langone Medical Center found that the pathogen has evolved to produce a pore-forming cytotoxin called pneumolysin, which promotes mucosal inflammation, increasing nasal secretions and enabling cells lining mucous membranes to expel the bacteria from the body. The researchers believe that these bacteria have evolved to gain a benefit from being expelled, riding the secretions out of the body and on to their next host. In their experiments, researchers observed that when S. pneumoniae was genetically altered to be unable to produce the toxin, it could not spread from one mouse to the next. "Factors that allow for the host-to-host transmission of disease-causing bacteria have not been thoroughly investigated by the field as a means of prevention," Jeffrey Weiser, MD, chair of the Department of Microbiology at NYU Langone, said in a news release. "Our findings provide evidence of the tool used by these bacteria to spread, which promises to guide the design of new kinds of countermeasures." According to the researchers, by secreting the pneumolysin toxin, S. pneumoniae is capable of both finding nutrients and leaving its current host to pass to the next one. They believe the toxin induces a response from the body, drilling pores in cells to get nutrients. As a result, the toxin gets nutrients for the bacteria to hold them over while outside the body. Then, the researchers say, the secretions the toxin triggers ensure that they exit a body that is attacking them with inflammatory responses, helping them find a new host. S. pneumoniae is known to spread more effectively when someone is sick, for example with the influenza virus. In previous studies conducted in mice, the secretions that accompany influenza viral infection were shown to help S. pneumoniae bacteria overcome the population constraints that come with remaining in one host. In the new study, the researchers genetically altered a mouse model of bacterial transmission to examine pneumococcal transmission in the absence of flu. They found that the inflammation caused by colonization of pneumococcal bacteria, especially in response to the pneumolysin toxin, caused the bacterial shedding necessary for transmission between hosts. But why do bacteria that fully depend on their host produce such a destructive toxin?
According to the researchers, the benefit to the bacteria of a higher rate of transmission counterbalances the harmful effects of the toxin on the host. “Our study results argue that toxins made by bacteria are central mediators of transmission between hosts, which makes them attractive as a potential ingredient in vaccines, to which they could be added specifically to block transmission,” Weiser concluded. “There are precedents in using disarmed bacterial toxins, or toxoids, as vaccine ingredients, as with existing vaccines against diphtheria, tetanus and pertussis.”
1. Uninvolved Parents Children who bully may not be receiving love and warmth from their families. These children also may not have rules at home. 2. Aggression in the Family Aggressive behaviors can be learned when children are beaten up by older siblings or when they are physically punished by parents. 3. Peers Who Bully Children can learn bullying from their peers. Children often bully to make themselves feel more important, and victims of bullying often become bullies. 4. Friends of a Feather Children who tend to bully make friends with other children who bully. Consequently, these children support each other's bullying behaviors. 5. Lack of School Rules Bullying is more likely to happen when schools do not have anti-bullying policies or when school rules are not enforced. 6. Poor Supervision in Schools Bullying can happen more easily when there is poor supervision in the classroom, hallways, cafeteria, or at recess. 7. Social Aggression Girls more often bully each other emotionally rather than physically hurting each other. Girls often do this to gain attention or make themselves feel better. 8. Media Models Television, movies, and video games often contain aggression and bullying behavior. Exposure to bullying in the media can reinforce bullying behavior in children.
When you hear the word 'fire,' what comes to mind? Were you ever afraid of it? Mesmerized by it? Comforted by it? No matter where or how you have experienced fire, it is essentially the same. Fire is a chemical reaction. In order for fire to burn, three elements must be present. Oxygen, fuel and heat combine to make what is called the "Fire Triangle." Oxygen, fuels, and heat are around us all the time. So why don't we see fire more often? The answer lies in the details. Oxygen is pretty easy. Fire needs air that is about 16% oxygen. The earth's atmosphere is 21% oxygen. Fuel is anything that will burn. In the outdoors, that often includes wood, grass, shrubs, pine needles and the like. The presence of heat will vary. Wood needs to be about 617 degrees F to burn. If any one of these elements is missing or not present in the right form or amount, one "leg" of the fire triangle collapses and no burning occurs. So, once a fire is ignited and there is enough fuel and oxygen for it to burn, the fire will create all the heat it needs to sustain itself. The more fuel, the higher the temperature. The higher the temperature, the faster the fire spreads. The more the fire spreads, the more it "preheats," or heats, the fuels around it, increasing its size and the temperature around it. And the race is on! All of us have seen fire. Technically, the fuel you see burning isn't really on fire. Instead, the fuel is being converted into a gas. It's the gas produced by the fuel that's burning. Next time you are watching a log burning in your fireplace, see if you can see a space between the log and the flame. I'll bet you can! Not All Fires Are Alike If you listen to the news or if you talk to people who work with fire, you will hear it described in several ways. Here are some terms that will help you understand what is going on. Wildland fire is one of nature's oldest phenomena. Evidence of free-burning fires has been found in petrified wood and coal deposits formed as early as the Paleozoic Era, about 350 million years ago. Wildland fire is any fire burning in wildlands, including wildfires and all prescribed fires. A wildfire is one that is out of control and generally viewed as undesirable by land managers. It needs to be put out or suppressed. An example of a wildfire might be one that is burning the habitat of an endangered animal like the sage grouse, as has been the case in southern Idaho over the past few years. Managers would call for firefighters to suppress this fire. A prescribed fire is one that is considered desirable by managers because it meets some management objective. Prescribed fires can be naturally ignited, such as those started by lightning, or they can be lit by land managers to accomplish a specific task. Burning logging debris following a logging operation is one example of a time that managers might ignite a fire. Allowing a lightning-caused fire to burn because it is clearing out dead branches and needles on the forest floor of a ponderosa pine forest would be an example of a prescribed natural fire. Fire in Ecosystems It is important to remember that fire behaves differently in different ecosystems. The lodgepole pine forest depends on fire to survive because lodgepole cones need fire to open them so seeds can be released. Repeated fire in sagebrush-steppe country can destroy the sagebrush, an important part of that system. Ponderosa pine forests benefit from an occasional ground fire to help clear the forest floor of competing grasses and young trees.
A healthy ponderosa pine forest has trees that are spaced far apart so that sun can reach the ground and grasses and shrubs can grow. Fires spread in three general patterns: ground fires, surface fires, and crown fires. Ground Fires — These fires burn organic material in the soil beneath the litter on the surface. They burn by glowing combustion. Surface Fires — Surface fires have a flaming front and burn leaf litter, fallen branches and other materials on the ground. Crown Fires — These fires are the hottest and most intense. They are often difficult to control and need strong winds, steep slopes and lots of fuel to keep burning. Crown fires burn the top layer of foliage on the trees. Once a wildfire is started, the way it behaves is determined by the current weather conditions, the amount of humidity in the air, the type and amount of fuel available to the fire, and the topography of the land. Because live plants contain so much water, they are less likely to burn than dry logs, branches or stems. High winds can create small fires out in front of a large fire by blowing embers into the unburned fuel. These fires are called spot fires and may burn some trees and shrubs and leave others untouched. A large fire can even create its own wind. As the fire heats the air around it, the air quickly rises. Cool air rushes in to replace the hot air, which creates a wind and increases the supply of oxygen to the fire. Trees can explode if water deep inside the tree turns quickly to steam. After the Fire After a fire, the hard work of rehabilitating the landscape begins. Most of it needs to be done quickly because there is often little to hold the soil in place, and erosion can be a big problem. This is especially true if the burn is on a steep slope. This was the case in Boise in both 1959 and 1996, when the Boise Foothills burned. Stabilizing and Rehabilitating the Land Land managers use a variety of techniques to stabilize the soil and rehabilitate the land after a fire. Burned trees are cut down and the logs are laid horizontally across the hillside. Most of the logs are 8–14 inches in diameter and held in place by wooden stakes. A small trench may also be dug on the uphill side of each log to collect water and store sediment. In the same area as the contour-felled logs, crews use hand tools to dig more small trenches uphill from the logs. These horizontal trenches catch water and dirt as they flow down the hill. A mini-excavator is used to dig larger horizontal trenches along the hillside. These trenches are usually 2–3 feet wide and 2–3 feet deep. Every 30–50 feet, dirt is piled into the trenches to create a dike. If a trench breaks, the water and soil will stop at the dike and not continue down the hill. Wattles are made of straw wrapped in a mesh that will break down in sunlight. They are about 8 inches in diameter, 25 feet long, and weigh about 35 pounds. They are placed horizontally across the hill, with stakes holding them in place. Wattles slow the water and soil moving down the hill and provide a good seed bed for future seedlings. From time to time, horizontal strips of earth are tilled 6–12 inches deep to allow water running down the hill to soak into the ground. This also provides a good seed bed. Bands of undisturbed earth are left between the tilled rows, allowing plants that survived the fire to resprout. Straw-Bale Check Dams Dams made of three or more straw bales are built across gullies to slow water and soil as they wash downhill.
The straw bales are wrapped in wire mesh to help hold them together. Then they are covered with a strong cloth. The straw and cloth are porous, allowing water to seep through while collecting the sediment behind the dam. There are usually 3–8 dams per gully. As the water runs down the hill, its velocity is slowed as it is routed from basin to basin behind each dam. Sediment Ponds and Basins Ponds and basins are built in gentle stream channels or at the base of hills to trap and store water running downhill. As the water sits in the pond, it soaks into the ground while providing a water supply to wildlife in the area. A rangeland drill pulled by a tractor is used to seed many burned areas. Round disks cut furrows into the ground, and seeds are dropped from long tubes behind the disks. Chains dragging behind the tractor help cover the seeds with soil so they sprout. Some seeds, such as sagebrush, are aerially distributed from a bucket carried below a helicopter. They are usually dropped onto snow-covered ground so the seed will stay moist and sink into the ground as the snow melts. Seeding is sometimes done by volunteers using hand-held spreaders. They are followed by other volunteers who rake the seed into the ground.