icon [Gr.,=image], a single image created as a focal point of religious veneration, especially a painted or carved portable object of the Orthodox Eastern faith. Icons commonly represent Christ Pantocrator, the Virgin as Queen of the Heavens, or, less frequently, the saints; since the 6th cent. they have been considered an aid to the devotee in making his prayers heard by the holy figure represented in the icon. The icon grew out of the mosaic and fresco tradition of early Byzantine art (see Byzantine art and architecture). It was used to decorate the wall and floor surfaces of churches, baptisteries, and sepulchers, and later was carried on standards in time of war and in religious processions. Although the art form was in common use by the end of the 5th cent., early monuments have been lost, largely because of their destruction during the iconoclastic controversy (726–843; see iconoclasm). Little has survived that was created before the 10th cent. Byzantine icons were produced in great numbers until 1453, when Constantinople fell to the Ottoman Empire. The practice was transplanted to Russia, where icons were made until the Revolution (see Russian art and architecture). The anonymous artists of the Orthodox Eastern faith were concerned not with the conquest of space and movement as seen in the development of Western painting but instead with the portrayal of the symbolic or mystical aspects of the divine being. The stiff and conventionalized appearance of icons may bear some relationship to the two-dimensional, ornamental quality of the Eastern tradition. It is this effect more than any other that causes the icons in Byzantine and later in Russian and Greek Orthodox art to appear unchanging through the centuries; there is, however, a stylistic evolution in Byzanto-Russian art that can be seen through variations of a standard theme by local schools rather than through the development of an art style by periods. In the 19th-century German school of art-historical study, the term icon came to mean an image or symbol more generally, and from this meaning were derived the terms iconography and iconology.
Brazil’s farms are major global producers of beef, soybeans, sugarcane, coffee, rice, and more. Yet they’re also major producers of greenhouse gas emissions. Two new resources aim to reduce the emissions intensity of Brazil’s agricultural sector. Today, the Greenhouse Gas Protocol launches the Agricultural Guidance and Emissions Calculation Tool, which will help Brazilian crop and livestock producers measure their greenhouse gas emissions at the farm level. By taking stock of the full emissions from their operations, farm managers can identify major emissions sources, develop reduction plans, and eventually mitigate their climate impact. To understand how important these new resources are for agricultural companies—and for Brazil as a whole—it helps to know the country’s evolving farming landscape.

The Greenhouse Gas Impacts of Brazil’s Agriculture

Brazil is the world’s fifth-largest greenhouse gas emitter, largely due to the impacts of agriculture. Agricultural emissions increased 20 percent from 2005 to 2010 and now account for more than one-third of the national total. Without interventions, they’re on track to grow another 18 percent by 2030. The country’s farms also drive another emissions source—land use change. Brazil has lost 36 million hectares of forest over the last 12 years, mainly due to forest-clearing for cattle ranching and other agricultural activities. While tree loss has declined in the country in recent years, deforestation still accounted for 22 percent of Brazil’s emissions in 2010.

Brazil’s forest loss (shown above, in pink) far exceeds its forest gain (shown above, in purple) over the past 12 years. Credit: Global Forest Watch

Reducing Agriculture’s Greenhouse Gas Footprint

These greenhouse gas impacts prompted the Brazilian government to create the National Plan for Low Carbon Emissions in Agriculture, colloquially known as the “ABC Plan.” Enacted in 2010, the ABC Plan offers incentives for sustainable agriculture, such as lines of credit for farmers who adopt less greenhouse gas-intensive practices. It also aims to eliminate illegal deforestation and encourage research on climate-resilient crops, among other initiatives. The problem was that government officials lacked a mechanism for tracking actual emissions reductions on individual farms. Unlike emissions from other sectors—such as energy or industry—agricultural emissions are notoriously difficult to measure. They’re highly influenced by environmental conditions like soil moisture and temperature. And some farming activities—such as reduced tillage and planting trees—may not reduce emissions immediately. This lack of understanding also stymies greenhouse gas management by larger producers and agribusinesses. For example, only one-quarter of agricultural companies targeted by CDP’s climate change questionnaire actually reported their greenhouse gas emissions.

New Resources for Measuring and Managing Agricultural Emissions

That’s where the new resources come in. Developed in partnership with Embrapa and Unicamp—and with the input of more than 100 experts from businesses, academia, NGOs, and government agencies—the guidance and calculation tool will allow Brazil’s agricultural companies to more accurately measure their emissions. They will also allow government officials to track the emissions impacts of Brazil’s national policies, including the ABC Plan.
The guidance offers an emissions accounting framework for all companies with agricultural operations—whether they produce animals or plants for food, fiber, biofuels, drugs, or other purposes. The calculation tool drills down into specific practices and emissions-intensive subsectors like soy, corn, cotton, wheat, rice, sugarcane, and cattle. And both include methodologies for measuring and reporting land use change emissions. With better measurement data in hand, companies will be able to:

- Understand operational and reputational risks associated with their emissions;
- Identify emissions-reduction opportunities, set reduction targets, and track performance;
- Improve accountability and reputation through public disclosure of GHG emissions; and
- Reap emissions-reduction co-benefits, such as energy conservation, increased productivity, and improved soil and water quality.

With the right tools, Brazil’s farms and ranches can start shrinking their greenhouse gas footprints—for the good of the planet and their own bottom lines. This post is also available in Portuguese.
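To make the accounting structure concrete, here is a minimal sketch of the activity-data-times-emission-factor pattern that farm-level GHG inventories generally follow. This is my illustration, not the GHG Protocol tool itself (which ships as a spreadsheet-based calculator); the source names and emission factors below are placeholder assumptions.

```python
# Minimal sketch of farm-level emissions accounting: total emissions are the
# sum over sources of (activity data x emission factor), the same structure
# the guidance formalizes. The factors below are illustrative placeholders,
# NOT values from the GHG Protocol tool.

ILLUSTRATIVE_EMISSION_FACTORS = {
    # source: (unit of activity, kg CO2e per unit) -- hypothetical numbers
    "enteric_fermentation": ("head of cattle per year", 1500.0),
    "synthetic_fertilizer": ("kg N applied", 5.0),
    "diesel_combustion": ("liter of diesel", 2.7),
}

def farm_emissions(activity_data: dict[str, float]) -> float:
    """Return total emissions in tonnes CO2e for the given activity data."""
    total_kg = 0.0
    for source, amount in activity_data.items():
        _unit, factor = ILLUSTRATIVE_EMISSION_FACTORS[source]
        total_kg += amount * factor
    return total_kg / 1000.0  # kg -> tonnes

# Example: a hypothetical mixed farm
print(farm_emissions({
    "enteric_fermentation": 200,    # 200 head of cattle
    "synthetic_fertilizer": 5000,   # 5,000 kg N applied
    "diesel_combustion": 12000,     # 12,000 L diesel burned
}))
```

A real inventory would draw factors from the tool itself (or IPCC defaults) and would also cover land use change, but the summation pattern is the same.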
By Susan Chow, PhD, ELS

The exact cause of schizophrenia is not currently known, and it is thought to occur as a result of various genetic, physical, psychological, and environmental risk factors. Thus, some people are more likely to be affected by the condition due to genetic and physical susceptibility, but a particular life event, usually stressful or emotional in nature, triggers the condition.

Although it is well known that schizophrenia tends to run in families and is likely to be inherited, no single gene has been linked to an increased risk of schizophrenia. For this reason, many medical researchers believe that a combination of genes increases an individual’s risk of developing the condition. However, not everyone with the genetic makeup will become affected by schizophrenia, as it is also dependent on other risk factors. Studies of identical twins, who share the same genes, have made genetic research on schizophrenia possible. It has been observed that if one twin develops schizophrenia, the other twin has a 50% chance of developing the condition. This is in contrast to non-identical twins, who have a one in seven chance of developing the condition if the other twin is affected.

It has been observed that there are subtle changes in the physical structure of the brain of people affected by schizophrenia. However, these changes aren’t uniformly present in people with the condition and may exist in people who are not symptomatic. Additionally, the neurotransmitters responsible for carrying messages in the brain may also be involved. This has been suggested due to the efficacy of medications that alter neurotransmitters in the brain in the treatment of schizophrenia. In particular, dopamine and serotonin are thought to be linked to the development of the condition, and some research suggests it is an imbalance of these two neurotransmitters that is problematic.

Complications in Infancy

Individuals who were subjected to complications before and during birth have also been observed to be at greater risk of developing schizophrenia. These complications may include premature labor, low birth weight, and asphyxia during birth. Additionally, exposure to viruses or infections in the womb or early infancy may also have an effect. The pathophysiology of this link is not known for certain, although it is thought to be a result of subtle changes in the development of the infant’s brain.

Triggers

There are certain situations that tend to cause the development of schizophrenia in people who are at risk of the condition due to genetic and physical factors. Stressful life events are the most common trigger for schizophrenia. The nature of the event can vary greatly and may include sudden job loss, divorce, or abuse, but any stressful event has the potential to trigger a psychotic episode in a susceptible individual. Additionally, misuse of drugs has also been linked to an increased risk of schizophrenia. Common drugs that have triggered the onset of the condition include cannabis, cocaine, LSD, and amphetamines. Environmental triggers are almost always associated with the development of schizophrenia, but it is worth noting that they are not sufficient to cause the condition alone. Many individuals experience similarly stressful events throughout their lives without developing schizophrenia, and the predetermined susceptibility is, therefore, of particular importance.

Last Updated: Aug 24, 2015
The Vedas are the earliest Hindu scriptures. The large body of texts known as the Vedas was composed roughly between 1500 and 500 BCE, and was predominantly orally transmitted until around 1000 CE. The Vedas are considered Shruti (“what is heard”), meaning they were directly revealed from the divine realm and not the original work of human beings. The core of the Vedas are the Samhitas, four collections of mantras and hymns: the Rig-Veda, Sama-Veda, Yajur-Veda, and Atharva-Veda. The Brahmanas are prose commentaries on the Samhitas that detail the rituals to be performed with each Samhita mantra. The Aranyakas contain further discussion and interpretation of the rituals in the Brahmanas, along with other material. Many schools of Hindu thought have left behind much of the ritualism of the Vedas, stressing instead their philosophical teachings and using post-Vedic literature as the predominant source of scriptural authority. The Vedas are the oldest scriptures of the Hindu tradition. The period of their composition, between the mid-second to the mid-first millennium BCE, has become known as the Vedic Period. The cultures that evolved during this period are collectively dubbed Vedic civilization, a testament to the profound influence these texts have had on the development of society in ancient India. The Vedas also determine Hinduism’s view on who is Hindu and who is not, as Hindu orthodoxy includes those sects that accept the authority of the Vedas, while those who reject them—Buddhists, Jains, Sikhs—are held as heterodox and non-Hindu.
sound barrier, sharp rise in aerodynamic drag that occurs as an aircraft approaches the speed of sound and that was formerly an obstacle to supersonic flight. If an aircraft flies at somewhat less than sonic speed, the pressure waves (sound waves) it creates outspeed their sources and spread out ahead of it. Once the aircraft reaches sonic speed the waves are unable to get out of its way. Strong local shock waves form on the wings and body; airflow around the craft becomes unsteady, and severe buffeting may result, with serious stability difficulties and loss of control over flight characteristics. Generally, aircraft properly designed for supersonic flight have little difficulty in passing through the sound barrier, but the effect upon those designed for efficient operation at subsonic speeds may become extremely dangerous. See also sonic boom.
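A small numerical sketch (mine, not from the article) shows why the waves pile up at Mach 1: below the speed of sound each wave front outruns the aircraft, and that head start shrinks to zero as flight speed approaches sonic speed. The standard-day temperature and the textbook formula a = sqrt(gamma * R * T) are assumptions I am supplying, not values from this entry.

```python
import math

# Speed of sound in dry air, a = sqrt(gamma * R * T), with gamma = 1.4 and
# R = 287.05 J/(kg*K) for air; 288.15 K is a standard sea-level day.
def speed_of_sound(temp_kelvin: float) -> float:
    return math.sqrt(1.4 * 287.05 * temp_kelvin)

a = speed_of_sound(288.15)        # roughly 340 m/s
for v in (0.5 * a, 0.95 * a):     # two subsonic flight speeds
    # After one second, the wave front emitted at t=0 has traveled a meters
    # while the aircraft has traveled v meters: the waves stay ahead by (a - v).
    print(f"Mach {v / a:.2f}: waves lead the aircraft by {a - v:6.1f} m per second")
print("At Mach 1 the lead is zero: the waves pile up into shock waves.")
```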
Colored Women and the Civil Rights Movement

During the period between the 1950s and 1970s, colored women were in a difficult situation. Despite the emergence of feminist movements during the Civil War era and the establishment of the civil rights movement, colored women did not have strong support from these movements. This was so despite the fact that colored women, specifically black women, were among the main founders and campaigners of the civil rights movement. Their contributions would be de-emphasized based on their skin color (Penrice). The marginalization of the views and opinions of colored women in the civil rights movement was driven by the white society, which felt compelled to adopt patriarchal roles (Penrice). When colored women joined the feminist movements, white women would discriminate against them. White members of feminist movements paid little attention to class issues, which affected many colored women. However, these actions did not prevent colored women from fighting against racism, sexism, inequality in the workplace, and class segregation, and for voting rights.

After the end of the Civil War, colored women continued to suffer political, economic, and social oppression (Willis 3). They bore the pain of discrimination in employment and education, class segregation, and the demoralization of verbal abuse (Willis 3). As oppression continued, women felt the need to liberate themselves. Many women thought that since they had played significant roles in the civil rights era, the civil rights movement was the best avenue for them to attain liberation. However, many civil rights organizations allocated women to lower positions, while their male counterparts took the leading positions (Smith 14). The domination of men in the liberation movements prevented the active contribution of women in such organizations. To illustrate this, in 1963, thousands of women, including activists Ella Baker, Jo Ann Robinson, and Fannie Lou Hamer, joined the March on Washington (Smith 14). The march's organizing committee was male-dominated. The committee members would neglect to invite women to make speeches before crowds, despite their active involvement in the movement. The role of women leaders in the movement was to type minutes, prepare food, wash dishes, and provide moral support to the male activists. While some women accepted these roles, others such as Kathleen Cleaver, Elaine Brown, and Ericka Huggins refused to take subordinate positions (Smith 15). Instead, they started to fight for equal positions in the movement. This led to the formation of the Black Panther Party for Self Defense (Smith). The party provided a platform for the equality of all women. It attracted many women recruits, who continued the struggle for political, social, and economic equality. The party was able to enter into an agreement with male Panthers whereby all members, male and female, were to treat each other as 'comrades' (Smith 15). All group activities and responsibilities were to be shared equally between the two genders. This arrangement provided an opportunity for women Panthers to earn respect from their male counterparts. Gradually, colored women of black origin were able to participate in leadership activities.

According to Smith, during World War I, black women suffered from sexual exploitation and racial discrimination (16).
After the end of the war, many black women migrated to urban areas in a bid to find secure places for their families. Many of these black women migrated to Chicago, New York, and other northern cities (Smith 16). A few of them were able to secure employment in shops, department stores, factories, and low-status formal jobs such as secretaries and sales clerks. However, many of them were unable to find employment. This is because many of the available jobs in the urban areas were reserved for either black men or war veterans of white origin (Smith 17). In fact, those who had already secured industrial jobs were fired to make way for the male war veterans. This led to increased poverty and oppression of black women. In the 1920s, a group of black women artists and poets took part in the Harlem Renaissance, singing and writing about the frustration of black women during and after the war period (Smith 17).

During the Great Depression, in the 1930s, poverty among black women living in urban areas increased significantly. However, this did not stop black women from participating in the fight against oppression and poverty. In the southern cities, black women participated in communist parties, where they fought against racial discrimination in the steel industries (Barnett 163). On the other hand, black women in northern cities established nationalist economic programs and consumer groups such as the Housewives' League of Detroit (Barnett 163). The Housewives' League assisted in stabilizing the economic status of blacks. It achieved this by having Negro-owned businesses adopt a non-discriminatory policy under which black people secured jobs without any discrimination, establishing training opportunities for Negro youth in commercial and trade-related activities, and conducting educational campaigns in which all Negroes were taught how to spend wisely (Barnett 164). The activities of the Housewives' Leagues were also characterized by boycotts of white-owned stores in black neighborhoods. Towards the end of the 1930s, seventy-five thousand job opportunities had been created, meaning seventy-five thousand blacks were able to secure employment.

In efforts to fight political powerlessness among colored women, Mary McLeod Bethune, a black activist for black women's rights and a prominent member of President Roosevelt's New Deal administration, founded the National Council of Negro Women (NCNW) (Willis). The NCNW was involved in collecting, distributing, and interpreting information concerning the activities of black women in the United States of America. The aim of the NCNW was to develop courageous and competent women who would be easily integrated into the social, political, economic, educational, and cultural activities of the nation. When the Second World War broke out in 1941, NCNW leaders such as Estelle Riddle and Church Terrell encouraged black women to fill the vacant positions left by men who went to serve in the war (Willis). This saw many black women secure jobs in male-dominated fields such as welding, aviation, and defense. In addition, through the NCNW, black women were able to serve in the Armed Forces Nurse Corps, in which they had previously been barred from serving (Willis).

Black women's involvement in civil rights movements did not end after WWII. In the 1950s, black people faced a great deal of segregation in public places and public transport. In public transport, black people were required to sit at the back of all public transport vehicles.
However, the year 1955 marked the beginning of the end of segregation laws against black people in the United States of America (Smith 17). This started when a 42-year-old black woman named Rosa Parks refused to give up her seat to a white person. After she had been arrested, she filed a case against the bus company, in which she was the plaintiff. Rosa did not win the case, and she was fined $14.00. However, her arrest attracted mass action, which contributed to the end of segregation of black people. After her arrest, a group of women leaders from the Women's Political Council arranged a boycott of the buses in Montgomery, Alabama (Smith 18). The women were successful in arranging the boycott, which lasted for one year. During the year 1956, the buses in Montgomery rode empty or half full. Consequently, bus companies in Alabama faced enormous financial losses. After losing her case in the lower courts, Rosa's lawyers helped her file a new case in the United States District Court, arguing that segregation of black people in public transportation was unconstitutional. In June 1956, the District Court ruled in her favor, holding that bus segregation against colored people was unconstitutional (Smith 18). However, the Montgomery city commissioners appealed the decision of the District Court to the United States Supreme Court. Their appeal did not succeed, because the US Supreme Court also ruled that racial segregation in buses was unconstitutional (Smith 18).

More recent achievements of colored women through the civil rights movement include the adoption of the Voting Rights Act of 1965, which allowed black women to participate in elections through voting (Smith 19). Today, more than half of all black graduates are women. Due to the increased level of education among black women, the employment level among black women in the United States of America has increased steadily, resulting in a reduction of the income gap between them and their white counterparts (Smith 19). Consequently, class segregation against black women by white women has drastically reduced. However, colored women's participation in the civil rights movement is far from over. A good illustration is the participation of Anika Rahman in reducing the wage gap between men and women in the United States (Robb 34). According to Robb, women in the United States of America earn 77 percent of what men earn. Black women earn less than 77 percent of what American men earn, while the percentage earned by Latina women compared with men's earnings in the US is even lower. Moreover, Robb points out that US domestic workers, who are largely women from black and Latino communities, are excluded from the federal rules that protect other wage earners (34). Luckily, through the Ms. Foundation for Women, Rahman is helping to fight this inequality. Recently, the Ms. Foundation for Women assisted in securing minimum employment guarantees for all nannies and domestic workers in New York State: paid overtime, three annual paid days off, one day off per week, and protection from harassment. Currently, the Ms. Foundation is working with different international organizations to bring change to women by supporting women's health, ending domestic violence, promoting democracy, and advocating for economic justice among women.

Irma McClaurin, the current president of Shaw University, is another example of colored women's participation in the civil rights movement in the modern era (Paying it Forward 17).
She is a feminist scholar who engages in informing colored women about the ways sexism, racism, and class intersect, and about how they can resist such vices. McClaurin writes about the personal experiences of the different women with whom she works, to help society understand more about diversity and gender issues. According to Burk, women of color, especially Latinas, are the worst victims of the current global recession (46). Since many of them do not have college degrees, they earn very little from their jobs. Generally, due to the recession, women are being forced to take low-skill jobs or work part time, because they are unable to find full-time jobs. As the federal government tries to reduce the economic deficit through cuts in education and health care funding, women, who comprise the biggest percentage of employees in these sectors, are losing their jobs faster than men. In fact, the groups worst hit by this phenomenon are blacks and Latinos (Burk 46). However, in 2010, increased participation of women of color in supporting government action was observed. This group of women is advocating for the creation of more jobs through the use of stimulus money, as opposed to laying off public sector workers, and for equality of men and women in terms of salaries (Burk 46).
Holidays and Observances: Looking at Diversity and Culture

What We Already Know
Writing About a Favorite Holiday
Research on Holidays
Making a Display about Holidays

Introductory Activity: What We Already Know

What is a holiday?
- Break the class into small groups or pairs
- Have them answer the following questions: What is a holiday? Why do we have holidays? What are different kinds of holidays? What kinds of things do people do to celebrate or acknowledge holidays?
- Have each group report back to the class
- Write a group composition or paragraph about holidays in general

Activity 1: Writing About a Favorite Holiday

- Brainstorm a group list of different holidays on the blackboard
- Write a list of adjectives on another part of the blackboard: happy, serious, religious, simple, important, political, traditional, historical, merry, fun, minor, ritualistic. Ask students if they can think of any other words that could describe a holiday.
- Match the holidays and the adjectives. You can ask students to write it all down themselves, do it as a group, or make handouts with the holidays and the adjectives already printed on them.
- Give students some index cards. Ask each student to write down their three favorite holidays on each of three cards. (Refer to the list the group made for the Introductory Activity.)
- Ask students to write a sentence or two about each holiday.
- Ask students to pick the one holiday they want to write about.
- Ask students to close their eyes and remember a happy time they celebrated that holiday. Help them remember by guiding them to think about when this was, who was there, where they were, what they did, what they ate, what they heard, etc.
- Have students draw a picture or make a collage of that good memory. Tell them it doesn't matter if they aren't artists. This activity is just to get their thoughts moving. (Materials such as markers, crayons, construction paper, old magazines, and ribbon can make this more fun.)
- Break students into pairs and have them explain their artwork.
- Have students write the story of the time in the picture.
- Share the writing in groups. Ask for volunteers to read their stories to the whole class.

Activity 2: Research on Holidays

- Have students review a big list of holidays. It can be the list you made in the Introductory Activity.
- Ask each student to divide the group's list into two lists: the holidays they know about and the ones they either never heard of or don't know much about.
- Ask each student to draw a circle around the three holidays they would like to know more about.
- Ask the class to share which holidays they know about and which ones they don't.
- Have students do some research on the holidays they don't know about. Students can work in groups, pairs, or independently. (You probably need to offer a little guidance here. If someone picks Ramadan to research, they will have plenty of information to review. If someone picks Sweetest Day, you might want to steer the student towards another choice where there will be more to explore.) Here are some questions for students to answer:
- When is this holiday?
- Is this holiday on the same day every year?
- Where is this holiday celebrated?
- Who celebrates this holiday?
- For how long has this holiday been celebrated?
- What is the meaning of this holiday?
- What customs go with this holiday?
- Is there special food, decoration, or clothing associated with this holiday?
- Have students write a report based on their research. Ask students to read their reports to each other in small groups.
Have students respond by saying what they just learned about the holiday and asking questions. Each student should leave the group with at least two new questions to answer. Have students do more research to answer the questions. Have students revise their reports to incorporate the new information. Have students read their reports again in the same small groups.

Culminating Activity: Making a Display about Holidays and Culture

(Everyone should have two pieces of writing, one from each of the activities above. Students will use both of them in this activity.)

- Make a list of all the holidays that students wrote about on the blackboard. Have students raise their hands and count how many people wrote about each holiday.
- Break up students into groups based on which holidays they wrote about. If there are holidays that only one person wrote about, put all of those into a group called Other Holidays.
- Ask each group to create a display for each holiday and/or for the Other Holidays. Ask students to organize what they have already written, and write captions and summaries.
- Put the displays up on the walls or on tables. Put a blank sheet of paper next to each display so students can write comments.
- Have students circulate, reviewing the displays and writing comments.

Note: The nature of your display will depend on your students. If everyone celebrates and knows about the same holidays, your display will be more uniform. If everyone knows about and wrote about different holidays, you will have a different kind of display. There are many fun activities about holidays on websites. Have students do a search for holidays they haven't written about.

(OPTIONAL: Include real-world actions students can take to follow through on lesson concepts. These include activities such as interviews, community-based art projects, performances, portfolios, and letter or email writing to relevant government, academic, or business personnel. For additional insight into community-based projects, go to "Making Family and Community Connections" at http://www.thirteen.org/wnetschool/concept2class/month9)
Seven problem-solving techniques include inference, classification of action sequences, subgoals, contradiction, working backward, relations between problems, and mathematical representation. The book also includes problems from mathematics, science, and engineering with complete solutions. Based on Stanford University's well-known competitive exam, this excellent mathematics workbook offers students at both high school and college levels a complete set of problems, hints, and solutions. 1974 edition.

The Nuts and Bolts of Proofs instructs students on the basic logic of mathematical proofs, showing how proofs of mathematical statements work. The text provides the core techniques for reading and writing proofs through examples. The basic mechanics of proofs are presented methodically to help students grasp the fundamentals and reach different results. A variety of fundamental proofs demonstrate the basic steps in the construction of a proof, and numerous examples illustrate the method and detail necessary to prove various kinds of theorems. New chapter on proof by contradiction. New updated proofs. A full range of accessible proofs. Symbols indicating level of difficulty help students understand whether a problem is based on calculus or linear algebra. Basic terminology list with definitions at the beginning of the text.

A fascinating approach to mathematical teaching stresses the use of recreational problems, puzzles, and games to teach critical thinking. Logic, number and graph theory, games of strategy, and much more. Includes answers to selected problems. Free solutions manual available for download at the Dover website.

Accessible text features over 100 reality-based examples pulled from the science, engineering, and operations research fields. Prerequisites: ordinary differential equations, continuous probability. Numerous references. Includes 27 black-and-white figures. 1978 edition.

This workbook bridges the gap between lectures and practical applications, offering students of mathematics, engineering, and physics the chance to practice solving problems from a wide variety of fields. 2011 edition.

A collection of 100 of the best submissions to a math puzzle column features problems in engineering situations, logic, number theory, and geometry. Most solutions include details of several different methods.

Over 300 unusual problems, ranging from easy to difficult, involving equations and inequalities, Diophantine equations, number theory, quadratic equations, logarithms, and more. Detailed solutions, as well as brief answers, are provided for all problems.

Problems in Combinatorics, Arithmetic, and Geometry. Author: Jiri Herman. Publisher: Springer Science & Business Media. This book presents methods of solving problems in three areas of elementary combinatorial mathematics: classical combinatorics, combinatorial arithmetic, and combinatorial geometry. Brief theoretical discussions are immediately followed by carefully worked-out examples of increasing degrees of difficulty and by exercises that range from routine to rather challenging. The book features approximately 310 examples and 650 exercises.
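Proof by contradiction, one of the techniques named above, is easy to illustrate with the classic argument that the square root of 2 is irrational. This worked example is mine, not drawn from any of the books listed:

```latex
\textbf{Claim.} $\sqrt{2}$ is irrational.

\textbf{Proof (by contradiction).} Suppose $\sqrt{2} = p/q$ with $p, q$
integers having no common factor. Squaring gives $p^2 = 2q^2$, so $p^2$ is
even and hence $p$ is even; write $p = 2k$. Then $4k^2 = 2q^2$, so
$q^2 = 2k^2$ is even and hence $q$ is even. Now $p$ and $q$ share the
factor $2$, contradicting the choice of $p/q$ in lowest terms. $\square$
```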
Schools bring the history of slavery to life “Learning from the past, understanding the present, building the future together” is the name of the research project conducted by students in three UNESCO Associated Schools (ASPnet) in Cuba, the Gambia and Spain, for the commemoration of the Transatlantic Slave Trade, using online platforms for blogging and file-sharing. The students conducted extensive research and exchange on the history and legacy of the slave trade, sharing documents, videos and writings on their common blogs from January to December of 2011 - the United Nations International Year for People of African Descent. The three schools (IES “Luis Seoane” in Pontevedra, Spain, the “Nusrat Senior Secondary School” in Gambia, and the IPVCE “Che Guevara” in Santa Clara, Cuba) continue to work together within the framework of the ASPnet Transatlantic Slave Trade Education Project. Two aspects of the slave trade received particular attention: how African slaves kept their culture alive in their new, foreign environment, and how gender roles evolved within those communities. The significant contributions of African culture to Latin American societies have recently received attention as has the gender perspective; research shows that women were the repositories of knowledge and culture within African slave communities. The students’ cross-disciplinary approach was also applied to the study of modern forms of slavery, especially human trafficking. Sites of Memory at each “point” of the triangle of the Transatlantic Slave Trade were a central topic of exchanges, as they present a tangible link between past and present. Connections were made between the Casa de Contratación in Seville, a registering point for ships sailing to and from the colonies; a fort on James Island in the Gambia used as a slave collecting point until 1820; and the Central Marcela Salado Lastra in Cuba, a major sugar refinery of the 18th and 19th centuries. Between 1492 and 1870, over a million captives were taken by the Spanish from Africa and brought to Central and Latin America through the triangular trade. This enormous population displacement and its legacy were the backdrop of this research project, with perspectives from each point of the Transatlantic Triangle brought together. The students were motivated by a sense of “duty to defend historical memory…and to increase knowledge about the Transatlantic Slave Trade as a human tragedy and the racism that resulted from it”, as indicated in their project statement, consistent with the ASPnet and UNESCO values: the defence of human rights and the importance of international cooperation. Tackling such difficult but rich topics head-on at a young age is an essential aspect of education for the remembrance of the Transatlantic Slave Trade. Through their work, these students have shown that mutual respect and the peaceful coexistence of people and cultures can take place not only between different countries but across three continents. Similar projects have been carried out around the world: the “Let’s Celebrate Africa” Festival and the “Connecting One’s Own History” project were conducted by secondary schools in Barbados; an international illustration competition was set up in France and a national writing competition organized in Norway. Entries submitted were in English, French, German, Norwegian and Spanish.
Inertial reaction forces (discussed in "The Origin of Inertia") are a commonplace of everyday life. When we push on stuff, it pushes back because of its inertial mass. Less common in everyday life are pronounced recoil forces -- a special type of inertial reaction force -- like those experienced when shooting a gun or stepping out of a small boat onto a dock. But we know that they're quite real. The generalization of those experiences is the realization that whenever some massive object ejects part of itself, since the ejected part carries away energy and momentum, the original object must experience a recoil force. It changes the object's momentum in such a way that the momentum of the two parts after ejection is the same as its momentum before ejection. That's just a convoluted way of saying momentum must be "conserved" in any "isolated system". When folks figured out that accelerating electric charges launch electromagnetic waves that move away from the source charges at the speed of light, carrying energy and momentum with them, they realized that the charges must experience a recoil force, that is, a force of "radiation reaction". Normally, electromagnetic radiation reaction forces are ridiculously small. For example, if you rigged up a radio antenna to put out a kilowatt of power in one direction, the reaction force on the antenna would be a fraction of a dyne (the weight of several fleas, roughly). So in almost all circumstances you can just ignore radiative reaction effects and pretend that they don't exist. But not always. In high energy elementary particle accelerators (like the ones at Fermilab or CERN) radiation reaction is an obvious fact of life. As the particles traveling at nearly the speed of light are bent into their circular paths by magnets, they are accelerated. And they radiate. The reaction force produced by the radiation slows the particles down unless power is applied to replace the radiated energy and momentum. Radiation reaction is rarely a major part of a formal course of study in physics. In part this is because it's only important in rather unusual circumstances. And in part it's a consequence of the fact that radiation reaction has some peculiar features. The problems with radiation reaction have been known for almost a century. They were already old when Feynman put an outstanding summary of them into chapter 28 of volume two of his Lectures on Physics, now nearly forty years ago. A fair amount has been written on radiation reaction since then, but the difficulties he described remain. Chief among those difficulties are problems with "causality" (causes always preceding effects) and seeming transient violations of the conservation of energy and momentum. Much of what I'll say here follows Feynman's discussion, so you may want to take a look at what he has to say for yourself. The specific problem with radiation reaction that we're going to be concerned with is: What happens to the mass of something as it's radiating? Eventually we'll be looking at this question in the context of gravity, but it turns out to be a problem in electrodynamics too. And since electrodynamics is well-studied, it's instructive to see how this all works in that case. The masses of things nowadays are known not to be due chiefly to their electromagnetic properties. But early in the century some folks thought it might be possible to explain mass and inertia electromagnetically, especially after Einstein showed that energy and mass are equivalent (yes, $E = mc^2$).
Two lines of argument led to this belief. First, if you view electrons (the simplest of all the "elementary particles") as being little extended spheres made up of some electrical dust, since each particle of dust must repel all of the other particles of dust (like charges repel), a lot of work (energy) must have been invested in assembling them. By Einstein's relationship, the assembly energy must have mass. The assembly energy depends on how big the sphere of charged dust is (as one over the radius in fact). So the energy goes up as the radius goes down. If the radius is exactly the "classical electron radius", about $10^{-13}$ centimeters, then the assembly energy is computed to be the observed electron mass. The other line of argument that suggests that the mass of electrons might be electromagnetic in origin is the fact that the electromagnetic field of a moving electron has momentum in it. When you calculate the momentum (see Feynman for details) it turns out that the coefficient of the velocity, the mass that is, is essentially the same as the expression for the mass you get in the assembly energy calculation. This looked fairly promising. But it didn't work out. The reasons aren't germane to our purpose. (The history of the subsequent developments in this business, however, is one of the heroic tales of the 20th century. As such, it has been recounted many times at all levels of sophistication. Of the popularizations I've seen, I especially like Crease and Mann's The Second Creation.) Although the masses of elementary particles are now known not to be attributable exclusively, or even chiefly, to electromagnetism, electromagnetism does make a contribution. And that turns out to be important in considering radiation reaction. We now ask: How does the launching of electromagnetic waves, radiation that is, produce the reaction force on electrons we know must be present if the conservation of energy and momentum are to be preserved? Well, if electrons are extended spheres of charged dust, then the electrical forces (and those that balance them) that act between the particles of the dust must take time to get across the distances separating them, according to special relativity theory. So when we push on one part of the electron, only later do the other parts detect the changes our push has produced and adjust for them. Because of these time delays, during accelerations the forces between the dust particles are unbalanced. The net force due to the imbalance turns out to be the force of radiation reaction. The unbalanced force disappears when no accelerating force is present, even if the electron is moving. Formally, following Feynman's notation, the self reaction force on an electron in the case of a simple acceleration in the x direction is:

$F = -\alpha\,\frac{e^2}{a c^2}\,\ddot{x} + \frac{2}{3}\frac{e^2}{c^3}\,\dddot{x}$ + higher order terms. (1)

$\alpha$ is a numerical factor of order one, e the electric charge of an electron, a the radius of the dust sphere, and each dot over the x's means differentiation with respect to time once. The higher order terms in this "series expansion" scale with positive powers of the radius. For any reasonable electron radius they are so small that they can be safely ignored. The first term on the right hand side of Equation (1) is just the electromagnetic mass times $\ddot{x}$ -- that is, the normal inertial reaction force if a is taken to be the classical electron radius. The second term is the one that accounts for radiation reaction forces. It has two noteworthy properties.
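As a quick sanity check on the "about $10^{-13}$ centimeters" figure, here is the standard back-of-the-envelope computation. This sketch is mine (Gaussian units, ignoring the order-one factor $\alpha$), not part of the original text:

```python
# Setting the electrostatic assembly energy e^2/r equal to the electron's
# rest energy m c^2 gives the classical electron radius r = e^2 / (m c^2).
e = 4.803e-10   # electron charge, statcoulombs
m = 9.109e-28   # electron mass, grams
c = 2.998e10    # speed of light, cm/s

r_classical = e**2 / (m * c**2)
print(f"classical electron radius ~ {r_classical:.2e} cm")  # about 2.8e-13 cm
```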
First, it doesn't depend on the size or shape of the electrical dust, telling us that it is independent of the self-energy business that presumably accounts for the inertial mass of the electron. Second, it depends on the third time derivative of position. This causes all sorts of trouble. It opens the way to "pre-accelerations" (accelerations that start before the force that causes them is applied) and "runaway solutions" (accelerations that continue after the applied force has been removed). Folks have developed some clever, if not entirely convincing, ways of trying to deal with these problems. We're going to ignore them. To investigate energy and momentum conservation in radiative processes we have to let the force act to produce a change in the energy. The rate at which the energy changes, the power that is, at any instant is just the force times the velocity $\dot{x}$. So, ignoring the higher order terms:

$P = F\dot{x} = -\alpha\,\frac{e^2}{a c^2}\,\ddot{x}\dot{x} + \frac{2}{3}\frac{e^2}{c^3}\,\dddot{x}\dot{x}$. (2)

Now $\ddot{x}\dot{x}$ is proportional to $d(\dot{x}^2)/dt$, so we see that the first term on the right hand side represents the change in the kinetic energy of the electron. To see the physical meaning of the radiation reaction term we rewrite it as:

$\frac{2}{3}\frac{e^2}{c^3}\,\dddot{x}\dot{x} = \frac{2}{3}\frac{e^2}{c^3}\left[\frac{d}{dt}\left(\ddot{x}\dot{x}\right) - \ddot{x}^2\right]$. (3)

When the minus sign is multiplied through, the $\ddot{x}^2$ term on the right hand side of Equation (3) is always positive. It represents the energy carried away by the radiation. The $d(\ddot{x}\dot{x})/dt$ term, however, is different. It can be either positive or negative. And for periodic motion of electrons, it averages to zero over time. As Feynman remarked, this term -- Equation (3) -- must be included in any account that hopes to conserve energy and momentum since we know with certainty that radiation carries energy and momentum away from accelerated charges. Radiation reaction is attended by some very thorny problems. I have already alluded to several of them, and Feynman discussed others. Arguably the nastiest problem associated with radiation reaction -- the problem that is directly related to transient mass fluctuations -- was not mentioned by Feynman. It is quite simple. It is known as a matter of fact that electric charges subjected to constant accelerations radiate electromagnetic waves, and the energy they carry away from their source charges is proportional to the square of the charges' acceleration. But when the acceleration of the radiating charges is constant, the time derivative of the acceleration vanishes, and with it the radiation reaction term in Equation (2) [i.e., Equation (3)] disappears too. So, our mathematical formalism tells us that during constant accelerations all of the work being done by the accelerating force goes into change of the kinetic energy of the charges. Nonetheless, they radiate energy too. We seem to be faced with an obvious violation of the conservation of energy here. This problem has been known since the early part of the century. A small, but substantial literature has grown up around it. [Two rather old, but very good papers that explore this problem are: Candelas and Sciama, "Is There a Quantum Equivalence Principle," in: Essays in Honor of Bryce DeWitt (Adam Hilger, 1984), pp. 78-88, and Fulton and Rohrlich, "Classical Radiation from a Uniformly Accelerated Charge," Annals of Physics, 9, 499-547 (1960).]
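The rewrite used in Equation (3) is just the product rule run in reverse; spelled out (a standard identity, added here for convenience):

```latex
\frac{d}{dt}\bigl(\ddot{x}\,\dot{x}\bigr) = \dddot{x}\,\dot{x} + \ddot{x}^{2}
\qquad\Longrightarrow\qquad
\dddot{x}\,\dot{x} = \frac{d}{dt}\bigl(\ddot{x}\,\dot{x}\bigr) - \ddot{x}^{2},
```

and multiplying through by $\frac{2}{3}e^2/c^3$ gives Equation (3).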
In addition to the apparent violation of the conservation of energy, this problem has attracted attention because it figures into discussions of the "equivalence principle" (the proposition that all things fall with the same acceleration in a gravity field, which is thus equivalent to an accelerated frame of reference), one of the cornerstones of general relativity theory. We're going to leave general relativity theory out of our considerations too. Energy (and momentum) conservation is problem enough by itself. The easiest way to see the details of this problem is to consider a sequence of accelerations of a charged particle and display graphically what's happening to all of the relevant quantities in time. The sequence of accelerations we examine is: a ramp up to a constant acceleration, a ramp down through zero to a constant deceleration, and a final ramp back to zero. We take the rates of change of the acceleration all to be the same and the intervals of the accelerations to result in causing the charge, starting from zero velocity, first to speed up, then slow down to zero velocity. During the periods of increasing and decreasing acceleration $\dddot{x}$ will be non-zero and constant. All of this is shown in Figure 1. (The scales for $\dot{x}$, $\ddot{x}$, and $\dddot{x}$ are arbitrary.) Next, using Figure 1, we plot in Figure 2 the rate at which energy is carried away by the radiation -- which is proportional to $\ddot{x}^2$ according to electrodynamics -- and the work done by the radiation reaction force -- which is proportional to $\dddot{x}\dot{x}$. Inspection of Figure 2 reveals the problem. If we consider the complete interval we know that the conservation of energy requires that the area under the $\ddot{x}^2$ curve be equal to minus the net area under the $\dddot{x}\dot{x}$ curve. Evidently, not only are the areas unequal, but instant-by-instant the energy flows don't balance anywhere during the process either. If we only consider the accelerating charge and the radiation, we have a sequence of transient energy conservation violations that some argue must balance out when averaged over time. To give you a sense of the seriousness of this problem, let me relate some of Fulton and Rohrlich's comments on it. "In the case of uniform acceleration, . . . , the total work done by the radiation reaction force vanishes. . . . The internal energy of the electron, . . . therefore, decreases while energy is being radiated. This result seems to lead to a very unphysical picture: The accelerated electron decreases its 'internal energy,' transforming it into radiation. Does this mean that the rest mass of the electron decreases? [They then do a little calculation.] Thus we obtain the comforting result that the change in internal energy of the particle does not affect its rest mass. Rather, the radiation energy is compensated by a decrease of that part of the field surrounding the charge, which does not escape to infinity (in the form of radiation) and which does not contribute to the (electromagnetic) mass of the particle." This is a pretty remarkable statement. Energies of all forms have an equivalent mass. And the energy that resides in a non-radiative field coupled to the particle can be expected to contribute to the measured mass of the particle, just as the energy in the non-radiative part of the electromagnetic field does. (That's why they specifically exclude the electromagnetic field as the source of the internal energy that goes into the radiation when the reaction force is absent. The energy in the "static" or "inductive" electric and magnetic fields is proportional to the square of the field strengths. So if energy is being drawn from these fields to feed the radiation, their field strengths go down.
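A quick numerical sketch (mine, not from the text or the cited papers) of the bookkeeping described around Figures 1 and 2 makes the mismatch easy to see. The acceleration profile below is my assumption, built to match the description: equal-magnitude jerks on the ramps and a charge that starts and ends at rest.

```python
import numpy as np

# Ramp up to a constant acceleration, ramp down through zero to a constant
# deceleration, ramp back to zero. Units are arbitrary, as in Figure 1.
t = np.linspace(0.0, 10.0, 100_001)
dt = t[1] - t[0]
acc = np.interp(t, [0, 1, 3, 5, 7, 8, 10], [0, 1, 1, -1, -1, 0, 0])  # x-ddot
vel = np.cumsum(acc) * dt                                            # x-dot
jerk = np.gradient(acc, t)                                           # x-dddot

k = 1.0  # stands in for (2/3) e^2 / c^3
radiated_power = k * acc**2        # proportional to x-ddot squared (Figure 2)
reaction_power = -k * jerk * vel   # work rate against the reaction force

print("total energy radiated:      ", radiated_power.sum() * dt)
print("total work against reaction:", reaction_power.sum() * dt)
# During the constant-acceleration stretches jerk == 0, so reaction_power is
# exactly zero there while radiated_power is not: the energy flows fail to
# balance instant by instant, which is the problem the text describes.
```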
And as they go down, the momentum in these fields goes down too. As a result, the mass of the electron goes down, contrary to expectation.) When you stop and think about this magical source of "acceleration energy", it doesn't sound very convincing. Without it, though, instantaneous violations of energy conservation would occur. Perhaps that's why Fulton and Rohrlich went on to say, "If the emerging physical picture seems unsatisfactory, one can reject the equations of motion. This is a possible alternative, but then the question of energy conservation simply cannot be answered, because the equations of motion are unknown." How can this problem be dealt with? You might think that if we could let the rest mass change during accelerations we could get the energy we need for the radiation during constant accelerations. Alas, this doesn't help, because in reducing the mass to get the energy for the radiation, for a steadily applied force the acceleration increases, leading to yet more radiated energy. If Equation (2) is correct, this must be true, for the radiation reaction term vanishes. So there's no way to divert some of the energy that goes into kinetic energy into the radiation. Another possibility, taking a cue from quantum vacuum fluctuations, is to assume that transient violations of energy conservation actually occur. In light of the fluctuation-dissipation theorem that links vacuum fluctuations and radiation reaction, this might seem at least plausible. The problem with this approach, of course, is that we're not talking about quantum scale phenomena here. These transient violations can be made rather large and long. So large and so long that they could be produced in a well-equipped laboratory no doubt. It's instructive to isolate the source of all the trouble here. That turns out to be the "little calculation" that led Fulton and Rohrlich to the conclusion that the rest mass of the accelerating electrically charged particle is constant. The way that they came to that conclusion was by noting that the "four-velocity" and the "four-acceleration" are "orthogonal". That is, the spacetime generalizations of the acceleration and velocity of an object are perpendicular to each other. (Although this, and much of what follows, is easily demonstrated, I will not spell out the details here. You can find them spelled out with crystal clarity in Wolfgang Rindler's outstanding little book, Introduction to Special Relativity [Oxford, 1991]. The relevant pages in the second edition are 58-60 and 90-93.) Since this is a simple kinematic relationship, it is always true. Now if the four-force (yes, the spacetime generalization of normal "three"-forces) points in the same direction as the four-acceleration, it turns out that restmass must be constant. This is how Fulton and Rohrlich came to the conclusion they did. The question one may pose here is: Do the four-force and four-acceleration have to always point in the same direction? Not necessarily. It is, of course, obvious that restmass is not, in general, a constant during accelerations. Any deformable object stressed by an accelerating force stores part of the work done by the force, as the acceleration increases, in the form of elastic "internal" energy. That changes the restmass of the body. When the accelerating force is removed, the stresses relax, the added internal energy disappears, and the object recovers its original restmass. 
The increased restmass is present, quite clearly, even if the acceleration the object experiences is constant, since the internal stress energy is stationary during constant accelerations. When the acceleration is changing, the internal energy (stresses) in the object will change so that the object does the requisite work on the accelerating agent. For example, as the accelerating force is removed, the transient stored internal energy present in the object during constant acceleration must be conveyed back to the agent. If something like this were going on during the acceleration of electrically charged particles, everything might be OK. The transient restmass increase, for a given applied force, will reduce the acceleration of the charge, decreasing both the amount of energy carried away by the radiation and the kinetic energy acquired by the charge during and after the acceleration. In effect, the increased restmass keeps all of the energy being delivered by the accelerating force from going into final-state kinetic energy so that some energy is available to feed the radiation field. When the charge is radiating, then, its restmass is greater than when it's in a state of inertial motion (i.e., unaccelerated). Since the net reaction force on an accelerating electron, given by Equation (1), was recovered by taking into account the time-delays across a presumed finite size of the charged dust that makes up the electron, we might guess that if we let the cloud of dust be squished by the acceleration, the (carefully chosen) squish might alter the restmass so as to make things work out alright. Sad to say, this doesn't work. For one thing, electrons are known to be very much smaller than the classical electron radius, so the model of a deformable electron is questionable to start out with. For another, setting aside the issue of the electron's self-energy as a function of radius, we know that Equations (1) and (2) are basically correct. They can't be easily fudged to get the desired restmass behavior that might solve our problems. This is easily shown by substituting the equivalent expression for the radiation reaction term in Equation (3) into Equation (2):

$P = -\alpha\,\frac{e^2}{a c^2}\,\ddot{x}\dot{x} + \frac{2}{3}\frac{e^2}{c^3}\left[\frac{d}{dt}\left(\ddot{x}\dot{x}\right) - \ddot{x}^2\right]$. (4)

If the sign of the $\ddot{x}^2$ term were reversed, so that the bracket didn't disappear when $\dddot{x}$ vanishes, we'd have the sort of behavior that might get rid of the transient energy (and restmass) conservation violations we're stuck with. This would be true if instead of the $\dddot{x}\dot{x}$ term in Equation (2) we had $d(\ddot{x}\dot{x})/dt + \ddot{x}^2$. Then our radiation reaction term would be $\frac{2}{3}\frac{e^2}{c^3}\left[\frac{d}{dt}\left(\ddot{x}\dot{x}\right) + \ddot{x}^2\right]$. And if we could assume the $d(\ddot{x}\dot{x})/dt$ term to be small (ideally, completely negligible), the radiation reaction term would mimic the radiated power. Alas, there seems to be no justification for such fudging except for preserving the conservation of energy. How, you may be wondering, have folks actually dealt with all this? Well, by simply asserting that energy and momentum must be conserved no matter what and making things work. Dirac was the person who did this. A modified version of Equation (4) is known as the Lorentz-Dirac equation. This has been much discussed, both in the journal literature and advanced textbooks on electrodynamics. Arguably the clearest presentation of how this is handled is to be found in sections 10 and 11 of chapter 21 of Panofsky and Phillips' classic text, Classical Electricity and Magnetism.
There one finds that when one creates the "covariant generalization" of the radiation reaction force term, some freedom in the equations allows you to stick in a term that ends up reducing the external force by just the amount needed to account for the radiation being emitted -- even during periods of constant acceleration -- without changing the restmass. Neat, huh? But keep in mind Fulton and Rohrlich's remarks about "acceleration energy". They were made in the context of the Lorentz-Dirac treatment of radiation reaction. I'm belaboring this business for a reason. When we consider gravity/inertia for accelerated stuff, as you might expect in view of the fact that in the lowest approximation the field equations are like those for electrodynamics, we'll find the same sort of higher order effects. (But instead of just the a² term, we'll get the a² term plus a d(a·v)/dt term, which we see is just a classical radiative reaction effect plus a d(a·v)/dt effect. It's worth noting that since a·v is proportional to the rate of change of kinetic energy, d(a·v)/dt, in general, will be proportional to the second time derivative of energy.) I'd like to be able to tell you that the gravity/inertia transient restmass effect could solve the instantaneous energy conservation problem here. But it doesn't. The reason why is the same as the one that louses up electromagnetic quantum vacuum fluctuation explanations of inertia: the charge to mass ratios for elementary particles aren't all the same. There is an important message to take away from all of this. It's that whenever you encounter effects that involve stuff that looks like radiation reaction, you should be prepared for apparent transient violations of energy and momentum conservation. I say apparent because the Wheeler-Feynman "absorber" interpretation of radiation reaction makes plain that "non-local" (i.e., retarded/advanced) interactions with distant matter not normally considered to be part of an appropriate "isolated system" take place. So prepared for peculiar possibilities, we're now ready to look at gravitational/inertial transient mass fluctuations. Copyright © 1998, James F. Woodward. This work, whole or in part, may not be reproduced by any means for material or financial gain without the written permission of the author.
The place value grid can be used to assist you in understanding and working with the decimal system. Decimals can also be written in expanded notation, using the same techniques as when expanding whole numbers.

Write 0.365 in expanded notation: 0.365 = 0.3 + 0.06 + 0.005.

Write 5.26 in expanded notation: 5.26 = 5 + 0.2 + 0.06.

To read a decimal or write a decimal in words, you start at the left and end with the place value of the last number on the right. Where a whole number is included, use the word "and" to show the position of the decimal point.

Read the number 0.75: seventy-five hundredths.

Read the number 45.321: forty-five and three hundred twenty-one thousandths.

Write two hundred and three tenths: 200.3.

If you want to compare decimals, that is, find out whether one decimal is greater than another, simply make sure that each decimal goes out to the same number of places to the right.

Which is greater, 0.37 or 0.365? 0.37 = 0.370, so you can align the two decimals. It is easy to see that 0.37 is greater. You are really comparing three hundred seventy thousandths to three hundred sixty-five thousandths.

Put the decimals 0.66, 0.6587, and 0.661 in order from largest to smallest. First, change each number to ten-thousandths by adding zeros where appropriate. Then align the decimal points to make the comparison. The order should be 0.661, 0.66, and 0.6587. You can also align the decimals first and then add the zeros.

Remember: The number of digits to the right of the decimal point does not determine the size of the number (0.5 is greater than 0.33).

The method for rounding decimals is almost identical to the method used for rounding whole numbers. Follow these steps to round off a decimal:
1. Underline the place value to which you're rounding.
2. Look to the immediate right (one place) of your underlined place value.
3. Identify the number (the one to the right). If it is 5 or higher, round your underlined place value up 1 and drop all the numbers to the right of your underlined number. If the number (the one to the right) is 4 or less, leave your underlined place value as it is and drop all the numbers to the right of your underlined number. (Note that you do not have to replace dropped digits with zeroes.)

Round off 0.478 to the nearest hundredth: 0.478 is rounded up to 0.48 (the digit to the right of the hundredths place is 8, which is 5 or higher).

Round off 5.3743 to the nearest thousandth: 5.3743 is rounded down to 5.374 (the digit to the right of the thousandths place is 3, which is 4 or less).
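For readers who like to check such exercises mechanically, here is a small Python sketch of the same ideas (expanded notation and 5-or-higher rounding); the function names are made up for illustration:

    from decimal import Decimal, ROUND_HALF_UP

    # Expanded notation: write '5.26' as '5 + 0.2 + 0.06'.
    def expanded_notation(s):
        whole, _, frac = s.partition(".")
        parts = [whole] if whole not in ("", "0") else []
        for i, digit in enumerate(frac, start=1):
            if digit != "0":
                parts.append("0." + "0" * (i - 1) + digit)
        return " + ".join(parts)

    # Rounding: 5 or higher rounds up, matching the underlining method above.
    def round_to(s, places):
        q = Decimal(1).scaleb(-places)   # places=2 gives 0.01
        return Decimal(s).quantize(q, rounding=ROUND_HALF_UP)

    print(expanded_notation("0.365"))   # 0.3 + 0.06 + 0.005
    print(expanded_notation("5.26"))    # 5 + 0.2 + 0.06
    print(round_to("0.478", 2))         # 0.48
    print(round_to("5.3743", 3))        # 5.374
    # Comparing decimals: Decimal handles the zero-padding for you.
    print(Decimal("0.37") > Decimal("0.365"))   # True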
- Falls are the second leading cause of accidental or unintentional injury deaths worldwide.
- Each year an estimated 424 000 individuals die from falls globally, of which over 80% are in low- and middle-income countries.
- Adults older than 65 suffer the greatest number of fatal falls.
- 37.3 million falls that are severe enough to require medical attention occur each year.
- Prevention strategies should emphasize education, training, creating safer environments, prioritizing fall-related research and establishing effective policies to reduce risk.

A fall is defined as an event which results in a person coming to rest inadvertently on the ground or floor or other lower level. Fall-related injuries may be fatal or non-fatal,1 though most are non-fatal. For example, among children in the People's Republic of China, for every death due to a fall, there are 4 cases of permanent disability, 13 cases requiring hospitalization for more than 10 days, 24 cases requiring hospitalization for 1–9 days and 690 cases seeking medical care or missing work/school.

The world population is rapidly ageing. Between 2000 and 2050, the proportion of the world's population over 60 years will double from about 11% to 22%. The absolute number of people aged 60 years and over is expected to increase from 605 million to 2 billion over the same period. It took more than 100 years for the share of France's population aged 65 or older to double from 7% to 14%. In contrast, it will take countries like Brazil and China less than 25 years to reach the same growth. The number of people aged 80 years or older will have almost quadrupled between 2000 and 2050, to 395 million. There is no historical precedent for a majority of middle-aged and older adults having living parents, as is already the case today. More children will know their grandparents and even their great-grandparents, especially their great-grandmothers. On average, women live six to eight years longer than men.

The functional capacity of an individual's biological system increases during the first years of life, reaches its peak in early adulthood and naturally declines thereafter. The rate of decline is determined, at least in part, by our lifestyle and environment throughout life. Factors include what we eat, how physically active we are and our exposure to health risks such as those caused by smoking, harmful consumption of alcohol, or exposure to toxic substances. Even in poor countries, most older people die of noncommunicable diseases such as heart disease, cancer and diabetes, rather than from infectious and parasitic diseases. In addition, older people often have several health problems, such as diabetes and heart disease, at the same time.

Around 6% of older people in developed countries have experienced some form of maltreatment at home. Abusive acts in institutions include physically restraining residents, depriving them of dignity (by for instance leaving them in soiled clothes) and intentionally providing insufficient care (such as allowing them to develop pressure sores). The maltreatment of older people can lead to serious physical injuries and long-term psychological consequences. The number of older people who are no longer able to look after themselves in developing countries is forecast to quadruple by 2050. Many of the very old lose their ability to live independently because of limited mobility, frailty or other physical or mental health problems.
Many require some form of long-term care, which can include home nursing, community care and assisted living, residential care and long stays in hospitals. The risk of dementia rises sharply with age, with an estimated 25–30% of people aged 85 or older having some degree of cognitive decline. Older people with dementia in low- and middle-income countries generally do not have access to the affordable long-term care their condition may warrant, and their families often do not have publicly funded support to help with care at home. When communities are displaced by natural disasters or armed conflict, older people may be unable to flee or travel long distances and may be left behind. Yet, in many situations they can also be a valuable resource for their communities as well as for the humanitarian aid process when they are involved as community leaders.

Globally, falls are a major public health problem. An estimated 424 000 fatal falls occur each year, making falls the second leading cause of unintentional injury death, after road traffic injuries. Over 80% of fall-related fatalities occur in low- and middle-income countries, with regions of the Western Pacific and South East Asia accounting for more than two thirds of these deaths. In all regions of the world, death rates are highest among adults over the age of 60 years.

Though not fatal, approximately 37.3 million falls severe enough to require medical attention occur each year. Such falls are responsible for over 17 million DALYs (disability-adjusted life years) lost.2 The largest morbidity occurs in people aged 65 years or older, young adults aged 15–29 years and children aged 15 years or younger. While nearly 40% of the total DALYs lost due to falls worldwide occurs in children, this measurement may not accurately reflect the impact of fall-related disabilities for older individuals, who have fewer life years to lose. In addition, individuals who fall and suffer a disability, particularly older people, are at major risk for subsequent long-term care and institutionalization.

The financial costs of fall-related injuries are substantial. For people aged 65 years or older, the average health system cost per fall injury in the Republic of Finland and Australia is US$ 3611 and US$ 1049 respectively. Evidence from Canada suggests that implementing effective prevention strategies, with a subsequent 20% reduction in the incidence of falls among children under 10, could create net savings of over US$ 120 million each year.

While all people who fall are at risk of injury, the age, gender and health of the individual can affect the type and severity of injury. Age is one of the key risk factors for falls. Older people have the highest risk of death or serious injury arising from a fall, and the risk increases with age. For example, in the United States of America, 20–30% of older people who fall suffer moderate to severe injuries such as bruises, hip fractures, or head traumas.
This risk level may be in part due to physical, sensory, and cognitive changes associated with ageing, in combination with environments that are not adapted for an aging population. Another high risk group is children. Childhood falls occur largely as a result of their evolving developmental stages, innate curiosity of their surroundings, and increasing levels of independence that coincide with more challenging behaviors commonly referred to as ‘risk taking’. While inadequate adult supervision is a commonly cited risk factor, the circumstances are often complex, interacting with poverty, sole parenthood, and particularly hazardous environments. Across all age groups and regions, both genders are at risk of falls. In some countries, it has been noted that males are more likely to die from a fall, while females suffer more non-fatal falls. Older women and younger children are especially prone to falls and increased injury severity. Worldwide, males consistently sustain higher death rates and DALYs lost. Possible explanations of the greater burden seen among males may include higher levels of risk-taking behaviours and hazards within occupations. Other risk factors include: - occupations at elevated heights or other hazardous working conditions; - alcohol or substance use; - socioeconomic factors including poverty, overcrowded housing, sole parenthood, young maternal age; - underlying medical conditions, such as neurological, cardiac or other disabling conditions; - side effects of medication, physical inactivity and loss of balance, particularly among older people; - poor mobility, cognition, and vision, particularly among those living in an institution, such as a nursing home or chronic care facility; - unsafe environments, particularly for those with poor balance and limited vision. Fall prevention strategies should be comprehensive and multifaceted. They should prioritize research and public health initiatives to further define the burden, explore variable risk factors and utilize effective prevention strategies. They should support policies that create safer environments and reduce risk factors. They should promote engineering to remove the potential for falls, the training of health care providers on evidence-based prevention strategies; and the education of individuals and communities to build risk awareness. Effective fall prevention programmes aim to reduce the number of people who fall, the rate of falls and the severity of injury should a fall occur. For older individuals, fall prevention programmes can include a number of components to identify and modify risk, such as: - screening within living environments for risks for falls; - clinical interventions to identify risk factors, such as medication review and modification, treatment of low blood pressure, Vitamin D and calcium supplementation, treatment of correctable visual impairment; - home assessment and environmental modification for those with known risk factors or a history of falling; - prescription of appropriate assistive devices to address physical and sensory impairments; - muscle strengthening and balance retraining prescribed by a trained health professional; - community-based group programmes which may incorporate fall prevention education and Tai Chi-type exercises or dynamic balance and strength training; - use of hip protectors for those at risk of a hip fracture due to a fall. 
For children, effective interventions include multifaceted community programmes; engineering modifications of nursery furniture, playground equipment, and other products; and legislation for the use of window guards. Other promising prevention strategies include: use of guard rails/gates, home visitation programmes, mass public education campaigns, and training of individuals and communities in appropriate acute pediatric medical care should a fall occur.

1. Within the WHO Global Burden of Disease database, fall-related deaths and non-fatal injuries exclude falls due to assault and self-harm; falls from animals, burning buildings, transport vehicles; and falls into fire, water and machinery.

2. The disability-adjusted life year (DALY) extends the concept of potential years of life lost due to premature death to include equivalent years of "healthy" life lost by virtue of being in states of poor health or disability.
Contains the full lesson along with a supporting toolkit, including teachers’ notes. Livestock species play a major role in economic development worldwide. Until recently selecting animals for breeding was done by personal judgement and observation of phenotypes or of pedigree data. In this way people have been guiding the evolution of domestic animals for about ten thousand years. This method has been quite successful but there have been some drawbacks; e.g. in dairy cattle increased milk yield has been accompanied by a decrease in reproductive efficiency. With the advent of genomics, our increased knowledge should lead to marked improvements in performance in both milk yield and reproductive efficiency, as well as many other features. Biological processes that influence performance The performance of cattle is influenced by a wide variety of factors and is a prime example of the nature versus nurture debate: a beef animal may have the genes for large muscles (nature) but if it is not fed properly (nurture) it will not have a high live-weight gain. Hormone levels are also important as they control the growth, development and productivity of all animals. Prolactin and oxytocin are important in the control of milk production, while growth hormone affects live-weight gain. Hormones are of course genetically determined. The use of artificial growth promoters is now banned in the European Union. If animals have a disease they will not perform at optimum levels and it has been shown that routinely dosing cattle with antibiotics increases their live-weight gain.
The 2014 national curriculum introduced a new subject, computing, which replaced ICT. Computing is concerned with how computers and computer systems work, and how they are designed and programmed. A high-quality computing education equips pupils to use computational thinking and creativity to understand and change the world.

Purpose of this statement
- To establish an entitlement for all pupils in the subject of Computing;
- To promote a shared understanding of the Computing curriculum;
- To establish expectations for teachers and pupils;
- To promote clarity, coherence and consistency in the teaching of Computing across the school;
- To explain how Computing is taught at Bewick Bridge Community Primary School;
- To give further guidance about resources available.

In Computing pupils are taught the principles of information and computation, how digital systems work, and how to put this knowledge to use through programming. Building on this knowledge and understanding, pupils are equipped to use information technology to create programs, systems and a range of content. Computing also ensures that pupils become digitally literate – able to use, and express themselves and develop their ideas through, information and communication technology – at a level suitable for the future workplace and as active participants in a digital world.

The role of programming in computer science is similar to that of practical work in the other sciences: it provides motivation, and a context within which ideas are brought to life. Information technology deals with applying computer systems to solve real-world problems. Computing is more than programming, but programming is an absolutely central process for Computing. In an educational context, programming encourages creativity, logical thought, precision and problem-solving, and helps foster the personal, learning and thinking skills required in the modern school curriculum.

- Understanding Technology
Children's natural curiosity has always driven them to develop an understanding of the world around them, and this is no different when it comes to understanding technology: both how it works and what it can do for us. From their first, early experiences with technology, pupils begin to make sense of how it works and the opportunities it can provide. Throughout their time in primary education, pupils now need to extend that understanding to include computer networks such as the Internet, and the services they can provide such as the World Wide Web. Teachers need to provide practical, fun experiences that allow pupils to make links with their existing understanding of the world around them. In doing so, pupils will become much more effective creators and users of digital content.

- Digital Literacy
Digital Literacy is the ability to effectively and critically navigate, find, evaluate, summarise, use, create and communicate information using a range of digital technologies. It deals with the appropriate use of technology-generated words, images, sounds and motion. Developing digital literacy is increasingly important because it supports learners to be confident and competent in their use of technology in a wide variety of contexts. The inter-related components of Digital Literacy can and should be developed alongside subject-specific knowledge and understanding. It may be useful to think of Digital Literacy as made up of several intertwining elements, with aspects of collecting and manipulating data and presenting information running throughout.
Your child's developing capability will benefit from experiencing a wide range of progressive learning experiences, with several of these areas linked together.

Computing at Bewick Bridge Community Primary School
At Bewick Bridge, Computing is integrated into the IPC, with one unit each year having a Computing focus, with learning goals which ensure coverage of the Computing curriculum, particularly the programming elements. The IPC provides purpose and context for pupils' learning in Computing. In addition to this, Digital Literacy and Understanding Technology are also taught in a variety of contexts, making links with other curriculum areas to develop skills in data collection and presentation, research skills and how to use the internet in different ways.

How can I help my child at home?
Computing is not just about using a computer. It also includes the use of tablets, game consoles, controllable toys, digital cameras and everyday equipment such as a tape recorder or DVD player. Children can be helped to develop their computing skills at home by:
· Sending an email to a friend
· Drawing a picture on screen
· Using the Internet to research a class topic
· Planning a route with a controllable toy
· Using interactive games
· Playing an educational app or web-based game.
Many woody plants and shrubs are affected by crown gall. Here's how to control it without using toxic sprays.

Crown gall is a common plant disease caused by the soil-borne bacterium Agrobacterium tumefaciens. It is found throughout the world and occurs on many woody shrubs and herbaceous plants, including grapes, raspberries, stone fruits and roses. Crown gall can be identified by the large, distorted growths that appear between the root and trunk of a plant, just above soil level. Plants with several galls may be unable to move water and nutrients up the trunk and become weakened, stunted and unproductive. Young plants can be killed by developing gall tissue.

The bacteria responsible for crown gall can persist in the soil for many years and are released when galls become saturated with moisture or as older galls decompose. Susceptible plants are infected through fresh wounds or abrasions, many of which are a result of pruning, freeze injury, soil insects, cultivation and other factors that may damage plants. Nursery stock is often infected through grafting and budding scars.

- Select resistant cultivars when possible and purchase plants from a reputable nursery.
- Do not buy plants that show signs of swelling or galling.
- When caring for susceptible plants, avoid injury or pruning wounds that may come in contact with the soil.
- Use Tree Wrap to protect against string trimmer damage and keep your garden tools clean.
- Provide winter protection with natural burlap so bark won't crack.
- In many cases, existing galls can be removed with a sharp pruning knife. Destroy the infected plant tissue and treat the wound with pruning sealer. If the plant does not recover, remove and destroy it.

Tip: To get rid of crown gall on roses, remove the infested plant and prune out gall tissue. Soak the entire root system and damaged areas for 15 minutes in a solution of 2 level Tbsp of Actinovate per 2-1/2 gallons of water. Replant in healthy soil, and apply 1/2 Tbsp per 2-1/2 gallons of water as a foliar spray at weekly intervals.

Photo Credit: The University of Georgia College of Agricultural and Environmental Sciences
This ready-to-use guide is completely aligned to the Common Core Standards for Language Arts for grades 6-8. These activities will enhance your students' understanding of this award-winning novel and encourage critical thinking, text analysis, and synthesis of literature and informational text. Included in this guide are --vocabulary activities that encourage students to use context to determine word meanings --engaging pre-reading activities that provide historical context --chapter discussion questions that range in depth and difficulty and that require students to cite textual evidence to support their answers --grammar activities that incorporate passages from the novel as examples --a challenging but thought-provoking Performance Task that requires students to use information from several different sources to create a final piece of writing As a classroom teacher, curriculum developer, and author, I have found that these guides are wonderful ways to not just cover the required standards, but engage students in lively and thoughtful discussions about the themes, characters, and surprises of literature.
Physical Science - States of Matter

We had some friends over to join us for our work period. I started by introducing the definition of "matter" as anything you could touch or that takes up space. I asked the kids for examples of things they could touch (the couch, the floor, a book) and we talked about how all of those things are made of matter. I showed them the matter tray with a container of water for liquid, air for gas, and a rock for solid. I passed the containers around so they could see the differences. Then we listened to this States of Matter song on YouTube.

Next, we talked about a couple of the properties of liquids. Liquids don't have a definite shape, and will take the shape of the container they are poured into. Liquids are a type of matter, so they do take up space. Liquids also have a definite volume. You can measure the volume of a liquid, and that volume will stay constant if you transfer the liquid to a new container.

I set up a water station at the table (with lots of towels - this would be better outside if it hadn't been so cold that day!). I had a large bucket of water in the middle of the table, and several trays around the table where kids could sit and experiment with pouring water into different shapes and sizes of containers. We were so involved in the activities that I didn't get any pictures (how did that happen??). Here are a few examples of things you could do:

- Measure the volume of a liquid, then pour it into a different container. See how the shape changes, but the volume stays the same.
- Pour liquids into containers with very different sizes or shapes.
- Play with different sized/shaped containers in the pool or bathtub.
- Use a water table to fill lots of containers with water. Practice pouring the liquid from one container to another to see how liquids flow and take the shape of their container.

See more Science activities here.
Explanation is one of the genres of text. Here are some examples along with the structure. Please read on.

Social Function: To explain the processes involved in the formation or working of natural or sociocultural phenomena.

Generic Structure:
- A general statement to position the reader
- A sequenced explanation of why or how something occurs.

Significant Lexicogrammatical Features:
- Focus on generic, non-human Participants
- Use mainly of Material Processes and Relational Processes
- Use mainly of Temporal and Causal Circumstances and Conjunctions

The United States of America is where the Venus's fly trap has its origins. The Venus's fly trap is a unique plant. It belongs to a group of plants called 'carnivorous plants'. These plants feed on insects. The Venus's fly trap has a special mechanism by which it traps its prey. This is how it works. At the end of each leaf, which grows from the base of a long, flowering stalk, there is a trap. The trap is made up of two lobes and is covered with short, reddish hairs which are sensitive. There are teeth-like structures around the edge of the lobes. The trap contains nectar which attracts insects. When an insect comes in contact with the nectar, the trap snaps shut. There are certain digestive juices inside the trap which digest the insect. It takes about ten days for a trapped insect to be digested. We can tell when this digestion is complete, for then the walls automatically open to wait for another victim.

There are two hundred species of carnivorous plants. Another well-known species is the pitcher plant. What differentiates this plant from the Venus's fly trap is the shape; the mechanism to catch insects is the same in both plants. The pitcher plant clings to other plants by means of tendrils. At one end of the tendril, there is a pitcher-shaped vessel with an open lid. The mouth and the lid of the pitcher contain glands which produce nectar to attract insects. When an insect settles on the nectar, the lid of the pitcher shuts, trapping its victim. The digestive juices inside the pitcher then begin to work.

The effects of acid soil

Soils with a pH of less than 7.0 are acid. The lower the pH, the more acid the soil. When soil pH falls below 5.5, plant growth is affected. Crop yields decrease, reducing productivity.

Soils provide water and nutrients for plant growth and development. Essential plant nutrients include phosphorus, nitrogen, potassium and sulfur. Plants require other elements, such as molybdenum, in smaller quantities. Some elements, e.g. aluminium and manganese, are toxic to plants.

Nutrients become available to plants when they are dissolved in water. Plants are able to take up phosphate, nitrate, potassium and sulfate ions in solution. The solubility of nutrients changes with pH. In acid soils (low pH), molybdenum becomes less soluble and aluminium becomes more soluble. Therefore, plant growth may be affected by either a deficiency of molybdenum or too much aluminium.

Both crop and pasture plants are affected by acid soils, and there may be a range of symptoms. Crops and pastures may be poorly established, resulting in patchy and uneven growth. Plant leaves may go yellow and die at the tips. The root system of the plant may be stunted. Crops may yield less. Plants vary in their sensitivity to low pH. Canola and lucerne are very sensitive to acid soils, so they do not grow well. Lupins and triticale are tolerant of soils with low pH, so they still perform well. Land can become unproductive if acid soil is left untreated.
Incorporating lime into the soil raises the pH. Therefore, liming soil can reverse the effects of acid soil on plants and return a paddock to productivity.
A container is divided into 2 parts by a partition containing a hole of radius r. Helium gas in the two parts is held at temperatures T1 = 75 K and T2 = 300 K respectively. After the system reaches a steady state, the mean free paths on each side are lambda1 and lambda2. What is the ratio lambda1/lambda2 when (a) r >> lambda1 and r >> lambda2, and (b) r << lambda1 and r << lambda2?

If r is much larger than the mean free path, then molecules close to the hole will suffer many collisions as they move a distance equal to the hole radius. If on the other side of the hole there is a vacuum, then the molecules near the hole will suffer more collisions from molecules in the direction away from the hole than from the direction of the hole. So, a molecule that initially passes alongside the hole, and would pass the hole if it didn't suffer any collision, would typically suffer many collisions as it traverses the length of the hole, and most of these collisions have the effect of pushing the molecule toward the hole. So, such a molecule will typically move through the hole and go to the other part of the container. We see, then, that the average velocity of molecules near the hole is not zero; it has a component in the direction of the hole. If the hole weren't there, then the gas molecules would have an isotropic ...
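For reference, here is a hedged sketch of the standard kinetic-theory answer, assuming ideal helium and the usual mean-free-path formula \lambda = 1/(\sqrt{2}\,n\sigma):

(a) When r \gg \lambda, the gas flows hydrodynamically through the hole, and in steady state the pressures equalize: P_1 = P_2, i.e. n_1 T_1 = n_2 T_2. Since \lambda \propto 1/n,

    \frac{\lambda_1}{\lambda_2} = \frac{n_2}{n_1} = \frac{T_1}{T_2} = \frac{75}{300} = \frac{1}{4}.

(b) When r \ll \lambda, molecules cross the hole independently (effusion), and in steady state the number fluxes n\bar{v}/4 balance, with \bar{v} \propto \sqrt{T}, so n_1\sqrt{T_1} = n_2\sqrt{T_2} and

    \frac{\lambda_1}{\lambda_2} = \frac{n_2}{n_1} = \sqrt{\frac{T_1}{T_2}} = \frac{1}{2}.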
DETAILED LESSON PLAN IN MAPEH

I. OBJECTIVES
a. To distinguish the different kinds of notes.
b. To read the notes located on the grand staff.
c. To locate the major scale, also called the octave.

II. SUBJECT MATTER: Fundamentals of Music / ACTIVE MAPEH 3, pp. 10-31

III. TEACHER'S AND STUDENTS' ACTIVITY

A. Daily Routine

B. Review - Last meeting we tackled body movements, isn't it, class? Let's have a short review. So what is movement? Yes... Very good.

C. Motivation - Who among you loves music? Well, it is good to know, class. I know you learned something when you were in your elementary days.

D. Lesson Proper - Our topic for today is all about the different kinds of notes and the major scale. So class, what are notes? Yes... Very good.
* Notes may have one, two, three or more parts, which are the head, the stem and the hook (or hooks).
* Notes have different shapes to indicate their exact value, for example their relative length or duration.
* Let's start with the whole note. And what is a whole note? Yes... Very good. And take note, class, that a whole note receives 4 beats.
* Next is the half note. Yes... And a half note receives 2 beats.
* Next is the quarter note, and what is a quarter note? Yes? That's right: a quarter note receives one beat.
* The next one is the eighth note, and what is an eighth note? And an eighth note receives... how many beats? Yes? Very good: one-half beat.
* The next one is the sixteenth note, and what is a sixteenth note? Yes? Very good, and a sixteenth note receives one-fourth of a beat.
* So class, did you notice something as an additional hook or shading is added to a note? What was it? Yes, John? Very good, well said, John.

E. Generalization (I will ask the students to draw the different kinds of notes on the blackboard and label their parts with their corresponding beats.)

F. Application
* Okay class, let's clap the beats of the...
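As a quick cross-check of the note values in this lesson, here is a small sketch (it assumes 4/4 time, where the quarter note gets one beat):

    from fractions import Fraction

    # Each note is worth half the beats of the one before it (4/4 time assumed).
    beats = {
        "whole": Fraction(4),
        "half": Fraction(2),
        "quarter": Fraction(1),
        "eighth": Fraction(1, 2),
        "sixteenth": Fraction(1, 4),
    }
    for name, value in beats.items():
        print("A", name, "note receives", value, "beat(s)")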
Scheuermann’s Kyphosis (Scheuermann’s Disease) — overview and treatment

Scheuermann's kyphosis, or Scheuermann's disease, is a pronounced outward curvature of the upper portion of the spine (thoracic spine). This is commonly referred to as a "hunchback" and is often found in teenagers and young adults who are still undergoing years of growth. While most cases of Scheuermann's disease do not cause pain, some more advanced cases can result in compressed-nerve pain due to the abnormal formation of the vertebrae in the thoracic spine (upper back). If you have been diagnosed with Scheuermann's disease and have not yet completed your spinal growth, you may find treatment options that can correct the curvature of your spine. However, if you are diagnosed with Scheuermann's disease after you have completed your spinal growth, you may find treatment options to relieve the pain, but you will not be able to readjust your spine without invasive surgery.

Causes and symptoms of Scheuermann's disease

The cause of Scheuermann's disease is still unknown, though researchers have found that it is due to an abnormality in the vertebrae of the spine. Essentially, during the adolescent period when the spine is still growing, the front part of a vertebra stops growing altogether. The back part of the same vertebra continues to grow, resulting in a curve in a vertebra that should be relatively straight. The normal curve allowance for vertebrae is between 20 degrees and 50 degrees. Anything curved more than 50 degrees is the sign of a spine condition, including Scheuermann's disease.

Patients with Scheuermann's disease typically do not experience pain or symptoms other than the curvature of the spine. However, some severe cases of Scheuermann's disease may result in pain from nerve compression. There are several nerve roots in the spinal cord that send signals to the extremities. If one of those nerve roots is compressed by an abnormally shaped vertebra caused by Scheuermann's disease, the patient might experience local pain at the site of the nerve root and radiating pain through the connected extremity.

Treatment options for Scheuermann's disease

For patients diagnosed with Scheuermann's disease who are not yet finished growing, a brace may be worn to help correct the overcurvature of the spine. This brace is called a Milwaukee brace and aims to restore the height and increase growth in the front of the vertebra to even out the curvature of the spine. This brace may be prescribed for one to two years, depending on the severity of the condition and the age of the patient.

If you have been diagnosed with Scheuermann's disease and you are past the age of spinal growth, there is no treatment that can fix the appearance of your spine other than highly invasive surgery, which is not performed at Laser Spine Institute. However, there are several conservative treatment options that may help alleviate the pain and symptoms you experience. Consult your physician to learn more about your condition and the treatment options available to you.
* Unlimited wants, limited resources
* Questions to answer:
1. What to produce?
2. How much to produce?
3. How to produce?
4. For whom to produce?
* Criteria to classify economic systems:
1. Productive resources owned by private individuals (private sector) or government (public sector)
2. Role of market forces of demand and supply in allocating resources, determining prices, distributing incomes
3. Role of government in production of goods and services and provision of infrastructure and welfare services to the community

* Market Economy
* Australia, USA, Japan, UK, Germany, France, Canada, Italy
* What to produce and how much to produce are determined by the operation of the price mechanism
* Consumer preferences and consumer sovereignty determine the pattern of production and the quantities of output produced
* How to produce is determined by the profit motive
* Producers will attempt to use the least-cost combination of resources in producing output
* To whom to distribute is determined by the distribution of factor incomes
* Those on higher incomes have a greater capacity to consume goods and services compared to those on lower/middle incomes
* Incomes determined by contribution to production as measured by marginal productivity
* People with higher levels of skill, education, experience and training tend to earn higher incomes

* Mixed Market Economy
* Private sector makes most of the economic decisions; governments also play a significant role in providing collective goods/services (health, education, transport) and redistributing income
* Government intervention:
* Regulator - establish and enforce a framework of law and order, and regulations which protect consumers and producers in market dealings
* Provider of collective goods and services, e.g. defense, health, education, transport and welfare
* Producer of goods and services the market will not provide, e.g.
public transport, health, education, defense, postal services
* Redistribute income through the system of taxation and welfare to ensure a more even distribution of income

* Planned Economy
* China, Cuba, North Korea
* Government planning to allocate resources according to priorities for production set by a state planning authority
* Government makes most of the production, distribution and exchange decisions
* Sets prices and incomes according to government priorities/goals
* What and how much to produce decided by government through a central planning agency
* Establish short, medium and long term plans which set out production priorities and production targets for priority industries:
* Output of raw materials
* Capital goods
* Defense equipment
* Government decides how resources are allocated
* How to produce determined by government
* Using estimates of resource balances and demand to allocate resources in sufficient quantities to meet output targets
* To whom to distribute determined by government attempting to share out production on the basis of need and areas of state priority
* High wages/salaries paid to workers in priority areas (military and heavy industry)
* Prices set by planners:
* Basic foodstuffs, water, housing, electricity often subsidized
* Health and education may be provided at zero cost

* Transitional Economy
* Transition from planned economic systems to market economic systems
* Result of:
* The election of democratic governments
* Desire of the majority of citizens to raise their incomes and standard of living by having more economic and political freedom
* Establishment of markets and prices determined by market forces of demand and supply
* Economies have opened up to foreign...
Exams and Tests for Pineal Tumor

The signs and symptoms of a brain tumor initially may be vague and come and go, making the diagnosis of a brain tumor difficult. Other diseases can cause similar signs and symptoms. Diagnosing a brain tumor involves several steps. The doctor may perform a neurologic exam, which among other things includes checking the patient's:
- coordination and reflexes

Depending on the results of the neurologic exam, the doctor may request one or more of these tests:

Computerized Tomography (CT) Scan
The CT scan uses a sophisticated X-ray machine linked to a computer to produce detailed, two-dimensional images of the brain. The patient lies still on a movable table, guided into what looks like an enormous doughnut where the images are taken. A special dye may be injected into the bloodstream after a few CT scans are taken. The dye helps make tumors more visible on X-rays. The CT scan generally takes less than 10 minutes.

Magnetic Resonance Imaging (MRI) Scan
The MRI scan uses magnetic fields and radio waves to generate images of the brain. The patient lies inside a cylindrical machine for 15 minutes to an hour. MRI scans are particularly useful in diagnosing brain tumors because they outline soft tissues of the body as well as bone. Sometimes a special dye is injected into the bloodstream during the procedure. The dye usually makes tumors easier to distinguish from healthy tissue.

Angiogram
An angiogram involves injecting a special dye into the bloodstream. The dye, which flows through the blood vessels in the brain, can be seen by X-ray. This test helps show the location of blood vessels in and around a brain tumor.

X-rays of the Head and Skull
An X-ray of the head may show alterations in skull bones that could indicate a tumor. It may show calcium deposits, which are sometimes associated with brain tumors. However, a routine X-ray is a far less sensitive test than brain scans and so is used less often.

Other Brain Scans
Other tests, such as magnetic resonance spectroscopy (MRS), single-photon emission computerized tomography (SPECT) or positron emission tomography (PET) scanning, help doctors gauge brain activity by studying brain metabolism and chemistry as well as blood flow within the brain. These scans can be combined with MRIs to help doctors understand the effects of a tumor on brain activity and function, but doctors don't typically use them to make an initial diagnosis of a brain tumor.

If the doctor sees what appears to be a brain tumor on a brain scan, especially if there are multiple tumors, he or she may test for cancer elsewhere in the patient's body before making a definitive diagnosis. Letting the doctor know of a prior history of cancer anywhere in the body, even many years earlier, is important.

Biopsy
The only test that can absolutely make a diagnosis of a brain tumor is a biopsy. This can be done as part of an operation to remove the tumor, or can be done in a separate procedure in which only a small sample of tissue is obtained. A needle biopsy may be used for brain tumors in hard-to-reach areas within the brain. The surgeon drills a small hole, called a burr hole, into the skull. A narrow, thin needle is then inserted through the hole. Tissue is removed using the needle, which is frequently guided by CT scanning. The tissue is then viewed under a microscope to determine if it is a tumor, and if so, what type of tumor. Additional tests on the tissue are often done to help determine the exact type of tumor, which may help in guiding treatment.
“HIV” stands for Human Immunodeficiency Virus. It is a virus that attacks and slowly takes over the immune system - our body’s natural defense against illness and disease. HIV has no vaccine, and no cure. Once someone is infected, they will have the virus for the rest of their life.

When someone is infected, HIV has three main impacts on their health:
- Immune cell “hijacking”: When HIV infects an immune cell, it “hijacks” that cell, turning it from a body defense cell into an HIV factory. This is how HIV reproduces to out-of-control levels.
- Immune system loss: As more immune cells are “hijacked” by HIV, there are fewer and fewer healthy immune cells to fight the HIV in the system – and other diseases. Also, healthy immune cells will attack those that are infected with HIV, killing them off. This leaves the body with much less immune system than it needs to stay healthy.
- AIDS: Over time, the body’s immune system becomes so badly damaged by the HIV virus that it can’t fight off more serious illnesses. This stage of HIV infection is known as “AIDS”.

How can someone tell if they have HIV? Are there symptoms?

The only way to know for sure if you’ve got HIV is to get a special blood test for HIV. Some people get symptoms when they’re newly infected - but not everybody. These symptoms are called “Seroconversion Illness”. Seroconversion Illness is caused by the body’s immune system reacting to HIV, and it can sometimes feel like a cold or flu. It can also include such things as swollen glands, a rash, headache, nausea and/or diarrhea. But any of these things could happen to someone for lots of reasons - not just HIV. So, if you’ve had unprotected sex or shared a needle, the only way to know for sure if you’ve been infected with HIV is to get a special blood test for the virus.

Unfortunately, it’s almost impossible to find out right after possible exposure to HIV whether someone has been infected. There is a time gap between when someone is exposed to HIV and when a blood test can tell if they’ve been infected. This is known as the “Window Period”. There are a few different HIV tests out there, and each has a different window period. The most common test across Canada can detect evidence of HIV in the blood between 3 and 12 weeks after someone’s been infected. However, there are some tests that are more sensitive and can find the virus sooner.

What is AIDS?

“AIDS” stands for Acquired Immune Deficiency Syndrome. Someone can be diagnosed with AIDS when they have both HIV and a serious illness. The AIDS stage often follows several years of HIV infection, when the body’s immune system has been so destroyed by HIV that it can’t fight more serious illnesses or diseases. Because illnesses can take over quickly when there is no immune system to fight them, someone in the AIDS stage can get very sick and die much more quickly than usual. However, with the right medical treatment at the right time, a person with an AIDS illness can often get rid of it and get better - but they will still have the HIV virus.
Financial Literacy: Essential Basics for Children

When teaching your children to manage their money you are helping your kids grow into financially savvy adults. You might even learn something about your own money habits along the way.

Children see money nearly every day, and as they become old enough to recognise the currency value on coins and notes they’ll want to start counting – just be mindful that very small children and coins don’t mix well. If you decide to give children pocket money or to pay them for doing age-appropriate chores, encourage saving by giving them a money-box. Get yourself a money-box as well, and each time your child puts money away, do so yourself – and vice-versa. It could be fun!

As they get older, open their own bank account. Explain how interest works and talk about their savings goals. If, for example, they want to buy a new bike, discuss how much it will cost and how much they will need to save each week. When your child is old enough, introduce them to their bank statement and point out any fees and charges.

Children often assume that ATMs supply unlimited cash. When making a withdrawal, show them the receipt and explain how the balance has reduced.

Most kids today have mobile phones. This popular object is a great opportunity to teach them about meeting financial obligations. Show them how to put aside money for bills, and how to allocate the remainder for savings and spending.

Part-time jobs are a standard way for teenagers to earn money. If you believe they are handling their money responsibly, you might consider a pre-paid “credit” card. These work similarly to a credit card except they use the owner’s money instead of credit, and are an excellent tool for learning how “plastic” works – when there’s no more left, there’s no more left. Learning early that plastic money is not limitless can avoid a lot of grief later in life. The Australian Securities and Investments Commission (ASIC) reports that the average Australian credit card holder owes just over $4,100 per card, on which they pay around $686 interest each year!

However, not all debt is bad; few people can buy a home without a mortgage. Your child’s first debt will likely be a car. It’s tempting to help financially, but you’ll probably do them a greater service by encouraging them to borrow. Not only will they earn their own credit history, they will understand the importance of borrowing, the effects of interest and price. If you decide to lend them money, establish a repayment schedule and be strict.

Teaching your kids good money habits early is a lasting gift. And as the line goes – if you ever think no-one cares about you, try missing a mortgage payment!
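A tiny sketch of the arithmetic behind two of the points above; the bike price and weekly amount are invented for illustration, while the card figures are the ASIC averages quoted in the text:

    # Savings goal: weeks needed to save for a (hypothetical) $240 bike at $8/week.
    bike_cost = 240.0
    weekly_saving = 8.0
    print(bike_cost / weekly_saving)  # 30.0 weeks

    # Rough annual interest rate implied by ASIC's average figures above.
    balance, yearly_interest = 4100.0, 686.0
    print(round(yearly_interest / balance * 100, 1), "% per year")  # ~16.7%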
Some potentially habitable planets began as gaseous, Neptune-like worlds Two phenomena known to inhibit the potential habitability of planets — tidal forces and vigorous stellar activity — might instead help chances for life on certain planets orbiting low-mass stars, University of Washington astronomers have found. In a paper published this month in the journal Astrobiology, UW doctoral student Rodrigo Luger and co-author Rory Barnes, research assistant professor, say the two forces could combine to transform uninhabitable “mini-Neptunes” — big planets in outer orbits with solid cores and thick hydrogen atmospheres — into closer-in, gas-free, potentially habitable worlds. Most of the stars in our galaxy are low-mass stars, also called M dwarfs. Smaller and dimmer than the sun, with close-in habitable zones, they make good targets for finding and studying potentially habitable planets. Astronomers expect to find many Earthlike and “super-Earth” planets in the habitable zones of these stars in coming years, so it’s important to know if they might indeed support life. Super-Earths are planets greater in mass than our own yet smaller than gas giants such as Neptune and Uranus. The habitable zone is that swath of space around a star that might allow liquid water on an orbiting rocky planet’s surface, perhaps giving life a chance. “There are many processes that are negligible on Earth but can affect the habitability of M dwarf planets,” Luger said. “Two important ones are strong tidal effects and vigorous stellar activity.” A tidal force is a star’s gravitational tug on an orbiting planet, and is stronger on the near side of the planet, facing the host star, than on the far side, since gravity weakens with distance. This pulling can stretch a world into an ellipsoidal or egg-like shape as well as possibly causing it to migrate closer to its star. “This is the reason we have ocean tides on Earth, as tidal forces from both the moon and the sun can tug on the oceans, creating a bulge that we experience as a high tide,” Luger said. “Luckily, on Earth it’s really only the water in the oceans that gets distorted, and only by a few feet. But close-in planets, like those in the habitable zones of M dwarfs, experience much stronger tidal forces.” This stretching causes friction in a planet’s interior that gives off huge amounts of energy. This can drive surface volcanism and in some cases even heat the planet into a runaway greenhouse state, boiling away its oceans, and all chance of habitability. Vigorous stellar activity also can destroy any chance for life on planets orbiting low-mass stars. M dwarfs are very bright when young and emit lots of high-energy X-rays and ultraviolet radiation that can heat a planet’s upper atmosphere, spawning strong winds that can erode the atmosphere away entirely. In a recent paper, Luger and Barnes showed that a planet’s entire surface water can be lost due to such stellar activity during the first few hundred million years following its formation. “But things aren’t necessarily as grim as they may sound,” Luger said. Using computer models, the co-authors found that tidal forces and atmospheric escape can sometimes shape planets that start out as mini-Neptunes into gas-free, potentially habitable worlds. How does this transformation happen? Mini-Neptunes typically form far from their host star, with ice molecules joining with hydrogen and helium gases in great quantity to form icy/rocky cores surrounded by massive gaseous atmospheres. 
“They are initially freezing cold, inhospitable worlds,” Luger said. “But planets need not always remain in place. Alongside other processes, tidal forces can induce inward planet migration.” This process can bring mini-Neptunes into their host star’s habitable zone, where they are exposed to much higher levels of X-ray and ultraviolet radiation. This can in turn lead to rapid loss of the atmospheric gases to space, sometimes leaving behind a hydrogen-free, rocky world smack dab in the habitable zone. The co-authors call such planets “habitable evaporated cores.” “Such a planet is likely to have abundant surface water, since its core is rich in water ice,” Luger said. “Once in the habitable zone, this ice can melt and form oceans,” perhaps leading to life. Barnes and Luger note that many other conditions would have to be met for such planets to be habitable. One is the development of an atmosphere right for creating and recycling nutrients globally. Another is simple timing. If hydrogen and helium loss is too slow while a planet is forming, a gaseous envelope would prevail and a rocky, terrestrial world may not form. If the world loses hydrogen too quickly, a runaway greenhouse state could result, with all water lost to space. “The bottom line is that this process — the transformation of a mini-Neptune into an Earthlike world — could be a pathway to the formation of habitable worlds around M dwarf stars,” Luger said. Will they truly be habitable? That remains for future research to learn, Luger said. “Either way, these evaporated cores are probably lurking out there in the habitable zones of these stars, and many may be discovered in the coming years.” Luger is lead author of the paper, with Barnes and Victoria Meadows his UW co-authors. Other co-authors are E. Lopez and Jonathan Fortney of the University of California, Santa Cruz, and Brian Jackson of Boise State University. The research was done through the Virtual Planetary Laboratory, a UW-based interdisciplinary research group, and funded through the NASA Astrobiology Institute under Cooperative Agreement Number NNA13AA93A .
the law of demand, individual and market demand, the demand curve

the factors affecting demand - price, income, population, tastes, prices of substitutes and complements, expected future prices

movements along the demand curve and shifts of the demand curve

You will learn to:

Examine economic issues
examine the forces in an economy that tend to cause prices to rise

Apply economic skills
graph a demand curve and predict the impact of changes in demand on equilibrium
calculate the price elasticity of demand using the total outlay method

Traditional economics sets out some basic premises. Let's think these through and see where they are happening in our everyday lives.

Demand is a function of price, and consumers will generally buy less of an expensive product. There are a variety of other factors that influence people's choices for consumption, e.g. income, population, tastes, prices of substitutes and complements, expected future prices. These factors cause the whole demand curve to shift left or right.

Consumers will not react to price changes in some products for particular reasons. This is referred to as inelastic demand. Consumers will react immediately to price changes in some products for particular reasons. This is referred to as elastic demand.

Inflation describes when the whole economy experiences price rises. Too much money in an economy causes inflation.

To decide what price to charge, a firm needs information about demand: how much potential consumers are willing to pay for its product. Figure 7.3 shows the demand curve (the curve that gives the quantity consumers will buy at each possible price) for Apple-Cinnamon Cheerios, a ready-to-eat breakfast cereal introduced by the company General Mills in 1989. In 1996, Jerry Hausman, an economist, used data on weekly sales of family breakfast cereals in US cities to estimate how the weekly quantity of cereal that customers in a typical city would wish to buy would vary with its price per pound (there are 2.2 pounds in 1 kg). For example, you can see from Figure 7.3 that if the price were $3, customers would demand 25,000 pounds of Apple-Cinnamon Cheerios. For most products, the lower the price, the more customers wish to buy. Click on the link above to find out more.

If you were the manager at General Mills, how would you choose the price for Apple-Cinnamon Cheerios in this city, and how many pounds of cereal would you produce? You need to consider how the decision will affect your profits (the difference between sales revenue and production costs). Suppose that the unit cost (the cost of producing each pound) of Apple-Cinnamon Cheerios is $2. To maximize your profit, you should produce exactly the quantity you expect to sell, and no more. Then revenue, costs, and profit are given by:

Total Costs = Unit Cost × Quantity = 2 × Q
Total Revenue = Price × Quantity = P × Q
Profit = Total Revenue − Total Costs = (P × Q) − (2 × Q)

So we have a formula for profit:

Profit = (P − 2) × Q

Using this formula, you could calculate the profit for any choice of price and quantity and draw the isoprofit curves, as in Figure 7.4. Just as indifference curves join points in a diagram that give the same level of utility, isoprofit curves join points that give the same level of total profit. We can think of the isoprofit curves as the firm's indifference curves: the firm is indifferent between combinations of price and quantity that give you the same profit.

Want to know more? Head to this link to read and answer some demand-ing questions!
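Here is a sketch of the manager's problem in code. The linear demand line is only a stand-in for Hausman's estimated curve: it is calibrated so that a $3 price gives the 25,000 pounds quoted above, but the slope is an assumption; the $2 unit cost is from the text:

    # Hypothetical linear demand Q = a - b*P, calibrated so Q(3) = 25,000.
    a, b = 55_000, 10_000   # slope b is assumed, not Hausman's estimate
    unit_cost = 2.0         # $2 per pound, from the text

    def profit(price):
        quantity = a - b * price
        return (price - unit_cost) * quantity   # Profit = (P - 2) x Q

    # For linear demand, profit peaks at P* = (a + b * unit_cost) / (2b).
    best_price = (a + b * unit_cost) / (2 * b)
    print(best_price)             # 3.75 dollars per pound
    print(a - b * best_price)     # 17,500 pounds
    print(profit(best_price))     # 30,625 dollars of profit

On these made-up numbers the profit-maximizing price is $3.75; with a different assumed slope the numbers change, but the method of tracing isoprofit curves against the demand curve is the same.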
You know it's there: when a price goes up you generally buy less of a product and, conversely, when a price goes down, you will go and buy more. But it's hard to find a real demand curve. Listen to this podcast from Freakonomics and hear how Uber has provided economists with enough data to nail a demand curve down. In your first assessment task you are researching factors around consumers' spending and saving patterns. This information will help you develop a rationale and move ahead with your research.

Price Elasticity - how quickly will consumers react when you change a price? One of the critical elements of pricing is understanding what economists call price elasticity. Reference: https://hbr.org/2015/08/a-refresher-on-price-elasticity

What is price elasticity? Most customers in most markets are sensitive to the price of a product or service, and the assumption is that more people will buy the product or service if it's cheaper and fewer will buy it if it's more expensive. But the phenomenon is more quantifiable than that, and price elasticity shows exactly how responsive customer demand is for a product based on its price. “Marketers need to understand how elastic, sensitive to fluctuations in price, or inelastic, largely ambivalent about price changes, their products are when contemplating how to set or change a price,” says Avery. “Some products have a much more immediate and dramatic response to price changes, usually because they’re considered nice-to-have or non-essential, or because there are many substitutes available,” explains Avery. Take, for example, beef. When the price dramatically increases, demand may go way down because people can easily substitute chicken or pork.

How is it calculated? This is the formula for price elasticity of demand:

Price elasticity of demand = % change in quantity demanded ÷ % change in price

Let's look at an example. Say that a clothing company raised the price of one of its coats from $100 to $120. The price increase is ($120 − $100)/$100, or 20%. Now let's say that the increase caused a decrease in the quantity sold from 1,000 coats to 900 coats. The percentage decrease in demand is −10%. Plugging those numbers into the formula, you'd get a price elasticity of demand of −0.10/0.20 = −0.5, or 0.5 in absolute value.

Products and services can be:
- Perfectly elastic, where any very small change in price results in a very large change in the quantity demanded. Products that fall in this category are mostly “pure commodities,” says Avery. “There’s no brand, no product differentiation, and customers have no meaningful attachment to the product.”
- Relatively elastic, where small changes in price cause large changes in quantity demanded (the result of the formula is greater than 1). Beef, as discussed above, is an example of a product that is relatively elastic.
- Unit elastic, where any change in price is matched by an equal change in quantity (where the number is equal to 1).
- Relatively inelastic, where large changes in price cause small changes in demand (the number is less than 1). Gasoline is a good example here because most people need it, so even when prices go up, demand doesn't change greatly. Also, “products with stronger brands tend to be more inelastic, which makes building brand equity a good investment,” says Avery.
- Perfectly inelastic, where the quantity demanded does not change when the price changes. Products in this category are things consumers absolutely need and there are no other options from which to obtain them. “We tend to see this only in cases where a firm has a monopoly on the demand.
Even if I change my price, you still have to buy from me,” explains Avery.
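A minimal sketch of the elasticity arithmetic above, using the simple percentage-change method from the HBR example (the coat numbers are the ones in the text):

```python
def price_elasticity(p0, p1, q0, q1):
    """Price elasticity of demand: % change in quantity demanded / % change in price."""
    pct_change_q = (q1 - q0) / q0
    pct_change_p = (p1 - p0) / p0
    return pct_change_q / pct_change_p

# Coat example: price rises from $100 to $120, sales fall from 1,000 to 900.
e = price_elasticity(100, 120, 1000, 900)
print(e)              # -0.5
print(abs(e) < 1)     # True -> relatively inelastic by the classification above
```

One design note: economists often prefer the midpoint (arc) method, which divides the changes by the averages of the two prices and quantities so the answer does not depend on which direction the price moved; the simpler method shown here matches the worked example above.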
English Worksheet: Practice Capital Letter R - Uppercase Letter Tracing. Practice capital letter R with these alphabet tracing worksheets for preschoolers, kindergarten and Class 1. Use this worksheet to practice uppercase letter R tracing and learn R for Rabbit. Kids can practice their letter formation and handwriting skills by tracing all of the capital letters in the English alphabet.
Herd immunity and COVID-19: What you need to know

Understand what's known about herd immunity and what it means for COVID-19. By Mayo Clinic Staff

Curious about progress toward herd immunity against coronavirus disease 2019 (COVID-19)? Understand how herd immunity works, its role in ending the COVID-19 pandemic and the challenges involved.

Why is herd immunity important? Herd immunity occurs when a large portion of a community (the herd) becomes immune to a disease. The spread of disease from person to person becomes unlikely when herd immunity is achieved. As a result, the whole community becomes protected, not just those who are immune. Often, a certain percentage of the population must be capable of getting a disease in order for it to spread; this is called a threshold proportion. If the proportion of the population that is immune to the disease is greater than this threshold, the spread of the disease will decline. This is known as the herd immunity threshold.

What percentage of a community needs to be immune in order to achieve herd immunity? It varies from disease to disease. The more contagious a disease is, the greater the proportion of the population that needs to be immune to the disease to stop its spread. For example, measles is a highly contagious illness. It's estimated that 94% of the population must be immune to interrupt the chain of transmission.

How is herd immunity achieved? Herd immunity can be reached when enough people in the population have recovered from a disease and have developed protective antibodies against future infection. However, experts now believe it'll likely be difficult to achieve herd immunity for COVID-19. Getting COVID-19 offers some natural protection or immunity from reinfection with the virus that causes COVID-19. It's estimated that getting COVID-19 and COVID-19 vaccination both result in a low risk of another infection with a similar variant for at least six months. But because reinfection is possible and COVID-19 can cause severe medical complications, it's recommended that people who have already had COVID-19 get a COVID-19 vaccine. In addition, COVID-19 vaccination might offer better protection than getting sick with COVID-19. A recent study showed that unvaccinated people who already had COVID-19 are more than twice as likely as fully vaccinated people to be reinfected with COVID-19. Recent research also suggests that people who got COVID-19 in 2020 and then received mRNA vaccines produce very high levels of antibodies that are likely effective against current and, possibly, future variants. Some scientists call this hybrid immunity. Further research is needed.

There are some major problems with relying on community infection to create herd immunity to the virus that causes COVID-19:
- Reinfection. It's estimated that getting COVID-19 results in a low risk of another infection with a similar variant for at least six months. However, even if you have antibodies, you could get COVID-19 again. Because reinfection can cause severe medical complications, it's recommended that people who have already had COVID-19 get a COVID-19 vaccine.
- Health impact. Infection with the COVID-19 virus could lead to serious complications and millions of deaths, especially among older people and those who have existing health conditions. The health care system could quickly become overwhelmed.

Herd immunity also can be reached when enough people have been vaccinated against a disease and have developed protective antibodies against future infection.
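The threshold idea can be made concrete with a standard textbook approximation that the article itself does not spell out: the herd immunity threshold is roughly 1 − 1/R0, where R0 (the basic reproduction number) is the average number of people one infected person infects in a fully susceptible population. A minimal sketch, with illustrative R0 values:

```python
def herd_immunity_threshold(r0):
    """Classic approximation: immune fraction needed so each case infects fewer than one other person."""
    return 1 - 1 / r0

# R0 values here are illustrative; real estimates vary by variant and setting.
for disease, r0 in [("measles", 15), ("early COVID-19 strain", 3)]:
    print(f"{disease}: R0 ~ {r0} -> threshold ~ {herd_immunity_threshold(r0):.0%}")
# measles: R0 ~ 15 -> threshold ~ 93%  (close to the 94% figure quoted above)
# early COVID-19 strain: R0 ~ 3 -> threshold ~ 67%
```

Vaccination is the other route to pushing a population past that threshold, as the article explains next.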
Unlike the natural infection method, vaccines create immunity without causing illness or resulting complications. Using the concept of herd immunity, vaccines have successfully controlled contagious diseases such as smallpox, polio, diphtheria, rubella and many others. Herd immunity makes it possible to protect the population from a disease, including those who can't be vaccinated, such as newborns or those who have compromised immune systems. The U.S. Food and Drug Administration has approved two COVID-19 vaccines and given emergency use authorization to a handful of COVID-19 vaccines. But reaching herd immunity through vaccination against COVID-19 will likely be difficult for many reasons. For example: - Vaccine hesitancy. Some people object to getting a COVID-19 vaccine because of religious objections, fears about the possible risks or skepticism about the benefits. If the proportion of vaccinated people in a community is below the herd immunity threshold, a contagious disease could continue to spread. - Protection questions. Research suggests that COVID-19 vaccination results in a low risk of infection with the COVID-19 virus for at least six months. However, although COVID-19 vaccines are effective in preventing severe illness from current and possibly future variants, people who are vaccinated and up to date on their vaccines may still get breakthrough infections and spread the virus to others. - Uneven vaccine access. The distribution of COVID-19 vaccines has greatly varied among and within countries. If one community achieves a high COVID-19 vaccination rate and surrounding areas don't, outbreaks can occur if the populations mix. What's the outlook for achieving herd immunity in the U.S.? Given the challenges, it's unclear if herd immunity to the virus that causes COVID-19 will be reached. However, the number of fully vaccinated adults continues to rise. In addition, more than 80 million people in the U.S. have had confirmed infections with the COVID-19 virus — though, again, it's not clear how long immunity lasts after infection. Even if it isn't currently possible to stop transmission of the COVID-19 virus, the FDA-approved and FDA-authorized COVID-19 vaccines are highly effective at protecting against severe illness requiring hospitalization and death due to COVID-19. The vaccines are allowing people to better be able to live with the virus. How can you slow the transmission of COVID-19? There are steps you can take to reduce your risk of infection. When possible, get a COVID-19 vaccine. Also stay up to date with COVID-19 vaccines, including getting recommended booster doses, to prevent serious illness. You're considered up to date with your vaccines if you've gotten all recommended COVID-19 vaccines, including booster doses, when you become eligible. If you're up to date with your vaccines, you can more safely return to doing activities that you might not have been able to do because of the pandemic. However, if you are in an area with a high number of people with COVID-19 in the hospital and new COVID-19 cases, the CDC recommends wearing a mask indoors in public. The CDC recommends following these precautions: - Avoid close contact (within about 6 feet, or 2 meters) with anyone who is sick or has symptoms. - Keep distance between yourself and others (within about 6 feet, or 2 meters), when you're in indoor public spaces if you're not fully vaccinated. This is especially important if you have a higher risk of serious illness. 
Keep in mind some people may have COVID-19 and spread it to others, even if they don't have symptoms or don't know they have COVID-19.
- Avoid crowds and indoor places that have poor air flow (ventilation).
- Wash your hands often with soap and water for at least 20 seconds, or use an alcohol-based hand sanitizer that contains at least 60% alcohol.
- Wear a face mask in indoor public spaces if you're in an area with a high number of people with COVID-19 in the hospital and new COVID-19 cases, whether or not you're vaccinated. The CDC recommends wearing the most protective mask possible that you'll wear regularly, that fits well and is comfortable.
- Cover your mouth and nose with your elbow or a tissue when you cough or sneeze. Throw away the used tissue. Wash your hands right away.
- Avoid touching your eyes, nose and mouth.
- Avoid sharing dishes, glasses, bedding and other household items if you're sick.
- Clean and disinfect high-touch surfaces, such as doorknobs, light switches, electronics and counters, regularly.
- Stay home from work, school and public areas and stay home in isolation if you're sick, unless you're going to get medical care. Avoid taking public transportation, taxis and ride-hailing services if you're sick.

If you have a chronic medical condition and may have a higher risk of serious illness, check with your health care provider about other ways to protect yourself.

April 20, 2022
THE surrender of Confederate Gen. Robert E. Lee at Appomattox Court House, 150 years ago next month, effectively ended the Civil War. Preoccupied with the challenges of our own time, Americans will probably devote little attention to the sesquicentennial of Reconstruction, the turbulent era that followed the conflict. This is unfortunate, for if any historical period deserves the label “relevant,” it is Reconstruction. Issues that agitate American politics today — access to citizenship and voting rights, the relative powers of the national and state governments, the relationship between political and economic democracy, the proper response to terrorism — all of these are Reconstruction questions. But that era has long been misunderstood.

Reconstruction refers to the period, generally dated from 1865 to 1877, during which the nation’s laws and Constitution were rewritten to guarantee the basic rights of the former slaves, and biracial governments came to power throughout the defeated Confederacy. For decades, these years were widely seen as the nadir in the saga of American democracy. According to this view, Radical Republicans in Congress, bent on punishing defeated Confederates, established corrupt Southern governments presided over by carpetbaggers (unscrupulous Northerners who ventured south to reap the spoils of office), scalawags (Southern whites who supported the new regimes) and freed African-Americans, unfit to exercise democratic rights. The heroes of the story were the self-styled Redeemers, who restored white supremacy to the South.

This portrait, which received scholarly expression in the early-20th-century works of William A. Dunning and his students at Columbia University, was popularized by the 1915 film “The Birth of a Nation” and by Claude Bowers’s 1929 best-selling history, “The Tragic Era.” It provided an intellectual foundation for the system of segregation and black disenfranchisement that followed Reconstruction. Any effort to restore the rights of Southern blacks, it implied, would lead to a repeat of the alleged horrors of Reconstruction.

HISTORIANS have long since rejected this lurid account, although it retains a stubborn hold on the popular imagination. Today, scholars believe that if the era was “tragic,” it was not because Reconstruction was attempted but because it failed.

Reconstruction actually began in December 1863, when Abraham Lincoln announced a plan to establish governments in the South loyal to the Union. Lincoln granted amnesty to most Confederates so long as they accepted the abolition of slavery, but said nothing about rights for freed blacks. Rather than a blueprint for the postwar South, this was a war measure, an effort to detach whites from the Confederacy. On Reconstruction, as on other questions, Lincoln’s ideas evolved. At the end of his life, he called for limited black suffrage in the postwar South, singling out the “very intelligent” (prewar free blacks) and “those who serve our cause as soldiers” as most worthy.

Lincoln did not live to preside over Reconstruction. That task fell to his successor, Andrew Johnson. Once lionized as a heroic defender of the Constitution against Radical Republicans, Johnson today is viewed by historians as one of the worst presidents to occupy the White House. He was incorrigibly racist, unwilling to listen to criticism and unable to work with Congress. Johnson set up new Southern governments controlled by ex-Confederates.
They quickly enacted the Black Codes, laws that severely limited the freed people’s rights and sought, through vagrancy regulations, to force them back to work on the plantations. But these measures aroused bitter protests among blacks, and convinced Northerners that the white South was trying to restore slavery in all but name.

There followed a momentous political clash, the struggle between Johnson and the Republican majority (not just the Radicals) in Congress. Over Johnson’s veto, Congress enacted one of the most important laws in American history, the Civil Rights Act of 1866, still on the books today. It affirmed the citizenship of everyone born in the United States, regardless of race (except Indians, still considered members of tribal sovereignties). This principle, birthright citizenship, is increasingly rare in today’s world and deeply contested in our own contemporary politics, because it applies to the American-born children of undocumented immigrants. The act went on to mandate that all citizens enjoy basic civil rights in the same manner “enjoyed by white persons.”

Johnson’s veto message denounced the law for what today is called reverse discrimination: “The distinction of race and color is by the bill made to operate in favor of the colored and against the white race.” Indeed, in the idea that expanding the rights of nonwhites somehow punishes the white majority, the ghost of Andrew Johnson still haunts our discussions of race.

Soon after, Congress incorporated birthright citizenship and legal equality into the Constitution via the 14th Amendment. In recent decades, the courts have used this amendment to expand the legal rights of numerous groups — most recently, gay men and women. As the Republican editor George William Curtis wrote, the 14th Amendment changed a Constitution “for white men” to one “for mankind.” It also marked a significant change in the federal balance of power, empowering the national government to protect the rights of citizens against violations by the states.

In 1867 Congress passed the Reconstruction Acts, again over Johnson’s veto. These set in motion the establishment of new governments in the South, empowered Southern black men to vote and temporarily barred several thousand leading Confederates from the ballot. Soon after, the 15th Amendment extended black male suffrage to the entire nation. The Reconstruction Acts inaugurated the period of Radical Reconstruction, when a politically mobilized black community, with its white allies, brought the Republican Party to power throughout the South. For the first time, African-Americans voted in large numbers and held public office at every level of government. It was a remarkable, unprecedented effort to build an interracial democracy on the ashes of slavery.

Most offices remained in the hands of white Republicans. But the advent of African-Americans in positions of political power aroused bitter hostility from Reconstruction’s opponents. They spread another myth — that the new officials were propertyless, illiterate and incompetent. As late as 1947, the Southern historian E. Merton Coulter wrote that of the various aspects of Reconstruction, black officeholding was “longest to be remembered, shuddered at, and execrated.”

There was corruption in the postwar South, although given the scandals of New York’s Tweed Ring and President Ulysses S.
Grant’s administration, black suffrage could hardly be blamed. In fact, the new governments had a solid record of accomplishment. They established the South’s first state-funded public school systems, sought to strengthen the bargaining power of plantation laborers, made taxation more equitable and outlawed racial discrimination in transportation and public accommodations. They offered aid to railroads and other enterprises in the hope of creating a New South whose economic expansion would benefit black and white alike.

Reconstruction also made possible the consolidation of black families, so often divided by sale during slavery, and the establishment of the independent black church as the core institution of the emerging black community. But the failure to respond to the former slaves’ desire for land left most with no choice but to work for their former owners.

It was not economic dependency, however, but widespread violence, coupled with a Northern retreat from the ideal of equality, that doomed Reconstruction. The Ku Klux Klan and kindred groups began a campaign of murder, assault and arson that can only be described as homegrown American terrorism. Meanwhile, as the Northern Republican Party became more conservative, Reconstruction came to be seen as a misguided attempt to uplift the lower classes of society.

One by one, the Reconstruction governments fell. As a result of a bargain after the disputed presidential election of 1876, the Republican Rutherford B. Hayes assumed the presidency and disavowed further national efforts to enforce the rights of black citizens, while white Democrats controlled the South. By the turn of the century, with the acquiescence of the Supreme Court, a comprehensive system of racial, political and economic inequality, summarized in the phrase Jim Crow, had come into being across the South. At the same time, the supposed horrors of Reconstruction were invoked as far away as South Africa and Australia to demonstrate the necessity of excluding nonwhite peoples from political rights. This is why W.E.B. Du Bois, in his great 1935 work “Black Reconstruction in America,” saw the end of Reconstruction as a tragedy for democracy, not just in the United States but around the globe.

Although violated with impunity, the 14th and 15th Amendments remained on the books. Decades later they would provide the legal basis for the civil rights revolution, sometimes called the Second Reconstruction. Citizenship, rights, democracy — as long as these remain contested, so will the necessity of an accurate understanding of Reconstruction. More than most historical subjects, how we think about this era truly matters, for it forces us to think about what kind of society we wish America to be.
The positioning of hydrogen in the periodic table, as an element of Group I (the alkali metals), is somewhat contentious. This is because hydrogen shares some similarities with elements of Groups I, IV-A and VII-A, yet it also behaves differently from the elements of all these groups. Group IV-A (also known as Group 14) consists of carbon, silicon, germanium, tin, lead, etc. These elements share certain characteristics with hydrogen. Like hydrogen, all these elements have half-filled valence shells. Hydrogen has an atomic number of 1 and its electronic configuration is `1s^1`. Similarly, carbon has an atomic number of 6 and an electronic configuration of `1s^2, 2s^2, 2p^2`. When comparing electronegativity, hydrogen and the members of Group IV-A have similar values: the electronegativity of hydrogen is 2.2 and that of carbon is 2.55. The electron affinities of hydrogen and carbon are also comparable. The ionization potential of hydrogen is 13.6 eV, while that of carbon is 11.3 eV. Thus, hydrogen and the members of Group IV-A of the periodic table share some common characteristics. Hope this helps.
Australia is famous for its kangaroos. These cute marsupials have been a feature of the landscape for thousands of years and are an important part of Aboriginal history and culture. Confirming this is a recent discovery made by researchers in Western Australia. They found a 17,300-year-old painting of a kangaroo in a rock shelter in the Kimberley area, a place known for prolific rock art spanning thousands of years. Publishing their findings in the journal Nature Human Behaviour, the team detailed the creative means used to learn more about this ancient rock art.

The kangaroo painting was discovered on the ceiling of a rock shelter, which has helped preserve the artwork for thousands of years. The shelter was on the Unghango clan estate in Balanggarra country. The Balanggarra Aboriginal Corporation brought their long-held knowledge to the research—in conjunction with several universities and organizations—in a collaborative project to date rock art in the region. Two meters in length, the kangaroo is among other drawings which fall into the Naturalistic period of local rock art. Life-sized animals drawn in red ochre are typical of this late-Ice Age artwork.

Damien Finch, the lead author of the research paper, pioneered a method of radiocarbon-dating mud wasp nests. To date the kangaroo, the researchers used the fossilized mud wasp nests surrounding the painting. After identifying the specific layer of ochre belonging to the kangaroo (under more recent artwork), the team dated a nest below the ochre as well as one above. This gives a date range for when the kangaroo was drawn. The results suggested the kangaroo is between 17,500 and 17,100 years old, with 17,300 years old being the best estimate. The team was lucky to find nests providing such a close date range. “This makes the painting Australia’s oldest known in-situ painting,” Finch said.

According to Cissy Gore-Birch, Chair of the Balanggarra Aboriginal Corporation, partnership and knowledge sharing are critical to preserving this history. Gore-Birch commented, “It’s important that Indigenous knowledge and stories are not lost and continue to be shared for generations to come… The dating of this oldest known painting in an Australian rock shelter holds a great deal of significance for Aboriginal people and Australians and is an important part of Australia’s history.”
Paul Kammerer conducted experiments on amphibians and marine animals at the Vivarium, a research institute in Vienna, Austria, in the early twentieth century. Kammerer bred organisms in captivity, and he induced them to develop particular adaptations, which Kammerer claimed the organisms' offspring would inherit. Kammerer argued that his results demonstrated the inheritance of acquired characteristics, or Lamarckian inheritance. The Lamarckian theory of inheritance posits that individuals transmit acquired traits to their offspring.

James Edgar Till is a biophysicist known for establishing the existence of stem cells along with Ernest McCulloch in 1963. Stem cells are undifferentiated cells that can shift, or differentiate, into specialized types of cells and serve as a repair system in the body by dividing indefinitely to replenish other cells. Till’s work with stem cells in bone marrow, which produces the body’s blood cells, helped form the field of modern hematology, a medical discipline that focuses on diseases related to the blood.

The HeLa cell line was the first immortal human cell line, which George Otto Gey, Margaret Gey, and Mary Kucibek isolated from Henrietta Lacks and developed at The Johns Hopkins Hospital in Baltimore, Maryland, in 1951. An immortal human cell line is a cluster of cells that continuously multiply on their own outside of the human from which they originated. Scientists use immortal human cell lines in their research to investigate how cells function in humans.
The Foundations of Physical Science: Measurement and Significant Figures

All measurements involve a degree of uncertainty. How much error or uncertainty is acceptable? In the real world, it is impossible to measure the exact true value of anything (except counting). Significant digits, or significant figures (sig figs for short), are the meaningful digits in a measured quantity. If you measure a paperclip and its length lies between 2.6 and 2.7 cm, we usually say 2.65 cm. But to a scientist, 2.65 means between about 2.62 and 2.67: the final digit, the 5, represents the smallest estimated amount and is always understood to be rounded up or down.

Digits that are ALWAYS significant:
1. Non-zero numbers. Examples: 54982 = 5 significant figures; 2365149898 = 10.
2. Zeros between two significant digits. Examples: 93100008 = 8; 1001 = 4.
3. All final zeros to the right of a decimal point. Examples: 4451.630 = 7; 5012677.26090 = 12.

Digits that are NEVER significant:
1. Leading zeros to the right of a decimal point. Examples: 0.002 = 1; 0.000456 = 3.
2. Final zeros in a number that does not have a decimal point. Examples: 9900000 = 2; 8765210 = 6.

A shortcut method depends on one question. When you are attempting to determine the number of sig figs in a number, ask yourself: is the decimal point PRESENT or ABSENT?
- If the decimal point is ABSENT (not written in the number), start on the RIGHT side of the number, pass over any zeros until you reach the first non-zero digit, then underline that digit and all the other digits. The number of underlines is the number of sig figs. Examples: 300 = 1; 2100 = 2; 7890900 = 5.
- If the decimal point is PRESENT (written in the number), start on the LEFT side of the number, pass over any zeros until you reach the first non-zero digit, then underline that digit and all the other digits. The number of underlines is the number of sig figs. Examples: 0.0033 = 2; 0.000 000 4040 = 4; 2.0000 = 5.

For addition or subtraction, your answer should be rounded off to the LEAST number of decimal places in the problem. Using this rule, an answer of 402.681 would be reported as 402.68, and an answer of 450.461 as 450.5 (the reported precision depends on the decimal places of the other numbers in each problem).

For multiplication and division, your answer should have the same number of sig figs as the LEAST number of sig figs in the problem. For example, 2.51 × 61 = 153.11, which must be reported to 2 sig figs as 150; and 450 / 32.1 = 14.0186915…, reported to 2 sig figs as 14.

Key terms:
- Accuracy: how close a measurement is to an accepted or true value.
- Precision: how close together, or reproducible, repeated measurements are.
- Resolution: the smallest interval that can be measured.

In everyday conversation, “same” means two numbers that are exactly the same, like 2.56 and 2.56. When comparing scientific results, “same” means “not significantly different”: significant differences are those MUCH larger than the estimated error in the results. How can you tell if two results are the same when both contain error? When we estimate error in a data set, we will assume the average is the exact value. If the difference in the averages is at least three times larger than the average error, we say the difference is significant. To compare: calculate the error, average the error, then compare the difference in averages to the average error.
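A minimal Python sketch of the present/absent shortcut above. This helper is written for these notes, not a standard library function; it handles the plain decimal notation used in the examples, not scientific notation:

```python
def sig_figs(number: str) -> int:
    """Count significant figures using the decimal-point PRESENT/ABSENT rule."""
    s = number.replace(" ", "").lstrip("+-")
    if "." in s:
        # Decimal point PRESENT: leading zeros never count, trailing zeros do.
        return len(s.replace(".", "").lstrip("0"))
    # Decimal point ABSENT: leading and trailing zeros never count.
    stripped = s.strip("0")
    return len(stripped) if stripped else 1

# Examples from the rules above:
for n in ["54982", "1001", "4451.630", "0.000456", "9900000", "2100", "2.0000"]:
    print(n, "->", sig_figs(n))
# 54982 -> 5, 1001 -> 4, 4451.630 -> 7, 0.000456 -> 3, 9900000 -> 2, 2100 -> 2, 2.0000 -> 5
```

Note the deliberate ambiguity the rules create: written as 300, a measurement has one significant figure even if the instrument really resolved three; scientific notation (3.00 × 10^2) is the usual way to remove that ambiguity.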
Self-feeding: part 1

It has been 2 weeks between blogs, as life has gotten busy with our transition back to in-room sessions after providing therapy exclusively by telehealth. I thought we would have a change from gross motor development and talk about feeding. It is something we get asked about constantly, and as self-feeding involves gaining skills across all areas of development, we will discuss it over a few blog posts. Self-feeding is an important and complex skill that includes a variety of reflexes, oral-motor, fine motor and gross motor skills, as well as sensory skills. As babies develop and meet new milestones in these areas, so do their feeding skills.

Deb’s key takeaways: There are several areas of development needed for a baby to feed in the first 6 months.
- General motor development
- Behaviour, imitation and interaction with others
- Specific feeding skills

General Motor Development
- Babies can move their head to the breast or bottle.
- They practice sucking between feeds by bringing their hands to their mouth.
- They recognise the feeding position.
- They will knead the breast to help with milk flow.
- Babies hold objects placed in their hands.
- They will hold the breast/bottle during feeding.
- They hold and mouth objects and explore with their mouth and tongue.
- Begin to develop head control at 3-4 months.
- Can sit when supported on a lap or in a high chair.
- Increasing development of head and trunk control.
- Sit without receiving help.

Behaviour, imitation and interaction with others
- Babies show likes and dislikes by crying or settling.
- Look at faces/imitate facial movements of a parent.
- Smiles at familiar faces.
- Some imitation of facial expressions.
- Watches events and responds to them, e.g. follows a spoon to a parent’s mouth.
- Begins to turn-take with others.
- Gets excited when they see food being prepared.
- Lean towards/reach for your spoon when sitting.
- Open their mouth in expectation of the spoon.
- Turn their head and/or push the spoon away when full.
- May clamp their mouth shut when not interested in food.

Specific feeding skills

As well as the general motor and behaviour skills described above, there are particular reflexes and oral motor skills (lip and jaw control) needed for feeding.

Reflexes 0 to 4 months

Reflexes are automatic and accidental actions that the body does in response to different stimuli. Babies are born with several reflexes to help them find and attach to a nipple or teat to feed, while also protecting their airways. These include:
- Rooting – the baby turns their face towards a sense of touch near their mouth and opens their mouth to look for a breast or bottle.
- Suck-swallow – the baby sucks on a nipple or finger, and their sucking is coordinated with their swallow to allow them to feed safely on their back.
- Tongue-thrust – the baby’s tongue moves out of the mouth when touched on the lips, to help with feeding from a breast or bottle and prevent choking.
- Gag – an object placed near the back of the mouth is pushed out by the tongue to protect the baby from choking.
It’s amazing that newborns have these reflexes to help them feed safely!

Reflexes 4 to 6 months
- Rooting reflex – disappears by 4 months, and your baby will actively turn their head and move their mouth to suck.
- Sucking reflex – by 4 months sucking becomes a voluntary action rather than a reflex.
- Tongue thrust – begins to disappear, allowing the introduction of spoon feeding.
- Gag reflex – is reducing by 4-6 months due to the increased use of mouthing toys.
As they mouth different objects, they get used to the gag reflex and it is stimulated less, needing an object to be further back in their mouth before they will gag. They can successfully deal with different textures, such as a spoon and thicker food, i.e. smooth solid food.
- The gag reflex can still be seen in infants 6 months and older as they are given lumpier food.

Oral-Motor Skills 0 to 4 months

Babies develop their oral-motor skills from when they are still in the womb. Unborn babies have been seen to suck and swallow amniotic fluid from as early as 14-15 weeks. In the womb, they also start experiencing different tastes and smells through the amniotic fluid. Babies are on an all-milk diet from when they are born, as their tongue movement is limited by the small space in their mouth up until around 4 months old. They may be drinking milk from the breast or from a bottle. In their first 4 months, you can expect to see some oral-motor skills developing that help them feed. These include:
- More tongue control and types of movements: moving the tongue forward and backward in their mouth, in and out of their mouth, as well as up and down.
- Large fatty cheeks that help stabilise the nipple or teat during feeding (not just to look cute!).
- Opening of the mouth in preparation for sucking a nipple, teat, their own fist and even toys.

Oral-Motor Skills 4 to 6 months
- Babies can move their jaw up and down.
- Munching movement.
- Can move food from the front to the back of their tongue to swallow.
- Early oral reflexes are disappearing.
- Opens mouth when they see food approaching.
- Increasing control of the mouth, i.e. licking lips/blowing raspberries.
- Exploring things by putting everything in their mouths – even their feet.

Tips for successful feeding (how to help your baby with feeding):
- Hold your baby straight up-and-down against your chest, one arm behind their shoulders and one under their bottom (tummy down on your chest).
- Sit in a comfortable, semi-reclined position.
- Let the baby find the nipple or teat – let them find it from above.
- Before you start solids (4-6 months), introduce your baby to family meal times.
- Offer 1 tablespoon of food at first.
- Recognise when they have had enough, e.g. a closed mouth.
- Give your baby their own bowl and spoon to encourage early self-feeding, but feed them from a separate bowl.

Tips to encourage oral stimulation:
- Gentle massage of lips and gums with a finger.
- Vibrating toys for facial massage/stimulation.
- Gentle facial massage with different textures.
- Exploring things with different textures for mouthing.

Possible signs of feeding/swallowing problems:
- Refusing to eat.
- Poor weight gain.
- Coughing or throat clearing after feeds (wet or gurgly cry).
- Gagging with feeding (especially when introducing solids).

In the first few months of your baby’s life, their feeding changes from reflex to controlled choice. They move from being totally dependent on the breast or bottle to the beginning of solid foods between 4-6 months. Babies are born with the instincts, reflexes and developmental skills to feed from birth. As they grow, these skills improve and develop, and you are both in for an experience of discovery and enjoyment. If you have any concerns related to your baby’s feeding, contact your community nurse, lactation consultant, or GP. Keep safe, happy, and well.
How to carry out a systematic review

What is a systematic review?

A systematic review is a type of evidence review which attempts to identify and bring together, or synthesise, all evidence associated with a specific question. Systematic reviews can be quantitative (with or without meta-analysis), qualitative (sometimes called qualitative evidence syntheses or meta-ethnographies), and (often using mixed methods) in other forms, such as scoping reviews and realist syntheses. Systematic reviews use clearly defined methodology in the search strategy, the selection of studies, the appraisal of evidence and the synthesis of data. This rigorous approach minimises bias in the search results, facilitates evidence-based decision making, and enables the review to be transparent, reproduced and updated. Systematic reviews are of particular importance in medical and other healthcare research, where they are frequently carried out in order to establish the effectiveness of particular treatment interventions, but they are also used in other disciplines. A systematic review is much more than a literature review carried out systematically.

In summary, systematic reviews:
- answer a specific question
- have an explicit, reproducible methodology
- have predefined eligibility criteria for studies
- attempt to identify all studies that would meet the eligibility criteria
- are carried out by at least two people to ensure validity of results, eg by removing bias
- include an assessment of the validity of the findings of the included studies
- result in a systematic presentation, and synthesis, of the characteristics and findings of the included studies.

Systematic review versus literature review

The table below highlights the similarities and differences between a systematic review and a literature review.

| | Systematic review | Literature review |
| --- | --- | --- |
| Question | Focused on a single question | Not necessarily focused on a single question; may describe an overview |
| Protocol | A peer-reviewed protocol or plan is included | No protocol is included |
| Background | Provides a summary of the available literature on a topic | Provides a summary of the available literature on a topic |
| Objectives | Clear objectives are identified | Objectives may or may not be identified |
| Inclusion and exclusion criteria | Criteria stated before the review is conducted | Criteria not specified |
| Search strategy | Comprehensive, detailed search conducted in a systematic way and included in the review | Strategy not explicitly stated |
| Sources of literature | List of databases, websites and other sources of included studies is given; both published and unpublished literature are considered | Not usually stated and non-exhaustive, usually well-known articles; prone to publication bias |
| Process of selecting articles | Usually clear and explicit | Not described |
| Process of evaluating articles/critical appraisal | Rigorous appraisal of study quality | Evaluation of study quality may or may not be included, and can be variable in method |
| Results and data synthesis | Clear summaries based on high-quality evidence | Summary based on studies where the quality of the articles may not be specified; may also be influenced by the reviewer's theories, needs and beliefs |
| Discussion | Written by an expert or group of experts with a detailed and well-grounded knowledge of the issues | Written by an expert or group of experts with a detailed and well-grounded knowledge of the issues |
What you need to do
- Spend time identifying and understanding the research question.
- Check Prospero to see if anyone else has registered a similar review.
- Develop a protocol that states what the review will include and exclude. View an example protocol.
- Turn the question into a search strategy using free-text keywords and phrases. Use a thesaurus and database subject headings to identify synonyms. If necessary, contact Library Services for help.
- Consider using a search strategy worksheet to manage the process, and to aid the transparency and replicability of the systematic review.

Example search strategy

Download a copy of the below search strategy template.

| | Concept 1 | Concept 2 | Concept 3 | Concept 4 |
| --- | --- | --- | --- | --- |
| Key concept | | | | |
| Synonyms | | | | |
| Subject headings/MeSH* | | | | |

* MeSH = Medical Subject Headings, the controlled vocabulary used in the PubMed database.

For a good example of a search strategy, have a look at the one used in the protocol for this systematic review: Quality of family relationships and outcomes of dementia: a SR (BMJ Open).

Quantitative research strategy

For quantitative research, consider using PICO to identify search concepts. PICO is used to answer clinical and healthcare questions that look at the effectiveness of interventions, eg "is drug x more effective than drug y?"
- P: Person/population
- I: Intervention
- C: Comparison
- O: Outcome

You do not have to use all four elements; quite often, only P and I are used. Agree with the team which criteria are needed.

Qualitative research strategy

For qualitative reviews, consider using SPIDER:
- S: Sample – the group of people being looked at; because qualitative research is not easy to generalise, a sample is preferred over a patient
- PI: Phenomenon of Interest – looks at the reasons for behaviour and decisions, rather than an intervention
- D: Design – the form of research used, such as an interview or survey
- E: Evaluation – the outcome measures
- R: Research type – qualitative, quantitative and/or mixed methods

Then:
- Define the inclusion and exclusion criteria.
- The project team agrees the final search strategy.
- Carry out a scoping search first to identify key resources, using two or three databases.
- If necessary, modify the search to ensure that key articles are found.
- Once you're happy with the search strategy, select the most relevant databases and run the search (a sketch of assembling such a query follows this list). If necessary, contact Library Services for help to identify relevant databases.
- Run the search on all appropriate databases, including sources of grey literature, eg Open Grey, NIHR, or clinicaltrials.gov. Adapt the strategy as appropriate for each database.
- Run a citation search using Scopus.
- Identify journals not included in databases for manual searching.
- Check the references at the end of located articles for other relevant material.
- Large numbers of results tend to be found, so use reference management software to help manage and screen them.
- Once all the literature has been found, the abstracts need to be screened by reviewers to decide whether the full text is required. Fewer than 1% of results usually make it to the final review once exclusion/inclusion criteria are applied.
- Include a PRISMA statement in your review clearly explaining your methodology.
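As flagged in the checklist above, here is a minimal, hypothetical sketch of how a PICO-style concept grid becomes a Boolean search string. The terms and the helper are illustrative examples for this page, not a validated strategy; a real strategy would also map each concept to the subject headings of the specific database:

```python
# Hypothetical concept grid: each entry lists one concept's synonyms.
concepts = {
    "Population":   ["dementia", "alzheimer*"],
    "Intervention": ["family relationship*", "carer support", "caregiver support"],
}

# OR together synonyms within a concept; AND across concepts.
query = " AND ".join(
    "(" + " OR ".join(f'"{term}"' if " " in term else term for term in terms) + ")"
    for terms in concepts.values()
)
print(query)
# (dementia OR alzheimer*) AND ("family relationship*" OR "carer support" OR "caregiver support")
```

Keeping the grid in one editable place like this makes it easy to re-run and adapt the strategy for each database, which supports the transparency and replicability that distinguish a systematic review.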
For further information, the following are useful links:
- The Cochrane Handbook
- The Centre for Reviews and Dissemination's guidance for carrying out reviews in healthcare
- Doing a Systematic Review: a Student's Guide
- Students 4 Best Evidence

There is a range of literature available from UWE Bristol Library to aid your understanding of systematic reviews.

Supporting systematic literature reviews

UWE Bristol Library Service is pleased to support systematic reviews as follows:
- Standard offer – we can meet with you individually to discuss your search strategy.
- Enhanced offer – if you have funding and are able to pay for more extensive support from your grant, we can meet with you to discuss timescales, develop a search strategy in conjunction with the review team, run the agreed searches and deposit the results into a reference management package for you to access. Information on potential costs, charged at an hourly rate, can be found on the staff intranet.

Please contact your Faculty Librarian to discuss further.
Conjunctivitis, also known as pinkeye, is an inflammatory condition affecting the eyes, especially the conjunctiva. The conjunctiva is a membrane that covers the whites of the eyes and the inner surface of the eyelids. The membrane has small blood vessels that become visible when there is inflammation. The white part of the eye becomes red when infected with conjunctivitis.

Common symptoms of conjunctivitis
- Itchiness of the eyes
- Inflammation and swelling; the white portion of the eyes becomes red
- Tearing, or yellow and green discharge coming out of the eyes
- Crusting over the eyelids and blurriness
- Increased photosensitivity
- A burning sensation

Possible causes of conjunctivitis
- Infectious conjunctivitis, which can affect one or both eyes, is caused by infection with bacteria or a virus. Both types of infection are very contagious and can be spread by direct contact with eye secretions or contaminated surfaces.
- Allergic conjunctivitis is an allergic reaction upon exposure to an allergen or irritant such as smoke, pollen and other substances; both eyes are usually affected.
- Non-infectious conjunctivitis can be caused by exposure to irritants such as a splash of chemicals or the entry of a foreign object into the eye. Flushing of the eyes in an attempt to get rid of the irritant can itself cause irritation and redness.
- Neonatal conjunctivitis affects newborns, commonly those born to mothers with STDs such as chlamydia, and if not properly treated it can lead to blindness.
- Giant papillary conjunctivitis affects contact lens users; it involves both eyes and is most common among individuals who use soft contact lenses.

Treatment and home remedies for conjunctivitis
- Apply a compress to the affected eye. Soak a clean, lint-free cloth in water and wring out the excess. Apply it over the closed eyelid for a few minutes several times every day. A cool water compress helps soothe the affected eye, but a warm compress can also be used.
- Use over-the-counter eye drops called artificial tears, which help relieve the symptoms. Some eye drops contain antihistamines that help with allergic conjunctivitis.
- Avoid wearing contact lenses until the affected eyes are totally healed.
- Wash personal belongings such as clothes frequently if suffering from allergic conjunctivitis.
- Taking a bath or shower before going to bed can help with the condition.
- Maintain a well-balanced diet every day, especially foods rich in vitamin A, which is essential for the overall health of the eye, such as green leafy vegetables, yogurt, tomatoes, papaya, butter and carrots.

Other remedies for conjunctivitis
- Apply a thin layer of potato peels over the eyelid. This is said to help minimize the swelling and inflammation.
- Prepare Indian gooseberry juice and mix it with honey. This solution is used as an eyewash to treat conjunctivitis and should be applied at least twice every day.
- Boil some dried coriander in a cup of water and use this as an eyewash.
Sounding the Sun through a technique similar to seismology has opened a new era for understanding the Sun’s interior. The COROT satellite has now applied this technique to three stars, directly probing the interiors of stars beyond the Sun for the first time.

When global oscillations of the Sun were discovered, scientists realised they opened a window to the Sun’s interior. Like the propagation of seismic waves on Earth providing information about our planet’s interior, sound waves travel throughout the Sun carrying information about what is happening below the surface. These oscillations can also be observed on other stars. They can be detected through the variation in the light emitted by the star as the surface wobbles – the technique used by COROT. This reveals the internal structure of the star, and the way energy is transported from the core to the surface.

“Other techniques to estimate stellar oscillations have been used from the ground, but they are limited in what they can do,” said Malcolm Fridlund, ESA Project Scientist for COROT at ESA’s European Space Research and Technology Centre (ESTEC), in the Netherlands, and co-author of the results. “Adverse weather conditions, plus the fact that you cannot observe stars during daytime, oblige ground astronomers to interrupt their observations,” he continued. “Now, the key to detecting such small stellar oscillations from big distances is not only the sensitivity of an instrument, but also the opportunity of observing the star without interruption: any interruption produces noise in the data that can cover a signal completely. Therefore, to be certain, we must approach the question with the right instruments and from space.”

The three stars probed by COROT – known as HD49933, HD181420 and HD181906 – are similar to the Sun. They are not exactly in our stellar neighbourhood, but rather far away, so their brightness doesn’t blind COROT’s instruments. “The fact that COROT succeeded in probing the interior of Sun-like stars with direct measurements for the first time is a huge leap in understanding stars in general,” added Fridlund. “In addition, this will help us to understand, by comparison, our own Sun even better.”

Note for editors

The ESA/NASA mission SOHO, observing the Sun since 1995, has paved the way for stellar seismology, which is the approach now extended by COROT to the probing of other stars. The results appear in the 24 October issue of the scientific journal Science, in the paper ‘CoRoT measures solar-like oscillations and granulation in stars,’ by E. Michel et al. COROT is a mission led by the French Space Agency (CNES), with contributions from ESA, Austria, Belgium, Germany, Spain and Brazil. It was launched in December 2006 carrying a 27 cm-diameter telescope designed to detect tiny changes in the brightness of nearby stars. The mission’s main objectives are to search for exoplanets and to study stellar interiors.

For more information: Malcolm Fridlund, ESA COROT Project Scientist. Email: Malcolm.Fridlund @ esa.int
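The detection principle, oscillations appearing as tiny periodic brightness changes, can be sketched numerically. The following Python example is illustrative only (synthetic numbers, not COROT data or its pipeline): it buries a faint oscillation in a noisy light curve and recovers its frequency from the power spectrum.

```python
import numpy as np

# Synthetic light curve with assumed, illustrative parameters.
rng = np.random.default_rng(0)
dt = 60.0                                # one sample per minute, in seconds
t = np.arange(0, 10 * 86400, dt)         # 10 days of uninterrupted observation
f_true = 3e-4                            # oscillation frequency, Hz
flux = (1.0 + 1e-4 * np.sin(2 * np.pi * f_true * t)
            + 1e-3 * rng.standard_normal(t.size))   # noise 10x the signal

# Power spectrum of the mean-subtracted light curve.
power = np.abs(np.fft.rfft(flux - flux.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=dt)

print(f"recovered peak: {freqs[np.argmax(power)]:.2e} Hz (true: {f_true:.1e} Hz)")
```

The toy model also shows why uninterrupted space-based observing matters: gaps in the time series would smear the peak's power into sidelobes, which is exactly the noise from interruptions that Fridlund describes.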
Introduction to Regenerative Medicine

The body works as a complete system, where all its parts have different functions that support each other. In many ways, we can compare the function of certain organs to machines: the heart is like a pump that pumps blood, the brain is like a computer that processes thought. But, unlike a machine, the body has a fantastic ability to adapt and heal! Regenerative medicine works with the body's natural healing processes, so when an injury occurs that is beyond what the body can heal, doctors can intervene and help the healing occur faster, or use special materials from the laboratory to regrow tissue. The injuries in question involve extensive tissue damage: broken bones, severe burns, heart attacks (damaged heart muscle) or spinal cord injuries. Some of these injuries will take months to heal. Sadly, some injuries will not heal at all. Some individuals will get new treatments that use regenerative medicine to help them heal in weeks, not months. Other individuals will get new treatments that cure them forever, improving and even saving their lives. Where the body's healing ends, regenerative medicine takes over!

Our body has a fantastic ability to heal itself! But some injuries are more than the body can handle. Doctors and scientists are working on regenerative medicine to help speed the body's natural healing ability.

Injuries and Healing

When areas of tissue in the body are damaged or destroyed, the body tries to replace them with new tissue by producing cells capable of creating new tissue. When a tissue like bone is broken, both its cells and its blood vessels may be injured. The body is prepared to heal minor injuries like these, so when trauma occurs the body works to stop any blood loss by clotting. Then the body starts producing new cells to replace the damaged ones. These new cells are produced in several places in the body, the main one being the marrow in the center of the bones. They travel through the body to the damaged site and become new tissue. The body also builds new thin blood vessels, called capillaries, to bring nutrients and oxygen to the new cells. When the damage is too extensive and there is a big gap of missing or dead tissue where no capillaries survive, new cells cannot fill the area because there is no blood reaching them with nutrients and oxygen. Some tissues are easier to heal than others. Those that cannot heal as well form scars in the places where they were mended. Scar tissue is weaker and more disorganized than the original, healthy tissue.

Cells, Tissues, and Organs

All living things are made of cells. Humans, plants, birds, and bacteria are all made of cells.

One Cell or Many Trillions – The Complexity of Life

Cells have existed for billions of years. A cell is the basic unit of life, which means that all living things are made of cells. Some life forms, like bacteria, are only one cell, and others, like humans, are made of trillions of cells. Single-cell organisms, like bacteria, are very efficient organisms. Some bacteria live in extreme environments where no human could go without special equipment. In more complex organisms, different groups of cells work together to do complex functions, such as protecting the body from the outside environment or allowing the organism to move and look for food. With increased complexity comes an increased need for energy and an increased capacity to interact with the world! Cells can work together!
Billions of years ago, life consisted mostly of one-celled organisms, but over time a myriad of bigger and more complex organisms emerged. One of the most interesting things scientists have found is that at some point some cells developed the ability to swallow other cells! Some scientists think this was the beginning of eukaryotic cells. At some point, some cells started to stick together like glue, forming multicellular organisms (organisms with more than one cell). When cells started to work together, they were able to start focusing on different tasks:
- Some swallowing cells took over the process of eating for their neighbors, so their neighbors could have time and energy to do other things.
- The outer cells became "skin" cells to help protect the inner cells, which became "eating" cells.
- Some cells became really good at moving around in a group of cells and sniffing out bad or dead cells to remove them before they harmed their neighbors.

This kind of cell cooperation happens all over! For example, in your immune system, you have special cells called B and T cells that travel through the body and talk to each other to determine whether a newfound particle is good or bad. Your neurons carry messages throughout the body – signals from all over your body to the brain, and from your brain out to all parts of your body.

Cells and Tissues

In complex organisms, like a human being, you can find many kinds of cells, such as muscle cells, bone cells, intestinal cells, and immune cells! Each kind of cell works together with others to form a specific tissue; for example, there are a few different kinds of skin cells that work together to form the skin. A tissue is made when one or more types of cells group together. These cells are held together by the extracellular matrix, which is sort of the glue that holds everything together. Blood vessels bring oxygen and nutrients that the cells need to live into the tissue.

Stem cells are special cells that are able to develop into many different types of cells. Stem cells can divide again and again, renewing other cells in the body of a person or animal. They serve as an internal repair system for the body, replacing damaged or dying cells with healthy new cells. There are two main types of stem cells: embryonic stem cells and non-embryonic stem cells. Embryonic stem cells are found in embryos – the small ball of cells created shortly after an egg is fertilized. Non-embryonic stem cells (also called "somatic" or "adult" stem cells) are found throughout the body in various tissues. When a stem cell divides, it creates two new cells. Each cell that is created may remain a stem cell or may become another type of cell with a distinct function, such as a skin, brain, lung, muscle, or even a red blood cell. The process of a stem cell turning into a specific type of cell is known as differentiation.

In regenerative medicine, stem cells play an important role. Stem cells can be used to treat illnesses or injuries because of their unique ability to differentiate. For example, if a person has a disease, that person's stem cells could be collected. Then, in a laboratory, these cells can be made to differentiate into the specific type of cell the person needs. The cells could then be put back into the individual's body to help treat the disease. Regenerative medicine can also provide an artificial support structure – a scaffold – to allow the new cells and capillaries to settle and make new tissue.
The scaffold is structurally similar to the tissue it is trying to help, so the body will recognize it. It must also be strong enough to provide support, but naturally degradable, so that it can eventually be entirely replaced by healthy, new tissue. In the future, regenerative medicine could potentially be used to find cures for spinal cord injuries, diabetes, and diseases like Parkinson's, and to fully heal or replace bones and organs such as the heart.
CT scans and MRI can play a crucial role in diagnosing health issues at the right time and in the right way. MRIs and CT scans are both used to take images of the inside of the body. The significant difference between the two is that MRI, or magnetic resonance imaging, uses magnetic fields and radio waves, while computed tomography, or CT, uses x-rays. Though both are low-risk, certain differences make each one the recommended choice in particular circumstances.

The data obtained from MRI and CT scans have been directly associated with declining cancer death rates and greater life expectancy. Both scans assist surgeons in assessing and identifying tumors, cysts, aneurysms, and soft tissue conditions. The two imaging processes look similar in some ways: both MRI and CT develop cross-sectional pictures of the internal parts of the body. However, they achieve this using different methods. It is the responsibility of the doctor to determine which method is best for the patient's medical condition. The choice depends mostly on the body part being scanned and on how urgently the images are required.

What is meant by a CT scan?

A CT scan uses x-rays captured at numerous different angles to develop a highly detailed cross-sectional picture of the interior of the body. The x-ray beams can distinguish different levels of tissue density within a solid organ, offering extremely precise detail. A computer then joins these pictures into a detailed, three-dimensional image of the inner parts of the body. The 3D views show any tumors or abnormalities that might exist. Sometimes the doctor will use a contrast agent, which is either swallowed by the patient or injected through an IV, to obtain more detail and contrast in the images.

CT is mostly used when the physician requires pictures of the head (sinuses, inner ear, eyes, brain, and vessels), the skeletal system (spine, shoulders, and neck), the chest (lungs and heart), hips, pelvis, gastrointestinal tract, bladder, or reproductive system. Doctors recommend a CT scan to investigate abnormal growths that could be cancer.

What is meant by an MRI scan?

An MRI uses powerful magnetic fields and radio-frequency pulses to produce clear images of bone, soft tissues, organs, and various internal body structures. The best part of MRI imaging is that, unlike mammograms, x-rays, and CT scans, it does not use ionizing radiation. There are different kinds of MRI, like:
- Pelvic MRI
- Lumbar MRI
- Heart MRI
- Cranial MRI
- Chest MRI
- Cervical MRI
- Abdominal MRI

Though MRI generates excellent quality images, it is recommended on a more limited basis due to the time it takes to perform a scan. Some patients also experience claustrophobia inside the MRI scanner.

MRI versus CT scan

CT scans are recommended more often than MRI, and CT is less expensive. However, MRI is considered superior in the clarity and detail of its pictures, and it does not expose the patient to x-rays. Both MRI and CT scans pose certain risks, which depend mainly on the kind of imaging and how it is performed.
The risks of a CT scan include:
- A possible reaction to the contrast dyes used
- A small dose of radiation
- Potential harm to unborn babies

The risks of an MRI scan include:
- A rise in body temperature during long MRIs
- Hearing issues due to the loud noises made by the machine
- Reactions involving metal implants, because of the strong magnets

It is necessary to talk with your doctor before an MRI, especially if you have implants such as a pacemaker, an IUD, eye implants, or artificial joints. Both CT and MRI scans display internal body structures, but a CT scan is quicker and offers pictures of skeletal structures, organs, and tissues, while an MRI takes longer and offers more detailed images. Early detection always heightens the chances of early cure! Visit Anderson Diagnostics today to choose the right scan as recommended by experienced medical professionals, to detect any health issues early and pave the way for a healthier and happier tomorrow!
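As an aside for technically minded readers, the reconstruction principle behind CT described above – combining projections taken at many angles into a cross-sectional image – can be sketched in a few lines of Python. This is a toy demonstration using scikit-image's filtered back-projection on a synthetic phantom; it is not a description of any clinical scanner's software, and the angle count is an arbitrary choice.

```python
# Toy illustration of CT reconstruction by filtered back-projection.
# Requires scikit-image; the phantom and angle count are illustrative.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()                  # synthetic 2D "slice" of a body
angles = np.linspace(0.0, 180.0, 180, endpoint=False)

# Simulate x-ray projections taken at many angles (the "sinogram")...
sinogram = radon(image, theta=angles)

# ...then combine them back into a cross-sectional image.
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")

rms_error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"RMS reconstruction error: {rms_error:.4f}")
```

Fewer angles mean a faster "scan" but a blurrier, streakier reconstruction, which mirrors the real trade-off between scan time and image quality.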
What is skeletal dysplasia?

Skeletal dysplasia is a term used to describe over 200 different diseases that result in abnormalities in the development of bone or cartilage. Skeletal dysplasias range from a condition commonly called dwarfism (achondroplasia), which results in a short body and limbs, to a condition called thanatophoric dysplasia, in which the child cannot live long after birth because the small rib cage does not allow the lungs to develop. Each type of skeletal dysplasia is quite rare; taken together, skeletal dysplasias occur in approximately 2.4 per 10,000 births.

How is this condition managed during pregnancy?

Because of the broad range of conditions included in the term skeletal dysplasia, it is important to try to differentiate between the different types before birth, if possible. When skeletal dysplasia is suspected and the type of dysplasia is not clear from the ultrasound, one step in the management is genetic testing (usually by amniocentesis). A combination of several genetic tests, called a skeletal dysplasia panel, can help determine which type of skeletal dysplasia is present. The prognosis and care after birth depend on the type of skeletal dysplasia present. Children with achondroplasia (the most common skeletal dysplasia) may need orthopedic surgeries throughout their lives because of the shape of their bones, whereas children with the severe forms of skeletal dysplasia cannot live for long after birth.
The Great Smog of London descended upon the city on Dec. 5, 1952: a strange fog, yellow-black in color and thicker than even the native residents of the famously foggy city had ever seen before. The smell of the fog was different too – a smoky, chemical smell. People stuck outside as it appeared found themselves gasping, unable to breathe the thick, almost opaque air. Though they didn't know it yet, the residents of London were experiencing what has come to be known as one of the deadliest environmental disasters to date. Before the smog lifted, 12,000 people would be dead, and it would take almost 65 years for experts to figure out why.

The Great Smog of London, a mixture of smoke and fog, was the result of a series of unfortunate coincidences. Several days prior to the great smog, a cold front had moved in, which caused Londoners to use their coal-burning stoves more often than usual, so smoke was pouring out of chimneys at a higher rate. Additionally, Dec. 5 was a particularly still day. Rather than the 5-10 mile per hour gusts the riverside city usually experienced, there was almost no wind, causing the smoke from the chimneys to linger above the streets rather than be blown away. On top of the chill and the stillness, the city was directly under an atmospheric anticyclone, which creates a circle of circulating air with an area of dead space in the center. The anticyclone above London effectively created a bubble around the city that prevented fresh air from getting in and the smog from escaping.

The Great Smog was so thick it essentially shut the city down. Visibility was reduced to almost nothing, causing residents to abandon their vehicles in the middle of the roads. The poor quality of the air made walking outside almost impossible, as the levels of pollutants had created a toxic atmosphere. Those who were outside during the fog, nicknamed the "pea-souper" for its yellowish-black color, suffered numerous health effects. Cases of respiratory tract infections, hypoxia, bronchitis, and bronchopneumonia were all reported by doctors, and the death toll soon reached 12,000. A later study revealed that high levels of sulfuric acid in the smog greatly contributed to the deaths. How exactly the sulfuric acid found its way into the air that day remained a mystery for almost 65 years.

It wasn't until November 2016 that a global team of scientists announced that they had finally solved the mystery. The scientists determined that the sulfur dioxide entered the atmosphere mostly through coal burning. "People have known that sulfate was a big contributor to the fog, and sulfuric acid particles were formed from sulfur dioxide released by coal burning for residential use and power plants, and other means," said research project leader Dr. Renyi Zhang, a professor at Texas A&M University. "But how sulfur dioxide was turned into sulfuric acid was unclear. Our results showed that this process was facilitated by nitrogen dioxide, another co-product of coal burning, and occurred initially on natural fog."

The scientists are now hoping that their research will lead to other environmental breakthroughs and help solve problems in countries with high air pollution rates, such as China.
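In outline – and as an inference from the mechanism Zhang describes in the quote above, not an equation printed in this article – the proposed aqueous-phase chemistry can be written as:

SO₂ + 2 NO₂ + 2 H₂O → 2 HONO + H₂SO₄

That is, nitrogen dioxide oxidizes sulfur dioxide dissolved in the water of the fog droplets, yielding sulfuric acid, with nitrous acid (HONO) as a by-product. When the fog water later evaporated, the sulfuric acid was left behind as the fine acidic particles that proved so deadly.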
The fog, though deadly, did force Parliament to examine the human contribution to air pollution. Just four years after the Great Smog of London, the U.K. enacted the Clean Air Act of 1956, which restricted the burning of smoke-producing fuels in designated areas across the United Kingdom.
A siege is a military blockade of a city or fortress with the intent of conquering it by attrition and/or assault. Generally, the attacker begins by surrounding the target, then blocks reinforcements and supplies from entering, and troops from escaping. Over time, sieges can demoralize fortress defenders, while attrition can occur through starvation, thirst, disease, and attacks. Fortifications can be reduced by means of siege engines, artillery bombardment, mining under walls, or bypassing defenses through trickery or treachery.

To defend themselves against sieges, even the earliest cities were fortified. Ancient cities in the Middle East show archaeological evidence of having had fortified city walls. From ancient times, siege warfare dominated the conduct of war throughout the world. However, with the advent of gunpowder and the increasing use of ever more powerful cannon, the value of fortifications diminished. With the advent of mobile warfare, a single fortified stronghold is no longer as decisive as it once was. By the twentieth century, the significance of the classical siege had declined, although some of the costliest sieges in history were still recorded then. Thus, while sieges do still occur, they are not as common as they once were, due to changes in modes of battle, principally the ease with which huge volumes of destructive power can be directed onto a static target.

Siege warfare can be understood as a form of low-intensity warfare (at least until an assault takes place), characterized by the facts that at least one party holds a strong defensive position, the situation is highly static, the element of attrition is typically strong, and there are plenty of opportunities for negotiation. Nevertheless, sieges throughout history have claimed many lives – not only soldiers, but also innocent citizens, including women and children, who took refuge within their fortified city believing it to be a place of safety. Thus, siege warfare is often barbaric, treating people with no respect or honor, and causing severe loss of life.

Sieges are military efforts to conquer a city or fortress. The term derives from sedere, Latin for "to sit," since the attacking force sits and waits outside the surrounded city until it surrenders. A siege occurs when an attacker encounters a city or fortress that cannot be easily taken by a coup de main and that refuses to surrender. Sieges involve surrounding the target and blocking the reinforcement or escape of troops, or the provision of supplies (a tactic known as "investment"), typically coupled with attempts to reduce the fortifications by means of siege engines, artillery bombardment, mining (also known as sapping), or the use of deception or treachery to bypass defenses. Failing a military outcome, sieges can often be decided by starvation, thirst, or disease, which can afflict both the attacker and the defender.

The most common practice of siege warfare is to lay siege and simply wait for the surrender of the enemies inside. This could take considerable time.
For example, the Egyptian siege of Megiddo in the fifteenth century B.C.E. lasted for seven months before its inhabitants surrendered. The Hittite siege of a rebellious Anatolian vassal in the fourteenth century B.C.E. ended when the queen mother came out of the city and begged for mercy on behalf of her people. If the main objective of a campaign was not the conquest of a particular city, it could simply be passed by. The Hittite campaign against the kingdom of Mitanni in the fourteenth century B.C.E. bypassed the fortified city of Carchemish; when the main objective of the campaign had been fulfilled, the Hittite army returned to Carchemish, and the city fell after an eight-day siege. The well-known Assyrian siege of Jerusalem in the eighth century B.C.E. came to an end when the Israelites bought the besiegers off with gifts and tribute, according to the Assyrian account, or when the Assyrian camp was struck by mass death, according to the Biblical account. Due to the problem of logistics, long-lasting sieges involving only a minor force could seldom be maintained.

To end a siege more rapidly, various methods were developed in ancient and medieval times to counter fortifications, and a large variety of siege engines was developed for use by besieging armies. Ladders could be used to escalade the defenses. Battering rams and siege hooks could be used to force through gates or walls, while catapults, ballistae, trebuchets, mangonels, and onagers could be used to launch projectiles to break down a city's fortifications and kill its defenders. A siege tower could also be used: a substantial structure built as high as, or higher than, the walls, it allowed the attackers to fire down upon the defenders and also to advance troops to the wall with less danger than using ladders.

In addition to launching projectiles at the fortifications or defenders, it was also quite common to attempt to undermine the fortifications, causing them to collapse. This could be accomplished by digging a tunnel beneath the foundations of the walls and then deliberately collapsing or exploding the tunnel – a process known as "mining." Defenders, however, could dig counter-tunnels to cut into the attackers' works and collapse them prematurely, or use large bellows to pump smoke into the tunnels to suffocate the intruders.

Fire was often used as a weapon against wooden fortifications. The Byzantine Empire used Greek fire, which contained additives that made it hard to put out. Combined with a primitive flamethrower, it proved an effective offensive and defensive weapon. Disease was another effective siege weapon, although the attackers were often as vulnerable as the defenders. In some instances, catapults or similar weapons would fling diseased animals over city walls in an early example of biological warfare. On occasion, a besieger could bribe the gate-keeper to gain entrance, and thus claim his conquest undamaged and retain his men and equipment intact.

In the days of muzzle-loading muskets, the term "forlorn hope" was frequently used to refer to the first wave of soldiers attacking a breach in the defenses during a siege. It was likely that most members of this group would be killed or wounded; thus they had only a "forlorn hope" of victory. The intention was that some would survive long enough to seize a foothold that could be reinforced, or at least that a second wave with better prospects could be sent in while the defenders were reloading or engaged in mopping up the remnants of the first wave.
A forlorn hope was typically led by a junior officer with hopes of personal advancement. If he survived and performed courageously, he was almost guaranteed both a promotion and a long-term boost to his career prospects. As a result, despite the risks, there was often competition for the opportunity to lead the assault. The French equivalent of the forlorn hope, called Les Enfants Perdus ("the lost children"), were all guaranteed promotion to officer rank should they survive, so both men and officers took up the suicidal mission as an opportunity to advance themselves in the army.

The universal method for defending against a siege is the use of fortifications, principally walls and ditches, to supplement natural features. A sufficient supply of food and water is also important, to defeat the simplest method of siege warfare: starvation. During a siege, a surrounding army would build earthworks (a line of circumvallation) to completely encircle its target, preventing food and water supplies from reaching the besieged city. If sufficiently desperate as the siege progressed, defenders and civilians might be reduced to eating anything edible – horses, family pets, the leather from shoes, and even each other. On occasion, the defenders would drive "surplus" civilians out to reduce the demands on stored food and water.

Advances in the prosecution of sieges in ancient and medieval times naturally encouraged the development of a variety of defensive counter-measures. In particular, medieval fortifications became progressively stronger – for example, with the advent of the concentric castle in the period of the Crusades – and more dangerous to attackers – witness the increasing use of machicolations and murder-holes, as well as the preparation of hot or incendiary substances. Arrow slits (also called arrow loops or loopholes), sally ports (airlock-like doors) for sallies, and deep-water wells were also integral means of resisting siege at this time. Particular attention was paid to defending entrances, with gates protected by drawbridges, portcullises, and barbicans. Moats and other water defenses, whether natural or augmented, were also vital to defenders.

In the European Middle Ages, virtually all large cities had city walls – Dubrovnik in Dalmatia is an impressive and well-preserved example – and more important cities had citadels, forts, or castles. Great effort was expended to ensure a good water supply inside the city in case of siege; in some cases, long tunnels were constructed to carry water into the city. Complex systems of underground tunnels were used for storage and communications in medieval cities like Tábor in Bohemia (similar to those used much later in Vietnam during the Vietnam War). Until the invention of gunpowder-based weapons (and the resulting higher-velocity projectiles), the balance of power and logistics definitely favored the defender. With the invention of gunpowder, cannon, and later mortars and howitzers, the traditional methods of defense became less and less effective against a determined siege.

City walls and fortifications were essential for the defense of the first cities. Settlements in the Indus Valley Civilization were often fortified. By about 3500 B.C.E., hundreds of small farming villages dotted the Indus floodplain, and many of these settlements had fortifications and planned streets.
The stone and mud-brick houses of Kot Diji were clustered behind massive stone flood dikes and defensive walls, since neighboring communities quarreled constantly over the control of prime agricultural land. The Minoan civilization on Crete probably relied more on the defense of its outer borders or seashores. Unlike the Minoans, the Mycenaean Greeks emphasized fortifications alongside the natural defenses of mountainous terrain, such as the massive 'Cyclopean' walls built at Mycenae during the last half of the second millennium B.C.E.

In the ancient Near East, city walls may have served the dual purpose of defense and of showing presumptive enemies the might of the kingdom. The great walls surrounding the Sumerian city of Uruk gained a widespread reputation: they were 6 miles (9.7 km) in length and rose up to 40 feet (12 m) in height. Later, the walls of Babylon, reinforced by towers and moats, gained a similar reputation. In Anatolia, the Hittites built massive stone walls around their cities, taking advantage of the hillsides. In Shang Dynasty China, at the site of Ao, large walls were erected in the fifteenth century B.C.E. that were 65 feet (20 m) wide at the base and enclosed an area of some 2,100 square yards (1,800 m²). The Chinese capital of the State of Zhao, Handan (founded in 386 B.C.E.), had walls that were likewise 65 feet (20 m) wide at the base and 50 feet (15 m) high, with two separate sides of its rectangular enclosure measuring 1,530 square yards (1,280 m²).

Although there are depictions of sieges from the ancient Near East in the historical sources and in ancient Near Eastern art, there are very few examples of siege systems (towers, trenches, and associated weaponry) that have been found archaeologically. Of the few examples, several are noteworthy:
- The late-ninth-century B.C.E. siege system surrounding Tell es-Safi/Gath, Israel. This system, which consists of a 1.6-mile (2.6 km) siege trench, towers, and other elements – the earliest evidence of a circumvallation system known in the world – was apparently built by Hazael of Aram-Damascus as part of his siege and conquest of Philistine Gath, mentioned in II Kings 12:18.
- The late-eighth-century B.C.E. siege system surrounding the site of Lachish (Tell el-Duweir) in Israel. This system, built by Sennacherib of Assyria in 701 B.C.E., is not only evident in the archaeological remains but is also described in the Assyrian and biblical sources and in the reliefs of Sennacherib's palace in Nineveh.

The earliest representations of siege warfare date to the Protodynastic Period of Egypt, c. 3000 B.C.E. These show the symbolic destruction of city walls by divine animals wielding hoes. The first siege equipment is known from Egyptian tomb reliefs of the twenty-fourth century B.C.E., showing Egyptian soldiers storming Canaanite town walls on wheeled siege ladders. Later Egyptian temple reliefs of the thirteenth century B.C.E. portray the violent siege of Dapur, a Syrian city, with soldiers climbing scaling ladders supported by archers. Assyrian palace reliefs of the ninth to seventh centuries B.C.E. display sieges of several Near Eastern cities. Though a simple battering ram had come into use in the previous millennium, the Assyrians improved siege warfare, building huge wooden tower-shaped battering rams with archers positioned on top.
In ancient China, sieges of city walls (along with naval battles) were portrayed on bronze hu vessels dated to the Warring States period (fifth to third centuries B.C.E.), like those found in Chengdu, Sichuan, in 1965.

Although there are numerous ancient accounts of cities being sacked, few contain any clues as to how this was achieved. Some popular tales exist about how cunning heroes succeeded in their sieges. The best known is the Trojan Horse of the Trojan War, and a similar story tells how the Canaanite city of Joppa was conquered by the Egyptians in the fifteenth century B.C.E. The Biblical Book of Joshua contains the story of the miraculous Battle of Jericho. A detailed historical account from the eighth century B.C.E., the Piankhi stela, records how the Nubians laid siege to and conquered several Egyptian cities using battering rams, archers, and slingers, and by building causeways across moats.

Alexander the Great's Macedonian army successfully besieged many powerful cities during his conquests. Two of his most impressive achievements in siegecraft took place at the Siege of Tyre and at the Sogdian Rock. Most conquerors before him had found Tyre, a Phoenician island-city about 0.6 miles (0.97 km) from the mainland, impregnable. The Macedonians built a mole – a raised spit of earth across the water – by piling stones up on a natural land-bridge that extended underwater out to the island. Then his engineers built a causeway, and the soldiers pushed siege towers housing stone-throwers and light catapults up to bombard the city walls. Though the Tyrians rallied by sending a fire ship to destroy the towers, and captured the mole in a swarming frenzy, the city eventually fell to the Macedonians after a seven-month siege. In contrast to Tyre, the Sogdian Rock was captured by stealthy attack: Alexander used commando-like tactics to scale the cliffs and capture the high ground, and the demoralized defenders surrendered.

The importance of siege warfare in the ancient period should not be underestimated. One of the contributing causes of Hannibal's inability to defeat Rome was his lack of siege training; thus, while he was able to defeat Roman armies in the field, he was unable to capture Rome itself. The legionary armies of the Roman Republic and Empire are noted as being particularly skilled and determined in siege warfare. An astonishing number and variety of sieges, for example, formed the core of Julius Caesar's mid-first-century B.C.E. conquest of Gaul (modern France). In his Gallic Wars, Caesar describes how at the Battle of Alesia the Roman legions created two huge fortified walls around the city. The inner circumvallation, 10 miles (16 km) long, held in Vercingetorix's forces, while the outer contravallation kept relief from reaching them. The Romans held the ground between the two walls. The besieged Gauls, facing starvation, eventually surrendered after their relief force met defeat against Caesar's auxiliary cavalry.

In the Middle Ages, the Mongol Empire's campaigns against China (then comprising the Western Xia Dynasty, Jin Dynasty, and Southern Song Dynasty), from Genghis Khan until Kublai Khan – who eventually established the Yuan Dynasty in 1271 – were extremely effective, allowing the Mongols to sweep through large areas. Even when they could not easily enter the better-fortified cities, the Mongols brought massed siege equipment to bear; at Hulegu's siege of Aleppo, for example, they used 20 catapults against the Bab al-Iraq (Gate of Iraq) alone.
There are several episodes in which the Mongols constructed a large number of siege machines in order to surpass the number possessed by the defending city. Another Mongol tactic was to use catapults to launch the corpses of plague victims into besieged cities. The disease-carrying fleas from the bodies would then infest the city, and the plague would spread, allowing the city to be easily captured – although this transmission mechanism was not known at the time. Some psychological warfare tactics were also used: on the first night while laying siege to a city, the leader of the Mongol forces would command from a white tent – if the city surrendered, all would be spared. On the second day, he would use a red tent – if the city surrendered, the men would all be killed, but the rest would be spared. On the third day, he would use a black tent – no quarter would be given.

However, the Chinese were not defenseless, and from 1234 until 1279 C.E. the Southern Song Chinese held out against the enormous barrage of Mongol attacks. Much of this success in defense lay in the world's first use of gunpowder (with early flamethrowers, grenades, firearms, cannons, and land mines) to fight back against the Khitans, the Tanguts, the Jurchens, and then the Mongols. The Chinese of the Song period also discovered the explosive potential of packing hollowed cannonball shells with gunpowder. Later, during the Ming Dynasty (1368–1644 C.E.), the Chinese remained very concerned with city planning in regard to gunpowder warfare: the siting and thickness of the walls of Beijing's Forbidden City were favored by the Yongle Emperor (r. 1402–1424) because they were positioned to resist cannon volleys and were built thick enough to withstand attacks from cannon fire.

The introduction of gunpowder and the use of cannons brought about a new age in siege warfare. Cannons were first used in Song Dynasty China during the early thirteenth century, but did not become significant weapons for another 150 years or so. By the sixteenth century, they were an essential and regularized part of any campaigning army, or of any castle's defenses. The greatest advantage of cannons over other siege weapons was the ability to fire a heavier projectile further, faster, and more often than previous weapons. They could also fire projectiles in a straight line, so that they could destroy the bases of high walls. Thus, "old-fashioned" walls – high and relatively thin – were excellent targets and, over time, easily demolished. In 1453, the great walls of Constantinople were broken through in just six weeks by the 62 cannon of Ottoman Sultan Mehmet II's army.

Leonardo da Vinci, the Italian polymath, was renowned as an engineer and inventor. In Venice in 1499 he devised a system of movable barricades to protect the city from attack, and he also had a scheme for diverting the flow of the Arno River in order to flood Pisa. His journals include a vast number of inventions, both practical and impractical, among them hydraulic pumps, reversible crank mechanisms, finned mortar shells, and a steam cannon.

The castles that in earlier years had been formidable obstacles were easily breached by the new weapons. For example, in Spain, the newly equipped army of Ferdinand and Isabella was able to conquer Moorish strongholds in Granada in 1482–1492 that had held out for centuries before the invention of cannon. Once siege guns were developed, the techniques for assaulting a town or a fortress became well known and ritualized.
The attacking army would surround a town and demand its surrender. If the defenders did not comply, the besieging army would surround the town with temporary fortifications to stop sallies from the stronghold or relief from getting in. The attackers would next build a length of trenches parallel to the defenses, just out of range of the defending artillery. They would then dig a trench towards the town in a zigzag pattern, so that it could not be enfiladed by defending fire. Once within artillery range, another parallel trench would be dug, with gun emplacements. Using the first guns' fire for cover if necessary, this process would be repeated until the guns were close enough to be laid accurately to make a breach in the fortifications. In order to allow support troops to get close enough to exploit the breach, more zigzag trenches could be dug even closer to the walls, with more parallel trenches to protect and conceal the attacking troops. After each step in the process, the besiegers would ask the besieged to surrender. If attacking troops stormed the breach successfully, the defenders could expect no mercy.

In the fifteenth century, the Italian architect Leon Battista Alberti wrote a treatise entitled De re aedificatoria, which theorized methods of building fortifications capable of withstanding the new guns. He proposed that walls be "built in uneven lines, like the teeth of a saw," and he proposed star-shaped fortresses with low, thick walls. However, few rulers paid any attention to his theories. A few towns in Italy began building in the new style late in the 1480s, but it was only with the French invasion of the Italian peninsula in 1494–1495 that the new fortifications were built on a large scale. Charles VIII invaded Italy with an army of 18,000 men and a horse-drawn siege train, and as a result he could defeat virtually any city or state, no matter how well defended. In a panic, military strategy was completely rethought throughout the Italian states of the time, with a strong emphasis on the new fortifications that could withstand a modern siege.

The most effective way to protect walls against cannon fire proved to be depth (increasing the width of the defenses) and angles (ensuring that attackers could only fire on walls at an oblique angle, not square on). Initially, walls were lowered and backed, in front and behind, with earth. Towers were reformed into triangular bastions. This design matured into the trace italienne. Star-shaped fortresses surrounding towns, and even cities, with outlying defenses proved very difficult to capture, even for a well-equipped army. Fortresses built in this style throughout the sixteenth century did not become fully obsolete until the nineteenth century, and some were still in use throughout World War I (though modified for twentieth-century warfare).

However, the cost of building such vast modern fortifications was incredibly high, and was often too much for individual cities to undertake. Many were bankrupted in the process of building them; others, such as Siena, spent so much money on fortifications that they were unable to maintain their armies properly, and so lost their wars anyway. Nonetheless, innumerable large and impressive fortresses were built throughout northern Italy in the first decades of the sixteenth century to resist the repeated French invasions that became known as the Italian Wars. Many stand to this day.

The Siege of Vienna in 1529 C.E.
was the first attempt by the Ottoman Empire, led by Sultan Suleiman I, to capture the city of Vienna, Austria. In this case, the city walls of Vienna presented an impressive bulwark to the opposing forces. Traditionally, the siege has held special significance in Western history, marking the Ottoman Empire's high-water mark.

In the 1530s and 1540s, the new style of fortification began to spread out of Italy into the rest of Europe, particularly to France, the Netherlands, and Spain. Italian engineers were in enormous demand throughout Europe, especially in war-torn areas such as the Netherlands, which became dotted with towns encircled in modern fortifications. For many years, defensive and offensive tactics were well balanced, leading to protracted and costly wars that required ever more planning and government involvement. The new fortresses ensured that war rarely extended beyond a series of sieges. Because the new fortresses could easily hold 10,000 men, an attacking army could not ignore a powerfully fortified position without serious risk of counterattack. As a result, virtually all towns had to be taken, and that was usually a long, drawn-out affair, potentially lasting from several months to years, while the inhabitants of the town starved. One such bloody battle was the Siege of Malta (also known as the Great Siege of Malta), which took place in 1565 when the Ottoman Empire invaded the island, then held by the Knights Hospitaller. The Knights won the ensuing battle, but at a heavy cost in casualties. Most battles in this period were between besieging armies and relief columns sent to rescue the besieged.

At the end of the seventeenth century, Marshal Vauban, a French military engineer, developed modern fortification to its pinnacle, refining siege warfare without fundamentally altering it: ditches would be dug; walls would be protected by glacis; and bastions would enfilade an attacker. He was also a master of planning sieges. Before Vauban, sieges had been somewhat slapdash operations; Vauban refined besieging into a science, with a methodical process that, if uninterrupted, would break even the strongest fortifications.

Planning and maintaining a siege is just as difficult as fending one off. A besieging army must be prepared to repel both sorties from the besieged area and any attack that may try to relieve the defenders. It was thus usual to construct lines of trenches and defenses facing in both directions. The outermost lines, known as the lines of contravallation, would surround the entire besieging army and protect it from attackers; this would be the first construction effort of a besieging army. A line of circumvallation would also be constructed, facing in towards the besieged area, to protect against sorties by the defenders and to prevent the besieged from escaping. The next line, which Vauban usually placed at about 2,000 feet (610 m) from the target, would contain the main batteries of heavy cannons, so that they could hit the target without being vulnerable themselves. Once this line was established, work crews would move forward, creating another line at 820 feet (250 m); this line contained smaller guns. The final line would be constructed only 100 feet (30 m) to 200 feet (61 m) from the fortress. This line would contain the mortars and would act as a staging area for attack parties once the walls were breached. It would also be from here that miners working to undermine the fortress would operate.
The trenches connecting the various lines of the besiegers could not be built perpendicular to the walls of the fortress, as the defenders would have had a clear line of fire along the whole trench. Thus, these lines (known as saps) needed to be sharply jagged.

Advances in artillery eventually made previously impregnable defenses useless. For example, the walls of Vienna that had held off the Turks in the seventeenth century were no obstacle to Napoleon in the early nineteenth. Where sieges occurred (such as the Siege of Delhi and the Siege of Cawnpore during the Indian Rebellion of 1857), the attackers were usually able to defeat the defenses within a matter of days or weeks, rather than the weeks or months previously required. The great Swedish white-elephant fortress of Karlsborg was built in the tradition of Vauban and intended as a reserve capital for Sweden, but it was obsolete before it was completed in 1869.

Railways, when they were introduced, made possible the movement and supply of larger armies than those that had fought in the Napoleonic Wars. This also reintroduced siege warfare, as armies seeking to use railway lines in enemy territory were forced to capture the fortresses that blocked those lines. During the Franco-Prussian War, the battlefield front lines moved rapidly through France. However, the Prussian and other German armies were delayed for months at the Siege of Metz and the Siege of Paris, due to the greatly increased firepower of the defending infantry and the principle of detached or semi-detached forts with heavy-caliber artillery. This resulted in the later construction of fortress works across Europe, such as the massive fortifications at Verdun. It also led to the introduction of tactics which sought to induce surrender by bombarding the civilian population within a fortress rather than the defending works themselves.

The Siege of Sevastopol (1854–1855) during the Crimean War and the Siege of Petersburg, Virginia (1864–1865) during the American Civil War showed that modern citadels, when improved by improvised defenses, could still resist an enemy for many months. The Siege of Pleven during the Russo-Turkish War (1877–1878) proved that hastily constructed field defenses could resist attacks prepared without proper resources, and was a portent of the trench warfare of World War I.

The Siege of Malakand took place between July 26 and August 2, 1897, when Pashtun tribesmen besieged the British garrison in the Malakand region of modern-day Pakistan's Northwest Frontier Province. The tribesmen's lands had been bisected by the Durand Line, the 1,519-mile border between Afghanistan and British India, which was established as a result of Anglo-Russian rivalry in the region. The siege was lifted when the garrison was relieved by British troops with superior arms.

Advances in firearms technology, without the necessary advances in battlefield communications, gradually led to the defense again gaining the ascendancy. An example of a siege from this period, prolonged to 337 days by the isolation of the surrounded troops, was the Siege of Baler, in which a reduced group of Spanish soldiers was besieged in a small church by Philippine rebels, in the course of the Philippine Revolution and the Spanish-American War, until months after the Treaty of Paris had ended the conflict.
Furthermore, the development of steamships gave greater speed to blockade runners – ships whose purpose was to bring cargo, including food, to cities under blockade, as with Charleston, South Carolina, during the American Civil War.

Mainly as a result of the increasing firepower (such as machine guns) available to defensive forces, First World War trench warfare briefly revived a form of siege warfare. Trench warfare utilized many of the techniques of siege warfare (sapping, mining, barrage and, of course, attrition), but on a much larger scale and on a greatly extended front. More traditional sieges of fortifications also took place in addition to trench sieges. The Siege of Tsingtao was one of the first major sieges of the war, but the impossibility of significantly resupplying the German garrison made it a relatively one-sided battle. The Germans and the crew of an Austro-Hungarian protected cruiser put up a hopeless defense and, after holding out for more than a week, surrendered to the Japanese, forcing the German East Asia Squadron to steam towards South America for a new coal source.

The other major siege outside Europe during the First World War was in Mesopotamia, at the Siege of Kut. After a failed attempt to move on Baghdad, stopped by the Ottomans at the bloody Battle of Ctesiphon, the British and their large contingent of Indian sepoy soldiers were forced to retreat to Kut, where the Ottomans under the German general Baron Colmar von der Goltz laid siege. British attempts to resupply the force via the Tigris river failed, and rationing was complicated by the refusal of many Indian troops to eat cattle products. By the time the garrison fell on April 29, 1916, starvation was rampant.

The largest sieges of the war, however, took place in Europe. The initial German advance into Belgium produced four major sieges: the Battle of Liege, the Battle of Namur, the Siege of Maubeuge, and the Siege of Antwerp. All four would prove crushing German victories – at Liege and Namur against the Belgians, at Maubeuge against the French, and at Antwerp against a combined Anglo-Belgian force. The weapons that made these victories possible were the German Big Berthas and the Skoda 305 mm Model 1911 siege mortars on loan from Austria-Hungary. These huge guns were the decisive weapon of siege warfare in the twentieth century, taking part at Przemysl, the Belgian sieges, and on the Italian and Serbian Fronts, and even being reused in World War II.

At the second Siege of Przemysl, the Austro-Hungarian garrison showed an excellent knowledge of siege warfare, not only waiting for relief but sending sorties into the Russian lines and employing an active defense that resulted in the capture of the Russian general Kornilov. Despite its excellent performance, the garrison's food supply had been requisitioned for earlier offensives, a relief expedition was stalled by the weather, ethnic rivalries flared up between the defending soldiers, and a breakout attempt failed. When the commander of the garrison, Hermann Kusmanek, finally surrendered, his troops were eating their horses, and the first attempt at large-scale air supply had failed. The use of aircraft for siege running – bringing supplies to areas under siege – would nevertheless prove useful in many sieges to come.

The largest siege of World War I, and arguably the roughest, most gruesome battle in history, was the Battle of Verdun. The main fortifications were Fort Douaumont, Fort Vaux, and the fortified city of Verdun itself.
The Germans, through the use of huge artillery bombardments, flamethrowers, and infiltration tactics, were able to capture both Fort Vaux and Fort Douaumont, but they were never able to take the city, and eventually lost most of their gains.

In World War II, the Siege of Leningrad lasted over 29 months – about half the duration of the entire war. Along with the Battle of Stalingrad, the Siege of Leningrad on the Eastern Front was the deadliest siege of a city in history. In the West, apart from the Battle of the Atlantic, the sieges were not on the same scale as those on the European Eastern Front; however, there were several notable or critical sieges: the island of Malta, for which the population won the George Cross, Tobruk, and Monte Cassino. In the South-East Asian Theatre there was the siege of Singapore, and in the Burma Campaign the sieges of Myitkyina, the Admin Box, and the Battle of the Tennis Court, which was the high-water mark for the Japanese advance into India. The airbridge methods developed and used extensively in the Burma Campaign for supplying the Chindits and other units, including those in sieges such as Imphal, as well as for flying the Hump into China, allowed the Western powers to develop the air-lift expertise that would prove vital during the Cold War Berlin Blockade.

During the Vietnam War, the battles of Dien Bien Phu (1954) and Khe Sanh (1968) possessed siege-like characteristics. In both cases, the Vietminh and the National Front for the Liberation of South Vietnam (NLF) were able to cut off the opposing army by capturing the surrounding rugged terrain. At Dien Bien Phu, the French were unable to use air power to overcome the siege and were defeated. However, at Khe Sanh a mere 14 years later, advances in air power allowed the United States to withstand the siege. The resistance of U.S. forces was also assisted by the North Vietnamese Army (PAVN) and PLAF forces' decision to use the Khe Sanh siege as a strategic distraction, allowing their mobile warfare offensive – the first Tet Offensive – to unfold securely. The Siege of Khe Sanh displays typical features of modern sieges: the defender has a greater capacity to withstand siege, and the attacker's main aim is to bottle up operational forces or create a strategic distraction, rather than bring the siege to a conclusion.

Still, sieges have continued through the end of the twentieth century and into the twenty-first. From 1980 to April 11, 1991, during the Soviet war in Afghanistan and the subsequent Afghan Civil War, the city of Khost was under siege for more than 11 years; it is considered the longest siege in modern history. The Siege of Sangin lasted from June 2006 to April 2007, during which time Taliban insurgents attempted to besiege the district center of Sangin District in Helmand Province, Afghanistan, occupied by British International Security Assistance Force (ISAF) soldiers.

Despite the overwhelming might of the modern state, siege tactics continue to be employed in certain police conflicts, such as hostage situations. This is due to a number of factors, primarily risk to life – whether that of the police, the besieged, bystanders, or hostages. Police make use of trained negotiators, psychologists and, if necessary, force, and can generally rely on the support of their nation's armed forces if required. The 1993 police siege of the Branch Davidian church in Waco, Texas, lasted 51 days – an atypically long police siege.
Unlike traditional military sieges, police sieges tend to last for hours or days rather than weeks, months, or years. One of the complications facing police in a siege involving hostages is Stockholm syndrome, whereby hostages can sometimes develop a sympathetic rapport with their captors. If this helps keep them safe from harm, it is considered a good thing, but there have been cases where hostages have tried to shield the captors during an assault or have refused to cooperate with the authorities in bringing prosecutions.
Educational buildings can vary vastly in scale and capacity, from small village nurseries, through schools that cater for hundreds of students, to universities whose campuses can be like self-contained cities. What they have in common, however, is that each of them contains a combination of students who are there to learn, staff who are there to teach, and facilities support staff who are there to ensure both of these activities can happen now and into the future.

Research demonstrates that attentional capacity (which is essential for cognitive functioning) is restored when children and students engage with nature. This means they are less easily distracted and are more able to manage daily tasks. However, many educational institutions are reducing outdoor space and land, which is either being sold to raise funds or developed to accommodate increasing numbers of students. If we as architects and interior designers want to apply Biophilic principles in educational settings, we need to consider how spaces can incorporate natural elements, or take advantage of limited exterior spaces.

There are a number of key recommendations for incorporating Biophilic Design into education spaces:

Increase natural light
Maximise the use of skylights, windows and reflective surfaces – research shows that optimising exposure to daylight alone can increase the speed of learning by 20-26% (Wells & Evans, 2003). It can also improve attendance by an average of 3.5 days/year and test scores by 5-14%.

Create views out to nature
These need to be at appropriate heights for the students and staff – whether to gardens, courtyards with planters, or window boxes. In one study (of office workers), those with views of vegetation performed 10-25% better in mental function and memory recall tests, while other studies have suggested that, for younger children, regular experiences in nature can reduce the impact of ADHD.

Introduce indoor plants
Plants can be a highly effective addition; trials have found that plants in classrooms can lead to improved performance in spelling, mathematics and science of 10-14%. Features such as green walls can enhance a learning space visually, improve air quality (which helps concentration), and reduce distracting noise to improve acoustics in education spaces.

Include natural elements
Where possible, use natural materials such as tactile wooden furniture, exposed beams and stonework to stimulate the sense of touch. Tactile stimulation can be used to reduce stress, to energize or to relax (Spence, 2010).

Incorporate references to nature
Where a primary experience of natural elements is not available, the use of natural textures, patterns, colours and images in floor and wall coverings as a secondary alternative to the real thing has been shown to aid psychological recuperation.

Create safe spaces
Define zones according to activities, e.g. for focus and productivity, or relaxation and restoration. Restorative time away from studies or work can enhance productivity. Muted colours, soft furnishings and low lighting can be used to create retreats from the activity of the day that will revive staff and students alike.

Human-focused environments which take well-being into consideration will improve the daily experience of their users, students and staff alike. For institutions, incorporating Biophilic principles can have numerous benefits:
- Day-lit spaces have reduced energy consumption and costs.
- An improved educational experience is more likely to lead to ongoing engagement with education – benefitting the student, the institution and the economy as a whole.
- Improving the working environment for staff can reduce staff turnover and the subsequent replacement costs.

As educational institutions are increasingly becoming privatised, the distinction between the design of private and public sector buildings is dissolving. At the same time, businesses at the vanguard of design, such as Apple, Google, and Amazon, are beginning to refer to their headquarters as "campuses". I suggest that architects and designers take inspiration from this new breed of leading business campuses – where the aim is optimal productivity and innovation, health and well-being – as a demonstration of how to incorporate Biophilic Design into future education settings.

Have you had a positive education experience in a particular space? What was it about the space that made it such a good space to learn in? I'd like to hear your thoughts on this emerging field of design, incorporating nature into education.
How soils are under threat Around the world, man manipulates his environment for food production and other objectives. When soils are managed sustainably, their quality is maintained over time or even increases. Yet, if not managed adequately, soils easily degrade. Overexploitation, overgrazing, inappropriate clearing techniques and unsuitable land use practices have resulted in severe nutrient decline, water and wind erosion, compaction and salinization. At present, soil is a threatened natural resource: some 17% of the land surface has already been strongly degraded, and the affected area is still growing, even though there is a wealth of know-how related to land management, improvement of soil fertility, and protection of soil resources. The resulting decrease in productivity has especially affected marginally suitable lands that were not given the opportunity to recuperate for a sufficiently long time after prolonged cultivation. This is often the result of a lack of resources, such as the labour or inputs needed to maintain soil quality, and is frequently related to increasing population pressure that limits the area available for cultivation. Cultivating the soil always results in a decline of fertility: part of the nutrients taken up by the plants is removed at harvest. To keep the soil productive, its nutrient levels need to be replenished on a regular basis. Crop residues should be returned to the soil to maintain or even increase its organic matter status. A sufficiently high organic matter level is important because it increases soil stability, soil water holding capacity, and nutrient holding capacity and supply. However, additional organic and/or inorganic fertilization is indispensable for restoring and maintaining the optimal productivity of the soil.
The Fixed Mindset Testing Culture in Math This video explains how the current testing culture in math makes students afraid of making mistakes and enforces a fixed mindset.
- Constant testing focuses on right or wrong. This makes students afraid of making mistakes and discourages them from diving into material more deeply.
- Timed tests convey that getting answers quickly is more important than fully understanding concepts, and they cause math anxiety.
- For ideas on teaching automaticity and number sense, visit youcubed.org/category/teaching-ideas/number-sense/.
- To see similar videos about growth mindset in math, sign up for Professor Jo Boaler's course, How to Learn Math, and check out youcubed.org.
Time of year: Anytime Class period: Anytime
by Anne of anne_rats Air enters the rat's nostrils and flows past a patch of skin rich with smell receptors called the olfactory epithelium. Here are olfactory neurons, which are tipped with little hair-like cilia that project into a thin bath of mucus at the cell surface. Odor particles in the air, called odorants, bind to special receptors on the cilia of the olfactory neurons, and their binding triggers a neural response that shoots up to the brain. Incredibly, there are between 500 and 1,000 types of olfactory receptors, coded for by between 500 and 1,000 genes! That is a staggering number of genes, about 1% of the rat's DNA. That means that in rats, one out of every 100 genes is involved in the detection of odors. This jaw-dropping number of genes involved in olfaction gives an idea of how important the sense of smell is to a rat! (article on finding these odorant receptors) Photo courtesy of R. A. of the Dapper Rat

The message from the olfactory neurons speeds along a pathway to the olfactory bulbs, which are stem-like projections from the forebrain. The olfactory bulb is covered with about 2,000 tiny basketlike structures, each the diameter of a human hair, called glomeruli. The glomeruli are the basic units of olfactory perception. Each glomerulus is tuned to a specific odorant (odor molecule). So, a different pattern of glomeruli is activated when a rat smells different odors, such as bananas, caraway seed, spearmint, and the complex smell of peanut butter! These activation maps also change when the odor concentration increases, and when highly similar odors are presented in sequence, such as a series of aldehydes that differ only in their number of carbon atoms, which demonstrates how rats can discriminate between subtly different odors. At this page, you can type in a chemical compound and see the corresponding map of activated glomeruli in a rat's olfactory bulb! During olfactory learning, this odor-induced pattern of activity in the olfactory bulb is sharpened, such as when a newborn rat learns its mother's unique odor (Brennan & Keverne 1997).

Rats have a second way to detect odors, called the vomeronasal organ, or VNO. The VNO in mammals is situated in a pouch off the nasal cavity. In rats, the VNO is located in a cigar-shaped passage in the floor of the nasal cavity, right next to the septum, with a narrow opening just inside the nostril. This dead-end position means that air can't flow through it the way it flows over the olfactory epithelium of the nose (Agosta 1992). When rats sniff and lick, molecules from the environment stick to the moist nose and dissolve, and are then transported to the VNO suspended in mucus. The VNO dilates and constricts to pump the odor-bearing liquid inside rapidly (more on the VNO). The vomeronasal organ primarily detects pheromones, chemical signals transmitted between members of the same species. It specializes in nonvolatile chemicals found in the urine and other secretions (Brennan 2001), though it does detect some volatile pheromonal compounds as well (Trinh and Storm 2003, Zufall et al. 2002). Unlike the vast numbers of receptors in the olfactory epithelium, there are only 30-100 kinds of olfactory receptors in the VNO, and only one or a few per cell. The messages from the vomeronasal organ shoot up a separate pathway to the accessory olfactory bulbs, and from there to the amygdala, then to both the preoptic area and the hypothalamus, areas known to be involved in reproductive behavior (see Meredith, FSU Neuroscience Program, for a summary).
The VNO is critical in chemical communication between animals -- mate attraction, courtship, copulation, aggression, and parental care are all mediated by the VNO (Bradbury and Vehrencamp 1998). Chemical signals are found in all sorts of secretions, such as urine, feces, and secretions from the skin glands. They are picked up by sniffing or licking an individual, or through odors that have been deposited on the ground or volatilized into the air. One of the most familiar methods of chemical communication in rats is urine marking. Sexually mature males are the most prolific urine markers, though sexually mature females may show some urine marking as well, especially on the night before they come into heat (Calhoun 1962, p. 151). Urine marking is therefore considered an advertisement of one's presence and a sex attractant -- adult males advertise and females choose their mate from among the advertisers (Doty 1974). Female urine marking may be an advertisement of sexual receptivity.

Chemical secretions contain an enormous amount of information (Agosta 1992). Through odors contained in secretions such as urine, rodents can learn a great deal about the animal that produced the odor; urine contains all sorts of highly personal information! (References: Agosta 1992, Brown 1975 & 1977, Giesecke 1997, Mackay-Sim 1980.)

Chemical signals may even lead to physiological changes in the recipient. Urine contains pheromones that accelerate or decelerate puberty in immature females, pheromones that influence the timing of the estrus cycle, and pheromones that cause a male to mount a receptive female. Specifically, male odor accelerates female puberty, while female odor delays it and suppresses estrus in sexually mature females. The odor of a strange male may also cause a newly pregnant mouse to reabsorb her litter (Agosta 1992).

Chemical signaling plays an essential role in animal communication. Such chemical signals are involved in all aspects of courtship and mating, aggression, parental behavior, and foraging.

Courtship and mating behavior: Chemical signals are essential for the proper performance of courtship and mating behavior (Larsson 1971). For example, if males don't have access to information from their nose or vomeronasal organ, they cannot mate. Visual and auditory cues are not enough (Sachs 1997). Sexually experienced males may get by with either the nose or the VNO, but inexperienced males are severely impaired if their VNO is hampered. If the VNO is removed in female mice, male odor no longer accelerates puberty, adult females no longer influence each others' estrus cycles, and foreign males no longer cause miscarriage in recently impregnated females (Meredith, of FSU). Female odor also enhances sperm production in dominant males, but not in subordinates (Koyama 2000). Females require olfactory cues for normal reproductive behavior as well (Aron 1979).

Olfaction is used in aggression, too. Lactating mother rats, who usually display increased aggression toward strange adult rats, do not do so if their sense of smell is impaired (Kolunie & Stern 1995, Ferreira et al. 1987). Male rats usually display some aggression toward each other, and this inter-male aggression is normally increased if they can smell a receptive female. However, if the males' sense of smell is impaired, they become less aggressive toward each other and there is little increase in aggression when they are presented with the odor of a receptive female (Bergvall et al. 1991, Cain 1974).
Odor is critical for newborn animals, as they locate their mother's teats through smell. After birth, mothers spread their amniotic fluid onto their teats, and the infants crawl toward the familiar smell and latch on to the nipples. If the mother is prevented from licking and spreading the secretions on herself, the babies cannot find her teats, but they can find her teats again if the mother's saliva or amniotic fluid is brushed back on (Teicher & Blass, 1977). A few days later, the pups are attracted to the smell of their own saliva. Wash the nipple and they stop suckling; spread infant saliva around and they nurse again (Teicher & Blass, 1976).

Odor is critical for maternal behavior. Impair a mother's sense of smell, and her parental care decreases drastically, leading to the death of many of her pups (Kolunie & Stern 1995, Fleming 1971). Newborn rats learn about what to eat later in life through odor cues received through the mother's milk. Later in life, when rats forage independently, their food choices are influenced by the scent of foods recently eaten, carried on the fur, whiskers, and breath of other rats (Galef 1996).

Keeler (1942) found that albino rats took twice as long to back away from a pungent-smelling piece of garlic as normally pigmented rats (9.87 seconds vs. 4.85 seconds). In another experiment with different rats, Keeler found that pigmented rats backed away from a piece of garlic after 7 seconds, but albino rats never backed away at all. In fact, after 15 seconds three of the albinos in the experiment actually tested the garlic with their teeth to see if it was edible. Sachs (1996) tested male albino and pigmented rats' response to the remote cues of a female rat in heat. Sachs used a testing chamber that was divided in half by two wire mesh screens separated by a few centimeters, so the male rat on one side could see and smell a female in heat on the other side but could not touch her. Sachs found that 83% of pigmented male rats became aroused, but only 4% of albino rats did. In conclusion, albino rats appear to have a dulled sense of smell and/or reduced responsiveness to olfactory cues.
A new way of keeping time and sending time-based signals around the globe took a step forward in a new European test. Atomic clocks based on the oscillations of a cesium atom keep amazingly steady time and also define the precise length of a second. But cesium clocks are no longer the most accurate. That title has been transferred to an optical clock housed at the U.S. National Institute of Standards and Technology (NIST) in Boulder, Colo., that can keep time to within 1 second in 3.7 billion years. Before this newfound precision can redefine the second, or lead to new applications like ultra-precise navigation, the system used to communicate time around the globe will need an upgrade. Recently scientists from the Max Planck Institute of Quantum Optics, in the south of Germany, and the Federal Institute of Physical and Technical Affairs in the north have taken a first step along that path, successfully sending a highly accurate clock signal across the many hundreds of kilometers of countryside that separate their two institutions. The researchers will present their finding at the Conference on Lasers and Electro-Optics taking place May 6-11 in San Jose, Calif. "Over the last decade a new kind of frequency standard has been developed that is based on optical transitions, the so-called optical clock," says Stefan Droste, a researcher at the Max Planck Institute of Quantum Optics. The NIST optical clock, for example, is more than one hundred times more accurate than the cesium clock that serves as the United States' primary time standard. Extremely precise time keeping—and the ability to communicate the world time standard across long distances—is vital to myriad applications, including navigation, international commerce, seismology, and fundamental quantum physics. Unfortunately, the satellite-based links currently used to communicate that standard are not up to the task of transmitting such a stable signal, so the second retains its less precise measure. Optical fiber links could work better, but had previously been tested only over short distances, such as those separating buildings on the same campus or within the same urban area. "The average distance between institutes that operate frequency standards in Europe is on the order of a few thousand kilometers," notes Droste. "Spanning these great distances with an optical link is challenging not only because of the additional degradation of the transferred signal, but also because multiple signal conditioning stations need to be installed and operated continuously along the link path." Droste and his colleagues were able to overcome the challenges by installing nine signal amplifiers along a 920-kilometer-long fiber link. They successfully transferred a frequency signal with more than 10 times the accuracy that today's most precise optical clocks would require.
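To put "1 second in 3.7 billion years" in perspective, it corresponds to a fractional timing uncertainty of roughly (a back-of-the-envelope check, using about 3.156 × 10^7 seconds per year):

\frac{\Delta t}{t} \approx \frac{1\,\text{s}}{(3.7\times 10^{9}\,\text{yr})(3.156\times 10^{7}\,\text{s/yr})} \approx 8.6\times 10^{-18}

That is, the clock's frequency is stable to a few parts in 10^18, which is why ordinary satellite time-transfer links cannot keep up with it.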
Post by Sarah Schmer, J.D. Expected, May 2019, Elisabeth Haub School of Law at Pace University

How do indigenous peoples' rights fit into the international legal space, and do indigenous peoples have a right to bring claims at the international level and in international courts? A pressing issue for indigenous peoples has been violations of land safety and security. Indigenous peoples have been bringing claims at the international level since 1920, and did so before the first international body to act on indigenous issues: the International Labor Organization ("ILO"). The rights of indigenous peoples have overlapped with many other human rights, and States have avoided defining the international legal rights of indigenous peoples, because those rights are not framed in specific indigenous peoples' rights treaties, but are part of more general international instruments, like the Universal Declaration of Human Rights or the Convention on the Prevention and Punishment of the Crime of Genocide. Indigenous rights stem from various branches of international law, including international human rights law, international labor law, and international environmental law. International and regional human rights jurisprudence has further advanced the application of key indigenous peoples' rights in the conservation of land and natural resources.

In 2007, the General Assembly adopted the UN Declaration on the Rights of Indigenous Peoples, and by 2010, it was supported by the clear majority of United Nations Member States and opposed by none. It is applicable to indigenous human rights, thereby helping reverse the vast historical exclusion of indigenous peoples from the international legal system. The Declaration is the most comprehensive instrument detailing the rights of indigenous peoples in international law and policy, containing minimum standards for the recognition, protection and promotion of these rights. Indigenous rights have not been properly acknowledged; this is unfortunate, as the rights recognized in the Declaration are far from being realized.

According to the UN High Commissioner for Refugees, the rights of indigenous peoples have recently been violated in the name of conservation. Indigenous land rights have been at issue, especially as articles of the UN Declaration have been violated for the sake of legally accepted methods of conservation. Article 8(2)(b) requires States to provide effective mechanisms for prevention of, and redress for, any action which has the aim or effect of dispossessing indigenous peoples of their lands, territories or resources. Furthermore, conservationist agendas have resulted in several indigenous relocations, a blatant violation of the UN Declaration. A few examples are as follows: the Ethiopian government under its "villagization" program, the Ogiek and Sengwer peoples of Kenya, and the case of the Lubicon Cree in Canada.

Article 10 states: "Indigenous peoples shall not be forcibly removed from their lands or territories. No relocation shall take place without the free, prior and informed consent of the indigenous peoples concerned and after agreement on just and fair compensation and, where possible, with the option of return." [Emphasis added]. Article 10 has been violated where states have taken conservationist actions without obtaining the free, prior and informed consent of the indigenous communities concerned. Examples of these violations are stated above, especially in the case of the Lubicon Cree, regarding the land claim agreement and the need to consult with the Lubicon prior to any resource exploration or exploitation.
Article 25 of the Declaration states: "Indigenous peoples have the right to maintain and strengthen their distinctive spiritual relationship with their traditionally owned or otherwise occupied and used lands, territories, waters and coastal seas and other resources and to uphold their responsibilities to future generations in this regard." While international law does not generally apply retroactively, and lands that were taken prior to the UN Charter, the ILO treaty and the UN Declaration do not have to be returned to indigenous peoples under national law, these provisions should inhibit countries from taking sacred lands today. This article has been disregarded: the spiritual relationship and links indigenous peoples have with the plants, trees and animals on their lands, and their view of protecting their lands as a sacred duty, have not been respected. For example, these conflicts between conservation policies and indigenous communities (as of 2016, after the UN Charter, ILO treaty and UN Declaration) have arisen in the experiences of the Sengwer and Ogiek peoples in Kenya. Cherangani Hills in western Kenya is home to several indigenous peoples, including the Sengwer community. However, Kenya's conservation policies have resulted in the alienation of indigenous peoples from their lands. Milka Chepkorir Kuro, a Sengwer herself and a 2016 participant in the UN Human Rights Office's Indigenous Fellowship Program, stated: "We have been facing a lot of human rights violations, forceful evictions from our forest homes…and as a result we do not have a place where we can sit and say 'This is our home.'"

Despite these connections and spiritual meanings, the legal world does not recognize these ties as a form of conservation of land and resources, because the conservation community has failed to acknowledge indigenous peoples' contribution to conservation. Society has increasingly acknowledged that ancestral lands of indigenous peoples maintain the most intact ecosystems and provide the most effective and sustainable form of conservation; yet, to date, the important role played by indigenous peoples as environmental guardians has still failed to gain due recognition.

Globally, many indigenous peoples face similar issues. The controversy over the Dakota Access Pipeline in the United States, a seemingly domestic issue, is also of international significance, though this is a little-recognized piece of the dispute. If indigenous issues are not addressed and respected at an international legal level, how can these groups be protected by the international community? These are peoples whose land has been taken in the name of colonialism and expansion; yet their voices seem so faintly heard by the international community.

- Victoria Tauli-Corpuz, Report of the Special Rapporteur of the Human Rights Council on the Rights of Indigenous Peoples
- UN High Commissioner on Refugees, Indigenous Peoples' Rights Violated in the Name of Conservation
- UN Voluntary Fund for Indigenous Peoples
- Indigenous Peoples and the United Nations Human Rights System
- United Nations Declaration on the Rights of Indigenous Peoples
- Does the Dakota Access Pipeline Violate Treaty Law?
Technology has advanced considerably over the last century; the birth of the internet sped up our advancement and indeed our desire for new and improved communication services. But just how much has changed?

Early forms of communication

Early forms of communication mainly centered on the use of imagery rather than words, using these methods to communicate a story or historical event. These communication methods included:
- The Petroglyphs – Early man used stone engravings and drawings to communicate, long before written language appeared.
- The Pictograph – Engraved or painted drawings were used to communicate a story or an event and were very popular around 6000 – 5000 BC.
- The Ideogram – Pictograms evolved into ideograms, graphical symbols that represented an idea; the Egyptians and Aztecs were particularly fond of these early forms of communication.
- Smoke Signals – 150 BC – Chinese soldiers were able to transmit messages in just a few hours with smoke signals along the Great Wall of China.
- The First Handwritten Manuscripts – 301 – 800 AD

Getting more sophisticated

In the mid-1400s communication methods began to get more advanced, with the Gutenberg printing press leading the way and allowing for communications across long distances and languages.
- Semaphore Lines – Invented in 1792 in France by Claude Chappe, semaphore lines were the precursor of the electrical telegraph. The semaphore telegraph was a system of conveying information by means of visual signals, using towers and shutters.
- Morse Code – In 1837 Morse code was developed and patented by Samuel Morse.
- Typewriter – 1800's – The first book said to have been submitted as a typewritten manuscript was The Adventures of Tom Sawyer by Mark Twain. It wasn't until the 1940s that the mechanical typewriter became standardised and indeed electric, gaining widespread popularity as an office essential! By the 1980's the humble typewriter had given way to personal computers and desktop publishing.
- The Telegraph – Patented in May 1837 by Sir William Fothergill Cooke and Charles Wheatstone, the telegraph was created initially to communicate between train stations. In 1845 the Electric Telegraph Company was established, followed by rapid expansion and the era of mass communication. Before the telegraph, a letter by post from London took 12 days to reach New York and 73 days to reach Sydney, Australia.
- The Telephone – Patented in 1876 by Alexander Graham Bell, the telephone was the first device in history that enabled people to talk directly to each other over long distances. It wasn't until the 1960's that telephones began to evolve digitally.
- The Radio – The use of the radio picked up during World War I with its development for military communications, but it wasn't until the 1920's that commercial mass broadcasting began.

Video Killed the Radio Star
- The Television – Developed in 1927, the television didn't become commonplace until after World War II. The colour TV was introduced in the mid 1960's; from there we have seen the television progress into Smart TVs with 3D capability.
- The Internet – Started in 1969 as a US military project that became the foundation for the modern internet, but it wasn't until 1990, when English scientist Tim Berners-Lee developed the World Wide Web, that it began to be commercialised. The internet has grown phenomenally since the 90's and is now a tool that business and the general public alike struggle to live without.
- Search Engines – The first major commercial search engine went live in 1994 and was called Lycos; soon after, many search engines appeared, including AltaVista and Yahoo, but it wasn't until 2000 that Google's search engine rose to prominence. Search engines are one of the most useful communication tools out there.
- Wikis – Believe it or not, the first wiki was created in 1994 in Portland, Oregon. Wikipedia is the most well-known wiki site and contains useful information about pretty much any subject known to man.

Let's get digital
- The Pager – Originally developed as a professional tool allowing business people to stay connected to the office, this device quickly became a social tool. A precursor to the modern mobile phone, this form of communication had a brief window of time until the mobile phone eventually took over and replaced it. I still miss getting the lottery results on a Saturday evening!
- Instant Messaging – Or online chat, as it was historically known, was developed in 1996 and allowed online users to communicate with each other in chat rooms and on online bulletin boards. This has since progressed to applications such as Facebook Messenger and BlackBerry Messenger, which also enable video calling, and to web conferencing services such as Skype and FaceTime.
- Electronic mail (email) – Has been around as we know it since 1993; however, the first hosted mail systems were introduced as early as the 1960's. Email has revolutionised the way we communicate with each other; in fact, it is hard to imagine how long things would take to get done if we had to rely on other forms of communication.
- The Mobile Phone – Amazingly, the first mobile phone was demonstrated by Motorola in 1973, but it wasn't until 1983 that the first 1G phone went to market, with talk time of just 35 minutes and a 10-hour charge time! It is a far cry from where this technology is today.
- The Smart Phone – Mass adoption of smart phones took place from 1999, initially with BlackBerry dominating the market as the go-to communication tool of the 00's. However, BlackBerry was quickly overtaken by Apple, who introduced the very first iPhone in 2007. Since then smart phone technology has grown rapidly, integrating our work, social and family lives together in one single device.

It is clear to see that technology has hurtled forward, but there are no signs of stopping. The Internet of Things is revolutionising our working and personal lives, and I strongly believe that we will soon see tools such as 3D virtual conferencing emerging into the business world. It is inevitable that communication will get even faster, smarter, easier and more frequent, but I feel that it will take time for context and understanding to catch up. How many times have you sent an email, a text message or a Facebook message that has been misunderstood by the recipient? I expect the answer is: often. This is one area where our communication tools need to get better, and to do so will require sophisticated systems that learn things like sarcasm and sense of humour. One thing is for sure, though: we are living in very exciting and fast-paced times, and communication will continue to evolve as we do.
U.S.-based Fermilab to try to break speed of light U.S.-based particle accelerator laboratory Fermilab said it would conduct tests to investigate whether neutrino particles can travel faster than the speed of light, a feat the European research center CERN recently claimed to have achieved. If the findings are upheld, it could fundamentally rewrite humanity's understanding of physics, proving that Albert Einstein's theory of special relativity was incorrect to declare that nothing can travel faster than the speed of light. An earlier experiment at Fermilab had detected neutrino particles traveling faster than the speed of light, but the results were dismissed because they fell within the measurements' margin of error. The announcement that Fermilab would attempt to replicate CERN's experiment comes as another part of Fermilab, the Tevatron particle accelerator, prepares to shut down forever due to lack of funding. The Tevatron accelerator is credited with helping humanity discover the quark, a tiny particle considered to be one of the building blocks of all matter. (H/T: Talking Points Memo) Image credit: Flickr user zugaldia.
Best Practices: Math Decks

It's not that I'm so smart, it's just that I stay with problems longer. — Albert Einstein

So many students scorn math at an early age. They decide "I'm not a math person," or "I can't do math." For our students, math is often a subject with no creativity or imagination. In a math classroom, the correct answer isn't flexible and there are many ways to be wrong. Students are at risk of quickly giving up. So, how do we encourage the kind of curiosity, risk-taking, and exploration that drove Einstein to stick with the hard problems?

Ask Big Questions That Students Can Answer Without a Formula
For example, instead of starting class by introducing the concept of circumference, start with an interesting question that has no single right answer. In Pear Deck, you can let students answer with a guess on a Number Slide and show the results.

Ask Students What They Need to Know
Rather than explain a new formula, ask students to think about the problem and what they would want to know to solve it. In Pear Deck, you can let students draw on your image to think about what information and measurements they want to gather.

Graphing and Formative Assessment
Use the "Overlay Drawings" option while you display responses to quickly spot which students grasp graphing and which need more review. A student who sees his or her own points plotted against the others can get a better idea of where he or she got confused and why.

One hangup for a lot of students is that they don't see what math problems have to do with anything. "When will I use this?" It can be difficult for students to see and experience math as an expression of the real world. To help your students make the connection, you can embed fun online simulations right into your Pear Deck presentation. Students will be able to work out the simulation on their own devices to explore the relationships between math and the real world. When you are ready to move on, you can click to the next slide and student screens will be synced up.

Tips and Tricks - Math Symbols
To type math symbols and expressions in Pear Deck, start by typing "##", type the equation, then close with "##". Pear Deck will show you a preview of what the formula will look like when you present (a worked example follows this list).
- Students can do the same in a Free Response: Text Slide. When puzzling out a problem, let students write out their work on a Drawing Slide with a Blank Canvas. This way you'll be able to see their thinking as it is happening. You can also turn this work into a notes document for students to return to and review later. To do this, click "Publish Student Takeaways" when you are done presenting.
- We recommend assigning the Takeaways Doc as homework to extend the lesson and encourage metacognition. Ask students to review the slides as well as their answers to the questions. Then ask them to reflect on what they learned, what was interesting, and what was confusing in the spaces provided.
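For instance, a circumference slide might carry the prompt text below. This is a made-up example of the "##" syntax described above; the content between the markers is written here as standard LaTeX, on the assumption that Pear Deck accepts that notation:

##C = \pi d##

When presented, the markup is rendered as clean formula notation for the circumference of a circle, so students see the math rather than raw text.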
1. Immediate access to resources.
- virtual libraries are available anytime
- facilitate just-in-time learning
2. Information updated immediately.
- TL able to respond to immediate needs of teachers
- provide resources at short notice
- contains up-to-date information
3. No physical boundaries.
- people from all over the world can access information
- as long as there is an Internet connection
4. Support different learning styles.
- access material in a variety of formats
- tailored to characteristics of the learner or community of learners
- range of resources to meet the information needs of different users
- can be customized for particular schools, grades and subjects
5. Accessible for the disabled.
- offers an alternative for those who have physical difficulty accessing resources in a regular library
- through use of audio and video, resources are made available to the visually and hearing impaired
- integrate voice, video, and text for users involved in distance education in remote locations
6. Present student work.
- share and showcase student work
- student-created art, photography and oral histories can support local curriculum and compensate for a lack of local resources on the Internet
7. Information retrieval.
- provides user-friendly interfaces, giving clickable access to resources
- use any search term, such as a word, phrase, title, name or subject, to search the entire collection
8. Teaching tool for information literacy.
- enables students to find their way more easily around the various search choices
- as an instructional tool, students learn the skills of selecting and using appropriate search engines, reading URLs and using an online database when needed
- can be used to teach information ethics, i.e. plagiarism, reference sources and copyright issues
9. Storage of information.
- potential to store much more information than a traditional library
- requires very little physical space to contain information
10. Networking capabilities.
- one digital library can provide a link to the resources of any other digital library
- seamlessly integrated resource sharing can occur
11. Directs students to relevant resources.
- students spend more time thinking about information rather than engaging in time-consuming searching
- resources that complement the library's print resources
- customized to meet the needs of a particular school community
- resources selected to match research topics, age and reading levels of students

There are some disadvantages or concerns that need attention and consideration when creating a virtual library.
1. Restricted by copyright law.
- works cannot be shared over different periods of time as in a traditional library
- content must be public domain or self-generated
- if copyright exists, permission should be requested
2. Requires connectivity.
- the instability of Internet sites means regular checks should be carried out to ensure that web links are still active
- if there is no Internet connection, the virtual library is inaccessible
- many people do not have Internet access - the Digital Divide may apply
- some may have access to the Internet but lack the skills to utilize the available information
3. Skilled professionals are required.
- to organize, maintain and help students
- to guide students in their selection, evaluation and use of electronic choices
- requires knowledge of Boolean searching and advanced searching skills
4. Increased number of resources challenges student selection.
- purchased online materials are often not tailored for a particular community of learners
- increased need for instruction in the use and evaluation of resources
- students face difficulty in selecting quality material from the increased assortment of resources

The building of a virtual library requires consideration of both the advantages and disadvantages in order to create an effective library. With careful design and the support of skilled information professionals, virtual libraries can provide a powerful environment for student learning. (Gunn, 2002)

Digital Library. (2008). Wikipedia. Retrieved February 29, 2008, from
Grantham, C. (2007). Virtual library: e-ssential. ACCESS, 21(3), 5-8.
Gunn, H. (2002). Virtual Libraries Supporting Student Learning. Retrieved March 1, 2008, from
Atlantic Emancipation Celebrations Formerly enslaved people marked the sacred moment of their freedom—whether from manumission, escape, legal abolition, or otherwise—privately and publicly. Though liberty rarely meant equality, at least not at first, individuals often referred to their new-found freedom as a turning point in their lives, evoking its arrival as a key moment that changed how they viewed themselves, their families, and their world. Many changed their names to reflect their new status and identity, adopting such surnames as Freedom, Freeland, Freeman, Liberty, or Justice, as in the case of several who found themselves free after the American Revolution. Freedpeople collectively celebrated their liberty, especially after general abolition occurred in such places as the British Caribbean (in 1834), French Caribbean (1794, then again in 1848) and the United States (in 1865). Caribbean people adapted celebrations begun in slavery, such as Jonkonnu in Jamaica and Canboulay in Trinidad (as well as various crop-over festivals resembling post-harvest holidays in pre-emancipation Louisiana). “August First” commemorations in the Caribbean marked the legal abolition of British colonial slavery on that day in 1834 but came to be celebrated outside the British colonial world, including in Haiti. In Jamaica and Trinidad, freedpeople’s commemorations of emancipation clashed with formal August First commemorations managed by British administrators and missionaries, who marked the end of legal slavery in 1834 with the beginning of a transitional four-year-long “apprenticeship” system. In the United States, Emancipation Day and Juneteenth parades marking the legal end of slavery in North America may have emerged from an August First tradition among African-descendant communities in the broader Atlantic world—in Haiti, Jamaica, Trinidad, the United States, Canada, the United Kingdom, and even Liberia. Today, emancipation celebrations, in a variety of forms, continue in African-descendant communities throughout the Americas and Europe. Such commemorations continue to inspire new cultural events, works of art, and statues and markers that are re-making public spaces in cities that once were known for their slave-ship ports and thriving slave markets.
The U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) has developed a technique to produce hydrogen from acidic water. What the researchers there have done is create a molecule that is structurally and chemically similar to the industrial catalyst molybdenite. The new molecule could replace platinum as the catalyst of choice for creating hydrogen from acidic water, at a fraction of the cost. According to the Berkeley Lab, "Christopher Chang and Jeffrey Long, chemists who hold joint appointments with Berkeley Lab and the University of California (UC) Berkeley, led a research team that synthesized a molecule to mimic the triangle-shaped molybdenum disulfide units along the edges of molybdenite crystals, which is where almost all of the catalytic activity takes place. "Since the bulk of molybdenite crystalline material is relatively inert from a catalytic standpoint, molecular analogs of the catalytically active edge sites could be used to make new materials that are much more efficient and cost-effective catalysts … Recent studies have shown that in its nanoparticle form, molybdenite also holds promise for catalyzing the electrochemical and photochemical generation of hydrogen from water. Hydrogen could play a key role in future renewable energy technologies if a relatively cheap, efficient and carbon-neutral means of producing it can be developed." Right now, platinum, the most expensive element in most fuel cells, is selling for more than $2,000 an ounce. Molybdenite, on the other hand, is not as rare and sells for about 1/70th the price of platinum, which could greatly reduce the cost of fuel cells in general. Bringing down the cost of fuel cells is imperative if the hydrogen car industry is to take off. Besides using fuel cells in cars, manufacturers can run fuel cells in reverse to produce hydrogen from water. So, on both the production and consumption ends, low-cost fuel cells mean a faster path to market for H2 cars and fueling stations, which is the goal many people (myself included) share.
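For context, the underlying chemistry being catalyzed is the hydrogen evolution reaction, in which protons in the acidic water pick up electrons to form hydrogen gas. This is the standard textbook half-reaction, not something spelled out in the article itself:

2\,\text{H}^{+} + 2\,e^{-} \rightarrow \text{H}_{2}

Platinum and molybdenite edge sites both work by lowering the energy barrier of this reaction; the appeal of the new molecule is doing the same job at molybdenite-like cost.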
Ecosystem Portfolio: Ocean

Fish, coral, seaweed, dolphins, whales, and plankton are examples of biotic factors. Water, amount of sun, temperature, air, and the depth of the water are examples of abiotic factors in this ecosystem.

Carrying capacity: the limit of how many individual organisms an ecosystem can support. Animals need food, water, shelter, and space to survive. The population in this ecosystem will change over time depending on how many resources are available. For example, the population of dolphins will decrease or increase depending on the food source. If more fishing goes on, there will be fewer resources available to the dolphins, so there will be fewer dolphins alive. Or if there isn't enough food for the fish, there wouldn't be enough food for the dolphins. If there is less fishing and there is enough food for those fish to survive, then there will be more dolphins. It will all change over time; it won't stay the same all the time. How many dolphins are alive at one time depends mainly on the food source.

Limiting factors are the factors that influence the carrying capacity. Limiting factors can be disease, fishing, and predators. Predators limit how many animals are alive. Sharks are predators, so they limit how many prey animals will live. Prey to a shark can be fish, dolphins, seals, and penguins, depending on where they live in the world. (Predator-prey relationship: shark and seal.)

Producers can be algae, coral, or anything else that gets its energy from the sun. Consumers can be fish, whales, dolphins, sharks, or anything that gets its energy from consuming other plants or animals. Decomposers are bacteria, fungi, or anything else that breaks down dead plants or animals. Consumers are either herbivores, omnivores, carnivores, or scavengers. Sharks, dolphins, certain whales, and certain fish are carnivores. Green sea turtles, manatees, and parrot fish are examples of herbivores. Sea birds, crustaceans, and mollusks are examples of scavengers in an ocean environment. Salt water crabs, sea otters, sharks, and whales are examples of omnivores in the ocean ecosystem. Producers get their energy from the sun, carnivores get their energy from eating animals, omnivores get their energy from eating animals and plants, and scavengers eat organisms that have been killed by something else. Decomposers get their energy from breaking down dead organisms, which include both plants and animals. Producers are needed to have an ecosystem because if there were no producers, then consumers wouldn't get energy, and decomposers and scavengers wouldn't be able to get energy either. The whole ecosystem would not exist if there were no producers.

Food Webs and Food Chains: Food chains are different from food webs. A food web shows all the overlapping ways animals get energy from other organisms, while a food chain shows a single path of energy that starts from the sun and goes to producers and then to consumers. If I removed one population from the food web, the rest would be affected, some negatively, but some positively. If I took away the whales, there would be a lot more fish because one of their predators would be gone. That would be positive for the fish, but negative for other animals that rely on eating whales. If I took away the sea otters, there would be too many crabs roaming around the ocean. Taking away any population from a food chain or web would greatly affect the environment around it.

Trophic Levels & Energy Pyramids: Energy pyramids are shaped that way because they show the flow of energy from one level of animals to the next. On the bottom you have an autotroph, which could be phytoplankton.
The next level of an energy pyramid is the primary consumer, which could be something like a sea snail. Then you could have a fish, which would be the secondary consumer. After that would be the tertiary consumer, which could be a kind of shark. A producer is on the bottom because it gives energy to a herbivore, which gives energy to a carnivore, which gives energy to a secondary carnivore, which gives energy to the apex predator, or top carnivore. The process of photosynthesis takes place in the chloroplast of a plant cell. Sunlight, carbon dioxide, and water go in, and glucose and oxygen come out.
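In equation form, that input-output relationship is the standard balanced reaction for photosynthesis (supplied here to round out the summary above):

6\,\text{CO}_2 + 6\,\text{H}_2\text{O} + \text{light energy} \rightarrow \text{C}_6\text{H}_{12}\text{O}_6 + 6\,\text{O}_2

Six molecules of carbon dioxide and six of water, powered by captured sunlight, yield one molecule of glucose and six of oxygen.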
Prairie Grassland Habitats in the Rocky Flats National Wildlife Refuge

One of the main habitats in the Rocky Flats National Wildlife Refuge is the prairie grasslands. At the refuge, there are three main types of grasslands: xeric tallgrass prairies, tallgrass prairies, and mixed prairie grasslands. The most interesting type of grassland found here is the xeric tallgrass. Xeric tallgrass prairies are some of the oldest grasslands in existence, having been around since the Ice Age. Remnants of this rare habitat are preserved at the refuge. The tallgrass prairies within the Rocky Flats National Wildlife Refuge are home to several rare species of grass. These include big bluestem grass, switchgrass and dropseed. The mixed prairie grasslands get their name from the mix of mountain grasses and plains grasses found there. Within the Rocky Flats National Wildlife Refuge one can find an abundance of bluestem grass, both big and little. Bluestem grasses are characteristic of plains grasslands. One can also find mountain muhly and Porter's aster, plants that are normally characteristic of mountainous areas. The presence of these two types of vegetation in the same area is fairly unique, making the Rocky Flats National Wildlife Refuge a rarity amongst grasslands. Studies show great stability in these grassland ecosystems, so many more generations should be able to enjoy them for years to come, including those lucky enough to live nearby, who can gaze upon them every day!

Learn more about the wildlife in the Rocky Flats National Wildlife Refuge here!

Photo Courtesy of Candelas Rocky Flats
Black carbon, or soot, has been identified as the number two source of global warming, after carbon dioxide (CO2). According to The New York Times, soot accounts for up to 18 percent of the planet's warming, in comparison with CO2, which is seen as causing 40 percent. Decreasing soot emissions by replacing wood-burning cooking stoves with more efficient models in developing countries may be a quick win in bringing down global temperatures. "Decreasing black carbon emissions would be a relatively cheap way to significantly rein in global warming — especially in the short term, climate experts say. Replacing primitive cooking stoves with modern versions that emit far less soot could provide a much-needed stopgap, while nations struggle with the more difficult task of enacting programs and developing technologies to curb carbon dioxide emissions from fossil fuels." Soot particles have a powerful impact because, being dark in color, they absorb heat. As they travel from developing countries and settle in cold polar regions, they land on ice and speed melting. The New York Times writes: "One recent study estimated that black carbon might account for as much as half of Arctic warming. While the particles tend to settle over time and do not have the global reach of greenhouse gases, they do travel, scientists now realize. Soot from India has been found in the Maldive Islands and on the Tibetan Plateau; from the United States, it travels to the Arctic. The environmental and geopolitical implications of soot emissions are enormous." Limiting the amount of soot in the atmosphere could also have rapid, positive effects. "Unlike carbon dioxide, which lingers in the atmosphere for years, soot stays there for a few weeks. Converting to low-soot cookstoves would remove the warming effects of black carbon quickly, while shutting a coal plant takes years to substantially reduce global CO2 concentrations." However, getting rid of the small, inefficient wood-burning stoves in countries like India is a challenge. There are millions spread throughout villages, replacements cost money, and food cooked on high-tech solar stoves doesn't taste the same. Photo credit: Adam Ferguson for The New York Times
Central banks wield enormous influence over exchange rates, and currencies can unexpectedly dive or soar on even minor policy shifts. That in turn can cause successful trades in other asset classes to unravel, as profits on stocks or bonds are eaten up by currency effects.

The gold standard was characterized by the free flow of gold between individuals and countries, the maintenance of fixed values of national currencies in terms of gold and therefore of each other, and the absence of an international coordinating organization. Together, as Eichengreen and Temin noted, these arrangements implied an asymmetry between countries experiencing balance-of-payments deficits and surpluses. There was a penalty for running out of reserves and being unable to maintain the fixed value of the currency. But there was, aside from forgone interest, no penalty for accumulating gold. The adjustment mechanism for deficit countries was deflation rather than devaluation, i.e., a change in domestic prices instead of a change in the exchange rate.

During the interwar years of the 1920s and 1930s, it was widely believed that maintenance of the gold standard was the primary prerequisite for prosperity. The prevailing worldview was that a stable exchange rate implied a stable economy. In such an environment, supplies of money and credit depended on the quantity of gold, and of foreign exchange convertible into gold, in the hands of central banks. Indeed, a surfeit or scarcity of gold reserves drove the economic fortunes of nations during the interwar years. In the 1920s, the U.S. had become a gigantic sink for gold and by the end of the decade had accumulated nearly 40% of the world's gold reserves. In the run-up to the Great Depression, France had also increased its gold reserves at a rapid pace, from 1927 to 1933. In contrast, the UK and Germany never had reserves anywhere near as large, and German gold reserves all but vanished in 1931.

But there was only so much gold to go around. Soon the effects of asymmetry kicked in, and central banks jacked up interest rates in their desperate attempts to obtain more gold. This destabilized commercial banks and depressed prices, production and employment. Subsequent bank closures disrupted the provision of credit to firms and households, forcing them to curtail production and cut consumption. Deflation amplified the burden of outstanding debt, forcing debtors to curtail spending still further in order to maintain their creditworthiness. As the gold-exchange standard collapsed into a pure gold-based system, economies were destabilized as never before.

What is most important to note here is that national policies had cross-border repercussions. For example, when the U.S. raised interest rates sharply in October 1931 to defend the dollar's gold parity, it drained gold from other gold standard countries and ratcheted up the deflationary pressure on them. When France raised interest rates in 1933, it intensified the deflationary pressure on members of the gold bloc and triggered a race to the bottom. Had there been a way for countries to coordinate their actions, things might have turned out differently. As Eichengreen and Temin explained, while it was impossible for one country acting alone to cut interest rates to counter deflation, as it would cause gold losses and jeopardize gold convertibility, several countries acting in concert would have been able to do so.
This is because the gold a country lost by cutting interest rates on its own would be offset by the gold gained as all the others cut rates too. But efforts to arrange this in 1933 went nowhere; as was often the case, domestic politics got in the way of international financial cooperation.

The 21st century analog – the euro – is not identical to the gold standard, according to Eichengreen and Temin, but the parallels are there. The euro did not simply follow the gold standard; it also followed the Bretton Woods System implemented after the Second World War. Both the gold standard and the euro are extreme forms of fixed exchange rates. Bound by a common fate, the surplus as well as deficit countries are like inseparable Siamese twins; their actions have systemic reverberations throughout the euro zone, and the rest of the world beyond.

- Eichengreen, Barry and Temin, Peter (2010, July). Fetters of Gold and Paper. NBER Working Paper 16202. Retrieved from: http://www.nber.org/papers/w16202.pdf
Adding Text and More In this lesson, you will learn how to use HTML to add text and headings in your Web pages. You'll also learn how to add mathematical notations, information about your Web page, and special characters (such as ampersands). You might not realize it, but you already learned how to create an HTML paragraph in Lesson 2, "Creating Your First Page." In HTML, a paragraph is created whenever you insert text between the <p> tags. Look at the code from Lesson 2 again:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>My First Web Page</title>
</head>
<body>
<p>This is my first Web page.</p>
</body>
</html>

Web browsers see that you want text and they display it. Web browsers don't pay any attention to how many blank lines you put in your text; they only pay attention to the HTML tags. In the following HTML code, you see several lines of text and even a blank line, but the browser only recognizes paragraphs surrounded by the <p> and </p> tags (or paragraph tags). The <p> tag tells the browser to add a blank line before displaying any text that follows it, as shown in Figure 3.1.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>Typing Paragraphs in HTML</title>
</head>
<body>
<p>This is the first line. But is this the second?</p>
<p>No, this is.</p>
</body>
</html>

Figure 3.1: The browser ignores the blank line that I inserted and puts the line break before the <p> tag instead.

Web browsers do something else with paragraph text that you should be aware of: they wrap the text at the end of the browser window. In other words, when the text in your Web page reaches the edge of the browser window, it automatically continues on the next line, regardless of where the <p> is located. The <p> tag always adds a blank line, but you might not always want a blank line between lines of text. Sometimes you just want your text to appear on the next line (such as in the lines of an address or a poem). You can use a new tag for this: the line break, or <br /> tag, shown in Figure 3.2. This new tag forces the browser to move any text following the tag to the next line of the browser, without adding a blank line in between. Figure 3.3 shows how the browser uses these two tags to format your text.

Figure 3.2: The <p> and <br /> tags help to separate your text into lines and paragraphs.
Figure 3.3: The browser inserts line breaks and blank paragraph separators only where you place the correct HTML tags.
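Since the listing behind Figure 3.2 isn't reproduced here, the following is a minimal sketch of the kind of markup it illustrates (my own example in the same XHTML style as the listings above, not the book's exact code):

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>Using Line Breaks</title>
</head>
<body>
<!-- <br /> starts a new line without the blank line that <p> adds -->
<p>A. Reader<br />
123 Main Street<br />
Anytown, USA</p>
<!-- A new paragraph, by contrast, is preceded by a blank line -->
<p>The address above displays on three lines, but as a single paragraph.</p>
</body>
</html>

Note the XHTML form <br /> with the closing slash: in XHTML, a tag that encloses no text must close itself.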
Naphthalene is a white solid that evaporates easily. It is also called mothballs, moth flakes, white tar, and tar camphor. When mixed with air, naphthalene vapors burn easily. Fossil fuels, such as petroleum and coal, naturally contain naphthalene. Burning tobacco or wood produces naphthalene. The major products made from naphthalene are moth repellents, in the form of mothballs or crystals, and toilet deodorant blocks. It is also used for making dyes, resins, leather tanning agents, and the insecticide carbaryl. Naphthalene has a strong, but not unpleasant, smell. Its taste is unknown, but must not be unpleasant, since children have eaten mothballs and deodorant blocks. You can smell naphthalene in the air at a concentration of 84 parts naphthalene per one billion parts (ppb) of air. You can smell it in water when 21 ppb are present. 1-Methylnaphthalene is a naphthalene-related compound, also called alpha-methylnaphthalene. It is a clear liquid. Its taste and odor have not been described, but you can smell it in water when only 7.5 ppb are present. Another naphthalene-related compound, 2-methylnaphthalene, is also called beta-methylnaphthalene. It is a solid like naphthalene. The taste and odor of 2-methylnaphthalene have not been described. Its presence can be detected at a concentration of 10 ppb in air and 10 ppb in water. 1-Methylnaphthalene and 2-methylnaphthalene are used to make other chemicals such as dyes, resins, and, in the case of 2-methylnaphthalene, vitamin K. Along with naphthalene, they are present in cigarette smoke, wood smoke, tar, and asphalt, and at some hazardous waste sites.

Fate & Transport

Naphthalene enters the environment from industrial uses, from its use as a moth repellent, from the burning of wood or tobacco, and from accidental spills. Naphthalene at hazardous waste sites and landfills can dissolve in water. Naphthalene can become weakly attached to soil or pass through the soil into underground water. Most of the naphthalene entering the environment comes from the burning of wood and fossil fuels in the home. The second greatest release of naphthalene is through the use of moth repellents. Only about 10% of the naphthalene comes from coal production and distillation, and less than 1% is attributable to naphthalene production losses. Cigarette smoking also releases small amounts of naphthalene. Naphthalene evaporates easily; that is why you can smell mothballs. In the air, moisture and sunlight make it break down, often within 1 day. The naphthalene can change to 1-naphthol or 2-naphthol. These chemicals have some of the toxic properties of naphthalene. Some naphthalene will dissolve in the water of rivers, lakes, or wells. Naphthalene in water is destroyed by bacteria or evaporates into the air. Most of the naphthalene will be gone from rivers or lakes within 2 weeks. Naphthalene breaks down faster in water containing other pollutants, such as petroleum products. Naphthalene binds weakly to soils and sediment. It easily passes through sandy soils to reach underground water. In soil, some microorganisms break down naphthalene. When near the surface of the soil, it will evaporate into air. Healthy soil will allow the growth of microorganisms, which break down most of the naphthalene in 1 to 3 months. If the soil has few microorganisms, it will take about twice as long. Microorganisms may change the chemical structure of naphthalene. Some common bacteria grow on naphthalene, breaking it down to carbon dioxide.
Naphthalene does not accumulate in the flesh of animals and fish that you might eat. If dairy cows are exposed to naphthalene, some naphthalene will be in their milk; if laying hens are exposed, some naphthalene will be in their eggs. Naphthalene and the methylnaphthalenes have been found in very small amounts in some samples of fish and shellfish from polluted waters. Scientists know very little about what happens to 1-methylnaphthalene and 2-methylnaphthalene in the environment. These compounds are similar to naphthalene and should act like it in air, water, and soil. You can be exposed to naphthalene if you live in a city that has polluted air. Typical air concentrations of naphthalene in cities are about 0.0000001 ppm (0.0000001 parts of naphthalene per million parts of air). Naphthalene is generally not found in water, but when it is present, the levels are usually lower than 0.01 ppm. The levels of naphthalene in typical urban, suburban, or rural soils are not known. At hazardous waste sites, naphthalene and 2-methylnaphthalene are found more frequently in soils and sediments than in water. You can be exposed to naphthalene in your home through the use of mothballs or by breathing air that contains tobacco smoke. Because it is unusual for naphthalene to be found in drinking water, you are not likely to be exposed by this route. The levels of naphthalene in foods are not known. You can also be exposed to naphthalene if you work in an industry such as coal-tar production, wood preserving, tanning, or ink and dye production. The levels of naphthalene in the air at some workplaces might be over 1,000 times higher than the levels of naphthalene in the air of most cities. Naphthalene and 2-methylnaphthalene can enter your body when you breathe air that contains these chemicals, when you eat or drink contaminated food or water, and through contact with your skin. Human exposure to naphthalene and 2-methylnaphthalene occurs mainly by breathing air that contains these compounds. The body changes naphthalene into other chemicals that leave your body in the urine over the course of several days. Little is known about how 2-methylnaphthalene leaves the body. Hemolytic anemia (a condition involving the breakdown of red blood cells) is the primary health concern for humans exposed to naphthalene for either short or long periods of time. Other effects commonly found include nausea, vomiting, diarrhea, kidney damage, jaundice (yellowish skin or eyes), and liver damage. These effects can occur from either breathing or eating naphthalene. Cataracts (cloudy spots) might also occur in the eyes of persons who eat or breathe naphthalene. Laboratory animals that breathed or ate naphthalene for several weeks showed effects on the blood, kidneys, and liver. Naphthalene can cause cataracts in the eyes of some animals. Cancer has not been seen in humans or animals exposed to naphthalene. In pregnant women, naphthalene and its breakdown products in blood can reach the fetus. It is not known whether these substances can cause birth defects. Infants whose mothers were exposed to naphthalene during pregnancy developed blood problems (hemolytic anemia). In some animals injected with naphthalene, lung damage has developed. Although there is some information about the effects that occur in humans from breathing or eating naphthalene, the levels of naphthalene at which these effects can occur are not known.
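The profile mixes ppm and ppb, so it helps to put the exposure numbers on one scale. A quick sketch of the unit arithmetic in Python; the comparison against the 84 ppb odor threshold quoted earlier is ours:

PPB_PER_PPM = 1000  # 1 ppm = 1,000 ppb; both are parts-per notations on the same basis

def ppm_to_ppb(ppm):
    return ppm * PPB_PER_PPM

urban_air_ppm = 0.0000001      # typical city air level quoted above
odor_threshold_ppb = 84.0      # level at which naphthalene can be smelled in air

urban_air_ppb = ppm_to_ppb(urban_air_ppm)
print("typical urban air: %g ppb" % urban_air_ppb)                  # 0.0001 ppb
print("odor threshold is %.0f times higher" % (odor_threshold_ppb / urban_air_ppb))

So typical city air, at 0.0001 ppb, sits almost a million-fold below the level at which naphthalene becomes smellable, and even the workplace levels mentioned above (1,000 times city levels) remain well below it.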
Naphthalene can be smelled in air at a concentration of about 0.08 ppm and can be tasted in water at a concentration of about 0.02 ppm. 2-Methylnaphthalene can be tasted in water at levels of about 0.01 ppm; the concentration at which it can be smelled in air is not known. The effects of skin contact with naphthalene or 2-methylnaphthalene have not been carefully studied. There have been several medical reports showing that babies dressed in clothing that had been stored in naphthalene mothballs developed liver problems and hemolytic anemia.

Information excerpted from: Toxicological Profile for Naphthalene, December 1990. Agency for Toxic Substances and Disease Registry, U.S. Dept. of Health and Human Services.
Curanderismo (folk healing)
Folksong as an Ethnic Expression
Chicano/Black/Asian Ethnic Minority Families
Changing Male Roles
Changing Female Roles
Child Rearing Practices
The Single Parent Family
Minorities in the Work Force
Women in the Labor Force
Its role in U.S. economy
Its role in Culture Maintenance in U.S.
Ethnic Minority Women in Political Movements
Contemporary Ethnic Music
Minority Arts in American Culture
Folklore and Folk Arts
Ethnic Minority Movements in the U.S.
Civil Rights Movements
Farm Labor Struggle
Minority Groups in American Politics
Pre-World War II
Post-World War II
Religion in Minority Cultures
Role of Catholic/Protestant Churches in Chicano Communities
Impact of legal rulings in the education of minorities
Lau vs. Nichols
Brown vs. Board of Education
Great Black/Chicano/Asian educators--past and present
Issues in educating the Black/Mexican/Asian child
Race, IQ & achievement
Segregation, desegregation, and busing
Athletics vs. Academics in Higher Education
Church role in Civil Rights Movement and Economic Development
Family roles during slavery
Matriarchal system--Myth or Reality?
Impact of African culture on Black Americans
Early African practices and beliefs
Impact of Native American culture on Mexican Americans / on American culture
Historical sketch of Blacks in Politics
Right to vote
Birth of Jim Crowism
Impact of the 1960's Civil Rights Movement on politics
Organizations: NAACP, SNCC
Chicano Student Organizations
Contemporary Leadership in current human rights struggles
Impact of Black/Chicano/Asian vote
Local, state, national
Nutritional value of ethnic minorities' foods
Racism in American Sports
Racism and discrimination in American society
The justice system
Research on the physiological/psychological differences of Black athletes
Cultural revival movements
Harlem Renaissance--writers and styles
Contemporary Chicano writers
Minorities in Business
Minorities and the Mass Media (TV, film, radio, etc.)
U.S. Involvement in Latin America
El Salvador, Mexico, Guatemala, etc.
Effects on Immigration Policies
Impact on Inter-ethnic relations
Language Differences in America
Drug Abuse among Minorities
Traditions in Chicano/Black/Asian culture

© 2001 by John A. Cagle, Professor of Communication, California State University, Fresno.

This glossary is intended to assist you in understanding commonly used terms and concepts when reading, interpreting, and evaluating scholarly research in the social sciences. Also included are general words and phrases defined within the context of how they apply to research in the social and behavioral sciences.

- Acculturation -- refers to the process of adapting to another culture, particularly in reference to blending in with the majority population [e.g., an immigrant adopting American customs]. However, acculturation also implies that both cultures add something to one another, but still remain distinct groups unto themselves.
- Accuracy -- a term used in survey research to refer to the match between the target population and the sample.
- Affective Measures -- procedures or devices used to obtain quantified descriptions of an individual's feelings, emotional states, or dispositions.
- Aggregate -- a total created from smaller units. For instance, the population of a county is an aggregate of the populations of the cities, rural areas, etc. that comprise the county. As a verb, it means to total data from smaller units into a large unit.
- Anonymity -- a research condition in which no one, including the researcher, knows the identities of research participants.
- Baseline -- a control measurement carried out before an experimental treatment.
- Behaviorism -- school of psychological thought concerned with the observable, tangible, objective facts of behavior, rather than with subjective phenomena such as thoughts, emotions, or impulses. Contemporary behaviorism also emphasizes the study of mental states such as feelings and fantasies to the extent that they can be directly observed and measured.
- Beliefs -- ideas, doctrines, tenets, etc. that are accepted as true on grounds which are not immediately susceptible to rigorous proof.
- Benchmarking -- systematically measuring and comparing the operations and outcomes of organizations, systems, processes, etc., against agreed upon "best-in-class" frames of reference.
- Bias -- a loss of balance and accuracy in the use of research methods. It can appear in research via the sampling frame, random sampling, or non-response. It can also occur at other stages in research, such as while interviewing, in the design of questions, or in the way data are analyzed and presented. Bias means that the research findings will not be representative of, or generalizable to, a wider population.
- Case Study -- the collection and presentation of detailed information about a particular participant or small group, frequently including data derived from the subjects themselves.
- Causal Hypothesis -- a statement hypothesizing that the independent variable affects the dependent variable in some way.
- Causal Relationship -- the relationship established that shows that an independent variable, and nothing else, causes a change in a dependent variable. It also establishes how much of a change is shown in the dependent variable.
- Causality -- the relation between cause and effect.
- Central Tendency -- any way of describing or characterizing typical, average, or common values in some distribution.
- Chi-square Analysis -- a common non-parametric statistical test which compares an expected proportion or ratio to an actual proportion or ratio (for a worked example, see the sketch at the end of this glossary).
- Claim -- a statement, similar to a hypothesis, which is made in response to the research question and that is affirmed with evidence based on research.
- Classification -- ordering of related phenomena into categories, groups, or systems according to characteristics or attributes.
- Cluster Analysis -- a method of statistical analysis where data that share a common trait are grouped together. The data are collected in a way that allows the data collector to group data according to certain characteristics.
- Cohort Analysis -- group by group analytic treatment of individuals having a statistical factor in common to each group. Group members share a particular characteristic [e.g., born in a given year] or a common experience [e.g., entering a college at a given time].
- Confidentiality -- a research condition in which no one except the researcher(s) knows the identities of the participants in a study. It refers to the treatment of information that a participant has disclosed to the researcher in a relationship of trust and with the expectation that it will not be revealed to others in ways that violate the original consent agreement, unless permission is granted by the participant.
- Confirmability (Objectivity) -- the findings of the study could be confirmed by another person conducting the same study.
- Construct -- refers to any of the following: something that exists theoretically but is not directly observable; a concept developed [constructed] for describing relations among phenomena or for other research purposes; or, a theoretical definition in which concepts are defined in terms of other concepts. For example, intelligence cannot be directly observed or measured; it is a construct.
- Construct Validity -- seeks an agreement between a theoretical concept and a specific measuring device, such as observation.
- Constructivism -- the idea that reality is socially constructed. It is the view that reality cannot be understood outside of the way humans interact, and that knowledge is constructed, not discovered. Constructivists believe that learning is more active and self-directed than either behaviorism or cognitive theory would postulate.
- Content Analysis -- the systematic, objective, and quantitative description of the manifest or latent content of print or nonprint communications.
- Context Sensitivity -- awareness by a qualitative researcher of factors such as values and beliefs that influence cultural behaviors.
- Control Group -- the group in an experimental design that receives either no treatment or a different treatment from the experimental group. This group can thus be compared to the experimental group.
- Controlled Experiment -- an experimental design with two or more randomly selected groups [an experimental group and control group] in which the researcher controls or introduces the independent variable and measures the dependent variable at least two times [pre- and post-test measurements].
- Correlation -- a common statistical analysis, usually abbreviated as r, that measures the degree of relationship between pairs of interval variables in a sample. The range of correlation is from -1.00 to zero to +1.00. Also, a non-cause and effect relationship between two variables.
- Covariate -- a product of the correlation of two related variables times their standard deviations. Used in true experiments to measure the difference of treatment between them.
- Credibility -- a researcher's ability to demonstrate that the object of a study is accurately identified and described based on the way in which the study was conducted.
- Critical Theory -- an evaluative approach to social science research, associated with Germany's neo-Marxist "Frankfurt School," that aims to criticize as well as analyze society, opposing the political orthodoxy of modern communism. Its goal is to promote human emancipatory forces and to expose ideas and systems that impede them.
- Data -- factual information [as measurements or statistics] used as a basis for reasoning, discussion, or calculation.
- Data Mining -- the process of analyzing data from different perspectives and summarizing it into useful information, often to discover patterns and/or systematic relationships among variables.
- Data Quality -- this is the degree to which the collected data [results of measurement or observation] meet the standards of quality to be considered valid [trustworthy] and reliable [dependable].
- Deductive -- a form of reasoning in which conclusions are formulated about particulars from general or universal premises.
- Dependability -- being able to account for changes in the design of the study and the changing conditions surrounding what was studied.
- Dependent Variable -- a variable that varies due, at least in part, to the impact of the independent variable.
In other words, its value "depends" on the value of the independent variable. For example, in the variables "gender" and "academic major," academic major is the dependent variable, meaning that your major cannot determine whether you are male or female, but your gender might indirectly lead you to favor one major over another.
- Deviation -- the distance between the mean and a particular data point in a given distribution.
- Discourse Community -- a community of scholars and researchers in a given field who respond to and communicate to each other through published articles in the community's journals and presentations at conventions. All members of the discourse community adhere to certain conventions for the presentation of their theories and research.
- Discrete Variable -- a variable that is measured solely in whole units, such as gender and number of siblings.
- Distribution -- the range of values of a particular variable.
- Effect Size -- the amount of change in a dependent variable that can be attributed to manipulations of the independent variable. A large effect size exists when the value of the dependent variable is strongly influenced by the independent variable. It is the mean difference on a variable between experimental and control groups divided by the standard deviation on that variable of the pooled groups or of the control group alone.
- Emancipatory Research -- research conducted on and with people from marginalized groups or communities. It is led by a researcher or research team who is either an indigenous or external insider; is interpreted within intellectual frameworks of that group; and, is conducted largely for the purpose of empowering members of that community and improving services for them. It also engages members of the community as co-constructors or validators of knowledge.
- Empirical Research -- the process of developing systematized knowledge gained from observations that are formulated to support insights and generalizations about the phenomena being researched.
- Epistemology -- concerns knowledge construction; asks what constitutes knowledge and how knowledge is validated.
- Ethnography -- method to study groups and/or cultures over a period of time. The goal of this type of research is to comprehend the particular group/culture through immersion into the culture or group. Research is completed through various methods but, since the researcher is immersed within the group for an extended period of time, more detailed information is usually collected during the research.
- Expectancy Effect -- any unconscious or conscious cues that convey to the participant in a study how the researcher wants them to respond. Expecting someone to behave in a particular way has been shown to promote the expected behavior. Expectancy effects can be minimized by using standardized interactions with subjects, automated data-gathering methods, and double-blind protocols.
- External Validity -- the extent to which the results of a study are generalizable or transferable.
- Factor Analysis -- a statistical test that explores relationships among data. The test explores which variables in a data set are most related to each other. In a carefully constructed survey, for example, factor analysis can yield information on patterns of responses, not simply data on a single response. Larger tendencies may then be interpreted, indicating behavior trends rather than simply responses to specific questions.
- Field Studies -- academic or other investigative studies undertaken in a natural setting, rather than in laboratories, classrooms, or other structured environments.
- Focus Groups -- small, roundtable discussion groups charged with examining specific topics or problems, including possible options or solutions. Focus groups usually consist of 4-12 participants, guided by moderators to keep the discussion flowing and to collect and report the results.
- Framework -- the structure and support that may be used as both the launching point and the on-going guidelines for investigating a research problem.
- Generalizability -- the extent to which findings and conclusions from a study conducted on a specific group or situation can be applied to other groups, other situations, or the population at large.
- Grounded Theory -- practice of developing other theories that emerge from observing a group. Theories are grounded in the group's observable experiences, but researchers add their own insight into why those experiences exist.
- Group Behavior -- behaviors of a group as a whole, as well as the behavior of an individual as influenced by his or her membership in a group.
- Hypothesis -- a tentative explanation based on theory to predict a causal relationship between variables.
- Independent Variable -- the conditions of an experiment that are systematically manipulated by the researcher. A variable that is not impacted by the dependent variable, and that itself impacts the dependent variable. In the earlier example of "gender" and "academic major" (see Dependent Variable), gender is the independent variable.
- Individualism -- a theory or policy having primary regard for the liberty, rights, or independent actions of individuals.
- Inductive -- a form of reasoning in which a generalized conclusion is formulated from particular instances.
- Inductive Analysis -- a form of analysis based on inductive reasoning; a researcher using inductive analysis starts with answers, but formulates questions throughout the research process.
- Insiderness -- a concept in qualitative research that refers to the degree to which a researcher has access to and an understanding of persons, places, or things within a group or community based on being a member of that group or community.
- Internal Consistency -- the extent to which all questions or items assess the same characteristic, skill, or quality.
- Internal Validity -- the rigor with which the study was conducted [e.g., the study's design, the care taken to conduct measurements, and decisions concerning what was and was not measured]. It is also the extent to which the designers of a study have taken into account alternative explanations for any causal relationships they explore. In studies that do not explore causal relationships, only the first of these definitions should be considered when assessing internal validity.
- Life History -- a record of an event/events in a respondent's life told [written down, but increasingly audio or video recorded] by the respondent from his/her own perspective in his/her own words. A life history is different from a "research story" in that it covers a longer time span, perhaps a complete life, or a significant period in a life.
- Margin of Error -- the permissible or acceptable deviation from the target or a specific value. The allowance for slight error or miscalculation or changing circumstances in a study.
- Measurement -- process of obtaining a numerical description of the extent to which persons, organizations, or things possess specified characteristics.
- Meta-Analysis -- an analysis combining the results of several studies that address a set of related hypotheses.
- Methodology -- a theory or analysis of how research does and should proceed.
- Methods -- systematic approaches to the conduct of an operation or process. It includes steps of procedure, application of techniques, systems of reasoning or analysis, and the modes of inquiry employed by a discipline.
- Mixed-Methods -- a research approach that uses two or more methods from both the quantitative and qualitative research categories. It is also referred to as blended methods, combined methods, or methodological triangulation.
- Modeling -- the creation of a physical or computer analogy to understand a particular phenomenon. Modeling helps in estimating the relative magnitude of various factors involved in a phenomenon. A successful model can be shown to account for unexpected behavior that has been observed, to predict certain behaviors, which can then be tested experimentally, and to demonstrate that a given theory cannot account for certain phenomena.
- Models -- representations of objects, principles, processes, or ideas often used for imitation or emulation.
- Naturalistic Observation -- observation of behaviors and events in natural settings without experimental manipulation or other forms of interference.
- Norm -- the norm in statistics is the average or usual performance. For example, students usually complete their high school graduation requirements when they are 18 years old. Even though some students graduate when they are younger or older, the norm is that any given student will graduate when he or she is 18 years old.
- Null Hypothesis -- the proposition, to be tested statistically, that the experimental intervention has "no effect," meaning that the treatment and control groups will not differ as a result of the intervention. Investigators usually hope that the data will demonstrate some effect from the intervention, thus allowing the investigator to reject the null hypothesis.
- Ontology -- a discipline of philosophy that explores the science of what is, the kinds and structures of objects, properties, events, processes, and relations in every area of reality.
- Panel Study -- a longitudinal study in which a group of individuals is interviewed at intervals over a period of time.
- Participant -- an individual whose physiological and/or behavioral characteristics and responses are the object of study in a research project.
- Peer-Review -- the process in which the author of a book, article, or other type of publication submits his or her work to experts in the field for critical evaluation, usually prior to publication. This is standard procedure in publishing scholarly research.
- Phenomenology -- a qualitative research approach concerned with understanding certain group behaviors from that group's point of view.
- Philosophy -- critical examination of the grounds for fundamental beliefs and analysis of the basic concepts, doctrines, or practices that express such beliefs.
- Phonology -- the study of the ways in which speech sounds form systems and patterns in language.
- Policy -- governing principles that serve as guidelines or rules for decision making and action in a given area.
- Policy Analysis -- systematic study of the nature, rationale, cost, impact, effectiveness, implications, etc., of existing or alternative policies, using the theories and methodologies of relevant social science disciplines.
- Population -- the target group under investigation. The population is the entire set under consideration. Samples are drawn from populations.
- Position Papers -- statements of official or organizational viewpoints, often recommending a particular course of action or response to a situation.
- Positivism -- a doctrine in the philosophy of science, positivism argues that science can only deal with observable entities known directly to experience. The positivist aims to construct general laws, or theories, which express relationships between phenomena. Observation and experiment are used to show whether the phenomena fit the theory.
- Predictive Measurement -- use of tests, inventories, or other measures to determine or estimate future events, conditions, outcomes, or trends.
- Principal Investigator -- the scientist or scholar with primary responsibility for the design and conduct of a research project.
- Probability -- the chance that a phenomenon will occur randomly. As a statistical measure, it is shown as p [the "p" factor].
- Questionnaire -- structured sets of questions on specified subjects that are used to gather information, attitudes, or opinions.
- Random Sampling -- a process used in research to draw a sample of a population strictly by chance, yielding no discernible pattern beyond chance. Random sampling can be accomplished by first numbering the population, then selecting the sample according to a table of random numbers or using a random-number computer generator. The sample is said to be random because there is no regular or discernible pattern or order. Random sample selection is used under the assumption that sufficiently large samples assigned randomly will exhibit a distribution comparable to that of the population from which the sample is drawn. The random assignment of participants increases the probability that differences observed between participant groups are the result of the experimental intervention.
- Reliability -- the degree to which a measure yields consistent results. If the measuring instrument [e.g., survey] is reliable, then administering it to similar groups would yield similar results. Reliability is a prerequisite for validity. An unreliable indicator cannot produce trustworthy results.
- Representative Sample -- sample in which the participants closely match the characteristics of the population, and thus, all segments of the population are represented in the sample. A representative sample allows results to be generalized from the sample to the population.
- Rigor -- degree to which research methods are scrupulously and meticulously carried out in order to recognize important influences occurring in an experimental study.
- Sample -- the population researched in a particular study. Usually, attempts are made to select a "sample population" that is considered representative of groups of people to whom results will be generalized or transferred. In studies that use inferential statistics to analyze results or which are designed to be generalizable, sample size is critical; generally, the larger the number in the sample, the higher the likelihood of a representative distribution of the population.
- Sampling Error -- the degree to which the results from the sample deviate from those that would be obtained from the entire population, because of random error in the selection of respondents and the corresponding reduction in reliability.
- Saturation -- a situation in which data analysis begins to reveal repetition and redundancy and when new data tend to confirm existing findings rather than expand upon them.
- Semantics -- the relationship between symbols and meaning in a linguistic system. Also, the cuing system that connects what is written in the text to what is stored in the reader's prior knowledge.
- Social Theories -- theories about the structure, organization, and functioning of human societies.
- Sociolinguistics -- the study of language in society and, more specifically, the study of language varieties, their functions, and their speakers.
- Standard Deviation -- a measure of variation that indicates the typical distance between the scores of a distribution and the mean; it is determined by taking the square root of the average of the squared deviations in a given distribution. It can be used to indicate the proportion of data within certain ranges of scale values when the distribution conforms closely to the normal curve.
- Statistical Analysis -- application of statistical processes and theory to the compilation, presentation, discussion, and interpretation of numerical data.
- Statistical Bias -- characteristics of an experimental or sampling design, or the mathematical treatment of data, that systematically affects the results of a study so as to produce incorrect, unjustified, or inappropriate inferences or conclusions.
- Statistical Significance -- the probability that the difference between the outcomes of the control and experimental group is great enough that it is unlikely to be due solely to chance. The probability that the null hypothesis can be rejected at a predetermined significance level [0.05 or 0.01].
- Statistical Tests -- researchers use statistical tests to make quantitative decisions about whether a study's data indicate a significant effect from the intervention and allow the researcher to reject the null hypothesis. That is, statistical tests show whether the differences between the outcomes of the control and experimental groups are great enough to be statistically significant. If differences are found to be statistically significant, it means that the probability [likelihood] that these differences occurred solely due to chance is relatively low. Most researchers agree that a significance value of .05 or less [i.e., no more than a 5% probability that a difference this large would arise by chance alone] sufficiently determines significance. (For a worked example, see the sketch at the end of this glossary.)
- Subcultures -- ethnic, regional, economic, or social groups exhibiting characteristic patterns of behavior sufficient to distinguish them from the larger society to which they belong.
- Testing -- the act of gathering and processing information about individuals' ability, skill, understanding, or knowledge under controlled conditions.
- Theory -- a general explanation about a specific behavior or set of events that is based on known principles and serves to organize related events in a meaningful way. A theory is not as specific as a hypothesis.
- Treatment -- the stimulus given to a dependent variable.
- Trend Samples -- method of sampling different groups of people at different points in time from the same population.
- Triangulation -- a multi-method or pluralistic approach, using different methods in order to focus on the research topic from different viewpoints and to produce a multi-faceted set of data. Also used to check the validity of findings from any one method.
- Unit of Analysis -- the basic observable entity or phenomenon being analyzed by a study and for which data are collected in the form of variables.
- Validity -- the degree to which a study accurately reflects or assesses the specific concept that the researcher is attempting to measure. A method can be reliable, consistently measuring the same thing, but not valid.
- Variable -- any characteristic or trait that can vary from one person to another [race, gender, academic major] or for one person over time [age, political beliefs].
- Weighted Scores -- scores in which the components are modified by different multipliers to reflect their relative importance.
- White Paper -- an authoritative report that often states the position or philosophy about a social, political, or other subject, or a general explanation of an architecture, framework, or product technology written by a group of researchers. A white paper seeks to contain unbiased information and analysis regarding a business or policy problem that the researchers may be facing.

Free Social Science Dictionary. Socialsciencedictionary.com; Glossary. Institutional Review Board, Colorado College; Glossary of Key Terms. Writing@CSU, Colorado State University; Glossary A-Z. Education.com; Glossary of Research Terms. Research Mindedness Virtual Learning Resource, Centre for Human Service Technology, University of Southampton; Jupp, Victor. The SAGE Dictionary of Social and Cultural Research Methods. London: Sage, 2006.
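Several of the statistical entries above (standard deviation, correlation, effect size, chi-square, statistical significance) are easiest to grasp from one small worked example. The sketch below uses Python with NumPy and SciPy on made-up scores; every number in it is illustrative, not real data:

import numpy as np
from scipy import stats

# Made-up test scores for a control group and a treated group.
control = np.array([72, 75, 68, 70, 74, 69, 71, 73])
treated = np.array([78, 82, 75, 80, 77, 79, 81, 76])

# Standard deviation: the typical distance of scores from their mean.
print("control SD:", round(control.std(ddof=1), 2))

# Correlation (r): strength of association between paired interval variables.
hours = np.array([1, 2, 3, 4, 5, 6, 7, 8])
score = np.array([60, 63, 67, 66, 71, 74, 76, 80])
print("Pearson r:", round(np.corrcoef(hours, score)[0, 1], 2))

# Effect size (Cohen's d): mean difference divided by the pooled standard deviation.
pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
print("Cohen's d:", round((treated.mean() - control.mean()) / pooled_sd, 2))

# Statistical significance: a t-test of the two group means; p < .05 would lead
# us to reject the null hypothesis that the treatment had no effect.
t, p = stats.ttest_ind(treated, control)
print("t = %.2f, p = %.4f" % (t, p))

# Chi-square: compare observed category counts against expected proportions.
observed = np.array([[30, 20],    # group A: yes / no
                     [18, 32]])   # group B: yes / no
chi2, p_chi, dof, expected = stats.chi2_contingency(observed)
print("chi-square = %.2f, p = %.4f" % (chi2, p_chi))

Note how the t-test and the chi-square test both end in a p-value: the probability of seeing a difference at least this large if the null hypothesis were true, which is exactly what the Statistical Significance entry describes.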
07.06 Genocide: Assessment

Transcript of 07.06 Genocide: Assessment

Adolf Hitler was a German leader born on April 20th, 1889. He is the figure most responsible for the Holocaust and for the deaths of millions of Jews. This image displays a Gestapo troop; the Gestapo was a secret Nazi police force created by Hitler. Their job was to take Jews to concentration camps, where they would suffer. When Jews were found, they were brought to concentration camps. In concentration camps, Jews were imprisoned, overworked, tortured, starved, and killed. To avoid being found by the Gestapo, some Jews would hide in closets or secret rooms in someone's house. People who were caught helping Jews would be killed. On December 7th, 1941, Hitler issued the Night and Fog Decree, a secret order stating that anyone suspected of helping Jews would be sentenced to death.

Summary: Impact on Jews

The Holocaust was a tragic event that resulted in the deaths of about 6 million Jews. Those who survived are probably still haunted by the experience they went through, and many lost family members. Many Jews also struggled to find family members because of identity loss. Another negative impact of the Holocaust was that the Yiddish language declined greatly. To help revive Yiddish culture, there was a movement known as the Yiddish Renaissance. Although the Holocaust had many negative impacts, there is one positive: through this tragic event in history, we know what happens when someone is given too much power, and that we should never discriminate against people because of their race.

Call To Action

In order to prevent genocides in the future, we need to be aware and stay informed about what is going on in the world. We also need to accept people for who they are, no matter their race, culture, or beliefs.
The magma source, or hot zone, of Campi Flegrei in Italy suggests that the supervolcano may erupt soon. Scientists said the volcano is now like a boiling pot of soup beneath the surface. (Donar Reiskoffer | Wikimedia Commons)

Scientists have discovered the magma source, also called the "hot zone," of Campi Flegrei, a supervolcano in southern Italy that experts fear may be brewing for an eruption. The volcano has not erupted for centuries, but scientists fear that it may blow up soon. A May 2017 study found evidence that the energy inside the volcano has been building up over the past few decades, which increases the possibility of an eruption.

Hot Zone Located

In the 1980s, injection of magma or fluids into Campi Flegrei's shallower structure caused small earthquakes. In a new study published in the journal Scientific Reports on Aug. 14, researchers found the location of the hot zone that served as the source of the magma that flooded into the volcano's chamber and caldera. Using seismological techniques, Luca De Siena of the University of Aberdeen and colleagues were able to identify the hot zone where the hot materials rose to flood the caldera in the 1980s.

"The temporal and spatial correlations we observe between seismic, tomographic, geochemical, and deformation models show that the high-attenuation and deformation area offshore Pozzuoli was the most feasible hot feeder for the seismic, deformation, and geochemical 1983-84 unrest," the researchers wrote in their study.

The research likewise suggests that a 1-2 km-deep rock formation prevented the magma from rising to the surface in the 1980s. The rock formation blocked the magma, forcing it to release the stress along a lateral route.

One Of World's Most Dangerous Volcanoes May Blow Up Soon

Analysis of the hot zone backs up the findings of earlier studies suggesting that Campi Flegrei, one of the most dangerous supervolcanoes on Earth, could be nearing eruption. The supervolcano has shown relatively low amounts of seismic activity for decades, suggesting that pressure could be building within the supervolcano's caldera. Researchers said that this makes Campi Flegrei more dangerous. De Siena explained that whatever activity was produced under Pozzuoli in the 1980s has likely migrated elsewhere, which means that the danger no longer lies in the same spot.

"You can now characterise Campi Flegrei as being like a boiling pot of soup beneath the surface," De Siena said.

Volcanologists said that a modern-day eruption of Campi Flegrei could be catastrophic. The eruption that formed the volcano's caldera about 39,000 years ago is believed to be the largest that occurred in Europe in the past 200,000 years. Thousands of people live inside and near the volcano's caldera.

© 2017 Tech Times, All rights reserved. Do not reproduce without permission.
Learn how to assess the solar energy potential of a site using a pyrheliometer, an unshaded pyranometer, and a shaded pyranometer. These measure, respectively, the Direct Normal Irradiation (DNI), the Global Horizontal Irradiation (GHI), and the Diffuse Horizontal Irradiation (DHI). These three values can be used to characterize a site for a variety of solar energy uses.

GHI = DNI x cos(θz) + DHI, where θz is the solar zenith angle

Find the separate values of DNI, GHI, and DHI and evaluate the feasibility of installing a large fixed photovoltaic system. Find the separate values of DNI, GHI, and DHI and evaluate the feasibility of installing a CPV (concentrating photovoltaic) system. Find the separate values of DNI, GHI, and DHI and evaluate the feasibility of installing a CSP (concentrating solar power) system. (Hint: Reference #10 has the map which resulted from the study of global annual solar radiation. If the radiation falls within the feasible range of more than 1,700 kWh/m2 per year, a solar installation is possible.)

Background and Theory

The background for the different types of solar radiation was already given in the Pyranometer and Pyrheliometer experiments. If you have not completed those experiments, please do so before attempting this experiment. There are two primary types of solar energy systems: non-concentrating and concentrating. Non-concentrating solar energy systems are those you typically see around you. Examples include most photovoltaic (PV) solar systems and flat plate collectors for heating water for domestic home use. These systems are commonly placed on rooftops or other areas which receive good sunlight. The rays from the sun are received on the system and converted to heat or electricity. These systems are able to collect solar energy which comes from a wide variety of directions. As a result, non-concentrating solar energy systems are able to utilize both direct (DNI) and diffuse (DHI) radiation. This means that they will work even in cloudy weather. A sample graph with the different radiation components is given below:

Concentrating solar energy systems, on the other hand, take the rays which come from the sun and focus them on a smaller area. They frequently use mirrors or other optical devices to achieve this goal. However, the geometry of these systems of mirrors and reflectors means that they are only able to utilize sunlight which is coming directly from the sun, and they must frequently be pointed towards the sun. In other words, they are only able to use DNI solar energy, and they require tracking. Concentrating solar energy systems can be further broken down into 2-D and 3-D concentrating systems. A 2-D concentrating solar thermal system is able to track the sun through one axis of freedom. These systems concentrate the sun's energy onto a line. Examples of this type of system include parabolic troughs and linear Fresnel collectors. The theoretical maximum concentration ratio of a 2-D concentrating system is 212:1, and such a system can achieve temperatures into the hundreds of degrees Celsius (see Power from the Sun in the Reference section for a complete derivation of this proof). A 3-D concentrating system, on the other hand, is able to track the sun through two axes of freedom. As a result, 3-D systems are able to focus the sun's energy to a point. Examples include the "Power Tower" central receiver system and Stirling dishes.
They are able to achieve concentration ratios as high as 45,000:1 and temperatures into the thousands of degrees Celsius (see Power from the Sun in the Reference section for a complete derivation of this proof).

• Direct solar radiation is the radiation that comes directly from the sun, with minimal attenuation by the Earth's atmosphere or obstacles.
• Diffuse solar radiation is that which is scattered, absorbed, and reflected within the atmosphere, mostly by clouds, but also by particulate matter and gas molecules.
• The direct and diffuse components together are referred to as total or global radiation.

Necessity of ground measurements

• Datasets are considered adequate for planning purposes, but planners should be aware of the uncertainty associated with the data. Annual DNI sums and yearly distributions differ greatly among datasets for the same sites, since the models apply different atmospheric corrections.
• Locations with similar average DNI can see variations of up to ±9% in annual electricity production due to differences in DNI frequency distribution (IEA, 2010).
• Ambient temperature, wind speed and direction, and relative humidity conditions at the site affect the performance. Therefore, satellite-based datasets must be scaled with ground measurements in order to obtain reliable and "bankable" resource assessments during the project development phase.
• Solar resource uncertainty risk is perceived as one of the highest by financiers. A minimum of one year of on-site measurements is required. The information obtained, together with satellite and historic data, must be analyzed to produce long-term estimates of the solar resource.
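The closure equation GHI = DNI x cos(θz) + DHI given above also doubles as the standard quality check for a three-instrument station: compute GHI from the pyrheliometer and shaded pyranometer and compare it with the unshaded pyranometer's reading. A minimal sketch in Python; the readings and the tolerance are illustrative assumptions, not measurements:

import math

def ghi_from_components(dni, dhi, zenith_deg):
    # Closure equation: GHI = DNI * cos(zenith) + DHI (irradiances in W/m2).
    return dni * math.cos(math.radians(zenith_deg)) + dhi

# Illustrative clear-sky readings (assumed values):
dni, dhi, zenith = 850.0, 90.0, 35.0
ghi_calc = ghi_from_components(dni, dhi, zenith)
print("calculated GHI: %.0f W/m2" % ghi_calc)

# Compare against the unshaded pyranometer's GHI reading (assumed value):
measured_ghi = 780.0
closure_error = (ghi_calc - measured_ghi) / measured_ghi
print("closure error: %+.1f%%" % (100 * closure_error))

A closure error beyond a few percent usually points to a soiled dome, a misaligned tracker, or a timestamp problem rather than to real atmospheric behavior, which is one reason the on-site measurement campaigns described above insist on all three instruments.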
ZIMSEC O Level History Notes: Missionaries: Problems faced by Missionaries in Zimbabwe between 1850-1900

Problems faced by the missionaries included:
- Diseases such as malaria and yellow fever.
- Poor transport and communication, as journeys had to be made on foot or by cattle wagon.
- Language barrier: most natives spoke only their native tongues, which most of the missionaries, being of European origin, could not speak.
- Opposition from Muslims.
- Opposition from African Traditional Leaders.
- Poor security.
- Resistance to education by the locals.
- Resistance to Christianity by the locals.
- Some of the missionaries were killed by wild animals, especially as they made journeys across untamed lands.
- They were often viewed with suspicion and distrust by the local population.
- They lacked access to proper medications.
- They often faced starvation, as food could be scarce.
- They often faced delays in being granted permission to operate.
- Their converts or potential converts were often threatened, killed, or sent away, e.g. Bernard Mizeki.
- The African climate was hostile and unforgiving, with far more intense sun than the milder climates of Europe.
- They were exposed to cruel customs such as the killing of twins and albinos, and raiding.

To access more topics go to the History Notes page.
We can control movement by our own free will. But not all of us can do that. Some people cannot voluntarily control their movements, or even their speech. This does not mean they're crazy; it is simply beyond their control. This condition is called Tourette Syndrome.

What is Tourette Syndrome?

Tourette syndrome (TS) is a neurological disorder characterized by repetitive, stereotyped, involuntary movements and vocalizations called tics. The disorder is named for Dr. Georges Gilles de la Tourette, the pioneering French neurologist who in 1885 first described the condition in an 86-year-old French noblewoman. Tics characteristically wax and wane, can be suppressed temporarily, and are typically preceded by a premonitory urge. Tourette's is defined as part of a spectrum of tic disorders, which includes provisional, transient, and persistent (chronic) tics. Tics can appear suddenly, slowly fade, and then appear again. Sometimes Tourette Syndrome is called "the swearing disease" because some people with TS involuntarily blurt out inappropriate comments and/or curse words. TS does not affect intelligence or life expectancy.

How common is Tourette Syndrome?

Tourette Syndrome is not as rare as it might appear; it is uncommon, and in some people it presents with only mild symptoms. Males are about three to four times more likely than females to develop Tourette syndrome. Between 0.4% and 3.8% of children ages 5 to 18 may have Tourette's; the prevalence of other tic disorders in school-age children is higher, with the more common tics of eye blinking, coughing, throat clearing, sniffing, and facial movements.

What causes Tourette Syndrome?

Genetics and environment may play a role in the development of TS, but the exact cause is unknown. Some research points to changes in the brain and problems with how nerve cells communicate. A disturbance in the balance of neurotransmitters (chemicals in the brain that carry nerve signals from cell to cell) might play a role. Tics are believed to result from dysfunction in cortical and subcortical regions, the thalamus, basal ganglia, and frontal cortex. Neuroanatomic models implicate failures in circuits connecting the brain's cortex and subcortex, and imaging techniques implicate the basal ganglia and frontal cortex.

What are the symptoms of Tourette Syndrome?

The main symptoms of TS are motor tics (sudden, apparently uncontrollable movements like exaggerated blinking of the eyes) or vocal tics (apparently uncontrollable uttered sounds such as throat clearing, grunting, or sniffing). Tics usually worsen when the person is under stress, and they can become prolonged or change over time. Tics are classified as either simple or complex. Simple motor tics usually involve just one group of muscles. Some examples are eye blinking and grimacing. In contrast, complex motor tics usually involve more muscle groups and might look like a series of movements. Simple vocal tics can be throat clearing, sniffing, or humming, whereas complex vocal tics can involve repeating other people's words (a condition called echolalia) or involuntary swearing (called coprolalia).

Motor tics involve movement. They include:
- Arm or head jerking
- Making a face

Vocal tics include:
- Barking or yelping
- Clearing your throat
- Repeating what someone else says

Before a motor tic happens, a person with TS may get a sensation that can feel like a tingle or tension. The movement makes the sensation go away.
Symptoms usually subside as the child grows up, and tics can become minimal by adolescence and adulthood. Some people can control their tic symptoms, but tension builds and eventually has to be released as a tic. Since controlling a tic requires much effort, a person doing so usually cannot concentrate on other activities (e.g., in the classroom or at work).

How is Tourette Syndrome diagnosed?

For Tourette Syndrome to be diagnosed, the patient is referred to a neurologist, who will examine the patient and ask about how the symptoms present. The following are the usual questions asked:
- What did you notice that brought you here today?
- Do you often move your body in a way you can't control? How long has that been happening?
- Do you ever say things or make sounds without meaning to? When did it start?
- Does anything make your symptoms better? What makes them worse?
- Do you feel anxious or have trouble focusing?
- Does anyone else in your family have these kinds of symptoms?

Imaging tests may also be ordered:
- MRI: uses powerful magnets and radio waves to make pictures of organs and structures inside your body.
- CT scan: a powerful X-ray that makes detailed images of your insides.

How is Tourette Syndrome treated?

There is no cure for TS, but there are strategies used to manage and/or control its symptoms:

Habit reversal therapy involves monitoring the pattern and frequency of the tics and identifying any sensations that trigger them. The next stage is to find an alternative, less noticeable method of relieving the sensations that cause a tic (known as premonitory sensations). This is known as a competing response.

Exposure with response prevention (ERP) involves increasing exposure to the urge to tic in order to suppress the tic response for longer. This works on the theory that you get used to the feeling of needing to tic until the urge, and any related anxiety, decreases in strength.

Medications can also help if the symptoms are more frequent and severe: alpha2-adrenergic agonists, muscle relaxants, and dopamine antagonists to relax muscles or control the nerve impulses behind tics. Antidepressants and anxiolytics can help when the patient also has depression and/or anxiety.

What is the prognosis for Tourette Syndrome?

The symptoms improve after about 10 years in two-thirds of TS cases. The tics may diminish greatly and may even disappear, so that medications and therapy are no longer necessarily needed. In the remaining one-third of cases the symptoms continue, and medications will be continued. Nevertheless, TS symptoms generally get milder as the patient ages.

What happens if Tourette Syndrome is not diagnosed?

If TS is not identified right away, complications may arise. The patient may lose concentration in class or at work. He or she may even become a subject of ridicule and discrimination, and develop low self-esteem, anxiety, and depression.

Can someone with Tourette Syndrome be successful?

Yes. A person with TS can become successful in many areas of life given awareness, acceptance, and/or therapy or medicine. The most important factor is that he or she is given support and understanding instead of discrimination. Below are some famous people with diagnosed or suspected TS:
- Jim Eisenreich, professional baseball player
- Mahmoud Abdul-Rauf, professional basketball player
- Samuel Johnson, British writer who penned the Dictionary of the English Language and the Lives of the Poets
- Wolfgang Amadeus Mozart (huh? Mozart again?
He also had suspected ADHD and/or autism)
- Tim Howard, soccer player
- Howard Hughes, one of the richest men in history
- Dan Aykroyd, actor (he also had Asperger's)
- Michael Wolff, jazz musician and actor
- Dash Mihok, actor

So, people with TS should be treated as our equals instead of being laughed at. Who knows? The one with involuntary tics may become more successful than you!

This concludes the description of the conditions under the umbrella of neurodiversity. There are more conditions aside from dyslexia, ADHD, autism, dyspraxia, dyscalculia, and Tourette Syndrome, but I will discuss them in my following blog posts.
An Ice Age brought on by global warming was the scenario depicted in the movie THE DAY AFTER TOMORROW. While the science on which the movie is based has been called into question, there may be some merit in the theory that global warming could cause an Ice Age.

Why is Europe's climate comparatively milder than that of other places at the same latitude? Alaska and Greenland, both the same distance from the North Pole as Europe, are covered with ice and permafrost, while most of Europe is not. The ocean current called the Gulf Stream brings warm water up to Europe from the Caribbean, and this water brings warmth to the countries in its path. Cooler water from Europe feeds back into the loop and flows back to the Caribbean in a continuous cycle.

The Gulf Stream has been significantly weakened in every major cooling event, including the last great Ice Age. In the past this weakening was brought on by natural events. In current times, global warming brought on by human activities could be the cause of slowing or even stopping the Gulf Stream. If this were to happen, the cold waters would stay in the area of Europe and the northeastern US and could mean an Ice Age for those regions.

If an Ice Age occurs, it will likely be due to the melting of polar ice, which will dump large quantities of cold, fresh water into the ocean. This would disrupt the Gulf Stream and cause the cooling of many areas that now have milder climates. The return flow of cold water from Greenland, which goes back to the Caribbean, has already shown a weakening over the last 50 years: a twenty percent decline in the amount of current flowing in this direction. It stands to reason that the warm waters returning from the Caribbean have also decreased in volume.

The change would not be gradual. This is a phenomenon that takes place rather quickly. Perhaps it would not happen as fast as depicted in THE DAY AFTER TOMORROW, but it could happen within a few short years. A slowing or stoppage of the Gulf Stream would affect the entire earth. Observations of current data, together with historical information gleaned from studying the ocean and the lands around it, suggest that it is indeed possible that global warming could bring about a modern Ice Age.
Invasive Species Management

What are invasive species?

Invasive species are introduced plants and animals that cause harm to the environment, the economy, and/or human health. Often displacing native species, these invaders skew the delicate native balance between animals, plants, and important processes such as water flow and fire. Florida is a good breeding ground for invasives due to its tropical weather conditions and rising temperatures, which allow certain plants to spread prolifically.

Exotic plants are those that have been introduced, either purposefully or accidentally, from a natural range outside of Florida. Not all exotics are invasive; the term applies only to those that have been shown to significantly alter habitats or biological processes. Every two years, the Florida Exotic Pest Plant Council releases an Invasive Plants List. In 2009, approximately 150 Category I & II plants were identified as invasive in Florida; more than half of those are located in central and north Florida. Category I plants have been shown to alter native plant communities by displacing native species, changing community structure or ecological functions, or hybridizing with neighbors. Category II plants have increased in abundance or frequency but have not yet altered Florida plant communities to the extent of Category I plants.

Within County property, these are the most common invasive plants:
How astronomers fill in uncharted areas of the universe

Thanks to new tools, scientists are quickly mapping the stars.

Astronomers are filling in the blank spaces on their 3-D map of our universe thanks to their ability to sense almost every conceivable form of electromagnetic radiation. Those blanks include remote regions of space and time when the first stars formed and when young galaxies began to group themselves into gravitationally bound clusters.

Last April, NASA's Swift gamma ray space telescope detected what astronomers called a gigantic "blast from the past." Gamma rays are the most energetic form of electromagnetic radiation. Astronomers would still be scratching their heads over what exactly they had found if those gammas were their only data. So observatories around the world immediately began studying the event through radio waves, infrared radiation, and X-rays. Now two international research teams report that those data give direct insight into the unexplored era when the first stars switched on.

In an announcement from Britain's University of Leicester, Nial Tanvir, who led one team, said, "This observation allows us to begin exploring the last blank space on our map of the universe." See the journal Nature for technical details. In an announcement from the National Radio Astronomy Observatory in Socorro, New Mexico, one of the other team's researchers, Derek Fox at Pennsylvania State University, explained: "It's important to study these explosions with many kinds of telescopes.... The result is a unique look into the very early universe that we couldn't have gotten any other way." That result will appear in Astrophysical Journal Letters.

The teams are studying the death explosion of one of the first stars. It's the most distant astronomical object yet discovered. It happened some 13 billion light years away, at a time when the universe was only about 630 million years old. That's a mere 4 percent of its current 13.7 billion-year age. At that time those very early stars were generally brighter, hotter, and more massive than their later successors. The gamma ray explosion (labeled GRB 090423) produced an expanding, nearly spherical gaseous halo whose characteristics reflect the nature of the expired star.

Multidata astronomy (infrared, radio, optical, and X-ray) is helping astronomers explore another little-known era: the time when galaxy clusters first formed. NASA's Chandra X-ray Center reports that such combined data have revealed the most distant, and oldest, cluster yet known. Labeled JKCS041, it formed when the universe was about a quarter of its present age. It now is located 10.2 billion light years away. Research team member Stefano Andreon at Italy's National Institute for Astrophysics explained, "We don't think gravity can work fast enough to make galaxy clusters much earlier." Ben Maughan at Britain's University of Bristol likened the discovery to "finding a Tyrannosaurus Rex fossil that is much older than any other known." That could give a new perspective on dinosaur evolution. Likewise, finding very old galaxy clusters could change ideas about cosmological evolution. Details are coming in Astronomy & Astrophysics.

NASA's announcement explained that it took all the different kinds of data to pin down the nature of this galaxy grouping. The X-ray data in particular were crucial in proving it is a genuine cluster of galaxies bound together by their mutual gravity. In the 21st century, astronomers poring over data from only one type of telescope could be considered quaint.
Blood Vessel Anatomy We need to briefly discuss the anatomy of the vessels. There are three types of vessels - arteries, veins, and capillaries - and they are not anatomically the same. They are not just tubes through which the blood flows. Both arteries and veins have layers of smooth muscle surrounding them. Arteries have a much thicker layer, and many more elastic fibers as well. The largest artery, the aorta leaving the heart, also has cardiac muscle fibers in its walls for the first few inches of its length immediately leaving the heart. Arteries have to expand to accept the blood being forced into them from the heart, and then squeeze this blood onward to the veins when the heart relaxes. Arteries have the property of elasticity, meaning that they can expand to accept a volume of blood, then contract and squeeze back to their original size after the pressure is released. A good way to think of them is like a balloon. When you blow into the balloon, it inflates to hold the air. When you release the opening, the balloon squeezes the air back out. It is the elasticity of the arteries that maintains the pressure on the blood when the heart relaxes, and keeps it flowing forward. If the arteries did not have this property, your blood pressure would be more like 120/0, instead of the 120/80 that is more normal. Arteries branch into arterioles as they get smaller. Arterioles eventually become capillaries, which are very thin and branching. Capillaries are really more like a web than a branched tube. It is in the capillaries that the exchange between the blood and the cells of the body takes place. Here the blood gives up its carbon dioxide and takes on oxygen. In the special capillaries of the kidneys, the blood gives up many waste products in the formation of urine. Capillary beds are also the sites where white blood cells are able to leave the blood and defend the body against harmful invaders. Capillaries are so small that when you look at blood flowing through them under a microscope, the cells have to pass through in single file. As the capillaries begin to thicken and merge, they become venules. Venules eventually become veins and head back to the heart. Veins do not have as many elastic fibers as arteries. Veins do have valves, which keep the blood from pooling and flowing back into the legs under the influence of gravity. When these valves break down, as often happens in older or inactive people, the blood does flow back and pool in the legs. The result is varicose veins, which often appear as large purplish tubes in the lower legs. Source: Carolina Biological Supply/Access Excellence
How Much? FIVE TIMES Higher Than Normal Soil Scientists have known for many years that coal and its combustion wastes contain radioactive elements. However, they lacked a complete picture of radioactivity in coal ash, which is the country's second-largest waste stream. A newly released Duke University-led study published in Environmental Science & Technology now shows that radioactive elements are present in both coal and coal ash from all three major coal basins — the Illinois, Appalachian and Powder River basins. The levels of radioactivity in the coal ash were also up to FIVE TIMES higher than levels in normal soil and up to TEN TIMES higher than in the parent coal itself, because of the way combustion concentrates radioactivity. What is Coal Ash? Coal combustion waste, or coal ash, is a byproduct of burning coal. Illinois generates more than 4.4 million tons of coal ash every year, and imports toxic ash from six other states. This toxic waste is stored at more than 24 power plant sites throughout the state, ranking Illinois as #1 in the country for the total number of coal ash disposal sites. Most of these disposal pits are not lined, and many are leaking. Coal ash contaminants have been found in groundwater at every single coal-fired power plant site investigated by the Illinois EPA. Living near coal ash impoundments increases one's risk for serious medical problems, such as: - Birth defects. - Neurological damage. - Reproductive issues. - Tumors and cancers. Is Monitoring Required? While companies must monitor levels of contaminants in coal ash ponds and nearby groundwater, they are not required to monitor radioactivity. Therefore, we don't yet know how much of these contaminants are released to the environment, or how they might affect human health in areas where coal ash ponds and landfills are leaking. What We Can Do The first-ever nationwide coal ash rule was finalized in December 2014. However, in late July, the House passed H.R. 1734, a bill aimed at gutting this rule before it takes effect in October 2015. We need stronger, not weaker, protections. While the EPA rules do not specifically address radioactive elements, they do address the leaking of contaminants into groundwater, the blowing of contaminants into the air as dust, and the catastrophic failure of coal ash surface impoundments. H.R. 1734 eliminates the EPA's ban on dumping toxic coal ash directly into drinking water aquifers. The bill is currently in the Senate, awaiting action. Watch EJC's website for actions you can take to help prevent H.R. 1734 and its companion bill SB 180 from becoming law.
Almost every year, Texas experiences hazy skies for a few days or weeks in the spring and summer. The sky can look hazy and your favorite landmarks can be harder to see. Let's talk about that haze, where it comes from, and the potential impacts on Houston's air quality. The main cause of this haze is smoke transported across the Gulf of Mexico from agricultural fires in Mexico and Central America. Open burning of crop residue is a method used by growers around the world to improve yields, reduce the need for herbicides and pesticides, reduce fire hazards, and control disease, weeds, and pests. It's cheap and easy to do, and particularly for farmers in countries that don't have many other options to prepare their land for planting, it can be very helpful. Unfortunately, there are two big disadvantages to agricultural burning. First, fire is unpredictable, and it can spread and spark wildfires. Second, the smoke produced by these fires can raise air pollution levels and potentially impact human health, even hundreds of miles away. Smoke reaches the U.S. from Central America when upper-level winds transport it northward. In Texas, we regularly see the effects of the burning season in Mexico and Central America on Southeast Texas. Most years, this drifting smoke elevates fine particle and ozone levels across Houston and South Texas but does not create a significant public health threat. One year, however, smoke concentrations caused ozone pollution to increase enough to pose a public health threat. During the period from April 1, 1998 through June 20, 1998, large amounts of smoke were transported into Texas from fires in Mexico and Central America. These fires were unusually intense and widespread because of severe drought conditions. The fires also produced high levels of ozone and carbon monoxide, and these pollutants accompanied the smoke into Texas. The image below shows the intensity of the smoke during this episode. Source: NASA's Goddard Space Flight Center at www.gsfc.nasa.gov Clouds show up in the SeaWiFS image as bright white, while the smoke plumes and haze are a light tan. In some cases, you can see the smoke plumes streaming off the Yucatan coast. The other two images represent alternate views of the smoke and its components. By May 1998, smoke intensity had climbed to levels that could threaten public health. Concerned by this threat, the Texas Natural Resource Conservation Commission (the predecessor agency to the Texas Commission on Environmental Quality (TCEQ)) stepped up its air quality monitoring activities and worked with the news media and other governmental agencies to make the public aware of the dangers posed by these smoke levels. It shifted additional ground monitors into the Rio Grande Valley and made numerous flights with an airborne air pollution monitor. The smoke was so severe at times that local schools and the Texas University Interscholastic League even cancelled or relocated outdoor sports. After the episode, TCEQ performed a comparison analysis of air pollution on a smoke day (May 8) and a non-smoke day (October 3) with almost identical weather conditions in Brownsville. It showed that ozone, carbon monoxide, and particulate levels were much higher on the smoke day. Ozone levels on the smoke day reached 1-hour values near 100 parts per billion, whereas on the non-smoke day the ozone peaked at only 20 parts per billion. An 80 parts-per-billion difference from long-distance smoke is quite an impact!
As a result of the analyses performed by TCEQ, it was determined that smoke from the fires in Mexico actually created several days of increased ozone pollution in Houston that spring. If you want to keep close track of these events as they occur, TCEQ maintains an air pollution forecast page (http://www.tceq.texas.gov/airquality/monops/forecast_today.html) where you can sign up to receive forecasts each week. For More Information: The Texas Department of Agriculture and TCEQ have adopted rules and guidelines for farmers and ranchers who want to use burning as a part of their agricultural management plan. You can learn more about Texas' rules at http://www.texasagriculture.gov/home/productionagriculture/prescribedburnprogram.aspx Learn more about how NASA tracks fires and weather events through their satellite research program at: http://earthobservatory.nasa.gov/GlobalMaps/view.php?d1=MOD14A1_M_FIRE
In English, the gerund is formed by adding -ing to a verb root. It is identical in form to the present participle (ending in -ing) and can behave as a verb within a clause (so that it may be modified by an adverb or have an object), but the clause as a whole (sometimes consisting of only one word, the gerund itself) acts as a noun within the larger sentence. For example: Eating this cake is easy. In "Eating this cake is easy", "eating this cake", although traditionally known as a phrase, is referred to as a non-finite clause in modern linguistics. "Eating" is the verb in the clause, while "this cake" is the object of the verb. "Eating this cake" acts as a noun phrase within the sentence as a whole, though; the subject of the sentence is the whole non-finite clause "eating this cake". Other examples of the gerund: - I like swimming. (direct object) - Swimming is fun. (subject) Not all nouns that are identical in form to the present participle are gerunds. The formal distinction is that a gerund is a verbal noun – a noun derived from a verb that retains verb characteristics and functions simultaneously as a noun and a verb – while other nouns in the form of the present participle (ending in -ing) are deverbal nouns, which function as common nouns, not as verbs at all. Compare: - I like fencing. (gerund, an activity, could be replaced with "to fence") - The white fencing adds to the character of the neighborhood. (deverbal, could be replaced with an object such as "bench") Double nature of the gerund As the result of its origin and development, the gerund has nominal and verbal properties. The nominal characteristics of the gerund are as follows: - The gerund can perform the function of subject, object and predicative. - The gerund can be preceded by a preposition: - I'm tired of arguing. - Like a noun, the gerund can be modified by a noun in the possessive case, a possessive adjective, or an adjective: - I wonder at John's keeping calm. - Is there any objection to my seeing her? - Brisk walking relieves stress. The verbal characteristics of the gerund include the following: - The gerund of transitive verbs can take a direct object: - I've made good progress in speaking Basque. - The gerund can be modified by an adverb: - Breathing deeply helps you to calm down. - The gerund has the distinctions of aspect and voice. - Having read the book once before makes me more prepared. - Being deceived can make someone feel angry. Verb patterns with the gerund Verbs that are often followed by a gerund include admit, adore, anticipate, appreciate, avoid, carry on, consider, contemplate, delay, deny, describe, detest, dislike, enjoy, escape, fancy, feel, finish, give, hear, imagine, include, justify, listen to, mention, mind, miss, notice, observe, perceive, postpone, practice, quit, recall, report, resent, resume, risk, see, sense, sleep, stop, suggest, tolerate and watch. Additionally, prepositions are often followed by a gerund. - I will never quit smoking. - We postponed making any decision. - After two years of deciding, we finally made a decision. - We heard whispering. - They denied having avoided me. - He talked me into coming to the party. - They frightened her out of voicing her opinion.
"A kind of verbal noun, having only the four oblique cases of the singular number, and governing cases like a participle." - Re: Post Hey man, I gots ta know (Gerund versus gerundive), Phil White, Mon August 7, 2006 1:35 pm
This concept describes the steps to take when adding and subtracting values with differing numbers of significant figures. Accompanying resources include:
- Guided examples in which viewers are asked to preserve significant figures in problems involving addition and subtraction.
- A review of the rules for rounding values obtained by addition or subtraction.
- Practice questions on uncertainty in addition and subtraction.
- A Question and Answer Table to encourage students to generate questions, activate their prior knowledge, and collect information to answer their own questions.
- An explanation of the rules for rounding numbers and of how banks round numbers.
- Explanations and examples of significant figures while adding or subtracting, including an example with scientific notation.
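The rule itself is simple: a sum or difference keeps only as many decimal places as the least precise value entering it. Here is a minimal Python sketch of that rule (my own illustration, not part of the original materials; the function name add_with_sig_figs is hypothetical):

```python
def add_with_sig_figs(*terms: str) -> str:
    """Add decimal-string values, rounding the result to the fewest
    decimal places carried by any of the operands."""
    def decimal_places(s: str) -> int:
        return len(s.split(".")[1]) if "." in s else 0

    places = min(decimal_places(t) for t in terms)
    total = sum(float(t) for t in terms)
    return f"{total:.{places}f}"

# 12.52 mL + 3.1 mL: the answer keeps one decimal place, like the 3.1 term.
print(add_with_sig_figs("12.52", "3.1"))  # -> 15.6
```

Passing the values as strings is deliberate: the number of decimal places a measurement reports is part of the data, and a bare float would lose it.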
The Aztecs
Aztec society had two main classes: first the nobility or pilli, then the common people or macehualli. Each of these was further broken up into groups of people that had quite different lives. There were also slaves, who were normally well treated. The children of a slave were born free. There were ways for a slave to gain freedom, such as purchasing it. The Mexica people of the Aztec empire had education for everyone regardless of gender or class, though boys received a wider education than girls. Girls were taught how to run a home, cook, and care for a family. Women had a lot of power in society, though it was exercised behind the scenes. The Aztec economy was based mostly on trade and agriculture, which was provided by the cities in outlying lands. The market was where farmers and craftsmen presented their goods to around 60,000 village and city residents a day. The Aztecs farmed any unused land they could find. Merchants traveled on rivers and the Pacific Ocean and traded with other tribes. The Aztecs traded a lot, and their wealth started to grow rapidly.
-The Aztecs were based in Mexico and had more than one leader. The most important leader was Moctezuma.
-New emperors were elected by a high council of four nobles who were related to the previous ruler.
-Aztec laws were based on royal decrees and on customs that had been passed down from generation to generation. These laws were also interpreted and applied by Aztec judges in the various court systems.
-It's hard to know today how much crime there actually was in the empire. As we will see, punishment was harsh. Some crimes considered serious would include stealing from another's crops, public drunkenness, and murder.
-A flower war or flowery war is the name given to the battles fought between the Aztec Triple Alliance and some of their enemies.
-The supreme court participated in all government decisions.
Aztec Environment & Intellectual Life
The Aztecs were able to make 1,000 medicines from plants. They used them to treat fevers, cure stomach aches, and heal wounds. They also set broken bones and practiced dentistry. The Aztec state was small until it started conquering aggressively. The empire was one of the first places to make education mandatory for everyone. The Aztecs invented popcorn. They also made a spicy kind of hot chocolate with cacao beans, corn flour, water, and chillies. Richly colored clothing, architecture, ceremonial knives, head dresses - many things were adorned with jewels and feathers. It is said that the emperor never wore the same clothes twice. The Nahuatl language is an agglutinative language, which means that words and phrases are put together by combining prefixes, suffixes, and root words in order to form an idea.
Picture of Aztec money
Most of the Aztec people were farmers. They used irrigation to water their crops. They grew corn, squash, and beans. The Aztecs were located in south-central Mexico, in a valley almost 7,000 feet above sea level surrounded by mountains. Because of the high altitude, the average temperature was only around 12 degrees Celsius (about 54 degrees Fahrenheit). That made the area good for only limited kinds of crops, because occasional frosts could easily kill them. Annual rainfall was almost 450 mm in the north and up to 1,000 mm in the south. In temples, Aztec priests performed ceremonies, including human sacrifice, to please their main god, Huitzilopochtli (pronounced Weetz-ee-loh-POSHT-lee), the god of war. The Aztec priests used their knowledge to create an accurate calendar. Aztec astronomers also predicted eclipses and movements of planets. They also kept records in hieroglyphics.
Today we started learning the process for word study. We started by practicing sorting words using our names. Students amazed me with the ways they thought to sort. After learning that words follow patterns (not all words - some are oddballs), we looked at this week's list and sorted the words by their pattern. See if your student can explain the pattern to you tonight. Starting next week, students will work in a small group with me to identify and practice the patterns in their words. This week we are learning the process all together. An instruction sheet came home to help you assist with your student's homework. Please keep it in a safe place. We will use these practices all year with new words every week.
Origins 25(2):74-98 (1998). ARCHAEOLOGY: SHIP-BUILDING ERECTINES Morwood MJ, O'Sullivan PB, Aziz F, Raza A. 1998. Fission-track ages of stone tools and fossils on the east Indonesian island of Flores. Nature 392:173-176. Many southeastern Asian islands, such as Java, Sumatra and Borneo, lie in shallow waters, and were once connected to the Asian mainland during low sea level. Other islands, including Flores, lie in deeper water, and are believed to have been always isolated. Stone tools have been found associated with fossils of Stegodon pygmy elephants and Geochelone tortoises, indicating the activities of humans in the area. Fission-track dating of volcanic tuffs adjacent to the fossils produced ages of about 800,000 to 880,000 years. Homo erectus was the only known hominid in the area at the time. This indicates that Homo erectus must have had the ability to build boats and cross short stretches of ocean, an ability not generally attributed to the erectines. BIOGEOGRAPHY: DISPERSAL OR VICARIANCE? Baum DA, Small RL, Wendel JF. 1998. Biogeography and floral evolution of baobabs (Adansonia, Bombacaceae) as inferred from multiple data sets. Systematic Biology 47:181-207. Baobabs are often-photographed trees with distinctive shapes. One species is found in Africa, six in Madagascar and one in northwestern Australia. At least three explanations might be offered for this unusual distribution. It might be produced by fragmentation of a continuous distribution as a consequence of the breakup of Pangaea, or it might be the result of overwater dispersal. A third possibility, that the trees are not truly related, seems unlikely. Relationships among the eight baobab species were estimated from morphology and from three different molecular sequences. Morphologically, four of the Madagascan species are more similar to the Australian species than to the African species. Internal transcribed spacer sequences from nuclear ribosomal DNA showed the Madagascan species to be more similar to the African species. Comparisons of a chloroplast intron showed two of the Madagascan species to be most similar to the African species, but the other four Madagascan species equally similar to the African and Australian species. Restriction site analysis showed the Australian species to be most similar to the African species. All the molecular differences among the species were small in magnitude. This fact, coupled with the discordance among the gene phylogenies and the lack of Adansonia pollen in Mesozoic rocks, led the investigators to conclude that the present distribution of baobabs is the result of overwater dispersal. This is made more plausible by the tough, water-resistant seeds. BIOGEOGRAPHY: HURRICANES AS DISPERSAL AGENTS Censky EJ, Hodge K, Dudley J. 1998. Over-water dispersal of lizards due to hurricanes. Nature 395:556. The Caribbean island of Anguilla lacked green iguanas until 4 October 1995, when at least 15 of the lizards washed up on the shore on a floating mat of logs and trees. This event was preceded by two large hurricanes in the month of September, and it is postulated that the lizards were washed away from their original home by the winds and rain of one or both of these hurricanes. The most likely source of the iguanas is thought to be the island of Guadeloupe, at a distance greater than 250 km. The group of lizards included both males and females, and this observation confirms that overwater dispersal and successful colonization can occur. DESIGN: THE GENETIC CODE Freeland SJ, Hurst LD. 1998.
The genetic code is one in a million. Journal of Molecular Evolution 47:218-248. The genetic code consists of 64 codons (groups of 3 bases), each of which codes for a specific amino acid or a reading signal. Similar codons generally code for the same or similar amino acids, which means that a point mutation (change of a single nucleotide) is likely to result in replacement of an amino acid by the same or a similar amino acid. Considering that transition mutations (purine to purine, or pyrimidine to pyrimidine) are more frequent than transversion mutations (purine to pyrimidine, or pyrimidine to purine), what is the probability that a code assembled at random would be as efficient as the existing code? Calculations indicate that the probability is only one in a million, indicating that the genetic code is non-random in arrangement, as expected if it were produced by selection.
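To give a concrete feel for that calculation, here is a simplified Python sketch (my own illustration, not the authors' code). It uses approximate polar requirement values for the amino acids and weights all single-nucleotide changes equally; the published analysis additionally weighted transitions more heavily than transversions, which is what sharpens the estimate toward one in a million.

```python
import random

BASES = "TCAG"
# Standard genetic code, codons ordered TTT, TTC, TTA, TTG, TCT, ... ("*" = stop).
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {a + b + c: AA[16 * BASES.index(a) + 4 * BASES.index(b) + BASES.index(c)]
        for a in BASES for b in BASES for c in BASES}
# Approximate polar requirement values (after Woese); treat these as assumptions.
POLAR = {"A": 7.0, "C": 4.8, "D": 13.0, "E": 12.5, "F": 5.0, "G": 7.9,
         "H": 8.4, "I": 4.9, "K": 10.1, "L": 4.9, "M": 5.3, "N": 10.0,
         "P": 6.6, "Q": 8.6, "R": 9.1, "S": 7.5, "T": 6.6, "V": 5.6,
         "W": 5.2, "Y": 5.4}

def cost(code):
    """Mean squared change in polar requirement over all single-base changes."""
    diffs = []
    for codon, aa in code.items():
        if aa == "*":
            continue
        for pos in range(3):
            for base in BASES:
                if base == codon[pos]:
                    continue
                neighbor = code[codon[:pos] + base + codon[pos + 1:]]
                if neighbor != "*":
                    diffs.append((POLAR[aa] - POLAR[neighbor]) ** 2)
    return sum(diffs) / len(diffs)

def random_code():
    """Reassign which amino acid each synonymous codon block encodes."""
    aas = sorted(POLAR)
    mapping = dict(zip(aas, random.sample(aas, len(aas))))
    return {c: (aa if aa == "*" else mapping[aa]) for c, aa in CODE.items()}

natural = cost(CODE)
trials = 10_000
better = sum(cost(random_code()) < natural for _ in range(trials))
print(f"natural code cost: {natural:.2f}")
print(f"random codes that beat it: {better} of {trials}")
```

Even with this crude, unweighted measure, only a tiny handful of shuffled codes typically outperform the natural one.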
DESIGN: MOLECULAR MACHINES Alberts B. 1998. The cell as a collection of protein machines: preparing the next generation of molecular biologists. Cell 92:291-294. This is the introductory article to a collection of articles describing molecular machines in cells. The author no longer views cellular processes as driven by random collisions of proteins, colliding two at a time in an undirected sequence. He now believes that virtually every major cellular process involves assemblies of 10 or more protein molecules, each of which is interacting with other molecular assemblies. Each of these assemblies functions as a machine, with highly coordinated moving parts. The author then discusses whether one should have expected such well-engineered protein machines as are found, and appeals to university science departments to consider how better to prepare the next generation of molecular biologists. They will need to have a good knowledge of mathematics and the physical sciences in order to unravel the mysteries of how these machines operate. Baker TA, Bell SP. 1998. Polymerases and the replisome: machines within machines. Cell 92:295-305. Replication of DNA involves an interacting complex of several molecular machines. These include the primases and polymerases that copy the DNA, the exonucleases that correct copying mistakes, the clamping proteins that attach the polymerases to the DNA, and the helicases that separate the DNA strands so they can be copied. The clamping proteins provide an example of mechanical action. A sliding clamp protein forms a ring around the DNA strand at the appropriate location, and attaches the polymerase unit. A clamp loader protein opens the ring-shaped sliding clamp protein, moves it to the correct location, and closes it around the DNA strand. This requires specific recognition sites for DNA initiation sites, the sliding clamp protein, and ATP, along with specific structural features that facilitate appropriate configurational changes to open and close the sliding clamp protein at the proper times. Bukau B, Horwich AL. 1998. The Hsp70 and Hsp60 chaperone machines. Cell 92:351-366. Molecular chaperones are molecules that cause conformational changes in other molecules, converting them into active forms. For example, an inactive protein may be converted into an active enzyme by a molecular chaperone that causes an appropriate change in the folding of the protein. In some cases, proper folding may be achieved by a series of molecular chaperones. Chaperonins consist of double-ring assemblies with a central cavity that can attach to proteins that are not properly folded. Attachment of the protein molecule triggers conformational changes in the chaperonin that result in proper folding of the protein. The chaperonin's function is made possible by its three-part, hinged structure, its inner hydrophobic recognition site, an ATP recognition site, and by its specific shape that facilitates appropriate conformational changes. DeRosier DJ. 1998. The turn of the screw: the bacterial flagellar motor. Cell 93:17-20. The mechanism for movement of the bacterial flagellar motor is unknown, but much has been learned of the structure. The flagellum "resembles a machine designed by a human...." It consists of a rigid filament connected to a curved piece by two junction proteins. The flagellum is set into a socket in the inner cell membrane, and rotates within two bushing-like rings embedded in the outer cell membrane. Three other rings are also present. About 50 genes are involved in flagellar structure and sensitivity to chemicals. Kinosita K, Yasuda R, Noji H, Ishiwata S, Yoshida M. 1998. F1-ATPase: a rotary motor made of a single molecule. Cell 93:21-24. ATP, the major energy carrier molecule for living cells, is constructed with the aid of an enzyme, ATP synthase. The ATP synthase molecule includes a rotary motor only 10 nm in size. The motor has a central shaft that rotates inside a hexagonal structure of six subunits. Rotation of the central shaft has been confirmed by observing movement of a larger molecule attached to the shaft. When ATP is added to the medium, the central shaft rotates counterclockwise, hydrolyzing the ATP, probably with an efficiency near 100%. If the shaft is turned clockwise, as by passage of a proton, ATP is produced. Comment. The articles in these issues of Cell provide an impressive reminder of the complexity of cellular processes, and the extraordinary amount of information in their sequences. If, as seems probable, each molecular machine is irreducibly complex, creation by direct agency seems the best explanation for the origins of these molecular machines. EVOLUTION: THE HOMOLOGY PROBLEM Tautz D. 1998. Debatable homologies. Nature 395:17-18. The concept of homology is foundational for evolutionary theory, but extremely difficult to define. Further, "homology concepts tend to fail when it comes to tracing evolutionary novelties." Studies of developmental genes have complicated the issue by showing that genes with similar sequences may produce similar structures, such as eyes or legs, in different phyla, even when such structures are believed absent from their common ancestors. The potential for gene duplications, losses and re-duplications means that sequence similarities are not, of themselves, sufficient to establish homology. Studies of sea urchin hybrids may provide increased understanding of development, but it appears that their regulatory modules may be composed of subunits that can be combined in different ways. If homology must be identified through similarities in complex regulatory modules, the term "homology" would indeed be "ripe for burning," as suggested by evolutionist J. Maynard Smith. EVOLUTION: IS THE PRESENT THE KEY TO THE PAST? Thompson JN. 1998. Rapid evolution as an ecological process. Trends in Ecology and Evolution 13:329-332. The importance of rapid changes in species has been overlooked, but greater recognition of such change rates would increase the importance of the field of evolutionary ecology.
Introduced species provide many examples of rapid changes, perhaps because they are more likely to undergo directional selection rather than fluctuating selection. Rates of proportional change over time are measured in units known as "darwins." Calculated rates of change are inversely proportional to the estimated time. Rates over short time scales, as measured in real time, tend to be relatively very high. Rates of change over long time scales, as calibrated against the geological time scale, tend to be relatively very low.
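For readers unfamiliar with the unit: one darwin is conventionally defined as a change by a factor of e per million years (after Haldane). The tiny Python sketch below (my own illustration, not from the paper) shows why the same proportional change yields an enormous rate when measured over decades and a tiny one when spread over geological time:

```python
import math

def darwins(x1: float, x2: float, interval_myr: float) -> float:
    """Rate of proportional change: ln(x2/x1) per million years."""
    return math.log(x2 / x1) / interval_myr

# The same 10% size increase, measured over 100 years vs. 1 million years:
print(darwins(10.0, 11.0, 100 / 1_000_000))  # ~953 darwins
print(darwins(10.0, 11.0, 1.0))              # ~0.095 darwins
```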
EVOLUTION: NATURAL SELECTION AND LIZARDS Losos JB, Jackman TR, Larson A, de Queiroz K, Rodriguez-Schettino L. 1998. Contingency and determinism in replicated adaptive radiations of island lizards. Science 279:2115-2118. Lizards of the genus Anolis are abundant on Caribbean islands. Especially on the larger islands, these lizards have diversified into different "ecomorphs" differing in size and habitat preference. The question is whether similar ecomorphs on different islands are most closely related to each other or if the different ecomorphs of an island are most closely related to each other. Phylogenetic relationships among Anolis lizards of the four Greater Antilles (Cuba, Hispaniola, Jamaica and Puerto Rico) were determined using mitochondrial DNA sequences. Results indicate that the different ecomorphs of an island are closely related to each other, and that similar ecomorphs from different islands have independent origins. This example shows that natural selection is more important than chance in modifying body form in these lizards. EVOLUTION: NATURAL SELECTION AND PEPPERED MOTHS Coyne JA. 1998. Not black and white. (Review of) Melanism: evolution in action, by M.E.N. Majerus, Oxford University Press. Nature 396:35-36. The peppered moth has been the centerpiece in the story of the power of natural selection. Moths resting on lichen-colored tree trunks were camouflaged if white but conspicuous to predators if black. Accordingly, most moths in pre-industrial England were white. As soot darkened the tree trunks, black moths were better camouflaged, and the white forms were selectively removed. As pollution came under better control after 1950, the white forms again became common. The story has lost much of its punch with the realization that the moths do not normally rest on tree trunks during the day, do not choose matching backgrounds in controlled studies, and that the white form increased in areas where there was no change in the abundance of lichens on the tree trunks. According to Coyne, we must now abandon the claim that we understand how natural selection has caused shifts in the proportions of white and black peppered moths. Especially notable is his comment: It is also worth pondering why there has been general and unquestioned acceptance of Kettlewell's work. Perhaps such powerful stories discourage close scrutiny. Moreover, in evolutionary biology there is little payoff in repeating other people's experiments, and unlike molecular biology, our field is not self-correcting because few studies depend on the accuracy of earlier ones. Comment. The above quote should provide a sobering reminder to all involved in discussions of creation and evolution that the nature of historical science simply does not justify the degree of confidence seen in more experimental science, despite the reassurances of some of the discussants. GENETICS: MOBILE GENES IN VERTEBRATES Kordis D, Gubensek F. 1998. Unusual horizontal transfer of a long interspersed nuclear element between distant vertebrate classes. Proceedings of the National Academy of Sciences (USA) 95:10704-10709. LINEs, or long interspersed nuclear elements, are segments of DNA found repeated in many copies in a genome. One particular LINE, called the ART-2 retroposon, was found first in cattle, then throughout the ruminants. It was thought to be specific to the ruminants until it was discovered also in vipers. This appeared to be a case of horizontal genetic transfer, perhaps by a common parasite. This possibility was tested by surveying 22 species of snakes, 17 species of lizards, 2 crocodilians, and 2 turtles. ART-2 retroposons were discovered in all the snake species and a majority of the lizard species, but not in the crocodilians or turtles. Horizontal transfer appears to be the best explanation for this pattern. MOLECULAR EVOLUTION: STRONG OR RELAXED SELECTION? Bargelloni L, Marcato S, Patarnello T. 1998. Antarctic fish hemoglobins: Evidence for adaptive evolution at subzero temperature. Proceedings of the National Academy of Sciences (USA) 95:8670-8675. The Antarctic fish fauna is dominated by a group known as notothenioids, which have some exceptional physiological features. Notothenioids include the icefish, famous as the only vertebrate lacking hemoglobin. Icefish are believed able to survive without hemoglobin because of the high oxygen content and reduced metabolic needs in the cold Antarctic waters. MOLECULAR PHYLOGENY: WOOLLY MAMMOTH DNA Noro M, Masuda R, Dubrovo IA, Yoshida MC, Kato M. 1998. Molecular phylogenetic inference of the woolly mammoth Mammuthus primigenius, based on complete sequences of mitochondrial cytochrome b and 12S ribosomal RNA genes. Journal of Molecular Evolution 46:314-326. The elephant family, Elephantidae, includes both types of living elephants and the extinct mammoths. Controversy has surrounded the question as to which two types are the more closely related. Immunological and hair comparisons showed the three genera to be equally distant from each other. Dental studies suggested mammoths and Asian elephants to be more closely related. Previous molecular studies suggested mammoths and African elephants to be more closely related. This study is the first to use the complete sequences of two mitochondrial genes. This study agrees with previous molecular studies that the mammoth is slightly more closely related to African elephants than to Asian elephants. Osawa T, Hayashi S, Mikhelson VM. 1998. Phylogenetic position of mammoth and Steller's sea cow within Tethytheria demonstrated by mitochondrial DNA sequences. Journal of Molecular Evolution 44:406-413. Mitochondrial DNA sequences were compared for African elephants, Asian elephants, extinct woolly mammoths, and several other species. Results show that the woolly mammoth was more closely related to the Asian elephant than to the African elephant. This is consistent with the fossil record, where the African elephant appears before the Asian elephant. Comment. The relationships of the three genera of elephants remain ambiguous, despite a good fossil record, availability of molecular sequences, and accessibility of specimens for morphological comparison. In this example, it is doubtful that one can infer phylogenetic branching sequences based on sequences of first appearances of the species, as attempted by Osawa et al.
The Asian elephant, African elephant, and woolly mammoth are each in a separate genus, and all three genera have first appearances close together in the fossil record. A common ancestry of the three genera of elephants seems plausible to many creationists, and the lack of resolution may be due to a recent, near-simultaneous geographic isolation and genetic divergence. ORIGIN OF LIFE: CHIRALITY Clery D, Bradley D. 1994. Underhanded "breakthrough" revealed. Science 265:21. A previous report of separation of chiral molecules in a strong magnetic field has been retracted. The report was discussed in Origins 24:94 without knowledge of the problem. It turns out that the experiment had been manipulated by one member of the investigating team. Regrettably, fraud occasionally shows up in science, and we apologize for not correcting our report sooner. We thank the Origins reader who informed us of the error. ORIGIN OF LIFE: LIFE ON MARS PUT TO REST Bada JL, Glavin DP, McDonald GD, Becker L. 1998. A search for endogenous amino acids in Martian meteorite ALH84001. Science 279:362-365. Meteorite ALH84001 was discovered in Antarctica, and identified as having come from Mars, presumably as the result of an asteroidal impact on that planet. Certain features of the meteorite were interpreted as probable evidence of life on Mars, a claim that was eventually abandoned. This paper reports that amino acids extracted from the meteorite had an excess of L-enantiomers, indicating contamination by terrestrial sources, rather than having been carried from an extraterrestrial source. Jull AJT, Courtney C, Jeffrey DA, Beck JW. 1998. Isotopic evidence for a terrestrial source of organic compounds found in Martian meteorites Alan Hills 84001 and Elephant Moraine 79001. Science 279:366-370. Possible traces of life were previously reported from meteorites believed to have been ejected from Mars. This possibility was tested by analyzing both the carbon-14 and carbon-13 contents of the meteorites. Carbon-14 is produced by interaction of nitrogen and cosmic rays, and is not expected to be present in significant quantities in the Martian meteorites, since nitrogen is much less abundant on Mars than on Earth. The level of carbon-14 in the meteorites was determined to be about half that of modern terrestrial carbon. This is much greater than expected if the meteorites were from Mars, and strongly indicates that the meteorites have been contaminated by terrestrial carbon. Carbon from living organisms tends to be low in carbon-13 as compared to carbon from inorganic sources. The proportion of carbon-13 in the Martian meteorites was reduced, indicating that the meteorites have been contaminated by organic carbon from terrestrial organisms. It seems that the organic compounds found in the Martian meteorites did not originate on Mars, but were added to the meteorites after they fell to Earth. Comment. These results confirm that the Martian meteorites do not provide evidence for life on Mars.
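As a rough consistency check (my own arithmetic, not a calculation from the paper): a carbon-14 level about half the modern value corresponds to an apparent age of only about one half-life, far too young for rocks that left Mars in the distant past and consistent instead with recent terrestrial contamination.

```python
import math

HALF_LIFE_C14 = 5730  # years
fraction_of_modern = 0.5  # C-14 level reported as about half the modern value
apparent_age = HALF_LIFE_C14 * math.log2(1 / fraction_of_modern)
print(f"apparent age: {apparent_age:.0f} years")  # -> 5730 years
```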
ORIGIN OF LIFE: PROTEINS PRODUCED IN WATER? Huber C, Wachtershauser G. 1998. Peptides by activation of amino acids with CO on (Ni,Fe)S surfaces: implications for the origin of life. Science 281:670-672. Origin-of-life experiments have shown that amino acids can be produced abiotically, but no way is known for combining the amino acids into proteins. Water tends to hydrolyze the peptide bonds of proteins rather than facilitating their bonding. In this experiment, peptide bonds were formed in hot aqueous solution. Peptide bonding required the presence of carbon monoxide, nickel and/or iron sulfide, hydrogen sulfide, methanethiol, and a pH of 7-10. Amino acids tested include L-phenylalanine, L-tyrosine, and glycine. Products were racemic dipeptides. Dipeptides hydrolyzed rapidly in separate experiments using the same conditions. The authors claim these results support the hypothesis of a thermophilic origin of life. PALEONTOLOGICAL PATTERNS: DIVERSITY Adrain JM, Fortey RA, Westrop SR. 1998. Post-Cambrian trilobite diversity and evolutionary faunas. Science 280:1922-1925. Ordovician rocks are noted for two diversity trends: 1) a rapid increase in diversity as compared to Cambrian rocks; and 2) an abrupt turnover in the types of fossils at the top of the Ordovician. This paper is an attempt to better understand the details of the patterns as they apply to trilobites, which are one of the major components of Ordovician fossils. The authors were able to identify two faunal components among Ordovician trilobites. (One family did not fit in either component.) One group, called the Ibex Fauna, dominates the lowest Ordovician layers, but declines through the Ordovician, disappearing from the record at the top of the Ordovician. The other group, called the Whiterock Fauna, increases through the Ordovician and on into the Silurian rocks. The two groups differ in geographic range and in depositional environment. The declining group is found mainly in Gondwana and Baltica, and in a variety of depositional environments, while the expanding group is found mainly in Laurentian regions, and in depositional environments interpreted as platform-margins. Many of the Whiterock families first appear without obvious relationships, while many Ibex families are known also from Cambrian rocks. Jablonski D. 1998. Geographic variation in the molluscan recovery from the end-Cretaceous extinction. Science 279:1327-1330. The Cretaceous-Tertiary boundary is characterized by a major change in fossil types, commonly called a "mass extinction." A previous study of North American Gulf Coast fossil faunas showed that certain molluscan families abruptly become abundant above the boundary. This has been interpreted as a population explosion among groups that took advantage of new ecological opportunities following the "mass extinction" (so-called "bloom taxa"). The pattern of sudden expansion of ecological opportunists after a "mass extinction" has been accepted as the normal pattern of recovery after a geological catastrophe. However, the pattern is different in northern Europe, northern Africa and Pakistan-northern India, where the abundance of so-called "bloom taxa" remains relatively steady through Paleocene sediments, or even increases at the top of the Paleocene. These observations are problematic in view of the similar extent of end-Cretaceous taxonomic turnover in all four regions. DeHaan RF. 1998. Do phyletic lineages evolve from the bottom up or develop from the top down? Perspectives on Science and Christian Faith 50:260-271. The received view of evolution is that it branches from the bottom up, so that new higher taxa originate through accumulation of small changes over long ages. But this view is refuted by two observations. First, living species change at rates much greater than those inferred from the fossil record. Furthermore, the changes are minor, and variable rather than cumulative. Second, the pattern of diversity in the fossil record is inconsistent with the bottom-up hypothesis.
The major taxa appear first, followed by diversity at lower taxonomic levels. This second fact, especially, points toward a top-down development of diversity. The Cambrian Explosion involved numerous distinct phyla, the highest taxonomic category. The eleven phyla of fossil marine invertebrates can be divided into 62 classes, the next-lower category. The stratigraphic midpoint of first appearances for classes occurs in the Ordovician, above the Cambrian. (Half of all classes appear before the midpoint, and half after the midpoint.) The midpoint of the 307 orders occurs in the Devonian, well above the Ordovician. The greatest number of fossil species is in the upper layers, deposited after virtually all the higher taxa. The view of top-down evolution is reinforced by the stability of body plans, and by the top-down direction of development. Miller AI. 1998. Biotic transition in global marine diversity. Science 281:1157-1169. Three types of global diversity patterns are easily observable in the fossil record: expansion (e.g., Cambrian, post-Paleozoic); abrupt turnover (e.g., end-Permian, end-Cretaceous); and gradual transitions in dominance among higher taxa. "Mass extinctions" seem to occur abruptly, while expansions are more gradual. However, combining all data into a single global pattern may mask smaller-scale patterns. Ordovician sediments display a major expansion (the Ordovician radiation), followed by a major turnover (the end-Ordovician extinction). Major faunal transitions generally appear abrupt locally or regionally, but differences in stratigraphic position cause global patterns to appear more gradual. The fossil record is determined by local and regional processes, so that the processes operating during mass extinction events are not fundamentally different from those operating during background times. Rampino MR, Adler AC. 1998. Evidence for abrupt latest Permian mass extinction of foraminifera: Results of tests for the Signor-Lipps effect. Geology 26:415-418. The end-Permian mass extinction was the largest in the fossil record, but controversy continues over whether it was abrupt or gradual. One difficulty is that extinctions are difficult to estimate for species with an incomplete fossil record. This results in a "smearing" of apparent extinctions (last appearances) even when the real extinctions are simultaneous. This is known as the Signor-Lipps effect. Stratigraphic analysis of forams in an end-Permian section in Italy is consistent with an abrupt mass extinction rather than a gradual, stepped extinction. The mass extinction appears to coincide with a negative carbon-13 anomaly probably caused by a global ecological stress event.
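The Signor-Lipps effect is easy to reproduce in a toy simulation. The Python sketch below (my own illustration, not the authors' analysis) imposes a perfectly abrupt extinction at one horizon and shows how patchy fossil recovery alone smears the observed last appearances downward, mimicking a gradual extinction:

```python
import random

random.seed(1)
TRUE_EXTINCTION = 100   # every species actually vanishes at level 100
RECOVERY_PROB = 0.2     # chance a living species is sampled at each level

def observed_last_appearance() -> int:
    """Highest stratigraphic level at which a species happens to be found."""
    finds = [lvl for lvl in range(TRUE_EXTINCTION + 1)
             if random.random() < RECOVERY_PROB]
    return max(finds) if finds else 0

last_seen = sorted(observed_last_appearance() for _ in range(30))
print(last_seen)  # last appearances trail well below 100 despite a sharp event
```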
PALEONTOLOGICAL PATTERNS: DEPOSITIONAL Taylor PD, Allison PA. 1998. Bryozoan carbonates through time and space. Geology 26:459-462. Bryozoans are invertebrate animals, usually with a calcareous skeleton, found commonly as fossils from the Ordovician onward. Paleozoic and post-Paleozoic bryozoans are mostly classified in different taxonomic orders. Living bryozoans leave significant sedimentary remains in temperate zones, but not in the tropics. This paper reports a test of whether bryozoan-rich limestones have a similar distribution pattern in the fossil record. The study included 176 bryozoan-rich Paleozoic and Jurassic to Pleistocene stratigraphical units. (No bryozoan limestones have yet been found in Triassic or Lower Jurassic rocks.) Results show that the present extra-tropical distribution pattern goes back only to the Jurassic. Paleozoic bryozoan-rich deposits are mostly from regions thought to have been tropical, based on plate reconstructions. One explanation for the difference in patterns is that predators were less common in Paleozoic tropical habitats, permitting greater bryozoan growth than in Mesozoic tropical habitats. PALEONTOLOGICAL PATTERNS: ECOLOGY AND BEHAVIOR Labandeira CC. 1998. Plant-insect associations from the fossil record. Geotimes 43(9):18-24. Herbivorous insects produce tell-tale effects of their mode of feeding. These effects may be preserved in the fossil record, and provide evidence concerning the diversity of feeding strategies in the fossil groups being studied. Evidence of Paleozoic insect diversity is seen in the diversity of insect damage seen in fossil plants from the upper Pennsylvanian of Illinois and the lower Permian of Texas. Most groups of Paleozoic insects are not found in Mesozoic sediments, but are replaced by insects more similar to those living today. A large majority of modern insect families with a fossil record are found in sediments below the mid-Cretaceous, which is the point at which angiosperm diversity begins to expand. Lower and middle Mesozoic plant fossils show evidence of feeding strategies matching virtually any seen at present. Insect diversity does not appear to be dependent on angiosperm diversity. Lockley MG. 1998. The vertebrate track record. Nature 396:429-432. Vertebrate trackways are much more common than was thought only a few decades ago. To illustrate, the number of Jurassic dinosaur trackways in the western United States is estimated to be equal to the total number of identifiable dinosaur skeletal remains for the entire world (about 2000). Fossil footprints are abundant in terrestrial sediments from the Carboniferous to Holocene. Fossil tracks have also been used to infer gait, posture, and social behavior, to establish stratigraphic correlations, and to fill in gaps in distributions of taxa. Tracks may appear lower in the fossil record than any evidence from body fossils, as is the case for shorebirds, and perhaps for tetrapods. The study of fossil trackways has the potential to greatly enhance our understanding of the fossil record. PALEONTOLOGICAL PATTERNS: MORPHOLOGY Alroy J. 1998. Cope's Rule and the dynamics of body mass evolution in North American fossil mammals. Science 280:731-734. The author reports the results of comparing body sizes of fossil mammal species from the same genus but from different stratigraphic levels. His study covers essentially the entire North American mammal fossil record, from the Upper Cretaceous (Campanian) to the Upper Pleistocene. His results show that species found higher in the stratigraphic record are larger than those from the same genus found lower in the stratigraphic record. The difference averages 9.1%, and is greater for large than for small species. A stratigraphic trend toward increasing body size has been called "Cope's Rule" in honor of the paleontologist who first proposed the trend. Although Cope's Rule has sometimes been found not to apply, this study shows that it does generally apply to species within the same genus of mammals. PALEONTOLOGY: FOSSIL INVERTEBRATES Li C-W, Chen J-Y, Hua T-E. 1998. Precambrian sponges with cellular structures. Science 279:879-882. Fossil sponge spicules have been identified in Precambrian phosphatic sediments significantly lower than any previously known sponges. The spicules are derived from the Class Demospongiae, which is the most abundant group of living sponges.
A fossil embryo appears to be from a different Class, the calcareous sponges. The sponges are thought to have been buried catastrophically. The Demospongiae have been thought to have evolved from the glass sponges, Class Hexactinellida, but these fossils appear lower than any glass sponges. This may indicate a need for revising sponge phylogeny. Moldowan JM, Talyzina NM. 1998. Biogeochemical evidence for dinoflagellate ancestors in the Early Cambrian. Science 281:1168-1170. Dinoflagellates are abundant single-celled organisms in aquatic environments. Their known fossil record extends only down to the Middle Triassic, and their apparent absence from Paleozoic strata is puzzling. Dinoflagellates produce certain chemicals not found in other taxa. The presence of dinosterane and 4-alpha-methyl-24-ethylcholestane is considered to be indicative of dinoflagellates. Examination of Cambrian sediments in Estonia revealed the presence of these dinoflagellate-specific compounds. This indicates that dinoflagellates or their ancestors were incorporated in Cambrian sediments, despite the difficulties in identifying their cysts. It appears that many acritarchs (fossil cysts of uncertain affinity) may actually be fossil dinoflagellates or their ancestors. PALEONTOLOGY: FOSSIL PLANTS Gandolfo MA, Nixon KC, Crepet WL, Stevenson DW, Friis EM. 1998. Oldest known fossils of monocotyledons. Nature 394:532-533. Monocots have a meager fossil record, and fossil flowers are especially rare. Tiny fossil flowers representing at least 100 species have recently been identified from Turonian sediments (Upper Cretaceous) in New Jersey. Among them are the geologically oldest known monocot flowers. The fossil flowers have features that indicate a close relationship to the family Triuridaceae, a group of tropical saprophytic plants lacking chlorophyll. This discovery suggests the need for new hypotheses of monocot origins and diversification. Sun G, Dilcher DD, Zheng S, Zhou Z. 1998. In search of the first flower: a Jurassic angiosperm, Archaefructus, from Northeast China. Science 282:1692-1695. The Yixian Formation of China has yielded many well-preserved fossils, representing both freshwater and terrestrial habitats. Among these is a plant fossil with a fruit containing seeds, a defining characteristic of angiosperms. The surrounding sediments are classified as Upper Jurassic. This is the most plausible claim for a fossil angiosperm in sediments below the Cretaceous. The fossil has a unique combination of characters, leading the discoverers to propose a new subclass to contain it. Currently dominant hypotheses of angiosperm origins are inconsistent with the newly discovered fossil. Davis PG, Briggs DEG. 1998. The impact of decay and disarticulation on the preservation of fossil birds. Palaios 13:3-13. The condition of a fossil can provide important clues to the circumstances under which it was fossilized. This study identified five stages in the decomposition of bird carcasses, one of which was disarticulation of the skeleton in seven steps. Disarticulation typically began after about 4 days, and was complete by about 52 days in protected specimens. Scavengers greatly hastened the process of disarticulation of unprotected specimens. It was noted that decomposition occurs much more rapidly in the subtropical waters of Florida than in previous experiments performed in cooler latitudes. The results were compared to the preservational condition of fossil birds from some famous fossil localities.
These include the Jurassic Solnhofen Limestone in Germany (the source of Archaeopteryx), the Eocene Messel shale from Germany (124 specimens from 18 families of birds), the Eocene Green River Formation of Wyoming (42 specimens from 5 families), and the Eocene La Meseta shoreline deposit of Antarctica (1243 specimens, mostly penguins). The Solnhofen specimens show the least decomposition. The Messel and Green River specimens show a moderate amount of disarticulation, while the La Meseta specimens are preserved as isolated bones, many of them broken. The authors conclude that experiments such as this can aid in interpreting fossil deposits. Hof CHJ, Briggs DEG. 1997. Decay and mineralization of mantis shrimps (Stomatopoda: Crustacea): a key to their fossil record. Palaios 12:420-438. Mantis shrimps are active predators common on tropical and subtropical seafloors. Their fossils first appear in the upper Jurassic Solnhofen limestone, and are also known from several widely scattered localities throughout the world. However, their relatively scanty fossil record contrasts with their present abundance. Experiments were conducted to study the processes of decomposition of stomatopod bodies as a means of interpreting the conditions under which fossil stomatopods were preserved. Three stages of decomposition were identified: swollen but complete; ruptured (by 1 week); and partially decomposed to fragmentary (by 4 weeks). Mineralization occurred through precipitation of calcium carbonate and replacement of soft tissue by calcium phosphate. All known Mesozoic and Tertiary fossil stomatopods were assigned to one of the three preservation states. About 40% are complete (probably buried alive), 40% ruptured, and 20% fragmentary. Stomatopods exhibit a high potential for fossilization, and their poor record must be due to causes other than decay. Comment. The presence of fossils is often an indicator of catastrophic conditions, but more quantitative data are often needed. The quantitative studies reported in these papers might provide a way to test the hypothesis that catastrophic depositional conditions dominate the fossil record. PALEONTOLOGY: VERTEBRATE FOSSIL DISCOVERIES Loope DB, Dingus L, Swisher CC, Minjin C. 1998. Life and death in a Cretaceous dune field, Nemegt basin, Mongolia. Geology 26:27-30. The Gobi Desert of Mongolia continues to be an important source of dinosaur fossils. The Upper Cretaceous Ukhaa Tolgod fossil locality is especially rich, with more than 100 dinosaur skeletons and more than 500 mammalian and reptilian skulls recovered. The sediments are largely sandstone, with some siltstones and conglomerates. They have been interpreted as wind-blown, with animals overcome and buried by sand storms, but this hypothesis has problems. Present-day windstorms are not known to be able to overcome and bury live animals, and it seems highly unlikely that live dinosaurs would permit themselves to be buried by blowing sand. Closer analysis shows three different sandstone facies, two of which are eolian (wind-blown) and lack fossils. If sandstorms were responsible, more fossils should be found in the eolian sands. The fossiliferous layer lacks sedimentary structure, and might be interpreted as a gradual accumulation of wind-blown sand, except for the presence of articulated skeletons. The articulated skeletons indicate rapid deposition, probably by landslides from surrounding hills. Thewissen JGM, Madar SI, Hussain ST. 1998. Whale ankles and evolutionary relationships. Nature 395:452. Summary.
Whales are an order of mammals with obvious similarities to each other, but with major differences from any other mammals. Their relationships to other mammals are controversial. Molecular sequences have been used to argue that the hippopotamus is the closest living relative of the whales. Hippos are artiodactyls, which are distinguished by a particular morphology of their ankle bones. Fossil evidence has been used to argue that whales are most closely related to a group of extinct terrestrial mammals known as mesonychians. A fossil ankle bone from a "walking whale" found in Pakistan was compared with ankle bones from artiodactyls and mesonychians. The "whale" ankle bone has features that seem to exclude it from the artiodactyls, and also argue against a close relationship between whales and mesonychians. Extensive convergence or reversals must have occurred in these groups.

Comment. This report suggests that whales, mesonychians and artiodactyls are not directly related, based on their ankle bones. Whales and hippos do share some distinct molecular sequence similarities, but this might be due to convergence (designed similarities) or to horizontal transfer. Fossils with whale-like traits and short limbs might be extinct types of animals rather than evolutionary intermediates. Whale-hippo relationships are enthusiastically defended by some evolutionists, but results such as this remind us that such enthusiasm is sometimes not strictly a matter of data.

SCIENCE AND RELIGION: IMPLICATIONS OF SCIENTIFIC THEORIES

Lubenow ML. 1998. Pre-Adamites, sin, death and the human fossils. Creation Ex Nihilo Technical Journal 12:222-232.

The idea that some humans may have existed before Adam is advocated by some Christians today. The concept of pre-Adamites is used to explain the existence of fossil humans with dates much older than 10,000 years. Almost all who hold a pre-Adamite view maintain that Adam was a Neolithic human who lived about 10,000 years ago. This raises a major theological question: how can death be the result of Adam's sin if pre-Adamites were dying before Adam existed? This proposal removes the basis for Christ's physical death in our place, thus endangering the doctrine of salvation, which is the heart of Christianity. The human fossil record shows evidence of premature death, periodic starvation, cannibalism, violence, and disease in fossils dated at over 10,000 years. This is inconsistent with the biblical description of a creation that was "very good," and with Romans 5:12-21, which states that death came to all because of Adam's sin. The solution to these theological problems is to interpret all human fossils as having lived after the fall of Adam.

SCIENCE AND RELIGION: WHICH IS THE ULTIMATE AUTHORITY?

Day AJ. 1998. Adam, anthropology and the Genesis record. Science and Christian Belief 10:115-143.

According to this article, the perceived conflict between science and Scripture is not intrinsic to the two disciplines, but is due principally to emerging scientific theories conflicting with static interpretations of Scripture. We need to be open to reinterpreting Scripture if that seems necessary. "Taking the Bible seriously" seems to mean that Scripture teaches some truths, but not necessarily about the material world. "Taking science seriously" seems to mean that Scriptural interpretation should be determined by science. Two key issues for Christians are the method by which the "image of God" was installed in humans, and how the Fall occurred.
The text permits the interpretation that both processes could have occurred gradually, and science confirms the need for this interpretation. Adam need not have been an individual, but a term to symbolize humanity. The Fall need not have been an individual act, but a developing separation from God. The "soul" need not be a separate entity, but a reference to the whole person. Insistence on God's intervention in the special creation of humans requires a series of miracles, making science irrelevant.
My Report about Whales: by Pike Street Fish Fry

1. How do whales hold their breath?
The sperm whale can hold its breath for 20 minutes to an hour or more. Many other whales can hold their breath for 10 minutes to half an hour. All marine mammals have special physiological adaptations during a dive. These adaptations enable a whale to conserve oxygen while underwater.

2. How do whales sleep?
Whales sleep in the water, usually at the surface. Studies suggest that, unlike in land mammals, deep sleep in whales probably happens in only one hemisphere of the brain at a time. Killer whales have been observed resting both day and night, for short periods of time or for as long as eight hours straight. While resting, killer whales may swim slowly or make a series of 3 to 7 short dives of less than a minute before making a long dive of up to three minutes.

3. How much krill does a whale eat each day?
Most whales do not eat krill, but the baleen whales that do can eat more than a thousand pounds of krill each day. An adult blue whale, the largest animal in the world, can eat four tons (8,000 pounds) of krill a day.

4. Can whales swim far?
Yes. Some whales, like the gray whales, swim as much as 12,000 miles each year on their migration from feeding grounds in the Arctic all the way down to breeding and calving grounds in Baja California, Mexico.

5. Do whales ever fight with each other?
Dolphins, including killer whales, have many social behaviors, and some of these, such as raking (scratching the skin of another dolphin with the teeth) and head-butting, are used to establish and maintain dominance in dolphin groups. Some killer whales hunt and feed on other marine mammals, including other whales such as Dall's porpoises and even gray whales and blue whales.

6. Are whales related to dolphins?
Yes, dolphins are a kind of whale, so all dolphins are whales, but not all whales are dolphins. Other kinds of whales (that are not dolphins) include baleen whales (like gray whales and humpbacks), sperm whales, porpoises, river dolphins, beluga whales and narwhals, and beaked whales. For more information on whales, view the Baleen Whales, Beluga Whale, Bottlenose Dolphin, Killer Whale, and Toothed Whales infobooks.

7. How many teeth do whales have?
It depends on the kind of whale. Killer whales have 40 to 56 teeth. Most male beaked whales have only a single pair of teeth, and most female beaked whales have no teeth. A sperm whale has teeth only in its lower jaw, which fit into grooves in its upper jaw. Baleen whales, like gray, humpback, or blue whales, have no teeth; instead they have rows of long baleen plates in their mouths that they use to strain food out of the water.

8. How many different kinds of whales are there?
Scientists are still discovering new species of whales. There are at least 85 species of whales, including at least 73 species of toothed whales and 12 to 14 species of baleen whales.

9. How long is a blue whale?
The blue whale is the largest animal on Earth (growing larger than even the biggest dinosaurs). A 100-foot blue whale is as long as three school buses. Blue whales grow to about 70 to 80 feet in the Northern Hemisphere and 90 to 100 feet in the Southern Hemisphere. Female blue whales grow larger than males. The longest blue whale ever recorded was 110 feet.
Regional or local conditions can change, even reverse, expectations based solely on worldwide studies. Ozone is a tricky gas. In the stratosphere, it shields us from the sun's ultraviolet radiation. But it's an obnoxious pollutant at ground level. Now research tells us that it also impairs the ability of plants to absorb the global-warming gas carbon dioxide (CO2). That dampens the hopeful assumption that plants will take enough CO2 out of the air to partly offset the emission of that gas from burning fossil fuels. It also highlights a significant weakness in scientists' ability to assess climate change driven by global warming. Various types of air pollution can change the way planetwide average warming affects local or regional climates. Thus local circumstances can reduce, or even reverse, expectations based solely on global studies. The new ozone study, led by Stephen Sitch at England's Hadley Centre for Climate Change, illustrates this. Ozone pollution reduces plant CO2 uptake by as much as one-third, depending on local ozone concentrations, which vary widely around the world. To take proper account of the ozone effect in climate simulations, scientists need more detailed knowledge of local ozone concentrations throughout the globe and of how these change over time. The Hadley study, published online by Nature last month, is a first step toward gaining such knowledge. New research on sooty, so-called "brown cloud" pollution makes a similar point. Veerabhadran Ramanathan at the Scripps Institution of Oceanography in La Jolla, Calif., leads an ongoing study of these clouds over southern Asia and the Indian Ocean. His progress report in Nature earlier this month showed that the clouds trapped enough heat to double global warming over the Himalayan glacier region. This is speeding up the melting of those glaciers – Earth's third-largest frozen freshwater reservoir.
Using nature’s clues to manage landscape pests

This year more than most, use of local growing degree day accumulations can help pinpoint management stages in the life cycle of insects. For most of us in Michigan, spring does not come quickly enough. We are teased by temperatures in early spring that rise into the 60s, only to plummet back into the teens at night. Slowly, winter’s icy grip is replaced by cool spring days. In nature, these changes in temperature bring plants out of their slumber, or winter dormancy, and as the plants become active, so do the insects that feed on them. This connection between temperature and insect activity provides insight into when particular insect life stages will occur. Today, scientists predict insect activity with a system called Growing Degree Days (GDD), which measures the daily accumulation of heat above a base temperature of 50 degrees. This system can be used as a tool to determine the hatch of pine needle scale, the emergence of beetles such as emerald ash borers, or even to time when black vine weevils will be feeding on rhododendron leaves. Variations in temperatures across the state cause differences in when insect activity will occur. For the GDD system to be meaningful to landscape and nursery professionals, it requires the availability of local degree day information. As more weather station sites have been developed across Michigan, the accuracy of timing insect activity has improved. Michigan State University’s Enviro-weather website provides current GDD information for monitoring and managing specific landscape and nursery pests. Learn more from Enviro-weather coordinator Beth Bishop’s article, Looking at growing degree days: Just how far behind normal are we? Whether you are managing pine shoot borer on a Christmas tree farm, gall-forming insects on spruce in a nursery, or magnolia scale in the home landscape, use of local GDD information through Enviro-weather can help pinpoint critical management stages in the life cycle of insects. This type of accuracy reduces the need for multiple pesticide sprays. A check of the closest weather station to you at the Enviro-weather site gives current degree days for your area, helping to time management options. This year more than most, it is critical to monitor degree days instead of applying treatments by a certain date, since current degree day data show temperatures are running one to two weeks behind for much of Michigan. Basing any pesticide treatments on a calendar date would be poorly timed and could prove costly as spring temperatures fall short of the average for Michigan.
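To make the arithmetic concrete, here is a minimal sketch of the simple averaging method commonly used to accumulate growing degree days, assuming the 50°F base temperature mentioned above. The daily highs and lows are made up for illustration, and Enviro-weather's own calculation may differ in details:

# Accumulate growing degree days (GDD) with the simple averaging method.
# Assumes a 50°F base temperature; the daily values below are hypothetical.

BASE_TEMP_F = 50.0

def daily_gdd(high_f, low_f, base=BASE_TEMP_F):
    """Degree days for one day: mean temperature above the base, never negative."""
    mean_temp = (high_f + low_f) / 2.0
    return max(mean_temp - base, 0.0)

def accumulate_gdd(daily_highs_lows, base=BASE_TEMP_F):
    """Running total of degree days over a season."""
    total = 0.0
    running = []
    for high, low in daily_highs_lows:
        total += daily_gdd(high, low, base)
        running.append(total)
    return running

# A hypothetical week of spring temperatures (high, low) in °F.
week = [(62, 41), (58, 44), (65, 50), (70, 52), (55, 38), (60, 45), (68, 49)]
print(accumulate_gdd(week))
# Compare the running total against published GDD thresholds for a given pest.

In practice you would replace the made-up temperatures with readings from the nearest weather station and check the running total against the published threshold for the pest you are managing.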
Human activity is directly linked to the hot and dry winter in California, according to a paper published in the Proceedings of the National Academy of Sciences. Scientists affiliated with two departments at Stanford University (the Department of Environmental Earth System Science and the Stanford Woods Institute for the Environment) blamed the historic drought squarely on anthropogenic climate change. The study used historical statewide data for observed temperature, precipitation and drought in California. Researchers came to a stunning conclusion about the relationship between climate change and the California drought using a process called bootstrapping, a technique in which statisticians repeatedly resample the same data set to estimate the uncertainty of specific effects. The bootstrapping in this study was used to compare climate data with measures of populations from different time periods, allowing analysis of how changes in population are associated with different climate conditions. The study found that the warming across California happens in climate models that include both natural and human factors, but not in simulations that include only natural factors, Ars Technica notes. The difference between the two scenarios has a very high level of statistical significance. In their conclusion, the scientists say their results strongly suggest that human-caused warming has increased the probability of co-occurring temperature and precipitation conditions that have historically led to California's droughts. They add that continued warming is likely to lead to situations where every future dry period, whether seasonal, annual or multiannual, will come along with historically warm conditions.
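For readers unfamiliar with the technique, here is a minimal illustration of bootstrapping using a made-up sample of temperature anomalies. The Stanford team's actual analysis was far more involved; this sketch only shows the core idea of resampling one data set many times to gauge uncertainty:

# Toy bootstrap: estimate a sample mean and its uncertainty by
# resampling the same data with replacement many times.
import random

random.seed(42)

# Hypothetical winter temperature anomalies (°C) -- illustrative only.
sample = [0.8, 1.2, 0.5, 1.9, 1.1, 0.3, 1.4, 0.9, 1.6, 1.0]

def bootstrap_means(data, n_resamples=10_000):
    means = []
    for _ in range(n_resamples):
        # Draw a resample of the same size, with replacement.
        resample = [random.choice(data) for _ in data]
        means.append(sum(resample) / len(resample))
    return means

means = sorted(bootstrap_means(sample))
lo = means[int(0.025 * len(means))]
hi = means[int(0.975 * len(means))]
print(f"mean ~ {sum(sample) / len(sample):.2f}, 95% interval ~ ({lo:.2f}, {hi:.2f})")

The spread of the resampled means indicates how confident one can be in an estimated effect, which is how such studies attach statistical significance to comparisons like the natural-only versus natural-plus-human simulations.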
A. Subject: Ancient Civilizations B. Grade & Ability level: 6th Grade; all levels of students C. Unit Title: Mesopotamia D. Time Frame: 10 - 12 days E. I used several textbooks to create this unit. For assignments with page numbers, substitute with appropriate material. TSWBAT (The student will be able to ...) Overview and Rationale A. Scope and major concepts 1. This unit covers the history of Ancient Mesopotamia. 2. This unit will include lessons on: (a) The key role of geography in the development of civilization (b) Mesopotamian peoples, work, food, and shelter (c) The rule of law and the development of government (d) The development of written language (e) The concepts of religion, myths, legends, and epics (f) Important inventions of the Mesopotamian people 3. This unit will concentrate on geographic and language arts skills. 4. The unit will focus on student personal discovery and challenge students to express their own ideas and beliefs concerning world events. B. Rationale: This unit is designed for all students. The unit will broaden their horizons by showing how ancient peoples are similar to peoples today. It will also help prepare students for Maryland State exams by introducing concepts used in Maryland state and U.S. government. It is designed to increase students' map skills by giving them the opportunity to see how geography affects people and history. Objectives (C = Cognitive, A = Affective, P = Psychomotor) The student will be able to (TSWBAT) use map skills to locate Mesopotamia, the Tigris and Euphrates rivers, the Zagros mountains, the Syrian desert, and the Persian Gulf. (C) TSWBAT discuss and support either side of an argument in a debate, given an appropriate subject. (C, A) TSWBAT demonstrate writing skills. (C, P) TSWBAT demonstrate research skills. (C, P) TSWBAT demonstrate presentation skills. (C, P) TSWBAT describe items using proper terminology. TSWBAT compare and contrast differing views about a subject. TSWBAT demonstrate, understand, and use maps, charts, and graphs. (C, P) TSWBAT discuss the interdependence of peoples. (C) TSWBAT give personal judgments and express values concerning world events. (C, A) TSWBAT broaden their personal horizons through role playing and panel work. (A, P) A. Ways to evaluate: The students' participation in classroom discussions, debates, completion of assigned homework, activities, and an end-of-unit test will demonstrate their understanding of the lessons. The students are given a daily drill question to answer. The students will be graded mostly on effort and attempt to answer. A directed writing activity will be assigned. The students will be graded on writing skills and on the appropriateness and content of their work. A quiz on the chapter will be given. The quiz will be T/F, multiple choice, and essay. Sample unit test questions. Subject Matter/Skills Outline A. Following is a list of essential thinking skills and related concepts that will be related to each day's activities. Each skill will be numbered, and this number will be listed at the end of each day's subject matter outline. This listing of skills is taken from the Dimensions of Learning handout given by the Anne Arundel County Public Schools, Office of Staff Development, Instructional Leadership 1. Positive Attitudes and Perceptions: A. Classroom Climate B. Classroom Tasks II. Ability/Resources to perform tasks 2. Acquiring and Integrating A. Declarative Knowledge I. Construct Meaning B. Procedural Knowledge I. Construct Models 3. Extending and Refining I.
Directed Teaching of Thinking Skills VI. Analyzing Errors VII. Constructing Support IX. Analyzing Perspectives IV. Meaningful Use of Knowledge I. Directed Teaching of Dimension 4 Mental Processes II. Decision Making IV. Experimental Inquiry V. Problem Solving 5. Productive Habits of the Mind: II. Critical Thinking III. Creative Thinking Daily Activities/Lessons: For each lesson and activity, the objectives from the TSWBAT major list of objectives will be in quotes; the Dimensions of Learning outcomes will be in parentheses. DAY ONE: GEOGRAPHY First day/ Introduction, knowledge assessment, geography. Student Outcome: The student will be able to: Clean out notebook. Use and understand an atlas. Drill Question: What is an illustrated dictionary? (a) Students will be introduced to the term Mesopotamia (Greek for “land between the rivers”) and asked if they know of any place that is between rivers (short class discussion). "9" (1A.I, II, III) (b) A pretest on geography skills and vocabulary will be given. (At this point, if students show a deficiency in map skills, a short unit on map skills may be introduced.) "9" (1B.II) (c) Students will be given a blank map of the Middle East and asked to locate various places on it using either a textbook map or an atlas (if available); place names will include Mesopotamia, the Tigris and Euphrates rivers, the Zagros mountains, the Syrian desert, the Persian Gulf, and Iraq. (Students may work either singly or in pairs.) "1, 9" (1B.II, III; 2A.I, II) (d) Selected students (those who you have seen are working correctly) are then asked to come up to the large map and show where these areas are located. "1, 5, 9" (1A.I, II; 2B.II) (e) If time permits, discuss why being surrounded by mountains and desert was an asset in developing civilization. "20" (3.V) (f) Closure: review the daily objective and ensure all students have a basic understanding of the location of Mesopotamia. "9" (1A.III; 2A.III) DAY TWO: AGRICULTURE Second day/ drill, motivation, development of agriculture. Student Outcome: Mesopotamia #2 The student will be able to: Use and understand an atlas. Evaluate the discovery of agriculture and its effect on civilization. (a) Review the location of Mesopotamia; continue (or start) the discussion of how the geography allowed civilization to develop. "1, 9, 14" (1A.III; 2A.III) (b) Ask students what they had for breakfast (list on an overhead). This may be done in small groups. Then ask students to figure out where each item came from (i.e., toast from bread, bread from grain; eggs, butter, yeast). Then have students list where each of these items is found (i.e., wheat farms, dairy farms). "4" (2A.II; 3.III) (c) Classroom discussion: what would they have for breakfast if there were no farms? Explain the vocabulary terms “hunter-gatherer,” “nomadic/nomad,” and “agriculture.” Tie in to Native Americans before the arrival of Europeans, and to other societies in Africa and South America that still lead a hunter-gatherer existence. "16" (3.VIII; 2A.I, II, III) (d) Have students list the advantages and disadvantages of the hunter-gatherer lifestyle. Have students list the advantages and disadvantages of agriculture. "8" (3.VIII; 5.II) (e) From the textbook/readings, have students describe the climate of Mesopotamia; list on blackboard/transparency (terms should include: dry, dusty, hot, spring rains, flooding). Have students read how the people of Mesopotamia overcame these hardships (the development of irrigation). "1, 4, 9" (2A.I, II, III) (f) Closure/review: review the daily objective.
Discuss with students agriculture and irrigation. "10" (3.II) DAY THREE: CAUSE & EFFECT Third day/ Cause and effect. Student Outcome: Mesopotamia #3 The student will be able to: Evaluate the discovery of agriculture and its effect on civilization. (a) Drill: students will complete the daily drill. (1A.III) (b) Motivation: briefly discuss cause and effect in students' daily lives. "20" (1A.I; 1B.I, II, III) (c) Using cause and effect worksheets, have students develop a three-step cause and effect chain starting from: people developed agriculture. "10" (3.II; 4.II, V). Example (cause → effect):

People developed agriculture → A steady supply of food was available
A steady supply of food was available → Development of permanent housing
Development of permanent housing → Beginnings of government

This should be taken directly from their readings and could include domestication of animals, construction of irrigation ditches, development of religion, and many others. Have students pair up and compare their chains. (This work may be collected and checked.) (d) Directed reading with questions from the text. "3" (2A.I, II, III; 3.I, II, III, IX) (e) Review/closure: discuss with students the start of cities and the development of agriculture. "10" DAY FOUR: RELIGION & EPICS Fourth day/ Cities of Mesopotamia, religion and epics. Student Outcome: Mesopotamia #4 The student will be able to: Evaluate the discovery of agriculture and its effect on civilization. Question: Nomadic people, who live by eating whatever they can find, are called what? (a) Daily drill. (b) Motivation: Show students pictures/overheads of pyramids, ziggurats, and Mayan temples. Ask why they think ancient peoples built these huge structures. "20" (3.II, VIII) (c) Have students read aloud the text section on Sumerian religion. Discuss with students similarities between Sumerian religion and activities in students' daily lives. "4, 16" (3.II; 4.III) (d) Define “epic,” “myth,” and “legend.” Introduce the Epic of Gilgamesh. Have students read sections aloud. Compare to comic book heroes. Show how Sumerians used these tales to entertain. "6, 16" (2A.I, II, III; 3.II, III) (e) Closure/review: Review religion, epics, and the makeup and construction of cities. "10" DAY FIVE: FIRST WEEK REVIEW Fifth day/ review. Complete any unfinished tasks from the previous days' lessons. The four lessons above should take five days to complete. If there is extra time, use it for vocabulary games or map skills. I use a lesson on paraphrasing here. For a worksheet on paraphrasing, see this site: http://owl.english.purdue.edu/Files/31.html Student Outcome: Mesopotamia #5 The student will be able to: Use organization skills to clean out and set up their notebooks. Use sequencing skills to set up a cause-effect graphic organizer on the discovery of agriculture and its effect on civilization. DAY SIX: TOOLS Sixth day/ Tools and tool making. Student Outcome: Mesopotamia #6 The student will be able to: Use the writing skill of paraphrasing to help understand the textbook. Use reading strategy (reading for a purpose) skills to answer questions about a filmstrip. Drill Question: (Today, do not write the question, just the answer, in complete sentence form.) Where, in relative terms (i.e., north, southwest, etc.), is the Persian Gulf located in relation to Mesopotamia? (Use the maps in your textbook or assignment book.) (a) Daily drill. (b) Motivation: Ask students if they have ever used a tool, what type, and what they did with it. Then ask how they could have done the job without that tool.
"16" (3.II, Vii; 5.III) (c) From their reading have students make a list of tools developed/invented by the Sumerians. Explain the Bronze age to the students and describe Bronze to them. "4, 10" (2A.II) (d) Have students select from the list of tools mentioned and draw one. Then have them describe how that tool was used underneath their drawing. collect this work. "3,4,6" (3.I, III, VIII) (e) Discuss with students important inventions and tools that they use (or are used by their parents/guardians) daily that were invented by the Sumerians. "10" (3.II) (f) Review/Closure: Discuss with students some of the tools invented by the people of Mesopotamia."10" (2A.II, III) DAY SEVEN: CUNEIFORM 7. Seventh day/ Cuneiform, pictographs, and writing Cuneiform Lesson Plan Student Outcome: Mesopotamia #7 The Student will be able to: Use team skills to prepare a sentence written in cuneiform Question: Name at least one of the empires that controlled Mesopotamia (a) Daily Drill (1A.I, II, III) (b) Motivation - Ask students why they think writing is important. "20" (5.III) (c) Make (or buy) Clay tablets with Pictograph or Cuneiform writing on them. Have students move into small groups. Give each group a clay tablet to work from. Provide resources that will allow students to translate a portion of the tablet. As works proceeds, provide students with additional translation material until they have enough to translate about 1/2 the tablet. "4, 16" (2A.II; 3.III, IV) (d) Have each group orally provide their translation of their tablet. Inform students that they have been doing an archeologists job. That is to translate an unknown language with only partial meanings known. They need to guess at actual meanings for some items. "4,5,16" (3.VII; 5.II, III) (e) Provide each group with a written handout with full cuneiform to English translations ( See reading the past cuneiform by C.B.F. Walker for translations) with an exercise that allows them to write and draw Cuneiform and English translations. "3,4" (2A.I, II, III) (f) Collect written work. Discuss with students what a written language is. "4" (3.III, IX) (g) If time permits, give each group a small piece of clay, and have them make their own tablets. "4,5,16" (2B.I, II, III) (h) Review/Closure: Review with students that Cuneiform is the first written language and the importance of a written language in their daily lives. "10" (2A.I, II, III) (a) Daily drill. (1A. I, II, III) (b) Motivation - Ask students to describe a typical/regular day of theirs. "5" (3.III, IV) (c) Activity - Show slides/overheads about Sumerian housing, Food, education, shopping, religious rites, and other Sumerian daily activities. Discuss each daily activity with students. "10" (2A.I, II, III; 3.II, III) (d) Have students write a couple of sentences describing what they think the life of a Sumerian child of 11 or 12 would be like. "3, 14, 16" (3.VII; 5.III) (e) Have students share their thoughts with the rest of the class. Have class discuss these activities and compare to their own typical day. "5" (3.III, IV) (f) Closure - Compare a typical students day to the typical day of a Sumerian child. "10" (2A.I, II, III) DAY TEN: GOVERNMENT Student Outcome: The Student will be able to: Compare the governments of Mesopotamia to our own. (a) Daily drill. (1A.I, II, III) (b) Motivation - Ask students if they think they will (or have) voted in School elections, or if any of them have or will run for student government. 
"20" (1A.I, II) (c) Ask students how they would punish people who broke the law (be specific i.e. stole, hit their parents, hurt someone else) Write down answers on overhead. "20" (3.IX; 4.II; 5.II) (d) Bring out copies of Hammurabi’s code. Have students read aloud. "4" (2A.I, II, III) (e) Compare students answers about punishment under the law with Hammurabi’s code "7" (3.II). (f) Have students write “Which of these codes do you find more fair. Why?” "2,3,7" (5.II) (g) Closure Discuss with students the idea of a written code of law. "7, 14" (2A.I, II, III) DAY ELEVEN: GROWTH OF EMPIRE Student Outcome: Mesopotamia #11 The Student will be able to: Use the technique of paraphrasing as a study and writing tool. (a) Daily Drill. (1A.I, II, III) (b) Motivation - Ask students if they have seen the Star Wars trilogy. discuss the idea of Empire with them. "16" (3.II, V) (c) Use maps to show the spread of empires. Arcadian, Babylonian, Hittite, Assyrian, Persian. "4, 5" (2B.I, II, III; 3.III) (d) Have Students construct a time line to show the various empires. "4, 5" (2B.I, II, III; 3.III) (f) Closure/review Review the growth of empires and how they supplanted each other. Advise students of upcoming unit test. (2A.I, II, III) DAY TWELVE: REVIEW OF ACHIEVEMENTS (a) Daily Drill (b) Motivation - Ask students what they would do without, a car, written language, a government based on laws. (5.III) (c) Review with student in Jeopardy style game, the important achievements of the Civilizations of Mesopotamia. (2.A.I,II,III) (d) Closure - Remind students of upcoming test DAY THIRTEEN: UNIT TEST REVIEW AND TEST (a) Review for test. (2A.I, II, III) (c) Have activities available for students who finish early, word search, crossword puzzles, etc. (1A.I, II, III) UNIT TEST - MESOPOTAMIA Multiple Choice (2 points each) Circle the answer that best completes the sentence. 1. The Sumerians wrote on a. paper b. clay tablets c. stone d. wood e. papyrus 2. The most important people in Sumer were a. slaves b. scribes c. farmers d. priests 3. To sign their names, the Sumerians used a a. cylinder seal b. pen c. signet rings d. stamps and ink pads e. thumbprint 4. One of the surviving Sumerian legends concerns a. Hercules b. Enlil c. Hammurabi d. Gilgamesh e. Darius 5. Prior to the city states of Mesopotamia, people were a. urban dwellers b. non-existent c. hunter-gatherers d. pastoral True or False. (2 points each) Circle either true or false. 6. Sumerian writing is called hieroglyphics. True False 7. Sumerians signed their names with a cylinder seal. True False 8. The Sumerians worshipped many gods. True False 9. Sumerian temples were called Ziggurats. True False 10. In Sumer, a priest was a very important person. True False Essay questions: (10 points each) Answer on the blank paper attached. 1. Describe the Sumerian invention that you think is most important and then give your reasons why using at least two examples of how that invention changed peoples lives. 2. Compare the Code of Hammurabi with the laws of the United States today. Answer the following questions in paragraph/sentence form. 1. Who is Hammurabi? 2. What were some of his laws. 3. How were his laws similar and different from the laws we have today? 4. How might you have felt living back in the time of Hammurabi? Homework Assignments: Paraphrase the following statements. Supply the paraphrased statement on your own paper. Use complete sentences. Example: (Statement) Agriculture was of great importance to the Sumerians. 
Through the use of irrigation, they were able to grow a surplus of crops. (Paraphrase) The Sumerians used irrigation to grow enough food for everyone. They felt this was very important. 1. The surplus of food allowed the Sumerians to settle in one place and build permanent structures. These permanent buildings grouped together and slowly developed into towns and cities. 2. Having a surplus of food allowed some people to specialize. Everyone did not have to farm. Some people became metal workers, some became builders, some became brick makers, and a priest caste developed. The priests were in charge of the irrigation projects and ensured that all farmers were provided with the water they needed to grow crops. 3. As the cities grew, and the importance of the priests grew, temples, called ziggurats, were built to honor the gods. Everyone brought gifts to the temples for the gods, but only the high priest was allowed to speak to the gods. 4. To keep track of the gifts that had been given to the gods by each individual, the priests slowly developed a system of writing called pictographs. Pictographs evolved over the years into stylized symbols, where each symbol represented a sound instead of representing a word. These markings are called cuneiform. Cuneiform is the first written language that we have discovered so far. 5. In addition to inventing the first written language, the Mesopotamians invented many other things we use today. These items included the wheel and wheeled platforms (carts and chariots), the sailboat, the plow and plow-seeder, irrigation, the hoe, many other tools, and finally a written set of laws. 6. Hammurabi's code was written down so that everyone would know the laws. Each law had a set punishment which was applied equally to everyone throughout the empire. While harsh by our standards, these laws and punishments were the cornerstone of the idea of rule by law versus rule by decree, and the idea of rule by law is a cornerstone of our own government. Lesson Plan: CUNEIFORM TRANSLATORS NEEDED: APPLY WITHIN Part of Unit: Mesopotamia, 6th-grade Social Studies Don Donn/Corkran Middle School; Maryland USA Make or purchase clay tablets with pictograph writing on them. Divide your identifications of pictographs into 4 or 5 different sources (ensuring that there are enough sources for each group to have one of each). Define/give translations through your sources for about half the pictographs. Run copies of a cuneiform activity worksheet, one per student. A) Back of worksheet: Draw 5-6 pictographs and assign each a one-word definition. Example: * = star. Do the same with the letters of the alphabet A-Z. Assign each a "cuneiform" value. Example: A = a triangle. B = two sideways triangles. C = 2 sideways, 2 upright triangles. D = //. These do not need to be historically correct, but should use consistent shapes, i.e., triangles in various arrangements. If you have a source, great. If not, simply make them up. B) Front of worksheet: I. Name these pictographs (pick 4 from your list). II. What does this cuneiform say? (Using the "letters" you made up, create 3-4 words in cuneiform, such as HELLO, SUMER, MESOPOTAMIA.) III. Write your name in cuneiform. Introduction/Motivation: Daily Drill: 5 minutes: Start the day with your daily drill. Introduce the students to the word cuneiform. Inform them that this was the first written language. Activity: 10 minutes. Reading from the text about cuneiform. If pictures are provided in the text, great. If not, find a source and use the overhead.
The students should see examples of actual cuneiform writing. Activity: 15 minutes. Divide your students into small groups of 4-6 students per group. Assign, or have them select, a Moderator and a Recorder (and any other jobs your groups routinely select; ours also select a Reporter). Have the Recorder list the members of the group on a separate sheet of paper and title this paper "Translations". Give each group a clay tablet and their first source. (This activity works best if each group is given a different source at first.) Inform groups that their job for the day is to translate the clay tablet. After about 5 minutes, give the groups the second source. After about 3 minutes, give the groups their third source. Wait about 2-3 minutes, and give them their final source. End this part of the activity after about 2 more minutes. Activity: 10 minutes. Ask each group to report on their translations. Now ask them to read the tablet. If you get lucky (I usually do), you will find at least one group in each class has tried to make up enough words and/or letters to fill in the blanks on their own. Praise that group more vocally than the others. Now inform students that they were doing the same job as an archeologist: from bits and pieces, archeologists piece together languages. Class discussion about the activity. Activity: 10 minutes. Hand out the activity worksheet on translating cuneiform. Inform students that they are now writing in an entirely new (to them) language. Using the "translation" from the back of the worksheet, have students translate the cuneiform writing on the front of their worksheet. Be sure to mention that this is "your" cuneiform writing, and not actual cuneiform, which is much more complicated. Discuss this activity. Homework: Students will write a paragraph describing the advantages of having a written language. Students will use at least two sources. This assignment worked so well that, after they left class, my kids wrote notes to each other, and to some of their teachers, in cuneiform. Some have decided to do an extra credit project - making clay cuneiform tablets. It's an easy lesson to do, it gets the point across, and the kids really like it. We hope it works as well for you!
by Philip D. Mannion*1

Today, most living species are found in the tropics, the region of the Earth that surrounds the Equator. Species numbers, a measure of biodiversity, decline towards both the North and South poles (Fig. 1). This is known as the latitudinal biodiversity gradient (LBG), and it is the dominant ecological pattern on Earth today. Although there are exceptions to the rule, including high-latitude peaks in the diversity of many marine or coastal vertebrates (including seals and albatrosses), the LBG describes the distribution of species diversity for the vast majority of animals and plants, both on land and in the sea, and in the Northern and Southern hemispheres. Understanding the causes and evolution of the LBG helps researchers to explain present-day geographical variation in biodiversity and to model the responses of species to climate change, for example by making forecasts of future dispersals and extinctions.

Drivers of the latitudinal biodiversity gradient: Net rates of diversification (calculated as the rates at which species originate minus the rates at which they go extinct) are higher in the tropics than elsewhere, but it is unclear whether this is because origination rates are higher in the tropics (the tropics are a ‘cradle’), extinction rates are lower (the tropics are a ‘museum’) or both. Movement of species out of or into the tropics probably complicates this picture (Fig. 2). Although it was first recognized in the early part of the nineteenth century by the German polymath Alexander von Humboldt (Fig. 3), the underlying causes of the LBG are still not fully understood, and more than 100 hypotheses have been proposed to explain it. Most of these are circular, interlinked or too specific to one group of organisms and/or region of the world to explain the gradient, leaving three broad (although not necessarily mutually exclusive) themes that could explain it: historical hypotheses, geographical hypotheses and climatic hypotheses.

The historical explanation for the LBG — the ‘time and area’ hypothesis — originates from the work of the British scientist Alfred Russel Wallace (Fig. 4) in the mid-nineteenth century. It suggests that the tropics have been less perturbed than regions outside the tropics (the extratropics) by past climate events such as the Pleistocene Ice Ages (between 1.8 million and 11,000 years ago), and have been able to accumulate species over a longer time. Related to this is the ‘tropical conservatism hypothesis’, in which factors intrinsic to the organism, such as the inability of tropical species to tolerate cooler temperatures, keep them from dispersing out of the tropics. However, historical hypotheses are problematic if the LBG has existed throughout deep time (see below), and the proposed mechanisms by which they might affect distributions of species have been heavily criticized.

The geographical explanation, in turn, suggests that because the tropics are bigger than the extratropics, they are able to support more species. This is amplified by the fact that the tropics straddle the Equator, forming one continuous area, whereas the temperate and polar regions of each hemisphere are isolated from their counterparts. Problems with the geographical hypothesis include the observations that the modern tundra, which has low species diversity, covers more area than other extratropical regions, and that 70% of land area is in the Northern Hemisphere (see Fig. 1), but its terrestrial species diversity is not consistently higher than in the Southern Hemisphere.
The historical and geographical hypotheses have been strongly criticized, but climate is widely regarded as the primary driver of the LBG. The tropics display much lower seasonal variability than the extratropics, which could result in tropical species being unable to cope with varied environments. As a consequence, they might not be capable of dispersing across newly formed environmental barriers such as mountains and sea inlets, leading populations to fragment and, ultimately, break up into different species. Additionally, the tropics receive more solar energy (insolation) than the extratropics, which promotes increased plant productivity. This might lead to larger viable populations of primary producers, which in turn would support species higher up the food chain. Seasonality has been regarded by some authors as the most important driver of the LBG, but most studies have difficulty teasing apart the effects of insolation from those of seasonality.

A deep-time view of the LBG: The fossil record offers a unique window onto the causes and evolution of the LBG. Natural shifts in historical, geographical and climatic factors are typically too slow to detect in modern environments, but these factors are thought to have fluctuated substantially over time spans of thousands, millions or even hundreds of millions of years. Furthermore, because the LBG ultimately arose from a complex interplay of origination, extinction and dispersal, data on living species alone are not sufficient to understand it. Information from the fossil record is also necessary.

In general, it has been thought that some form of the modern LBG has persisted throughout the Phanerozoic eon (the past 541 million years), even if it has not always been so pronounced. However, this picture changes if we take into account important biases in our sampling of the fossil record. Such factors include geological biases (for example, different areas might have different amounts of fossil-bearing rock) and biases associated with human collecting effort (for example, certain ‘key’ time intervals are more studied than others). These biases can produce false peaks in diversity when sampling is good (such as when we have lots of opportunity to collect fossils) and troughs for relatively poorly sampled time intervals (when we have few opportunities to collect fossils). This can produce a misleading picture of the fossil record. By using rigorous statistical methods to minimize these biases, we can infer that at certain times during the Phanerozoic the LBG has weakened, flattened or even developed into a ‘palaeotemperate peak’ in which species numbers were highest at temperate latitudes, between 30° and 60° north and south of the Equator. Strong evidence for a modern-type LBG with a tropical peak and poleward decline in species numbers is restricted to the early Palaeozoic era (approximately 458 million to 423 million years ago), possibly the late Palaeozoic era (330 million to 270 million years ago), and the past 30 million years. In the remaining periods, the available evidence argues for palaeotemperate peaks in biodiversity, or flattened gradients. This evidence comes from a variety of fossil groups, including marine invertebrates, dinosaurs, early mammals, insects and coral reefs (Fig. 5).
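To give a flavour of how such sampling biases can be minimized, here is a toy sketch of classical rarefaction: repeatedly drawing an equal number of fossil occurrences from each time interval and counting the species recovered. The occurrence lists are invented for illustration, and published deep-time studies use considerably more sophisticated subsampling methods than this:

# Toy rarefaction: compare species richness between two time intervals
# after equalizing sampling effort. All occurrence data are invented.
import random

random.seed(1)

# Each occurrence is one fossil assigned to a (hypothetical) species.
well_sampled = ["sp%d" % random.randint(1, 40) for _ in range(500)]
poorly_sampled = ["sp%d" % random.randint(1, 40) for _ in range(60)]

def rarefied_richness(occurrences, quota=50, trials=1000):
    """Mean number of species found in `quota` randomly drawn occurrences."""
    richness = []
    for _ in range(trials):
        draw = random.sample(occurrences, quota)
        richness.append(len(set(draw)))
    return sum(richness) / trials

print("raw richness:", len(set(well_sampled)), "vs", len(set(poorly_sampled)))
print("rarefied richness:", rarefied_richness(well_sampled),
      "vs", rarefied_richness(poorly_sampled))

Because both invented intervals are drawn from the same underlying species pool, their raw richness differs simply through sampling effort, while the rarefied values converge, which is the kind of correction that can turn an apparent diversity peak into a sampling artefact.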
The two intervals in deep time for which there is strong evidence of a modern-type LBG both represent periods of a much cooler Earth (icehouse worlds), whereas temperate peaks in biodiversity and flattened gradients occurred during hotter greenhouse intervals or interglacials (short periods of warming within icehouse regimes) (Fig. 6). It is likely that the tropics were less perturbed by glaciations than the extratropical regions; for example, the tropics might have acted as a refuge for species during the Pleistocene Ice Ages. Conversely, during warmer, more equable time intervals, the tropics might simply have become too hot for many organisms to survive. It seems that the present pattern developed only in the past 30 million years or so, coinciding with a transition to global cooling, a steepening of the climatic gradient (with a greater differentiation between tropical and extratropical climate), and the onset of Antarctic glaciation.

The causes of the LBG remain obscure, but the fossil record presents a unique opportunity to explore these patterns in both time and space, and supports an important role for climate. Much more work needs to be done in examining the deep-time LBG, with many groups, regions, environments and time intervals currently neglected. Modern ecological ‘rules’ that seem to be correlated with the LBG, such as Bergmann’s rule on organisms’ body size, have yet to be thoroughly tested in the fossil record. Further insights into drivers of the LBG from the fossil record will be crucial in understanding the threat to extant organisms from ongoing climate change. One outcome of global warming might be the development of a shallower climatic gradient (comparable to that of the time of the dinosaurs); the future LBG might follow this climatic pattern, with different levels of extinction and/or the dispersal of organisms out of the tropics potentially producing a temperate biodiversity peak. Current predictions of climatically driven biodiversity change will need to consider the complex interactions of climate and geography if they are to make accurate forecasts about future dispersals and extinctions. The fossil record, which includes instances of rapid global warming analogous to that generated by human activity today, is a vital resource for understanding and predicting what will happen to life on Earth in the coming years, decades and centuries.

Suggestions for further reading:

Archibald, S. B., Bossert, W. H., Greenwood, D. R. & Farrell, B. D. 2010. Seasonality, the latitudinal gradient of diversity, and Eocene insects. Paleobiology 36, 374–398. (doi:10.1666/09021.1)

Jablonski, D., Roy, K. & Valentine, J. W. 2006. Out of the tropics: evolutionary dynamics of the latitudinal diversity gradient. Science 314, 102–106. (doi:10.1126/science.1130880)

Mannion, P. D., Upchurch, P., Benson, R. B. J. & Goswami, A. 2014. The latitudinal biodiversity gradient through deep time. Trends in Ecology & Evolution 29, 42–50. (doi:10.1016/j.tree.2013.09.012)

Mittelbach, G. G. et al. 2007. Evolution and the latitudinal diversity gradient: speciation, extinction and biogeography. Ecology Letters 10, 315–331. (doi:10.1111/j.1461-0248.2007.01020.x)

Willig, M. R., Kaufman, D. M. & Stevens, R. D. 2003. Latitudinal gradients of biodiversity: pattern, process, scale, and synthesis. Annual Review of Ecology, Evolution, and Systematics 34, 273–309.
(doi:10.1146/annurev.ecolsys.34.012103.144032) 1Department of Earth Science and Engineering, Imperial College London, South Kensington Campus, London, SW7 2AZ, UK.
An introduction to the polar regions and the Catlin Arctic Survey expeditions, this lesson focuses on the journey taken by the team members from their homes to the Arctic. Students will be able to compare the geographical similarities and differences between their local area and an extreme environment. Students will also use GIS, in the form of Google Earth, and multimedia content to develop their knowledge of the Arctic and Arctic Ocean. - Know about the polar regions - List differences and similarities between students’ local area and the Arctic - Describe the extreme life of a polar expedition - Know about adventurous geographical careers - A rude awakening: why was Charlie shocked? (15 mins) Using an audio file recorded by explorer Charlie Paton, students will hear what it is like to wake up with the Arctic sea ice opening up underneath your tent. Students develop their understanding of this extreme polar environment. - Journey to the poles (15 mins) Students use the journey taken by the team members from their homes to the Arctic to explore similarities and differences with their local area. Use Google Earth, atlases and online weather data to develop this step. - Living on the ice (20 mins) Using multimedia stimulus materials from the Discovery Zone, e.g. photos and videos, students write a diary entry imagining their first day as a member of the team at the Ice Base. - Team profiles (10 mins) This lesson step enables students to learn more about the different members of the team, giving pointers for home learning and to discuss geographical careers.
Thanks to environmental DNA, or eDNA, scientists now have the most comprehensive answer yet to the question of which species live just off southern California's beaches. A study led by UCLA and published in the journal PLOS One has identified 80 species of fish and rays living in the surf zones of southern California, where ocean waves crash onto the beach. According to Paul Barber, a professor of ecology and evolutionary biology at UCLA and the senior author of the paper, “Environmental DNA opens up a wealth of possibilities to monitor our local beach ecosystems.” The researchers collected samples of ocean water from 18 sites spanning from the Channel Islands to Catalina. They extracted DNA from the water, which animals shed in the form of dead skin, scales, and other body parts. This DNA was then matched to species using genetic reference libraries. The study revealed a diverse range of fish, sharks, and rays, including leopard sharks, school sharks, bat rays, round stingrays, opaleye, northern anchovies, flatfish, giant kelpfish, and surfperch. Zack Gold, the study's lead author, conducted the research as a doctoral candidate at UCLA and is now a marine scientist in the Ocean Molecular Ecology group at the Pacific Marine Environmental Laboratory of the National Oceanic and Atmospheric Administration. Gold highlighted one positive finding: white seabass DNA samples consistently appeared at all the test sites. This is significant because white seabass, which has been historically overfished, was the focus of a 1995 California conservation plan. While earlier studies indicated a slow recovery for the species, the new study provides evidence of a more robust comeback than previously thought. The researchers compared eDNA to two traditional methods, beach seines (nets used to capture species) and baited video cameras, to survey ocean life. They found that eDNA revealed a more comprehensive picture of the species living in the surf zones studied, detecting 58 species that the other methods missed. However, the other research methods still have scientific value. For instance, eDNA does not provide information about the size, age, and sex of the animals. Additionally, eDNA data is limited in quantifying the abundance of each species in a given area, though recent advancements in technology may improve this aspect. Surf zones, which serve as ecological boundaries between land and ocean, are challenging to study due to their turbulent and ever-changing nature, as noted by UC Santa Barbara marine ecologist Jenifer Dugan, a co-author of the paper. She emphasized that little is known about the fish communities in surf zones, and the study relied on scientists using beach seines and baited cameras to gather comparison data. While genetic labs are required to process eDNA samples, even non-scientists such as lifeguards and beachgoers can collect water samples, making the process more affordable, accessible, and frequent. This provides conservation science with an opportunity to better understand how marine life is affected by events such as oil spills, pollution, extreme temperatures, and other short-term environmental concerns. McKenzie Koch, a lead author of the paper who teaches science to children at Ocean Institute, a nonprofit education organization, believes that involving citizen scientists in research increases their awareness of and care for wildlife.
She said, “If we continue to connect recreation to science, we’ll end up with a more motivated generation.” Barber suggested that environmental nonprofits like Heal the Bay, which regularly test water samples from the region’s beaches for bacteria, could also utilize those samples for long-term monitoring of marine life. This would be especially informative as the oceans warm due to climate change. Apart from these benefits, the study provides the clearest understanding to date of the marine life near popular beach destinations like Malibu and Santa Monica. For example, evidence of leopard sharks and school sharks was found in these areas. However, Gold assured beachgoers that there is no need to avoid these beaches as humans are not the prey of these sharks. He humorously added, “SUVs on the 405 are a much bigger threat than sharks.” Nonetheless, it is helpful to be aware of the presence of round rays, which can sting when disturbed. Gold shared from personal experience that shuffling one’s feet along the sand can help avoid painful injuries.
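As a rough illustration of the matching step described above, in which recovered DNA is compared against genetic reference libraries, here is a toy sketch using exact lookup. Real eDNA pipelines use probabilistic classifiers and curated barcode databases; every sequence and read count below is invented:

# Toy eDNA matching: assign sequence reads to species via a reference
# library of DNA "barcodes". All sequences are invented for illustration.
from collections import Counter

# Hypothetical reference library: barcode sequence -> species.
reference = {
    "ACGTACGGTTA": "leopard shark",
    "TTGACCGATCC": "white seabass",
    "GGCATTACGGA": "northern anchovy",
}

# Hypothetical reads recovered from a water sample.
reads = ["ACGTACGGTTA", "GGCATTACGGA", "ACGTACGGTTA", "TTGACCGATCC", "AAAAAAAAAAA"]

# Count how many reads match each species; unknown reads stay unmatched.
detections = Counter(reference.get(read, "unmatched") for read in reads)
for species, count in detections.items():
    print(f"{species}: {count} read(s)")

As the article notes, detecting a species' DNA is easier than inferring its abundance, which is one reason read counts like these are treated as presence signals rather than population estimates.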
In This Article What Is A Dental X-Ray? Dental X-rays, also known as dental radiographs, are images of your teeth that dentists use to identify potential oral health problems. Dentists use digital X-rays with low levels of radiation to visualize the internal structures of your teeth and gums, based on which they can diagnose dental decay, cavities, and other dental problems. Dental X-rays are the most common tools used by dentists for the diagnosis of dental problems. How Do Dental X-Rays Work? Dental radiography might sound complex, but it's fairly simple. The X-ray machine sends X-rays through the mouth. The teeth, bones, and hard tissues absorb more of the rays, so they appear lighter on the final radiograph. Meanwhile, the gums and soft tissues absorb less of the rays, so they appear darker on the radiograph. The infected, abscessed, and decayed regions are usually softer, and they don't absorb as much of the rays, so they also appear darker. Dentists study the light and dark contrasts in dental X-rays to diagnose health problems. How Often Should You Get Dental X-Rays? The frequency of dental X-rays depends entirely on your oral health and requirements. If you have a high risk of dental decay, bacterial infections, or other health problems, your dentist may recommend dental X-rays every 6 to 12 months. Frequent dental X-rays will allow the dentist to identify potential dental health problems at the earliest stage, before they progress further. Most oral health problems are chronic, so it's important to identify and treat them at the earliest stage possible. The longer you delay treatment, the worse the condition becomes. Children and teenagers generally need dental X-rays more frequently than adults because their teeth aren't completely developed, so there's a higher risk of dental problems. Dentists also take dental X-rays before major procedures, such as root canals and periodontal disease treatments. You may also need dental X-rays when you switch to a new dental professional or if your dentist identifies a potential sign of dental infection. People with healthy teeth and a low risk of infections and dental decay can get dental X-rays every other year, but you should follow your dentist's recommendations above all else. What Can Dental X-Rays Identify? Dental X-rays provide contrasting light and dark images of the insides of your mouth. The teeth and bone look bright white, and the soft tissues and infected areas look gray or black. The images taken from dental radiographs provide dentists with the information they need to create a personalized treatment plan. Dentists can review the X-rays to identify possible abscesses, infections, cavities, or impacted teeth. And because different types of dental restorations look different in dental X-rays, the dentist can also assess the general health and quality of a restoration. Are Dental X-Rays Really Necessary? Dental X-rays are necessary because they provide important diagnostic information to dentists. While dentists can spot some cavities and infections from an external inspection of your teeth, dental X-rays are necessary to identify deeper issues. In most cases, infections and cavities only become externally visible once they have spread considerably, at which point stopping their progress becomes harder. But dental X-rays reveal potential infections and cavities at the earliest stage possible, when they're most easily treated.
Undergoing dental X-rays can save considerable time, energy, and expense on more complex treatments down the line.

Are Digital Teeth X-Rays Safe?

Some patients worry that dental X-ray radiation may cause adverse health problems in the long run. While dental X-ray equipment does generate radiation, the amount of exposure is very limited. Dental radiography is widely considered safe in routine situations; the radiation level is comparable to what you receive from everyday activities, like watching TV or using smoke detectors. As such, digital teeth X-rays are considered safe for you; the potential health impacts of untreated infections, abscesses, and decay are far more pressing.

Can You Get Dental X-Rays When Pregnant?

Unless absolutely necessary, pregnant women are generally asked to avoid dental X-rays, even though the radiation level should be safe. But since pregnant women also have a higher risk of periodontitis, you shouldn't avoid periodontal X-rays if you notice the signs and symptoms of gum disease, such as red or bleeding gums. Pregnant women undergoing dental X-rays are generally asked to wear lead thyroid collars and aprons to protect vulnerable areas from radiation. Breastfeeding and nursing women can proceed with dental X-rays without any concern.

What Are The Types Of Dental X-Rays?

- Bitewing Dental X-Rays: You bite down on a piece of paper. The dentist checks for cavities between the teeth and determines whether your crowns line up.
- Occlusal Dental X-Rays: You close your jaw, and the dentist identifies abnormalities in the floor of your mouth or in how your upper and lower teeth match.
- Panoramic Dental X-Rays: The X-ray equipment rotates around your head to capture a complete image of your mouth. It is also useful for planning dental implants.
- Periapical Dental X-Rays: This technique captures two teeth in full, from root to crown.

Schedule Your Dental Radiology In Houston

URBN Dental is a state-of-the-art dental clinic specializing in digital dental X-rays, which use even less radiation than traditional X-rays. We diagnose and treat potential dental problems at the earliest stage possible to prevent further complications. You can find our dental clinic at 3201 Allen Pkwy, Houston, a short drive from the Museum District, West University Place, Upper Kirby, or River Oaks. Please schedule an appointment for your dental X-rays in Houston.
In this tutorial, we are going to make a simple text editor with the Tkinter module, which comes with Python, so we don't have to install anything. It will support creating, opening, editing, and saving text files, plus opening a file passed on the command line ("open with").

We start by importing some modules which we will all need later. We import everything from Tkinter so we have all the variables available, and we also import the scrolledtext and filedialog modules individually. The scrolledtext will be the text area where we write, and the filedialog allows us to show open and save dialogs:

# Import
from tkinter import *
from tkinter import scrolledtext
from tkinter import filedialog

Next, we import ctypes to enable high DPI (Dots per Inch) so our window looks sharper, and sys so we can analyze the arguments given through the command line. We later use sys to enable "open with":

import ctypes
import sys

# Enable high-DPI rendering (the exact call is assumed here; it is Windows-only)
ctypes.windll.shcore.SetProcessDpiAwareness(1)

Now we set up some variables for our little program. The first two variables are used to keep consistency when titling our program. The currentFilePath is used when saving the file so we know where to save it. This string will also be appended to the window title, as many programs do, to show which file is being edited. At last, we define which file types can be opened with our editor. We use this variable in the file dialogs:

# Setup Variables
appName = 'Simple Text Editor'
nofileOpenedString = 'New File'
currentFilePath = nofileOpenedString

# Viable file types when opening and saving files.
fileTypes = [("Text Files", "*.txt"), ("Markdown", "*.md")]

Next, we are going to set up the Tkinter window. To do that, we make a new Tk object. After that, we give the window a title using the variables we defined earlier. Because we have no file opened at the moment, it will say Simple Text Editor - New File. We will also make it so an asterisk is added in front of the file name so we know when we have unsaved changes. Then we set the initial window dimensions in pixels with the geometry method of Tkinter. Last but not least, we set the first column to take up 100% of the space so our text area will be the full width:

# Tkinter Setup
window = Tk()
window.title(appName + " - " + currentFilePath)

# Window dimensions in pixels
window.geometry('500x400')

# Set the first column to occupy 100% of the width
window.grid_columnconfigure(0, weight=1)

Now we are going to set up two functions that are connected to events called by Tkinter widgets. The first function is called when we press any of the file buttons, so we can save, open, and make new files. Later you will see how we connect them. We have to declare currentFilePath as global because it was defined outside this function. Our function takes one argument, namely the action, which defines what we want to do. We check this argument and act depending on it. So if the action is open, we trigger the askopenfilename() function of the filedialog module. We supply it with the fileTypes we defined earlier so the user will only be able to open these file types. After we choose a file, the function returns the path of the file. Then we set the window title to include this file path. After that, we set our currentFilePath to this file path.
Now we just open the file and insert its content into our text area called txt, after we clear the area with its delete() method:

# Handler Functions
def fileDropDownHandeler(action):
    global currentFilePath
    # Opening a file
    if action == "open":
        file = filedialog.askopenfilename(filetypes=fileTypes)
        window.title(appName + " - " + file)
        currentFilePath = file
        with open(file, 'r') as f:
            txt.delete(1.0, END)
            txt.insert(INSERT, f.read())

If the action is new, we set the file path to the new-file placeholder. We also delete the text in the text area and reset the window title:

    # Making a new file
    elif action == "new":
        currentFilePath = nofileOpenedString
        txt.delete(1.0, END)
        window.title(appName + " - " + currentFilePath)

Last but not least, we check for save and saveAs. If the file is new or we pressed the Save As button, we ask the user where they want to save the file. Then we open the file and write the text from the text area into it. After that, we reset the window title, because there was probably an asterisk in it:

    # Saving a file
    elif action == "save" or action == "saveAs":
        if currentFilePath == nofileOpenedString or action == 'saveAs':
            currentFilePath = filedialog.asksaveasfilename(filetypes=fileTypes)
        with open(currentFilePath, 'w') as f:
            f.write(txt.get('1.0', 'end'))
        window.title(appName + " - " + currentFilePath)

Now for a simple function. Whenever the text area is changed, we call this function to add an asterisk after the appName and before currentFilePath, to show the user that there are unsaved changes:

def textchange(event):
    window.title(appName + " - *" + currentFilePath)

Now we are going to set up the graphical elements. First, we set up the text area and set its height to 999 so it spans the full height, and we position it with sticky=N+S+E+W to tell the widget to grow in all directions when the user resizes the window. With the bind() method, we say that whenever a key is pressed in the text area, we call the textchange function:

# Text Area
txt = scrolledtext.ScrolledText(window, height=999)
txt.grid(row=1, sticky=N+S+E+W)

# Bind an event in the widget to a function
txt.bind('<KeyPress>', textchange)

Now let's set up our dropdown menu for file interactions. We first make a new Menu whose root is the window. We make a second one whose root is the first menu, and we set tearoff to False so the user won't be able to tear off this menu into a separate window. Then we add commands to this menu with its add_command() method. We supply this method with a label, which is the displayed text, and a command, which is the function called when the button is pressed. We make a lambda function that calls our fileDropDownHandeler() function; we do this so we can supply our function with an argument. We can also add separators with the add_separator() method. In the end, we add this menu as a cascade to the menu button, and we set this menu to be the main menu:

# Menu
menu = Menu(window)

# Set tearoff to False
fileDropdown = Menu(menu, tearoff=False)

# Add commands and their callbacks
fileDropdown.add_command(label='New', command=lambda: fileDropDownHandeler("new"))
fileDropdown.add_command(label='Open', command=lambda: fileDropDownHandeler("open"))

# Adding a separator between button types.
fileDropdown.add_separator()
fileDropdown.add_command(label='Save', command=lambda: fileDropDownHandeler("save"))
fileDropdown.add_command(label='Save as', command=lambda: fileDropDownHandeler("saveAs"))
menu.add_cascade(label='File', menu=fileDropdown)

# Set this menu to be the main menu
window.config(menu=menu)

Now we enable the user to open a file directly with our program with this little code snippet. If the number of command-line arguments is two, we know the second argument is the path of the desired file, so we set currentFilePath to this path. After that, we do essentially the same as for the open action in the handler function:

# Enabling "open with" by checking whether a second argument was passed.
if len(sys.argv) == 2:
    currentFilePath = sys.argv[1]
    window.title(appName + " - " + currentFilePath)
    with open(currentFilePath, 'r') as f:
        txt.delete(1.0, END)
        txt.insert(INSERT, f.read())

In the end, we call the mainloop method on the window so the window displays:

# Main Loop
window.mainloop()

Excellent! You have successfully created a simple text editor using Python! See if you can add more features to this program, such as keyboard shortcuts; one possible sketch follows below. Happy coding ♥
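As a possible extension, not part of the original tutorial, the existing handler can be reused for keyboard shortcuts. This is a minimal sketch, assuming the fileDropDownHandeler function and window object from this tutorial are in scope; place it just before window.mainloop():

# Bind common shortcuts to the existing file handler
window.bind('<Control-s>', lambda event: fileDropDownHandeler("save"))
window.bind('<Control-o>', lambda event: fileDropDownHandeler("open"))
window.bind('<Control-n>', lambda event: fileDropDownHandeler("new"))

Tkinter passes an event object to the callback; the lambda accepts and ignores it so the same handler can serve both the menu and the shortcuts.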
Learning Objectives

By the end of this section, you will be able to do the following:
- Contrast traditional economies, command economies, and market economies
- Explain gross domestic product (GDP)
- Assess the importance and effects of globalization

Think about what a complex system a modern economy is. It includes all production of goods and services, all buying and selling, all employment. In addition, the modern economy answers the basic economic questions: What goods and services should be produced, how should they be produced, and to whom should they be distributed? The economic life of every individual is interrelated, at least to a small extent, with the economic lives of thousands or even millions of other individuals. Who organizes and coordinates this system? Who ensures, for example, that the number of televisions a society provides is the same as the amount it needs and wants? Who ensures that the right number of employees work in the electronics industry? Who ensures that televisions are produced in the best way possible? How does it all get done?

There are at least three ways societies have found to organize an economy and answer the basic economic questions. The first is the traditional economy, the oldest economic system to answer the basic economic questions. This economy can be found in parts of Asia, Africa, and South America. Traditional economies organize their economic affairs the way they have always done (i.e., by tradition). Occupations stay in the family. Most families are farmers who grow the crops they have always grown using traditional methods. What you produce is what you get to consume. Because things are driven by tradition, there is little economic progress or development.

Command economies are very different. In a command economy, economic effort is devoted to goals passed down from a ruler or ruling class. Ancient Egypt was a good example: a large part of economic life was devoted to building pyramids, like those shown in Figure 1.7, for the pharaohs. Medieval manor life is another example: the lord provided the land for growing crops and protection in the event of war. In return, vassals provided labor and soldiers to do the lord's bidding. In the last century, communism emphasized command economies. In a command economy, the government decides what goods and services will be produced and what prices will be charged for them. The government decides what methods of production will be used and how much workers will be paid. Many necessities like healthcare and education are provided for free. Currently, Cuba and North Korea have command economies.

Although command economies have a very centralized structure for economic decisions, market economies have a very decentralized structure. A market is an institution that brings together buyers and sellers of goods or services, who may be either individuals or businesses. The New York Stock Exchange, shown in Figure 1.8, is a prime example of a market in which buyers and sellers are brought together. In a market economy, decision-making is decentralized. Market economies are based on private enterprise: the means of production (resources and businesses) are owned and operated by private individuals or groups of private individuals. Businesses supply goods and services based on demand. In a command economy, by contrast, resources and businesses are owned by the government. In a market economy, what goods and services are supplied depends on what is demanded.
A person's income is based on his or her ability to convert resources (especially labor) into something that society values. The more society values the person's output, the higher the income (think Lady Gaga or LeBron James). In this scenario, economic decisions are determined by market forces, not governments.

Most economies in the real world are mixed; they combine elements of command and market (and even traditional) systems. The U.S. economy is positioned toward the market-oriented end of the spectrum. Many countries in Europe and Latin America, while primarily market-oriented, have a greater degree of government involvement in economic decisions than the U.S. China and Russia, while closer to having a market-oriented system now than several decades ago, remain closer to the command-economy end of the spectrum. A rich resource of information about countries and their economies can be found on the Heritage Foundation's website, as the following Clear It Up feature discusses.

Clear It Up: What Countries Are Considered Economically Free?

Who is in control of economic decisions? Are people free to do what they want and to work where they want? Are businesses free to produce when they want and what they choose, and to hire and fire as they wish? Are banks free to choose who will receive loans? Or does the government control these kinds of choices? Each year, researchers at the Heritage Foundation and The Wall Street Journal look at 50 different categories of economic freedom for countries around the world. They give each nation a score based on the extent of economic freedom in each category. The 2015 Heritage Foundation Index of Economic Freedom report ranked 178 countries around the world; some examples of the most free and least free countries are listed in Table 1.1. Several countries were not ranked because of extreme instability that made judgments about economic freedom impossible. These countries include Afghanistan, Iraq, Syria, and Somalia. The assigned rankings are inevitably based on estimates, yet even these rough measures can be useful for discerning trends. In 2015, 101 of the 178 included countries shifted toward greater economic freedom, although 77 of the countries shifted toward less economic freedom. In recent decades, the overall trend has been a higher level of economic freedom around the world.

Table 1.1

| Most Economic Freedom | Least Economic Freedom |
| --- | --- |
| 1. Hong Kong | 167. Timor-Leste |
| 2. Singapore | 168. Democratic Republic of Congo |
| 3. New Zealand | 169. Argentina |
| 4. Australia | 170. Republic of Congo |
| 5. Switzerland | 171. Iran |
| 6. Canada | 172. Turkmenistan |
| 7. Chile | 173. Equatorial Guinea |
| 8. Estonia | 174. Eritrea |
| 9. Ireland | 175. Zimbabwe |
| 10. Mauritius | 176. Venezuela |
| 11. Denmark | 177. Cuba |
| 12. United States | 178. North Korea |

Regulations: The Rules of the Game

Markets and government regulations are always entangled. There is no such thing as an absolutely free market. Regulations always define the rules of the game in the economy. Economies that are primarily market-oriented have fewer regulations, ideally just enough to maintain an even playing field for participants. At a minimum, these laws govern matters like safeguarding private property against theft, protecting people from violence, enforcing legal contracts, preventing fraud, and collecting taxes. Conversely, even the most command-oriented economies operate using markets. How else would buying and selling occur?
But the decisions about what will be produced and what prices will be charged are heavily regulated. Heavily regulated economies often have underground economies, which are markets where buyers and sellers make transactions without the government's approval. The question of how to organize economic institutions is typically not a black-or-white choice between all market or all government, but instead involves a balancing act over the appropriate combination of market freedom and government rules.

The Rise of Globalization

Recent decades have seen a trend toward globalization, which is the expanding cultural, political, and economic connections between people around the world. One measure of this is the increased buying and selling of goods, services, and assets across national borders: in other words, international trade and financial capital flows. Globalization has occurred for a number of reasons. Improvements in shipping, as illustrated by the container ship shown in Figure 1.9, and air cargo have driven down transportation costs. Innovations in computing and telecommunications have made it easier and cheaper to manage long-distance economic connections of production and sales. Many valuable products and services in the modern economy can take the form of information: for example, computer software; financial advice; travel planning; music, books, and movies; and blueprints for designing a building. These products and many others can be transported over telephone and computer networks at ever-lower costs. Finally, international agreements and treaties between countries have encouraged greater trade.

Table 1.2 presents one measure of globalization. It shows the percentage of domestic economic production that was exported for a selection of countries from 2010 to 2013, according to The World Bank. Exports are the goods and services that are produced domestically and sold abroad. Imports are the goods and services that are produced abroad and then sold domestically. The size of total production in an economy is measured by the gross domestic product (GDP). Thus, the ratio of exports divided by GDP measures what share of a country's total economic production is sold in other countries.

Table 1.2: Exports as a share of GDP, 2010-2013, for selected higher-income, middle-income, and lower-income countries.

In recent decades, the export/GDP ratio has generally risen, both worldwide and for the U.S. economy. Interestingly, the share of U.S. exports in proportion to the U.S. economy is well below the global average, in part because large economies like the United States can contain more of the division of labor inside their national borders. However, smaller economies like Belgium, South Korea, and Canada need to trade across their borders with other countries to take full advantage of division of labor, specialization, and economies of scale. In this sense, the enormous U.S. economy is less affected by globalization than most other countries. Table 1.2 also shows that many medium- and low-income countries around the world, like Mexico and China, have also experienced a surge of globalization in recent decades. If an astronaut in orbit could put on special glasses that make all economic transactions visible as brightly colored lines and look down at Earth, the astronaut would see the planet covered with connections. Hopefully, you now have an idea of what economics is about.
Before you move to any other chapter of study, be sure to read the very important appendix to this chapter, The Use of Mathematics in Principles of Economics. It is essential that you learn more about how to read and use models in economics.

Bring It Home: Decisions in the Social Media Age

The world we live in today provides nearly instant access to a wealth of information. Consider that as recently as the late 1970s, the Farmer's Almanac and the Weather Bureau of the U.S. Department of Agriculture were the primary sources American farmers used to determine when to plant and harvest their crops. Today, farmers are more likely to access online weather forecasts from the National Oceanic and Atmospheric Administration or watch the Weather Channel. After all, knowing the upcoming forecast could help farmers decide when to harvest their crops, and that decision in turn could change the amount of crop harvested. Some relatively new information forums are rapidly changing how information is distributed and, hence, influencing decision-making. In 2014, the Pew Research Center reported that 71 percent of online adults use social media. Topics posted range from the National Basketball Association, to celebrity singers and performers, to farmers. Information helps us make decisions. Some decisions are as simple as what to wear today, while others may be more complicated, such as how many reporters should be sent to cover a crash. Each of these decisions is an economic decision. After all, resources are scarce. You might decide that you need to go shopping for new clothes. And if ten reporters are sent to cover an accident, they are not available to cover other stories or complete other tasks. Information provides the knowledge needed to make the best possible decisions on how to utilize scarce resources. Welcome to the world of economics!
The components of an aircraft or a spacecraft that support the weight of the craft and its load and give it mobility on ground or water. Source: EUROCONTROL ATM Lexicon The landing gear is the principal support of the airplane when parked, taxiing, taking off, or landing. The most common type of landing gear consists of wheels, but airplanes can also be equipped with floats for water operations or skis for landing on snow. The wheeled landing gear on small aircraft consists of three wheels: two main wheels (one located on each side of the fuselage) and a third wheel positioned either at the front or rear of the airplane. Landing gear with a rear mounted wheel is called conventional landing gear. Airplanes with conventional landing gear are sometimes referred to as tailwheel airplanes. The two main wheels are attached to the airframe ahead of its centre of gravity (CG) and support most of the weight of the aircraft. The tailwheel is located at the very back of the fuselage and provides a third point of support. This arrangement allows adequate ground clearance for a larger nose-mounted propeller and is more desirable for operations on unimproved fields. It is therefore popular with small, general aviation aircraft such as the PIPER L-18C and the C170. With the CG located behind the main landing gear (MLG), directional control is more difficult while on the ground. For example, if the pilot allows the aircraft to swerve while rolling on the ground at a low speed, they may not have sufficient rudder control and the CG will attempt to get ahead of the main gear, which may cause the airplane to ground loop. Touching down with the tailwheel may, depending on the speed, produce enough lift (due to the increased Angle of Attack (AOA)) and cause the aircraft to become airborne again. Diminished forward visibility when the tailwheel is on or near the ground is another disadvantage of tailwheel landing gear airplanes. Specific training is required to operate tailwheel airplanes. When the third wheel is located on the nose, it is called a nosewheel, and the design is referred to as a tricycle gear. It has the following advantages compared to the conventional type: - Allows more forceful application of the brakes during landings at high speeds without causing the aircraft to nose over. - Tends to prevent ground looping (swerving) by providing more directional stability during ground operation since the aircraft’s CG is forward of the main wheels. This keeps the airplane moving forward in a straight line rather than ground looping. - Provides better forward visibility for the pilot during takeoff, landing, and taxiing. A steerable nosewheel or tailwheel permits the airplane to be controlled throughout all operations while on the ground. Most aircraft are steered by moving the rudder pedals, whether nosewheel or tailwheel. Airplane brakes are located on the main wheels and are applied by either a hand control or by foot pedals (toe or heel). Foot pedals operate independently and allow for differential braking, i.e. applying different force to the left and right main landing gear assemblies. During ground operations, differential braking can supplement nosewheel/tailwheel steering. Landing gear can also be classified as either fixed or retractable. Fixed landing gear always remains extended and has the advantage of simplicity combined with low maintenance. 
Retractable landing gear is designed to streamline the airplane (reduce drag) by allowing the landing gear to be stowed inside the structure during cruising flight. Fixed landing gear is common on slow (e.g. general aviation) aircraft, and most commercial aircraft use retractable landing gear. Heavier aircraft require more complex landing gear, consisting of multiple wheels, and sometimes the MLG is made of more than two assemblies. For example, the Airbus A340 family is equipped with an MLG comprising three parts (one under each wing and the third under the fuselage), and the AIRBUS A-380-800 and the Boeing B747 series have four (one under each wing and two under the fuselage). Some large cargo aircraft, e.g. the ANTONOV An-124 Ruslan and ANTONOV An-225 Mriya, also have nose landing gear comprising two assemblies (in addition to the complex MLG design). Retractable landing gear is normally powered by the hydraulic system. In the case of failure, an emergency extension system is available. This may be a manually operated crank or pump, or a mechanical free-fall mechanism. Airflow is sometimes used to get the gear into the locked position. Landing with the gear in the "up" position or with an unlocked gear can lead to loss of directional control on the ground, a Runway Excursion, extensive structural damage, or Fire, Smoke & Fumes.

Accidents and Incidents

This section contains A&I examples that have landing gear as a contributory factor.

- A30B, Bratislava Slovakia, 2012 (On 16 November 2012, an Air Contractors Airbus A300 departed the left side of the landing runway at Bratislava after an abnormal response to directional control inputs. Investigation found that incorrect and undetected re-assembly of the nose gear torque links had led to the excursion and that the absence of clear instructions in maintenance manuals, since rectified, had facilitated this. It was also considered that the absence of any regulation requiring equipment in the vicinity of the runway to be designed to minimise potential damage to aircraft departing the paved surface had contributed to the damage caused by the accident.)
- A310, Vienna Austria, 2000 (On 12 July 2000, a Hapag Lloyd Airbus A310 was unable to retract the landing gear normally after takeoff from Chania for Hannover. The flight was continued towards the intended destination, but the selection of an en-route diversion due to the higher fuel burn was misjudged, and usable fuel was completely exhausted just prior to an intended landing at Vienna. The aeroplane sustained significant damage as it touched down unpowered inside the aerodrome perimeter, but there were no injuries to the occupants and only minor injuries to a small number of them during the subsequent emergency evacuation.)
- A320, Khartoum Sudan, 2005 (On 11 March 2005, an Airbus A321-200 operated by British Mediterranean Airways executed two unstable approaches below applicable minima in a dust storm at Khartoum Airport, Sudan. The crew were attempting a third approach when they received information from ATC that visibility was below the minimum required for the approach, and they decided to divert to Port Sudan, where the aircraft landed without further incident.)
- A320, Los Angeles USA, 2005 (On 21 September 2005, an Airbus A320 operated by JetBlue Airways made a successful emergency landing at Los Angeles Airport, California, with the nose wheels cocked 90 degrees to the fore-aft position after an earlier fault on gear retraction.)
- A320, Perth Australia, 2018 (On 14 August 2018, an Airbus A320 departed Perth without full removal of its main landing gear ground locks and the unsecured components fell unseen from the aircraft during taxi and takeoff, only being recovered after runway FOD reports. The Investigation identified multiple contributory factors including an inadequately-overseen recent transfer of despatch responsibilities, the absence of adequate ground lock use procedures, the absence of required metal lanyards linking the locking components not attached directly to each gear leg flag (as also found on other company aircraft) and pilot failure to confirm that all components were in the flight deck stowage.) - A320, Singapore, 2015 (On 16 October 2015, the unlatched fan cowl doors of the left engine on an A320 fell from the aircraft during and soon after takeoff. The one which remained on the runway was not recovered for nearly an hour afterwards despite ATC awareness of engine panel loss during takeoff and as the runway remained in use, by the time it was recovered it had been reduced to small pieces. The Investigation attributed the failure to latch the cowls shut to line maintenance and the failure to detect the condition to inadequate inspection by both maintenance personnel and flight crew.)
Restoring North American Bison

Bison galloping (credit: Eadweard Muybridge)

The day has been set aside to remember an animal of national importance, the American Bison. National Bison Day was established to commemorate this noble beast, which was driven virtually to the edge of extinction where millions had once roamed. National Bison Day recognizes the ecological, cultural, historical, and economic contributions of this wildlife icon. The restoration of the North American bison (Bison bison) to original rangeland in Canada's Banff National Park is underway. The shaggy animals haven't been part of Canada's northern Great Plains since being exterminated at the beginning of the 20th century. Having come back from the edge of extinction in the USA, the large beasts represent one of the great success stories in wildlife conservation. The Canadians are trying to replicate the US success with a bison restoration program of their own. The early results of the reintroduction have been rewarding. As numbers grow, it is expected that, similar to the rapid ecological adjustments observed when wolves were restored to Yellowstone National Park, returning Canadian bison to their prairie ranges will bring comparable changes to the ecosystem in the Banff park as well. Success in wildlife restoration projects requires mindful planning, smart engineering, an appreciation of animal behavior, good timing, and luck.
A New, Colorful Way to Attract Pollinators to Crops

By Andrew Porterfield

Intensive agriculture practices often depend on pollinators for success, but these practices also tend to eliminate the plants that are popular among bees, wasps, and other pollinators. Farmers and scientists have looked at planting wildflowers in growing areas, but a team from the University of Wyoming looked at an alternative source of attraction: specialty cut flowers. These flowers (marigolds, zinnias, strawflower, and others that are grown only during a short season) could produce a win-win situation for growers: they could attract a wide diversity of pollinators and provide supplemental economic benefits to farmers. Most specialty cut flowers in the U.S. are imported from near-equatorial countries, so growing them here could provide an alternative to imports. But do they attract pollinators? And how much? Little research had been done on specialty cut flowers and pollination. Until now, that is. A team of entomologists and horticulturalists at the University of Wyoming (UW), headed by graduate student Samantha Nobes, found that planting several species of specialty cut flowers in "high tunnel" shelters did indeed attract a wide variety of pollinating insects. Their results are reported in an article published last week in the Journal of Economic Entomology. The high-tunnel experiments were also significant because they showed that insects visited the flowers inside the structures, which have sides that can be rolled up in good weather to open access to flowers. (The structures also protect plants from high winds, frost, and large swings in temperature from day to night, common in high-altitude locations like Laramie, Wyoming.) The researchers planted the flowers in two high tunnels at about 7,200 feet above sea level. Sides were opened when temperatures rose above 40 degrees Fahrenheit. Six flower species were raised in the spring and summer: cultivars of marigold, stock, strawflower, ornamental carrot, cockscomb, and zinnia. Eighteen plots of flowers were planted in total. The group then measured the types and numbers of insects visiting each. The flowers attracted a diverse group of pollinators, including members of Diptera (flies), Hymenoptera (bees and wasps), Coleoptera (beetles), and Lepidoptera (butterflies and moths). Flies visited flowers most often (45 percent of visits), followed by bees and wasps, and, more distantly, beetles, butterflies, and moths. Bees (mostly Bombus species) most often visited marigolds, strawflowers, and silver cock's comb. Wasp visits did not vary much among the flower species, but wasp family preferences were distinct: ten unique wasp families were identified. The study yielded some unexpected results. "We were pleasantly surprised to observe the diversity of insects visiting these specialty cut flowers," says Randa Jabbour, Ph.D., associate professor of agroecology at UW and senior author on the study. "Although many entomologists are interested in the best ways to provide floral resources to bees, ornamentals or cut flowers are rarely considered as a possible avenue to accomplish this goal.
… Our collaboration between entomologists and horticulturalists allowed us to take an innovative approach to consider insect conservation alongside potentially marketable flower production." While the high tunnels are useful in high-altitude areas like Laramie, other types of structures could also be used to cultivate flowers and attract pollinators, as long as there is easy access from the outside. Year-round greenhouses, for instance, are always closed, and pollinators are rarely seen in them (unless they were introduced there). The diversity of pollinators was also seen as good news. Syrphid and non-syrphid flies provide both pollination (as adults) and pest control (as larvae). Non-bees may be less effective at depositing pollen on each flower visit, but they visit flowers more often than bees do. Certain wasps, such as crabronids, sphecids, and vespids, are good "backup" pollinators (compared to bees) and are predators of plant pests. Many species of cut flowers can be used to attract pollinators (as well as provide horticultural income streams), but these plants must be handled very carefully at each stage of production. There may be tradeoffs between harvesting and providing resources to bees and other pollinators, which the team recommended as a future avenue of research.

"Insect Visitors of Specialty Cut Flowers in High Tunnels," Journal of Economic Entomology

Andrew Porterfield is a writer, editor, and communications consultant for academic institutions, companies, and nonprofits in the life sciences. He is based in Camarillo, California. Follow him on Twitter at @AMPorterfield or visit his Facebook page.
Interpersonal skills are the qualities and behaviors we exhibit while interacting with other people. The workshop addresses the differences between self-esteem, self-awareness, and self-knowledge. Participants will learn about their strengths and weaknesses and recognize their personal skills and talents. With this knowledge, participants gain a better understanding of themselves, which can increase self-esteem, and they will learn how to deal consciously with their own emotions and reactions. They will also learn effective communication skills and types of communication, identify their personal boundaries, and learn the differences between healthy and harmful relationships.

Legal skills and human rights

This module improves basic knowledge and skills concerning the age of legal majority, citizenship, and European citizenship for a young adult, along with useful information about personal documents.

Career and job-related skills

Career and job-related skills are among the most important for getting and keeping a decent job, which can provide independence and an additional sense of self-worth. In this unit we deal with various concepts that can help individuals discover the right occupation and/or education for them, as well as how to be more aware of the possibilities offered in their local communities. Additionally, much practical information and many tools are presented to help young people effectively search for and get a job. Appropriate job-related behaviour and employers' expectations are also presented, to provide insight into a meaningful and successful working environment for a young individual.

Money management

Money management refers to how you handle all aspects of your finances, from making a budget for where each paycheck goes, to setting long-term goals, to picking investments that will help you reach those goals. Money management is not just about saying "no" to any purchase, but about developing a plan that allows you to say "yes" to the things that are most important to you. Any amount of money can prove to be too little if you don't have good money management skills.

Knowledge on Active Citizenship module

"Active citizenship is the glue that keeps society together. Democracy doesn't function properly without it, because effective democracy is more than just placing a mark on a voting slip. By definition, participative democracy requires people to get involved, to play an active role … in their workplace, perhaps, or by taking part in a political organization or supporting a good cause. The area of activity does not matter. It is the commitment to the welfare of society that counts." (Active Citizenship for a Better European Society)
Plagiarism: Brief Summary

Plagiarism can be described as the act of wrongfully appropriating another person's words, ideas, or intellectual property and passing them off as your own. As a concept, it is primarily concerned with false claims of authorship, and it is especially prevalent in fields such as journalism, academia, science, and the arts. Although plagiarism is not a crime in and of itself, instances of plagiarism can often constitute copyright infringement and even fraud. Generally, plagiarism is punished by institutions rather than through law courts. For instance, it is considered a serious breach of ethics within the fields of journalism and academia, where it is often punished severely, sometimes even resulting in dismissal. Despite the serious nature of the offence, most accusations of plagiarism can be avoided by simply crediting original sources when their words, images, or ideas are used.

Plagiarism: Detailed Summary

While the act of plagiarism involves copying or stealing someone else's intellectual property, it can take many different forms. For example, copying and pasting chunks of text from a book and failing to give credit to the original source would be one example of plagiarism, but it may also be plagiarism if ideas are copied, even if the words are changed significantly. This is sometimes referred to as plagiarism of ideas. The issue of plagiarism is particularly common in academic settings, despite serious attempts to stamp it out. One study, published in The Psychological Record, found that 36 percent of university students at undergraduate level admit to having plagiarised written material. Meanwhile, research conducted by staff at Rutgers University found that 58 percent of high school students admit to having done the same at least once. In many instances, plagiarism is not carried out for malicious reasons, but because people lack an understanding of precisely what plagiarism is and why it is such a serious issue. It is for this reason that many universities have made efforts to increase so-called plagiarism education, and many academic institutions, publishing companies, and websites have their own definitions, with established guidelines for avoiding problems. One form of plagiarism that often occurs, but which people may carry out with no malicious intent, is known as self-plagiarism. This is where a person re-uses work they have previously published, such as in a book or website article, without acknowledging that they have done so. This type of plagiarism may not seem particularly serious, but it can raise issues surrounding copyright if the rights to the original work were transferred to another party. Within the fields of journalism, book publishing, and blogging, plagiarism is considered a very serious breach of ethics and is often viewed as akin to stealing. As a result, writers who are found to have plagiarised other authors may face disciplinary action and experience significant damage to their reputation. In some cases, journalists have even lost their jobs as a direct consequence of their actions. A range of online and offline tools are available to help individuals and institutions with the detection of plagiarism, with examples including Copyscape, Turnitin, and Viper. Nevertheless, while these tools may help by checking work against an internal database or external sources, they will not be able to detect all forms of plagiarism, especially the plagiarism of ideas, rather than the straightforward re-use of chunks of text.
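To make the limits of such tools concrete, here is a minimal, illustrative sketch of the kind of matching many text-based detectors rely on: comparing overlapping word n-grams between a submission and a source. This is not the actual algorithm of Copyscape, Turnitin, or Viper (those are proprietary and far more sophisticated); the function names, example texts, and threshold are all arbitrary illustrations.

def word_ngrams(text, n=5):
    """Return the set of n-word sequences occurring in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(submission, source, n=5):
    """Fraction of the submission's n-grams that also appear in the source."""
    sub, src = word_ngrams(submission, n), word_ngrams(source, n)
    return len(sub & src) / len(sub) if sub else 0.0

# Hypothetical usage with made-up texts and an arbitrary threshold:
submission = "the quick brown fox jumps over the lazy dog near the river bank"
source = "a quick brown fox jumps over the lazy dog every single morning"
if overlap_ratio(submission, source) > 0.2:
    print("High overlap detected: review manually.")

Because the comparison works on exact word sequences, paraphrased text and plagiarism of ideas produce little or no n-gram overlap, which is exactly why such tools cannot catch every form of plagiarism.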
Types of Plagiarism

As previously stated, plagiarism can take many forms, and while a dictionary can provide a definition of what it is, there are few clearly established rules. Indeed, some forms of plagiarism are much more obvious than others, and even people with a basic understanding of the topic may struggle to know exactly where the line is drawn with certain practices. To help, we have compiled below a list of some common forms of plagiarism.
- Directly copying another person's work and failing to cite them as the original source.
- Submitting or publishing another person's intellectual property as your own.
- Quoting somebody else's words, but failing to cite the original source.
- Re-using your own work, without acknowledging that it has previously been published elsewhere.
- Stealing other people's ideas or thoughts, without referencing them as the original author.
- Translating foreign language content into English and then publishing it as your own.
- Inaccurately citing the original source, or citing the wrong original source.
- Only referencing some of the sources that should be cited.
- Re-writing another person's work, without introducing original thought and without citing them.

Ways to Avoid Plagiarism

Despite the stigma attached to plagiarism, a huge number of instances are accidental, stemming from bad writing practices rather than malicious intent. One of the single best ways to avoid plagiarism is to make sure direct quotes are placed in quotation marks, with a reference made to the original source. It is also important for authors to try to limit the amount of content they take from a single source, even if it is referenced properly. A major issue with plagiarism is the fact that multiple authors may use similar phraseology, even if they are completely unaware of each other's work. Yet a writer who publishes content which closely resembles previously published work may be opening themselves up to accusations of plagiarism, regardless of intent. For this reason, authors should consider investing in plagiarism detection software, or using a free online tool. Where possible, writers should try to develop their own unique writing style or 'voice'. Multiple different sources should be consulted when researching a topic, and any ideas that originate from them should be clearly referenced. Authors should also aim to introduce as much original thought into their writing as possible. In schools and other academic institutions, plagiarism can be reduced by making sure students are aware of what plagiarism is and which practices fall under its umbrella. Some of the more obvious and malicious cases of plagiarism may be detected through the use of plagiarism detection software, but teachers and other staff should be aware of the other forms of plagiarism that exist and try to keep an eye out for examples of them. Finally, writers can take certain steps to try to prevent their own work from being plagiarised. These range from simple steps, such as placing copyright warnings on content or asking for quotes to be referenced appropriately, to more advanced methods, like disabling the copy, cut, and right-click functions on a web page. Plagiarism describes the act of stealing another person's intellectual property and attempting to pass it off as your own work. It can take many forms, ranging from straightforward copying of content to plagiarism of ideas and even self-plagiarism.
Despite the fact that plagiarism itself is not a crime, plagiarism of copyrighted materials can be, and plagiarism is considered a serious breach of ethics in fields like journalism, academia, and the arts. To avoid plagiarism, it is important to clearly reference original sources when ideas, expressions, or direct quotes are taken from them. Authors should also aim to create genuinely unique content containing original thoughts. Studies show that plagiarism is especially prevalent in schools and universities, so it is essential that academic institutions take measures to educate students and to detect plagiarism when it occurs in their work.
Behavioral Economics is the intersection of economics and psychology: it examines market forces when some agents exhibit human limitations and impediments. It is a branch of economic research that combines fundamentals of psychology with long-established models of economics to understand decision-making by investors, consumers, and other economic participants. Economics conventionally conceptualizes a world populated by calculating, unemotional optimizers known as "Homo economicus". The theory holds that humans are "consistently rational and narrowly self-interested agents who pursue their subjectively defined ends optimally". The typical economic framework thereby ignores or rules out virtually all the behavior studied by cognitive and social psychologists. This economic model of human behavior includes three unrealistic traits, namely unbounded rationality, unbounded willpower, and unbounded selfishness, and these are the traits behavioral economics modifies. Behavioral economics developed from the realization that human behavior and choices vary, and that this variation affects the decision-making process. Behavioral economics is concerned with improving the explanatory power of economic theories by giving them a sound psychological basis, in order to explain a wide variety of anomalies. Behavioral economics holds that human beings have bounded willpower, bounded rationality, and bounded self-interest: people make choices that are not in their long-run interest, limited cognitive abilities constrain their problem-solving skills, and sometimes people are willing to sacrifice for the well-being of others. The field of behavioral economics examines how individuals fail to behave in their own best interests, and it provides an outline of how people make errors. These systematic errors, or biases, persist in particular circumstances. Behavioral economics thus seeks to create environments that help people make better decisions. Individuals are in the best position to know what is best for them; the behavioral goal of an individual can be described as maximizing happiness, and reaching this goal requires contributions from several brain regions. This branch attempts to incorporate psychologists' understanding of human behavior into economic analysis. It also advises policy makers on how to restructure environments to facilitate better choices: for example, rearranging the items offered within a school to persuade children to buy more nutritious items, such as keeping fruit nearby, making less healthy choices less convenient by moving the soda machine to a more distant area, or requiring students to pay cash for desserts and soft drinks. In sum, this approach complements and enhances the traditional economic model, helping us understand where people go wrong and how to help them. The ideas behind behavioral economics can be traced to Adam Smith in the 18th century, who addressed issues related to human psychology, how it is imperfect, and how these imperfections affect economic decisions and market forces. Until the early 20th century, behavioral economics remained unpopular. However, economists such as Irving Fisher and Vilfredo Pareto started considering the "human" factor in economic decision-making, and how it was a potential cause of the stock market crash of 1929 and the events that followed.
In 1955, economist Herbert Simon coined the term "bounded rationality," recognizing that human beings do not have unlimited capacity for weighing decisions and choices. This line of research was nevertheless ignored for several years. In 1979, Kahneman and Tversky published prospect theory, which offered a framework for how people frame economic outcomes as gains and losses, and how this framing affects their preferences. Behavioral economics is still a young field, and many concepts are yet to be explained. Human behavior changes across situations, depending on location, time, social influences, emotional judgments, and thoughts based on prejudice, and this simultaneously affects people's choices. In 1976, Gary Becker set out rational choice theory and its relationship with human behavior. This theory assumes that human beings have stable preferences and engage in maximizing behavior. Prospect theory describes how people dislike losses more than they like equal gains: giving something up is more painful than the joy we derive from receiving it. This pillar of behavioral economics accounts for a number of observed biases that traditional models could not explain. The theory also tells us that decisions are not always optimal and that our willingness to take risks is manipulated by the way in which alternatives are framed, i.e. it is context-dependent. Daniel Kahneman describes a dual-system framework that explains why our judgments and decisions often do not conform to the prescribed rules of rationality. System 1 comprises thinking processes that are instinctive, automatic, experience-based, and largely unconscious. System 2 is more reflective, controlled, conscious, and logical. System 1 works continuously and quickly, requires little effort, and has no sense of voluntary control. System 2, on the other hand, allocates our attention to effortful mental activities; its operations vary with our choices and concentration, and so they can be biased. System 1 is fast and apprehends the situation around us, both knowingly and unknowingly, whereas System 2 is slow and deliberate. The interaction between these two systems yields our choices, and those choices contain biases of different kinds, generated for different reasons. Behavioral economics also considers the social forces through which individual decisions are made, shaped, and embedded in social environments: people are strongly influenced by the environment they live in and by the decisions of their fellow beings. The application of behavioral economics concerns the decision-making of markets as well as individual preferences and choices. The central aim of behavioral economics is to suggest a better approach to economic analysis, to enhance the study of economics by producing theoretical insights, and to bring about a significant improvement in forecasting field phenomena using psychological experimentation. Behavioral economics has changed the way economists think about people's perceptions of value and expressed preferences.
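Returning to prospect theory for a moment: the asymmetry between losses and gains described above is commonly illustrated with a value function of the following piecewise form. The parameter estimates shown are those reported by Tversky and Kahneman in 1992 and are included purely for illustration:

\[
v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \ge 0 \\
-\lambda \, (-x)^{\beta} & \text{if } x < 0
\end{cases}
\qquad \alpha \approx \beta \approx 0.88, \quad \lambda \approx 2.25
\]

With \(\lambda > 1\), a loss of a given size reduces subjective value more than an equal gain increases it, which is precisely the loss aversion described above.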
Behavioral economics suggests that people's thinking is subject to insufficient knowledge, feedback, and processing ability, that it involves uncertainty, and that people do not always have stable preferences. Behavioral economics acknowledges that people are social beings with social preferences, with emotions like trust, reciprocity, and fairness; that they are susceptible to social norms; and that there always exists a need for self-consistency. People also rely on the information available in their memory and make poor predictions about the future. Behavioral economics has the capability to add value to rational choice theory: if the model of rational choice accommodates behavioral insights in all dimensions, only then will it be able to make better predictions and prescriptions. The implications of behavioral economics are extensive, and its ideas have been used in various domains like personal and public finance, health, energy, public choice, and marketing. This branch of economics has encouraged research concerned with actual behavior and has promoted a 'test and learn' culture among governments and corporations. Behavioral economics therefore needs to be considered alongside, rather than as a replacement for, traditional interventions. In the private sector too, behavioral economics has revitalized practitioners' interest in psychology, predominantly in marketing, consumer research, and business and policy consulting. Behavioral economics has increased the explanatory power of economics by providing it with a more realistic psychological foundation covering the various relevant aspects of behavior. It has made a significant impact on the analysis of individual decision-making and has suggested ways in which human behavior differs depending on circumstances, time, location, emotional judgments, and societal influences. The subject matter of behavioral economics requires an advanced understanding of the core concepts of microeconomic theory, macroeconomic theory, behavioral sciences like psychology and sociology, as well as applications of mathematical methods for economics and statistical techniques for econometric analysis. Therefore, behavioral economics is an advanced interdisciplinary course that is often tough to understand and practice. Students of economics often get stuck on behavioral economics homework answers and assignment problems given in universities and colleges. By taking online economics assignment help from the behavioral economics tutors at assignmenthelp.net, students studying behavioral economics can easily understand even the toughest concepts and ideas and can get instant, reliable, and affordable help with behavioral economics solutions. All the solutions provided by our online economics tutors include detailed explanations and are solved step by step to ensure that students get a thorough understanding of the subject and benefit the most from our behavioral economics homework help service.
Lake Erie is a vital resource to both the United States and Canada. It is the fourth largest of the Great Lakes and the thirteenth largest lake in the world by surface area (25,655 km²), with a volume of 484 km³ and an average residence time of 2.6 years. The lake provides a potable water source for many cities, including Buffalo, NY and Cleveland, OH, and supports commercial fishing, industrial ports, recreation, transportation, and agricultural irrigation. The United States has already begun to see the effects of water shortage due to extensive agriculture and population growth in water-limited regions. As the fourth largest lake in the US, Lake Erie will undoubtedly play an even larger role in our economy and livelihood in the future. Therefore, we must fully understand the variables that control the availability of this resource, which can be assessed through a detailed water balance of Lake Erie.

Despite the importance of Lake Erie, the most recent published paper conducting a detailed water balance of the lake appears to be Quinn and Guerra (1986), using data from 1940 to 1979. There have been vast changes in the use and management of Lake Erie's water supply in the last thirty years. The most noticeable changes date from the 1960s through the 1970s, when Lake Erie became heavily polluted by phosphorus runoff from agriculture, leading to lake-wide eutrophication, algal blooms, and fish kills. Largely in response, the US Congress passed the Clean Water Act of 1972 in an effort to restore Lake Erie to its natural ecosystem. This act may have had profound effects on the hydrology of Lake Erie, depending on what action was taken to limit nutrient inflow. Fortunately, many agencies monitor Lake Erie's hydrology on a daily to monthly basis, providing ample data to update Quinn and Guerra's 1986 work.

The data Quinn and Guerra compiled from 1940 to 1979 for the water balance of Lake Erie produced fascinating results. They found that the mean precipitation from 1940 to 1979 was 5% higher than from 1900 to 1939, an increase of 37 m³ s⁻¹. During this same period, the total water supply in the lake increased, as did the Detroit River discharge, both likely the result of the increased precipitation. They also noted a cooling trend starting in the 1950s and lasting through the end of the measured interval in 1979. These are just a few examples of how an updated water balance, examining hydrological and meteorological data from 1979 to 2009, would help us understand the overall trends in the hydrology of Lake Erie.

Considering the importance of Lake Erie to the US, it is essential to understand and monitor the factors that influence its hydrology and available water supply. I will attempt to answer many of the open-ended questions posed in Quinn and Guerra's paper about the long-term trends in Lake Erie's hydrology. I expect that the trends seen from 1940 to 1979, of increasing precipitation, lake water level, and river flow, and of decreasing temperatures, continued to 2009. This effect may largely be due to changing climate regimes: global warming seems a viable mechanism for the increase in precipitation, but one would expect an increase in temperatures rather than a decrease. I also examine how, if at all, the Clean Water Act of 1972 has changed the hydrology and water balance of Lake Erie. The basic equation used for the water budget of a lake simplifies to Input − Output = Change in Storage.
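Written out with the terms enumerated in the next section, the budget takes roughly the following form (the grouping into these symbols is mine, not the author's):

$$
\underbrace{Q_{\text{Detroit}} + P + R + G_{\text{in}}}_{\text{inputs}}
\;-\;
\underbrace{Q_{\text{Niagara}} + Q_{\text{Welland}} + E + C + G_{\text{out}}}_{\text{outputs}}
\;=\; \Delta S,
$$

where $P$ is over-lake precipitation, $R$ is overland runoff, $G_{\text{in}}/G_{\text{out}}$ are groundwater exchange, $E$ is evaporation, $C$ is consumption, and $\Delta S$ is the change in storage.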
For this budget it is necessary to determine all inputs and outputs of water for Lake Erie. The change in storage was calculated by taking the change in overall lake level from one month to the next and multiplying it by the surface area of the lake, yielding the volume of the change in storage. The inputs include inflow from the Detroit River, the lake's principal river inflow, which contributes the bulk of the overall water budget. Additional inputs are precipitation falling on the lake, overland flow into the lake from precipitation on nearby land, and groundwater flow into the lake. The outputs include outflow through the Niagara River and the Welland Diversion, along with evaporation, consumption from the lake, and groundwater outflow.

Consumption of water from the lake was measured only for a short period during the 1960s, averaging 48 m³/s (Quinn and Guerra, 1986). This value was used as a constant throughout the entire time series; although a gross approximation, it amounts to less than 1% of the water leaving the lake. Groundwater flow into and out of the lake has not been measured for Lake Erie, so I do not take it into account in my calculation of change in storage. There is a good probability that this, like consumption, is a small fraction of the total water input or output; the groundwater input may also roughly balance the groundwater output and thus cause little net change in the storage of the lake.

Evaporation rates were needed in order to fully characterize the output of water from Lake Erie. The Great Lakes Environmental Research Laboratory (GLERL) collected temperature and wind-speed measurements on Lake Erie from 1948 to 2000, and those measurements were used to model evaporation on the lake. Evaporation was calculated using the mass-transfer approach, which takes into account wind speed, surface characteristics, and both the saturation vapor pressure at the water surface and the vapor pressure of the air. The lake temperature can be used to calculate the saturation vapor pressure through a relatively straightforward use of the Clausius-Clapeyron equation. Evaporation then follows from the mass-transfer relation E = K_E u_a (e_s − e_a), where K_E is an empirically derived coefficient, u_a is the wind speed, e_s is the saturation vapor pressure at the water surface, and e_a is the vapor pressure of the overlying air. I was not able to use more robust models such as the Penman combination approach, since I did not have data on the heat budget of Lake Erie.

The National Oceanic and Atmospheric Administration (NOAA) runs GLERL, which monitors hydrology and hydraulics data for all of the Great Lakes, including stream flow, connecting-channel flow, lake evaporation, water temperature, and water-quality assessments. It also records precipitation on and around the lakes, temperature, runoff, and lake level. I used the data supplied by GLERL to conduct a current water budget for Lake Erie.

The Detroit River accounted for approximately 80% of the inflow into Lake Erie, while the Niagara River accounted for nearly 86% of the outflow from the lake. The average lake level throughout the 100-year record was 174.1 m.

| Input | m³/month | % of In | Output | m³/month | % of Out |
|---|---|---|---|---|---|
| Runoff to Lake Erie | 1.56×10⁹ | 9.0 | Welland Diversion | 4.08×10⁸ | 2.3 |

Average lake level: 174.1 m

Throughout the 100-year record, the maximum lake level recorded is 175.04 m and the lowest is 173.17 m, a difference of 1.87 m.
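The monthly bookkeeping just described can be sketched in code roughly as follows (a hedged illustration: the class name, constants, and sample values are mine, not the author's, and the Tetens formula stands in for the Clausius-Clapeyron step; the units of E follow whatever is chosen for the empirical coefficient K_E):

```java
// Sketch of the Lake Erie monthly water-budget arithmetic described above.
public class ErieBudget {
    static final double SURFACE_AREA_M2 = 25_655e6;   // 25,655 km^2 in m^2

    // Change in storage (m^3) for one month from the lake-level difference (m).
    static double storageChange(double levelNow, double levelPrev) {
        return (levelNow - levelPrev) * SURFACE_AREA_M2;
    }

    // Tetens-type approximation to saturation vapor pressure (kPa) at T (deg C);
    // a common stand-in for the Clausius-Clapeyron integration mentioned above.
    static double saturationVaporPressure(double tCelsius) {
        return 0.611 * Math.exp(17.27 * tCelsius / (tCelsius + 237.3));
    }

    // Mass-transfer evaporation: kE is the empirical coefficient, windSpeed is
    // u_a, es the saturation vapor pressure at the water surface, ea the actual
    // vapor pressure of the overlying air.
    static double evaporation(double kE, double windSpeed, double es, double ea) {
        return kE * windSpeed * (es - ea);
    }

    public static void main(String[] args) {
        double dS = storageChange(174.25, 174.10);        // +0.15 m over one month
        System.out.printf("Change in storage: %.3e m^3%n", dS);

        double es = saturationVaporPressure(18.0);        // surface water at 18 C
        double ea = 0.7 * saturationVaporPressure(20.0);  // air at 20 C, 70% RH
        System.out.printf("Evaporation rate: %.4f (units set by kE)%n",
                evaporation(1.0e-3, 4.0, es, ea));
    }
}
```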
As the bathymetry figures below show, there may be an inherent error in calculating water volume by multiplying lake-level change by a fixed surface area: as the lake level rises and falls, the surface area of the lake changes, and this needs to be quantified or explained. Further research makes clear that the effect of changes in lake level on the surface area of Lake Erie can be assumed to be negligible. As the water drops it does cover less surface area; however, the large size of Lake Erie mitigates the effect of changing water levels on surface area. In USGS Scientific Investigations Report 2004-5100 on the Great Lakes water balance, the authors found that a change in lake level producing an average loss or gain of approximately 500 ft of shoreline would be needed to change the area of the Great Lakes by 1%. For this reason, the change in surface area of Lake Erie was not taken into account when quantifying the change in storage. As a qualitative assessment, one can at least reason about the direction and relative magnitude of the resulting error: the low values of change in storage are most likely overestimated, since the surface area would be slightly smaller, so the troughs in the change-in-storage graphs should in reality be slightly shallower; conversely, the high values are most likely underestimated, since the lake's surface area would have increased slightly, so the peaks should in reality be more pronounced.

Change in Storage via Lake Level

Below is the calculated change in storage using the monthly lake-level data. Each month's lake level was subtracted from the previous month's and multiplied by the total area of the lake (25,655 km²), giving a running series of the changes in storage for Lake Erie. It appears that from around month 700 (1958) onward there is less variation in the data, perhaps suggesting that the lake became more regulated by hydroelectric dams and/or water use. Because of the large number of data points and the difficulty of teasing out useful information about the change in storage from the raw series, a running sum of the change in storage was calculated to show long-term variation. Below is the running sum of the change in storage for Lake Erie. One can see that the overall change in storage increases from 1900 to 2000. There are also several low points, at around months 400 and 800, coinciding with 1933 and 1966; the Dust Bowl occurred in 1933, which may explain the dramatic decrease. There are also several sharp increases in the change in storage, the most prominent at approximately month 350, corresponding to the flooding of 1929. In addition, one can see three prominent cycles in the change in storage, which appear as longer trends of rising and falling. To see this trend more clearly it is useful to look at the 5-year running mean of the sum of the change in storage, which clearly shows a 30-year variability, visible as "humps" in the summed series. This trend was hidden in the initial change-in-storage plot but is clear in the running sum. There appears to be a 30-year regime that repeats throughout the record, although it is hard to say how long this regime has persisted, since we see only 100 years of record.
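To make the smoothing steps concrete, here is a minimal sketch of the running (cumulative) sum and the windowed running mean described above (the array layout, class name, and toy values are assumptions of mine, not taken from the original analysis; a 60-month window corresponds to the 5-year mean):

```java
// Running sum and running mean of the monthly change-in-storage series.
public class StorageTrends {

    // Cumulative sum: entry i holds the total change in storage through month i.
    static double[] runningSum(double[] monthlyDeltaS) {
        double[] cumulative = new double[monthlyDeltaS.length];
        double total = 0.0;
        for (int i = 0; i < monthlyDeltaS.length; i++) {
            total += monthlyDeltaS[i];
            cumulative[i] = total;
        }
        return cumulative;
    }

    // Sliding-window mean; window = 60 gives the 5-year running mean.
    static double[] runningMean(double[] series, int window) {
        double[] out = new double[series.length - window + 1];
        double sum = 0.0;
        for (int i = 0; i < series.length; i++) {
            sum += series[i];
            if (i >= window) sum -= series[i - window];      // drop value leaving the window
            if (i >= window - 1) out[i - window + 1] = sum / window;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] deltaS = {1.2e9, -0.8e9, 0.5e9, 0.9e9, -1.1e9, 0.3e9}; // toy values, m^3
        double[] cum = runningSum(deltaS);
        double[] smooth = runningMean(cum, 3);               // 3-month window for the toy series
        System.out.printf("final cumulative storage change: %.2e m^3%n", cum[cum.length - 1]);
        System.out.printf("smoothed points: %d%n", smooth.length);
    }
}
```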
One can also see the major climatic events in North America over the last century reflected in Lake Erie. The Dust Bowl of the 1930s is clearly evident, as are the 1929 flooding events that took place just before it. During the 1960s Lake Erie became severely eutrophic, killing vast numbers of fish and leaving much of the lake nearly anoxic. In response to this eutrophication, Congress passed the Clean Water Act in 1972, which imposed regulations on usage and on the dumping of sewage and fertilizer into the lake. This event is marked on the change-in-storage figure and corresponds to a dramatic increase in lake-wide storage, suggesting the act's success in helping to regulate the lake's resource.

Change in Storage (Input − Output)

Change in storage was also calculated via the method described earlier, by subtracting the output from the input to Lake Erie. The input is the Detroit River inflow + precipitation + runoff into the lake + baseflow into the lake. The output is evaporation + Welland Diversion outflow + Niagara River outflow + baseflow out of the lake + consumption. One caveat is that this method was only applicable from 1948 to 2000, as opposed to 1900 to 2000 for the lake-level method, because temperature and wind-speed measurements (used to calculate the evaporation rate) exist for Lake Erie only from 1948 to 2000. As shown below, the result is very much the same as the change in storage calculated from the lake levels.

Melting Season Precipitation

A useful approach to examining the causes of the fluctuations in change in storage is to look at the temperature variations for each month throughout the record. Lake Erie is partly supplied by meltwater from the snow covering the region, so a change in temperature from year to year may indicate the magnitude and timing of the meltwater reaching Lake Erie. The figure below shows the monthly temperatures from 1948 to 2005, on the assumption that the majority of the melting occurs from January to April of each year. Long-term trends and significant fluctuations are hard to delineate in the time series. The most significant change is the decrease in temperatures in each month from around 1975 to 1978. In the running sum of the change in storage, this period corresponds to one of the peaks and to the start of its descent, which could indicate that the peak and subsequent decrease were caused by unusually cold temperatures limiting the snowmelt reaching the lake.

The use of a Morlet wavelet can reveal periodic signals in a data series that are not visible to the naked eye. Below are the wavelet spectra for the change in storage calculated from the lake-level method as well as the input − output method. Both show similar trends, which is expected, since they should ideally be identical. On both graphs there is a strong yearly (12-month) periodicity running through the entire record; the black contour surrounding the red band at 12 months is the 10% significance level, indicating that this periodicity is in fact significant. Also, in panel (c), the global wavelet, the confidence interval indicates that 12 months is the only significant periodicity in either spectrum. This is expected, since seasonal inputs of meltwater supply much of the total water input to the lake each year. The Morlet wavelet itself is written out below.
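For reference, the mother wavelet behind these spectra, in the Torrence and Compo (1998) convention (stated here for completeness; ω₀ = 6 is their standard choice), is

$$
\psi_0(\eta) = \pi^{-1/4}\, e^{i\omega_0 \eta}\, e^{-\eta^2/2}, \qquad \omega_0 = 6,
$$

a plane wave modulated by a Gaussian. Convolving the normalized time series with scaled, translated copies of $\psi_0$ yields the wavelet power plotted in panels (b).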
Fig: (a) CIS (lake level). (b) The wavelet power spectrum. The contour levels are chosen so that 75%, 50%, 25%, and 5% of the wavelet power lies above each level, respectively. The cross-hatched region is the cone of influence, where zero padding has reduced the variance. The black contour is the 10% significance level, using a white-noise background spectrum. (c) The global wavelet power spectrum (black line). The dashed line is the significance level for the global wavelet spectrum, assuming the same significance level and background spectrum as in (b). After Torrence and Compo (1998).

Fig: (a) CIS (in − out). Panels (b) and (c) as in the previous figure, again after Torrence and Compo (1998).

Comparison of the Two Change in Storage Methods

I compared the change in storage calculated from the lake water level with that calculated by subtracting the outputs from the inputs for Lake Erie. Surprisingly, the two series match almost identically when plotted together, indicating that both methods are sound. This also suggests that the absence of a value for the baseflow in and out, as well as the approximation of consumption, did not play a large role in the change in storage for the lake. It also verifies that the measurements are accurate enough to correlate closely with the observed fluctuations in lake level. Finally, I plotted the two change-in-storage series against one another in order to compare them and see quantitatively how well they correlate. As expected, they correlate extremely well, with R² = 0.94. The slope of the line is also nearly 1, at 0.98, indicating a practically 1:1 relationship between the two methods, as would be expected since they ideally measure and record the same information. One can also see that there is no skew in the relationship depending on whether the change in storage was positive or negative, indicating no instrumental or measurement bias with increasing or decreasing change in storage.

A significant amount of information has been gained through this water balance of Lake Erie. First, the measurement collection conducted by GLERL should be commended, as the two methods of calculating the change in storage correlated so closely. It was also shown that Lake Erie has an approximately 30-year cyclicity in its water storage, giving insight into the long-term trends that the US government can expect to see in Lake Erie. In addition to the 30-year cyclicity, it was shown that Lake Erie has a yearly periodicity that is most likely driven by the melting of snow accumulation in spring of each year. The lake also recorded major climatic variations as well as the onset of regulations such as the Clean Water Act in the 1970s.
Lake Erie is an important part of the United States' economy and well-being, and it is encouraging to have the resources to examine the lake through a water balance and so better understand the signals hidden within the measurements. These will inevitably be useful tools in determining how we use and conserve this resource in the future.

- National Oceanic and Atmospheric Administration, Great Lakes Environmental Research Laboratory. Used for raw data.
- Neff, B.P., and Nicholas, J.R., 2005. Uncertainty in the Great Lakes Water Balance: U.S. Geological Survey Scientific Investigations Report 2004-5100, 42 p.
- Neff, B.P., and Killian, J.R., 2003. The Great Lakes Water Balance: Data Availability and Annotated Bibliography of Selected References: U.S. Geological Survey Water-Resources Investigations Report 02-4296.
- Torrence, C., and Compo, G.P., 1998. A Practical Guide to Wavelet Analysis. Bull. Amer. Meteor. Soc., 79, 61-78.
- Quinn, F.H., and Guerra, B., 1986. Current perspectives on the Lake Erie water balance. J. Great Lakes Res. 12, 109-116.
What is sleep? Before diving straight into what REM sleep is, it is best to understand sleep itself. We can usually tell whether or not someone is sleeping. The common characteristics we subconsciously look for are:

- The person's eyes are closed
- The person does not hear anything unless it is a sudden loud noise
- Breathing is slow and rhythmic in pattern
- The person is completely relaxed, with none of the muscles tensed
- The person moves only occasionally, perhaps once or twice in an hour

Sleep cannot be measured only by what we see; we can also observe what the brain does while a person sleeps. Using an electroencephalograph, it is possible to measure brainwaves and brain activity from the moment a person lies down to when they wake up the next morning. If a person is awake and relaxed, their brain generates alpha waves, which oscillate at about 10 cycles per second. If a person is alert, their brain activity is much higher: they generate beta waves, which oscillate at twice the speed of alpha waves. Sleeping produces wave patterns different from those of a waking person, and slower than alpha and beta waves. The first pattern is theta waves, which oscillate at 3.5-7 cycles per second. The second, delta waves, oscillate at fewer than 3.5 cycles per second. As a person falls deeper into sleep, their brain-activity patterns slow, and as the brainwaves get slower it becomes harder to wake the person.

One night of sleep is filled with cycles and stages. The two general categories of sleep are non-REM sleep and REM sleep; non-REM sleep can be broken down further into three or four stages. Sleep is not one slow process but a series of cycles that allow the body to get a full night's rest while waking gradually rather than suddenly. The first sleep cycle normally lasts around 90 minutes, while the following cycles last 100-120 minutes, though these numbers vary with the individual's sleep patterns. Each cycle begins and ends the same way, but the time spent in each stage differs. A sleep cycle begins with non-REM sleep, moving through stage 1, stage 2, and stage 3; after reaching stage 3, the body reverses through stage 2 and back to stage 1. Unlike at sleep onset, when the body reaches stage 1 again it does not wake but instead transitions into REM sleep before beginning another cycle. As the night goes on, the time spent in each stage shifts, until by the last cycle the body is in REM sleep most of the time just before the person wakes.

Non-REM Sleep vs. REM Sleep

Non-REM sleep is short for non-rapid eye movement sleep. The majority of sleep time, about 80%, is spent in non-REM sleep. Non-REM sleep is dreamless; breathing and heart rates are slow and regular, and the sleeper is still but not paralyzed. REM sleep, however, is very different. It comprises the rest of sleep time, about 20-25%. At a young age the percentage is much higher, around 80%, and as we get older the time spent in REM sleep decreases. REM stands for rapid eye movement. This does not mean your eyes are open while you sleep; it means they move quickly, though the movements are intermittent rather than constant. It is unknown exactly what the eye movements are for and what purpose they serve.
REM sleep has also been nicknamed "paradoxical sleep" because the body and brain are in a state very similar to being awake. Oxygen and energy consumption are high compared with the other stages of sleep, and sometimes higher than when we are awake; the brain nearly reaches the level of activity seen when a waking person works on a complex problem. During REM sleep, breathing becomes much more rapid and irregular, and heart rate and blood pressure increase to near-waking levels. The body's core temperature is less well regulated than in the earlier stages of sleep. The muscles become completely paralyzed, and the brain signals that control muscle movements are blocked. The only brain signals that are not blocked are those that control eye movements and other essential functions, including the heart pumping and the diaphragm expanding and contracting so the person can breathe.

Few immediate negative effects of lacking REM sleep have been documented, yet it appears to be vital in some way. In the short term, lack of REM sleep has been found to impair a person's ability to learn and to complete complex tasks. REM sleep has also been found to be a vital part of early childhood development. If you are lacking REM sleep, the body may try to compensate: it will speed through the cycle stages, so that you fall into REM sleep more quickly and stay in that stage longer than normal.

REM Sleep Behavior Disorder

Some people suffer from RBD, or REM sleep behavior disorder. Unlike most people, they are able to move and act out the dreams that occur during REM sleep: they can move their limbs, get out of bed, and sometimes engage in activities they would do while awake. RBD is not like sleepwalking or sleep terrors; people who suffer from RBD can be woken easily, and when they wake they can clearly recall details of their dream. It is also very rare for a person with RBD to eat, drink, engage in sexual activity, or go to the bathroom during an episode.
Thread: a unit of execution within a process.

The difference between a thread and a process: each process is allocated its own independent memory address space by the operating system, whereas all threads in the same process work in the same address space and can share the same memory and system resources.

Deadlock: when one thread waits for a lock held by a second thread, while that second thread waits for a lock already held by the first, a deadlock occurs.

Implementation: Java provides two ways to implement multithreading, either by extending the Thread class or by implementing the Runnable interface.

Thread states and their transitions: after a thread's start() method is called, the thread does not necessarily execute immediately; it merely becomes runnable, and the OS ultimately decides which runnable thread to execute. Each time a thread is selected it runs for a limited period called a CPU time slice; when the time slice is used up but the thread has not finished, the thread becomes runnable again and waits for the OS to reschedule it. Calling Thread.yield() in a running thread likewise returns the current thread to the runnable state.

- A running thread that waits for user input, calls Thread.sleep(), or calls another thread's join() method becomes blocked.
- A blocked thread becomes runnable again once the user input completes, the sleep times out, or the joined thread finishes.
- A running thread that calls wait() enters the object's wait queue.
- A thread enters the lock-pool state when it reaches a synchronized block without obtaining the object's lock, when its wait() times out, when it is woken from the wait queue by notify(), or when another thread calls notifyAll().
- A thread in the lock pool that obtains the object's lock becomes runnable.
- A thread ends when its run() method completes or when the main thread ends.

Method summary:

- Thread.yield() returns the currently running thread to the runnable state.
- t2.join() causes the current thread to block until thread t2 finishes executing.
- Thread.sleep() causes the current thread to block until the sleep interval ends.

wait(), notify(), and notifyAll() are methods of the Object class, and they must be called from within a synchronized block; otherwise java.lang.IllegalMonitorStateException is thrown.

Differences between sleep() and yield():
1. sleep() gives other threads the chance to run regardless of their priority, so it gives lower-priority threads a chance to run; yield() only gives threads of the same or higher priority a chance to run.
2. After a thread executes sleep(long millis) it enters the blocked state, with the parameter millis specifying the sleep time; after a thread executes yield() it returns to the ready state.
3. sleep() is declared to throw InterruptedException, whereas yield() declares no exceptions.
4. sleep() has better portability than yield().

A short sketch of thread creation and these lifecycle methods follows.
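As a minimal sketch of the two creation styles and the lifecycle methods above (the class and messages are illustrative, not from the original notes):

```java
public class ThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        // 1. Extending Thread
        Thread t1 = new Thread() {
            @Override
            public void run() {
                System.out.println("t1: running in " + Thread.currentThread().getName());
            }
        };

        // 2. Implementing Runnable (here as a lambda)
        Thread t2 = new Thread(() -> {
            try {
                Thread.sleep(100);   // current thread -> blocked until the timeout elapses
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            Thread.yield();          // hint: give other runnable threads a chance
            System.out.println("t2: done");
        });

        t1.start();                  // start() makes the thread runnable; the OS decides when it runs
        t2.start();
        t2.join();                   // main blocks here until t2 finishes
        System.out.println("main: t2 has completed");
    }
}
```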
About object locks: when a thread attempts to enter a code block marked synchronized(this), it must first obtain the lock of the object referenced by this. Two situations arise:
1. If the lock is occupied by another thread, the JVM puts this thread into the object's lock pool, and the thread blocks. There may be many threads in the lock pool; they wait until the owning thread releases the lock, at which point the JVM randomly removes one thread from the lock pool, grants it the lock, and moves it to the ready state.
2. If the lock is not occupied by another thread, the thread acquires the lock and begins executing the synchronized code block. (In general the lock is not released until the synchronized block finishes executing, but there are special cases in which an object lock can be released early: if the thread terminates because of an exception while executing the block, the lock is released; and if, while executing the block, the thread calls the wait() method of the object to which the lock belongs, the thread releases the object lock and enters the object's waiting pool.)

Characteristics of thread synchronization:
1. If a synchronized code block and an unsynchronized code block operate on a shared resource at the same time, competition for the shared resource remains, because while one thread executes an object's synchronized block, other threads can still execute the object's unsynchronized code. (So-called synchronization between threads means that when different threads execute the synchronized blocks of the same object, they must obtain the object's synchronization lock, which they hold to the exclusion of one another.)
2. Each object has exactly one synchronization lock.
3. The synchronized modifier can also be used on static methods.

When a thread begins executing a synchronized code block, it does not necessarily run it straight through: the thread inside the block may call Thread.sleep() or Thread.yield(), which do not release the object lock but simply give other threads the chance to run.

The synchronized declaration is not inherited: if a method modified with synchronized is overridden by a subclass, the overriding method is not synchronized unless it is itself declared with synchronized.

Thread-safe classes:
1. Objects of the class can be safely accessed by multiple threads at the same time.
2. Each thread can perform its atomic operations normally and obtain correct results.
3. After each thread's atomic operations complete, the object remains in a logical, consistent state.

Releasing an object's lock:
1. The lock is released when the synchronized code block finishes executing.
2. If the thread terminates because of an exception while executing the synchronized block, the lock is also released.
3. If, while executing the synchronized block, the thread calls the wait() method of the object to which the lock belongs, the thread releases the object lock and enters the object's waiting pool.

A sketch of wait()/notify() inside synchronized methods follows.
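A minimal sketch of wait()/notify() cooperation inside synchronized methods (again illustrative; the Mailbox class and its fields are my own assumption, not from the original notes):

```java
// wait() releases the object lock and parks the caller in the object's wait
// pool; notifyAll() moves the waiters to the lock pool to re-contend for it.
public class Mailbox {
    private String message;   // shared resource guarded by this object's lock

    public synchronized void put(String msg) throws InterruptedException {
        while (message != null) {
            wait();            // releases the lock while waiting
        }
        message = msg;
        notifyAll();           // wake waiting consumers
    }

    public synchronized String take() throws InterruptedException {
        while (message == null) {
            wait();
        }
        String msg = message;
        message = null;
        notifyAll();           // wake waiting producers
        return msg;
    }

    public static void main(String[] args) throws InterruptedException {
        Mailbox box = new Mailbox();
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("got: " + box.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        box.put("hello");      // synchronized method: the caller must hold the lock
        consumer.join();
    }
}
```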
After the Civil War, while the nation debated the range of rights which would be secured to the freedmen, the women's rights movement worked to ensure that women would also receive equal rights, particularly the right to vote. When the Fourteenth and Fifteenth Amendments did not enfranchise women, many women who had fought against slavery and for universal suffrage felt betrayed. The Republican Party's refusal to include women's rights in the party platform deepened the sense of betrayal and forced suffragists to take up their cause for equal rights of citizenship separately from the freedmen. Two inequities in particular animated women's rights activists' fight: married women's civil death in marriage, and all women's lack of political rights.

Louisa S. Ruffine, Civil Rights and Suffrage: Myra Bradwell's Struggle for Equal Citizenship for Women, 4 Hastings Women's L.J. 175. Available at: https://repository.uchastings.edu/hwlj/vol4/iss2/2
Zoogeography is the branch of biogeography concerned with the geographic distribution of animal species. Schmarda (1853) proposed 21 regions; Woodward proposed 27 terrestrial and 18 marine; Murray (1866) proposed 4; Blyth (1871) proposed 7; Allen (1871) 8 regions; Heilprin (1871) proposed 6; Newton (1893) proposed 6; and Gadow (1893) proposed 4. Philip Sclater (1858) and Alfred Wallace (1876) recognized the principal zoogeographic regions of the world still used today: Palaearctic, Aethiopian (today Afrotropic), Indian (today Indomalayan), Australasian, Nearctic, and Neotropical. Marine regionalization began with Ortmann (1896).

Analogously to geobotanic divisions, our planet is divided into zoogeographical regions, further subdivided into provinces, territories, and districts, sometimes including the categories Empire and Domain. The current trend is to classify the floristic kingdoms of botany and the zoogeographic regions of zoology as biogeographic realms. A biogeographic realm, or ecozone, is the broadest biogeographic division of the Earth's land surface, based on the distributional patterns of terrestrial organisms. Realms are subdivided into ecoregions, which are grouped into biomes or habitat types. The realms delineate large areas of the Earth's surface within which organisms have evolved in relative isolation over long periods of time, separated from one another by geographic features, such as oceans, broad deserts, or high mountain ranges, that constitute barriers to migration. Accordingly, biogeographic realm designations are used to indicate general groupings of organisms based on their shared biogeography. Biogeographic realms correspond to the floristic kingdoms of botany and the zoogeographic regions of zoology.

Biogeographic realms are characterized by the evolutionary history of the organisms they contain. They are distinct from biomes, also known as major habitat types, which are divisions of the Earth's surface based on life form, that is, the adaptation of animals, fungi, micro-organisms, and plants to climatic, soil, and other conditions. Biomes are characterized by similar climax vegetation. Each realm may include a number of different biomes. A tropical moist broadleaf forest in Central America, for instance, may be similar to one in New Guinea in its vegetation type and structure, climate, soils, and so on, yet these forests are inhabited by animals, fungi, micro-organisms, and plants with very different evolutionary histories. The patterns of distribution of living organisms across the world's biogeographic realms were shaped by plate tectonics, which has redistributed the world's land masses over geological history.

Animal geography is a subfield of the nature-society/human-environment branch of geography as well as part of the larger interdisciplinary umbrella of Human-Animal Studies (HAS). Animal geography is defined as the study of "the complex entanglings of human-animal relations with space, place, location, environment and landscape," or "the study of where, when, why and how nonhuman animals intersect with human societies." Recent work advances these perspectives to argue for a nature of relations in which humans and animals are enmeshed, taking seriously the lived spaces of animals themselves and their sentient interactions with human as well as other nonhuman bodies.
The Animal Geography Specialty Group of the Association of American Geographers was established in 2009 by Monica Ogra and Julie Urbanik. The Animal Geography Research Network was established in 2011 by Daniel Allen.

Outline of Animal Geography

First Wave of Animal Geography

The first wave of animal geography, known as zoogeography, rose to prominence as a geographic subfield from the late 1800s through the early part of the twentieth century. During this time the study of animals was seen as a key part of the discipline, and the goal was "the scientific study of animal life with reference to the distribution of animals on the earth and the mutual influence of environment and animals upon each other." The animals studied were almost exclusively wild, and zoogeographers built on the new theories of evolution and natural selection. They mapped the evolution and movement of species across space and time and also sought to understand how animals adapted to different ecosystems. "The ambition was to establish general laws of how animals arranged themselves across the earth's surface or, at smaller scales, to establish patterns of spatial co-variation between animals and other environmental factors." Key works include Newbigin's Animal Geography; Bartholomew, Clarke, and Grimshaw's Atlas of Zoogeography; and Allee and Schmidt's Ecological Animal Geography. By the middle of the twentieth century, growing disciplines such as biology and zoology began to take over the traditional zoogeographic inventories of species, their distributions, and their ecologies. Within geography, zoogeography survives today as the vibrant subfield of biogeography.

Second Wave of Animal Geography

The middle of the twentieth century saw a turn away from zoogeography toward questions about, and interest in, the impact of humans on wildlife and human relations with domesticated animals. Two key geographers shaping this wave of animal geography were Carl Sauer and Charles Bennett. Sauer's interest in the cultural landscape, or cultural ecology, necessarily included addressing the subject of animal domestication: his work focused on the history of domestication and on how human uses of livestock shaped the landscape. Bennett called for a "cultural animal geography" that concentrated on the relations of animals and human cultures, such as subsistence hunting and fishing. The shift from the first wave to the second had to do with the species being studied: second-wave animal geography brought domesticated livestock into view rather than focusing solely on wildlife. For the following several decades animal geography, as cultural ecology, was dominated by research into the origins of domestication, cultural rituals surrounding domestication, and different cultures' relations with animals. Key works include Simoons and Simoons' A Ceremonial Ox of India, Gades' work on the guinea pig, and Cansdale's Animals and Man. Baldwin provides an excellent review of second-wave animal geography research.

Third Wave of Animal Geography

In the mid-1990s several developments led geographers with an interest in animals and human-animal relations to rethink what was possible within animal geography.
The 1980s and early 1990s saw the rise of the worldwide animal advocacy movement, addressing everything from pet overpopulation to saving endangered species, exposing cruelty to animals in industrial farming, and protesting circuses, the use of fur, and hunting, each an effort to raise the visibility among the general public of how humans treat non-human others. In the background, scientists and ethologists studying animal behavior and species loss and discovery were raising awareness of the experiential lives of animals and of their precarious existence alongside humans. Social scientists were reassessing what it means to be a subject and opening up the black box of nature to develop new understandings of the relations between humans and the rest of the planet. Animal geographers realized there was a whole spectrum of human-animal relations that ought to be addressed from a geographic perspective. At the forefront of this third wave of animal geography were Tuan's work on pets in Dominance and Affection and a special theme issue of the journal Environment and Planning D: Society and Space edited by Wolch and Emel.

The two key features of the third wave of animal geography that distinguish it from the earlier waves are (1) an expanded conception of human-animal relations that includes all time periods and sites of human-animal encounters, and (2) efforts to bring in the animals themselves as subjects. Since the 1995 publication there has been an explosion of case studies and theorizing. Key works that bring together third-wave animal geography are Wolch and Emel's Animal Geographies: Place, Politics and Identity in the Nature-Culture Borderlands; Philo and Wilbert's Animal Spaces, Beastly Places: New Geographies of Human-Animal Relations; Urbanik's Placing Animals: An Introduction to the Geography of Human-Animal Relations; Gillespie and Collard's Critical Animal Geographies: Politics, Intersections and Hierarchies in a Multispecies World; and Wilcox and Rutherford's Historical Animal Geographies.

Areas of Focus

There are presently nine areas of focus within animal geography:
- Theorizing animal geography. Two major works addressing how to think about human-animal relations as a whole are Whatmore's Hybrid Geographies and Hobson's work on political animals through the practice of bear bile farming, along with newer scholarship examining animals' relations with the material world.
- Urban animal geography. Researchers here seek to show that cities are, historically and today, multi-species spaces. Theoretical work comes from Wolch et al. on what constitutes a transspecies urban theory and from Wolch on envisioning a multi-species city, alongside Philo's work on the historical context for the removal of livestock from the city.
- Ethics and animal geography. How space, place, and time shape which practices toward other species count as right or wrong is the concern of this area. Articles by Lynn on what he terms geoethics and by Jones on what he terms an ethics of encounter are a good place to start.
- Human identities and animals. How people use animals to identify themselves as human or to mark off human groups has an intriguing geographical history. Brown and Rasmussen examine the issue of bestiality, Elder et al. study how animals are used to discriminate against human groups, and Neo analyzes how ethnicity comes into play in pig production in Malaysia.
Others, such as Barua, argue that the identities of animals may be cosmopolitan, constituted by the circulation of animals and their contact with disparate cultures. These are all excellent case studies.
- Animals as subjects. One of the most difficult aspects of studying animals is the fact that they cannot speak to us in human language. Animal geographers have been grappling with how, exactly, to address the fact that members of other species are experiential beings. Examples include work by Barua on elephants, Bear on fish, Hinchliffe et al. on water voles, and Lorimer on nonhuman charisma. Geographers are also wrestling with how to reconstruct the lives of animal subjects in the past, how those lives might be recovered from the historical record, and how spatially situated human-animal relations have changed through time.
- Pets. One of the most intimate relationships people have with other species is often with the animals living in their homes. How we have shaped these animals to fit human lifestyles, and what this means for negotiating a more-than-human existence, is the concern here. Key articles include Fox on dogs, Lulka on the American Kennel Club, and Nast on critical pet studies.
- Working animals. Human uses of other species as labor are extensive both historically and today. From logging elephants to lab mice, and from zoo animals to military dogs and draft animals, the spaces and places in which animals work for us make for intriguing geographies. For insight see Anderson's work on zoos, Davies' work on virtual zoos and laboratory mice, and Urbanik's work on the politics of animal biotechnology.
- Farmed animals. How we raise and farm animals, both for food and for their parts (e.g., fur), is the largest category of human use of animals. Research in this area has focused on the development of industrial farming systems, the ethics of consuming animals, and how livestock relations shape ideas of place. Buller and Morris discuss farm animal welfare, Holloway examines technological advances in dairy production, Hovorka looks at urban livestock in Africa, and Yarwood et al. study the livestock landscape.
- Wild animals. To date, animal geographers have done the most work in this category of human-animal relations. From theoretical investigations of wildlife classification to case studies of human-wildlife conflict, wildlife tourism, and particular human-wild animal geographies, this has proven a dynamic avenue. Key articles include Emel's work on wolves, work on wildlife and mobility, Vaccaro and Beltran's work on reintroductions, Whatmore and Thorne's work on spatial typologies of wildlife, and further extensions of the latter's work through studies of animals and conservation in historical and contemporary transnational contexts.