Click on the A or B labels to change the input values.

The truth table of the XOR gate is:

|A|B|C|
|0|0|0|
|0|1|1|
|1|0|1|
|1|1|0|
The truth table tells us which value to expect on the output of the XOR gate for given values of the inputs A and B. For example, if A=1 and B=0, C will be 1. If A=0 and B=0, C will be 0. If A=1 and B=1, C will be 0, etc.
Clicking on the gate labels A and B (in the diagram above) switches their values. Take a piece of paper and make a three-column table. The first column represents input A, the second input B. The third column represents output C.
Try all possible values of input A and B by switching their values on the logic gate above. Write the values of A and B down in the table, in the same row. Record the resulting value of the XOR gate's output C in the third column. Verify that your table matches the one listed above.
Logic gates implement what we call "Boolean logic". The XOR operation is part of that logic. In Boolean logic, an XOR operation combines two statements into one. The resulting statement is true when exactly one of the two statements is true, and false when both are true or when both are false.
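To make the rule concrete, here is a minimal Python sketch (an addition for illustration, not part of the original exercise) that prints the XOR truth table:

```python
# Print the XOR truth table for inputs A and B.
print("A B | C")
for a in (0, 1):
    for b in (0, 1):
        c = a ^ b  # Python's ^ is bitwise XOR; on 0/1 values it acts as logical XOR
        print(a, b, "|", c)
```

Running it reproduces the four rows of the table above: C is 1 exactly when A and B differ.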
Let's look at a situation in which we would need to apply such logic.
Imagine an airplane used to teach people how to fly. The airplane has two controls: one for the teacher, one for the student. (I don't know whether such planes actually exist, but it seems a good idea:-)).
The plane can be controlled from either set of controls: the teacher can flip a switch and put the plane under the control of the student. From that point on, the student controls the plane and can practise flying it, until the teacher toggles the switch and regains control. Of course, exactly one of the teacher and student must be in control at any given time for the plane not to crash. However, they cannot both be in control at once, because there would be an obvious conflict: for example, the student could steer to the left while the teacher steers to the right.
This is a case of XOR logic. Let's say C indicates whether the plane is under control (1=yes, 0=no), A indicates whether the teacher controls it (1=yes, 0=no), and B indicates whether the student controls it (1=yes, 0=no). The plane is under control (C=1) when either the teacher (A=1, B=0) or the student (A=0, B=1) steers it, but when neither does (A=0, B=0), or both do (A=1, B=1), it is not under control.
Think of another example in which this situation would occur. Represent the outcome by the label C, and the two conditions by the labels A and B. Determine whether the relation between A, B and C corresponds to the XOR truth table.
Lifestyle is the interests, opinions, behaviours, and behavioural orientations of an individual, group, or culture. The term was introduced by Austrian psychologist Alfred Adler with the meaning of "a person's basic character as established early in childhood", for example in his 1929 book "The Case of Miss R.". The broader sense of lifestyle as a "way or style of living" has been documented since 1961. Lifestyle is a combination of determining intangible or tangible factors. Tangible factors relate specifically to demographic variables, i.e. an individual's demographic profile, whereas intangible factors concern the psychological aspects of an individual such as personal values, preferences, and outlooks.
A rural environment has different lifestyles compared to an urban metropolis. Location is important even within an urban scope. The nature of the neighborhood in which a person resides affects the set of lifestyles available to that person due to differences between various neighborhoods' degrees of affluence and proximity to natural and cultural environments. For example, in areas within a close proximity to the sea, a surf culture or lifestyle can often be present.
A lifestyle typically reflects an individual's attitudes, way of life, values, or world view. Therefore, a lifestyle is a means of forging a sense of self and of creating cultural symbols that resonate with personal identity. Not all aspects of a lifestyle are voluntary. Surrounding social and technical systems can constrain the lifestyle choices available to the individual and the symbols he or she is able to project to others and to the self.
The lines between personal identity and the everyday doings that signal a particular lifestyle become blurred in modern society. For example, "green lifestyle" means holding beliefs and engaging in activities that consume fewer resources and produce less harmful waste (i.e. a smaller ecological footprint), and deriving a sense of self from holding these beliefs and engaging in these activities. Some commentators argue that, in modernity, the cornerstone of lifestyle construction is consumption behavior, which offers the possibility to create and further individualize the self with different products or services that signal different ways of life.
Lifestyle may include views on politics, religion, health, intimacy, and more. All of these aspects play a role in shaping someone's lifestyle. In the magazine and television industries, "lifestyle" is used to describe a category of publications or programs.
History of lifestyles studies
Three main phases can be identified in the history of lifestyles studies:
- Lifestyles and social position
- Earlier studies on lifestyles focus on the analysis of social structure and of the individuals' relative positions inside it. Thorstein Veblen, with his 'emulation' concept, opens this perspective by asserting that people adopt specific 'schemes of life', and in particular specific patterns of 'conspicuous consumption', depending on a desire for distinction from social strata they identify as inferior and a desire for emulation of the ones identified as superior. Max Weber understands lifestyles as distinctive elements of status groups strictly connected with a dialectic of recognition of prestige: the lifestyle is the most visible manifestation of social differentiation, even within the same social class, and in particular it shows the prestige which individuals believe they enjoy or to which they aspire. Georg Simmel carries out a formal analysis of lifestyles, at the heart of which can be found processes of individualisation, identification, differentiation, and recognition, understood both as generating processes of, and effects generated by, lifestyles, operating "vertically" as well as "horizontally". Finally, Pierre Bourdieu renews this approach within a more complex model in which lifestyles, made up mainly of social practices and closely tied to individual tastes, represent the basic point of intersection between the structure of the field and processes connected with the habitus.
- Lifestyles as styles of thought
- The approach interpreting lifestyles as principally styles of thought has its roots in the soil of psychological analysis. Initially, starting with Alfred Adler, a lifestyle was understood as a style of personality, in the sense that the framework of guiding values and principles which individuals develop in the first years of life ends up defining a system of judgement which informs their actions throughout their lives. Later, particularly in Milton Rokeach's work, Arnold Mitchell's VALS research and Lynn Kahle's LOV research, the analysis of lifestyles developed into profiles of values, reaching the hypothesis that it is possible to identify various models of scales of values, organized hierarchically, to which different population sectors correspond. Then with Daniel Yankelovich and William Wells we move on to the so-called AIO approach, in which attitudes, interests and opinions are considered fundamental components of lifestyles, analysed from both synchronic and diachronic points of view and interpreted on the basis of socio-cultural trends in a given social context (as, for instance, in Bernard Cathelat's work). Finally, a further development leads to the so-called profiles-and-trends approach, at the core of which is an analysis of the relations between mental and behavioural variables, bearing in mind that socio-cultural trends influence both the diffusion of various lifestyles within a population and the emergence of different modes of interaction between thought and action.
- Lifestyles as styles of action
- Analysis of lifestyles as action profiles is characterized by the fact that it no longer considers the action level as a simple derivative of lifestyles, or at least as their collateral component, but rather as a constitutive element. In the beginning, this perspective focused mainly on consumer behaviour, seeing acquired products as objects that express, on the material plane, individuals' self-image and how they view their position in society. Subsequently, the perspective broadened to focus more generally on the level of daily life, concentrating – as in authors such as Joffre Dumazedier and Anthony Giddens – on the use of time, especially leisure time (loisirs), and trying to study the interaction between the active dimension of choice and the dimension of routine and structuration which characterize that level of action. Finally, some authors, for instance Richard Jenkins and A. J. Veal, suggested an approach to lifestyles in which the plane of analysis is made up not of everyday actions but of those actions which the actors who adopt them consider particularly meaningful and distinctive.
A healthy or unhealthy lifestyle will most likely be transmitted across generations. According to the study by Case et al. (2002), when a 0–3 year old child has a mother who practices a healthy lifestyle, the child is 27% more likely to become healthy and adopt the same lifestyle. For instance, high-income parents are more likely to eat organic food, have time to exercise, and provide the best living conditions to their children. On the other hand, low-income parents are more likely to engage in unhealthy activities such as smoking to help them relieve poverty-related stress and depression. Parents are every child's first teachers, and much of what parents do is likely to be transferred to their children through the learning process.
I have come to know hundreds of young people who have found that illness or bingeing on drugs and sugar became the doorway to health. Once they reestablished their own health, we had in common our interest in food. If one can use that overworked word lifestyle, we shared a sugarfree lifestyle. I kept in touch with many of them in campuses and communes, through their travels here and abroad and everywhere. One day you meet them in Boston. The next week you run into them in Southern California.
Lifestyle research can contribute to the question of the relevance of the class concept.
"Life-styles", the culture industry’s recycling of style in art, represent the transformation of an aesthetic category, which once possessed a moment of negativity [shocking, emancipatory], into a quality of commodity consumption.
In our drafts, we spoke of "mass culture." We replaced that expression with "culture industry" in order to exclude from the outset the interpretation agreeable to its advocates: that it is a matter of something like a culture that arises spontaneously from the masses themselves, the contemporary form of popular art.
Diversity is more effectively present in mass media than previously, but this is not an obvious or unequivocal gain. By the late 1950s, the homogenization of consciousness had become counterproductive for the purposes of capital expansion; new needs for new commodities had to be created, and this required the reintroduction of the minimal negativity that had been previously eliminated. The cult of the new that had been the prerogative of art throughout the modernist epoch into the period of post-war unification and stabilization has returned to capital expansion from which it originally sprang. But this negativity is neither shocking nor emancipatory since it does not presage a transformation of the fundamental structures of everyday life. On the contrary, through the culture industry capital has co-opted the dynamics of negation both diachronically in its restless production of new and "different" commodities and synchronically in its promotion of alternative "life-styles."
- Lifestyle, from Merriam-Webster's Dictionary (webster.com/dictionary/lifestyle)
- Lynn R. Kahle; Angeline G. Close (2011). Consumer Behavior Knowledge for Effective Sports and Event Marketing. New York: Routledge. ISBN 978-0-415-87358-1.
- Online Etymology Dictionary
- Spaargaren, G., and B. VanVliet (2000) "Lifestyle, Consumption and the Environment: The Ecological Modernisation of Domestic Consumption", Environmental Politics 9(1): 50-75.
- Giddens, A. (1991) Modernity and self-identity: self and society in the late modern age, Cambridge: Polity Press
- Lynn R. Kahle, Eda Gurel-Atay, Eds (2014). Communicating Sustainability for the Green Economy. New York: M.E. Sharpe. ISBN 978-0-7656-3680-5.
- Ropke, I. (1999) "The Dynamics of Willingness to Consume", Ecological Economics 28: 399-420.
- Giuffré, K., & DiGeronimo, T. (1999) Care and Feeding of Your Brain: How Diet and Environment Affect What You Think and Feel, Career Press.
- Berzano L., Genova C., Lifestyles and Subcultures. History and a New Perspective, Routledge, London, 2015 (Part I).
- Ponthiere G. (2011) "Mortality, Family and Lifestyles", Journal of Family and Economic Issues 32 (2): 175-190
- Case, A., Lubotsky D. & Paxson C. (2002) "Economic Status and Health in Childhood: The Origins of the Gradient", The American Economic Review 92(5): 1308-1334
- William Dufty (1975) Sugar Blues, page 204
- Bögenhold, Dieter. "Social Inequality and the Sociology of Life Style: Material and Cultural Aspects of Social Stratification". American Journal of Economics and Sociology. Retrieved 26 April 2012.
- Bernstein (1991) p.23
- Adorno p.98
- Adorno, Th., "Culture Industry Reconsidered," in Adorno (1991).
- Adorno, The Culture Industry - Selected essays on mass culture, Routledge, London, 1991.
- Amaturo E., Palumbo M., Classi sociali. Stili di vita, coscienza e conflitto di classe. Problemi metodologici, Ecig, Genova, 1990.
- Ansbacher H. L., Life style. A historical and systematic review, in “Journal of individual psychology”, 1967, vol. 23, n. 2, pp. 191–212.
- Bell D., Hollows J., Historicizing lifestyle. Mediating taste, consumption and identity from the 1900s to 1970s, Ashgate, Aldershot-Burlington, 2006.
- Bénédicte Châtel, Jean-Luc Dubois, Bernard Perret, Justice et Paix-France, François Maupu (postface), Notre mode de vie est-il durable ? : Nouvel horizon de la responsabilité, Karthala Éditions, 2005
- Bernstein, J. M. (1991) "Introduction," in Adorno (1991)
- Berzano L., Genova C., Lifestyles and Subcultures. History and a New Perspective, Routledge, London, 2015.
- Burkle, F. M. (2004)
- Calvi G. (ed.), Indagine sociale italiana. Rapporto 1986, Franco Angeli, Milano, 1987.
- Calvi G. (ed.), Signori si cambia. Rapporto Eurisko sull’evoluzione dei consumi e degli stili di vita, Bridge, Milano, 1993.
- Calvi G., Valori e stili di vita degli italiani, Isedi, Milano, 1977.
- Cathelat B., Les styles de vie des Français 1978-1998, Stanké, Parigi, 1977.
- Cathelat B., Socio-Styles-Système. Les “styles de vie”. Théorie, méthodes, applications, Les éditions d’organisation, Parigi, 1990.
- Cathelat B., Styles de vie, Les éditions d’organisation, Parigi, 1985.
- Chaney D., Lifestyles, Routledge, Londra, 1996.
- Fabris G., Mortara V., Le otto Italie. Dinamica e frammentazione della società italiana, Mondadori, Milano, 1986.
- Faggiano M. P., Stile di vita e partecipazione sociale giovanile. Il circolo virtuoso teoria-ricerca-teoria, Franco Angeli, Milano, 2007.
- Gonzalez Moro V., Los estilos de vida y la cultura cotidiana. Un modelo de investigacion, Baroja, [San Sebastian, 1990].
- Kahle L., Attitude and social adaption. A person-situation interaction approach, Pergamon, Oxford, 1984.
- Kahle L., Social values and social change. Adaptation to life in America, Praeger, Santa Barbara, 1983.
- Leone S., Stili di vita. Un approccio multidimensionale, Aracne, Roma, 2005.
- Mitchell A., Consumer values. A typology, Values and lifestyles program, SRI International, Stanford, 1978.
- Mitchell A., Life ways and life styles, Business intelligence program, SRI International, Stanford, 1973.
- Mitchell A., The nine American lifestyles. Who we are and where we’re going, Macmillan, New York, 1983.
- Mitchell A., Ways of life, Values and lifestyles program, SRI International, Stanford, 1982.
- Negre Rigol P., El ocio y las edades. Estilo de vida y oferta lúdica, Hacer, Barcellona, 1993.
- Parenti F., Pagani P. L., Lo stile di vita. Come imparare a conoscere sé stessi e gli altri, De Agostini, Novara, 1987.
- Patterson M. Consumption and Everyday Life, 2006
- Ragone G., Consumi e stili di vita in Italia, Guida, Napoli, 1985.
- Ramos Soler I., El estilo de vida de los mayores y la publicidad, La Caixa, Barcellona.
- Rokeach M., Beliefs, attitudes and values, Jossey-Bass, San Francisco, 1968.
- Rokeach M., The nature of human values, Free Press, New York, 1973.
- Shields R., Lifestyle shopping. The subject of consumption, Routledge, Londra, 1992.
- Shulman B. H., Mosak H. H., Manual for life style assessment, Accelerated Development, Muncie, 1988 (trad. it. Manuale per l’analisi dello stile di vita, Franco Angeli, Milano, 2008).
- Sobel M. E., Lifestyle and social structure. Concepts, definitions and analyses, Academic Press, New York, 1981.
- Soldevilla Pérez C., Estilo de vida. Hacia una teoría psicosocial de la acción, Entimema, Madrid, 1998.
- Valette-Florence P., Les styles de vie. Bilan critique et perspectives. Du mythe à la réalité, Nathan, Parigi, 1994.
- Valette-Florence P., Les styles de vie. Fondements, méthodes et applications, Economica, Parigi, 1989.
- Valette-Florence P., Jolibert A., Life-styles and consumption patterns, Publications de recherche du CERAG, École supériore des affaires de Grenoble, 1988.
- Veal A. J., The concept of lifestyle. A review, in “Leisure studies”, 1993, vol. 12, n. 4, pp. 233–252.
- Vergati S., Stili di vita e gruppi sociali, Euroma, Roma, 1996.
- Walters G. D., Beyond behavior. Construction of an overarching psychological theory of lifestyles, Praeger, Westport, 2000.
- Wells W. (ed.), Life-style and psychographics, American Marketing Association, Chicago, 1974.
- Yankelovich D., New criteria for market segmentation, in “Harvard business review”, 1964, vol. 42, n. 2, pp. 83–90.
- Yankelovich D., Meer D., Rediscovering market segmentation, in “Harvard business review”, 2006, febbraio, pp. 1–10.
Breakthrough after field project collects richly detailed ice core records from Antarctica
A new multi-institutional study including Scripps Institution of Oceanography, UC San Diego, shows that the rise of atmospheric carbon dioxide that contributed to the end of the last ice age more than 10,000 years ago did not occur gradually, but was characterized by three “pulses” in which CO2 rose abruptly.
Scientists are not sure what caused these abrupt increases, during which levels of carbon dioxide, a greenhouse gas, rose about 10-15 parts per million (ppm) – or about five percent per episode – over a period of one to two centuries. It likely was a combination of factors, they say, including ocean circulation, changing wind patterns, and terrestrial processes. Scripps geoscientist Jeff Severinghaus said the three episodes, which took place 16,100 years ago, 14,700 years ago, and 11,700 years ago, are strongly linked to abrupt climate change events that took place in the Northern Hemisphere.
“Abrupt climate change has its own small but significant impacts on atmospheric CO2 and no one knew that before now,” said Severinghaus, a study co-author.
Results of the National Science Foundation-funded study appear today in the journal Nature.
“We used to think that naturally occurring changes in carbon dioxide took place relatively slowly over the 10,000 years it took to move out of the last ice age,” said Shaun Marcott, lead author on the article who conducted his study as a postdoctoral researcher at Oregon State University. “This abrupt, centennial-scale variability of CO2 appears to be a fundamental part of the global carbon cycle.”
Some previous research has hinted at the possibility that spikes in atmospheric carbon dioxide may have accelerated the last deglaciation, but that hypothesis had not been resolved, the researchers say. The key to the new finding is the analysis of an ice core from the West Antarctic that provided the scientists with a detailed enough record to be able to see changes on fine time scales. The core was retrieved – with difficulty – from a region in which snow accumulated and compacted at a rate of 25 centimeters per year. That meant the ice core preserved a more detailed record from the gas bubbles trapped in the ice than did other cores from Antarctic regions where only 2 centimeters of ice represent a year.
Scientists studying past climate have been hampered by the limitations of previous ice cores. Cores from Greenland, for example, provide unique records of rapid climate events going back 120,000 years – but high concentrations of impurities don’t allow researchers to accurately determine atmospheric carbon dioxide records. Antarctic ice cores have fewer impurities, but generally have had lower “temporal resolution,” providing less detailed information about atmospheric CO2.
Severinghaus said the new cores, collected during the recently concluded multi-year WAIS Divide field project, came from a part of Antarctica so snowy that researchers’ camps were frequently buried in snowdrifts.
Coring in that location is “not an easy thing to do, so that’s why it wasn’t done before,” Severinghaus said.
The new core from West Antarctica, drilled to a depth of 3,405 meters (11,170 feet) in 2011 and spanning the last 68,000 years, has “extraordinary detail,” said Oregon State paleoclimatologist Edward Brook, a co-author on the Nature study and an internationally recognized ice core expert.
“It is a remarkable ice core and it clearly shows distinct pulses of carbon dioxide increase that can be very reliably dated,” Brook said. “These are some of the fastest natural changes in CO2 we have observed, and were probably big enough on their own to impact the Earth’s climate.”
“The abrupt events did not end the ice age by themselves,” Brook added. “That might be jumping the gun a bit. But it is fair to say that the natural carbon cycle can change a lot faster than was previously thought – and we don’t know all of the mechanisms that caused that rapid change.”
The researchers say that the increase in atmospheric CO2 from the peak of the last ice age to complete deglaciation was about 80 parts per million, taking place over 10,000 years. Thus, the finding that 30-45 ppm of the increase happened in just a few centuries was significant.
The rate of change during these events is still significantly less than present-day changes in atmospheric CO2 concentrations. The Keeling Curve record of atmospheric carbon dioxide, launched by the late Scripps geochemist Charles David Keeling, recorded levels of 315 ppm when it began in 1958. In 2014, monthly average concentrations reached 401 ppm, an increase of more than 85 parts per million in less than 60 years.
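To put those figures side by side, here is a rough back-of-the-envelope comparison in Python, using only the numbers quoted in this article (a sketch for illustration, not part of the study):

```python
# Deglacial pulses: roughly 10-15 ppm over one to two centuries (figures above).
pulse_rate_low = 10 / 200    # ppm per year, slowest case
pulse_rate_high = 15 / 100   # ppm per year, fastest case

# Modern Keeling Curve record: 315 ppm in 1958 to 401 ppm in 2014.
modern_rate = (401 - 315) / (2014 - 1958)  # ppm per year

print(f"Deglacial pulses: {pulse_rate_low:.3f}-{pulse_rate_high:.3f} ppm/yr")
print(f"Modern increase:  {modern_rate:.2f} ppm/yr")
# The modern rate (~1.5 ppm/yr) is roughly 10-30 times the fastest pulses.
```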
The overall rise of atmospheric carbon dioxide during the last deglaciation was thought to have been triggered by the release of CO2 from the deep ocean – especially the Southern Ocean. But the century-scale events must involve a different mechanism that can act faster, said Severinghaus. One possibility is a major increase in the winds that blow around Antarctica, which are known to bring up CO2 from mid-depths and cause it to outgas into the atmosphere.
1, 2, 3, 4, 6, 8, 12, 24 are all divisors (or factors) of 24. You can divide 24 by any of them and you will arrive at a whole number. So there are 8 divisors of the number 24. This article tells you how to calculate this number quickly.
1. As an example, we calculate the number of divisors of 24. First we factor the number: 24 = 2^3 * 3.
2. Note the exponents. These are 3 and 1. When a factor doesn’t have a written exponent, its exponent equals 1.
3. Add one to each of the exponents and multiply: (3+1)*(1+1) = 4*2 = 8. The number of divisors is 8.
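For larger numbers, the same three steps can be automated. The following Python sketch (an illustrative addition, not from the original article) factors n by trial division and multiplies the incremented exponents:

```python
def count_divisors(n):
    """Count the divisors of n: factor n by trial division and
    multiply (exponent + 1) over all of its prime factors."""
    count = 1
    p = 2
    while p * p <= n:
        exponent = 0
        while n % p == 0:
            n //= p
            exponent += 1
        count *= exponent + 1
        p += 1
    if n > 1:          # a leftover prime factor with exponent 1
        count *= 2
    return count

print(count_divisors(24))     # 8
print(count_divisors(45360))  # 100
```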
Why This Works
- This is not a rigorous proof, it’s an explanation.
- A number d is a divisor of 24 when d is a positive integer and the result of the division, n = 24/d, is an integer. This means that any d must be of the form 2^p * 3^q, where p and q are integers. Furthermore p <= 3 and q <= 1, since otherwise n would not be an integer. Also, p and q must be nonnegative, since otherwise d would not be an integer.
- Remember that any nonzero number raised to the zeroth power equals 1. So p and q can also be 0. Thus p can be 0, 1, 2, or 3, which gives 4 possibilities for p, and q can be 0 or 1, which gives 2 possibilities for q. The number of possibilities is always one larger than the corresponding exponent.
- Because these possibilities are independent we can multiply them in order to arrive at the total number of divisors.
- 17 = 17^1 --> 2
- 25 = 5^2 --> 3
- 60 = 2^2*3*5 --> 12
- 100 = 2^2*5^2 --> 9
- 45360 = 2^4*3^4*5*7 --> 100
- When the number is a square, the number of divisors will be odd. When it's not a square then the number of divisors will be even.
The Sun and white dwarfs
The life of stars
Professor Denis Sullivan at Victoria University of Wellington studies white dwarfs. These are some of the oldest stars in the galaxy – not characters from Lord of the Rings! To Denis, white dwarfs are laboratories for studying the evolution of stars and tell us what might happen to our own Sun in the future.
Becoming a white dwarf
Inside main sequence stars like our Sun, nuclear fusion converts hydrogen into helium. This releases huge amounts of energy. The light and heat that reach Earth are a result of this process in our Sun. Most stars will eventually run out of hydrogen fuel – they will expand to become red giants, then shrink and cool down into white dwarfs. These stars contain a similar mass to the Sun, but squashed into the size of our Earth. A white dwarf is so dense that a teaspoonful of it on Earth would weigh as much as a car.
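That teaspoon claim can be sanity-checked with a rough order-of-magnitude calculation. The sketch below is my own, using approximate textbook values for the Sun's mass and Earth's radius (not figures from this article):

```python
import math

solar_mass = 2.0e30    # kg, approximate mass of the Sun
earth_radius = 6.37e6  # m
earth_volume = 4 / 3 * math.pi * earth_radius**3  # ~1.1e21 m^3

density = solar_mass / earth_volume  # kg/m^3, roughly 2e9
teaspoon = 5e-6                      # m^3 (about 5 mL)

print(f"Density: {density:.1e} kg/m^3")
print(f"Teaspoon mass: {density * teaspoon / 1000:.0f} tonnes")
```

The answer comes out at several tonnes per teaspoon, so "as much as a car" is the right ballpark; actual white dwarf densities vary with the star's mass and with depth.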
Studying white dwarfs
Radiation escaping from some white dwarfs makes them pulsate or quiver – a bit like an earthquake makes the surface of the Earth shake. Someone who studies the structure of the Earth using earthquakes is called a seismologist. Someone like Denis, who studies the inside of white dwarfs using star quakes, is called an asteroseismologist.
White dwarfs are small and cool, so they are faint and hard to find. Once a white dwarf has been found, very sensitive photometers attached to telescopes are used to detect very small changes in the light coming from it. Denis measures these short pulses of light from star quakes to get a picture of a white dwarf cooling down.
Denis can only measure what is happening on the surface of a white dwarf – he can’t see inside – so he uses his data to build a computer model. A possible structure of a star is put into the model and made to pulsate. The variables making up the inside of the model star are then changed until the observed pulsations match those measured from a real star. In this way, it has been possible to put together a picture of what is inside a white dwarf – a dense core made of carbon and oxygen, with a very thin envelope of hydrogen, or occasionally helium.
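The loop Denis uses can be caricatured in a few lines. The toy sketch below is purely illustrative: the model function and every number in it are hypothetical stand-ins for a real stellar pulsation code; only the overall shape (vary a model parameter until the predicted pulsation matches the observed one) reflects the approach described above.

```python
# Hypothetical example: fit one model parameter to one observed pulsation period.
observed_period = 215.2  # seconds (made-up "measured" value)

def model_period(envelope_fraction):
    # Stand-in for a real pulsation model relating structure to period.
    return 200.0 + 1000.0 * envelope_fraction

# Grid search: keep the parameter whose predicted period best matches observation.
best_err, best_f = min(
    (abs(model_period(f) - observed_period), f)
    for f in (i / 1000 for i in range(101))
)
print(f"Best-fit envelope fraction: {best_f:.3f} (error {best_err:.1f} s)")
```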
Nature of Science
Scientists can only collect data from the outside of stars, so they create computer models of what the insides might be like. They use their knowledge of physics and chemistry to change the variables of the model until they match their actual observations.
The story of our Sun
The study of stars that have become white dwarfs has helped Denis learn about the history and future of our own Sun. Like all stars, our Sun will go through several stages in its life. At the moment, it is only a middle-aged star, but like most stars, it will eventually become a white dwarf.
Our hot Sun has spent the last 5 billion years turning hydrogen into helium. Another 5 billion years into the future, the hydrogen will be running out. Gravity will cause the inside of the star, which will then be mostly helium, to shrink and get even hotter, while the outside layer will expand and cool down. Our Sun will have become a red giant, big enough to swallow up where the Earth is now.
As the core of the red giant gets hotter, the helium becomes the new fuel, building up heavier atoms of oxygen and carbon. The outer layers will be lost into space as a planetary nebula, leaving the core so densely packed together that gravity will be about a million times that of Earth.
The Sun, cooling down, with most of its fuel used up, will have become a white dwarf. Eventually, it will even stop shining.
This video 'Classroom Demonstrations: Colour and Temperature of Stars' uses a light bulb to explain the relationship between colour and temperature in stars.
While affluent regions and social classes struggle with surplus production and surplus consumption, close to one fifth of the global population lives in constant under-nourishment. Subsistence production of basic foods is restricted in many regions by lack of access to capital, land and water. At the same time, more favoured growing areas are used for commercial production of speciality crops or animal feeds for export to affluent regions.
The major constraints to food security are found in social, economic and political conditions rather than in production methods themselves. Nevertheless, demand for food will increase in the future, so productivity of agricultural systems needs to be addressed.
Against this backdrop, it should be kept in mind that:
- The main strategy for increasing both food production and access to food is through increased production by farmers in developing countries.
- Conventional agriculture may give short-term gains in production, but in most cases it is not sustainable in the long-term, nor does it guarantee safe food.
- In particular, conventional production methods are inadequate for disadvantaged farming communities and are thus not a suitable solution for many of those who face food shortage.
- Organic production has the potential to produce sufficient food of a high quality. In addition, Organic Agriculture is particularly well suited for those rural communities that are currently most exposed to food shortages.
Organic Agriculture contributes to food security by a combination of many features, most notably by:
- Increasing yields in low-input areas
- Conserving biodiversity and natural resources on the farm and in the surrounding area
- Increasing income and/or reducing costs
- Producing safe and varied food
- Being sustainable in the long term
Organic Agriculture should be an integral part of any agricultural policy aiming for food security.
Common Core Connection
The students will analyze how illustrations and details help a reader determine the big ideas in a nonfiction text. Learners look at things from the perspective of the author and illustrator, and consider what they are trying to show as important through the way they structure the text and use illustrations. In addition, students engage with a highly complex concept and stretch their phonological skills as they pronounce and discuss the terms in the illustrations of the nitrogen cycle and the water cycle.
The lesson includes Transitions about every twenty minutes to keep the students focused, and I made a video about this in the resources. In addition, the students work in small groups that I call Peanut Butter Jelly Partners throughout the lesson. There is also a fun chant we do to refocus the class after discussion (Fun Chant to Refocus Class After Discussion, in the resource section).
I put the lesson image on the board and ask the class to discuss what we can learn by looking at diagrams. While my students talk I am assessing their prior knowledge about using diagrams to gain information.
Then I explain the lesson goal and the plan for the lesson.
Then the students transition to the center table and analyze the Water Cycle with their partner. They are specifically analyzing the illustrations and the details in the text to determine the key ideas.
I have made several videos on student work and scaffolding. I think it takes a lot of scaffolding at times to arrive at nice work. It might help to watch the student work video first, but it really doesn't matter. The scaffolding video shows how I lead my class to their finished product.
Then the class moves to the lounge, where they practice their speaking, listening, and evaluation skills. I find being proactive really helps my students meet my expectations, so I go over all the rules of listening, speaking, and evaluating.
Next I select about three students to share their work, and after each Presentation I ask other students to share their Peer Evaluation: What did you think they did well? What could they work on? I try to avoid any focus on writing conventions and instead foster creativity, getting my students thinking about the concepts in the text.
Last, I assess the students' understanding by asking them to tell their partner one thing they learned about illustrations and details in a text today. I share some of the great ideas I hear, and I also add my own. This gives the class an example of what I am really looking for, because sometimes my students are a little lost about what they learned. That brings me to the importance of having the class restate the lesson goal: I can use the illustrations and details in a text to describe the big idea.
Methods for Summarizing Texts: Effective Strategies
The Benefits of Summarizing
Summarizing the information presented in a textbook or novel reinforces and consolidates the processes involved in learning that information. These processes include separating important from unimportant information, recognizing the structure of the text, and drawing inferences.

Strategies for Summarizing
- Hierarchical Summaries
- R.E.A.P.
- G.I.S.T.

Hierarchical Summaries
Preview the text, giving special attention to headings, subheadings, bolded or italicized vocabulary, etc. Develop a skeletal outline. Read the text and use the outline as a guide. Generate main ideas for each main point in the outline and then include supporting detail. Generate a summarizing statement for the entire passage.

R.E.A.P.
This is an acronym for the four stages of reading and understanding: Read the text; Encode the text in your own language; Annotate the text by writing down the message conveyed in the text; Ponder the message on your own or with others.

G.I.S.T.: Generating Interactions between Schemata and Text
First, restate the first sentence of a paragraph in 15 words or less. Then, summarize sentences one and two in 15 words or less. Continue until the entire paragraph has been summarized in 15 words or less.
Source: Alvermann, Donna E., and Stephen F. Phelps. Content Reading and Literacy: Succeeding in Today’s Diverse Classrooms. 2nd ed. Boston: Allyn and Bacon, 1998.
We can show that (x + 1)² = x² + 2x + 1 by considering the area of an (x + 1) by (x + 1) square. Show in a similar way that (x + 2)² = x² + 4x + 4
Try entering different sets of numbers in the number pyramids. How does the total at the top change?
Spotting patterns can be an important first step - explaining why it is appropriate to generalise is the next step, and often the most interesting and important.
Think of a number, add one, double it, take away 3, add the number you first thought of, add 7, divide by 3 and take away the number you first thought of. You should now be left with 2. How do I. . . .
You can work out the number someone else is thinking of as follows. Ask a friend to think of any natural number less than 100. Then ask them to tell you the remainders when this number is divided by. . . .
A little bit of algebra explains this 'magic'. Ask a friend to pick 3 consecutive numbers and to tell you a multiple of 3. Then ask them to add the four numbers and multiply by 67, and to tell you. . . .
Imagine starting with one yellow cube and covering it all over with a single layer of red cubes, and then covering that cube with a layer of blue cubes. How many red and blue cubes would you need?
Take a look at the multiplication square. The first eleven triangle numbers have been identified. Can you see a pattern? Does the pattern continue?
When number pyramids have a sequence on the bottom layer, some interesting patterns emerge...
What are the areas of these triangles? What do you notice? Can you generalise to other "families" of triangles?
Can you find sets of sloping lines that enclose a square?
It starts quite simply, but offers great opportunities for number discoveries and patterns!
How could Penny, Tom and Matthew work out how many chocolates there are in different sized boxes?
List any 3 numbers. It is always possible to find a subset of adjacent numbers that add up to a multiple of 3. Can you explain why and prove it?
It's easy to work out the areas of most squares that we meet, but what if they were tilted?
The NRICH team are always looking for new ways to engage teachers and pupils in problem solving. Here we explain the thinking behind maths trails.
Rectangles are considered different if they vary in size or have different locations. How many different rectangles can be drawn on a chessboard?
A three digit number abc is always divisible by 7 when 2a+3b+c is divisible by 7. Why?
Euler discussed whether or not it was possible to stroll around Koenigsberg crossing each of its seven bridges exactly once. Experiment with different numbers of islands and bridges.
Take any two positive numbers. Calculate the arithmetic and geometric means. Repeat the calculations to generate a sequence of arithmetic means and geometric means. Make a note of what happens to the. . . .
Square numbers can be represented as the sum of consecutive odd numbers. What is the sum of 1 + 3 + ..... + 149 + 151 + 153?
Triangular numbers can be represented by a triangular array of squares. What do you notice about the sum of identical triangle numbers?
Can you tangle yourself up and reach any fraction?
Four bags contain a large number of 1s, 3s, 5s and 7s. Pick any ten numbers from the bags above so that their total is 37.
The sum of the numbers 4 and 1 1/3 is the same as the product of 4 and 1 1/3; that is to say 4 + 1 1/3 = 4 × 1 1/3. What other numbers have the sum equal to the product and can this be so for. . . .
Imagine a large cube made from small red cubes being dropped into a pot of yellow paint. How many of the small cubes will have yellow paint on their faces?
What would you get if you continued this sequence of fraction sums? 1/2 + 2/1 = 2/3 + 3/2 = 3/4 + 4/3 =
Explore the effect of reflecting in two parallel mirror lines.
Can you describe this route to infinity? Where will the arrows take you next?
Can you find an efficient method to work out how many handshakes there would be if hundreds of people met?
It would be nice to have a strategy for disentangling any tangled ropes...
Pick the number of times a week that you eat chocolate. This number must be more than one but less than ten. Multiply this number by 2. Add 5 (for Sunday). Multiply by 50... Can you explain why it. . . .
In how many ways can you arrange three dice side by side on a surface so that the sum of the numbers on each of the four faces (top, bottom, front and back) is equal?
Can you work out how to win this game of Nim? Does it matter if you go first or second?
The number of plants in Mr McGregor's magic potting shed increases overnight. He'd like to put the same number of plants in each of his gardens, planting one garden each day. How can he do it?
Explore the effect of reflecting in two intersecting mirror lines.
This article for teachers describes several games, found on the site, all of which have a related structure that can be used to develop the skills of strategic planning.
The Egyptians expressed all fractions as the sum of different unit fractions. Here is a chance to explore how they could have written different fractions.
Find some examples of pairs of numbers such that their sum is a factor of their product. eg. 4 + 12 = 16 and 4 × 12 = 48 and 16 is a factor of 48.
Some students have been working out the number of strands needed for different sizes of cable. Can you make sense of their solutions?
Charlie has made a Magic V. Can you use his example to make some more? And how about Magic Ls, Ns and Ws?
Imagine we have four bags containing numbers from a sequence. What numbers can we make now?
Choose four consecutive whole numbers. Multiply the first and last numbers together. Multiply the middle pair together. What do you notice?
Consider all two digit numbers (10, 11, . . . ,99). In writing down all these numbers, which digits occur least often, and which occur most often ? What about three digit numbers, four digit numbers. . . .
Imagine we have four bags containing a large number of 1s, 4s, 7s and 10s. What numbers can we make?
Jo made a cube from some smaller cubes, painted some of the faces of the large cube, and then took it apart again. 45 small cubes had no paint on them at all. How many small cubes did Jo use?
What size square corners should be cut from a square piece of paper to make a box with the largest possible volume?
Start with any number of counters in any number of piles. 2 players take it in turns to remove any number of counters from a single pile. The loser is the player who takes the last counter.
A game for two people, or play online. Given a target number, say 23, and a range of numbers to choose from, say 1-4, players take it in turns to add to the running total to hit their target.
Start with any number of counters in any number of piles. 2 players take it in turns to remove any number of counters from a single pile. The winner is the player to take the last counter.
Are you aware of bullying taking place?
Are you experiencing a form of bullying?
Bullying/Hazing - District Policy
The governing board believes strongly that schools should be safe places for children and that the school district must make every effort to make schools physically and psychologically safe for all students. Just as the Board expects professional behavior of its staff, similar behavior is expected of the students. The Board also believes that students should not be disruptive or create a climate of fear by bullying other students verbally, in writing, or electronically. No child should be threatened, teased, taunted, or tormented for any reason.
In order to create a positive climate for education, all reports of bullying will be investigated and resolved promptly to avoid an atmosphere of harassment. Additionally, no student shall engage in hazing, participate in hazing, or commit any act that causes, or is likely to cause bodily danger, physical harm, or personal degradation or disgrace resulting in physical or mental harm to any fellow student.
WHAT IS BULLYING?
What is bullying and how does it differ in boys and girls?
What is bullying?
● An intentional act. The child who bullies wants to harm the victim; it is no accident.
● Characterized by repeat occurrences. Bullying is not generally considered a random act, nor a single incident.
● A power differential. A fight between two kids of equal power is not bullying; bullying is a fight where the child who bullies has some advantage or power over the child who is victimized.
Strategies students use to bully others:
● Physical - hitting, kicking, beating up, pushing, spitting, property damage, and/or theft.
● Verbal - teasing, mocking, name calling, verbal humiliation, verbal intimidation, threats, coercion, extortion, and/or racist, sexist or homophobic taunts.
● Social - gossip, rumor spreading, embarrassment, alienation or exclusion from the group, and/or setting the other up to take the blame.
● Cyber or electronic - using the Internet, email or text messaging to threaten, hurt, single out, embarrass, spread rumors, and/or reveal secrets about others.
Bullying and gender:
● Boys tend to be physically aggressive.
● Boys may be more accepting of bullying than girls.
● Boys are more likely to both bully and be bullied than girls.
● Girls tend to bully other girls indirectly through peer groups. Rather than bully a targeted child directly, girls more often share with others hurtful information about the targeted child.
● Girls experience sexual bullying more often than boys (for example, spreading rumors about sexual activity or being targeted as the recipient of sexual messages).
References on www.education.com
1. Shelley Hymel, Susan M. Swearer. Bullying: An age-old problem that needs new solutions.
2. Tanya Beran. Bullying: What are the Differences between Boys and Girls and How Can You
Legionella sp. under UV illumination (Brenner et al. 1979)
The genus Legionella is a pathogenic group of Gram-negative bacteria that includes the species L. pneumophila. It causes legionellosis (all illnesses caused by Legionella), including a pneumonia-type illness called Legionnaires' disease and a mild flu-like illness called Pontiac fever.
The side-chains of the cell wall carry the bases responsible for the somatic antigen specificity of these organisms. The chemical composition of these side chains both with respect to components as well as arrangement of the different sugars determines the nature of the somatic or O antigen determinants, which are essential means of serologically classifying many Gram-negative bacteria.
Legionella acquired its name after a July 1976 outbreak of a then-unknown "mystery disease" sickened 221 persons, causing 34 deaths. The outbreak was first noticed among people attending a convention of the American Legion—an association of U.S. military veterans. The convention in question occurred in Philadelphia during the U.S. Bicentennial year in July 21–24, 1976. This epidemic among U.S. war veterans, occurring in the same city as—and within days of the 200th anniversary of—the signing of the Declaration of Independence, was widely publicized and caused great concern in the United States.
On January 18, 1977, the causative agent was identified as a previously unknown bacterium subsequently named Legionella. See Legionnaires' disease for full details.
Legionella is traditionally detected by culture on buffered charcoal yeast extract (BCYE) agar. Legionella requires the presence of cysteine and iron to grow, and therefore does not grow on the common blood agar media used for laboratory-based total viable counts or on-site dipslides. Common laboratory procedures for the detection of Legionella in water concentrate the bacteria (by centrifugation and/or filtration through 0.2 micrometre filters) before inoculation onto a charcoal yeast extract agar containing antibiotics (e.g. glycine, vancomycin, polymyxin, cycloheximide: GVPC) to suppress other flora in the sample. Heat or acid treatment is also used to reduce interference from other microbes in the sample.
After incubation for up to 10 days, suspect colonies are confirmed as Legionella if they grow on BCYE containing cysteine, but not on agar without cysteine added. Immunological techniques are then commonly used to establish the species and/or serogroups of bacteria present in the sample.
Although the plating method is quite specific for most species of Legionella, one study has shown that a coculture method that accounts for the close relationship with amoebas may be more sensitive since it can detect the presence of the bacteria even when masked by its presence inside the amoeba. Consequently, the true clinical and environmental prevalence of the bacteria is likely to be underestimated due to false negatives inherent in the current lab methodology.
Many hospitals use the Legionella urinary antigen test for initial detection when Legionella pneumonia is suspected. Among the advantages of this test are that results can be obtained in a matter of hours, rather than the five days required for culture, and that a urine specimen is generally more easily obtained than a sputum specimen. The disadvantages are that the urine antigen test only detects antigen of Legionella pneumophila serogroup 1 (LP1), so only a culture will detect infection by non-LP1 strains or other Legionella species, and that no isolates of Legionella are obtained, which impairs public health investigations of outbreaks of Legionnaires' disease.
New techniques for the rapid detection of Legionella in water samples are emerging including the use of polymerase chain reaction (PCR) and rapid immunological assays. These technologies can typically provide much faster results.
Legionella live within amoebae in the natural environment.
Upon inhalation, the bacteria can infect alveolar macrophages, subverting the normal host cell machinery to create a niche where the bacteria can replicate. This results in Legionnaires' disease and its lesser form, Pontiac fever. Legionella transmission is airborne, via respiratory droplets containing the bacteria.
Once inside a host, incubation may take up to two weeks. Prodromal symptoms are flu-like, including fever, chills, and dry cough. Advanced stages of the disease cause problems with the gastrointestinal tract and the nervous system and lead to diarrhea and nausea. Other advanced symptoms of pneumonia may also present.
However, the disease is generally not a threat to most healthy individuals, and tends to lead to harmful symptoms only in those with a compromised immune system and the elderly. Consequently, it should be actively checked for in the water systems of hospitals and nursing homes. The Texas Department of State Health Services provides recommendations for hospitals to detect and prevent the spread of nosocomial infection (hospital-acquired disease) due to Legionella. According to the journal Infection Control and Hospital Epidemiology, hospital-acquired Legionella pneumonia has a fatality rate of 28%, and the source is the water distribution system.
In the United States, the disease affects between 8,000 and 18,000 individuals a year.
Person-to-person transmission of Legionella has not been demonstrated.
Legionella species typically exist in nature at low concentrations; they have been found in groundwater, lakes, and streams. After entering manmade equipment, given the right environmental conditions, the bacteria may reproduce.
Documented sources include cooling towers, swimming pools (especially in Scandinavian countries), domestic water systems and showers, ice making machines, refrigerated cabinets, whirlpool spas, hot springs, fountains, dental equipment, automobile windshield washer fluid and industrial coolant.
The largest and most common source of Legionnaires' disease outbreaks are cooling towers (heat rejection equipment used in air conditioning and industrial cooling water systems) primarily because of the risk for widespread circulation. Many governmental agencies, cooling tower manufacturers, and industrial trade organisations have developed design and maintenance guidelines for controlling the growth and proliferation of Legionella within cooling towers.
Recent research in the Journal of Infectious Diseases provides evidence that Legionella pneumophila, the causative agent of Legionnaires' disease, can travel at least 6 km from its source by airborne spread. It was previously believed that transmission of the bacterium was restricted to much shorter distances. A team of French scientists reviewed the details of an epidemic of Legionnaires' disease that took place in Pas-de-Calais, northern France, in 2003–2004. There were 86 confirmed cases during the outbreak, of which 18 resulted in death. The source of infection was identified as a cooling tower in a petrochemical plant, and an analysis of those affected in the outbreak revealed that some infected people lived as far as 6–7 km from the plant.
There is no vaccine for legionellosis, and antibiotic prophylaxis is not effective. Any licensed vaccine for humans in the US is most probably still many years away. Vaccination studies using heat-killed or acetone-killed cells have been carried out, and guinea pigs were challenged intraperitoneally or by using the aerosol model of infection. Both vaccines were shown to give moderately high levels of protection. Protection was found to be dose dependent and correlated with antibody levels as measured by enzyme-linked immunosorbent assay to an outer membrane antigen and by indirect immunofluorescence to heat-killed cells.
Legionella has been found to be genetically diverse, with 7–11% of genes strain-specific. The molecular functions of some of the proven virulence factors of Legionella have been discovered.
Control of Legionella growth can occur through chemical or thermal methods. The more expensive of these two options is temperature control, i.e., keeping all cold water below 25 °C (77 °F) and all hot water above 51 °C (124 °F). The high cost of this method arises from the extensive retrofitting required for existing complex distribution systems in large facilities, and from the energy cost of chilling or heating the water and maintaining the required temperatures at all times and at all distal points within the system.
A very effective chemical treatment is chlorine. For systems with marginal issues, chlorine will provide effective results at a 0.5 ppm residual in the hot water system. For systems with significant Legionella problems, temporary shock chlorination – where levels are raised above 2 ppm for a period of 24 hours or more and then returned to 0.5 ppm – may be effective. Hyper-chlorination can also be used: the water system is taken out of service and the chlorine residual is raised to 50 to 100 ppm or higher at all distal points for 24 hours or more. The system is then flushed and returned to 0.5 ppm chlorine before being placed back into service. These high levels of chlorine will penetrate biofilm, killing both the Legionella bacteria and the host organisms. Annual hyper-chlorination can be an effective part of a comprehensive Legionella prevention action plan.
Industrial-size copper-silver ionization is recognized by the U.S. Environmental Protection Agency and the WHO for Legionella control and prevention. Copper and silver ion concentrations must be maintained at optimal levels, taking into account both water flow and overall water usage, to control Legionella. Disinfection throughout a facility's water distribution network takes effect within 30 to 45 days. Key engineering features, such as 10 amps per ion chamber cell and automated variable voltage outputs spanning 0–100 VDC, are cited as requirements for proper Legionella control and prevention, though these figures come from a specific, non-referenced CuAg system. Swimming pool ion generators are not designed for potable water treatment.
Questions remain whether the silver and copper ion concentrations required for effective control of symbiotic hosts could exceed those allowed under the U.S. Safe Drinking Water Act's Lead and Copper Rule. In any case, any facility or public water system using CuAg for disinfection should monitor their copper and silver ion concentrations to ensure that they are within intended levels - both minimum and maximum. Further, there are no current standards for silver in the EU and other regions utilizing this technology.
CuAg ionization is an effective process to control Legionella in potable water distribution systems found in health facilities, hotels, nursing homes and most large buildings. CuAg is not intended for cooling towers, because pH levels over 8.6 cause ionic copper to precipitate. In 2003, researchers who strongly support ionization developed a validation process based on their own research. Ionization became the first such hospital disinfection process to have fulfilled a proposed four-step modality evaluation; by then it had been adopted by over 100 hospitals. Additional studies indicate ionization is superior to thermal eradication.
Chlorine dioxide has been approved by the EPA as a primary potable water disinfectant since 1945. Unlike chlorine, it does not produce carcinogenic byproducts, and it is not a restricted heavy metal like copper. It has proven excellent at controlling Legionella in cold and hot water systems, and its effectiveness as a biocide is not affected by pH or by water corrosion inhibitors such as silica or phosphate. Monochloramine is an alternative; like chlorine and chlorine dioxide, it is EPA approved as a primary potable water disinfectant. EPA registration requires a biocide label, which lists toxicity and other data required by the EPA for all registered biocides. If a product is sold as a biocide, the manufacturer is legally required to supply the biocide label, and the purchaser is legally required to apply the biocide according to it. When first applied to a system, chlorine dioxide can be added at disinfection levels of 2 ppm for 6 hours to clean up the system. This will not remove all biofilm but will effectively remediate the system of Legionella.
Several European countries established the European Working Group for Legionella Infections (EWGLI) to share knowledge and experience about monitoring potential sources of Legionella. The EWGLI has published guidelines about the actions to be taken to limit the number of colony-forming units (CFU, that is, live bacteria that are able to multiply) of Legionella per litre:
|Legionella bacteria CFU/litre||Action required (35 samples per facility are required, including 20 water and 10 swabs)|
|1000 or less||System under control.|
|more than 1000, up to 10,000||Review program operation. The count should be confirmed by immediate re-sampling. If a similar count is found again, a review of the control measures and risk assessment should be carried out to identify any remedial actions.|
|more than 10,000||Implement corrective action. The system should immediately be re-sampled. It should then be "shot dosed" with an appropriate biocide, as a precaution. The risk assessment and control measures should be reviewed to identify remedial actions. (150+ CFU/ml in healthcare facilities or nursing homes require immediate action.)|
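The action bands in the table above translate directly into a small lookup. The sketch below is a minimal illustration of that mapping; the function name and the wording of the messages are our own, and the thresholds are the per-litre figures from the table.

# Minimal encoding of the EWGLI action bands from the table above.
# Thresholds are CFU per litre; names and messages are illustrative.

def ewgli_action(cfu_per_litre: float) -> str:
    if cfu_per_litre <= 1000:
        return "System under control."
    if cfu_per_litre <= 10_000:
        return ("Review program operation: confirm the count by immediate "
                "re-sampling and, if repeated, review control measures and "
                "the risk assessment.")
    return ("Implement corrective action: re-sample immediately, shot-dose "
            "with an appropriate biocide, and review the risk assessment.")

for count in (500, 5_000, 50_000):
    print(count, "->", ewgli_action(count))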
Minimal monitoring guidelines are stated in ACOP L8 in the UK. These are not mandatory, but they are widely treated as such. An ACOP is an Approved Code of Practice, which an employer or property owner must follow or else achieve the same result by other means. Failure to show monitoring records to at least this standard has resulted in several high-profile prosecutions, e.g., Nalco and Bulmers: while an outbreak was being investigated, neither could prove a sufficient scheme was in place, and both were fined in the region of £300,000. Important case law in this area is R v Board of Trustees of the Science Museum [1993] 3 All ER 853, [1993] 1 WLR 1171.
Any building within the UK which is subject to the Health and Safety at Work etc. Act 1974 is required under COSHH and ACOP L8 to have a Legionella risk assessment carried out. The report should include a detailed narrative of the site, an asset register, simplified schematic drawings (if none are available on site), recommendations on compliance, and a proposed monitoring scheme.
Log books should be held on site for a minimum of 5 years. E-logbooks are available, but issues can arise if a site audit is carried out and the auditor cannot access the server for any reason (a user is not set up, someone is on holiday or ill, etc.). Electronic logbooks are generally more useful when managing large portfolios, but duplication in paper form is advisable because of the 5-year 'on site / available for inspection' requirement, which effectively rules out a fully paperless approach.
ACOP L8 and the associated regulations require that the Legionella risk assessment be reviewed at least every 2 years, and whenever there is reason to suspect it is no longer valid, such as when the water systems have been added to or modified, when the use of the water system has changed, or when the Legionella control measures are no longer working.
It has been suggested that Legionella could be used as a weapon; indeed, genetic modification of Legionella pneumophila has been shown to increase the mortality rate in infected animals to nearly 100%.
|
Incorporating Games Into Classroom Curriculum
Games are a motivating way to help students practice important skills and reinforce learning.
By Jacqueline Dwyer
Games are a quick and easy way to motivate children, and to foster an atmosphere of friendship and cooperation in the classroom. They can be adapted to suit the age and ability level of your students, and used across the curriculum at any point in the school year. The following games have worked well in my classroom, when both time and space have been limited.
Learning About Order
A popular game that can help students practice organizational and teamwork skills is called Line Up. Children in elementary school line up many times a day, so it is fun to make a game of it. When students play Line Up, they arrange themselves in order according to different criteria. Middle to upper elementary students might enjoy lining up according to their age, but for them you will want to add criteria so it is not too easy.
Begin by telling students that they are going to line up according to their age. Once students understand the basic idea, suggest that they use different criteria to arrange themselves in order. They can start by lining up according to the year they were born, then their birth month, and, finally, by their birth date. If two children happen to share the same birthday, they should line up according to their time of birth, if they know it. If not, morning or evening will suffice. If you think your students can do this activity without getting overexcited, make it harder for them by giving them a time limit! A variation on this game would be to have students use alphabetizing skills to line up according to the first letter of the street they live on. If more than one student lives on a street that starts with the same letter, they have to work together to figure out who comes first by alphabetizing using the second, or even the third, letter.
Putting the Puzzle Pieces Together
This is one of my favorite puzzle games. It requires a little preparation, but it's definitely worth it! You begin by laminating pictures related to a teaching theme, and then cutting each picture into a number of puzzle pieces, depending on the age of your students. Next, you hand a puzzle piece to each student. Then you ask students to walk quietly around the classroom to find their puzzle mates. As an extra challenge, use pictures that look similar. For example, if you’re teaching a unit on plants, you could use similarly colored examples that have leaves of different sizes or shapes. You might want to invite other teachers to contribute their own homemade puzzles. This way, you can have a collection of puzzles that travels from classroom to classroom around the school.
An Addition Game
Students in elementary grades never grow tired of playing an addition game called Digits. First, you place your students in pairs, facing each other. Then you count to three. On three, students hold out their hands, showing different numbers of fingers. Whoever first identifies how many fingers the other person is holding out is the winner. You can adapt this game for use with preschool students by asking them to hold out the fingers on only one hand. For older students, you can ask them to multiply the numbers of fingers they see.
It's All in a Phrase
The game Phrases works well when you are introducing a new topic that students might find confusing, or that contains a lot of information. It’s also a great way for students to practice their listening skills. You start by writing keywords or phrases on sentence strips, then you hand them to students. Once you start teaching the topic, students have to listen carefully for the keyword or phrase on their sentence strip. When they hear it, they put it up on the board. Once you are done with your lesson, you can use the strips to play the game again to review what students have learned. This time, however, students are given additional sentence strips, handed out at random, containing details related to the keyword or phrase. When I mention the phrase, both the student with the phrase and the student with the details relating to that phrase put the strips up on the board at the same time. What follows are some more ways to incorporate games into classroom curriculum.
Games Lesson Plans:
Students review recently learned material by following the format of several popular game shows.
Students review vocabulary words using a teacher-made pack of Go Fish cards.
Students collaborate with a partner to come up with their own unique games. They write the rules and demonstrate the game for their classmates.
|
Marriage is a sacred bond, a promise between two people to share their lives with each other forever.
However, a study revealed that the institution of marriage may have arisen to limit the spread of sexually transmitted diseases among ancient farmers.
As a trait, monogamy is prevalent in only three percent of mammal species. Ancient farmers were known to engage in an activity that today would be called “sleeping around.” The study found that, just as today, such practice increased the risk of contracting diseases like genital herpes. According to the study, STDs became more prevalent in large groups of people dwelling in villages, towns, and cities, and the increase in cases happened after hunter-gatherers settled down to farm.
With STDs on the rise, a shift started to happen. As revealed by the study, which was first reported by Discovery News, polygamists began to be considered outcasts, and the ancient farmers started choosing exclusive partners.
A mathematical model of hunter-gatherer demographics and the likely spread of sexually transmitted infections among them showed “how growing STI disease burden in larger residential group sizes can foster the emergence of socially imposed monogamy in human mating.”
The study revealed “how events in natural systems, such as the spread of contagious diseases, can strongly influence the development of social norms and in particular our group oriented judgements. Our research illustrates how mathematical models are not only used to predict the future, but also to understand the past,” Professor Chris Bauch, the lead author of the study was quoted as saying by the Daily Mail.
He also added that the natural environment “can strongly influence the development of social norms, and in particular our group-oriented judgements.”
|
Perfect resource for a writing center or individual student work areas. The alphabet linking chart comes in both full color and black and white. Students can use it to practice letter-sound relationships as a part of their daily routine and/or when spelling inventively. Additional activities are included to help students master the crucial link between written letters and spoken sounds!
- Cut and Paste Chart Literacy Center and/or remediation activity
- Magnetic Letter Match Literacy Center
- Letter Order Literacy Center
- Dab a Linking Letter Literacy Center
- Missing Parts worksheet
- Linking Letters Practice Sheets
In addition, a bonus Consonant Cluster chart is included.
Check out the preview for a closer look at the activities included.
You may also be interested in other activities from my store:
Back to School! Practice Sheets for Kindergarten ELA and Math
Chicka Chicka Boom Boom! Alphabet Activities
Letter Identification and Beginning Sounds Practice Sheets
Numbers 0 - 20 Practice Sheets
My Little Readers - Interactive Books - Bundle
Write it 4 Ways! For Letters
Write it 4 Ways! For Numbers 0 - 10
|
Scientists have concluded that more than half of the Earth’s heat comes from radioactive decay, and that this heat drives the movement of Earth’s continents and crust.
The confirmation comes after scientists used two detectors, Borexino in Italy and the Kamioka Liquid-Scintillator Antineutrino Detector (KamLAND) in Japan, to measure the flow of anti-neutrinos, the antiparticles of these neutral particles, emanating from the Earth. (Detailed results appeared July 17 in Nature Geoscience.)
Our Earth contains many radioactive elements, such as uranium, thorium, and potassium. When these radioactive materials decay, they release energy (heat) and anti-neutrinos. Since neutrinos and anti-neutrinos (geo-neutrinos) have no charge, they can travel through matter and space freely. Scientists can determine the amount of heat that results from radioactive decay by measuring these particles.
The scientists found that around 20 terawatts of Earth’s heat comes from the decay traced by the antineutrino emission, which is almost twice the energy used by humans at present. If this massive energy is combined with the roughly 4 terawatts from decaying potassium, it can move mountains, or cause the collisions that created them.
The accurate measurement was made possible by the shutdown of the Kashiwazaki-Kariwa nuclear reactor following an earthquake in Japan (2007). Had the plant not been shut down, its reactor antineutrinos would have mixed with the naturally emitted geo-neutrinos, making it difficult for the KamLAND team to make correct measurements.
The detector is shielded from cosmic rays, which share some properties with neutrinos and anti-neutrinos. The detector, 13 meters in diameter, is a transparent balloon filled with a mixture of special liquid hydrocarbons. The balloon is suspended in a bath of mineral oil contained in an 18-meter-diameter stainless steel sphere, covered on the inside with detector tubes. The detector captured the telltale signature of some 90 geo-neutrinos over the course of seven years of measurements.
The measurements suggest that radioactive decay supplies more than half of Earth’s total heat flow, which is estimated at around 44 terawatts on the basis of temperatures found at the bottom of deep boreholes drilled into the planet’s crust. The decay happens in the crust and mantle of the Earth, and some additional heat may remain trapped in Earth’s molten iron core. The Earth is unlikely to cool quickly: because of the long half-lives of these elements, the heat that drives the collision of continents will persist for a very long time.
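A quick back-of-the-envelope check of the fractions quoted above can be done in a couple of lines. This is a sketch using only the figures from the text (20 TW traced by geo-neutrinos, 4 TW from potassium, 44 TW total):

# Rough check of the "more than half" claim using the article's numbers.
radiogenic_tw = 20 + 4   # uranium/thorium (geo-neutrino measurement) + potassium
total_tw = 44            # total surface heat flow from borehole temperatures
print(f"radiogenic fraction: {radiogenic_tw / total_tw:.0%}")  # about 55%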
[via Scientific American]
|
Wisdom teeth, or third molars, are the last teeth to develop and appear in your mouth. They come in between the ages of 17 and 25, a time of life that has been called the "Age of Wisdom."
Wisdom teeth may not need to be extracted if they grow in completely and are functional, painless, cavity-free, disease-free, and in a hygienic environment with healthy gum tissue. They do, however, require regular professional cleaning, annual check-ups, and periodic X-rays to monitor for any changes.
When a tooth doesn't fully grow in, it's "impacted": usually unable to break through the gums because there isn't enough room.
An impacted wisdom tooth can damage neighboring teeth or become infected. Because it's in an area that’s hard to clean, it can also invite bacteria that lead to gum disease. Oral bacteria can also travel through your bloodstream and lead to infections and illnesses that affect your heart, kidneys and other organs. In some cases, a cyst or tumor can form around the base of the impacted tooth, which can lead to more serious problems as it hollows out the jaw and damages surrounding nerves, teeth and other parts of your mouth and face.
Generally, wisdom teeth should be surgically removed when there are:
- Infections and/or periodontal (gum) disease
- Cavities that can’t be restored
- Cysts, tumors or other pathologies
- Damage to neighboring teeth
|
Targeted therapy is a cancer treatment that uses drugs. However, it is different from traditional chemotherapy. The drugs known as targeted therapy help stop cancer from growing and spreading. They work by targeting specific genes or proteins. These genes and proteins are found in cancer cells or in cells related to cancer growth, like blood vessel cells.
Doctors often use targeted therapy with chemotherapy and other treatments. So it might be part of your treatment. The U.S. Food and Drug Administration (FDA) has approved targeted therapies for many types of cancer. Scientists are also testing drugs on new cancer targets.
The “targets” of targeted therapy
Knowing how cancer cells develop helps understand how targeted therapy works. First, cells make up every tissue in your body. There are many different cell types, such as blood cells, brain cells, and skin cells. Each type has a specific function. Cancer begins when specific genes in healthy cells change. Scientists call the change a mutation.
Genes tell cells how to make proteins that keep the cell working. If the genes change, these proteins change, too. This makes cells divide abnormally or live too long. When this happens, the cells grow uncontrollably. The out-of-control cells form a tumor. Learn more about the genetics of cancer.
Researchers are learning that certain gene changes happen in specific cancers. So they are developing drugs that target the changes. The drugs can:
Block or turn off the signals that tell cancer cells to grow and divide,
Keep cells from living longer than normal, or
Kill the cancer cells.
Types of targeted therapy
There are two main types of targeted therapy:
Monoclonal antibodies. Drugs called “monoclonal antibodies” block a specific target on the outside of cancer cells. Or the target might be in the area around the cancer. These drugs work like a plastic plug you put in an electric socket. The plug keeps electricity from flowing out of the socket. Monoclonal antibodies can also send toxic substances directly to cancer cells. For example, they can help chemotherapy and radiation get to cancer cells better. You usually get these drugs intravenously (IV).
Small-molecule drugs. Drugs called “small-molecule drugs” can block the process that helps cancer cells multiply and spread. These drugs are usually pills you take. Angiogenesis inhibitors are one example of this type of targeted therapy. These drugs keep tissue around the tumor from making blood vessels. Angiogenesis is the name for making new blood vessels. A tumor needs blood vessels to bring it nutrients. The nutrients help it grow and spread. Anti-angiogenesis therapies starve the tumor by keeping new blood vessels from developing.
Matching a patient to treatment
Studies show that not all tumors have the same targets. So the same targeted treatment does not work for everyone. For example, a gene called KRAS (pronounced kay-rass) controls tumor growth and spread. About 40% of colorectal cancers have this gene mutation. When this happens, the targeted therapies cetuximab (Erbitux) and panitumumab (Vectibix) do not work. The American Society of Clinical Oncology (ASCO) recommends that patients with metastatic colorectal cancer have their tumors tested for KRAS mutations. This helps your doctor give you the most effective treatment. It also protects you from unnecessary side effects. And you do not have to pay for drugs that probably will not help.
Your doctor might order tests to learn about the genes, proteins, and other factors in your tumor. This helps find the most effective treatment. Many targeted therapies cause side effects. Also, they can be expensive. So doctors try to match every tumor to the best possible treatment. Learn more about the importance of molecular testing.
Examples of targeted therapies
Below are a few examples of targeted therapies. Ask your doctor or another member of your health care team for more information.
Breast cancer. About 20% to 25% of all breast cancers have too much of a protein called human epidermal growth factor receptor 2 (HER2, pronounced her-too). This protein makes tumor cells grow. ASCO and the College of American Pathologists recommend HER2 testing for everyone with invasive breast cancer. If the cancer is HER2 positive, several targeted therapies are available.
Colorectal cancer. Colorectal cancers often make too much of a protein called epidermal growth factor receptor (EGFR). Drugs that block EGFR may help stop or slow cancer growth, but only when the cancer has no mutation in the KRAS gene. Another option is a drug that blocks vascular endothelial growth factor (VEGF, pronounced vedge-eff). This protein helps make new blood vessels.
Lung cancer. Drugs that block the protein called EGFR may stop or slow down lung cancer. This may be more likely if the EGFR has certain mutations. Targeted therapy is also available for lung cancer with a mutation in the ALK gene. Doctors can also use angiogenesis inhibitors for certain lung cancers.
Melanoma. About half of melanomas have a mutation in the BRAF gene (pronounced bee-raff). Researchers know specific BRAF mutations make good drug targets. So the FDA has approved several BRAF inhibitors. But these drugs can be dangerous if you do not have the BRAF mutation.
The list above does not include every targeted therapy. Researchers are studying many new targets and drugs. You can learn more about specific drugs and targeted therapy in other cancers in the cancer type guides. Look at the Treatment Options and Latest Research pages.
Challenges of targeted therapies
Using a drug that works on your specific cancer may seem simple. But targeted therapy is complicated and not always effective. It is important to remember that:
A targeted treatment will not work if the tumor does not have the target.
Having the target does not mean the tumor will respond to the drug.
For example, the target may not be as important as doctors first thought. So the drug may not help much. Or the drug might work at first but then stop working. Finally, targeted therapy drugs may cause serious side effects. These are usually different from traditional chemotherapy effects. For example, patients getting targeted therapy often develop skin, hair, nail, or eye problems.
Targeted therapy is an important cancer treatment. But so far, doctors can only get rid of a few cancers with these drugs alone. Most patients also need surgery, chemotherapy, radiation therapy, or hormone therapy. Researchers will develop more targeted drugs as they learn more about specific changes in cancer cells.
|
Sweating sickness, also called English sweat or English sweating sickness, a disease of unknown cause that appeared in England as an epidemic on five occasions—in 1485, 1508, 1517, 1528, and 1551. It was confined to England, except in 1528–29, when it spread to the European continent, appearing in Hamburg and passing northward to Scandinavia and eastward to Lithuania, Poland, and Russia; the Netherlands also was involved, but with the exception of Calais (a seaport in northern France), the disease did not spread to France or Italy.
Apart from the second outbreak, all the epidemics were severe, with a very high mortality rate. The disease was fully described by the physician John Caius, who was practicing in Shrewsbury in 1551 when an outbreak of the sweating sickness occurred. His account, A Boke or Counseill Against the Disease Commonly Called the Sweate, or Sweatyng Sicknesse (1552), is the main historical source of knowledge of the extraordinary disease.
The illness began with rigors, headache, giddiness, and severe prostration. After one to three hours, violent, drenching sweat came on, accompanied by severe headache, delirium, and rapid pulse. Death might occur from 3 to 18 hours after the first onset of symptoms; if the patient survived for 24 hours, recovery was usually complete. Occasionally there was a vesicular rash. Immunity was not conferred by an attack, and it was not unusual for patients to have several attacks. Each epidemic lasted for only a few weeks in any particular locality.
Since 1578 the only outbreaks of a disease resembling the English sweat have been those of the Picardy sweat, which occurred frequently in France between 1718 and 1861. In that illness, however, there was invariably a rash lasting for about a week, and the mortality rate was lower.
It is difficult to know what the sweating sickness really was. Caius attributed it to dirt and filth. All the epidemics occurred in late spring or summer, so it may very well have been spread by insects. The disease seemed to be more severe among the rich than among the poor, and the young and healthy were frequent victims. It is unlikely to have been a form of influenza or typhus. One 20th-century writer identified it with relapsing fever, which is spread by lice and ticks and has many characteristics in common with sweating sickness. That explanation is certainly plausible. It is improbable that sweating sickness should appear as a well-defined disease and then vanish altogether, although such disappearances, while rare, are not unknown. Contemporary scholars have suggested that the illness was caused by hantavirus infection.
|
Summary of the Clean Water Act
33 U.S.C. §1251 et seq. (1972)
The Clean Water Act (CWA) establishes the basic structure for regulating discharges of pollutants into the waters of the United States and regulating quality standards for surface waters. The basis of the CWA was enacted in 1948 and was called the Federal Water Pollution Control Act, but the Act was significantly reorganized and expanded in 1972. "Clean Water Act" became the Act's common name with amendments in 1972.
Under the CWA, EPA has implemented pollution control programs such as setting wastewater standards for industry. We have also set water quality standards for all contaminants in surface waters.
The CWA made it unlawful to discharge any pollutant from a point source into navigable waters, unless a permit was obtained. EPA's National Pollutant Discharge Elimination System (NPDES) permit program controls discharges. Point sources are discrete conveyances such as pipes or man-made ditches. Individual homes that are connected to a municipal system, use a septic system, or do not have a surface discharge do not need an NPDES permit; however, industrial, municipal, and other facilities must obtain permits if their discharges go directly to surface waters.
The Office of Water (OW) ensures drinking water is safe, and restores and maintains oceans, watersheds, and their aquatic ecosystems to protect human health, support economic and recreational activities, and provide healthy habitat for fish, plants, and wildlife.
- The EPA Watershed Academy provides training courses on statutes, watershed protection, and other key Clean Water Act resources.
|
Before you can understand what SSB is, you must understand how audio is transmitted via radio waves. The method by which audio is impressed on a radio signal is called modulation. The two types of modulation that most people are familiar with are AM (amplitude modulation) and FM (frequency modulation), for which the AM and FM broadcast bands were named.
In an AM-modulated radio signal, a base signal, called the carrier, is continuously broadcast. The two modulating signals are called the sidebands. Any audio that you hear on an AM broadcast station comes from the two sidebands. When the radio station is not transmitting any sound, you can still hear that a signal is present; that is the carrier. These two modulating (audio) sidebands are located on either side of the carrier signal, one just above and the other just below. As a result, the sideband located just above the carrier frequency is called the upper sideband, and the one located just below the carrier frequency is called the lower sideband.
The pieces that fit together to form an AM broadcast signal are quite important. Although AM signals were transmitted almost exclusively for decades, it was discovered that the AM signal could be dissected. The first amateur radio operators to experiment with these processes often used both sidebands without the carrier. This is known as double sideband (DSB). DSB was typically used in the earlier operations because it was much easier to strip out just the carrier than to strip out the carrier and one of the sidebands.
Several years later (and still true today), it was much more common in the amateur bands to transmit merely using one of the sidebands, which is known as single sideband (SSB). Single sideband transmissions can consist of either the lower sideband (LSB) or the upper sideband (USB). If you listen to an SSB signal on an AM modulation receiver, the voices are altered and sound a lot like cartoon ducks. As a result, you must have a special SSB receiver to listen to these transmissions. Although this was often difficult for the amateur radio operators of the 1950s to obtain, it is no longer a problem with today's modern SSB transceivers, such as the SG-2000 and SG-2020.
Broadcasters Need Fidelity
You might wonder why SSB modulation is used for some applications and AM is used for broadcasting. Broadcasters must have excellent audio fidelity when transmitting music; otherwise, the typical radio listener will tune to another station. In order to achieve excellent fidelity when transmitting music, both sidebands and the carrier are necessary. To produce this AM signal, the transmitter is, in effect, working as three transmitters: one to produce a strong carrier, one for the upper sideband, and one for the lower sideband. The result is that approximately half of the transmitter power is "wasted" on a blank carrier and the rest of the power is divided between the two sidebands. As a result, the actual audio output from a 600-watt AM transmitter (300 watts of carrier + 150 watts on each sideband) would be the same as that of the SG-2000 150-watt SSB transmitter.
SSB's High Efficiency
Let's run some numbers: Suppose you have a typical 5-kW broadcast transmitter. You will only be able to impress 2.5 kW of audio power on that signal, which means that each of the two sidebands will carry only 1.25 kW. A single sideband transmitter, by contrast, removes the carrier and one sideband and concentrates all of its energy in the remaining sideband. Thus, a 1-kW SSB signal will "talk" as far as a 4-kW conventional AM transmitter. That is one reason why long distances can be covered effectively with SSB. Single sideband's benefit is not only evident on transmission; the same advantage applies on receive. When you work out the math, the efficiency of an SSB signal is 16 times greater than that of a conventional AM signal.
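The arithmetic behind these figures is simple enough to check. The sketch below assumes the power split described above (half to the carrier, the remainder divided between two sidebands); the 16x figure then follows from counting the same 4x advantage on both transmit and receive.

# Power split of a conventional AM transmitter, per the text above:
# half the power goes to the carrier, and the rest is split between
# the two sidebands, so one quarter lands in any single sideband.

def am_power_split(total_watts: float):
    carrier = total_watts / 2
    per_sideband = total_watts / 4
    return carrier, per_sideband

for total in (600, 5000):
    carrier, sideband = am_power_split(total)
    print(f"{total} W AM -> {carrier:.0f} W carrier, {sideband:.0f} W per sideband")
# 600 W AM  -> 300 W carrier, 150 W per sideband (the SG-2000 comparison)
# 5000 W AM -> 2500 W carrier, 1250 W per sideband
# An SSB transmitter puts all its power in one sideband: a 4x advantage
# on transmit, and 4 x 4 = 16x when the receive side is included.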
HF Signal Characteristics
HF (high frequency) is synonymous with the more familiar term, shortwave. The only difference is that HF is the term typically used for two-way and point-to-point communications. Shortwave is typically used when referring to broadcast stations in the same range. In amateur radio, both terms are frequently used. The HF band extends from 1700 to 30,000 kHz (1.7 to 30 MHz). To give some perspective to these numbers:
The AM broadcast band runs from 540 to 1630 kHz.
The Citizen's Band (CB) runs from 26,960 to 27,230 kHz (within the HF band).
Television channel 2 is on 54,000 kHz (in the VHF band).
Each of these sample frequencies has different characteristics, and it is vitally important to learn this information so that you can effectively use the HF spectrum. When talking about HF, most people list the frequencies in either kHz (kilohertz) or MHz (megahertz). This is a matter of convenience only. The base rate for frequency is the hertz (Hz), named after Heinrich Hertz, an important "father of radio." One kHz equals 1000 Hz and one MHz equals 1,000 kHz (1 million Hz).
The Hz divisions of the radio spectrum relate directly to the frequency. Signals such as light, radio, and sound are all waves. These waves travel through the air in a manner that is somewhat similar to waves in a pond. Each radio wave has a peak and a valley. The length of each radio wave is (not surprisingly) known as the wavelength. Radio waves travel at the speed of light, so the longer each wave is, the fewer waves can arrive in one second. The number of waves that arrive per second determines the frequency.
Although the wavelength and the frequency are different ways of saying the same thing, wavelengths for radio are rarely given. In the 1920s through the 1940s, the wavelength was more frequently used than the frequency. This was probably the case because the wavelength seemed like a more tangible measurement at the time. The wavelength of the radio signal is also important because it determines the length of the antenna that you will need for receiving and especially for transmitting.
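Since radio waves travel at the speed of light, the wavelength follows directly from the frequency: wavelength = c / frequency. As a minimal illustration, the sketch below computes the wavelength for the sample frequencies listed above (the labels are ours):

# Wavelength from frequency: wavelength (m) = c / f (Hz).
C = 299_792_458  # speed of light in m/s

def wavelength_m(freq_hz: float) -> float:
    return C / freq_hz

for label, khz in [("AM broadcast (1,000 kHz)", 1_000),
                   ("CB (27,000 kHz)", 27_000),
                   ("Top of HF (30,000 kHz)", 30_000)]:
    print(f"{label}: {wavelength_m(khz * 1e3):.1f} m")
# AM broadcast (1,000 kHz): 299.8 m
# CB (27,000 kHz): 11.1 m
# Top of HF (30,000 kHz): 10.0 m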
Because of the signal characteristics on the AM and FM broadcast bands, combined with the less effective internal antennas, radio signals are often thought of as useful primarily for local reception (100 miles or so). However, with two-way communications in the HF band, you are not listening for entertainment to the strongest station that you can find. You are attempting to communicate with a particular station, possibly under life-threatening circumstances.
In the 1910s and 1920s, most radio enthusiasts thought that the wavelengths below 180 meters were useless, that the frequencies above the top of today's AM broadcast band were unusable. Little did they know that the opposite was true for communications over medium to long distances. These pioneers were misled because they didn't yet understand the methods by which radio waves travel.
When you listen to a local AM broadcast station, you are receiving the ground wave signal. The ground wave travels along the ground for often a hundred miles or so from the transmitter location. The low frequencies, such as those in the AM broadcast band and lower, produce large ground-wave patterns that produce solid, virtually fade-free reception.
You could also receive sky waves. Sky waves travel toward the sky, rather than staying near the ground. You would not be able to hear sky-wave signals at all were it not for the ionosphere. The ionosphere is many miles above the earth, where the air is "thin," containing few molecules. Here, this layer is bombarded by x-rays, ultraviolet rays, and other forms of high-frequency radiation. The energy from the sun ionizes the layer by stripping electrons from the atoms.
When a sky-wave signal reaches the ionosphere, it will either pass through it or the layer will refract the signal, bending it back to earth. The signal can be heard in that area where the signal reaches the earth, but depending on a number of variables, there might be an area where no signal from that particular transmitter is audible between the ground wave and where the sky wave landed. This area is the skip zone. After the sky-wave signal bounces on the earth, it will return toward the sky again.
Skipping Around the World
Again, the signal will be refracted by the ionosphere and return to the earth. If the HF signals all bent and bounced off the ionosphere with no loss in signal strength, HF stations around the world would be heard across the earth with perfect signals (something like if a "super ball" was sent bouncing in a frictionless room). Whenever radio signals are refracted by the ionosphere or bounce from the earth, some of the energy is changed into heat, causing absorption of the signal. As a result, the signal at the first skip is stronger than the signal at the second skip, and so on. After several skips, typical HF signals will dissipate.
The skip and ground waves can be remarkably close together. It is not unusual for one station to receive a booming signal while a nearby station cannot hear a trace of the sending station, even when using a better receiver with a better antenna. The first station was receiving either the ground wave or the first skip, and the other station was located somewhere in the skip zone between the two.
Angles of Radiation
If HF users only had skip to contend with, the theories and uses of the HF spectrum would be simple. But several other factors also come into play. The critical angle of radiation is the steepest angle at which a radio signal can be refracted by the ionosphere. The critical angle depends on such factors as the frequency being used, the time of year, and the time of day. Sometimes a signal that shoots straight up from the antenna will be refracted by the ionosphere; in this case, the critical angle would be 0 degrees. In another case, the signal might slice through the ionosphere and continue into space. From this signal, you would not be able to determine the critical angle; you would only know that the sky-wave signal was above the critical angle.
Natural Cycles Affect Propagation
Aside from the critical angle, the frequency used can also affect whether the signal will be passed through or refracted by the ionosphere. When a signal penetrates through the ionosphere without being refracted, the signal is said to operate above the Maximum Usable Frequency (MUF). The MUF is not a set frequency; it varies greatly, depending on the time of day and the part of the world that you are attempting to contact. Nearly the opposite of the MUF is the lowest usable frequency (LUF). However, the LUF has nothing to do with whether or not the signal will be refracted by the ionosphere; instead, it is the lowest frequency that you can use to reach a particular region (using a base standard amount of power).
In the daylight hours, the MUF is highest; in night hours, it is lower. There is also some seasonality: in the winter, with longer hours of darkness, the MUF is generally lower than in the summer. Likewise, during the hours of darkness, when the ionosphere is less ionized, the LUF is lower, and during the daylight hours, it is much higher. The MUF and the LUF provide the boundaries between which you should operate the transceiver in order to make your contacts.
Cycles that Affect Propagation
Propagation is affected by cyclical environmental conditions. The shortest of these cycles is the day/night cycle. In general, transmitting and receiving conditions are by far the best in the nighttime hours. During the daytime, the MUF and LUF both rise; in order to talk across great distances, higher, less reliable (because of the very long skip) frequencies must be used. The season of the year also affects propagation. The winter/summer cycles work somewhat like the day/night cycles, but with a lesser influence. In general, the MUF and LUF will both be higher in the summer and lower in the winter. Also, the noise from thunderstorms and other natural phenomena is much higher during the summer. In fact, except for local transmissions, communications in the 1700- to 3000-kHz range during the summertime are of limited regular use.
The longest environmental cycle that affects propagation is the sunspot cycle. Before the age of radio, it was noticed that the number of solar storms (sunspots) varies from year to year, and not entirely at random. The number of solar storms during a good propagational month exceeds 150, while the number during a weak month is often fewer than 30. The sunspot cycle reaches its peak approximately every 11 years, and these cycles have a great impact on radio propagation.
Between these peaks are several years with very low sunspot activity. During years with high sunspot activity, the MUF dramatically increases, and long-distance communications across much of the HF band are possible. During the peak of the last sunspot cycle, in 1989, the MUF was often above 30 MHz! When the cycle is at its low point, the MUF decreases and much less of the HF band is usable for long-range communications. Generally, the frequencies above 10,000 kHz dramatically improve during the peak years of the sunspot cycle, and the frequencies below 10,000 kHz are much less affected.
Although the distances over which HF radio signals can be received are amazing in comparison to the other radio bands, several types of distance-related interference can ruin reception or make listening unpleasant. The most widespread type of interference fits under the broad heading of noise. Noise consists of natural and man-made noise. Natural noise is produced by everything from thunderstorms to planets (hence, radio telescopes).
Thunderstorms are the worst because they cause very loud crashes; because of the long distances that shortwave signals travel, the noise produced by thunderstorms is also likely to travel hundreds of miles (or farther). Even if the weather is clear (you should never operate HF equipment during a local thunderstorm), a distant thunderstorm could ruin your reception of a weak station that would otherwise be audible at your location.
Man-made interference can arrive from a vast variety of sources. If nothing else, at least most man-made interference is limited in its range; most is limited to the building that the radio equipment is located in or to a several-block surrounding area. One of the worst causes of man-made interference is fluorescent lights, which create a medium-strength buzz across the HF range, although it is often at its worst on the lower frequencies. In fact, fluorescent lights near an antenna can drown a normally receivable signal. If your radio is located near computers, it will probably receive a light buzz across the bands and much stronger "bleeps."
Adjacent-channel interference is a special type of man-made interference where a station from a nearby frequency is "washing over" or "splattering across" another. A somewhat similar type of interference is co-channel interference, where the interfering station is on the same frequency. A good example of co-channel interference is the 1400- to 1500 kHz "graveyard" region of the AM broadcast band in the evening hours, where dozens of signals are all "fighting" to be heard.
Other types of HF interference cause signal distortion from propagational effects. One of the most interesting effects is polar echo, which occurs when one component of a radio signal takes an East-West path and another arrives over one of the poles of the Earth. Most every morning, one can tune into one of the BBC broadcast transmitters and hear the effect of polar echo. Because the signals take different paths, they arrive at different times, creating an echo on the audio signal. During the lightest effects, the voices sound a bit "boomy;" at worst, the delay is so long that the programming is difficult to understand. A related phenomenon is polar flutter, where the signal passes over one of the poles and quickly fades up and down in strength, creating a "fluttery" sound.
Fading is the most common and damaging form of propagational interference. The two most common types of fading are selective fading and multipath fading. With selective fading, the ionosphere changes orientation quickly and the reception is altered (somewhat like a ripple passing through the signal). FM and AM signals are especially prone to selective fading, SSB is slightly affected, and the CW mode is almost free from it. The other type, multipath fading, occurs when signals take different paths to arrive at the same location. Multipath fading is a variation of polar echo; instead of the signals creating an echo effect, the phase of the signals is altered as they are refracted by the atmosphere. As a result, the received signal fades in and out.
The last major propagational effect does not actually cause interference to a signal; it absorbs it. Although sun spots are beneficial to propagation as a whole, solar flares destroy communications. During a solar storm, communications across a wide frequency range can suddenly be cut off. Many listeners have thought that their receivers either weren't working or that the exterior antenna had come down because virtually no signals were audible. Instead, they had turned on their radios during a major solar flare. On the other hand, other listeners had thought they were listening during a solar flare, but actually didn't have their antenna connected or they had tuned their radio above the MUF or below the LUF.
Signals take various routes to travel to a receiver from the transmitter. The problems that can result from signal paths include polar flutter and echo, and multipath fading. The signal path is also important when attempting to contact or receive signals from a particular area. When you receive a signal, you can typically assume that it took the shortest path to reach you (i.e. you could connect the points between the transmitting and receiving locations with a line on a globe). This is known as short-path reception. Exceptions to this rule occur when two or more different paths are nearly the same distance (such as the BBC example of polar flutter, where the north-south path isn't much longer than the east-west path).
The other major signal path is the long path. The long-path radio signal travels the opposite direction from the short-path signal. For example, the long-path signal from the BBC transmitter (mentioned earlier) would be east: across Europe, Asia, the Pacific Ocean, most of North America, finally arriving in Pennsylvania. Signals received via long path are often very weak--especially if the long path was very long and the frequency is low.
On the other hand, if the station is on the other side of the world and there is little difference between the long path and the short path, you could be receiving either or both. This case occurred recently to a listener on the east coast of the USA who was listening to a small, private broadcast station from New Zealand -- 12 time zones away. At the same time he was listening to it, it was also being heard throughout North America and in Germany. Because the signals were generally a bit better in the West and Midwest, we can assume that he heard the Pacific Ocean-to-Western North America route, rather than the one that passed through Asia and Europe.
One of the most intriguing propagational anomalies is the effect of the grey line on HF radio transmissions. The grey line region is the part of the world that is neither in darkness nor in daylight. Because two grey-line stripes move constantly around the earth, the propagational alterations are brief (usually only about an hour or so in length). Many amateurs and hard-core radio listeners actively scour the bands at sunrise or sunset. The ionosphere is highly efficient at these times, so listeners can often pull in some amazing signals. Grey-line propagation is probably of far less interest to those who use the radio bands in conjunction with their occupation. If you are one of these users, chances are that grey-line propagation will be either a curiosity or a nuisance, as more stations that could cause interference to your signal become audible.
|
Sample maximum and minimum
In statistics, the sample maximum and sample minimum, also called the largest observation, and smallest observation, are the values of the greatest and least elements of a sample. They are basic summary statistics, used in descriptive statistics such as the five-number summary and seven-number summary and the associated box plot.
The minimum and the maximum value are the first and last order statistics (often denoted X(1) and X(n) respectively, for a sample size of n).
If there are outliers, they necessarily include the sample maximum or sample minimum, or both, depending on whether they are extremely high or low. However, the sample maximum and minimum need not be outliers, if they are not unusually far from other observations.
The sample maximum and minimum are the least robust statistics: they are maximally sensitive to outliers.
This can either be an advantage or a drawback: if extreme values are real (not measurement errors), and of real consequence, as in applications of extreme value theory such as building dikes or financial loss, then outliers (as reflected in sample extrema) are important. On the other hand, if outliers have little or no impact on actual outcomes, then using non-robust statistics such as the sample extrema simply cloud the statistics, and robust alternatives should be used, such as other quantiles: the 10th and 90th percentiles (first and last decile) are more robust alternatives.
Other than being a component of every statistic that uses all samples, the sample extrema are important parts of the range, a measure of dispersion, and mid-range, a measure of location. They also realize the maximum absolute deviation: they are the furthest points from any given point, particularly a measure of center such as the median or mean.
Firstly, the sample maximum and minimum are basic summary statistics, showing the most extreme observations, and are used in the five-number summary and seven-number summary and the associated box plot.
The sample maximum and minimum provide a non-parametric prediction interval: in a sample set from a population, or more generally an exchangeable sequence of random variables, each sample is equally likely to be the maximum or minimum.
Thus if one has a sample set of size n and one picks another sample, then the new sample has probability 1/(n+1) of being the largest value seen so far, probability 1/(n+1) of being the smallest value seen so far, and thus falls between the sample maximum and sample minimum the other (n-1)/(n+1) of the time. Thus, denoting the sample maximum and minimum by M and m, this yields an (n-1)/(n+1) prediction interval of [m, M].
For example, if n=19, then [m,M] gives an 18/20 = 90% prediction interval – 90% of the time, the 20th observation falls between the smallest and largest observation seen heretofore. Likewise, n=39 gives a 95% prediction interval, and n=199 gives a 99% prediction interval.
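A minimal sketch of this interval, assuming nothing about the underlying distribution beyond exchangeability (the sample below happens to be normal, but any distribution would do):

# Non-parametric prediction interval from the sample extrema:
# the next observation falls in [min, max] with probability (n-1)/(n+1).
import random

def prediction_interval(sample):
    n = len(sample)
    return min(sample), max(sample), (n - 1) / (n + 1)

random.seed(0)
sample = [random.gauss(0, 1) for _ in range(19)]
m, M, coverage = prediction_interval(sample)
print(f"[{m:.2f}, {M:.2f}] covers the 20th draw with probability {coverage:.0%}")
# With n = 19 this reproduces the 90% figure quoted above.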
With clean data or in theoretical settings, however, the sample extrema can sometimes prove very good estimators of location, particularly for platykurtic distributions, where for small data sets the mid-range is the most efficient estimator. They are inefficient estimators of location for mesokurtic distributions, such as the normal distribution, and for leptokurtic distributions.
For sampling without replacement from a uniform distribution with one or two unknown endpoints (so with N unknown, or with both M and N unknown), the sample maximum, or respectively the sample maximum and sample minimum, are sufficient and complete statistics for the unknown endpoints; thus an unbiased estimator derived from these will be the UMVU estimator.
If only the top endpoint is unknown, the sample maximum is a biased estimator for the population maximum, but the unbiased estimator m + m/k - 1 (where m is the sample maximum and k is the sample size) is the UMVU estimator; see the German tank problem for details.
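A minimal sketch of this estimator, using the classic four-serial example often quoted for the German tank problem:

# UMVU estimator of an unknown upper endpoint N, sampling without
# replacement from {1, ..., N}: N_hat = m + m/k - 1.

def umvu_max(sample):
    m, k = max(sample), len(sample)
    return m + m / k - 1

print(umvu_max([19, 40, 42, 60]))  # 74.0 -- the estimate of N from four serials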
If both endpoints are unknown, then the sample range is a biased estimator for the population range, but correcting as for maximum above yields the UMVU estimator.
If both endpoints are unknown, then the mid-range is an unbiased (and hence UMVU) estimator of the midpoint of the interval (here equivalently the population median, average, or mid-range).
The reason the sample extrema are sufficient statistics is that the conditional distribution of the non-extreme samples is just the distribution for the uniform interval between the sample maximum and minimum – once the endpoints are fixed, the values of the interior points add no additional information.
The sample extrema can be used for a simple normality test, specifically of kurtosis: one computes the t-statistic of the sample maximum and minimum (subtracts sample mean and divides by the sample standard deviation), and if they are unusually large for the sample size (as per the three sigma rule and table therein, or more precisely a Student's t-distribution), then the kurtosis of the sample distribution deviates significantly from that of the normal distribution.
For instance, a daily process should expect a 3σ event once per year (of calendar days; once every year and a half of business days), while a 4σ event happens on average every 40 years of calendar days, 60 years of business days (once in a lifetime), 5σ events happen every 5,000 years (once in recorded history), and 6σ events happen every 1.5 million years (essentially never). Thus if the sample extrema are 6 sigmas from the mean, one has a significant failure of normality.
Further, this test is very easy to communicate without involved statistics.
These tests of normality can be applied if one faces kurtosis risk, for instance.
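A minimal sketch of the check described above (the data are simulated; the single wild value stands in for a kurtosis-risk event):

# Normality check via the sample extrema: standardize the minimum and
# maximum and ask whether they are plausibly large for the sample size.
import random
import statistics

def extrema_z_scores(sample):
    mean = statistics.mean(sample)
    sd = statistics.stdev(sample)
    return (min(sample) - mean) / sd, (max(sample) - mean) / sd

random.seed(1)
data = [random.gauss(0, 1) for _ in range(1000)] + [8.0]  # one wild value
z_min, z_max = extrema_z_scores(data)
print(f"min at {z_min:.1f} sigma, max at {z_max:.1f} sigma")
# The expected maximum of 1000 normal draws is only about 3 sigma, so a
# maximum near 8 sigma signals a significant failure of normality.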
Extreme value theory
Sample extrema play two main roles in extreme value theory:
- firstly, they give a lower bound on extreme events: an event at least this extreme is known to have occurred, in a sample of this size;
- secondly, they can sometimes be used in estimators of probability of more extreme events.
However, caution must be used in using sample extrema as guidelines: in heavy-tailed distributions or for non-stationary processes, extreme events can be significantly more extreme than any previously observed event. This is elaborated in black swan theory.
|
Even if your tomato plants are healthy, they sometimes fall prey to diseases from bacteria.
In a new study published in Current Biology, scientists show how a certain bacteria gets past a tomato’s defenses and infects the plant with bacterial speck disease, leaving black lesions on leaves and fruits.
They hope to use the results to study ways to protect plants without pesticides.
In order to study the way that the bacteria invaded the tomato, European scientists used a plant called Arabidopsis, which is also affected by the bacterial speck disease and works well in experimental studies.
When they studied the infection process, they found that the bacteria sent a protein into the plant cells. This protein sought out surface receptors on the cell that would normally announce invaders, and worked to deactivate and destroy them.
As one author, Professor Mansfield, says: “This area of research has a wider significance beyond black speck disease in tomato, because the microbes that cause plant diseases probably all employ similar attacking strategies to suppress resistance in their hosts. The more we understand about how the pathogens that cause disease overcome the innate immunity to infection in crop plants, the better our chances of developing approaches to disease control that do not require the use of potentially harmful pesticides.”
This is indeed an exciting study. If they are able to create methods of controlling bacteria that do not include pesticides, it will be better for the environment. Let’s hope they figure it out!
|
Teacher/Student Learning Packet
- Learners will be able to identify several types of local plant and animal life.
- Learners will recognize the "food chain" concept.
- Learners can list the endangered animals of the Black Canyon site.
LIFE IN THE DESERT:
A quick glance at the desert might suggest a lifeless environment. Yet the Mojave Desert is alive with plants, animals, insects, fish, and reptiles, all of which have adapted to the desert climate. The desert environment meets their needs for:
FOOD - Each type of animal will only eat certain foods. Some plants provide more nutritional value than others. Both the quantity and quality of the food are important.
WATER - All wildlife needs water. There are many water sources such as rain, dew, snow and moisture in food.
SHELTER - All wildlife needs cover for protection while feeding, sleeping, playing, traveling, etc. Cover can come in many forms, for example: vegetation, burrows, and rocks.
SPACE - Overcrowding leads to competition among animals looking for food, water, and shelter. For this reason, only a set number of animals can live in an area.
The desert is a delicate land of plant and animal life dependent on each other for their survival. The following pages identify and describe some of the most commonly found plants and animals in the desert area surrounding Hoover Dam.
OUR ENDANGERED WILDLIFE:
Small changes created by man can disrupt the delicate balance of nature in the desert. The tortoise, bonytail chub, and razorback sucker are examples of life endangered by man's intrusion in the environment.
Desert tortoises are easily recognized by their thick, elephant- like legs. Their front legs are larger than their rear legs in order to dig burrows. This is an important activity in the life of a tortoise because burrows protect them on hot summer days. They also hibernate in these burrows during the winter.
The desert tortoise is a herbivore, meaning it eats only plants, such as grasses, blossoms, and cactus. It can be found grazing in the mornings and late afternoons to avoid the heat of the summer sun. Desert tortoises can live to be 100 years old. Female tortoises normally lay four to six eggs during the month of June. The eggs are deposited in a shallow hole and covered with dirt. The eggs take several months to hatch.
Bonytail chubs and razorback suckers are endangered species; if caught, they should be reported to the National Park Service, U.S. Fish & Wildlife Service or Nevada Division of Wildlife, and returned to the water.
ANIMALS OF THE AREA:
BIGHORN SHEEP
Nevada's most famous animal is the bighorn sheep. It is the official state animal. You can often see these magnificent animals near Hoover Dam. Adult males, called rams, weigh from 150 to 200 pounds. Females, called ewes, are somewhat smaller. Baby sheep are called lambs and are normally born in May or June. Bighorn sheep are surefooted animals that can swiftly climb the mountains in which they live. They use their speed to escape from predators, such as mountain lions. Bighorns are brown to grayish-brown with white rumps. Rams have large, curled horns. Ewes have smaller, straight horns.
Bighorns normally travel in herds, led by the oldest ewe. Rams separate from the herd during the summer months. The males return to join the ewes and lambs in the fall. All bighorn sheep have horns that grow throughout the animal's life. As the sheep grow older their horns grow distinct rings, one for each year. Counting these growth rings will tell you the bighorn sheep's age. Bighorn sheep can live as long as 14 years. Telling the age of a ram is easier than determining the age of a ewe. This is because the horns of a ram are larger than a ewe's and have more growth during the year. Therefore, the rings on a ram's horns are larger and more distinct.
COYOTE
Coyotes are carnivores, or meat eaters. Coyotes are gray or rusty gray with white throats and bellies. Adult coyotes weigh between 20 and 50 pounds. They are fast runners and can easily outrun any human. When running, the coyote holds its tail between its hind legs.
In southern Nevada, the coyotes usually eat rodents, rabbits, lizards and birds. Coyotes will eat berries if there is no other food available. They will also eat animals that have been killed by automobiles and whatever food they can find in garbage dumps.
ANTELOPE GROUND SQUIRREL
You can identify the antelope ground squirrel by the white lines running down each side of its gray body. Its cousin, the chipmunk, lives at Mount Charleston. Antelope ground squirrels are well adapted to southern Nevada's desert climate. They are able to let their body temperatures rise to high levels. Because of this, they are often the only living creatures you will see in the desert during hot summer days. These squirrels dig burrows where they go to cool off. They will also hibernate in their burrows if forced to by harsh weather. Their favorite foods are green plants and insects. Their predators include hawks, falcons, and coyotes.
RINGTAIL CAT
This animal averages 24 to 31 inches in length. The body is catlike and the face is fox-like. The cat has a long, bushy tail with black and white bands around it. The ringtail cat is found in rocky canyon areas like those where Hoover Dam is located.
LITTLE BROWN BAT
The bats most frequently found in the area of Hoover Dam are grayish to dark brown in color and average in length from 3 5/8 to 3 3/4 inches. They live in the tunnels and caves in the surrounding canyons. The bats help pollinate desert plants and eat small insects.
GREATER ROADRUNNER
Roadrunners are very common in Southern Nevada. The greater roadrunner is a big bird with a long tail and bill. It has a bushy crest on its head. Greater roadrunners are fast runners who seldom fly. A roadrunner is often seen running with its neck outstretched and its tail held out flat. They are ground dwellers that hunt lizards, snakes, birds, and invertebrates.
GOLDEN EAGLE
This large graceful bird can be seen soaring at great heights above southern Nevada. Adults measure up to three feet long. They are brown with a white tail band and feathered legs. Eagles usually build their nests on suitable cliff ledges or, less frequently, in trees. Their prey includes rabbits, mice, and injured water birds.
CANYON WREN
Everyone who lives in Southern Nevada has seen this bird, but few know its name. The bird has a beautiful song that can be heard when it echoes off canyon walls. The adult wren is about 3-4 inches long. It has a white throat and breast and a brown belly. The little wren eats gnats and seeds of desert plants.
GAMBEL'S QUAIL
This is one of four types of quail found in Nevada. The others are the California quail, mountain quail and scaled quail. Gambel's quail are easily identified by tufts of feathers, called topknots, on their heads. They can often be seen in vacant lots around the Las Vegas Valley. Their food consists mostly of seeds and fruit.
TURKEY VULTURE
The turkey vulture varies in length from 26 to 32 inches with a wingspan of 72 inches. Its color is brown-black all over with an unfeathered head. Sometimes this bird is referred to as a "buzzard". They serve as scavengers of the desert by eating carcasses of dead animals.
RAVEN
This bird is all black and ranges in size from 19 to 21 inches. The raven has a heavy bill, wedge-shaped tail and long throat feathers. The bird is found in areas of mesquite and needs trees or power lines for nesting.
SCORPION
Scorpions are found all over the world, but most like to live in warm, dry climates such as the desert. Scorpions have pincers and a long tail with a stinger at its tip. Though they have many eyes, they do not see well. When running, they hold their pincers out. Males have broader pincers and longer tails than females. Like wolf spiders, scorpions feed at night on insects. The mother carries her babies on her back until they shed their first skins. Scorpions sting to defend themselves. Never touch or play with a scorpion!
DESERT TARANTULA
Desert tarantulas can get as large as four inches long. They have brownish black, hairy bodies and legs. Female tarantulas may live for 20 years. In the day, tarantulas hide in holes or under stones. In the dim light of sunset or near dawn, tarantulas come out to hunt food. They eat insects, lizards and other small animals. Tarantulas do not like to attack humans. Usually their bite is no more poisonous than a bee sting.
TARANTULA HAWK
The Tarantula Hawk is a velvety black wasp with orange wings. It depends on the tarantula for its survival. Here's how: The female tarantula hawk paralyzes the spider with its stinger. Then she quickly digs a large hole. Next, she drags the spider inside, lays an egg, then covers the hole. When the egg hatches, the larva feeds on the spider. When it is full grown, the tarantula hawk feeds on plant nectar.
MOJAVE RATTLESNAKE
This snake varies in size from 24 to 51 inches. It has uniform white scales surrounding brown diamonds on its back from the midline to its tail. The upper half is greenish brown to olive green. You may find this snake in areas where mesquite, creosote and cacti are prominent. Its venom is extremely toxic. Keep your distance!
CHUCKWALLA
This lizard averages 11 to 16 1/2 inches in length and is very obviously potbellied. Its skin is loose and floppy. These lizards are seen around large boulders or rocky areas and live strictly on leaves, flowers, buds, and fruit.
PLANTS OF THE AREA
BARREL CACTUS
Perhaps the most recognized cactus in Las Vegas is the barrel cactus. It is not hollow, as many believe, but has a spongy pulp inside. When growing, most barrel cacti lean to the south. It is also known as the bisnaga, red barrel, fire barrel, solitary barrel and compass barrel cactus.
BEAVERTAIL CACTUS
This cactus has flat, greenish jointed stems with rose or lavender flowers from March to June. It grows 6 to 12 inches tall and is frequently found on dry, rocky desert flats or slopes. The beavertail cactus looks like the prickly pear, but does not have long spines; it has tiny hair-like spines instead.
CHOLLA CACTUS
The cholla (pronounced "cho-yah") cactus has jointed stems that are tubular. These joints can break off and take root in the ground to grow a whole new cholla cactus. After the plant dies, a skeleton of "ventilated wood" remains in the desert. There are many different kinds of cholla in the Mojave Desert.
CREOSOTE BUSH
This large shrub has small, round leaves which look and feel oily or sticky. This coating, called "lac", helps to keep water from being lost to the dry air. Indians used lac as glue. Mexicans called this plant "little stinker".
MALLOW
The mallow is common along roadsides and in vacant lots. This plant has orange flowers and fuzzy leaves. The star-shaped hairs may get in your eyes if you handle the plant. That is why it is called the "sore-eye poppy".
DESERT MARIGOLD
This common plant has inch-wide yellow flowers. These flowers look like small sunflowers on tall stalks. The marigold's fuzzy leaves grow at its base.
INDIAN PAINTBRUSH
The flowers of this small colorful plant are barely visible. A "brush" of bright orange or red surrounds the tiny flowers. The top of the plant looks as if it has been dipped in paint.
PRICKLY PEAR CACTUS
There are many kinds of prickly pear cactus (nearly every state has a native species). Most can be recognized by flattened stems, called pads, that grow from joints. Indians would carefully scrape or burn off the spines and cook the pads for food. The egg-shaped fruits, called "tunas", can still be found in some grocery stores.
This plant is found in dry, rocky places or on canyon walls in the desert. A rounded, bushy plant with stinging hairs, it blooms with cream or pale yellow flowers from April to June. Do not pick the flowers -- the stinging hairs are vicious!
This plant is unusual for the desert. The datura is vinelike with large, grey-green leaves. The flowers look like large white trumpets, several inches long. It is sometimes called the "moon-lily", because the flowers open at night. This is when the Giant Sphinx Moth pollinates the flowers. It is also known as "jimson-weed" or "thornapple" because of its round, spiny seed pod. All parts of this plant are poisonous.
FISHING IN LAKE MEAD AND ON THE COLORADO RIVER
Bait: anchovies, shad, and lures at different depths (seasonal). It is found in the Overton Arm, Las Vegas Bay, and Temple Bar.
Bait: minnows, worms, insects, crayfish, flies (wet or dry), and popping bugs. The "big ones" live near the canyon walls.
Bait: cheese and marshmallows. This trout likes deeper levels and cold water.
Bait: night crawlers, minnows, and lures. Largemouth bass are more active at dawn and dusk and prefer weedy areas and shoreline.
Bait: natural or prepared stink baits. They can be identified easily by their large whiskers. Bottom fishing is best, day or night.
It is usually found in pools along the edges, usually around mud, sand, and debris. This small fish is used for bait.
Its body is short, stocky and narrow. It lives in vegetated lakes and muddy rivers. Bait: night crawlers, red worms and small lures.
Bring into class a dried branch common to the Lake Mead National Recreation Area. (Choose a large, interesting branch.) This branch should be hung on a bulletin board or planted in a container. The student will draw, color and cut out a bird found in this region. A report on its habitat might be presented orally to the class.
To incorporate plant and wildlife into the above project, create a model desert scene from materials available to students (such as clay, plaster of paris, leaves, branches, paper, Styrofoam, etc.). Include reptiles, birds, and mammals in as many habitats as possible.
Desert tortoises may drink up to 40% of their weight in water per day. Select some desert plants and weigh them while they are fresh to determine how much water is in each plant. Dry the plants and reweigh them. You may now calculate how much water weight is in each plant.
Weigh a desert tortoise and calculate how much water it might consume in a day and how much it must eat to provide sufficient water for survival.
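A minimal sketch of the arithmetic behind these two exercises (Python; every weight below is a made-up classroom number, not a measurement):

```python
# Hypothetical plant weights in grams -- substitute real measurements.
fresh_weight = 120.0   # plant weighed fresh
dry_weight = 30.0      # the same plant after drying

water_weight = fresh_weight - dry_weight
water_fraction = water_weight / fresh_weight
print(f"Water in plant: {water_weight:.0f} g ({water_fraction:.0%} of fresh weight)")

# If a tortoise may drink up to 40% of its body weight in water per day:
tortoise_weight = 4000.0              # hypothetical tortoise, in grams
daily_water = 0.40 * tortoise_weight  # grams of water needed per day
plants_needed = daily_water / water_weight
print(f"Daily water: {daily_water:.0f} g, or about {plants_needed:.0f} such plants")
```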
Discuss the concept of the "food chain". Follow up this study by collecting pictures of native animals. Collect smaller pictures of plants, insects, and other animals and create a display of how the food chain works for a specific animal.
Create mobiles of food chains for various species. This activity can be done independently or with a small group. Cut plants and animals from magazines and post on cardboard or the students may do original artwork. Each mobile must follow a food chain for a single animal.
Use pictures (animals, reptiles, fish, birds) to introduce wildlife specific to the area. Groups of students are to select two animals to investigate and tell about:
1. The survival rate of each animal.
2. What may have contributed to this animal's success or failure.
Each group may present its findings to the class by means of skits, debates, discussions, puppet shows, or reports.
Freshwater Fishes, Lawrence M. Page & Brooks M. Burr, Houghton Mifflin Company, Boston MA 1991.
1980 National Wildlife Week, March 16-22. 1980, Published by National Wildlife Federation, 1412 16th St. N.W. Washington D.C. 20036.
Mojave Desert Discovery, (teachers guide) National Park Service, 1994.
Our Living Desert, Las Vegas Review-Journal Newspaper in Education. Las Vegas, NV 702-383-0470.
1996 Arizona Fishing Regulation, Produced by the AZ Game and Fish Department Information and Education Division. 602-942-3000.
Lake Mead National Recreation Area, National Park Service, U.S. Dept. of the Interior, Fishing Information. 702-293-8900.
|
Triangles Of The Neck
While the primary focus of this article is to review anatomical landmarks of the triangles of the neck, it is important to first discuss the anatomy of the neck. There are several important functions of the neck:
- The neck works with the shoulders to provide support for the head and facilitates rotation of the head about its axis.
- It acts as a conduit for neurovascular structures (including the spinal cord) travelling to and from the head.
- It is the major passageway from the upper digestive and respiratory tracts to their lower regions.
- It can be used as an emergency route to ventilate a patient when the oral and nasal routes are compromised or use of those pathways is contraindicated.
The neck is the bridge between the thorax and the head. It is of variable length and width depending on the individual’s gender, age and body habitus. While those parameters may vary from person to person, other features remain constant. The neck extends from the bases of the skull and mandible (superiorly) to the level of the thoracic inlet. It can be divided into symmetrical halves by an imaginary line known as the median line of the neck through the mandibular symphysis (symphysis menti). It consists of cutaneous, fascial, muscular, fatty and bony layers.
This article will look at the gross anatomy of the neck in order to lay the foundation for understanding the triangles of the neck.
Gross Anatomy of the Neck: Fascial Layers
There are two fascial layers in the neck – there is a superficial and a deep cervical fascia. The latter is further subdivided into investing, pretracheal and prevertebral layers.
Superficial cervical fascia
A thin lamina mixed with adipose tissue is found immediately deep to the skin; it is known as the superficial cervical fascia. This fascial layer, which lies over the platysma muscle, merges with the aponeurosis of the same muscle inferiorly. The merged tissue may either form skin ligaments or merge with the deltopectoral fascia (covering the deltoid and pectoralis major muscles). The fascia is continuous circumferentially around the entire neck.
Deep cervical fascia
The deep cervical fascia was previously believed to be a single layer of fascia. However, subsequent studies supported the notion that the deep cervical fascia is further subdivided into three other layers. The investing (superficial) layer of the deep cervical fascia also circumferentially wraps around the neck by meeting with the analogous fascia of the contralateral side, while simultaneously covering the sternocleidomastoid and trapezius muscles. The investing layer has periosteal attachments along the base of the mandible, the mastoid process and the superior nuchal line of the occipital bone superiorly.
In the mandibulomastoid region, the fascia travels behind the parotid gland and attaches to the arch of the zygomatic bone. There are also inferior periosteal attachments between the investing layer and the manubrium sterni, acromion and clavicle. Just above the attachment to the manubrium sterni, the investing layer of deep cervical fascia divides into superficial and deep layers after integrating with the aponeurosis of the platysma. The superficial layer attaches to the anterior border of the manubrium, while the deep layer attaches to the interclavicular ligament and posterior surface of the manubrium. The two layers create a space known as the suprasternal space, which houses the jugular venous arch, the caudal parts of the anterior jugular veins, the sternal heads of sternocleidomastoid and areolar tissue.
The pretracheal layer of the deep cervical fascia is deep to the investing layer of the same. Cranially, it attaches to the hyoid bone and caudally it extends into the superior mediastinum adjacent to the great vessels before merging with the pericardium. Laterally, it is continuous with the investing layer of deep cervical fascia as well as the carotid sheath. The fascia also encircles the oesophagus, infrahyoid strap muscles, thyroid and parathyroid glands, as well as the trachea, larynx and pharynx.
The last layer of the deep cervical fascia is the prevertebral layer. As the name suggests, this fascial layer lies superficial to the anterior group of vertebral muscles. Loose areolar tissue occupies the retropharyngeal space, which lies posterior to the buccopharyngeal fascia (the covering of the pharynx) and anterior to the prevertebral fascia. Inferiorly, the fascia extends into the superior mediastinum (anterior to longus colli) before blending with the anterior longitudinal ligament. Superiorly it attaches to the base of the cranium. It continues bilaterally to cover the scalenus anterior and medius muscles, as well as the levator scapulae muscle. As it becomes thin, loose areolar tissue laterally, it merges with the fascia of the sternocleidomastoid and the carotid sheath. It is drawn inferolaterally by the emerging subclavian artery and brachial plexus as the axillary sheath, in the retroclavicular space.
The carotid sheath is a fascial layer that encases the internal jugular vein, vagus nerve (CN X), parts of the ansa cervicalis (C1, C2, C3) and the common and internal carotid arteries. It is a product of consolidating parts of the deep cervical fascia.
Gross Anatomy of the Neck: Musculature
There are several muscular layers in the neck. They can be divided into superficial and deep muscles, as well as suprahyoid and infrahyoid muscles (i.e. above and below the hyoid bone). Owing to the fact that there are numerous muscles in the neck (most of which have been covered in previous articles), this piece will focus on those neck muscles that form the borders of the triangles of the neck.
The platysma is a superficial neck muscle that primarily acts as a muscle of facial expression. It originates from the cranial portion of the fascia of pectoralis major and the fascia of the deltoid muscle. Its fibers have attachments to the lower border of the mandible, the integument of the lower face, and the lower lip. This muscle receives arterial supply from the submental branch of the facial artery and from the thyrocervical trunk via the suprascapular artery. Innervation to the muscle is derived from CN VII (facial nerve).
The sternocleidomastoid muscle is a prominent structure located on the lateral aspect of the neck. It has two heads and therefore two origins. Medially, the sternal head arises from the anterior surface of the manubrium sterni; laterally, the clavicular head arises from the upper surface of the medial third of the clavicle. The former travels in a posterosuperior manner while the latter runs almost vertically until they meet midway along the neck and begin to blend. Fibers of the clavicular head insert into the lateral surface of the mastoid process and those of the sternal head insert into the lateral half of the superior nuchal line of the occipital bone. The superior thyroid and occipital arteries give direct branches to sternocleidomastoid. It is also perfused by the occipital branch of the posterior auricular artery and the muscular branch of the suprascapular artery. Ventral rami of C2, C3 and C4 and the spinal part of CN XI (spinal accessory nerve) give motor innervation to the muscle.
The digastric muscle is a paired, suprahyoid structure that has two bellies. The anterior belly originates in the digastric fossa near the midline of the base of the mandible, while the posterior belly originates in the mastoid notch of the temporal bone. The bellies slope posteroinferiorly and anteroinferiorly (respectively) to insert into the intermediate tendon attached to the greater cornu (horn) of the hyoid bone. The two bellies of the digastric muscle have different blood supplies and innervations: the anterior belly is supplied by branches of the facial artery and innervated by the mylohyoid branch of the inferior alveolar nerve, while the posterior belly is supplied by the occipital and posterior auricular arteries and innervated by CN VII.
Like the digastric muscle, the omohyoid muscle has two bellies that meet at an intermediate tendon. The inferior belly of the omohyoid muscle originates from the superior scapular border, adjacent to the scapular notch. The narrow, flat muscle travels superomedially, deep to sternocleidomastoid and inserts in the intermediate tendon. From the intermediate tendon, the superior belly continues superiorly to insert in the inferior border of the body of the hyoid bone. Its blood supply is derived from the external carotid artery via the superior thyroid and lingual branches. The nerve supply to the inferior belly is derived from the ansa cervicalis (C1, C2, and C3), while that to the superior belly is from the superior ramus of the ansa cervicalis (C1).
While the trapezius muscle is primarily considered a superficial back muscle, its superior fibers also provide support to the neck. The muscle originates from the superior nuchal line and the external occipital protuberance of the occipital bone, in addition to the nuchal ligament and the spinous processes of C7 to T12 vertebrae. The acromion, scapular spine and lateral third of the clavicle serve as insertion points for the muscle. Blood supply to trapezius is from the transverse cervical artery as well as contributing dorsal perforating branches of the posterior intercostal arteries. Innervation to the muscle is via CN XI.
Gross Anatomy of the Neck: Other Structures
In addition to the muscles and fascia discussed above, the neck also contains numerous large vessels that carry blood to the head and back to the heart. The common, internal and external carotids and the jugular venous system are responsible for supplying the head and its contents. Branches of the subclavian arteries and veins perform the same function to structures in the neck and head as well.
Viscera and bones
Furthermore, the neck also houses viscera such as the thyroid and parathyroid glands, oesophagus, larynx and numerous lymph nodes. At the core of the neck is the proximal segment of the vertebral column (from C1 to T1) and its constituents. The hyoid bone (mentioned previously) is located in the anterior part of the neck.
Anatomical Triangles of the Neck
As was mentioned earlier, the median line of the neck divides the neck into symmetrical halves. The sternocleidomastoid muscle, in its oblique (posterosuperior) course, further divides the neck into anterior and posterior triangles. The anterior triangle of the neck is further subdivided into four smaller triangles, while the posterior triangle is broken up into two smaller triangles.
The anterior triangle is formed by the anterior border of sternocleidomastoid posteriorly, the median line of the neck anteriorly and by the base of the mandible together with a horizontal line extending to the mastoid process superiorly. The apex of the anterior triangle extends towards the manubrium sterni. These triangles can also be described as infrahyoid (muscular) and suprahyoid (submental and digastric) triangles; the carotid triangle crosses the hyoid bone.
The muscular triangle also shares one margin with the anterior triangle – the median line of the neck. However, the muscular triangle begins at the inferior border of the body of the hyoid bone. It has two posterior borders – the proximal part of the anterior border of sternocleidomastoid inferiorly and the anterior part of the superior belly of omohyoid superiorly. This puts the apex of the muscular triangle at the intersection of sternocleidomastoid and omohyoid. The muscular triangle contains:
- Superior thyroid artery
- Inferior thyroid artery
- Anterior jugular veins
- Thyroid gland
- Parathyroid glands
Similar to the muscular triangle, the carotid triangle has the omohyoid and sternocleidomastoid muscles as parts of its borders. However, it is the posterior margin of the superior belly of omohyoid that limits the triangle anteriorly and the anterior margin of sternocleidomastoid that limits it posteriorly. Superiorly, the posterior belly of the digastric muscle and stylohyoid close the triangle. It is floored by the inferior and middle pharyngeal constrictors, hyoglossus and parts of thyrohyoid. Its roof is formed by deep and superficial fascia, platysma and skin. This triangle contains:
- Common carotid artery
- External carotid artery (and branches except maxillary, superficial temporal and posterior auricular)
- Internal carotid artery (and sinus)
- Internal jugular vein
- Common facial vein
- Lingual vein
- Superior thyroid vein
- Middle thyroid vein
Like the anterior triangle, the digastric (submandibular) triangle is limited superiorly by the same structures. Its inferior boundaries are formed by the posterior belly of the digastric and stylohyoid muscles posteriorly, and the anterior belly of the digastric muscle anteriorly. The apex of the triangle rests at the intermediate tendon of the digastric muscle. Its floor is formed by the mylohyoid and hyoglossus, while it is roofed by skin, fascia and platysma. The digastric triangle houses:
- Submandibular gland and lymph nodes (anteriorly)
- Caudal part of the parotid gland (posteriorly)
- Facial artery and vein
- Submental artery and vein
- Lingual arteries and veins
The submental triangle is located between the anterior bellies of the left and right digastric muscles. The base of the triangle is formed by the body of the hyoid bone and its apex extends towards the symphysis menti. This triangle, like the submandibular triangle, is floored by the mylohyoid muscles and roofed by the platysma, fascia and skin. Small venous tributaries to the anterior jugular vein, and the submental lymph nodes also occupy this space.
The posterior border of sternocleidomastoid and the anterior border of trapezius form the anterior and posterior borders of the posterior triangle of the neck, respectively. The base of the posterior triangle is formed by the middle third of the clavicle. The investing layer of deep cervical fascia and integument forms the roof of the space, while the floor is covered with the prevertebral fascia along with levator scapulae, splenius capitis and the scalene muscles. The inferior belly of omohyoid subdivides the posterior triangle into a small supraclavicular triangle and a large occipital triangle.
The anterior and posterior margins of the occipital triangle are the same as those of the posterior triangle. However, its base is formed by the superior margin of the inferior belly of omohyoid. The semispinalis capitis (occasionally), splenius capitis, levator scapulae and scaleni medius and posterior muscles line the floor of the occipital triangle in that craniocaudal order. The roof of the triangle is (from superficial to deep) skin, superficial and deep fascia.
Finally, the supraclavicular triangle (greater supraclavicular fossa) is the smaller of the two posterior triangles. It shares anterior and inferior margins with the posterior triangle. However, it is limited superiorly by the inferior border of omohyoid. Scalenus medius, the first digitation of serratus anterior and the first rib are in the floor of this triangle. The roof is formed from the skin, fascia and platysma.
The subdivisions of the posterior triangle are occupied by the following:
- The third part of the subclavian artery
- Suprascapular and transverse cervical branches of the thyrocervical trunk
- External jugular vein
- The trunks of the brachial plexus
- Fibers of the cervical plexus
|
In a two-dimensional coordinate plane, coordinates are the pairs of numbers that specify the position or location of a point or of an object.
Coordinates are represented by putting the ordered pairs in parentheses. For example: (x, y).
The X-coordinate is the first number in an ordered pair and represents the horizontal position of an object in the coordinate plane.
The Y-coordinate is the second number in an ordered pair and represents the vertical position of an object in the coordinate plane.
The point P is eight units to the left of the Y-axis (negative direction) and eight units below the X-axis (negative direction). So the coordinates of point P are (-8, -8).
What are the coordinates of point C?
A. (-5, 1)
B. (5, -1)
C. (-1, -5)
D. (-5, -1)
Correct Answer: D
Step 1: Start at the origin.
Step 2: The point C is 5 units to the left of the Y-axis.
Step 3: So, the X-coordinate of point C is -5.
Step 4: Point C is 1 unit below the X-axis.
Step 5: So, the Y-coordinate of point C is -1.
Step 6: The coordinates of the point C are (-5, -1).
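The same step-by-step reasoning can be expressed as a tiny program. This is only an illustrative sketch (Python), where the helper function and its arguments are invented for this example:

```python
def coordinates(units_right: float, units_up: float) -> tuple:
    """Return the ordered pair (x, y): movement left or down is negative,
    movement right or up is positive."""
    return (units_right, units_up)

# Point C: 5 units to the LEFT of the Y-axis, 1 unit BELOW the X-axis.
print(coordinates(-5, -1))   # (-5, -1)

# Point P: 8 units left and 8 units down.
print(coordinates(-8, -8))   # (-8, -8)
```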
|
ALEX Lesson Plans
Subject: Mathematics (9 - 12)
Title: Rational Exponents Rock!!
Description: During this lesson, students will be introduced to rational exponents. Rational exponents are fractional powers, that is, expressions in which a number is raised to a fraction.
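As a quick illustration of the idea (an aside, not part of the lesson plan itself): a denominator in the exponent means a root, so x^(m/n) is the n-th root of x raised to the m-th power.

```python
# A rational exponent m/n means: take the n-th root, then the m-th power.
print(8 ** (1 / 3))    # cube root of 8 -> about 2.0
print(27 ** (2 / 3))   # (cube root of 27) squared -> about 9.0
print(16 ** (1 / 2))   # 16 to the one-half power is the square root -> 4.0
```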
Subject: Arts Education (7 - 12), or Mathematics (5 - 12), or Technology Education (9 - 12)
Title: Just the facts! Exploring Order of Operations and Properties of Real Numbers
Description: Students use their imagination while learning the importance of 'Order of Operations' and 'Properties of Real Numbers'. This lesson incorporates class discussions, wiki and/or online discussion threads (free at www.wikispace.com and/or quicktopic.com), art and puzzles. This lesson plan was created as a result of the Girls Engaged in Math and Science, GEMS Project funded by the Malone Family Foundation.
Subject: Mathematics (7 - 12)
Title: Calendar Fun Operations
Description: This activity is designed to help students evaluate numerical expressions by using order of operations. The students will be provided a calendar for the current month of the year. Students will then be provided with a worksheet that contains 30 expressions and a different symbol for each expression. The students will manually calculate each expression using order of operations. Once the numerical value has been discovered for each expression, the symbol next to the expression will be drawn on the calendar for that date. This lesson plan was created as a result of the Girls Engaged in Math and Science, GEMS Project funded by the Malone Family Foundation.
Subject: Mathematics (9 - 12), or Technology Education (9 - 12)
Title: You Mean ANYTHING To The Zero Power Is One?
Description: This lesson is a technology-based project to reinforce concepts related to the Exponential Function. It can be used in conjunction with any textbook practice set. Construction of computer models of several Exponential Functions will promote meaningful learning rather than memorization.
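For reference, the claim in the title follows from the usual quotient rule for exponents; here is a one-line derivation (a LaTeX sketch, not taken from the lesson itself):

```latex
\[
  1 = \frac{a^{n}}{a^{n}} = a^{\,n-n} = a^{0}, \qquad a \neq 0 .
\]
```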
Thinkfinity Lesson Plans
Title: Stacking Squares
Description: This Illuminations lesson prompts students to explore ways of arranging squares to represent equivalences involving square and cube roots. Students' explanations and representations (with their various ways of finding these roots) form the basis for further work with radicals.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
|
Pocket gophers of concern to foresters in the Pacific Northwest belong to the genus Thomomys (13). The two species believed responsible for most conifer damage are the northern pocket gopher (T. talpoides), which occurs east of the Cascade mountains in Washington and Oregon and south into the northeastern edge of California, and the nearly identical Mazama pocket gopher (T. mazama), which ranges throughout western Oregon and into north central California (13). Pocket gopher damage is best known to agriculturalists, who for many years have suffered losses to root, hay, fruit, and bulb crops, as well as damage to irrigation canals (23). As early as 1922, Dixon (9) estimated gopher-caused damage in California at eight million dollars annually. More recently, Marsh and Cummings (17) verified pocket gopher damage as a serious problem in California and other states. Literature referring to gophers and their control on agricultural and range lands is common because these are recognized problem areas. On the other hand, gopher damage to forest crops has little published documentation. Crouch (7), in 1942, listed mortality of forest trees from root gnawing in his summary of destructive activities of pocket gophers. Absence of yellow pine (Pinus ponderosa) seedlings in forest openings in the Ochoco National Forest, Oregon, was related indirectly to pocket gophers by Moore (19) in 1943. He reported a positive correlation between white-footed mouse (Peromyscus spp.) occupancy of unused gopher runways and absence of seedlings. Papers on gopher damage in pine plantations by Dingle (8) in 1956 and by Hermann (1) in 1963 complete the pertinent early literature.
|
Everyone should start to create software and apps, even 6- and 7-year-olds. This course is intended to teach early readers, and students up to age 9, the concepts and skills of creating code.
This coding course starts with a blend of online activities and "unplugged" activities, where students learn and understand concepts without a computer and write out commands that then become a game, animation or story. Students also complete debugging exercises, which help kids learn to fix a puzzle or solve a problem.
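As an illustration of what such a debugging exercise can look like (a hypothetical example written in Python's turtle module rather than a block language like Scratch; it is not taken from the course):

```python
import turtle

# Buggy version: the loop runs 3 times, so it draws only 3 sides
# of the square. Students must spot why the shape is incomplete.
#
# for side in range(3):
#     t.forward(100)
#     t.right(90)

t = turtle.Turtle()
for side in range(4):   # the fix: a square has 4 sides, so loop 4 times
    t.forward(100)
    t.right(90)
turtle.done()           # keep the window open to inspect the drawing
```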
Some of the lessons include the Engineering Design Process, where children are asked to understand the problem, create a plan to solve it, carry out the plan, and then see if improvements can be made.
Different tech lessons and software platforms will be used, including Scratch®, developed by the MIT Media Lab.
|
What do computers, cooking, and math all have in common? Well, recipes for one thing! There are ways of working or rules that need to be followed in all three areas. In cooking, you use recipes. Computer programmers and math students use rules or “recipes” too. If you don’t follow the rules, then disaster is inevitable!
Figuring the Facts
In mathematics and computer programming, there are rules to clarify which procedures or operations should be performed first, and these rules are the same for both subjects! A math student thinks of these rules as the order of operations, but a computer programmer calls these rules operator precedence.
Just as a chef uses a recipe, operator precedence is a recipe that a computer programmer uses. If the programmer follows the rules of operator precedence when writing a computer program, then he or she can make the computer complete desired tasks.
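A minimal illustration (Python here, but any language with infix operators behaves similarly): the language's precedence rules decide which operation happens first unless parentheses override them.

```python
# Multiplication binds tighter than addition -- just like in math class.
print(2 + 3 * 4)     # 14: the multiplication is done first
print((2 + 3) * 4)   # 20: parentheses override operator precedence
print(2 ** 3 ** 2)   # 512: exponentiation groups right-to-left in Python
```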
If you listen to how Larry Wall describes the basics of computer programming, you will be able to draw even more connections between cooking and computers!
Check out the websites below to find out how you can start learning the basics of computer programming.
|
Contributed by Prof. Dr. Nazeer Ahmed, PhD
Critical moments in history are like earthquakes. They manifest themselves as convulsions releasing the pent up stresses of generations. When the tremors are over, they leave behind a legacy, which becomes a prelude to the next major event. The Sepoy Mutiny of 1857-1858 was one such event. With it medieval India died and in its wake grew social and political movements that paved the way for the emergence of the modern nations of India and Pakistan.
India was the first country where Muslims were faced with a challenge to define their interface with two global civilizations from a position of political weakness. European arms and diplomacy had smashed their power. The Sepoy Uprising confirmed this loss of power. The initial response of the Muslims to this debacle was to stay aloof from the British, to shun their language, institutions, culture and methods. Withdrawal only increased their isolation and set them behind in the race for political and social re-awakening. At the same time, the Hindus whom the Muslims had dominated for 500 years appeared poised to dominate them. The changing relationships were most acutely felt in the Gangetic plain, in the populous region extending from Delhi to Calcutta. And it was this region that set the tone for the interaction between the Muslims, the Europeans and the Hindus in the years to come.
What was the appropriate relationship between Islam and Christian Europe? The legacy of the Crusades in the Mediterranean region was not an encouraging one. In the 7th and 8th centuries, the Muslims conquered vast areas of the eastern Mediterranean, North Africa and southwestern Europe and displaced Christianity with their own faith. In a counter thrust, during the 12th and 13th centuries, the Christians wrested Spain and Portugal from the Muslims and, in the succeeding centuries, completely extirpated Islam from the Andalusian peninsula. The English thrust at India in the 18th century was primarily mercantile and motivated by economic domination. Nonetheless, the history of interactions between Islam and Christianity did not provide a framework for a mutually satisfactory accommodation.
With the large Hindu population of India, the situation was somewhat different. In the 8th century, Muslim armies, after their swift advance through Persia, had paused at the Indus River. For 500 years thereafter, the Indus River roughly defined the geographical boundary between Muslim dominions and northern India, which was dominated by the Rajputs. The situation changed when Muhammed Ghori captured Delhi in 1192, and from that date onward until the arrival of the British, the Indo-Gangetic plain was ruled by successive Muslim dynasties. Some of the Muslim monarchs, such as Alauddin Khilji, Muhammed bin Tughlaq and Jalaluddin Akbar, treated their Indian subjects fairly. Most were content to collect taxes from Hindus and Muslims alike and made no attempt either to facilitate the spread of Islam or to deter it. Except in the northwest and the northeast, Islam remained a super-layer on a fossilized Hindu society. The two great communities continued to coexist but did not co-mingle. The powerful Islamic message of equality of man ensured that the Muslims were not submerged in the Hindu caste matrix, yet the rigidity of Hindu society was too tenacious for Islam to displace Hinduism.
Sufic Islam tried to bridge the gap between the various communities of India. The Sufis arrived in the Indo-Gangetic plain at about the same time they emerged in Central Asia and North Africa. The spiritual and physical space of the Sufi qanqahs was secular in which men and women of all faiths were welcome. With their emphasis on love, brotherhood, service and openness to local culture, they convinced a large number of Indians to accept Islam so that by the turn of the 19th century, Muslims constituted roughly a quarter of the total population of the subcontinent.
The numerical inferiority of the Muslims was compensated by their political and cultural dominance. Only in the field of economics did the Hindus fare better. The far-sighted among the Muslim monarchs found it wise to accept the services of Hindu ministers to rationalize their tax collection systems. With the advent of British rule, the advantages that the Muslims had enjoyed were chipped away. Political and military ascendancy was the first casualty. Bengal (1757), Oudh (1765) and Mysore (1799) fell one by one. Some of the potentates, such as the Nizam of Hyderabad, found it more expedient to accept the protection of the British than to fight them.
The second front was economic. The thriving manufacturing industry and the trade guilds of Bengal were ruined by the deliberate policies of the Company who saw Hindustan as a vast market for its goods. Where industry faltered, usury crept in. Since interest was forbidden in Islam, the Muslims stayed away from usury. Hindu moneylenders had no such taboo and they moved in as credit suppliers for the impoverished masses.
Language was the third front. In 1835, the East India Company introduced English medium schools and replaced Persian with English in the higher courts. Persian, the lingua franca of Muslim Asia, was the court language of Delhi for 500 years. The displacement of Persian as the court language not only severed intellectual contacts between Muslim India and Persia, it also stripped the advantage that Muslims had enjoyed in education. The Hindus had nothing to lose by this change and embraced English education with open arms and moved to fill in whatever government positions were offered by the British to Indians. The educational gap between the Hindu and Muslim communities increased. This in turn augmented mutual suspicions, jealousy and social tensions.
The Sepoy Uprising of 1857-1858 released the pent up tensions between India and the British and proved to be a calamity for the Muslims. Defeat prompted withdrawal. It was the contribution of Sir Syed Ahmed Khan that he brought the Muslims of northern India out of their cocoon and made them face the historical currents so they could participate in the molding of their own destiny. His response to the British and to the Hindus was markedly different. He foresaw that British rule, no matter how entrenched it seemed at the time, was ultimately bound to disappear. But the Hindus were neighbors, living with the Muslims. Two global faiths, Islam and Hinduism, had arrived in India at different historical epochs and each claimed the same land as its homeland. In the dialogue to coexist and co-prosper, the adherents of the two faiths were largely unsuccessful and in their failure they left behind the legacy of partition and the accompanying holocaust of 1947.
In the aftermath of the Sepoy Uprising, the Muslim intelligentsia in northern India was decimated. Under the incessant hammer of British persecution, Muslims in the Indo-Gangetic belt recoiled from active participation in national life. Too proud to accept defeat at the hands of the “infidels”, mired in the glory of a bygone era, imprisoned in a paradigm of Persian-Arabic education, suspicious of an emerging Hindu educated class, exploited by money lenders and talukdars, they sank deeper into a despondency with each passing year. The British carried their vendetta into the succeeding decades. Open discrimination was practiced against the Muslims in government jobs. The result was a general decay in the economic and political status of the Muslims and an increasing gap between the Muslims and Hindus in education and social awareness. This chasm was to have a profound effect on the events that unfolded in the last quarter of the century when Sir Syed Ahmed Khan launched his educational reform movement (1875) and the Indian National Congress was founded (1885). Indeed, the increasing gap in the economic and educational well being of Hindus and Muslims had a decisive impact on the shape of the struggle for the independent nations of India and Pakistan.
The thrust of European arms and ideas evoked a wide spectrum of responses in the Muslim world. The Ottomans resisted this thrust until the resistance was destroyed during the First World War. In Egypt and Turkey the impact of European ideas influenced the reform movements of Muhammed Ali Pasha, Sultan Abdul Hamid and the Young Turks. In India it produced the reform movement of Syed Ahmed Khan.
In the dialectic between Europe and the Muslim world, Syed Ahmed Khan of India occupies a unique position. He was perhaps the first Muslim leader to contemplate the possibility of coexistence between the two global civilizations. Muslim reformers before him had either totally disregarded the European challenge (Shah Waliullah of Delhi, Shaykh Abdul Wahhab of Arabia and Shehu Dan Fuduye of Nigeria fall into this category) or were hostile to any accommodation with Europe. The initiatives taken by Sir Syed had far reaching consequences for the Muslims. He demonstrated the possibility of coexistence and cooperation between the European and Islamic civilizations, although in his own lifetime, with the British firmly entrenched in India, he could achieve no more than a supportive role for Indian Muslims.
Syed Ahmed Khan was born in 1817 near Delhi, into a distinguished family. He received his early education in the traditional disciplines of Qur’an and Hadith and was then exposed to an English education. When the Sepoy Uprising of 1857 broke out, he was employed with the Company as a civil servant in the “Northwestern Provinces”, as the area west of Oudh was then called. The carnage of the Uprising and the subsequent decimation of the Muslim intelligentsia left a major void in the Islamic community of northern India. The initial response of the community was to conserve and withdraw into its social cocoon. While the British viewed the Muslims with deep suspicion, the Muslims shunned the British as infidels and foreigners who had usurped what had been rightfully theirs. Hostility and resentment fed upon each other and it looked like the Muslims would miss the opportunity to be a part of the new order imposed by newcomers from the British Isles.
While the Muslims remained aloof from British administration, the Hindus, Parsis and other communities forged ahead in education and social development. The replacement of Persian by English as the language of the higher courts (1835) was resented by the Muslims but was welcomed by the other communities. They embraced English education much more eagerly than did the Muslims. In 1878 there were 3,155 college-educated Hindus as against 57 college-educated Muslims. In a country growing poorer by the year due to Company practices, government service was a major career path for poor people, and the Muslims missed these opportunities. The situation was particularly acute in Bengal and Uttar Pradesh. Since the fall of Bengal in 1757, all of the higher positions in civilian, military and judiciary service were reserved for the British. The more educated Hindus filled the lower positions that were open to Indians. The Muslims were practically shut out.
Syed Ahmed Khan saw the dangers in this isolationist posture. As long as mutual suspicion and hostility persisted between the Muslims and the British, the former would be excluded from participation in the political and social life of the country. Sir Syed visited England in 1870 and came back with a conviction that English education was a key to the advancement of the Muslims. In 1877 he established the Mohammedan Anglo-Oriental College at Aligarh. The name of the college was self-descriptive and its orientation was decidedly western. It faced immediate hostility from the Muslim religious establishment. Mullahs denounced him as a “turncoat” and a “kafir”. Undaunted, Sir Syed persisted. He invited a noted Englishman, Theodore Beck, to serve as the first principal of the College. As hostility towards his efforts intensified in the areas around Delhi, he traveled throughout the Punjab in search of support and funds. Punjabi Muslims, who felt the British had recently liberated them from the Sikhs, welcomed Sir Syed with open arms and generously provided him moral and material support.
Aligarh College grew by the year and soon became a center for Muslim educational and political activities in northern India, although its doors were open to all communities and many distinguished British as well as Hindu professors served on its faculty. The college served as a magnet for young men and women from families of zamindars and peasants alike from all over India. It provided a boost to the Muslims in their competition with the other communities for government jobs. But it was in the political arena that its impact was most profoundly felt. Graduates of Aligarh University were in the forefront of the political struggle in India and their efforts were decisive in the struggle for Pakistan.
Economics was yet another area where the Muslims fell behind the larger community. Following the Battle of Plassey (1757), the manufacturing base of Bengal was destroyed by the discriminatory policies of the Company. The artisans and merchants, who were primarily Muslim, were economically ruined. The Permanent Settlement Act of 1793 imposed Hindu landlords on the Muslim population of Bengal. In 1858, following the Sepoy Uprising, when the zamindari system was reinstated by the British in Uttar Pradesh, the Hindus were the primary beneficiaries. Thus in the crucial area between Delhi and Calcutta, the Muslim economic condition went from bad to worse. Only in parts of the Punjab, Sindh and the Frontier areas, where the Pathans and some Punjabis had cooperated with the British, was there a remnant of Muslim landed aristocracy.
Given the educational, political and social backwardness of the Indian Muslim community, Sir Syed felt that its best option was to cooperate with the British. As long as mutual suspicion and hostility between the British and the Muslims of northern India persisted, the latter could not take advantage of any opportunities that a more cooperative environment might present. Accordingly, Sir Syed recommended to the Muslims that their interest, for the time being at any rate, lay in seeking a working relationship with the British. This position was at odds with that of the Hindu nationalists. Since the Hindus were far more advanced educationally and they were also the numerical majority, they could package the demands of their community in a “nationalist” terminology. For the Hindus there was co-linearity of a national and communal vision. This was not so for the Muslims. Except in the northwest and the northeast, they were a small minority in the great landmass of the subcontinent. The aftermath of the 1857-1858 uprising, the decimation of their leadership, their educational backwardness and their numerical inferiority ensured that they could not compete with the Hindus on equal terms.
The years following the Great Uprising saw the first stirrings of a nationalist movement in India. Most of the nationalists were English-speaking Hindus and Parsis. An English education gave the Hindus not only access to government jobs but enabled them to articulate their social and political aspirations. The Indian National Congress was formed in 1885 by an Englishman, Allan Hume, to encourage Indians to provide input and feedback to the government on how the administration of the Raj could be improved. In later years, the Congress grew to be the most powerful political organization in British India, and demands grew for political representation for Indians. Sir Syed was concerned that the Muslims would be submerged in a vastly Hindu India should political initiative pass on to the Hindus. He articulated the fears of the Muslim community in these words:
“India, a continent in itself is inhabited by vast populations of different races and different creeds. The rigor of religious institutions has kept even neighbors apart. The system of caste is still dominant and powerful . . . In a country like India where caste distinctions still flourish, where there is no fusion of the various races, where religious distinctions are still violent, where education in its modern sense has not made an equal or proportionate progress among all the sections of the population, I am convinced that the introduction of the principle of election, pure and simple, for representation of various interests on the local boards and district councils would be attended with evils of greater significance than purely economic considerations . . . .The larger community would totally override the interests of the smaller community and the ignorant public would hold Government responsible for introducing measures which might make differences of race and creed more violent than ever.”
Sir Syed opposed the participation of Muslims in the Indian National Congress as he was concerned that representative government based on a one man-one vote concept would leave the Muslims at the mercy of the more numerous Hindus. His fears were reinforced by the movement in 1867 to replace Urdu, a language that had evolved through a Hindu-Muslim linguistic synthesis, with Sanskritized Hindi. Sir Syed saw that education, at least western education, far from bringing the two great communities of the subcontinent closer together, was separating them further apart. As the movement to replace Urdu with Hindi gathered momentum, he wrote: “I am convinced that the two communities will not sincerely cooperate in any work. Opposition and hatred between them which is felt so little today, will in the future be seen to increase on account of the so-called educated classes.”
Sir Syed’s opposition to Muslim participation in the Indian National Congress was based on his conviction that the Muslims of his day were not ready to compete with the other communities in education and politics. The destruction of the manufacturing base in Bengal and Uttar Pradesh had eliminated the artisans and merchants who had formed the economic backbone for the Moghul Empire. The moneylenders and the talukdars, most of whom were Hindu, now took their place. The differences between the two communities were exacerbated in the aftermath of the Sepoy Uprising of 1857-1858. The British had singled out Muslim leaders for punishment. In Delhi alone, over 27,000 Muslims were hanged, with many thousands more in Meerat, Lucknow and Allahabad. With the introduction of English as the medium of instruction, Muslims had fallen further behind. Meanwhile, the Hindus had taken advantage of the new opportunities, had acquired education and were able to fill any positions offered the Indians. Sir Syed felt that the introduction of representative government at that stage in history would solidify the advantage of the Hindu community over the Muslims and would relegate the latter to a permanent handicap.
Sir Syed did not live to see the full impact of the reforms introduced by him. It was left to later generations to realize the benefits of his initiatives in education and politics. He passed away in 1898. Twenty-three years after his death, in 1921, Aligarh College blossomed into Aligarh Muslim University and became a magnet for Muslim intellectual activity in the subcontinent. The generations that came after him derived their inspiration from the legacy of Sir Syed and went on to carve out their own destiny. He stood tall among the reformers of the 19th century who gave a new lease on life and a new direction to Islamic civilization.
Some among the later generations would call him a revolutionary, some would label him an apologist, but there is no doubt that Sir Syed Ahmed Khan opened the door to communication between the Muslims and the Europeans. Until he came along, this door had been locked shut with a steel bar of mutual suspicion and hostility.
|
Following the fall of the centuries-old Toltec Empire in Mexico, the Aztecs – the last of the Nahuatl-speaking people – migrated to the Valley of Mexico, a high south-central plateau. They established the city of Tenochtitlan on a small island on the western shore of Lake Texcoco in 1325 CE.
A century later, Tenochtitlan – later to be known as Mexico City – became the dominant city-state of the Aztec Triple Alliance, which was formed in 1430 and also included Texcoco and Tlacopan. At the height of the Aztec civilization, the city had many temples, palaces and large commercial and residential areas.
Most historical Aztec structures were destroyed when the Spanish arrived in 1519, but many ancient structures at the Teotihuacan archaeological site remain. The site is a designated United Nations Educational, Scientific and Cultural Organization (UNESCO) World Heritage Site. The holy city of Teotihuacan – meaning the place where the gods were created – is located about 30 miles northeast of Mexico City. Built between the 1st and 7th centuries CE, it is a vast collection of ancient monuments, which include the Temple of Quetzalcoatl and the Pyramids of the Sun and the Moon, which are laid out on geometric and symbolic principles. The Pyramid of the Sun rises to a height of 210 feet.
The Aztec civilization reached its zenith by 1519 and was the most powerful Mesoamerican empire of all time. The multi-ethnic, multi-lingual realm stretched for more than 80,000 square miles through many parts of what is now central and southern Mexico.
When Spanish explorer Hernan Cortes arrived with his troops in 1519, Tenochtitlan’s population was an estimated 200,000 people. By 1521, the Spanish had conquered the Aztecs and founded the Spanish capital of Mexico City on this site. The Aztec ceremonial and political center was rebuilt as the Plaza Mayor, or Zócalo, of the city. Under the Spanish, Mexico City became the center of the country’s political and religious institutions, its economy and the home of Spanish social elites.
Shortly after the War of Independence from Spain ended in 1821 after 11 years of fighting, the Mexican Federal District was established, encompassing Mexico City and surrounding municipalities.
Post-independence, Mexico City was captured by U.S. forces during the 1846 – 1848 Mexican-American War and also saw violence during both the French Intervention in the 1860s and the Mexican Revolution, which began in 1910 and ended 10 years later.
In the early 20th century, Mexico City’s population was about 500,000 and it remained the political, religious, financial and cultural center of Mexico. Between 1900 and 1960, Mexico City began expanding, adding municipalities reached by new public transportation systems. As the city expanded outward from its center, it grew upwards. The first 40-story building – the Torre Latinoamericana – was built in the 1950s.
But Mexico City’s population and modernization accelerated in the 1960s when it hosted the Olympic Games in 1968. Its Metro rail system began operating in 1969 and has grown to be the ninth busiest system in the world. The city’s population doubled to nearly 9 million people by 1980, many trying to escape poverty in rural areas.
Today, large numbers of people from rural areas still settle in Mexico City, helping to increase its population to well over 20 million. With population growth, air pollution and other environmental problems have increased. The city’s elevation as well as industry and automobiles can be blamed for much of the environmental damage, although Mexico City’s air quality has improved somewhat due to measures implemented by the government.
In 2016, Mexico City changed its official designation from Mexico Distrito Federal (DF) to Ciudad de Mexico, or CDMX, after two centuries. Like Washington D.C., Mexico City is closely controlled by the federal government, which is based in the city. Under its new status, Mexico City will acquire some of the functions of Mexico’s 31 states, with a constitution and congress holding legislative powers over public finance and security.
|
Epidermal Growth Factor
EGF is part of a family of proteins that controls aspects of cell growth and development
The cells in your body constantly communicate with each other, negotiating the transport and use of resources and deciding when to grow, when to rest, and when to die. Often, these messages are carried by small proteins, such as epidermal growth factor (EGF), shown here in red from PDB entry 1egf. EGF is a message telling cells that they have permission to grow. It is released by cells in areas of active growth, then is either picked up by the cell itself or by neighboring cells, stimulating their ability to divide. The message is received by a receptor on the cell surface, which binds to EGF and relays the message to signaling proteins inside the cell, ultimately mobilizing the processes needed for growth.
Domains and Dimers
The EGF receptor, shown here in blue, is a flexible protein with many moving parts, including a large extracellular portion, a section that crosses the cell membrane, a kinase domain and a long flexible tail. The portion facing outwards from the cell, shown at the top here, is composed of four articulated domains that recognize EGF. When EGF is not around, it folds back on itself, as shown in the structure on the left. Then, when EGF binds, the receptor opens up and binds to another copy of the receptor, forming the dimeric complex shown on the right. This brings together two copies of the kinase domain, shown at the bottom here. Since the kinase domains are close to one another, they can add phosphate groups to tyrosine amino acids on the long, flexible tails of the receptor (the tails are not seen in this structure, so they are shown here with dots). The phosphorylated tails then stimulate the signaling proteins inside the cell.
Since the EGF receptor is so flexible, it has been studied by breaking it into several pieces and studying each one separately. Consequently, several PDB files were needed to create this illustration, including 1nql, 1ivo, 2jwa, 1m17 and 2gs6. The structures of the different parts of the EGF receptor revealed several surprises. First of all, researchers found that EGF binds on either side of the receptor complex, not in the middle like other similar receptors. EGF appears to mold the receptor into the proper shape for dimerization, instead of acting like glue between the two chains. Also, careful analysis of the kinase domain showed that it is activated by association in an asymmetric head-to-tail fashion, in spite of the symmetric association of the extracellular portion.
EGF and the EGF receptor are part of an extended family of proteins that together control aspects of cell growth and development. These include at least seven similar protein messages, such as transforming growth factor alpha and amphiregulin, and four receptors, collectively termed ErbB or HER receptors. These messages and receptors can mix and match, with different messages bringing together two identical receptors or two different receptors. In this way, a wide variety of messages may be carried by the system, tailored for the needs of each type of cell.
Turning the Receptor Off
Of course, once a signal is sent through the EGF receptor, it eventually needs to be turned off. A separate enzyme performs this job by clipping off the phosphates that are added to the flexible tails of the receptor when it is activated. Protein tyrosine phosphatase 1B (PDB entry 1ptu) is shown here (in orange), with a small piece of a phosphorylated protein chain bound in the active site. The phosphorylated tyrosine is buried deep in the active site.
Exploring the Structure
The signal carried by EGF can be dangerous if used improperly. Many forms of cancer circumvent the normal EGF signaling process, giving themselves permission to grow without control. Because of this, drugs that block EGF signaling are effective for the treatment of cancer. Two examples are shown here. At the left, the drug lapatinib is bound in the kinase domain of the receptor, blocking the signal inside the cell (PDB entry 1xkk). It is very similar to the ATP used by the receptor, and binds tightly in the active site. Therapeutic antibodies are also used for cancer treatment. Herceptin is shown on the right bound to the extracellular domain of HER2/ErbB2 (PDB entry 1n8z), and the antibody cetuximab bound to the EGF receptor may be found in PDB entry . To explore these structures in more detail, click on the images for an interactive Jmol image.
Topics for Further Discussion
- Compare the mode of dimerization in the EGF receptor and the human growth hormone receptor.
- EGF receptor and similar receptors are currently the targets of development of drugs for cancer therapy. Can you find other examples of structures in the PDB with drugs?
June 2010, David Goodsell
doi:10.2210/rcsb_pdb/mom_2010_6
|
There's a reason that towering mammals the likes of King Kong are confined to fiction. Our aching bones can only take so much weight before they start crumbling under the pressure. But if that's the case, then why were dinosaurs able to reach such phenomenal heights? According to a new study, the answer isn't so much about the bones themselves as it is about the soft, squishy joints that lie between them.
The scientists leading the new study published in PLOS ONE measured the ends of bones in both mammals and dinosaurs as well as their descendants to see how joint and bone shape changes as size increases.
As mammals grow, our bones become progressively rounder at the ends to be able to support the increase in weight while minimizing pressure as much as possible. Reptiles and birds, however, (as well as the dinosaurs that came before them) have bones that grow wider and flatter as more weight is added to the frame. So considering that these two very different shapes are both meant to sustain more weight, the joints and cartilage that connect them must also work differently.
For humans and other mammals, as the bones become rounder the connecting cartilage continues to stretch thin and tight across the bone's surface. Because the soft, connective cartilage is close-fitting and malleable, our weight is able to distribute more evenly. The wider, flatter bones of reptiles, however, solve the problem by packing on as many layers of the stuff as they can—which, as it turns out, is a much more efficient method. According to Matthew Bonnan of the Richard Stockton College of New Jersey, one of the lead authors on the study:
More than just evenly distributing the pressure, the joint itself may be deforming a little — it’s actually squishier, increasing the force it can sustain.
Of course, these gelatinous joint fillings weren't the only thing letting dinos tower over the rest of the prehistoric world. The lighter, hollow bones favored by reptiles also meant that larger frames didn't require as much support as our own solid bricks for bones. This does, however, at least begin to explain why dinosaurs were able to reach such larger than (modern) life proportions.
Still, as pillowy and bouncy as their joints may be, everything has its limits. You know, like extinction-event comets. [Live Science]
|
The Fizeau–Foucault apparatus (1850) (Figure 1) was designed by the French physicists Hippolyte Fizeau and Léon Foucault for measuring the speed of light. The apparatus involves light reflecting off a rotating mirror, toward a stationary mirror some 35 kilometers (about 22 miles) away. As the rotating mirror will have moved slightly in the time it takes for the light to bounce off the stationary mirror (and return to the rotating mirror), it will thus be deflected away from the original source, by a small angle. If the distance between mirrors is h, the time between the first and second reflections on the rotating mirror is 2h/c (c = speed of light). If the mirror rotates at a known constant angular rate ω, the angle θ is swept in the same time as the light roundtrip, so: θ = ω × 2h/c.
In other words, the speed of light is calculated from the observed angle θ, the known angular speed ω and the measured distance h as c = 2hω/θ.
The detector is at an angle 2θ from the source direction because the normal to the rotating mirror rotates by θ, decreasing by θ both the angle of incidence of the beam and its angle of reflection.
Foucault based his apparatus on an earlier experiment by Fizeau (Figure 2) who, in 1849, used two fixed mirrors about 8 km apart, one partially obscured by a rotating cogwheel with over 100 teeth rotating a few hundred times a second. Fizeau's value for light's speed was about 5% too high.
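As a rough worked check of the cogwheel method, using commonly cited figures for Fizeau's 1849 setup (a 720-tooth wheel, a one-way distance d of about 8,633 m, and a first-blocking rotation rate f near 12.6 turns per second; these numbers are assumptions here, not values taken from this article): the returning light is first blocked when the wheel advances half a tooth period, a time of 1/(2Nf), during the round-trip time 2d/c, so

$$\frac{2d}{c}=\frac{1}{2Nf} \quad\Rightarrow\quad c = 4dNf \approx 4 \times 8633\ \mathrm{m} \times 720 \times 12.6\ \mathrm{s^{-1}} \approx 3.13\times 10^{8}\ \mathrm{m/s},$$

about 5% above the modern value, consistent with the error quoted above.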
The measurement of the speed of light in water has been viewed as "driving the last nail in the coffin" of Newton's corpuscle theory of light, because it showed that light travels more slowly through water than through air. Newton predicted refraction as a pull of the medium upon the light, implying an increased speed of light in the medium. Foucault showed the opposite: the speed of light in water is less than in air, not more, as demonstrated by inserting a tube of water in the light path.
- Ralph Baierlein (2001). Newton to Einstein: the trail of light : an excursion to the wave-particle duality and the special theory of relativity. Cambridge University Press. p. 44; Figure 2.6 and discussion. ISBN 0-521-42323-6.
- Abdul Al-Azzawi (2006). Photonics: principles and practices. CRC Press. p. 9. ISBN 0-8493-8290-4.
- David Cassidy, Gerald Holton, James Rutherford (2002). Understanding Physics. Birkhäuser. ISBN 0-387-98756-8.
- Bruce H Walker (1998). Optical Engineering Fundamentals. SPIE Press. p. 13. ISBN 0-8194-2764-0.
- Diagram showing the original experimental design found in the second volume of Foucault's collected works: Volume Two - Recueil des travaux scientifiques de Léon Foucault 1878.
- Speed of Light (The Foucault Method)
- Light in moving media
|
If you ever wanted to be slightly taller, maybe you should think about going into space. Apparently, the National Aeronautics and Space Administration (NASA) and space buffs have known for years that voyaging into outer space can add up to 3 percent in height for astronauts. That means that an astronaut who is 6 feet tall on Earth can grow two inches. Now, for the first time, NASA is exploring just why this phenomenon occurs.
The space bureau will use ultrasound technology to examine what happens to the spines of astronauts in microgravity. According to Space.com, it is believed that, when the spine is free from the constraints of gravity, the vertebrae can expand and relax. Six astronauts will participate in the clinical trial, using the ultrasound on one of their crewmates. They will be required to take scans of their spinal areas 30, 90 and 150 days into the flight. This experiment will be the first time that ultrasound examinations of the spine will be performed in space, because spinal ultrasounds are more complicated than ultrasounds on other parts of the body.
"Today there is a new ultrasound device on the station that allows more precise musculoskeletal imaging required for assessment of the complex anatomy and the spine," Scott A. Dulchavsky, the principal investigator, said in a statement. "The crew will be able to perform these complex evaluations in the next year due to a newly developed Just-In-Time training guide for spinal ultrasound, combined with refinements in crew training and remote guidance procedures."
Researchers hope that if they understand better the way that the spine changes, they can improve rehabilitation efforts for astronauts who have returned to Earth.
However, for people hoping to go into space, the height increase is only temporary. Astronauts shrink back to their normal size after a couple of months back on Earth.
|
A minibeast, also called an invertebrate, is a creature without either a backbone or an internal skeleton. Humans have backbones and internal skeletons so are called vertebrates.
There are many different kinds of invertebrates, around 40,000 species in Britain alone, and millions across the rest of planet Earth.
Butterflies, moths, dragonflies, centipedes, spiders, scorpions, snails, beetles, crabs and worms are all invertebrates.
About 97% of creatures on Earth are invertebrates and without them we would not be able to survive. They help to pollinate plants, recycle waste material, provide food for other creatures such as birds and reptiles and much, much more.
Invertebrates have been living on earth for about 550 million years and have adapted to survive in different habitats, from woodlands to deserts.
There are several different types of invertebrates, each with their own characteristics.
|
The Ebola virus is considered one of the world's most dangerous pathogens. During the most severe outbreak to date in West Africa, over 11,000 deaths were documented between 2014 and 2016. Single cases are repeatedly reported from Europe as well, which are connected to previous travels to affected regions. An important source of infection are so-called reservoir hosts that carry the virus without being affected by it. For the various types of the Ebola virus, the most likely reservoir hosts are various species of bats and fruit bats.
For the first time, scientists investigated where nine of such bat and fruit bat species may encounter suitable habitats and climatic conditions in Africa. "Zaire ebolavirus is one of the most dangerous Ebola viruses. It kills up to 88 percent of those infected with it. To prevent or curb outbreaks of this virus, it is essential to know exactly where potential hotspots of infection may lurk," explains parasitologist Prof. Dr. Sven Klimpel of the Goethe University in Frankfurt and the Senckenberg Biodiversity and Climate Research Centre.
Based on ecological niche modeling, his team was able to show that the respective bat and fruit bat species are able to thrive in West and East Africa, including large parts of Central Africa. A wide belt of potential habitats extends from Guinea, Sierra Leone, and Liberia in the west across the Central African Republic, the Republic of the Congo and the Democratic Republic of the Congo to Sudan and Uganda in the East. A few of the studied bats and fruit bats may even occur in the eastern part of South Africa.
In a second step, the researchers compared the potential habitats with range maps of the bat and fruit bat species that were generated by the International Union for the Conservation of Nature (IUCN) on the basis of observations of these animals. In addition, the team considered where Zaire ebolavirus epidemics have broken out in the past. The results were surprising: "The modeled habitats of the Zaire ebolavirus hosts are larger than their previously known ranges. It is possible that the bats and fruit bats have not yet been able to reach habitats beyond these ranges due to the presence of certain barriers," says Klimpel.
"Another, more worrying explanation could be that science has hitherto underestimated the range of Ebola-transmitting bat and fruit bat species. In this case, the models would provide a more realistic picture," explains Dr. Lisa Koch, the study's lead author from Goethe University. Regions affected by Ebola outbreaks often suffer not only from health effects, but also from economic and social effects of the epidemic. The study's findings suggest to keep a closer eye on diseases that occur in the modeled ranges of the reservoir hosts and to inform the public about potential Ebola infections, ultimately alleviating the consequences of an epidemic.
With regard to Europe, Klimpel states: "Ebola viruses, just like the SARS-CoV-2 (Coronavirus), are viruses from the animal kingdom that can be transmitted to humans. It can be expected that diseases of this type will occur more frequently in the future, since humans have increasing contact with wild animals, and globalization facilitates the spread of viruses around the world. In Europe, with its overall efficient health system, Ebola infections are certainly going to remain isolated incidents in the future. Nonetheless, in view of these trends it would be beneficial to intensively train and further educate physicians and nursing personnel in the treatment of tropical infectious diseases in our latitudes as well."
To study and understand nature with its limitless diversity of living creatures and to preserve and manage it in a sustainable fashion as the basis of life for future generations - this has been the goal of the Senckenberg Gesellschaft für Naturforschung (Senckenberg Nature Research Society) for 200 years. This integrative "geobiodiversity research" and the dissemination of research and science are among Senckenberg's main tasks. Three nature museums in Frankfurt, Görlitz and Dresden display the diversity of life and the earth's development over millions of years. The Senckenberg Nature Research Society is a member of the Leibniz Association. The Senckenberg Nature Museum in Frankfurt am Main is supported by the City of Frankfurt am Main as well as numerous other partners. Additional information can be found at http://www.senckenberg.de.
|
Michael Faraday discovered benzene in 1825. He extracted it from cylinders of compressed illuminating gas which had been collected from the pyrolysis of whale oil. Faraday called this newly discovered liquid bicarburet of hydrogen.
Faraday's enormous contribution to the sciences has been recognised in many ways, from his appearance on £20 notes to his depiction on stamps in various countries.
In 1833, Eilhard Mitscherlich, a German chemist, produced what he called benzin via the distillation of benzoic acid (from gum benzoin) and calcium oxide (lime).
In 1845 benzene was found in coal tar by the English chemist Charles Mansfield, working under August W. Hofmann. Four years later, Mansfield began the first industrial-scale production of benzene, based on the coal-tar method. Coal tar is made by destructively distilling coal and is still a source of benzene today.
Benzene was first synthesized in a laboratory in 1870 by Pierre Berthelot who passed acetylene through a red hot tube.
Benzene is a non-polar, colourless, flammable liquid with a sweet and distinctive aromatic smell that some find pleasant. But beware: benzene is highly toxic and is absorbed through the skin. Benzene has a melting point of 5.5°C and a boiling point of 80°C and is therefore a liquid at room temperature. Benzene is immiscible with water and will form the upper layer, since it has a lower density of 0.879 g/cm3. Benzene is a very good solvent for organic compounds, but it is safer to use its derivative methylbenzene (toluene).
For a quarter of a century following its discovery, benzene's structure continued to puzzle the world's greatest scientists. It was known to have a molecular weight of 78, which was due to the presence of six carbon atoms (6 x 12) plus six hydrogen atoms (6 x 1). August Kekule was one of the organic chemists working on the structural elucidation of benzene. He originally placed all six carbon atoms in a row but soon realised that this did not make sense, for where was he to put all the hydrogen atoms? The story goes that whilst trying to solve the problem of benzene's structure, Kekule had a daydream whilst dozing on a London bus.
The old saying goes that: "you wait ages for one bus, then three come along at once". In Kekule's case, six came along.
Kekule visualized a snake with its tail in its mouth that was spinning around. That snake was benzene, biting itself in the tail, which made sense if it possessed alternating single and double bonds. Kekule's dream is shown as an interesting animation. Kekule's dream of a snake eating its own tail recalls an ancient symbol called the Ouroboros, which represents the cyclicality of life. One should bear in mind that this 'story' first appeared in the journal Berichte der Durstigen Chemischen Gesellschaft, which is a parody of the respected journal Berichte der Deutschen Chemischen Gesellschaft. What is clear is that Kekule's understanding of the tetravalent nature of carbon was built on the foundation of the often overlooked work of Archibald Scott Couper. Every student who has ever drawn covalent bonds as lines on paper joining atoms is following in Couper's footsteps. The former East Germany (DDR) commemorated the work of Kekule on a stamp which shows his structure of benzene.
Kekule drew what he believed to be two identical structures for benzene and now he needed to find proof that his 'daydream' was correct. Rod Beavon of Westminster School has written an online biography of Kekule.
It is important to realise that benzene has a planar structure. The distance between adjacent carbon atoms found by X-ray diffraction is 0.139 nm (139 picometres). This is a distance which is intermediate between the longer single C-C bonds (147 pm) and the shorter double C=C bonds (135 pm). The relative length of the C-C bonds in benzene can be explained in terms of the delocalized electrons, which leads to the intermediate bond lengths. The cyclic nature of benzene was finally confirmed by the eminent crystallographer Kathleen Lonsdale.
In 1931 Linus Pauling proposed his resonance theory, which describes delocalised electrons and is able to account for benzene's known reactions. This theory explained the stability of the delocalised electrons (lower energy) and the reason why benzene's reactions are mainly electrophilic substitution reactions. Pauling's theory states that instead of the Kekule structures I and II shown below, we have a single structure III with the delocalised electrons shown on paper as a circle in the middle of a hexagon.
Problems with Kekule's structure were first hinted at when it became apparent that the enthalpy of hydrogenation of benzene (-208 kJ mol-1) was found not to be three times the value found for cyclohexene (-121 kJ mol-1) with its one C=C bond. The 'missing' energy of hydrogenation (3 × 121 − 208 = 155 kJ mol-1) is called the resonance energy, and is a measure of benzene's stability. The aromatic stability comes from the sideways overlap of electrons in the π-bond above and below the six carbon atoms in the ring. The delocalised electrons are shown as a circle in the hexagon. The reason substitution is preferred is that benzene and its derivatives are more thermodynamically stable after a substitution reaction than if an addition reaction took place. For those who realise that the bond order in benzene is in fact 1.5, another way to represent the structure of benzene, IV, is as a hexagon with a dotted circle inside the hexagon. Like the spelling of sulfur, the drawing of benzene can also lead to debate amongst chemists.
An interesting challenge to give our brightest students is to get them to work out the structure of three other molecules having the same molecular formula as benzene. Once these isomeric structures are solved they could also be asked to predict their spectra. There are two linear structures which are the positional isomers; 1,5-hexadiyne and 2,4-hexadiyne, and a cyclic structure 5-methylene-1,3-cyclopentadiene, known trivially as fulvene.
Benzene's spectra are surprisingly simple until one considers its planar structure and its symmetry. The proton nmr spectrum of benzene consists of a single peak at 7.26 ppm as all its protons are equivalent. The signal is downfield (+δ) compared to the two equivalent vinylic protons (=CH) of cyclohexene at 5.6 ppm due to the diamagnetic ring current.
Benzene's peak is shifted downfield because of a ring current effect where the circulating delocalized electrons are at 90° to the applied magnetic field. The result is that benzene's protons are deshielded because the induced magnetic field is in the same direction as the applied magnetic field. This means that a higher frequency is needed to achieve resonance because the local magnetic field is higher for the protons. The diagram below should help for those not studying Physics at advanced level as should a look at Ampere's Law.
Likewise the carbon-13 nmr spectrum of benzene is a single peak at 128 ppm.
The mass spectrum of benzene is worth a look, if only because the [M+1]+ peak is 6.6% of the [M]+ peak due to the presence of the 13C isotope (six carbons × 1.1% natural abundance ≈ 6.6%).
The infra-red (IR) spectrum of benzene is one of the most simple and it shows all the expected aromatic C-H resonances. C-H stretches typically occur around 3000 cm-1 as sharp troughs and also =C-H bending around 1500 cm-1.
The crystal structure of benzene was first published in 1932 by two scientists from Leeds University and the original paper can be found in Cox & Smith, Proc. Roy. Soc., A, 135, 491 (1932). You too can look at the crystal structure of benzene by clicking on a wonderful resource written by Chas McCaw (Winchester College) on crystal structures.
There is no specific positive test for arenes. They do not decolourize bromine water since they do not readily undergo addition reactions. Aromatic compounds do however burn with smoky flames due to the very high percentage of carbon in these molecules.
Benzene is an important feedstock for the chemical and pharmaceutical industries. It is for this reason that its reactions are studied in detail in sixth forms.
The chemistry of aromatic compounds is dominated by electrophilic substitution reactions. In these reactions the π-cloud and its delocalisation are preserved.
Concentrated nitric acid reacts slowly and inefficiently with benzene, producing a yield of around 5% nitrobenzene, a toxic yellow oil with a melting point of 5°C and a boiling point of 210°C. When a nitrating mixture of concentrated sulfuric and concentrated nitric acid is used, much higher yields are produced.
This is because the sulfuric acid catalyses the formation of the electrophilic 'nitronium ion' (NO2+) according to the following equation:
HNO3 + 2 H2SO4 → NO2+ + 2 HSO4- + H3O+
The 'nitronium ion' then goes on to react with the benzene in a electrophilic substitution reaction producing predominantly nitrobenzene. If the temperature is allowed to rise above 50°C then some dinitrated product is produced which is a pale-yellow solid at room temperature.
Friedel-Crafts alkylation is also an electrophilic substitution reaction, in which an alkyl group replaces a hydrogen atom on a benzene ring. The Friedel-Crafts alkylation requires the use of a catalyst to form the electrophile from the alkyl halide, and anhydrous aluminium chloride is normally used for this purpose. The electrophile (R+) is formed according to the following equation: RX + AlCl3 → R+ + AlCl4-
An example of a Friedel-Crafts alkylation is shown right:
Friedel-Crafts acylation is very similar to a Friedel-Crafts alkylation, it is the electrophilic substitution reaction of an acyl halide with benzene. An acyl group is an alkyl group attached directly to a carbonyl group. Once again the electrophile (CH3CO+) is generated using AlCl3 as the catalyst.
CH3COCl + AlCl3 → CH3CO+ + AlCl4-
In this example, ethanoyl chloride is reacted with benzene, producing phenylethanone (known trivially as acetophenone) according to the reaction shown on the right.
Benzene reacts with halogens in the presence of a catalyst (halogen carrier), producing halobenzenes (C6H5X) and the corresponding hydrogen halide in a substitution reaction. The catalyst normally used is aluminium or iron powder. These metals react with some of the halogen, for example chlorine, forming the corresponding trihalide salts of the metal according to:
2 Al + 3 Cl2 → 2 AlCl3
The overall reaction is: C6H6 + Cl2 → C6H5Cl + HCl
An interesting side reaction is that in the presence of UV light and high temperatures, benzene can undergo an addition reaction forming a saturated compound; for example, with excess chlorine the compound 1,2,3,4,5,6-hexachlorocyclohexane is formed according to: C6H6 + 3 Cl2 → C6H6Cl6
Aromatic sulfonation is when an H is replaced by SO3H (a sulfonic acid group). In the case of benzene, it needs to be heated with conc. H2SO4 for 8 h to produce benzenesulfonic acid. This reaction is too slow and is not to be attempted, as benzene is implicated in childhood leukemia.
Instead, it's better (faster and safer) to use toluene (methylbenzene), as the methyl (CH3) group is electron-releasing and speeds up the reaction. The procedure is: add 30 drops of conc. H2SO4 to 12 drops of methylbenzene (toluene) in a test-tube. Warm until the methylbenzene has dissolved into the acid layer. Pour the mixture into 30 cm3 of a cold saturated solution of sodium chloride. White crystals of sodium methylbenzenesulfonate are formed.
A selection of downloadable aromatic reactions is to be found in Chapter 22 of the Pre-U chemistry course book that I finished in 2010. Have fun working out the names of compounds A-M and the conditions and mechanisms needed for the reaction steps 1-12.
The term 'aromatic' was originally used for naturally occurring, sweet-smelling compounds with an aroma. Today the term is associated with benzene rings. A selection from the multitude of aromatic compounds include; naphthalene, anthracene and pyrene.
Can you think of another way to fuse three aromatic rings that is different to anthracene? Remember, Einstein said, "imagination is more important than knowledge". Click here when you think you have the answer.
And finally...Purple benzene
Back to Molecule of the Month page.
|
The concept of intelligence covers all the abilities of the mind, such as thought, reasoning, problem solving, language use and learning. The concept has changed over time, and different theories have been put forward to explain it. This paper seeks to evaluate three models of intelligence: Sternberg’s model, Spearman’s model and Gardner’s model.
For proper evaluation of the three models of intelligence, it is imperative to establish an understanding of each model. First, Sternberg’s model of intelligence, also known as the ‘triarchic theory of intelligence’, is composed of three sub-theories, all focusing on the information-processing ability of the human mind. The three sub-theories are analytic intelligence, which measures the ability of an individual to solve common problems such as those in academic tests; creative intelligence, which is shown by an individual’s way of reacting to novel situations; and practical intelligence, which measures an individual’s actual use of intelligence in solving real-life problems (Williams et al, 2003). Spearman’s model of intelligence postulates that all tests of mental ability are correlated because there are some ‘general’ factors that are tested by most intelligence tests. The occurrence of differences in tests is because of testing ‘specific’ factors (Lubinski, 2004). Gardner’s model of intelligence postulates that human beings do not have a single general intelligence but multiple intelligences, which all form independent systems in the brain.
These three models of intelligence are in some ways correlated with each other, and all explain the same aspect of intelligence. When compared, Sternberg’s model and Gardner’s model both suggest that intelligence should not be measured by using only a test which measures a single ability, but rather by a range of tests which test several abilities. Similarly, Spearman’s model also suggests that there are different factors, which he calls ‘specific factors’, which will produce different results when mental abilities are tested. In this case, Spearman’s model and Sternberg’s model both suggest that when testing the abilities of the human brain, several aspects should be included (Sternberg, 2003). Similarly, Spearman’s model, which indicates that there are ‘specific factors’ that can produce different results when tested, is similar to Gardner’s multiple intelligence model, which indicates that there are multiple intelligences in human beings, some of which are conditioned by the environment. The ‘specific factors’ in Spearman’s model are similar to the ‘cultural’ intelligences in Gardner’s model, which also are related to practical and processing intelligence in Sternberg’s model.
All these models of intelligence are different from Galton’s original theory of general intelligence, which postulates that intelligence can be measured by measuring the time of an individual’s reaction to a cognitive task. Sternberg’s, Gardner’s and Spearman’s models differ because they suggest that there is no single way that intelligence can be measured, as the human brain works in different ways depending on different factors such as the environment and culture that an individual is in. To Galton, there is a general intelligence which can be measured by a single test of mental activity, while Sternberg, Gardner and Spearman suggest that there are different forms of intelligence which cannot be tested by a single mental activity test.
In my view, the best model of measuring intelligence is Gardner’s model, which suggests that humans have multiple intelligences. This model efficiently shows how the human brain works. There are some cases where an individual may have lost one aspect of cognitive functioning but has developed another. Should just a single aspect of cognitive functioning be used to measure the intelligence of such an individual, the results would not be accurate. Also, there are cases where people from a given location all have a given level of intelligence, especially in handling a given task, due to the environmental factors that influence their use of the brain. This explains why Eskimos are good at ice fishing compared to people from other regions.
|
How do you add a style tag in HTML?
CSS can be added to HTML documents in 3 ways: Inline – by using the style attribute inside HTML elements. Internal – by using a <style> element in the <head> section. External – by using a <link> element to link to an external CSS file.
How do I add a style tag to my body?
Use a new internal style sheet in the <head> tag, i.e., add a new <style> element outside the area you have access to (that is, not inside the body tag but inside the head tag). A new style object will be created.
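A minimal sketch of that approach using JavaScript (the rule added below is hypothetical, purely for illustration):

```html
<script>
  // Build a new <style> element and attach it to the <head>,
  // leaving the <body> markup untouched.
  var style = document.createElement("style");
  style.textContent = "p { color: darkblue; }"; // hypothetical rule
  document.head.appendChild(style);
</script>
```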
Where is style tag used in HTML?
The <style> HTML element contains style information for a document, or part of a document. It contains CSS, which is applied to the contents of the document containing the <style> element.
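A minimal sketch of the <style> element in place (the rule shown is illustrative):

```html
<!DOCTYPE html>
<html>
  <head>
    <style>
      /* CSS here applies to the contents of this document */
      h1 { color: teal; }
    </style>
  </head>
  <body>
    <h1>This heading renders in teal</h1>
  </body>
</html>
```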
How do you add a style rule in CSS?
CSS can be added to HTML documents in 3 ways, as the combined sketch after this list shows:
- Inline – by using the style attribute inside HTML elements.
- Internal – by using a <style> element in the <head> section.
- External – by using a <link> element to link to an external CSS file.
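A combined sketch of all three approaches in one page (the file name styles.css and all rules are assumptions for illustration):

```html
<head>
  <!-- External: a <link> to a separate stylesheet -->
  <link rel="stylesheet" href="styles.css">

  <!-- Internal: a <style> element in the <head> -->
  <style>
    body { font-family: sans-serif; }
  </style>
</head>
<body>
  <!-- Inline: a style attribute on the element itself -->
  <p style="color: green;">This paragraph is styled inline.</p>
</body>
```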
How do you use style tag?
The <style> tag is used to define style information (CSS) for a document. Inside the <style> element you specify how HTML elements should render in a browser.
What is the difference between logical tags and physical tags?
The basic difference between logical tags and physical tags is closely related to the concept of HTML.
|Logical tag||Physical tag|
|Logical tags describe the meaning of the enclosed text (e.g., <em> for emphasis), which also helps screen readers used by visually impaired users||Physical tags indicate exactly how specific characters should be formatted (e.g., <b> for bold)|
What is a style?
A style is a set of formatting attributes that define the appearance of an element in the document. For example, a character style will contain font or font face attributes, while a paragraph style will contain paragraph alignment and line spacing attributes.
What are embedded styles?
Embedded styles reside in the head of the document. They’re encased in <style> tags and look much like external CSS files within that portion of the document. Embedded styles affect only the tags on the page they are embedded in. Once again, this approach negates one of the strengths of CSS.
What CSS selector would style a tag that looks like this?
The descendant selector is how you can apply styles to all elements that are descendants of a specified element. Selecting all <h1> elements nested inside <div> elements looks like this.
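The rule the text refers to would look something like this (the color chosen is illustrative):

```css
/* Descendant selector: styles every <h1> nested anywhere inside a <div> */
div h1 {
  color: maroon;
}
```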
What are the 3 types of CSS?
There are three types of CSS which are given below:
- Inline CSS.
- Internal or Embedded CSS.
- External CSS.
What is correct CSS syntax?
The selector points to the HTML element you want to style. The declaration block contains one or more declarations separated by semicolons. Each declaration includes a CSS property name and a value, separated by a colon.
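For example, a rule following that syntax, with selector p followed by a declaration block of property: value pairs (the specific values are illustrative):

```css
p {
  color: red;          /* property name, colon, value */
  text-align: center;  /* declarations separated by semicolons */
}
```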
What is the use of tags in HTML?
HTML tags are used to create HTML documents and render their properties. Each HTML tag has different properties. An HTML file must have some essential tags so that a web browser can differentiate between simple text and HTML text. You can use as many tags as you want, as per your code requirements.
What is the style rule?
A style rule is made of three parts − Selector – A selector is an HTML tag at which a style will be applied. This could be any tag like <h1> or <table> etc. Property – A property is a type of attribute of an HTML tag. Put simply, all the HTML attributes are converted into CSS properties. Value – A value is assigned to a property; for example, the color property can be given the value red. A small sketch follows.
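Putting the three parts together (the values chosen are illustrative):

```css
/* selector: table */
table {
  border: 1px solid black; /* property: border, value: 1px solid black */
  width: 100%;             /* property: width, value: 100% */
}
```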
What is the hyperlink tag for?
The <a> tag (anchor tag) in HTML is used to create a hyperlink on the webpage. This hyperlink is used to link the webpage to other webpages. It’s either used to provide an absolute reference or a relative reference as its “href” value.
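A short sketch of both kinds of href value (the URLs are placeholders):

```html
<!-- Absolute reference: a full URL, usually to another site -->
<a href="https://example.com/guide.html">Read the guide</a>

<!-- Relative reference: a path resolved against the current page's location -->
<a href="contact.html">Contact us</a>
```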
What are the different CSS selectors?
There are several different types of selectors in CSS; a combined sketch follows the list below.
- CSS Element Selector.
- CSS Id Selector.
- CSS Class Selector.
- CSS Universal Selector.
- CSS Group Selector.
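One combined sketch covering each selector type in the list (the id and class names are made up for illustration):

```css
p { margin: 0; }                  /* element selector: every <p> */
#header { height: 60px; }         /* id selector: the element with id="header" */
.note { color: gray; }            /* class selector: elements with class="note" */
* { box-sizing: border-box; }     /* universal selector: all elements */
h1, h2, h3 { font-weight: bold; } /* group selector: several selectors share one rule */
```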
|
Vietnam - System of Government
System of government
The communist party-controlled government of Vietnam has ruled under three state constitutions. The first was promulgated in 1946, the second in 1959, and the third in 1980. Significantly, each was created at a milestone in the evolution of the VCP, and each bore the mark of its time.
The purpose of the 1946 constitution was essentially to provide the communist regime with a democratic appearance. The newly established government of the Democratic Republic of Vietnam (DRV) was sensitive about its communist sponsorship, and it perceived democratic trappings as more appealing to noncommunist nationalists and less provocative to French negotiators. Even though such guarantees were never intended to be carried out, the constitution provided for freedom of speech, the press, and assembly. The document remained in effect in Viet Minh-controlled areas throughout the First Indochina War (1946-54) and in North Vietnam following partition in 1954, until it was replaced with a new constitution in 1959.
The second constitution was explicitly communist in character. Its preamble described the DRV as a "people's democratic state led by the working class," and the document provided for a nominal separation of powers among legislative, executive, and judicial branches of government. On paper, the legislative function was carried out by the National Assembly. The assembly was empowered to make laws and to elect the chief officials of the state, such as the president (who was largely a symbolic head of state), the vice president, and cabinet ministers. Together those elected (including the president and vice president) formed a Council of Ministers, which constitutionally (but not in practice) was subject to supervision by the Standing Committee of the National Assembly. Headed by a prime minister, the council was the highest executive organ of state authority. Besides overseeing the Council of Ministers, the assembly's Standing Committee also supervised on paper the Supreme People's Court, the chief organ of the judiciary. The assembly's executive side nominally decided on national economic plans, approved state budgets, and acted on questions of war or peace. In reality, however, final authority on all matters rested with the Political Bureau.
The reunification of North and South Vietnam (the former Republic of Vietnam) in 1976 provided the primary motivation for revising the 1959 constitution. Revisions were made along the ideological lines set forth at the Fourth National Congress of the VCP in 1976, emphasizing popular sovereignty and promising success in undertaking "revolutions" in production, science and technology, culture, and ideology. In keeping with the underlying theme of a new beginning associated with reunification, the constitution also stressed the need to develop a new political system, a new economy, a new culture, and a new socialist person.
The 1959 document had been adopted during the tenure of Ho Chi Minh and demonstrated a certain independence from the Soviet model of state organization. The 1980 Constitution was drafted when Vietnam faced a serious threat from China, and political and economic dependence on the Soviet Union had increased. Perhaps, as a result, the completed document resembles the 1977 Soviet Constitution.
The 1980 Vietnamese Constitution concentrates power in a newly established Council of State much like the Presidium of the Supreme Soviet, endowing it nominally with both legislative and executive powers. Many functions of the legislature remain the same as under the 1959 document, but others have been transferred to the executive branch or assigned to both branches concurrently. The executive branch appears strengthened overall, having gained a second major executive body, the Council of State, and the importance of the National Assembly appears to have been reduced accordingly. The role of the Council of Ministers, while appearing on paper to have been subordinated to the new Council of State, in practice retained its former primacy.
Among the innovative features of the 1980 document is the concept of "collective mastery" of society, a frequently used expression attributed to the late party secretary, Le Duan (1908-1986). The concept is a Vietnamese version of popular sovereignty that advocates an active role for the people so that they may become their own masters as well as masters of society, nature, and the nation. It states that the people's collective mastery in all fields is assured by the state and is implemented by permitting the participation in state affairs of mass organizations. On paper, these organizations, to which almost all citizens belong, play an active role in government and have the right to introduce bills before the National Assembly.
Another feature is the concept of socialist legality, which dictates that "the state manage society according to law and constantly strengthen the socialist legal system." The concept, originally introduced at the Third National Party Congress in 1960, calls for achieving socialist legality through the state, its organizations, and its people. Law, in effect, is made subject to the decisions and directives of the party.
The apparent contradiction between the people's right to active participation in government suggested by collective mastery and the party's absolute control of government dictated by "socialist legality" is characteristic of communist political documents, in which rights provided the citizenry often are negated by countermeasures appearing elsewhere in the document. Vietnam's constitutions have not been guarantors, therefore, of the rights of citizens or of the separation and limitation of powers. They have been intended instead to serve the party-controlled regime.
The 1980 Constitution comprises 147 articles in 12 chapters dealing with numerous subjects, including the basic rights and duties of citizens. Article 67 guarantees the citizens' rights to freedom of speech, the press, assembly, and association, and the freedom to demonstrate. Such rights are, nevertheless, subject to a caveat stating "no one may misuse democratic freedoms to violate the interests of the state and the people." With this stipulation, all rights are conditionally based upon the party's interpretation of what constitutes behavior in the state's and people's interest.
You can read more regarding this subject on the following websites:
Politics of Vietnam - Wikipedia
Vietnam Country Studies index
Country Studies main page
|
Caterpillars, the larvae of butterflies and moths (Order Lepidoptera), are exquisite creatures that display an array of colors, patterns, and interesting behaviors. Although some species are considered pests in our gardens and forests, they are extremely important in terrestrial food webs, serving as the primary food source for many of our resident and migrant songbirds. Without caterpillars, our forest would be silent in spring. Caterpillars also play an important role as macrodecomposers by shredding and consuming leaves which helps to accelerate the nutrient cycling process. Fall is a great time of year to hunt for caterpillars. Some species, like the familiar wooly bear (Pyrrharctia isabella) pictured below, can be found roaming about in search of a hibernaculum, such as your wood pile, where they overwinter. Not only is caterpillar hunting good sport, it can give us a greater appreciation for the little creatures that go unnoticed around us.
One doesn’t have to travel to some faraway exotic place to find caterpillars. They can be found right in one’s backyard along forest edges, fields, and gardens. Many species have evolved cryptic coloration and behaviors that can fool even the most intelligent predators (including caterpillar hunters). The first step to finding caterpillars, or for that matter any insect, is to walk more slowly and observe more closely, and magically they will start to appear as you develop a “search image”. Rarely will the caterpillar be sitting exposed on the leaf surface, so be sure to examine the underside and along the leaf margins, stems, and flower heads. A more efficient method is to use a beating sheet or “drop cloth”. This can be as simple as using a white bed sheet, or umbrella, and placing it under a limb of a tree or shrub and hitting the limb with a stick to dislodge the caterpillars from the foliage. In addition to caterpillars, a multitude of other species including jumping spiders, ants, beetles, and stinkbugs can be found on the sheet, but that is another story. One word of caution: be careful when handling spiny or hairy caterpillars, as the hairs of some species can cause an allergic reaction in some people.
Once you find a caterpillar, the next step is figuring out what it is and learning about its life history. Questions like: What does it turn into? What does it eat? What is its range? Is it a pest in my garden? These questions can best be answered by referring to a field guide of the caterpillars occurring in your area. An outstanding guide is “Caterpillars of Eastern North America” by David Wagner, which will enable you to identify just about any species you will find in your backyard or in our region.
Provided below are some photographs I recently took during a caterpillar hunt in Deering, along with a few brief comments. If you find any caterpillars that you would like to share, please don’t hesitate to email images to [email protected].
Polyphemus Moth (Antheraea polyphemus)
You can imagine my excitement when I turned over the leaves of a sugar maple and found this spectacular fluorescent green silkmoth caterpillar. It has been reported that Polyphemus caterpillars sometimes make a snapping sound with their mandibles.
Great Ash Sphinx (Sphinx chersis)
White Ash (Fraxinus americana) is a common tree in Deering, so it comes as no surprise to find this large ash-feeding species. However, this may quickly change now that the invasive emerald ash borer (Agrilus planipennis) has been detected in town (2017). This beetle has already killed hundreds of millions of ash trees in the eastern United States. Infested trees die quickly; within 3-5 years. For more information about this destructive forest pest and what you can do, please refer to nhbugs.org. It’s sad to think that if we lose our ash trees, we may no longer see the Great Ash Sphinx, or the other species that depend on ash for their survival.
Monkey Slug (Phobetron pithecium)
One of the strangest looking caterpillars I have ever encountered, with its slug-like body. I was lucky to find two monkey slugs on the same day; one on cherry and the other on oak in a field. It’s hard to imagine what this bizarre creature is trying to look like.
Wooly Bear (Pyrrharctia isabella)
Perhaps one of our best known caterpillars, it is frequently seen this time of year crossing roads and driveways. I remember as a child being told that the width of the orange band can predict the severity of the coming winter. It turns out the width of the band is quite variable, increasing in size as it molts.
Red-humped Oakworm (Symmerista canicosta)
This is a good year for red-humped oakworms, as caterpillars were found in just about every beating sample from red oak. The caterpillars start life as gregarious feeders forming large clusters on the underside of leaves and become solitary in later instars. They have been known to cause widespread defoliation of oaks, especially in the northeast. Full-grown caterpillars drop to the ground in late September and pupate in the leaf litter.
Southern Oak Dagger Moth (Acronicta increta)
The color of this caterpillar is variable, ranging from green to a beautiful salmon-pink, with pairs of white spots on top of the abdomen. It was found resting on the underside of a red oak leaf in a characteristic position with the head bent back along the abdomen. This moth is part of a difficult species complex, making identification quite challenging.
Unicorn Caterpillar (Schizura unicornis)
This caterpillar is aptly named for the unicorn-like horn on its abdomen. A master of camouflage, it is easily overlooked when mimicking the edge of a partially eaten leaf. It feeds on a wide range of trees and shrubs, including cherry.
Hickory Tussock Moth (Lophocampa caryae)
The common name of this moth is somewhat misleading, as it feeds on a number of tree species other than hickory. In fact, I’m not aware of any hickory growing in the area where this caterpillar was commonly seen feeding on white ash, red oak, and birch. The hairs of this species can cause an allergic reaction in some people.
Banded Tussock Moth (Halysidota tessellaris)
One of the few caterpillars in our area that will rest entirely exposed on the upper surface of the leaf, suggesting they are distasteful to birds. This caterpillar is a generalist, feeding on many species of woody shrubs and trees.
Eastern Tiger Swallowtail (Papilio glaucus)
Its life as a caterpillar is nearly over, as it prepares to pupate and overwinter as a chrysalis. Just prior to pupation, the caterpillar turns from green to dark brown and spins a silken girdle around the thorax to hold the chrysalis in an upright position.
Gray furcula (Furcula cinerea)
A favorite among caterpillar hunters with its long anal prolegs that resemble a forked tail. When disturbed, the larva raises the erect ‘tail’ above its body in a threatening manner. It was found feeding on a poplar along the edge of a field.
|
What is a complete blood count?
A complete blood count or CBC is a blood test that measures many different parts and features of your blood, including:
- Red blood cells, which carry oxygen from your lungs to the rest of your body
- White blood cells, which fight infection. There are five major types of white blood cells. A CBC test measures the total number of white cells in your blood. A test called a CBC with differential also measures the number of each type of these white blood cells
- Platelets, which help your blood to clot and stop bleeding
- Haemoglobin, a protein in red blood cells that carries oxygen from your lungs and to the rest of your body
- Hematocrit, a measurement of how much of your blood is made up of red blood cells
A complete blood count may also include measurements of chemicals and other substances in your blood. These results can give your health care provider important information about your overall health and risk for certain diseases.
Other names for a complete blood count: CBC, full blood count, blood cell count
What is it used for?
A complete blood count is a commonly performed blood test that is often included as part of a routine checkup. Complete blood counts can be used to help detect a variety of disorders including infections, anaemia, diseases of the immune system, and blood cancers.
Why do I need a complete blood count?
Your health care provider may have ordered a complete blood count as part of your checkup or to monitor your overall health. In addition, the test may be used to:
- Diagnose a blood disease, infection, immune system disorder, or other medical conditions
- Keep track of an existing blood disorder
What happens during a complete blood count?
A health care professional will take a blood sample from a vein in your arm, using a small needle. After the needle is inserted, a small amount of blood will be collected into a test tube or vial. You may feel a little sting when the needle goes in or out. This usually takes less than five minutes.
Will I need to do anything to prepare for the test?
You don’t need any special preparations for a complete blood count. If your health care provider has also ordered other blood tests, you may need to fast (not eat or drink) for several hours before the test. Your health care provider will let you know if there are any special instructions to follow.
Are there any risks to the test?
There is very little risk in having a blood test. You may have slight pain or a bruise at the spot where the needle was put in, but most symptoms go away quickly.
What do the results mean?
A CBC counts the cells and measures the levels of different substances in your blood. There are many reasons your levels may fall outside the normal range. For instance:
- Abnormal red blood cell, haemoglobin, or hematocrit levels may indicate anaemia, iron deficiency, or heart disease
- Low white cell count may indicate an autoimmune disorder, bone marrow disorder, or cancer
- High white cell count may indicate an infection or a reaction to medication
If any of your levels are abnormal, it does not necessarily indicate a medical problem needing treatment. Diet, activity level, medications, a woman’s menstrual cycle, and other considerations can affect the results. Talk to your health care provider to learn what your results mean.
Is there anything else I need to know about a complete blood count?
A complete blood count is only one tool your health care provider uses to learn about your health. Your medical history, symptoms, and other factors will be considered before a diagnosis. Additional testing and follow-up care may also be recommended.
|
Heart and Circulatory System
What Does the Heart Do?
The heart is a pump, usually beating about 60 to 100 times per minute. With each heartbeat, the heart sends blood throughout our bodies, carrying oxygen to every cell. After delivering the oxygen, the blood returns to the heart. The heart then sends the blood to the lungs to pick up more oxygen. This cycle repeats over and over again.
What Does the Circulatory System Do?
The circulatory system is made up of blood vessels that carry blood away from and towards the heart. Arteries carry blood away from the heart and veins carry blood back to the heart.
The circulatory system carries oxygen, nutrients, and hormones to cells, and removes waste products, like carbon dioxide. These roadways travel in one direction only, to keep things going where they should.
What Are the Parts of the Heart?
The heart has four chambers — two on top and two on bottom:
- The two bottom chambers are the right ventricle and the left ventricle. These pump blood out of the heart. A wall called the interventricular septum is between the two ventricles.
- The two top chambers are the right atrium and the left atrium. They receive the blood entering the heart. A wall called the interatrial septum is between the atria.
Watch the Heart Pump
Animation showing the normal heart anatomy and blood pumping through pulmonary and systemic circulation.
The atria are separated from the ventricles by the atrioventricular valves:
- The tricuspid valve separates the right atrium from the right ventricle.
- The mitral valve separates the left atrium from the left ventricle.
Two valves also separate the ventricles from the large blood vessels that carry blood leaving the heart:
- The pulmonic valve is between the right ventricle and the pulmonary artery, which carries blood to the lungs.
- The aortic valve is between the left ventricle and the aorta, which carries blood to the body.
What Are the Parts of the Circulatory System?
Two pathways come from the heart:
- The pulmonary circulation is a short loop from the heart to the lungs and back again.
- The systemic circulation carries blood from the heart to all the other parts of the body and back again.
In pulmonary circulation:
- The pulmonary artery is a big artery that comes from the heart. It splits into two main branches, and brings blood from the heart to the lungs. At the lungs, the blood picks up oxygen and drops off carbon dioxide. The blood then returns to the heart through the pulmonary veins.
In systemic circulation:
- Blood that returns to the heart from the lungs has picked up lots of oxygen, so it can now go out to the body. The aorta is a big artery that leaves the heart carrying this oxygenated blood. Branches off of the aorta send blood to the muscles of the heart itself, as well as all other parts of the body. Like a tree, the branches get smaller and smaller as they get farther from the aorta.
At each body part, a network of tiny blood vessels called capillaries connects the very small artery branches to very small veins. The capillaries have very thin walls, and through them, nutrients and oxygen are delivered to the cells. Waste products are brought into the capillaries.
Capillaries then lead into small veins. Small veins lead to larger and larger veins as the blood approaches the heart. Valves in the veins keep blood flowing in the correct direction. Two large veins that lead into the heart are the superior vena cava and inferior vena cava. (The terms superior and inferior don't mean that one vein is better than the other, but that they're located above and below the heart.)
Once the blood is back in the heart, it needs to re-enter the pulmonary circulation and go back to the lungs to drop off the carbon dioxide and pick up more oxygen.
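As a compact summary of the two loops described above, this sketch traces one drop of blood through the standard anatomical sequence; the structure names follow the article.

```python
# Trace one drop of blood through the pulmonary loop, then the systemic loop.
PULMONARY_LOOP = [
    "right atrium", "tricuspid valve", "right ventricle", "pulmonic valve",
    "pulmonary artery", "lungs (pick up O2, drop off CO2)", "pulmonary veins",
]
SYSTEMIC_LOOP = [
    "left atrium", "mitral valve", "left ventricle", "aortic valve",
    "aorta", "arteries", "capillaries (deliver O2 and nutrients, collect waste)",
    "veins", "superior/inferior vena cava",
]

for step in PULMONARY_LOOP + SYSTEMIC_LOOP:
    print("->", step)
```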
How Does the Heart Beat?
The heart gets messages from the body that tell it when to pump more or less blood depending on a person's needs. For example, when we're sleeping, it pumps just enough to provide for the lower amounts of oxygen needed by our bodies at rest. But when we're exercising, the heart pumps faster so that our muscles get more oxygen and can work harder.
How the heart beats is controlled by a system of electrical signals in the heart. The sinus (or sinoatrial) node is a small area of tissue in the wall of the right atrium. It sends out an electrical signal to start the contracting (pumping) of the heart muscle. This node is called the pacemaker of the heart because it sets the rate of the heartbeat and causes the rest of the heart to contract in its rhythm.
These electrical impulses make the atria contract first. Then the impulses travel down to the atrioventricular (or AV) node, which acts as a kind of relay station. From here, the electrical signal travels through the right and left ventricles, making them contract.
One complete heartbeat is made up of two phases:
- The first phase is called systole (SISS-tuh-lee). This is when the ventricles contract and pump blood into the aorta and pulmonary artery. During systole, the atrioventricular valves close, creating the first sound (the lub) of a heartbeat. When the atrioventricular valves close, it keeps the blood from going back up into the atria. During this time, the aortic and pulmonary valves are open to allow blood into the aorta and pulmonary artery. When the ventricles finish contracting, the aortic and pulmonary valves close to prevent blood from flowing back into the ventricles. These valves closing is what creates the second sound (the dub) of a heartbeat.
- The second phase is called diastole (die-AS-tuh-lee). This is when the atrioventricular valves open and the ventricles relax. This allows the ventricles to fill with blood from the atria, and get ready for the next heartbeat.
How Can I Help Keep My Child's Heart Healthy?
To help keep your child's heart healthy:
- Encourage plenty of exercise.
- Offer a nutritious diet.
- Help your child reach and keep a healthy weight.
- Go for regular medical checkups.
- Tell the doctor about any family history of heart problems.
Let the doctor know if your child has any chest pain, trouble breathing, or dizzy or fainting spells; or if your child feels like the heart sometimes goes really fast or skips a beat.
- Heart Murmurs
- Patent Ductus Arteriosus (PDA)
- Patent Foramen Ovale (PFO)
- Interrupted Aortic Arch (IAA)
- Cardiac Catheterization
- Congenital Heart Defects
- Atrial Septal Defect (ASD)
- Mitral Valve Prolapse
- Arrhythmia (Abnormal Heartbeat)
- Supraventricular Tachycardia (SVT)
- Body Basics: The Heart (Slideshow)
- Words to Know (Heart Glossary)
Note: All information is for educational purposes only. For specific medical advice, diagnoses, and treatment, consult your doctor.
|
Memory Town System for Languages
The memory town system was described by Dominic O’Brien in some of his books. Foreign vocabulary words are associated with mnemonic images, and then the images are mentally placed in sections of a town based on their grammatical functions. The memory town functions as a large Memory Palace.
Choose a town that you know well. Divide the town into sections based on the language’s grammar. For example, in Spanish there are two noun genders: masculine and feminine. You might divide your town in half, placing all of the masculine nouns in locations in one half of the town and all the feminine nouns in the other half.
Mnemonic images for verbs might be placed in a stadium, while images for adjectives might be placed in a park.
If you are learning more than one language, you can use a different town or city for each language.
This technique can also be used to separate other parts of speech. For example, if you want to memorize verb conjugation groups, you could place each verb conjugation group in a different section of the town. It could also be used for things like German noun pluralizations.
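As a minimal sketch of this bookkeeping (assuming a hypothetical town layout; you would substitute zones of a town you actually know), the mapping from grammatical category to town zone can be written out explicitly:

```python
# Memory town sketch for Spanish: grammatical category -> zone of a familiar
# town. Zone names are placeholders for locations you actually know.
TOWN_ZONES = {
    ("noun", "masculine"): "north side of town",
    ("noun", "feminine"): "south side of town",
    ("verb", None): "stadium",
    ("adjective", None): "park",
}

def place_word(word, part_of_speech, gender=None, image=""):
    """File a word's mnemonic image in its zone of the memory town."""
    zone = TOWN_ZONES[(part_of_speech, gender)]
    return f"'{word}' ({image}) -> {zone}"

print(place_word("perro", "noun", "masculine", image="a dog"))
print(place_word("mesa", "noun", "feminine", image="a table"))
print(place_word("correr", "verb", image="someone running laps"))
```

The point of the structure is that recalling which zone a word's image sits in also recalls its gender or part of speech for free.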
|
Students enter silently according to routine. Do Now assignments are handed out at the door and introduce students to the concept of nets. This word, however, is not used. These figures are called composite figures, made up of more than one 2D shape. In each item, students are asked to find the area of each composite figure. Students will have 5 minutes to work independently and silently.
Once those 5 minutes have passed it will be time to review the answers. It is important here to push students to copy down the work as horizontal algebraic expressions, including initial formulas and substitutions. The prior skill needed to complete the work this way is evaluating algebraic expressions through substitution. This is a skill we will continue to refine throughout the next few days as it impacts surface area and volume of 3D figures. It is also important to emphasize MP6, attention to precision. A dropped negative sign or misused parentheses can lead to a wrong solution. Thus, checking your work is supremely important in these tasks.
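As a hypothetical example of the kind of horizontal work being asked for (the figure and its dimensions are invented here), consider a composite figure made of a rectangle topped by a triangle:

```python
# Do Now item in the "horizontal" style: formula, substitution, evaluation.
# Figure: a rectangle (8 by 5) topped by a triangle (base 8, height 3).
l, w = 8, 5   # rectangle length and width
b, h = 8, 3   # triangle base and height

# A = lw + (1/2)bh = (8)(5) + (1/2)(8)(3) = 40 + 12 = 52 square units
area = l * w + 0.5 * b * h
print(area)
```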
After reviewing the answers to the Do Now, students are given 3D prisms and pyramids to explore. Each year I have collected 3D shapes that students have created, and I use them to give to students the next year for observations. Students have 3 – 4 minutes to write 2 facts they’ve noticed while manipulating these 3D shapes. These facts can include counting some part of the shape or noting the 2D shapes that make up their 3D figure. No names are yet given, no terms are used. Students are left to freely observe and describe what they hold. By introducing something concrete they can hold in their hands and exploring it before we begin throwing around new words, I hope students will build the connections to the new vocabulary on their own.
Next, students receive their class notes. Any lettering in red font indicates words that must be copied by students off the board. We review the definitions of prisms and pyramids and I ask students to tell me the name of the 3D figure they were holding during the previous exploration, making sure to justify their answer using the vocabulary included in the notes. For example: I’m holding a pyramid. I know this because these are the sloping sides that meet at this top point called the vertex.
Finally, we reach the second page in the class notes. I ask students to read the first paragraph and example with their partners and attempt that problem, showing their work in the way specified by the notes (using algebraic expressions). I give students 3 – 4 minutes to do this before I stop everyone to make sure the algebraic expressions are written on their paper. The following is explained to students:
Students will be asked to form groups of 4 to complete the class work. Once they have selected their groups and are seated, I will distribute the worksheet, which includes surface area problems only. I will be working with a small group of students, selected during the Do Now because I noticed them struggling to write algebraic expressions. I will begin working with them on white boards before asking them to work on paper. I will also ask the first 4 groups to complete the class work correctly to write their solutions up on the board. These solutions should include a drawing. This is a good opportunity to give the artists in class a chance to display a skill and practice it at the same time.
Once students complete this assignment, they will be handed an additional problem sheet to complete independently. This worksheet will include additional problems with evaluating algebraic expressions. Students are advised that these sheets will be graded. Those who cannot complete the assignment during class will be allowed to take it home to complete for homework.
Throughout this class work section it is important to ensure students are being pushed to show their work neatly and algebraically. I can be quite particular about the equal signs and parentheses students are asked to use in class, but I find it pays off because it lessens the likelihood of making small errors.
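A sketch of a class-work item in the same horizontal style (prism dimensions invented for illustration):

```python
# Surface area of a rectangular prism, 6 by 4 by 3, shown as one
# horizontal expression with substitutions.
l, w, h = 6, 4, 3

# SA = 2(lw) + 2(lh) + 2(wh) = 2(6*4) + 2(6*3) + 2(4*3) = 48 + 36 + 24
surface_area = 2 * (l * w) + 2 * (l * h) + 2 * (w * h)
print(surface_area)  # 108 square units
```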
Students will receive an exit ticket and will have 10 minutes to solve it. This is a somewhat complex problem and I want to be able to give students some feedback as they solve it, to see how far they can get. While everything we’ve been covering today is more skill and fluency driven, this example is closer to the type of Common Core problem solving needed for students to meet the new academic standards. Students should be reminded of any problem solving strategies we covered this year while they work.
Once students have completed the exit ticket they receive their HW and are asked to line up.
|
- Research suggests that most human brains take about 25 years to develop, though these rates can vary between men and women, and among individuals.
- Although the human brain matures in size during adolescence, important developments within the prefrontal cortex and other regions still take place well into one's 20s.
- The findings raise complex ethical questions about the way our criminal justice system punishes criminals in their late teens and early 20s.
At what age does someone become an adult? You might say that the 18th birthday marks the transition from childhood to adulthood. After all, that’s the age at which people can typically join the military and become fully independent in the eyes of the law.
But in light of research showing that our brains develop gradually over the course of several decades, and at different paces among individuals, should we start rethinking how we categorize children and adults?
“There isn’t a childhood and then an adulthood,” Peter Jones, who works as part of the epiCentre group at Cambridge University, told the BBC. “People are on a pathway, they’re on a trajectory.”
Brain development and age
One key part of that trajectory is the development of the prefrontal cortex, a significant part of the brain, in terms of social interactions, that affects how we regulate emotions, control impulsive behavior, assess risk and make long-term plans. Also important are the brain’s reward systems, which are especially excitable during adolescence. But these parts of the brain don’t stop growing at age 18. In fact, research shows that it can take more than 25 years for them to reach maturity.
The cerebellum also affects our cognitive maturity. But unlike the prefrontal cortex, the development of the cerebellum appears to depend largely on environment, as Dr. Jay Giedd, chair of child psychiatry at Rady Children’s Hospital-San Diego, told PBS:
“Identical twins’ cerebellums are no more alike than non-identical twins’. So we think this part of the brain is very susceptible to the environment. And interestingly, it’s a part of the brain that changes most during the teen years. This part of the brain has not finished growing well into the early 20s, even. The cerebellum used to be thought to be involved in the coordination of our muscles. So if your cerebellum is working well, you were graceful, a good dancer, a good athlete.
But we now know it’s also involved in coordination of our cognitive processes, our thinking processes. Just like one can be physically clumsy, one can be kind of mentally clumsy. And this ability to smooth out all the different intellectual processes to navigate the complicated social life of the teen and to get through these things smoothly and gracefully instead of lurching. . . seems to be a function of the cerebellum.”
The effects that our environment can have on the cerebellum further complicate the question of when a child becomes an adult, considering the answer might depend on the kind of childhood an individual experienced.
Adulthood and the criminal justice system
The factors behind cognitive development raise many philosophical questions. But the most important are arguably those related to how we punish criminals, especially young men, whose brains develop an average of two years later than women’s.
“The preponderance of young men engaging in these deadly, evil, and stupid acts of violence may be a result of brains that have yet to fully develop,” Howard Forman, an assistant professor of psychiatry at Albert Einstein College of Medicine, told Business Insider.
So, does that mean young criminals — say, 19- to 25-year-olds — should receive the same punishment as a 35-year-old who commits the same crime? Both criminals would still be guilty, but each might not necessarily deserve the same punishment, as Laurence Steinberg, a professor of psychology at Temple University, told Newsweek.
“It’s not about guilt or innocence… The question is, ‘How culpable are they, and how do we punish them?'”
After all, most countries have separate juvenile justice systems to deal with children who commit crimes. These separate systems are predicated on the idea that there ought to be a spectrum of culpability that accounts for a criminal’s age. So, if we assume that the importance of age in the eyes of the justice system is based largely on cognitive differences between children and adults, then why shouldn’t that culpability spectrum be modified to better match the science, which clearly shows that 18 is not the age at which the brain is fully matured?
Whatever the answer, society clearly needs some definition of adulthood to differentiate between children and adults and to function smoothly, as Jones suggested to the BBC.
“I guess systems like the education system, the health system and the legal system make it convenient for themselves by having definitions.”
But that doesn’t mean these definitions make sense outside of a legal context.
“What we’re really saying is that to have a definition of when you move from childhood to adulthood looks increasingly absurd,” he said. “It’s a much more nuanced transition that takes place over three decades.”
This article was originally published March 20, 2019. It was updated in January 2022.
|
A team of international scientists led by ETH researcher Paolo Sossi has gained new insights into Earth’s atmosphere of 4.5 billion years ago. Their results have implications for the possible origins of life on Earth.
Four-and-a-half billion years ago, Earth would have been hard to recognize. Instead of the forests, mountains, and oceans that we know today, the surface of our planet was covered entirely by magma – the molten rocky material that emerges when volcanoes erupt. This much the scientific community agrees on. What is less clear is what the atmosphere at the time was like. New international research efforts led by Paolo Sossi, senior research fellow at ETH Zurich and the NCCR PlanetS, attempt to lift some of the mysteries of Earth’s primeval atmosphere. The findings were published today in the journal Science Advances.
Making magma in the laboratory
“Four-and-a-half billion years ago, the magma constantly exchanged gases with the overlying atmosphere,” Sossi begins to explain. “The air and the magma influenced each other. So, you can learn about one from the other.”
To learn about Earth’s primeval atmosphere, which was very different from what it is today, the researchers therefore created their own magma in the laboratory. They did so by mixing a powder that matched the composition of Earth’s molten mantle and heating it. What sounds straightforward required the latest technological advances, as Sossi points out: “The composition of our mantle-like powder made it difficult to melt – we needed very high temperatures of around 2,000° Celsius.”
That required a special furnace, which was heated by a laser and within which the researchers could levitate the magma by letting streams of gas mixtures flow around it. These gas mixtures were plausible candidates for the primeval atmosphere that, 4.5 billion years ago, influenced the magma. Thus, with each mixture of gases that flowed around the sample, the magma turned out a little different.
“The key difference we looked for was how oxidized the iron within the magma became,” Sossi explains. In less scientific terms: how rusty. When iron meets oxygen, it oxidizes and turns into what we commonly refer to as rust. Thus, when the gas mixture that the scientists blew over their magma contained a lot of oxygen, the iron within the magma became more oxidized.
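As a simplified illustration of the chemistry involved (melt equilibria are more complex, and this particular reaction is not taken from the paper), ferrous iron oxidizing to ferric iron can be written as:

$$4\,\mathrm{FeO} + \mathrm{O_2} \rightarrow 2\,\mathrm{Fe_2O_3}$$

The more oxygen available in the overlying gas, the further this balance shifts to the right, which is what the iron-oxidation measurement picks up.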
This level of iron oxidation in the cooled-down magma gave Sossi and his colleagues something that they could compare to naturally occurring rocks that make up Earth’s mantle today – so-called peridotites. The iron oxidation in these rocks still has the influence of the primeval atmosphere imprinted within it. Comparing the natural peridotites and the ones from the lab therefore gave the scientists clues about which of their gas mixtures came closest to Earth’s primeval atmosphere.
A new view of the emergence of life
“What we found was that, after cooling down from the magma state, the young Earth had an atmosphere that was slightly oxidizing, with carbon dioxide as its main constituent, as well as nitrogen and some water,” Sossi reports. The surface pressure was also much higher, almost one hundred times that of today, and the atmosphere extended much higher, due to the hot surface. These characteristics made it more similar to the atmosphere of today’s Venus than to that of today’s Earth.
This result has two main conclusions, according to Sossi and his colleagues: The first is that Earth and Venus started out with quite similar atmospheres but the latter subsequently lost its water due to the closer proximity to the sun and the associated higher temperatures. Earth, however, kept its water, primarily in the form of oceans. These absorbed much of the CO2 from the air, thereby reducing the CO2 levels significantly.
The second conclusion is that a popular theory on the emergence of life on Earth now seems much less likely. This so-called “Miller–Urey experiment,” in which lightning strikes interact with certain gases (notably ammonia and methane) to create amino acids – the building blocks of life – would have been difficult to realize. The necessary gases were simply not sufficiently abundant.
Reference: “Redox state of Earth’s magma ocean and its Venus-like early atmosphere” by Paolo A. Sossi, Antony D. Burnham, James Badro, Antonio Lanzirotti, Matt Newville and Hugh St.C. O’Neill, 25 November 2020, Science Advances.
|
The ancient Egyptians mummified animals as well as humans, most commonly as votive offerings to the gods available for purchase by visitors to temples. Many of those mummified remains have survived but are in such a fragile state that researchers are loath to disturb the remains to learn more about them. Now an inter-disciplinary team of scientists has managed to digitally "unwrap" three specimens—a mummified cat, bird, and snake—using a high-resolution 3D X-ray imaging technique, essentially enabling them to conduct a virtual postmortem, according to a new paper published in the journal Scientific Reports.
Studying fragile ancient artifacts with cutting-edge imaging technology confers a powerful advantage on archaeological analysis. For instance, in 2016, an international team of scientists developed a method for "virtually unrolling" a badly damaged ancient scroll found on the western shore of the Dead Sea, revealing the first few verses from the book of Leviticus. The so-called En Gedi scroll was recovered from the ark of an ancient synagogue destroyed by fire around 600 CE.
In 2019, we reported that German scientists used a combination of cutting-edge physics techniques to virtually "unfold" an ancient Egyptian papyrus, part of an extensive collection housed in the Berlin Egyptian Museum. Their analysis revealed that a seemingly blank patch on the papyrus actually contained characters written in what had become "invisible ink" after centuries of exposure to light. And earlier this year, we reported that scientists had used multispectral imaging on four supposedly blank Dead Sea Scrolls and found the scrolls contained hidden text, most likely a passage from the book of Ezekiel.
Now scientists are applying advanced imaging methods to the study of mummified remains. Early techniques, dating as far back as the 1800s, were quite intrusive, usually involving the unwrapping of the mummy to study the bones and any artifacts wrapped up with the remains. They could often yield insight into wrapping techniques and the mummification process (thanks to chemical analysis), but they also resulted in damage to, or destruction of, the remains. These days, non-invasive techniques are heavily favored, such as polarized light microscopy, conventional radiography, and medical X-ray computed tomography (CT).
However, while the latter is an improvement over 2D radiography in terms of capturing volumetric (3D) qualities, medical CT also has lower resolution. Micro CT brings high-resolution capability to 3D images. It involves combining several radiographs to build a "tomogram" (3D volume image), which can then be 3D printed or analyzed in a virtual reality setting. The technique is commonly used by scientists for imaging the internal structure of materials at the microscale. Previously, micro CT had been used to image a mummified falcon, enabling researchers to determine its likely last meal, and to image a mummified severed human hand.
"Using micro CT we can effectively carry out a postmortem on these animals, more than 2,000 years after they died in ancient Egypt," said co-author Richard Johnston of Swansea University. "With a resolution up to 100 times higher than a medical CT scan, we were able to piece together new evidence of how they lived and died, revealing the conditions they were kept in, and possible causes of death. Our work shows how the hi-tech tools of today can shed new light on the distant past."
Swansea University maintains a collection of mummified specimens, and the team selected three animal remains that varied in both size and shape: a cat, a bird, and a snake. The cat head was decorated with a painted burial mask and was wrapped separately from the body, likely after mummification. The bird of prey mummy was intact except for a severed leg protruding from the bottom of the wrapping. The mummified remains of the snake were not definitively identified as a snake until a 2009 radiograph, courtesy of a local veterinary clinic, revealed it to be coiled up inside.
Using micro CT, the team was able to image the remains in extraordinary detail, including small bones and teeth, desiccated soft tissues, and mummification materials. For instance, the researchers were able to determine that the mummified cat (likely the domestic Felis catus species) was actually a kitten, roughly five months old, after identifying teeth within the jaw bone that hadn't yet emerged. Examination of the cat's body showed an unfused distal epiphysis, further evidence of its young age. They even determined a likely cause of death: the separation of the vertebrae was consistent with strangulation.
The bird likely belonged to the Eurasian kestrel family, based on virtual bone measurements. As for the snake, it was identified as a mummified young Egyptian Cobra. There was evidence of kidney damage (calcification in particular) and gout, suggesting that the animal had not been kept in very good conditions and likely didn't get enough water. There were multiple bone fractures, consistent with the snake being killed by a strong whipping motion while being held by the tail.
There was also evidence of the mouth opening being filled with resin (most likely natron) to render the snake harmless. Alternatively, the authors suggest that the material may have been placed in the mouth as part of an "opening of the mouth" mummification procedure. "The latter is supported by the fact that the snake's jaw is wide open, an unlikely final position without some intervention to prize open and maintain separation of the upper and lower jaws," the authors wrote. "There is also clear trauma to the jaw bones and teeth, which has been observed in human mummies that have undergone the opening of the mouth procedure."
If so, this would be the first evidence for this practice being applied to a snake, although historical texts suggest a similar practice for the Apis bull, involving placing myrrh and natron under the tongue to slow down decomposition. In short, said co-author Carolyn Graves-Brown from the Egypt Centre at Swansea University, "Our findings have uncovered new insights into animal mummification, religion, and human-animal relationships in ancient Egypt."
|
hieroglyphic (hĪˌrəglĭfˈĭk, hĪˌərə–) [Gr., = priestly carving], type of writing used in ancient Egypt. Similar pictographic styles of Crete, Asia Minor, and Central America and Mexico are also called hieroglyphics (see Minoan civilization; Anatolian languages; Maya; Aztec). Interpretation of Egyptian hieroglyphics, begun by Jean-François Champollion, is virtually complete; the other hieroglyphics are only very imperfectly understood. The distinguishing feature of hieroglyphics is that they are conventionalized pictures used chiefly to represent meanings that seem arbitrary and are seldom obvious. Egyptian hieroglyphics appear in several stages: the first dynasty (3110–2884 B.C.), when they were already perfected; the Old Kingdom; the Middle Kingdom, when they were beginning to go out of use; the New Empire, when they were no longer well understood by the scribes; and the late hieroglyphics (from 500 B.C.), when the use of them was a tour de force. With a basic number of 604 symbols, hieroglyphics were written in several directions, including top to bottom, but usually from right to left with the pictographs facing the beginning of the line.
There were in general three uses to which a given hieroglyphic might be put (though very few were used for all three purposes): as an ideogram, as when a sign resembling a man meant "man" or a closely connected idea (thus a man carrying something meant "carrying"); as a phonogram, as when an owl represented the sound m, because the word for owl had m for its principal consonant; or as a determinative, an unpronounced symbol placed after an ambiguous sign to indicate its classification (e.g., an eye to indicate that the preceding word has to do with looking or seeing). As hieroglyphic developed, most words came to require determinatives. The phonograms were, of course, the controlling factor in the progress of hieroglyphic writing, because of the fundamental convenience of an alphabet.
In the Middle Kingdom a developed cursive, the hieratic, was extensively used for private documents where writing speed was essential. In the last centuries B.C. a more developed style, the demotic, supplanted the hieratic. Where the origin of most hieratic characters could be plainly seen in the hieroglyphics, the demotics were too conventionalized to bear any resemblance to the hieroglyphics from which they had sprung.
See A. H. Gardiner, Egyptian Grammar (3d ed. 1957); N. Davies, Picture Writing in Ancient Egypt (1958); E. A. Budge, Egyptian Language (8th ed. 1966); H. G. Fischer, Ancient Egyptian Calligraphy (1983); W. V. Davies, Egyptian Hieroglyphics (1988).
|
Cornwallis moved to the 13 colonies in North America in 1776 to try to control the rebelling colonies. He fought at the Battle of Princeton, where George Washington led the Americans. Later, Cornwallis led British forces through North Carolina and South Carolina, where he fought against American forces under Nathanael Greene. In October 1781, Cornwallis's forces surrendered to George Washington at the Battle of Yorktown, effectively ending the American Revolution.
From 1798 to 1801 Cornwallis was Lord Lieutenant and Commander in Chief of Ireland.
|
Understanding the condition
Children with Cystic Fibrosis are born with the condition; it is not something you can develop later in life. Cystic Fibrosis is caused by two faulty or “mutated” CF genes. In their healthy form, these genes control the movement of salt and water in and out of cells. Children with the faulty genes that cause Cystic Fibrosis experience a build-up of thick sticky mucus in their lungs, digestive system and other organs. This can cause a wide range of additional difficulties that affect a child’s entire body.
Cystic Fibrosis is an inherited condition that occurs only when both parents carry the faulty gene. However, parents may not be aware that they are carriers, and even when both are, it is not guaranteed that they will have a child with Cystic Fibrosis.
“One in 25 people carry the CF gene. For someone to be born with CF, both parents must carry the faulty gene. If both parents have the gene, there is a 25% chance the child will have CF. If both parents carry the gene there is also a 50% chance of the child being a gene carrier but not having CF and a 25% chance, they will not have the CF gene.”
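A minimal sketch of the Punnett square behind the quoted figures, written in Python (the genotype labels are mine: "C" for a normal copy of the gene, "f" for the faulty copy):

```python
# Two carrier parents ("Cf") each pass one gene copy at random to the child.
from itertools import product
from collections import Counter

def classify(genotype):
    if genotype.count("f") == 2:
        return "has CF"
    if "f" in genotype:
        return "carrier, no CF"
    return "not a carrier"

parent1, parent2 = "Cf", "Cf"
outcomes = Counter(classify(a + b) for a, b in product(parent1, parent2))
for outcome, count in outcomes.items():
    print(f"{outcome}: {count}/4 = {count / 4:.0%}")
# has CF: 1/4 = 25%, carrier: 2/4 = 50%, not a carrier: 1/4 = 25%
```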
The additional conditions experienced by children with CF are generally physical in nature, such as poor lung function, frequent and persistent lung infections and the inability to effectively digest food, particularly fats, as well as complications such as CF-related diabetes, bone disease and infertility. However, due to the nature of the condition, the child may have above-average absence from school to attend appointments and treatment for lung infections, which can lead to the child experiencing low self-esteem, reduced self-confidence and, in some cases, mental health difficulties such as anxiety, depression and self-harm.
Challenges faced by students
As a teacher, it is important to be aware that certain topics in school may cause distress to children and young people with CF. It is important to meet with the child/young person and their family/carers ahead of a lesson to discuss what you intend to cover, and to check with the family/carers that the child is aware of the impact CF has on their life. For example, if you are covering life expectancy, does the child know that their life is limited? When covering fertility in Biology, does the child know that infertility is a major difficulty for individuals with CF?
A child/ young person with CF may experience loose stools, a frequent need to go to the toilet and stomach aches. Supporting them to have access to a toilet when needed with sensitivity and without raising attention to other students will enable them not to feel “different” to other children or feel worried about leaving class to go to the toilet.
In school, with “difference” and a lack of understanding spurring some children to bully, it is important to be observant and to support the child and their family/carers to talk with you about whether they want to share the condition with the class and, if so, how.
To find out more about how to support a student in your class, please visit: www.cysticfibrosis.org.uk
Cystic Fibrosis UK have a range of resources from Me and CF Fact Sheet for students to use, a template health plan to be used in schools, short video clips and information on how to support other students in your class to understand the condition.
|
As speech therapists, we spend much of our time talking about, thinking about and using words. Vocabulary is so important when encouraging a child’s language skills; whether you are waiting for those words, or trying to improve sentences for SATS test, a good vocabulary is key.
Verbs are one of the main building blocks in a sentence. They are the words that tell us what people are doing. They allow you to go from using only nouns to being able to make short sentences to comment and ask questions. As your language expands, a good range of verbs allows you to talk about what people are doing. For example, the man isn’t eating his dinner, he is devouring it.
So whatever level your child is currently at, here are some ideas on how to help encourage verbs.
For younger children:-
- Before a child can say a word, they need to understand it first. To make sure that your child understands a range of verbs, try singing action songs like “If you’re happy and you know it clap your hands” (or brush your hair, or stamp, etc.) or “This is the way we drink our juice, wash our hands”, etc. (you can sing this to the tune of ‘Here we go round the mulberry bush’). When your child has practised the song with you, try stopping when you get to the verb and see if they can fill in the gap.
- Talk about what you are doing e.g. Daddy reading, Mummy walking etc. You could encourage your child to go and see what other people are doing, or look in books and talk about what the characters are doing. If they find it hard to give you the right word, you could give them a choice e.g. “Is she eating or sleeping?”
- When you are playing, talk about what the toys are doing: “Thomas is stopping, now he’s ready to go” or “look, Teddy’s sleeping”. You can narrate what your child is doing whilst they play. Hopefully they will join in! At first, concentrate on 2 or 3 new verbs at a time. It may feel repetitive, but it will help your child learn.
- Play Simon Says type games giving simple instructions for your child to follow e.g. “Simon Says walk?”, “Simon Says jump?” etc. If your child does not respond, or does the wrong action, show them what to do “Look, I’m jumping, can you jump?” Begin by repeating the same few verbs, then introduce more. Let your child have a turn at giving the instructions too; all children love playing teacher!
- If your child continues to have difficulty with verbs, choose 2-3 simple actions and focus on them for a few weeks. Try to use them as often as you can in everyday life and play situations.
For older children:-
- Pick a basic verb and see how many other verbs you can think of that mean nearly the same thing. For example for ‘eat’ you could have ‘consume’, ‘devour’, ‘nibble’ or ‘snack’. Having a good range of vocabulary is critical for making progress within the national curriculum. If appropriate, you/ your child could look up similar words in a dictionary or look online. Talk about the words to ensure your child understands them.
- Make a basic sentence and try and change the verb – see how many different verbs will fit. For example “They …….. the food”, you could have ate, cooked, prepared, chopped, stirred, bought etc.
- Another extension of this activity is to choose an item and think of how many things you can do with it/to it. For example, if you picked a book, you can open it, read it, shut it, carry it, look at it, throw it (although this may not be a good idea!). This can also allow for discussion of harder vocabulary; you can devour a book, but does this mean you have eaten it?
So, whatever your age, practising verbs is important; whether you are just starting to talk and link words, or whether you need to improve your vocabulary.
|
What Is Gastroesophageal Reflux Disease (GERD)?
Gastroesophageal reflux disease or GERD is a term to describe the common experience of acid reflux in an individual. Acid reflux typically refers to a single occurrence or instance of acid backflow from the stomach into the esophagus, described as heartburn.
Someone is usually diagnosed with GERD when mild acid reflux happens at least twice a week, or moderate to severe acid reflux happens at least once a week. GERD can occur at any age, but typically begins around age 40.
If left untreated, patients can eventually develop Barrett’s esophagus.
What Causes GERD?
When you swallow food, it passes through the esophagus and past the lower esophageal sphincter (LES) into the stomach. When the LES becomes compromised, it can weaken and allow stomach acid to flow back into the esophagus. There usually is not a single cause that leads to this happening often, but you are more likely to have or develop GERD if you meet one or more of the following criteria:
- Hiatal Hernia – when the upper part of the stomach bulges through the diaphragm
- Scleroderma – connective tissue disorder
- Eat large meals late at night
- Eat spicy foods
- Eat raw onion or garlic
- Lie down often after eating
- Drink coffee
What Are The Major Symptoms Of GERD?
The major symptoms of GERD are similar to acid reflux but happen more often. Those symptoms include:
- Chest pain
- Regurgitation of food or sour liquid
- Lump in your throat sensation
- Unexplained weight loss
- Chronic cough
- Disrupted sleep
You should make an appointment with your gastroenterologist today if you experience any of these symptoms frequently and are in pain, or if you take over the counter heartburn medication more than twice a week.
What Are The Available Treatments For GERD?
Treatments to prevent or relieve GERD include:
- Avoid the foods and beverages prone to cause acid reflux (seen in above lists)
- Eat in moderation and slowly
- Stay up and stand up after eating
- Don’t eat at least 2 hours before going to bed
- Sleep on an incline
- Quit smoking
- Lose weight
- Tell your gastroenterologist about the current medications you are taking.
- Limit your coffee/caffeine intake
- Over the counter (OTC) antacids
- Prescription strength antacids (H-2 receptor blockers)
- Medication to strengthen the LES
- Fundoplication – surgery wrapping your stomach around the LES
- LINX device – magnetic beads wrapped around the junction of the stomach and esophagus.
|
Compare Fractions or Decimals - CCSS 4.NF.A.2, 5.NF.B.5.A, 6.NS.C.7.D, 7.NS.A.2.D, 7th Grade Mathematical Practices
Links verified on 7/2/2014
- Any Fractions Method Game - Match the percentages and fractions by dragging numbers onto the blackboard.
- Comparing Decimal Numbers - Determine if the number is greater than, less than, or the same.
- Comparing Fractions Quiz - Choose a level and test your knowledge on fractions.
- Comparing Fractions Side by Side Game - Compare the fractions to see if one is larger or if they are the same.
- Comparing Integers - Comparing integers with absolute values.
- Comparing Percentages and Fractions Game - Match the percentages and fractions graphics.
- Computation Castle - A game that requires the utilization of several math skills: mixed numbers/improper fractions, equivalent fractions, metric conversions, exponents, rounding to the nearest thousands and thousandths and place value.
- Decimal Number Lines - Three whiteboard resources to assist in the teaching and learning of decimals; zero to one, zero to ten, and a decimal number line.
- Dolphin Racing - Find the fractions needed to power your dolphin to the finish line.
- Fraction Ordering Game - Put the fractions in the correct order.
- Fraction Sorter - Interactive site posted by Shodor. Practice comparing fractions.
- Ordering Decimal Numbers - Click and drag the numbers to put them in order from least to greatest.
- Ordering Decimal Numbers II - Click and drag the numbers to put them in order from least to greatest.
- Percent with a Calculator - Work the problems, choose your answer and then run your mouse over the colored blocks to see if you were correct.
- Simplifying Fractions Game - Compare 2 fractions to see if one is larger or if they are the same.
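For anyone wanting to see the underlying skill outside the games, here is a minimal Python sketch that compares fractions exactly (the standard library's fractions module puts them over a common denominator internally); the example fractions are arbitrary:

```python
# Compare fractions exactly rather than via possibly-rounded decimals.
from fractions import Fraction

pairs = [(Fraction(3, 4), Fraction(5, 7)),
         (Fraction(2, 6), Fraction(1, 3))]

for a, b in pairs:
    if a > b:
        print(f"{a} > {b}")   # 3/4 = 21/28 vs 5/7 = 20/28
    elif a < b:
        print(f"{a} < {b}")
    else:
        print(f"{a} = {b}")   # Fraction(2, 6) reduces to 1/3, so these match
```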
|
Today’s STEM Club was a lot of fun; we made edible models of plant cells and, though we got a little sticky, everyone was able to summarize the function of the cell organelles. After we cleaned up, the kids were given the opportunity to make their own slides (onion skin and cheek cells) so they could observe real cells under a microscope. At home, my own kiddos spent a little more time with the microscope, sketching and labeling the organelles they could identify under magnification. To aid in remembering the function of cell organelles, the kids also enjoyed creating an interactive tool for their science notebooks.
- All organisms are composed of one or more cells.
- Cells carry out important life functions including taking in nutrients and releasing materials, obtaining energy, and growing.
All living beings are made up of cells. Some of them are made up of only one cell and others have many cells. Cells got their name from an Englishman named Robert Hooke in the year 1665. He first saw and named “cells” while he was experimenting with a new instrument we now call a “microscope.” For his experiment he cut very thin slices from cork. He looked at these slices under a microscope. He saw tiny box-like shapes. These tiny boxes reminded him of the plain small rooms that monks lived in called “cells”.
Look around at your house and the houses around you. They are made from smaller building materials such as wood, bricks and cement. So are the cars in the street and the bike you ride. In fact, everything is made from building blocks including living things. If you take a look at your home you will notice it is enclosed by outer walls. All cells are enclosed within something called a plasma membrane (sometimes called the cell membrane). The plasma membrane is not exactly the same thing as the wall in your house, but it does hold parts of a cell inside. These parts of the cell are what biologists call organelles (Latin for little organs).
By the help of microscopes, there is nothing so small as to escape our inquiry; hence there is a new visible world discovered to the understanding. ~ Robert Hooke
If you look at very simple organisms, you will discover cells that have no defined nucleus (prokaryotes), others that have a nucleus and many organelles (eukaryotes), and cells that have hundreds of nuclei (multi-nucleated). Humans may have hundreds of types of cells. The thing all cells have in common is that they are compartments surrounded by some type of membrane. The main purpose of a cell is to organize. Cells hold a variety of pieces and each cell has a different set of functions that are unique to each type of organism.
Cells are amazing. They are all made of similar building blocks but they do many different things depending on how they are programmed. Some cells carry oxygen to parts of our body. Other cells defend against invading bacteria and viruses. Some cells are used to carry oxygen through the blood (red blood cells) and others transmit signals throughout the body, like the signals from your hand to your brain when you touch something hot. Some cells can even convert the sun’s energy into food (photosynthesis). There are hundreds of jobs that cells can do. Cells also make other cells in a process called cell division – something other building blocks cannot do.
Plant cells are easy to identify because they have a protective structure called a cell wall made of cellulose. Plants have the wall; animals do not. Plants also have cell organelles like the chloroplast (which contain green pigments or chlorophylls where photosynthesis takes place, giving plants their green color) and large water-filled vacuoles.
As the kids worked together to create an edible model of a plant cell, I distributed a handout that defined the role of the organelles in detail. I also compared the cell organelles to a factory, giving a real-life analogy for each cell function. These descriptions were easier for the kids to remember. As they worked, I walked around and ‘quizzed’ everyone on the role each organelle played in the cell. At the conclusion of the class, I distributed a diagram that labeled the cell parts and included small flip books. They were instructed to cut out and glue each into their notebook, numbering each to correspond to the diagram and handout. If you would like these handouts, you may download them for free simply by subscribing. These printables will be included in the next Science Logic curriculum unit to be released soon.
- Observe onion cell and cheek cells under a microscope and sketch the cells you observe in your notebook.
- Create a model or poster of an animal cell.
- For a challenge, you may wish to try out this fun DNA extraction lab: http://ucbiotech.org/resources/display/files/dna_extraction_from_strawberrie.pdf
- Research cell division (mitosis and meiosis) and create a flip book to illustrate the stages of cell division.
- Here are a few great websites to allow further exploration of cells:
To receive the free printables like the one shown here … Subscribe to My Newsletter
|
What makes information accessible
- A Digital Information Technology (DIT) Document is accessible if the content (text, images, controls and other included objects) can be presented by assistive technology to make it usable by all identified populations with special needs.
- Well-structured HTML files can be converted by a screen reader to voice or braille output for people with blindness. Thus properly structured HTML is accessible.
- Voice commands are sent to voice recognition software for people who cannot use a mouse or keyboard. The voice is converted to text that looks like keyboard input. An accessible webpage can recognize this input and operate appropriately.
Special Case for Reading: An accessible DIT document that includes textual information meant to be read by people can always be read by assistive technology that recognizes every character of text with 100% accuracy.
Images of text fail this test. No OCR program can recognize text with 100% accuracy.
The key word is can. Accessible content can be modified to serve many different groups.
Making Accessible Information Happen
How Do You Know You Have Covered Everyone?
How do stakeholders communicate?
Accessible DIT Support Ecology
- Disability Triad
- End User with Disability - you and me
- Support System - American Council of the Blind, US Access Board, National Library for the Blind and Physically Handicapped
- DIT Producers - Google, California State University, SoCal Edison
- Diagnosis - Retinopathy (CONGENITAL TOXOPLASMIC CHORIORETINITIS)
- Functional Needs (Reading Support, Transportation)
- Point of View
- I have this diagnosis.
- I cannot do certain essential functions.
- How do I work and live?
- Standards and Law
- Direct Service Accommodation
- Assistive Technology
- Point of View
- How do I map user needs to DIT requirements so that everyone gets served?
- What direct services are needed when technology breaks down?
- How do I ensure that assistive technology talks to DIT?
- What must end users and producers know to reach harmony?
- Equally Effective Functionality
- Barrier removal and prevention
- Product Flexibility / Robustness
- Point of View
- How is this done?
- How much will it cost?
- What must I do?
- What should I do?
Functional Needs: The Basis of Communication for the Triad
Specific Functional Requirements for DIT Document Accessibility
- Provide text alternatives for non-text content.
- Provide captions and other alternatives for multimedia.
- Create content that can be presented in different ways, including by assistive technologies, without losing meaning.
- Make it easier for users to see and hear content.
- Make all functionality available from a keyboard.
- Give users enough time to read and use content.
- Do not use content that causes seizures.
- Help users navigate and find content.
- Make text readable and understandable.
- Make content appear and operate in predictable ways.
- Help users avoid and correct mistakes.
- Maximize compatibility with current and future user tools.
Note: The above rules are taken from: WCAG 2.0 At a Glance, verbatim.
And these are the WCAG 2.0 Guidelines. The functional requirements are partitioned by 4 Principles: Perceivable, Operable, Understandable and Robust (POUR). Beneath the principles there are functional needs.
To complete the Guidelines we have Success Criteria. Each success criterion identifies a barrier that must be removed to allow the functionality of the guideline.
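To make one of these functional requirements concrete, take "provide text alternatives for non-text content": here is a minimal Python sketch of an automated check for images missing alt text. It only illustrates the idea; real audits use full-featured accessibility tools.

```python
# Scan HTML for <img> tags that lack an alt attribute (one WCAG check).
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.missing.append(attributes.get("src", "<unknown src>"))

checker = MissingAltChecker()
checker.feed('<img src="chart.png"><img src="logo.png" alt="Company logo">')
print("Images missing alt text:", checker.missing)  # ['chart.png']
```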
The essential bottleneck was the interpretation of the flexible-data guideline. In addition, the guidelines for making it easier to see were too specific and did not account for the range of experience of people with low vision.
The original drafters of the guidelines did not see the need for visual flexibility. They could not see the difference between reading a document where the words wrap nicely and one where lines run off the page when text is resized or enlarged.
The guideline authors could understand the need for data to be flexible enough to be converted to text, but not to alternative customized text.
We are justified in asking for this because the technology to do it already exists for cell phones.
History of Policy and Law for DIT Accessibility
- The Rehabilitation Act of 1973 established rights for public entities and their contractors
- Section 502 of the Rehabilitation act creates the US Access Board.
- Section 504 establishes equal access to the government for people with disabilities
- Talking Books became available for all people with disabilities in 1974.
- The ADA extended the Rehabilitation Act to the Private Sector
- The World Wide Web is created.
- The World Wide Web Consortium (W3C) founded with the Web Accessibility Initiative (WAI) 1995
- All of these events occurred in parallel in the 1998-2001 period.
- Cynthia D. Waddell develops the first municipal accessibility policy for DIT.
- The Web Content Accessibility Guidelines (WCAG 1.0) establish guidelines for web content.
- The first Section 508 standards give federal rules for all DIT.
- The iPhone is released in 2007.
- WCAG 2.0 was approved in 2008
- Responsive Web Design makes mobile websites accessible to normal readers
- The 508 Refresh is up for approval now.
|
This week we are learning about the properties of 2D shapes. In your homework book, draw a picture using as many 2D shapes as possible. Label the shapes you have used. Purple and green groups, please write some sentences to tell me about the properties of the shapes you have used.
|
WHAT DARWIN GOT RIGHT
NATURAL SELECTION (HORIZONTAL EVOLUTION)
Our chemistry lecturer announced his dismay that the 1961 Nobel Prize in Chemistry was given to Melvin Calvin for discovering the C3 cycle in plants, the distinctive biochemical pathway by which carbon dioxide from air is converted to sugars in broad-leaved plants. In his opinion, it should have been awarded to Andrew Benson and James Bassham who, at the very least, should have shared the prize. Yet the C3 cycle for photosynthesis has always been associated with Calvin and not Benson nor Bassham. Similarly, the fame for the ‘Origin of Species’ by natural selection should be shared with Russell Wallace, yet the theory is always associated with Charles Darwin.
Both Wallace and Darwin were meticulous naturalists, documenting everything they observed on their travels and collecting numerous specimens for later study. While Darwin travelled around the Americas and the Galapagos, Wallace spent his time in the Amazon basin and on the numerous islands of the Malay Archipelago. Both men made astounding discoveries and independently came to the same conclusion: that the influences of nature could have the same effect on diversity (speciation to evolutionists) as animal and plant breeders had produced in developing new breeds and cultivars. In fact, there is no difference between natural selection and artificial selection except that, in the former, the selection pressure is applied by the environment whereas, in the latter case, the selection pressure is applied by the breeder.
From a genetic point of view there is absolutely no difference. In artificial breeding or artificial selection (equivalent to horizontal evolution, where no new genes are created), or intensive domestication, the breeder selects which variant in the litter should survive and be kept for breeding future populations. In the case of natural selection, taking Darwin’s finches on the 19 islands of the Galapagos as an example, it was the type of food available on different islands that determined which finches were to dominate the population according to beak size and strength. The finches originally emigrated from South America. If an island was populated with hard-seeded plants, then obviously fledglings with weaker beaks would not be able to compete successfully and survive.
Thus, both natural selection and artificial selection are cases of 'Horizontal Evolution' where no new information is created. Instead, it usually involves a loss of genetic information. The difference between 'Horizontal' and 'Vertical' was discussed in the previous article.
We are all familiar with other examples, such as the strongest stag or the most powerful lion that will pass its genes on to the next generation. Prize bulls are kept because they have the most desired and most fertile semen. The same works for race horses where vast sums are paid for mares to be sired by cup winners. In actual fact, both Darwin and Wallace knew about all of this because both were British and the British were mad keen on pigeon and dog breeding at the time. It only required a little spark and plenty of field observations to confirm that nature provided the selection pressure as to which progeny were to survive into future generations. Natural selection is what has been referred to as the 'survival of the fittest'. Because many individuals are eliminated the gene pool of the population becomes poorer. Genes suited for different environments are now missing which could have been advantageous if conditions were to change. Natural selection leads to an impoverishment of available genes. Natural Selection destroys genetic information.
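As an illustration of that last claim, here is a minimal simulation sketch (all numbers invented): a varied population of beak sizes is filtered by a hard-seed environment, and the spread of the surviving population shrinks.

```python
# Selection narrows a trait distribution: keep only birds able to crack
# hard seeds and compare the spread before and after.
import random
import statistics

random.seed(1)
population = [random.uniform(6.0, 14.0) for _ in range(1000)]  # beak sizes, mm

survivors = [beak for beak in population if beak >= 10.0]  # hard-seed island

print("before:", round(statistics.stdev(population), 2), "mm spread")
print("after: ", round(statistics.stdev(survivors), 2), "mm spread")
# The surviving population is less varied than the original one.
```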
Charles Darwin was well aware of what breeders could achieve over very short times with pigeons, chickens and dogs. It was just a matter of genetic manipulation by selecting progeny for further breeding according to taste - a preference for appearance, meat, hunting, cock fighting, racing or egg-laying; these were the selection pressures applied by man. Nature can do the same, especially in years of drought and famine. In the latter case it is called Natural Selection. Natural Selection follows exactly the same genetic laws as observed in a breeder’s pen. It has nothing to do with the evolution of a new kind, unless you choose to give the progeny a new species name, as some scientists do for no scientific reason at all.
WHAT DARWIN GOT WRONG
Basing his belief on the diversity he observed amongst finches and other species, Darwin assumed that, as more effort was devoted to natural history, geologists would discover fossils demonstrating how one kind of animal or plant slowly morphed into another. He was not aware of any skeletal remains which would demonstrate this, but he had faith that one day the evidence would be found.
Time has proven that the so-called transitional fossils required for the gradual descent of species by Darwinian evolution will not be forthcoming. More than fifty years ago botanists asked themselves why there is a sudden appearance of flowering plants in the geologic column; their supposed precursors in deeper sedimentary layers were missing. Much more recently, fossilized butterflies were found earlier in the fossil record than flowering plants, complete with the typical long proboscis for sucking up nectar from deep ‘cavities’, which coils back towards the mouth during flight:
‘Exquisite wing fossils reveal the world's first butterflies appeared 200 million years ago, long BEFORE there were flowers on Earth to pollinate
- Moths and butterflies were thought to have evolved alongside flowers
- However, the world's first flowers sprouted around 140 million years ago
- This suggests moths and butterflies emerged before flowers appeared
- Scientists say they must have developed coiled mouthparts for a purpose other than feeding on nectar’
This is just another example of evolutionists having to scramble for alternate answers in the face of reality. Just as they were red-faced when the coelacanth, the supposedly extinct lobe-finned fish thought to be a vital transitional link between marine and terrestrial species, was found alive and well in deep waters off the coast of South Africa and off Sulawesi in Indonesia. It was thought to have become extinct along with the dinosaurs.
There is also no explanation of why, when such creatures first appear in the fossil record, they were hardly different from their modern counterparts except in one thing - they used to be much bigger then. Generally, this is also the case with echo-locating bats, dragonflies, spiders, ants and snails. They seem to have devolved instead of evolved. There is no clue as to what they might have evolved from.
The Australian platypus also has evolutionists stumped. It has both bird-like and mammal-like features: the bill of a duck, egg-laying, webbed feet, fur like a mammal, a pouch like a marsupial, a tail like a beaver, a poisonous barb, and sensitive hairs that can detect vibrations and electrical signals under water to find prey in the sediments.
Therefore, in terms of phylogenetic trees, the ancestry of life is interpreted differently by evolutionists and creationists. Evolution theory traces everything back to the first living ‘blobs’ or single cells from which everything eventually developed over eons of time (Vertical Evolution). Creationists think of phylogenetic trees more like trees in an orchard, with the stump of each tree representing one of the basic kinds that God created right from the start. The same can be said of dogs, which have since diversified through natural or artificial selection. The Bible does not tell us whether there were only two dogs on the ark, or two dogs and two wolves, etc. The important principle is that the trunk of each tree already contains all the genetic material for the resulting diversification that occurred (Horizontal evolution or change). The trunk for dogs would not have included genes for wings, fins for swimming, echo-location, or metamorphosis as happens in caterpillars, etc. The two types of trees are illustrated below. The first is a typical evolutionary tree constructed on the basis of gene similarities, much like those earlier constructed on the basis of morphological (form) similarities. The second is the orchard model, where there are many trees, one for each basic kind, as the Scripture implies:
‘And God said, Let the earth bring forth the living creature after its kind, cattle, and creepers, and its beasts of the earth after its kind; and it was so. And God made the beasts of the earth after their kind, cattle after their kind, and all creepers upon the earth after their kind. And God saw that it was good.’ (Genesis 1:24-25). ‘All flesh is not the same flesh, but one kind of flesh of men, and another flesh of beasts, and another of fish, and another of birds.’ (1 Corinthians 15:39).
We can agree with the apostle Paul that the flesh of one kind is different from that of other kinds. Beef, lamb, chicken, pork, emu, kangaroo, horse, crocodile, lobster and fish readily spring to mind. Other, more exotic, edible ones could include bees, ants, locusts, witchetty grubs and snake. The difference in the flesh of fruits, nuts, cereals and vegetables is also well known.
The orchard model for creation and subsequent diversity
SCHOOL AND UNIVERSITY TEXT BOOKS
In pre-industrial England, white and dark coloured moths could be found in more or less equal numbers. But when soot began to cover all surfaces in towns and villages during the industrial revolution, birds found that the pale coloured moths were easy pickings against the dark background. As a result, the dark moths began to heavily outnumber the pale moths, so that mating was now more frequent between dark couples. When the towns were cleaned up, light-coloured moths had a better chance of camouflage than dark moths, hence the population began to return to a more equal balance between the dark and light-coloured moths, according to the known genetic laws that govern colour in moths.
Text-books, in my day, fooled students by citing the above case of the Peppered Moth as proof of Darwinian evolution - i.e. the development of new kinds of plants and animals by slow and gradual descent. The idea that took hold of minds, and is still with us today, is that eventually a wingless creature could develop wings given sufficient time, forgetting that totally new sets of genes would be required for the job; in particular, a totally new set of genes with predetermined codes for wings already entered on the DNA of the egg and sperm cells.
Thus, though based on factual cases of natural selection which strictly follow known genetic laws, the authors would then appeal to one's imagination: given enough time, say a couple of million years, wingless creatures could develop wings to give them a better chance to escape from ground predators.
In later editions, in order to find a purely scientific solution to the enigma of the origin of complex organisms that would exclude any need for God, the story of evolution was presented as fact, and it has been perpetuated as fact in the minds of the public for decades. The dotted lines proposed for the evolution of one kind of animal or plant to another kind disappeared from biology textbooks; the dotted lines became solid arrows, deceiving young minds that evolution was indeed a fact. Terminologies were also changed in lecture halls and classrooms. It was no longer the Theory of Evolution, but Evolution. The theory became fact in the minds of lecturers and students alike. This thinking next infected the media.
Once I became aware of this, another common phenomenon in the halls of learning stood out like a sore thumb. At both conferences and in-house seminars, speakers frequently threw the word 'evolution' into their presentations even when the subject matter had nothing to do with evolution. Evolution became acceptable jargon, as though it could explain anything at all. Have you ever noticed on TV how often the phrases ‘millions of years’ or ‘evolution’ appear in travelogues? They are just casually thrown in to embellish the monologue of the presenter. It sounds good because people have become accustomed to hearing it, and even voice it in their daily conversations.
We can summarize Natural Selection, without contravening any scientific laws, by saying that it is a simple matter of the environment determining which variants in a population are best suited to it, and allowing their weaker competitors to die out together with whatever unique genes they might have had. Thus the population becomes genetically impoverished, since the lost gene combinations may have had superior qualities in a different environment. The simplest examples of how a shift in the average composition of a population can occur are the case of the Peppered Moth in England and the dominance of strong-beaked finches on islands where the vegetation is primarily hard-seeded.
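The filtering just described is mechanical enough to be sketched in a few lines of code. The following toy simulation is an illustration only, assuming a made-up population whose single 'beak strength' value is inherited with slight variation; the numbers and threshold are invented, not drawn from any real data:

```python
import random

# Toy model of selection as a filter: an environment that only lets
# strong-beaked birds feed removes weak-beak variants from the population.
random.seed(42)

population = [random.uniform(0.0, 1.0) for _ in range(1000)]  # initial variation
HARD_SEED_THRESHOLD = 0.5  # only beaks at least this strong can crack hard seeds

for generation in range(20):
    # The environment eliminates birds whose beaks are too weak to feed.
    survivors = [beak for beak in population if beak >= HARD_SEED_THRESHOLD]
    # Survivors breed; offspring inherit a parent's value with slight variation.
    population = [
        min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.02)))
        for _ in range(1000)
    ]

# The below-threshold variants are gone and do not come back.
print(f"range of beak values: {min(population):.2f} to {max(population):.2f}")
```

Run it, and the spread of values collapses to the upper half of the original range: nothing new has been added, and the variants that were filtered out never return, which is exactly the sense in which the paragraph above speaks of an impoverished gene pool.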
Following the massive eruption of Mt Krakatoa in 1883, in the Sunda Strait between the islands of Java and Sumatra in Indonesia, new islands consisting of hot volcanic ash, completely devoid of life of any kind, were formed. Naturalists have been observing the gradual re-vegetation of those islands and their colonization by species of insects and small animals that could live off the newly established vegetation. This is how the various volcanic islands of the Galapagos acquired the unique flora and fauna that Darwin observed. It depended on what the winds and currents brought to the islands, and on whatever might have survived on driftwood arriving on the new beaches.
On the Galapagos Islands, it was a matter of life for those plants and animals which had the best composition of genes suitable for an island, and death to those that could not establish themselves into viable populations. Thus the gene pool of the finches, on each island, has become impoverished, because unsuitable combinations of genes in progeny have gradually been eliminated from the population. The sum total of all the genes in finches represented on the Galapagos Islands, probably equates to the genetic composition of the original finches that migrated there from South America.
The phenomenon which Charles Darwin so meticulously observed and based his theory of evolution on could never account in any way for the postulated appearance of a host of new genes that would be required to manufacture a pair of wings with perfectly co-ordinated motion for speed and change of direction.
The earliest winged creatures, such as dragonflies with their amazing aerodynamic capabilities (which inspired the design of helicopters), appear in the fossil record in their full complexity. There is no indication of earlier, in-between transitional forms of these creatures, the essential scientific evidence which Darwin’s theory of evolution demands for it to stand; Darwin said so himself. Not only that, but the ancient dragonflies were much bigger, with wing-spans of two feet or more. Devolution, rather than Evolution, would be the operative word!
Once again, I do not intend to spend further time on this matter because it has been adequately answered in detail in numerous articles published by organizations such as https://www.creation.com and https://www.icr.org.
ARTIFICIAL SELECTION AND BREEDING
In a women's magazine I once saw a massive dog, the size of a pony, located in Western Australia. The owner was reported to have said that the only problem with the dog was the pain she felt when he stood on her feet. I vaguely remember the dog's height reaching up to her shoulders. Have a look at some of the biggest dogs on https://www.bing.com/images/search?q=world%27s+biggest+dog&qpvt=world%27s+biggest+dog&FORM=IGRE . Their size is truly mind-blowing.
While it would be difficult, if not impossible, to mate a toy dog with a St. Bernard, it might readily be possible by artificial insemination because they are both dogs. However, I predict that this may not be possible in all cases, since it might be difficult to revert extreme variants in animals or plants that have been selected by artificial selection (domestication), because of the loss of genes. For example, in the ‘cabbage’ genus (Brassica), where wild mustard was selected over thousands of years to produce gigantic flowers (cauliflower), the gene for flower size regulation was lost or damaged, hence the large cluster of flowers. Back-breeding with wild mustard would once again reduce the size of the flower clusters, but what purpose would that serve from an economic stand-point?
The Brassicas provide an excellent example of domestication. Variants of spindly-looking wild plants were selectively inbred for outsized flowers, stems or leaves, giving us a whole selection of vegetables that appear very different from each other. Cabbage, for example, was domesticated by selecting plants with unusually large leaves and inbreeding them. The photo on the left is a wild mustard derivative that has edible leaves of moderate size, while cabbage has very large edible leaves. In summary, in the Brassicas, artificial selection by growers was for flower clusters (cauliflower), flowers and stems (broccoli), lateral buds (Brussels sprouts), the terminal bud (cabbage), leaves (kale) or stem (kohlrabi). Kohlrabi (cabbage turnip) and Brussels sprouts were unknown anywhere 500 years ago. In each case, plants had lost the capacity for size regulation of a particular organ, which proved advantageous for food production. This is the reverse of evolution - the loss of functional genes, not gain. Call it Devolution if you want to give it a name!
All the genes necessary for the different Brassica variants were already present within the original wild plant, just as is the case for all the various pigeon and dove varieties derived from the wild rock pigeon over thousands of years. If all the varieties of pigeons were crossed they would revert to the wild rock pigeon, as is mentioned in some textbooks.
The beautiful yellow fields of canola (rapeseed) are a good example of plants that were selected because their seeds are rich in oil and they proliferate readily under widely differing conditions. They grow like weeds. Canola was first bred in Canada in the 1970s. Its wild relatives are weeds or have unwanted oils.
‘Canola oil, or canola for short, is a vegetable oil derived from rapeseed that is low in erucic acid, as opposed to colza oil. There are both edible and industrial forms produced from the seed of any of several cultivars of the Brassicaceae family of plants, namely cultivars of Brassica napus L., Brassica rapa subsp. oleifera, syn. B. campestris L. or Brassica juncea’. https://en.wikipedia.org/wiki/Canola
ORIGIN OF SPECIES AND BREEDING BARRIERS
Why then did Darwin and others believe that they had discovered a mechanism for the origin of species, or even life itself, by natural descent? (See the previous article on Creationists & Dinosaurs for the general scheme of evolution theory.) The thinking was that if a wingless creature inhabited a micro-ecosystem long enough, where flight would have a considerable advantage, then one or two randomly mutated forms among its progeny might eventually evolve wings (having been chosen to survive by the environment). Where the new genes for wings would come from, or the genes for feathers, no scientist has a clue, except in a wild hope they carry in their hearts and minds. So what spurs them on in the absence of evidence?
Everyone has seen litters of domestic animals where some individuals seem to be very different from the rest of the litter. That is precisely how the best hunting dogs or the fastest racing dogs were bred: by selecting the extreme variants in the litter that had the most desirable properties. The hope in the mind of every evolutionist is that if selection is done for long enough then something entirely new might arise. This is purely wishful thinking and has never been observed.
Think about the variation that is possible in domesticated animals and plants. After many generations of breeding it is difficult to believe that a Great Dane and a toy dog might have been derived from the same parents many generations ago. Yet scientists will freely admit that they are of the same species, Canis familiaris. However, having been kept apart for so long, the Great Dane and the toy dog are now separated by a physical breeding barrier due to size, unless one is artificially inseminated by the other. It’s as if 8-foot-11 Robert Wadlow were to marry the world’s shortest mature woman, Jyoti Amge of Nagpur, who is only about 2 feet tall and weighs 5 kg. You can imagine the social and anatomical problems that might exist. Behavioural and colour-based breeding boundaries can also eventually provide natural breeding barriers, as in the example of the cichlid fish, where different varieties of cichlids can share the same lake in Africa without interbreeding.
John Leslie, from New Mexico, USA, submitted the montage below of the variations possible within the misnamed American toad, which is not a toad but a lizard. John, a biologist and medical practitioner, believes that these are fine examples of variations of the originally created lizard, with adaptation and corruption of the genome over time. John has a large website, https://www.defendingthechristianfaith.org/ , with many articles, especially on Christian ethics, geology and biblical archaeology.
Using X-irradiation to generate random mutations in the code of DNA, and gene shuffling techniques, scientists have tried to cross species barriers, or more appropriately, ‘kinds’ barriers, but to no avail. There is always a limit to what can be achieved by interbreeding. This has been elegantly demonstrated in fruit flies, where one can experimentally produce numerous mutants and generations in just six months; the more extreme a variant becomes, the greater the probability that it is infertile or can no longer mate.
Another example is the mule. We are all familiar with mules. The mule is the offspring of a male donkey and a female horse; the reverse cross results in a hinny. Charles Darwin was impressed with mules because they are much hardier than either the horse or the donkey. This horse-donkey hybrid possesses more reason, memory, obstinacy, social affection and muscular endurance, and a greater length of life. It is called hybrid vigour. One can ride the narrow path down the Grand Canyon on mule back, if you’re game enough. Some parts of the path have very steep drops. Having my semi-invalid wife on the trip with me, who constantly needs my attention, I decided not to take the risk, though the mules are apparently quite safe.
Personally speaking, I wasn't all that convinced that the ride was safe when I saw mule droppings on the extreme edge of the path. I wondered how the mule stood to do that and how the rider must have felt. I certainly haven't heard of anyone falling down. In the photo with the sign, one can see the zig-zag path in the distance leading down into the Grand Canyon.
That’s all very well for the mule. He is hardy and reliable, but what about his future progeny? There are none. Mules are infertile because of the chromosome mismatch between horse and donkey. Thus, once again, there are limits to what can be achieved by breeding, and the same limits apply to natural selection. Ligers and tigons, the offspring of lions and tigers, have a similar story to that of the mule and hinny because they are also infertile.
In other words, neither artificial nor natural selection can provide anything radically new, because of chromosomal breeding barriers. There is, therefore, no explanation for the origin of life and the distinctly different kinds of animals and plants apart from The Creation, as described in the Bible.
BREEDING BARRIERS IN FLOWERING PLANTS
Flowering plants readily illustrate how breeding barriers work. They reproduce sexually like we do, hence the romantic love songs about the birds and the bees that cross-pollinate flowers.
Breeding barriers in plants prevent the creation of a world populated with grotesque, weak or ‘undesirable’ hybrids, or hybrids that are infertile. The highly regulated reproductive system in flowering plants has similarities with our own. There are male and female organs. For fertilization to create a new embryo, male and female cells must first meet. The female ovules are fixed to the ovary wall, whereas the male sperm cells, inside the pollen grains, must be carried to them by a special germination tube that grows down the style into the ovary. (See diagram below.)
In flowers, the anthers contain the many pollen grains, each of which carries two male sperm cells. When a pollen grain lands on the papillae on the moist surface of the stigma, it germinates like a tiny seed and produces a very thin pollen tube that will try to penetrate the stigma and grow all the way down to the ovary, carrying the male sperm cells in its tip.
The pistil, shown in orange (please refer to the diagram), typically consists of a swollen base, the ovary, which contains the ovules, the female egg cells. Once fertilized with a male sperm cell, the egg cell undergoes cell division to become a seed. The placement of the seeds, where the ovules originally sat against the ovary walls, is readily apparent when a tomato or capsicum fruit is cut open, revealing the interior of the ovary.
The stalk or style is the conductive tissue in the flower for the pollen tubes carrying the male sperm down to the egg cells (ovules). The location of the conductive tissue is shown in the sketch by the thin yellow line representing a pollen tube growing down to the ovary where it will have to do a U-turn to get into the ovary.
Breeding barriers are established in flowers in the following way, with the whole process being regulated like the underground vaults of Fort Knox that house the US gold reserves. Pollen tubes are allowed to grow along the style provided they have the correct molecular keys on their surface to open each ‘gate’ on the way down. At least ten such gates have been identified – they are mutual molecular recognition systems that will reject undesirable pollen tubes, as determined genetically by their DNA.
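The gate-and-key arrangement described above can be pictured as a simple sequential check. The sketch below is a loose illustration only; the gate names are invented, and real recognition systems are proteins and complex sugars rather than labelled strings:

```python
# Toy model of successive molecular 'gates' along the style: a pollen tube
# advances only while it carries the matching key for each gate in turn.
STYLE_GATES = ["stigma_surface", "upper_style", "lower_style", "ovary_entry"]

def tube_reaches_ovary(surface_keys):
    """Return True only if the tube passes every checkpoint in order."""
    for gate in STYLE_GATES:
        if gate not in surface_keys:
            return False  # rejected here, like the callose-blocked tube below
    return True

# Compatible pollen carries every key; self-pollen fails at the first gate.
print(tube_reaches_ovary({"stigma_surface", "upper_style",
                          "lower_style", "ovary_entry"}))   # True
print(tube_reaches_ovary({"upper_style", "lower_style"}))   # False
```

The point of the sketch is simply that rejection can happen at any of the checkpoints, and the earliest checkpoint is met first, which matches the observation in the next paragraph that failure often occurs right at the stigma surface.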
The earliest failure or abortion can occur right at the point of contact between the germinated pollen tube and the papilla on the stigma surface, as shown in my scanning electron micrograph. I used a weedy Brassica from our footpath to illustrate an example of incompatibility and compatibility in two neighbouring pollen grains that landed on adjoining papillae on the stigma. The photo shows the inability of a self-pollen (a pollen grain from the same plant landing on its own stigma) to penetrate the stigma surface. In this case, the germinated pollen tube is rejected at the surface of the stigma and is unable to penetrate the papilla. The area is marked by heavy callose deposition (yellow stain) at the tip of the pollen tube, rendering the pollen tube incapable of penetrating the cell wall of the papilla. In contrast, when the stigma is pollinated with a compatible pollen grain, the pollen tube successfully penetrates the papilla, as indicated, eventually finding its way through the style down to the ovules.
The interaction and responses I illustrated on the surface of the stigma are only the first of several molecular ‘security gates’ that have to be passed by the pollen tube. Even if all the molecular recognition gates were passed successfully, all the way down to the ovary, the pollen tube may still be prevented from releasing the sperm cells if the recognition signals fail outside or inside the egg cell itself. The ultimate breeding barrier may be at the point of cell division following attempted fertilization, if there is a mismatch of chromosomes between the different plant species or cultivars.
Thus, in effect, God has predetermined what may or may not cross-breed. All this is pre-determined and pre-programmed in the DNA, maintaining the interesting and separate species diversities we have. Domestic wheat is a self-pollinator: the wheat flower pollinates itself before the flower opens, thereby keeping the seeds formed genetically uniform, which is necessary for mass production and harvesting.
These fine-tuned and precise molecular signals that control what may and may not interbreed are predetermined at the level of the code in the DNA. The instructions were laid down in the very beginning, and there is no other way the codes could have been written. Keys and locks are designed by locksmiths, and molecular signals that have the appearance of purpose and function bear all the hallmarks of deliberate design. The molecular signals are often complex sugars protruding from the surface of cells. Breeding barriers allow us to enjoy the beauty and unique differences in the plant kingdom, and also in agricultural production.
If everything were allowed to interbreed we would have a monotonous world. Consider the variety of distinct fruits and flavours we have thanks to the breeding barriers that keep them apart from one another.
The best analogy for the beauty we find in nature that I can think of is a painter’s palette. If, through sloppy practice, the paints have been allowed to merge into one another, every canvas would end up with strokes of muddy brown. This is what would happen in plants and animals if cross-breeding were allowed to proceed unfettered. The stunning beauty of unique species evident in nature would disappear.
There are absolutely no experimental data or known mechanisms indicating how entirely new organs with a different design may randomly form, as postulated dogmatically by evolution theory. Everything points to the handiwork of an intelligent designer - God. There is no evidence that God used evolution to create man from simple organisms such as bacteria or amoebae. When God created Adam and Eve they were created with full faculties and the capacity for speech.
This resource from the PBS Evolution Library discusses how early stone tool making marks an important juncture in evolution. Compared to chimpanzees, early humans had longer thumbs with better dexterity in them, due to the presence of three muscles missing in chimp thumbs, as well as brains that gave them the ability to make effective stone tools. Once humans could manufacture and use tools, they could obtain more and better food. This enabled them to successfully raise more children, who in turn were likely to inherit their parents' hand morphology, leading to the increased manipulative abilities seen in modern humans.
Students could use this tool to discuss how something as simple as a thumb can drive evolution. Have them use the Chimps, Humans, Thumbs, and Tools resource listed below for activities related to thumb use in humans.
Yes, writing more does lead to better reading comprehension. Research proves it. But why?
The authors of The Reading – Writing Connection (2010) suggest many reasons:
- Both reading and writing are forms of communication. When writers create a text, little light bulbs go off as they think about their audience and what that audience needs in order to understand and want to continue reading their texts. Students write, but at the same time they act as readers, their own first audience.
- Writers think about composing skills when they read the texts of other writers. Why does the author use that vocabulary word? Why does the author have a first person narrator? How does the author identify characters through their dialog? Does an autobiography have to start with a birth? Does a story need to go in chronological order? If not, how can ideas be arranged? How do other authors do this? They read to find out.
- How do other writers connect sentence ideas or paragraphs? How do they explain things—with figures of speech or with examples? How do other authors make a difficult idea clear? Do they depend on charts, graphs or maps?
When writers read, they are not merely enjoying or gaining information. They are also aware that what they are reading was written by someone who had to make writing decisions, the same kind of writing decisions they have to make. By thinking about those decisions, student writers understand better what they are reading.
Why should we care what happens to elephants?
New Delhi, Aug 12: Today, 12th August, is World Elephant Day! India, once home to a large elephant population widely distributed across most states, now has fragmented populations because of excessive habitat loss, the subsequent conflict with human beings, and poaching for their tusks.
The objective behind celebrating World Elephant Day is to focus the attention of stakeholders on supporting conservation policies, including improving enforcement to prevent the illegal poaching and trade of ivory, conserving elephant habitats, providing better treatment for captive elephants, and reintroducing some captive elephants into sanctuaries.
Yet, we need these mighty animals for more reasons than one!
India has a little over 27,000 wild Asian elephants, about 55 per cent of the species' estimated global population, yet these natural nomads face an increasingly uncertain future in the country due to a growing human population and shrinking areas available for elephants to roam.
They range across 29 Elephant Reserves spread over 10 elephant landscapes in 14 states, covering about 65,814 sq km of forests in northeast, central, north-west and south India.
But if that seems like a vast amount of territory, consider that Elephant Reserves include areas of human use and habitation - in fact unless they lie within existing Reserve Forests or the Protected Area network, Elephant Reserves are not legally protected habitats in themselves.
So a large chunk of the country's elephant habitat is unprotected, susceptible to encroachment or already in use by humans. And while elephant populations are largely concentrated in protected forests in the north-eastern states, east-central India, the Himalayan foothills in the north, and the Western and Eastern Ghats in the south, the animals require free movement between these areas to maintain genetic flow and offset seasonal variations in the availability of forage and water, according to WTI.
That's why 'elephant corridors' are so important. As forest lands continue to be lost, these relatively narrow, linear patches of vegetation form vital natural habitat linkages between larger forest patches. They allow elephants to move between secure habitats freely, without being disturbed by humans. In many cases, elephant corridors are also critical for other wildlife including India's endangered National Animal, the Royal Bengal tiger (Panthera tigris).
Why we need elephants
The elephant not only has an important physical presence in Indian tradition and culture but is also a metaphor for many elements of a civil society - emotion, intelligence, memory and, most importantly, living in herds, which teaches us the importance of family.
With their dung, they 'transport' seeds all over the forests. At the same time, they also clear up large trees and ensure that tiny plants on the forest floor get enough sunlight.
In many such little ways, these large mammals keep the entire ecosystem in check, and the forests as we know them cannot exist without them.
Emperor penguin (Aptenodytes forsteri) populations in 2019 were found to have grown by up to 10% since 2009 – to as many as 282,150 breeding pairs (up from about 256,500) out of a total population of over 600,000 birds (Fretwell et al. 2012; Fretwell and Trathan 2020; Trathan et al. 2020) – despite a loss of thousands of chicks in 2016 when an ice shelf collapsed. Yet, biologists studying this species are currently petitioning the IUCN to upgrade emperor penguins to ‘Vulnerable’ (Trathan et al. 2020), based on models that use the implausible and extreme RCP8.5 ‘worst case’ climate change scenario (e.g. Hausfather and Peters 2020) that polar bear biologists find so compelling. Not surprisingly, their unscientific models suggest emperor penguins could be close to extinction by 2100 under these unlikely conditions – but if we reduce CO2 emissions via political policy, the penguins will be saved!
In the August issue of Physics Today, climate scientists Toby Ault and Scott St. George share a pair of startling research findings. Between roughly 800 and 1500 CE, the American West suffered a succession of decades-long droughts, much longer than anything we’ve endured in modern history. And statistical models suggest that, as the climate warms, such megadroughts are increasingly likely to return.
This is the half-century mark of the large-scale immigration of the Copts from Egypt, which began to gather steam in 1968. It is a good time to reflect on this historic phenomenon and its implications for this ancient people, even if it is still ongoing and its impact remains fluid. As always with the Copts, their predicament is a microcosm, a test case, of the larger Egyptian problem. Their evolution in immigration, although interesting for its own sake, contains clues about Egypt at large. We can begin to understand how Egypt might fare if its political and social systems were freer by understanding what happened when Egyptians were suddenly placed in a freer society.
Immigration involves a departure and an arrival, or a “push” and a “pull”. In the case of the Copts, conditions in Egypt provided the push, where both the successes and failures of the Nasserist project were problematic for them. America, Canada and Australia provided the pull, through social changes that made these places more hospitable to non-Westerners and changes to their laws that allowed more immigrants from outside Europe. For example, the Civil Rights revolution in America overturned the emergency quotas of 1921 through the Hart-Celler Act of 1965; Canada and Australia underwent similar changes. Immigrants usually undergo a transformation that leaves them with a hyphenated identity to serve their new needs and the circumstances of their new countries. These identities are marked by various levels and types of activism: social, cultural and political. In the case of the Copts these forms of activism took different paths and were marked by differences in acceptance and success. Social and charitable activism proved most successful, in part because it built on pre-existing norms and practices in Egypt. Cultural activism proved weakest, reflecting the tragic history of the Copts since the schisms of the 5th century, and more so in the aftermath of the Arab invasion in the 7th century. To keep their faith, the Copts have surrendered every facet of their native culture: language, music, literature, and all arts except icon painting and liturgical music. But it was political activism which proved most flammable and discordant, and which in the end deeply marked their interaction with their ancestral home and their evolution in their new homes. This post will attempt a summary of its earliest evolutions and its current uncertain role.
Political activism is usually grounded in some past, at times mythic; it is forged by the present and articulates a vision for a desirable future. For the immigrant-led activism that rose in the early 1970s, the past was a history of loss and dispossession, the present was a crucible of conflict, and the future was an imagined Egypt where the Copts were finally equal citizens. Fifty years later, its vision for Egypt remains unrealized, and perhaps further undermined. Political activism is still the province of a few leaders, with minimal participation from the larger community. It would not be harsh to declare it a failure by its own measure of success, and yet influential in unanticipated ways. There are many causes for this outcome, none more vital than the decade-long conflict, from 1971 to 1981, between President Sadat and Pope Shenouda. Many books and articles have described and examined that conflict reliably and credibly. In almost all of them “immigrant Copts” play a role, often portrayed as secondary to the conflict. In fact, they were essential to the conflict, and in many ways served to aggravate it and drive its course. Immigrant Copts played the role of children in a bitter divorce, where the two parents play to the audience of their children for acceptance, support, approval and, on occasion, even emotional vengeance. This conclusion is not radical when the facts are looked at afresh. A question that can never be answered, but is important to ask, is whether the Sadat-Shenouda conflict would have played out in the same manner, or occurred at all, had there been no vocal immigrant community. “Aqbat Al Mahgar” (Immigrant Copts) is the term coined by many Islamists, and government officials, as a derogatory shorthand for the critics from afar. This is the clearest sign that immigrant political activism represented more than a passing nuisance, and that its message, and perhaps more importantly its methods, struck a nerve.
Sadat arrived at the President’s office a year before Shenouda rose to be Patriarch of all the Copts. By 1972, and certainly after the 1973 war, both men were comfortable and secure in their new offices and engaged in a punishing match of wills. The two men possessed similar temperaments but occupied different vantage points; indeed different planets. Yet the conflict between them was ahistorical by Egypt’s modern standards. Since the waning of Ottoman power in the 17th century, the rulers of Egypt had largely avoided open conflicts with the Copts, regardless of how they felt about them or their degree of tolerance for religious differences. On the other side, Popes never saw fit to adopt a policy of open defiance toward the ruler. These two men, however, were different, and came to the conflict with unequal powers. Sadat possessed a strong grip on the instruments of the Egyptian state, including the army, police, civil service and propaganda channels, and after 1977, the appreciation of the West and most of the world at large. Against that Shenouda had only a grip on his shepherd’s staff, the symbol of his office. Those who knew Egypt intimately felt that the two men were headed for serious trouble, with more in store for Sadat. The Egyptian papacy is the oldest continuous institution in the country’s history, nearly 2000 years old. It has been headed by 117 men as successors of St Mark the Apostle. They possessed the full range of human characteristics, including saints and thieves, wise men and simpletons, reformers and dolts, and every shade in-between. Yet the office endowed them with power and a form of innate historical wisdom, so none could be touched or easily removed even by the most tyrannical of rulers. Byzantine emperors, Abbasid Caliphs, marauding soldiers of fortune, European colonialists, and especially powerful lay Copts, found that going up against the Pope, even when their cause was right or just, was a daunting prospect. At the height of the conflict between the two men in 1980, a man who disapproved of Shenouda’s handling of the relationship with Sadat summarized the grim prospects for the President: “Sadat can ignore Shenouda and appear weak, imprison him and thus become his prisoner, or kill him and be hounded on earth and in the afterlife”. Shenouda’s unyielding stand was perhaps understandable, but Sadat’s escalation of the conflict seemed a foolish gambit from a man who displayed a survivor’s wit, keen political instincts and, on many occasions, a daring ability to change course. Indeed there were many times when the relationship seemed to be taking a better course, only to have outside events inflame it again. For Sadat, it was always the fixation on “immigrant Copts”, a tiny group of little influence, that raised his ire beyond reason. Abdel Latif El-Menawy, who once headed the News division of the Egyptian Radio and Television Organization, catalogues the times Sadat blew his top over small provocations from New Jersey or Washington DC. “Why do these Copts want to turn the Christians of the world against me and Egypt?”, Sadat complained over and over again. Of course, the immigrants could do no such thing; their tiny newspaper ads were little noticed, and the police kept their small demonstrations politely but firmly out of Sadat’s earshot or line of sight. El-Menawy, who knew Shenouda well and interviewed him often, relates a remarkable 1977 exchange between the two men.
“How could our children abroad speak against us … they are complaining about me to Carter”, ranted Sadat. Shenouda cut him off, rising to say, “The first thing I want to say is that some Copts might have emotional problems.” He continued on to insist that these emotional problems were the result of discrimination in their past lives in Egypt. He then pivoted to deliver a counter punch: “Our children abroad have done a great deal for Egypt. They served us during the war of October 1973 and God knows how much effort they exerted … they are worried about the [new] laws … should we comfort them it will be over and you won’t be so upset with them”. Shenouda seems to simultaneously disavow immigrant activists while using them to accomplish his desired goals. That encounter encapsulates the trouble with political activism among immigrant Copts. They can irritate the Egyptian state but not alter its behavior. They can provide stout support to the Church in Egypt, and at the same time find themselves in trouble with it. For half a century the activist leaders looked obsessively in the rear view mirror, or fixed their gaze on a very distant horizon, often missing what was directly in front of them. Like generations of Egyptian political activists, they excelled at stating the problems but rarely made an effort at compromise toward a solution. They found a home in the margins and could scarcely imagine themselves wielding any power. Today, with all the concern about the fate of Christians in the Middle East, new activist leaders are unable to formulate a workable set of realistic goals or “asks”. They remain the children of Shawky Karas, the man who kick-started Coptic political activism in 1972 and became its prototypical leader, and its warning example.
From thousands of miles away, Shawky Karas, an academic mathematician, could raise Sadat’s blood pressure with a tiny ad or a letter to a Congressman or Senator. He reproduced many of these ads and letters in his self-published 1986 book “The Copts Since The Arab Invasion: Strangers in Their Land”. The book, with its typewritten pages, poor editing and plain blue cover, feels like notes from the fringe. It is, however, a remarkable combination of keen insights placed side by side with wild accusations and barely believable conspiracies. The most powerful part of the book is a 20-page response to Sadat’s May 14 1980 speech in which he declared himself “The Islamic President of an Islamic State”. Karas’ counter-arguments anticipate the suffering Egypt would eventually undergo as different men and factions tried to give concrete realization to that claim. Yet Karas makes no mention of his role in raising Sadat’s ire, nor in precipitating the “Easter rebellion” of 1980. For nearly a decade Karas had been propagating a redefinition of the Copts, not only as non-Arabs, which the majority accepts, but as living victims of Arab imperialism. It is a flammable message, precisely because it contains sufficient truth to give it credibility, with just enough mythology to make it a powerful cudgel. His retelling of Egypt’s history in the first third of the book explains why he never made an outreach to immigrant Muslims, whose voice might have added weight to his message and demands, and just as importantly why they were unlikely to add their voice even if he asked. He attempted to recruit other prominent Copts to his side, succeeding with some and failing with many others, who found him too combustible for comfort. His major success came in 1977 when he agitated to convince a church conclave to include the following in its January 17 1977 message: “.. the total sincerity [of the Copts] for the beloved nation, of which the Copts are the oldest strain, so much so that no people of the world had been tied to its land and nationality like the Copts of Egypt”. While the statement may well be true, it also serves to “other” the majority of Egyptians, who are Muslims. In a meeting at the Jersey City church in February 1977 to plan an upcoming trip by Shenouda to the US, Karas claimed credit for the statement and unveiled what would become his signature message and program for two decades to come. He warned about the “creeping adoption of Shari’a” in Egyptian law. He centered his message on a single verse from the Bible, Matthew 12:25, “Every kingdom divided against itself is brought to desolation; and every city or house divided against itself shall not stand”, and he read from handwritten notes what he deemed to be a suitable template for every speech about Egypt’s predicament at that moment: “will it be unified by nationalism or divided by religion?”. He also advocated the cancellation of religious celebrations as a form of passive resistance. Such measures were not unknown in Coptic history, but most in the church hierarchy considered them too extreme for the current situation. In time he began to gather support, most notably from men such as Dr Rodolph Yanney, a doctor and publisher of a cultural newsletter, and two “radical” priests from the US West, Fr. Ibrahim Aziz and Fr. Antonious Heinen, as well as the more mainstream Fr. Ghobrial Abdel Sayed of Jersey City. Others proved cold to his message.
Bishops Gregorious and Samuel found him too radical for their tastes, and his entreaties to Aziz Atiya went unanswered. Things seemed to change in early 1980, after the Christmas Eve attacks on several churches in Egypt. Shenouda intimated to others that he was considering a cancellation of the Easter celebrations on April 6. The news travelled quickly to America and spread both delight and consternation. Karas praised the step and booked space in several newspapers to coincide with Sadat’s visit in early April and his state dinner at the White House. Others worried about the impact of such a step. Bishop Gregorious records in his memoirs a meeting on March 14 1980 with Aziz Atiya, his wife, and Ishaq Fanous, the noted artist and icon painter. He states the purpose as “discussion of the Encyclopedia”. Curiously he neglects to mention the presence of another man, Mirrit Boutros Ghali. Nor does he mention that at the end of the meeting both Mirrit and Aziz asked him to intervene with Shenouda and warn against cancelling celebrations. In the end, Shenouda did not heed their advice. Both of these men, and Gregorious himself, represented a rare moment in Egypt’s history that was rapidly vanishing from view. On March 26 1980 the Pope gave a sermon that seemed to borrow heavily from Karas’ 1977 notes. He asked the same question of Sadat and demanded an answer. Sadat was too busy preparing for his trip to Washington DC, and provided his flammable reply during the May 14 1980 speech. El-Menawy, in his book “The Copts”, tries to discern the influence of immigrant Copts on Shenouda’s sermon and finds him evasive on the subject. Karas tried to stage a demonstration in front of the White House during Sadat’s state dinner but was rebuffed by the DC police. A rally called by Karas on April 6, Easter Sunday, in New York City fizzled because of a transit strike. The New York Times showed Carter and Sadat talking amiably under a magnolia tree in the Rose Garden, with no hint of Sadat’s rising temper. But the mere attempts at rallies were enough to send Sadat into a frenzy, exactly as Karas had predicted in February 1977. “He wants to be loved and obeyed”, Karas said of Sadat then, before issuing his version of the “3 Nos”: “We will not be silent, and we will not obey, and we will not love him”.
The 18 months between Sadat’s April 1980 visit to DC and the end of his life were marked by further strife and nasty sectarian attacks. Karas and his merry band of immigrant Copts were not silent, and did not cease writing to Congressmen, Senators, Governors and anyone who would listen. They did not seem to realize that few Americans cared about the “Coptic issue”. Peace had broken out between Egypt and Israel, and war between Iran and Iraq, and between Afghanistan and the Soviet Union. There was enough strife, and even diplomatic hostages, to divert the spotlight. The death of Sadat, the exile of Shenouda and the appointment of a papal committee changed the conflict from one between immigrant Copts and the Egyptian state to an internecine fight between different groups of Copts. What started out as a noble movement to enlarge the rights of Copts turned in on itself. A movement that started out narrowly Coptic became ever narrower; indeed sub-Coptic. The entire focus of the activists from 1981 until 1985 was on the release of Shenouda from his desert exile. It can be said that the heat and noise from America did little to accomplish that. In the end it was insider negotiations, and Egypt’s usual reversion to the mean, that released Shenouda. But Karas became a Shenouda partisan, both agitating for his release from internal exile and passionately ferreting out “enemies of Shenouda”. By 1985 all of Karas’ requests to politicians regarding the rights of Copts were regularly and politely rebuffed. On the other hand, he had won at least one internal battle. Those who opposed his methods, and had doubts about Shenouda’s, were now in retreat. Some walked away from their churches, others fell into silence. The “enemies” included many who might have formed a broader coalition for a broader good in Egypt. Karas’ group, the American Coptic Association, became unintentionally true to its name, having produced a larger impact on American rather than Egyptian Copts. The effect of the movement’s fading in the late 1980s was not to alter the nature of Coptic political activism, but to preserve it in the amber memory of those glory days when a single 2-inch newspaper ad could shake the walls of the presidential palace in Cairo. Soon enough, Shenouda asked the activists to pipe down, as Mubarak had a serious insurgency to deal with. The demonstrations became few and far between, the demands as grandiose and vague as ever, and new organizations specialized in inside politics in DC, holding conferences and commissioning panels that would regularly identify the problems and not much else. The Copts’ demands became further subsumed within the general worry about terrorism and the demand for democratic reforms in the region. Few bothered to analyze or learn the lessons of the 1970s. The vast majority of immigrants got on with their lives, built their churches, prospered and lived contented lives without much involvement in Karas’ style of political activism, even if they were formed in some degree by it.
When Karas passed away in October 2003, aged 75, condolences came by the hundreds from across the US, Canada and Australia. Many mentioned his activism, and more specifically his loyalty to Pope Shenouda. None called attention to his attacks on the Papal committee, or his involvement in the communal fights of the 1980s. While his activism failed to alter conditions for the Copts in Egypt, it did cement the loyalty of the vast majority of the immigrant community to Pope Shenouda. Many of the early immigrants were “children of Samuel”, the bishop who tended to their needs, helped them establish churches, and brought thoughtful discourse to the problems of immigration. In time that influence waned as well, but not without considerable pain for many. Political activism, originally meant to erect a barrier between state and religion in Egypt, brought forth a new immigrant identity that saw the church as an actor in every facet of the Copts’ life, both in Egypt and outside it. Immigrant Copts continued to embrace a cherished Egyptian identity, but one that rarely reached out to the other 90% of Egypt. New churches in the New World were being built at the rate of a handful a year, and all of them were becoming more than houses of worship; rather, disciplined outposts for a nation without geography, as Sana Hasan aptly put it.
A generation later, things are very different in America and Egypt, but Coptic political activism remains largely true to its older self. It has become vestigial. This is to be expected from a movement whose evolution is subject less to conditions in the new country than in the old one. Egypt has thrown a curve ball to these movements. Yes, sectarian attacks are more frequent, but bishops are not tossed in jail by the dozen, as in 1981. The rights of Copts in Egypt are further eroded, but so are the rights of many others as well. The Islamists who were once on the rise in Egypt in the 1970s are now on the outside, themselves in America agitating for change in Egypt in a manner eerily reminiscent of the Copts’ agitation then, and equally likely to become vestigial as well. Protests in America against political oppression and sectarianism in Egypt are rarely cross-confessional, and sometimes even illiberal in character. If there are rays of hope, they are usually among the young, those shaped by America and not deformed by Egypt’s struggle with identity. The irony is that any possible emergence of a genuine “Egyptian-American” hyphenated identity might happen only among those who have far less to do with Egypt than their parents or grandparents.
— Maged Atiya
In the late summer of 1967 a white-haired academic read the final drafts of a book about to be published in England and soon after that in America. The book evolved from a set of lectures he had given at the Union Theological Seminary in New York City a decade earlier. It is easy to imagine him working in a study on the second floor of his Victorian house on Perry Avenue nestled in the hills east of Salt Lake City by the university campus. He was a year shy of 70, and soon to approach a second retirement, but life was to offer him two more decades which he used to great purpose. The news from the land where he was born, grew up and spent a good part of his adult life was difficult. He was to become an American citizen within a few years, but his connection to Egypt rarely wavered, whatever the circumstances, and whatever neglect and bias the country threw his way. He also remained involved in the affairs of his church, although he was neither outwardly religious nor a frequent church goer. He expressed this attachment in the preface of the book by offering it as “the fulfilment of a lifelong vow”. “Vow” may seem a paradoxically religious description for an act of scholarship by a man who was largely secular in tastes. But terms such as “secular” and “religious” could not easily be applied to one whose elliptical confession of faith reads “it must be stated that I, a historian by vocation, am also a member of the Coptic Church by birth and upbringing”. “Vocation” along with “Vow” color his life and work with a certain Christian religious brush, even if the bulk of his scholarship was devoted to the study of Islamic history and the late Crusades. Of the book he completed he writes “As a matter of fact, I allowed myself to be persuaded into shouldering this arduous task, partly as a modest work of scholarship, and partly as an act of faith”. These statements and many others throughout the book leave no doubt that his purpose was more than producing a simple scholarly and dry exposition of what the author calls the “primitive churches”, those of “the Coptic and Ethiopic, the Jacobite, Nestorian, Armenian, Indian, Maronite, and the vanished churches of Nubia and North Africa”. And it is to the “more” that we must pay attention on the 50th anniversary of the publication of Aziz Atiya’s “History of Eastern Christianity”. Although the book does an excellent job of summarizing the history of these churches, it is the Copts that occupy the leading and largest chapter in the book, as befits the confession of the author. There is much to mine in the book, coming at the halfway mark of the last eventful century in the life of the Copts. A close reading of the book leaves the impression of a paradox: an author who both transcended and was limited by the circumstances of his time. The underlying worldview of the book is anti-colonial but not post-colonial. The mood of the author is one of pride in his heritage but unease about what has befallen it over the centuries. The words that emerge strike an uneasy balance between a desire to speak the truth and a reticence born of the author’s position and the consignment he received as a born Copt.
Aziz Suryal Atiya (1898-1988) would have been 120 years old next July 5. He had mused that he wished for a biblical lifespan of 10 dozen years. His tenure on earth was shorter, amounting to seven and a half dozen years, and in a broad sense was marked by 12-year cycles of challenges and achievements. At 12, as an aspiring young student in Cairo away from his provincial family, he witnessed the events surrounding the assassination of Prime Minister Boutros Pasha Ghali and remembered them well into his late years. At 24 he was a poor but ambitious young man who left medical school for lack of funds (his official biography notes that he was kicked out in 1919 due to his nationalist agitation). He experienced Dickensian poverty in the intervening years, made bearable only by the support of a stern father, a loving mother, and an adoring brood of siblings. He walked the streets of Cairo in shoes stuffed with newspapers, unable or unwilling to spend the streetcar fare, but with dreams of studying medieval history abroad. The poverty neither dimmed his ambition nor weakened his spirit. At 36 he had acquired several degrees from England and was headed to a respected professorship in Germany. He completed a study, now a classic, of the 14th-century crusade of Nicopolis, one of the last crusades and an event pregnant with future meaning for Christian-Muslim relations. But Germany in the late 1930s was no place for a brown man and he headed back to Egypt. By 48 he was a resident of cosmopolitan Alexandria, a founding member of its university, married to an intelligent and spirited daughter of the Coptic aristocracy and raising two young children. He would soon start on a project that ultimately led him to America: the microfilming of the library of St Catherine monastery in the Sinai. His collaborators were mostly Americans and European refugees to America. In 1951 he was invited to summarize his findings to the Library of Congress, and his speech was introduced by the then Egyptian ambassador Kamel Bey Abdul Rahim. Those years also brought ominous clouds. His neighbor and friend, the physician and intellectual Ahmed Zaki Abu Shady, would immigrate to America one step ahead of the government provocateurs and murderous Islamists. The move, unique at the time, would presage a later flood, as well as the course of Aziz’s own life. The nativist wave that started in the late 1940s and culminated in the educational “reforms” of 1954 occasioned his demotion and finally his departure from Alexandria. Unhappy with the lack of recognition for his work and the general badgering by the new regime, he resigned one step ahead of the purge. As the so-called liberal age was ending he became increasingly occupied with Coptic studies and the affairs of the Church and community. His turn to Coptology bore echoes of earlier involvements with such scholars as Ragheb Muftah and Mirrit Ghali, but was clearly a new occupation for him. During the years immediately following the 1952 coup, when his career in Egyptian universities was nearly at an end, he made three critical contributions to the Coptic community. He established the Higher Institute of Coptic Studies with Sami Gabra, mentored many students who would join the clergy in senior capacities, and persuaded the Coptic clerical hierarchy to ease its historic suspicion of Protestant churches and initiate ecumenical relations with many other Churches.
After he led a delegation to the World Council of Churches conference in 1954, his trips to America became more frequent, and at 60 he finally settled in Salt Lake City to head a new institute at the University of Utah. The next dozen years were exceptionally productive. Aside from his academic duties, he finished several books, including the “History of Eastern Christianity”, and was even involved in such esoteric pursuits as locating the hieroglyphic rolls at the foundation of the Church of Latter Day Saints. At 72 he did not settle into retirement. Instead he assisted the Egyptian Church in selecting suitable pastors for new immigrants, working with his former student Bishop Samuel in that capacity, and planned his next project. That project was a compendium of scholarly articles on all aspects of the Copts. He succeeded in his ambition to make it an international work of scholarship, with as many non-Copts as Copts involved in it. It was a dozen more years before the project was firmly established, and at age 84 he felt confident that a final product would come out in his lifetime. He missed that mark by only a handful of years, having passed away in 1988 after falling ill while working at his desk, writing the introduction to the eight-volume work.
The “History of Eastern Christianity” summarizes the history of these churches with quick brushes and a substantial number of references. But beyond the impeccable scholarship there is also a polemic that looks critically at how the West perceived Eastern Christians. Of Catholic writers he notes that they “[are] usually men of great learning and erudition who viewed the East from the narrow angle of their own profession with sectarian vehemence and considerable lack of understanding”. On the other hand, Protestant writers “failed to come to grips with the essence of Eastern Christian primitivism”. What is needed, he argues, is a narrative by “native historians”. In its purpose the book anticipates later works, such as “Orientalism” by Edward Said, published a decade later. However, in method and conclusion, it is entirely different. It reflects the author’s belief that it is pointless to try to call out bias or demand that it end; rather it is best to elevate the “native” so that such biases are made silly in the light of new accomplishments. His awareness of the condescension of the West toward Eastern Christians exists side by side with respect for and fascination with Western culture and its methods and advances. He grew up among Coptic clergy who harbored undisguised dislike for the West and Western Christian methods. Yet in 1954 he persuaded an anti-Protestant Pope Yus’ab to bless a mission of a bright young monk and a priest to the World Council of Churches by telling him that “we must strive to educate the Protestants, who are our younger brothers”. More than a dozen years later he looked on the fruits of his argument with some satisfaction. “The Coptic Church, which had chosen the solitude of its own primitiveness, its peculiar spiritualism, and the rough road of its so-called Monophysitism since the black days of Chalcedon in 451, is now steadily recapturing its faith in old friends and foes overseas and in distant climes. The aloofness and traditional suspicion of the patriarchs towards other Christians of different sects is gradually being replaced by a sense of mutual regard and a measure of cooperation …”. He does not absolve Eastern Christians, and specifically his tribe, the Copts, of a measure of complicity in the Western gaze. Of his people he writes “The place of Copts in the general history of Christianity has long been minimized, sometimes even forgotten, because the Coptic people themselves had voluntarily chosen to live in oblivion. After having led the way for centuries, they decided to segregate themselves from the growing ecclesiastical authority of the West in order to guard their way of worship and retain their national pride”. Rather than air grievances and demand equality, he seeks a position of strength by the jujitsu of proudly adopting the description “primitive”, once hurled by Western missionaries as a rebuke to the East, as a definition for a Christianity uncontaminated by worldly power and its accretions. The epilogue makes clear his agenda of returning the East to a central place in Christian history. He approvingly notes Milton Obote’s demand that “we should have more African clergymen, after all churches are international … White missionaries have done good work but their era is finished”. From that quote he pivots to making his own demand: “The drive towards proselytism must be arrested once and for all in order to strengthen the churches of the East by a systematic avoidance of separating their sons from their ancient professions”.
Yet he notes that “the Eastern churches are at best too limited in their means to cope with those vast responsibilities”. This leads him to the conclusion that Western Christianity can best assure the survival of its Eastern brethren by aid to the native churches rather than direct intervention. Although this has become the official position of many of the Oriental churches, it has yet to be accepted by all Western churches, especially the right-wing evangelicals. In a calm and deliberate manner he announces his ambition that the “general history of Christianity will have to be rewritten to incorporate the monumental and sometimes turbulent contributions of the Copts”, and by implication other Eastern Christians. The insertion of the word “turbulent” hints at his view that the primitives are not entirely blameless in the schisms of the fifth century. He notes that he possesses the “inevitable passion of one who writes from within the Coptic world and yet who must view events dispassionately with the mind of a historian from outside”. This necessary distancing was to bring him into conflict with many of the Church leaders, including patriarchs, and accounts for the many misguided attacks some Copts still level against his scholarship to this day.
It is now common to see the rise of Islamists and the violent variants of their ideology as the largest threat to the primitive churches. Atiya was not blind to persecution and its ill effects, but he saw in Western evangelism a different and potent threat. He had studied Islam for decades and came to know it well and see much good in it. His lectures on Islamic history attracted many Muslim students at the University of Utah, and more than a few confessed that he taught them as much about their heritage as their religious leaders did, if not more. His views on the threats to primitive Christianity were subtle and uncolored by personal biases. This subtlety, and even a certain ambiguity, are demonstrated in his discussion of the turbulent times between Chalcedon and the arrival of Islam. He gives a full account of the theological differences at Chalcedon only to insist that the “political background can not be minimized”. After Chalcedon “The Copts were humiliated as never before, and the Coptic Church suffered the tortures of the damned at the hands of the Melkite colonialist. The wonder is that their communities were able to bear the brunt of such travesties and survive. But the bulk of the Coptic nation remained faithful unto the last, and harboured a deep-seated hatred of the Byzantine oppressors and all things Byzantine, which found natural expression not only in the so-called Monophysite doctrine but also in the Coptic language, Coptic literature, and above all in the Coptic art”. The Byzantine is a stand-in for all those, before and after him, who oppressed the common folk and ground them into a fate of ignorance and poverty, and survival is a testament to faith but also to sheer stubbornness. The book delivers an unambiguous conclusion about Chalcedon, seeing it as an expression of nascent Egyptian nationalism. While most political scientists would disagree with such an assessment, noting that nationalism is a product of modernity, Atiya is unapologetically romantic in believing in the existence of an essential Egyptian “folk”. This may have been the product of the intellectual ferment of his youth in Egypt, or of the European scholars he studied with (most notably Paul Ernst Kahle, the eminent orientalist who barely escaped with his life from Nazi Germany). This belief colored his view of Egypt’s conversion to Islam. The arrival of Islam would ultimately reduce the Christians of Egypt from the entire nation to 10% of it, but he does not subject the Arabs to severe criticism. They, and subsequent rulers, “preserved the Copt as a fine source of revenue”, and their arrival may have been paradoxically providential. The Coptic Church was nearing extermination as a heresy, and the arrival of the Arabs allowed it to cleverly outmaneuver the Melkites to “become the sole representative of Christianity in Egypt”. Such an interpretation may seem alien to the Western mind, but to a primitive Christian the survival of an undiluted faith trumps any assumption of secular power or the safety of the majority. He amply documents the horrors of pogroms and other persecutions of Copts during the times of the Mamluks, but refuses to lay the blame on Islam as a religion. He gives a full accounting of the horrors of the Armenian genocide, but blames it on narrow Turkish ethno-nationalism. He attributes the massacre of Christians in the mountains of Lebanon in the 1860s to tribal loyalties, cynical ploys by the Ottoman rulers, and the general crookedness of humanity.
He tries to find a general theory for the survival of the primitive churches in the final pages of the book. The epilogue begins with a question: “At this journey’s end, it is fitting to ponder over the causes of the survival of most ancient Christianity of the East in the midst of the surging sea of Islam”, especially given that Islam was a “good religion” and conversion did not “throw a long shadow of shame on an apostate”. He provides two reasons. The first is that Islam never wanted to eradicate Christianity, noting that “there was no humiliation in being a Christian in the eyes of a Muslim”, a statement of opinion that stands in direct conflict with some of the historical facts the book puts forth. The second is that the “eastern Christian was able to preserve the purity of his race from pollution through the intermarriage with the ceaseless waves of conquerors from outside … Initially a way of worship, faith in the end became a comprehensive way of life and a symbol of an old culture”. Specifically of Egypt he notes that “the racial characteristics of the Copts themselves, their unwavering loyalty to their Church, their historic steadfastness toward the faith of their forefathers, and the cohesive elements in their social structure combined to render their community an enduring monument across the ages”. This is as close as he can come to a theory given the breadth of his experience with the local religions. In a hand-written account he notes his excitement upon first visiting St Catherine and locating rolls long thought lost. The entire trove proves disorienting to anyone wishing for a clean delineation between Islam and Christianity. There were Bibles written in Kufic script. There were accounts of saints that are clearly “Islamic” in style, and so on.
Yet for all his deep understanding of the complexity of religious interactions, and his seemingly broad and secular views, the cosmopolitan scholar remained a “primitive Copt”, according to a handwritten note to one of his relatives. He spent the last two decades of his life immersed in the Coptic Encyclopedia, sparing no effort to locate experts and cultural artifacts to fill its volumes. In a November 23 1977 note to his friend Kurt Weitzmann of Princeton, he inquires about the health of Weitzmann and his wife, only to pivot quickly to a request to find him some Coptologists “behind the Iron Curtain”. But this immersion ultimately lessened his immediate involvement in the communal affairs of the Church. He reached out to the most prominent Coptic theologian, Matthew the Poor, and excitedly asked him and the monks around him to be involved in the effort. They turned him down. After the October 6 1981 assassination of Bishop Samuel he seemed to lose interest in meeting and conversing with church prelates, favoring the solitude of scholarship and his own Coptness. His personal travails with the men in black who lead the Church did not prevent him from offering an accurate assessment of the central role of the Church in the life of the Copts, noting that “Copts regarded their prelates with the highest deference. To them they looked for spiritual leadership and personal guidance, especially in the days of great trials, which were not infrequent in Coptic annals. Neither massacre, nor persecution, nor dismissal from office, nor confiscation of property could exterminate the Copts as a community, and the hierarchy stood in the midst of all movements to fortify the faithful through times of storm. Faith and fortitude were their means of survival, and their rallying point was the patriarch, whom they feared and revered, not on account of the legal powers accorded to his office, but because of piety and godliness.” It is notable that the quality of great learning does not appear in that assessment.
The publication of the book predates the onset of a historic development for Copts, and more generally for other Christians: the increased immigration to the West. Immigration blurs the neat distinction between Eastern and Western Christianity, and unsettles the reduction of faith to a national or racial identification. The realities of immigration, and of rapid acculturation, seemed to dawn on the author with occasional surprise. In a 1975 note to Weitzmann he apologizes for not stopping by to visit him in Princeton, noting that he spent nearly two weeks driving down the East Coast visiting members of his immediate family, and those of his wife, who now dotted that landscape. In 1982, while dining in a French Vietnamese restaurant in the SoHo neighborhood of New York City, he remarked that Pope Shenouda had introduced him to some bishops as “Ustaz Amerikani”, or an American Professor. He chuckled at the thought and concluded, in English, “perhaps he is right”. On July 4 1988 he celebrated his 90th and final birthday in a magnificent setting on top of the Rocky Mountains, attended by a large number of his immediate family, close friends and many scholars who flew in from several continents. It was an entirely American affair. Of the younger generations in his immediate and extended family, which had grown polyglot by intermarriage with non-Copts, he expressed the hope that “they may not share our blood but perhaps they will remember our culture”. The book from two decades earlier remained the last moment of certainty about his people and their essential nature. After that moment it was increasingly difficult to separate the notion of religion as culture from culture as religion. Just at the moment when he expressed a certainty about what a Copt (or an Eastern Christian) is, circumstances of historical proportions threw a large measure of doubt at his answer. It is possible to read the “History of Eastern Christianity” as a relic of a time before the region descended into cultural decay and savagery. It is also possible to read it as a celebration of renewal after centuries of decay. It is probably best to read it as both, in accordance with the author’s subtle ambiguity about human effort and the uncertainty of providence. The book remains deserving of a first and many subsequent readings. As for the author, his life should be celebrated as a success clawed from fierce adversity. His wish to be buried in a mausoleum he built with his cousin a short distance from ancient Coptic Cairo, in part with proceeds from the book, remains unfulfilled. He rests in the American Rockies, a primitive Christian among the Protestants.
— Maged Atiya
In early 1968 Samir Nessim Atiya, an engineer, met with his cousin Aziz Sourial Atiya, a historian, to plan and build a new family mausoleum. The current one was getting pretty full, and the time seemed right for the project. Samir’s company was prospering, while Aziz’s latest book had just gone to print. Their favored architect was finishing his main project, working on the new cathedral due to open that summer. The engineer and the historian planned for something different from the usual: a daring slab of granite more than 12 feet high, in the modernist shape of a pyramid over the underground crypt. By their calculation the new mausoleum would be full by 2018. Others would then take up the task of building the next one. At the beginning of 2018 the mausoleum stands nearly empty. Its occupants are the builders’ two sisters, Linda Nessim Atiya and Galila Sourial Atiya, two strong-willed women who feuded with each other for most of their lives before resting peaceably next to each other, with no one else.
The builders’ fathers, Nessim Atiya and Sourial Atiya, had gone into business together 50 years earlier. The older brother, Sourial, was severe, kindly, deliberate and conservative, while Nessim, more than 15 years younger, was expansive, mercurial, daring and imaginative. Several times they made money together, only to lose it all, before trying again. Eventually, in the late 1920s, they went their separate ways. Sourial invested in land, the only thing he thought to be secure. Nessim started a bottling company producing soft drinks in unmarked bottles which the locals around the Delta town of Senbelaween called “Nessim’s Kazouza”. Nessim seemed to be a marketing wizard. Every week a horse-drawn cart pulled into a different village loaded with his bottles. A robust bodybuilder got out and gulped an entire bottle in one go, belched loudly, and then went on to do impressive deeds of strength. The message was not lost on the men in the village. They bought and bought into the promise of virility. But misfortune stalked both men. Sourial was shot by his bodyguard, who robbed him of his lands’ rent. Nessim died suddenly and painfully of either kidney failure or prostate cancer when Samir was 8 and his younger brother Maurice was a mere toddler. But the families held together. Aziz supported his brothers’ education with money from abroad while a student in England. He also became a mentor, and effectively an adoptive father, to Samir. The brothers Sourial and Nessim had ten children between them who survived to adulthood, seven boys and three girls. All ten children were to have relatively successful lives, against all odds. They produced 24 children among them. In 1968 only two of that generation lived abroad. Today more than three quarters of them live outside Egypt. On the occasion of burying his older sister Linda, who passed away at the age of 100, Samir noted that the locks on the underground crypt were hopelessly rusted from lack of use. “Our dead have left Egypt”, he remarked to his son.
— Maged Atiya
From the upcoming "Tales of Immigration"
Egyptian President Sisi inaugurated two floating bridges in Ismailia and Qantara named after two army men who died in combat against terrorists, Ahmed El Mansi and Abanoub Gerges. There is a symbolism in the gesture of the twin names, one Muslim and one Coptic, draping the two bridges. We are supposed to feel a surge of warmth about the naming equivalent of a joined cross and crescent. There will be many notables, of both religions, who see the gesture as the “true” nature of Egypt. Journalist and academic Edward Wakin noted similar gestures while traveling in Egypt in 1961, and was not impressed. In his 1963 book, “A Lonely Minority”, Wakin identified with precision a species of humans that he called “Public Copts”. These men and women speak hopefully of religious equality in Egypt, while proclaiming their fealty to the nation in spite of religious discrimination that they deny exists. They insist that symbolic gestures embody the true feelings of the people, while harsh realities are caused by the wayward few. A Public Copt is always available as evidence against any attempt to identify and rectify obvious social ills. A decade after the publication of Wakin’s book there would be further sightings of the Public Copt in the vicinity of the aforementioned bridges. The liberation of East Qantara, where one of the bridges is located, and the capture of an Israeli corps commander were achieved by a capable and daring general named Fouad Aziz Ghali. After the war he further demonstrated administrative ability by supervising the growth of the Southern Sinai into a tourist destination. This exceptional man behaved as a Public Copt by insisting that his promotion demonstrated a lack of religious discrimination in the Army. The evolution of the Public Copt can be traced to the distant past, as illustrated by two other unrelated Ghalis. One Ghali, in the middle of the 19th century, kissed the hand of the Wali who ordered his father’s execution. Another Ghali, Boutros, served the Khedive and the British imperial ruler faithfully, even to the point of losing his life. He must have known what Lord Cromer thought of his fellow Copts: “The principles of strict impartiality on which the Englishman proceeded were foreign to the nature of the Copt. He thought that the Englishman’s justice to the Moslem involved injustice to himself, for he was apt, perhaps unconsciously, to hold that injustice and absence of favoritism to the Copts were well-nigh synonymous terms”. Many factors must have raised the imperial ire in Cromer. Perhaps it was the Copts’ very different Christianity. It could also be that their temerity in asking for equal rights exposed the hollow nature of the “Englishman’s justice” and the entire lie of the imperial scheme. Or that Cromer sighted a Public Copt and proceeded to dislike all others, for the Public Copt’s habit of saying one thing while believing another fed directly into the stereotype of the Copts as devious and crafty, something that Cromer readily accepted. The task of a Public Copt is to praise the granting of crumbs.
The Public Copt is familiar to all from an early age, as the young witness what the adults say in public and in private. Any anger or rage at such behavior is quickly extinguished in the young by the process of acculturation and socialization. It is nurture, not nature, that creates Public Copts. Many currents contribute to the pathology. First there is the simple need to constantly deal with a perennially authoritarian, and often hapless, state. There is also the hope that in stating the perfect outcome as established fact the entire nation will be shamed into reform. Then there is the reality of collective punishment, which is a constant secret sharer of repression. Individual merit will sometimes redound to the benefit of its owner in uncertain measure, but individual error will invariably be held against the entire community. Every Public Copt is aware that honest discourse is not a test of his or her courage, but of the intestinal fortitude to watch others suffer for their frankness. But perhaps the strongest reason for the existence of the Public Copt is the difficulty of the Coptic identity. There are many unattractive aspects to that identity born of centuries of persecution. The Public Copt may wish to underplay that identity, or escape its worst aspects, but will usually find that it claims him anyway. Every Copt who attempts the magic transformation of being more than a Copt will eventually grow to be an old Copt. Anger invariably stalks the Public Copt, born of the frustration of doing exactly what is known not to be effective for fear of worse.
It would be easy to paint the Public Copt as weak and compromising, but it would also be wrong. Ameen Fahim, a Public Copt from the 1980s, explained the issues facing men such as him. “[It is like] an earthenware vessel banging against a bronze vessel“, he told sociologist Sanaa Hasan. Magdy Wahba, another Public Copt, also reminded her of the need “to walk close to the wall“. There are plenty of men who enjoy praise for their public display of courage while cutting weasel deals in private. America provided many such examples in 2017. It is rare to have men who undertake private risks without expecting praise for their courage. Such was the lot of the Public Copt. The public record is sparse, intentionally, but fragments exist nevertheless. Kamal Ramzi Stino, often ridiculed as a Nasser poodle, took many courageous positions in private against a man that all Egyptians feared or worshiped. The same can be said of Fakhri Abdel Nour or Mirrit Ghali, and the full extent of their courage is likely forever lost to us. Occasionally the records survive in scattered public and some private form. Aziz Atiya left the safety of America in 1961 to travel to Egypt, meet Nasser, and ask that his underlings cease attacking the World Council of Churches. The WCC was in no danger from Nasser’s mouthpieces at Sawt Al ‘Arab, but Atiya felt that a connection to world Christianity was important for the Copts and worth the personal risk. There were many Copts, of a more militant attitude, who condemned the Public Copt. One such man was Pope Shenouda, at least during the first quarter of his long public life. For a decade he exposed the sectarianism and hypocrisy of Sadat, who at the time was the darling of the West. Many Public Copts disapproved of the Pope’s attitude, and he of them, but when he went into desert exile on the orders of Sadat all worked hard for his release. Eventually, Shenouda too became a Public Copt, of sorts. If there is a lesson in all that, it is a difficult and complex one. And in any case, it is always necessary to calibrate actions to the times. The benefits of the Public Copt seem to be in great decline in today’s Egypt. That country would be unrecognizable to many of them, and their behavior might be entirely different now. Paradoxically, the path to future freedom and survival may well lie in doing just the opposite of what allowed survival through centuries of oppression.
It is difficult to miss the increasing talk of the need for a “New Copt”. This is especially so among those who were born and raised in the West. This desire is a reflection of the current realities in Egypt, and of the failure of Coptic activism abroad. That enterprise may be necessary but now seems insufficient, as no outsider is able to nudge the Egyptian state into doing its job. The desire for a new reality for the Copts seeps through to us via articles and talks. One hears it expressed above the din of a coffee shop by anxious acquaintances. It is elaborated over long meals by men and women of perceptive minds and sharp senses. It is a heady time; this must be what Vienna felt like in the late 19th century. That analogy should also alert us that sometimes an awakening is a prelude to future horrors. But the desire for a New Copt is fundamentally sound, even if its shape has yet to come into view, and the leaders necessary for the transformation have yet to identify themselves. But those who come to raise a “New Copt” must first bury the “Public Copt”.
— Maged Atiya
“The drive towards proselytism must be arrested once and for all in order to strengthen the churches of the East by a systematic avoidance of separating their sons from their ancient professions” Aziz Atiya, History of Eastern Christianity, 1967
The US recognition of all of Jerusalem as the capital of Israel came exactly one century after the Balfour Declaration and serves as a historic bookend to it. Many applauded the decision, seeing it as the fulfillment of a decades-long Israeli wish and perhaps nothing more than a recognition of the reality of the situation, although reality is remade every day by the powerful. Others were puzzled by its timing and by the lack of concrete gains from what is purely a symbolic move. There is one way to view the decision that renders it perfectly comprehensible. Its timing and language are designed to make the upcoming tour of the region by Vice President Mike Pence a victory lap. Mr. Pence is the highest elected official representing the wing of Christian Evangelism called “Christian Zionism”. Many Eastern Christians view Christian Zionism as a heterodox sect of Protestant Christianity that places its faith in the fulfillment of prophecies and revelations through the material and historic realization of specific signs and events. Chief among these are the return of the Jews to their ancestral home and their complete control over Jerusalem and the lands around it.
The reaction to the decision among Eastern Christians has been largely negative. The Christians of Palestine, a fraction of their size of a century ago, disapproved of the move. Others in the Levant voiced similar disapproval in the midst of an existential crisis arising from the advent of Islamic supremacist movements. In Egypt, home of the largest group of Eastern Christians, the reaction was muted but also negative. Egypt sports a sizable Evangelical community, but the vast majority of Christians are Coptic Orthodox. The patriarch of the Copts, Pope Tawadros II, canceled a meeting with Mr. Pence, who had proclaimed that he would advocate on behalf of the Copts during his stopover in Cairo. As always during times of trouble (and these are troubled times) the Copts place their faith in the inscrutable hand of God over the proclaimed power of men. Many Copts and non-Copts criticized the Pope’s decision. The criticism fell into two broad categories: that the Pope was catering to the Egyptian state and the popular passions, or that he was intellectually captured by the “nationalist discourse” common in Egypt. In short, that the refusal to meet Pence reflected either fear or foolishness. Both arguments fail on closer examination.
We should be wary of attributing fear to those who kept the faith for centuries against great odds. But more importantly, the argument is internally inconsistent. When Copts face death rather than give up their faith, and when their kin forgive the attackers, they are judged as paragons of Christian virtue and courage. When they refuse to accept a hasty decision by a bumbling American administration, they are accused of cowardice. You can’t have it both ways.
The argument against “foolishness” requires more subtlety. Many Christian Zionists insist that support for Israel is part and parcel of support for Eastern Christians, since those who come for the Jews will eventually come for the Christians. This is true; the Islamic supremacists have a habit of mentioning the “Saturday” and “Sunday” people in sequence. But this argument conflates and confuses different things. It conflates secular Zionism (a laudable idea) with religious Zionism (a potentially dangerous one). It conflates the affairs of state (capitals and embassies) with the culture of the people (attachment to land and religion). The intellectual roots of Christian Zionism hark back to the Protestant rediscovery of the Old Testament and the Jewish roots of Christianity. It is perhaps why many Christian Zionists have found an affinity for “the Copts” as a generalization. The Coptic Church, along with its Ethiopian sibling, is the most Jewish of churches, as it never abandoned many ideas that Protestants rediscovered centuries later. There is an apocryphal tale popular among Copts. One version of it runs as follows. An American Evangelist arrives in 1850s Egypt to tell a humble Coptic priest that he brings news of Jesus Christ. The priest responds with “when did you make his acquaintance? We first met him more than 1800 years ago when he visited as a newborn in his mother’s arms”. The sly tale warns against the dangers of Western “Christian-splaining”.
Another variant of the “foolishness” argument insists that the Copts are in no shape to refuse assistance from any quarter, and that Pence’s remonstrations to the Egyptian state should have been honored with an audience with the patriarch. This argument stems from a Western habit of wishing that the Eastern Christian fulfill the role of a vassal rather than a brother. The largest Christian denominations, such as the Catholics and the mainline Protestants, have long abandoned this notion, but it persists in the American bible belt. In any case, any principled argument for freedom of conscience should include the freedom to disagree with political decisions. This is especially true for those with a track record of impulsive actions that proved harmful to many Eastern Christians (cue the Iraq sanctions and invasion).
The entire episode highlights a growing concern that the persecution of Eastern Christians is often a useful cudgel in political arguments. Recent events, especially with the Copts, have provided unforgettable and searing images. There is the image of 21 men kneeling on a beach and silently praying moments before their execution. There is the image of a small altar boy smiling happily in his vestments moments before a suicide bomber doomed him. It would dishonor the victims’ memories if these images were turned into fodder for political agitprop by those eager for a conflict that would leave many more victims behind battle lines. Atiya’s description of the church of which he proclaimed himself a member, a “Coptic Church [that] … had chosen the solitude of its own primitiveness, its peculiar spiritualism, and the rough road of its so-called Monophysitism”, remains remarkably accurate today, even as its seemingly modern sons and daughters spread throughout the world, including the West. They would proudly appropriate the moniker he gave them, “primitive Christians”, meaning that their faith is rooted in a people who kept a “historic steadfastness toward the faith of their forefathers”, while never aligning with worldly power and often existing in opposition to it. Many Protestant Evangelicals have not grasped that essential part of it. In their fervor to achieve secular power, legislative, judicial and executive, American Evangelicals, for example, are the antithesis of the Copts. Earlier this year a delegation of Evangelicals, including representatives of Christian Zionism, met with President Sisi of Egypt, who enjoys the support of Pope Tawadros, and praised him widely. For its part, the Coptic Church avoided the meeting. These are some of the historic reasons why we should not rush to judge the Pope’s refusal to meet with Mr. Pence as a political or cultural capitulation to popular rage or fear of the Egyptian state.
— Maged Atiya
The young American archaeologist and oilman Wendell Phillips was in Cairo to deliver a lecture to the Egyptian Geographic Society on Saturday June 27 1953 about his excavations in Southern Arabia, which sought the historical roots of the Queen of Sheba. While waiting in town he ran another errand. He visited President Mohammad Naguib to hand him a pistol, a gift from President Dwight Eisenhower, with the name of the former Supreme Allied Commander engraved on its handle. The event was widely reported in the Egyptian press. One newspaper, Al Masry of June 26 1953, shows a photograph of President Naguib carefully inspecting the pistol, with the barrel wisely pointing downward. Wendell Phillips stands to his right. Between the two men is another figure, a silver-haired Egyptian academic, a founder of King Farouk University (later Alexandria), named Aziz Suryal Atiya. Atiya, with his signature enigmatic smile, seems to have wandered in from another event. In fact, “Aziz” and “Wendell” had been friends for some time, and within weeks Aziz would make a fateful decision partly on account of his friend. Atiya’s presence was perfectly explainable, as noted by two memos in his handwriting about the event, dated June 25 and June 27 1953 and titled “For forwarding to his Excellency President Mohammad Naguib”. In one of the memos Atiya suggests that Naguib award a medal to the US Librarian of Congress, Luther Evans. In the other, Atiya recommends awarding an oil concession to Phillips and having the revenue flow directly to building Egypt’s power and army outside the regular budget. We do not know if Naguib read the memos, but by the end of 1953 Phillips had given up on getting a concession in the Western desert and looked at possibilities in the Sinai. This was not the first time the two had dealings with Egyptian rulers. In a letter dated July 20 1952, Phillips writes to Atiya informing him that he has sent a handsome leather-bound and gold-edged volume about St Catherine monastery to his Highness King Farouk I. The volume was indeed delivered to Farouk on July 26 1952, a somewhat inopportune day in the life of the Egyptian monarch. The story of the friendship between Wendell Phillips (1921-1975) and Aziz Atiya (1898-1988) is a sidebar to the history of Egypt and America, their close and fraught relationship as lived through two men who remained friends long after their necessary initial collaboration, and after life placed them on unexpected paths.
Max Kutner, in a recent article in the Smithsonian magazine, calls Phillips a real-life Indiana Jones for his work in excavating ancient southern Arabia; the man who “uncovered millennia-old treasures beneath Arabian sands, got rich from oil and died relatively unknown”. The last part was not exactly correct, as Aziz had secured an honorary doctorate for Wendell from the University of Utah shortly before Phillips’ death. In a 1954 review of one of his books the New York Times described him as a “swashbuckling adventurer with the coolness of a gambler and the cunning of a backwoodsman”. Atiya, nearly a generation older, was a historian of Islam before he turned later in life to the study of Eastern Christianity and became one of the founders of “Coptology”, or the study of Egypt’s Christians. The two men came together in the late 1940s in an expedition to microfilm the manuscript collection of the St Catherine monastery in Egypt’s Sinai, which amounted to close to 700,000 documents. Atiya’s interest in the monastery dated back nearly a decade. In 1938 he was a professor at the University of Bonn before having to leave Germany on account of the proclivities of its then rulers. Back in Egypt he followed up on a rumor first heard in Germany about the fabled “Firman rolls” in the monastery of St Catherine. The story of these rolls can serve as the script for a Spielberg sequel, “Indiana Jones and the Ottoman Firmans”. It involves two Germans, Karl Schmidt and Bernhard Moritz, who were chased out of the Sinai at the outbreak of World War I, a lost cache of photographs, an Egyptian in Germany trying to track them down on the eve of World War II, an American adventurer, a reluctant Abbot looking for money to fix his monastery, American officials, Egyptian civil servants, a harrowing transport of electrical generators and photographic equipment up a difficult mountain, and finally the revelation of a cache of over 500 documents in dated and uninterrupted sequence. In this script, Phillips earned the role of the American swashbuckler when at the age of 26 he founded the grandly named “American Foundation for the Study of Man” and offered to assist with photographing the entire collection of the monastery and not just these specific rolls. This was his second venture in Africa, at least if we broadly define the location of the Sinai. His first was a trip from Cairo to Cape Town, shortly after WWII, called “The Africa Expedition”, made possible only because he persuaded Jan Smuts, South Africa’s Prime Minister, to support it. At that time he had no money or degrees, or any discernible qualifications. The same confidence allowed him to take a leadership role in a project he had not previously been associated with and to ask the Library of Congress to fund it. While trying to achieve some fame in archaeology he dabbled in oil leases and eventually became a major oilman with a fortune rumored to be in the hundreds of millions. The Library of Congress agreed to fund the photography effort, after some badgering by Atiya. The Acting Librarian, an icy man named Verner W. Clapp, wrote a precise contract to prevent any filching of monies from the US taxpayers for any purpose beyond the photographing of the monastery texts. Still, the pair found a way to stuff $10,000 into the Abbot’s habit for the repair of the monastery. It was money well spent.
Scholars had long wanted to document the library of the monastery but were rebuffed by the reclusive monks who had survived for 1400 years in a forbidding and often hostile territory. Aziz had earlier secured the friendship of Abbot Porphyrios, which made the expedition possible. The exchanges between the two men, and with Egyptian and American officials, are fascinating. All the grand events of the time are seen entirely through the narrow focus of the scholarly project. In one letter dated June 21 1949, the rector of King Farouk University, Sadek Gohar, apologizes for delays, as the Sinai was turbulent on account of “recent conditions”. In a letter from August 22 1952 Phillips hopes that Atiya “is in no way endangered by the current trend of events in Egypt” before launching into the specifics of the project, informing him that he received an award from the prince of the Comores for his work in Arabia, and expressing disappointment that Egypt had not seen fit to make a similar award to him at that moment. On July 30 1952 Atiya wrote to Phillips that “events have been moving too fast in Egypt during the last few days“. He was optimistic that “We expect from our American friends to support our action in attempting to turn Egypt really into a democratic country. However, I firmly believe that the present condition of things will be even more favorable to our cultural collaboration with America“. A little more than a year later, on January 8 1954, Atiya sounded a note of alarm in telling Phillips’ mother that he could not send her a collection of stamps on account of “censorship“. In fact his disillusionment had come to pass earlier. During a wedding on January 25 1953 a relative asked him when he thought the Army would relinquish power. Atiya flipped over the wedding invitation, pulled a pen from his breast pocket and wrote “July 23 2052”.
One of the letters to Phillips adds confusion to the history of Atiya’s purge from Egyptian academia. On July 15 1953 he writes to Phillips that he “resigned without regret” from his position, in protest over the lack of recognition given to both of them by the University with regard to the St Catherine expedition. In reality, according to both Atiya and others familiar with the events, his position had been growing increasingly tenuous since the Free Officers adopted the educational reforms recommended by Sayyid Qutb, and especially since his mentor Taha Hussein was eased out of running higher education in the country. It is possible that Atiya, sensing the upcoming purge, simply beat his tormentors to the door, and while at it took a firm stand for his friend. Either way, in a letter to Wendell dated January 8 1954 he declared himself “a free man“. It was a watershed year for both men. Aziz, at 55, was headed for America and greater recognition in the next 35 years of his life. Wendell was meanwhile accumulating wealth rapidly from his oil leases, and spending more time in harsh climates pursuing mythical kingdoms and occasionally uncovering fabulous objects.
The St Catherine microfilming project was largely completed by 1951. On March 19 1951 Atiya delivered a lecture on the “Arabic Treasures” of the monastery at the Library of Congress. He later acknowledged that the effort was critical to his turn to the study of Eastern Christianity and its close interactions with Islam. The documents paint a nuanced and complex picture of the early co-existence between Islam and Christianity, and of the relationship between the Eastern and Western branches of the religion. In his classic work “The History of Eastern Christianity”, published in 1967, he proposes that “the general history of Christianity will have to be rewritten to incorporate the monumental and sometimes turbulent contributions of the Copts [and Eastern Christians]“. For his part, Wendell went on to excavate in present-day Yemen and Oman. With an eye toward value, and having gained the respect of the local rulers, he obtained valuable concessions for oil exploration. Phillips seemed to lack a gene for fatigue. He talked his way out of many troubles and drove himself relentlessly. Later in life Atiya credited Phillips with the kind of restless energy that made practical plans out of scholarly pursuits, such as sending electrical generators up a mountain, to be followed by a host of American scholars, including some who were refugees from Nazi Germany.
The letters between the two men paint a growing friendship and affection, even if neither man was emotionally demonstrative and both had reasons to be circumspect about what to put on paper. The letters are a window on their times and souls. Both men made their home bases in the American West, specifically Utah for Atiya and Hawaii for Phillips, but traveled incessantly. Their correspondence was sometimes delayed or made haphazard by their peripatetic natures. The last and most touching exchange was dated April 8 1974 and written by Aziz in Salt Lake City. He begins by saying “Last night I saw you in a dream. You seem to have lost weight but gained enormous funds”, before asking him to fund a faculty position in his name in Arabic studies. That same night, thousands of miles away in Honolulu, Phillips was struck by a heart attack and a stroke, one of a series that left him wasting and eventually dead within 18 months. Wendell had a way of sharing important events with Atiya in an off-handed manner that nevertheless seemed to demand attention, even affection. In a letter dated May 20 1969 (the same month Aziz was in Egypt tending to his dying mother-in-law) Wendell writes of his growing friendship with President Suharto of Indonesia (he was eventually awarded huge concessions there). The note is on the letterhead of the Kingdom of Oman and its Sultan, Said bin Taimur, where Wendell is listed as an “economic advisor and representative”. Toward the end of the letter Wendell confesses to what troubles him. “I believe I told you that Shirley [his wife] became quite ill and it was decided by the doctors that it was better to dissolve our marriage”. There was more bad news. Wendell was close to the Sultan’s son (and current Sultan), Qaboos, and was perhaps more than a witness to the insurgency, especially since he did excavations in Dhofar, the heartland of the fight. That made him “unable to come to Cairo as I am not sure how popular I am with certain individuals in that part of the world”. He had previously informed Aziz of his marriage in a letter of November 24 1968 in a casual way: “The second day after my marriage, I was hit in an auto accident and had my back broken in three places”. He continued to travel and followed up on July 2 1969 to inform Aziz that he had become close friends with Sheikh Zaid of Abu Dhabi, in addition to his relationship with Oman. Phillips’ association with Oman started in the 1950s, and culminated in a book, “The Unknown Oman”, in 1966. That was the year he began to use the Sultan’s letterhead as his own, a practice that ended only after his friend Qaboos deposed his father on July 23 1970. A letter to Aziz dated August 31 1970, written by Phillips’ assistant, is uncharacteristically evasive about his general direction, except that he was heading to Korea, where he obtained a concession in September 1970. What is notable about the letterhead is that it is titled “Wendell Phillips Oil Company”, but oddly enough still uses the logo of the Kingdom of Oman. Perhaps there was too little time to design new stationery. Later that year, Phillips told the Guardian “I am not a businessman, although I employ many of these. I am an archaeologist”. At that point he owned some of the largest oil field concessions in the world, on three continents. Yet he seemed envious of Atiya’s increased prominence, asking him for copies of “The History of Eastern Christianity” and for help on an upcoming book, “Adventurer meets Jesus and the Koran”.
Aziz took an almost parental delight in the adventures of Wendell, at times praising his friend in correspondence with Sunshine Phillips, Wendell’s mother. Aziz had the tact not to ask Wendell about his mysterious absences or the reasons for his zigzag trips. The letters were direct and familiar, and more than a few times he mentions views and even emotions that he generally kept for those closest to him. In a letter dated August 11 1970 he asks Wendell whether he is still on friendly terms with Qaboos, who had recently deposed his father, and what the change might mean for his concessions. In the same letter he lets slip that he now has “three American Grandchildren”, a subtle hint about how Aziz viewed himself, immigration, and the assimilation of his own immediate family. Taken as a whole the letters seem to be a conspiracy of two against the wider world. If the two men contrasted sharply, they also shared at least one trait. Each man outgrew early provincial roots with a passionate desire to see the wider world and transcend any narrow identity. Both men seemed to regard the entire world as their home, with every culture as fair game for study, absorption and even appropriation. Yet both remained at heart paradigms of their roots: the fast-talking American and the bookish Copt; Indiana Jones and the Coptologist.
We must also note a tragic coda to this tale. Almost at the moment this post was written news came of a horrific attack on a mosque in the Sinai by terrorists. The various places where these two men once studied now seem to be the heartland of this brand of senseless violence. Both men knew Islam well, and their knowledge brought them to respect it as a religion and value its cultural heritage. Atiya’s lectures on Islam in Utah attracted a decent following, including many Muslims who later confessed to the value of these lectures. Phillips’ adventures in Arabia may have been motivated in some part by his oil business, but he was also a genuine student of the Islamic and pre-Islamic cultures there. It is tempting, but wrong, to see the descent into violence in these places as a rebuke to the legacy of such men. It is better to remind ourselves that the progress of culture and the love of knowledge are the most potent antidotes to the nihilism that powers ignorant men.
— Maged Atiya
Sometime in late 1872 or early 1873 the 14-year-old Theodore Roosevelt, future President of the United States, visited Egypt. Later in life he blurted out in his diary: “How I gazed on Egypt. It was the land of my dreams; Egypt, the most ancient of all countries! A land that was old when Rome was bright, was old when Babylon was in its glory, was old when Troy was taken! It was a sight to awaken a thousand thoughts, and it did.” The precocious boy exhibits a certainty about what Egypt is, an attitude shared by outsiders, then as now. Two decades after Roosevelt’s visit outsiders (mostly) brought forth the great age of museums in Egypt, with four of them built in the span of twenty years. First to be established was the Egyptian Museum; the plaque atop it lists the great men of Egyptology, all of them European. The items within would whet the appetite of every Teddy, and cuttingly remind Egyptians of how unworthy they had become of their ancestors. Then came the museum of “Arab” (actually Islamic) art. It was also built by Europeans, of a different stripe: romantics who saw in Islam the exotic and the “other”. Then came the museum of Greco-Roman art in Alexandria. Again it was built by Europeans, of yet a third kind, eager to cement their claim to the city by attaching it firmly to the southern end of Europe. The last, and the most modest, was unique in that it was started by a native Egyptian, a bulldozer of a man and a Copt. The man was Murqus Pasha Simaika (1864-1944), and the museum was dedicated to “Coptic Archaeology”. It was an odd designation given that the Copts were not dead, and in fact were very much on the rebound at that time. Well into the 1970s Egyptians referred to the museum as “Mat7af Murqus Basha Simaika”, or the museum of Murqus Pasha Simaika. Simaika was not a scholar, but a mover and a shaker, an able administrator and a dogged collector. His efforts lit a spark under the field of Coptology, with reverberations that echo to this day. He also fought in the trenches of the communal struggles from the 1870s into the 1940s. He was not a man of letters, and his opinions often changed, but by his actions he set markers for Coptic identity that others continually sought to support or refute. It is not that he settled the question of “What is a Copt?”, but that he raised the question in the first place, without even meaning to do so.
The years after his visit to Egypt were kind to Theodore Roosevelt. He went from honor to greater honor until he reached the pinnacle of power as President of the United States, ending his term in 1909. In his first year away from power he traveled the world and visited Egypt. He gave a memorable speech denouncing the assassination of Prime Minister Boutros Ghali, a Copt, and advising the Egyptians that “the training of a nation to fit itself successfully to fulfill the duties of self-government is a matter, not of a decade or two, but of generations”. Grateful Copts whisked him away to visit the recently established Coptic museum where Murqus Pasha was his guide. The Simaika and Roosevelt families were equally ancient. In the middle of the 17th century the Simaika family was among the most powerful Coptic notables, at the same time that the Roosevelts traveled to New York to become landed gentry. It must be said that the artifacts in the museum fail to answer with complete certainty the question of “What is a Copt?”, since many predate Christianity and appear decidedly both Coptic and Hellenic, while others are medieval and appear both Coptic and Islamic. In a further swirl of identities and accidents, we know that this was not the last interaction between the Roosevelts and the Simaikas. Farid Simaika, the nephew of Murqus Pasha, and an Olympic diver, was inducted into the US Army air corps under a special program set up by President Franklin Roosevelt. He had recently become an American. He volunteered for a highly dangerous spying mission to the South Pacific where his airplane was shot
down. It is surmised with near certainty that he was beheaded by the Japanese forces. He is believed to be the first, and perhaps the only, Copt to be awarded the Distinguished Flying Cross. It is a sobering reflection on where America was then that Farid was able to marry an American woman only after a local California court ruled that “Egyptians belonged to the Hamitic and Semitic branch of the Caucasian race”. The court expressed a certitude about Egyptians, and by implication Copts, that they themselves lack even today.
Murqus Pasha stood astride many divides among the Copts. There was the divide between the laity and the Church over how to reform and modernize the community. There was also the divide between the landed aristocracy and self-made new men. But perhaps most critically there was an identity divide. Should the Copts attach themselves to ancient Egypt, as the “true sons of the Pharaohs”, or hew to a Christian identity? How much of the Copts’ identity is tied to Egypt’s ancient history and how much is a product of their Christianity? Murqus Pasha was a bold and forceful man; he lacked what Stanley Lane-Poole insisted Copts possess, “the vices of servitude”. Yet it is possible to find in his life and actions clear evidence that he was on all sides of those divides. It is perhaps his great contradictions, as well as his great actions, that make him worthy of study, especially in our current times.
A chronicler and molder of Egyptian and Coptic identity, Mirrit Boutros Ghali, wrote the obituary of Murqus Pasha. It was a fitting choice, as Ghali had become a prominent archaeologist by that time, and Murqus had been a friend of both his paternal and maternal grandfathers, as well as a grateful recipient of the assistance of Mirrit’s mother. In an entry in the Coptic Encyclopedia written four decades later he quotes from Simaika’s unpublished memoirs, which were kept privately by Murqus’s son Youssef. It was always hoped that a full accounting of them would be made public. A new account of Murqus Pasha and his times based on these memoirs has now been published in English by AUC Press, written by the Pasha’s grandson, the eminent gynecologist Samir Mahfouz Simaika, with Nevine Henein. This follows an earlier publication of a similar volume by the Farid Atiya Press. Samir Simaika is also the grandson of Naguib Mahfouz, the famous Coptic gynecologist, after whom the Nobel Prize-winning novelist Naguib Mahfouz, a Muslim, was named, in recognition of the doctor who made his life possible after a difficult birth. Islamists would always hold Mahfouz’s name against him, and late in his life attempted to assassinate him for it. We should also note that the editor of the Encyclopedia, Aziz Atiya, who was Farid Atiya’s uncle and this blogger’s adoptive grandfather, was inspired to attempt his monumental work late in life by the example of men such as Simaika. So much of the focus on Egypt today centers on the roles of the military and Islamists, but those who wish to read Egypt beyond the doleful reality of power and prejudice will find rare treasures in this book, even if it is a difficult dig.
The book is divided neatly into four parts that tell of Simaika’s upbringing, his services to his nation, his services to his fellow Copts, and finally his efforts to establish and grow his museum. These correspond roughly to the divides mentioned earlier. A curious fact about both books is that they use the Latinized version of Simaika’s name, Marcus, rather than the pronunciation favored by Egyptians, Murqus, thus banishing the harsh Semitic Qoph. It is possible that the Pasha would have approved of this. In official photographs the old Copt seems pleased as his chest proudly displays the multitude of medals and accolades bestowed on him by kings and potentates from various countries. A 1923 photograph of the Simaikas looks remarkably like that of a European aristocratic family. The memoirs of Marcus display an easy familiarity with the top colonial and Egyptian officials, as well as many eminent scholars of the time, such as Alfred Butler, Somers Clarke, Josef Strzygowski, and Ugo de Villard. But the old Copt within him chafed underneath the charming veneer of a man of the modern world, and occasionally he would lash out in resentment. He confronted Sa’ad Zaghlul over the matter of teaching only Muslim religious thought in schools, and Zaghlul, who favored the word “uskut”, or “shut up”, in debates, gave in. He was angry with multiple British officials for sowing seeds of dissent and for their general run-of-the-mill condescension. After all, the Pasha came from a family of Coptic notables accustomed to respect for centuries. Throughout his life, and in quoted passages from his memoirs, he promoted a vision of Egyptian identity that stands beyond religion, only to be faced with ugly realities at every turn. He attended the funeral of Prime Minister Boutros Pasha Ghali after his assassination, but could recall with precision the “praise” bestowed by the Sheikh of Al Azhar on Ghali: “this Copt did more for his country than many Muslims”. The sense of anger, coiled beneath a requisite surface of amity, must feel familiar to many Copts. When aroused, the anger can take on unhappy forms. In a speech regarding the dispute with the Ethiopian Church over the ownership of Deir Al Sultan in Jerusalem (still ongoing a century later), he notes that “after each incident … the repenting Ethiopians came back tearfully begging to be allowed to stay, and the Copts taking pity on them and considering them as their brothers in faith always pardoned them ..”. It is expected of ambitious men to stand up for themselves, unless they are Copts. Marcus Pasha was advised by a more traditional Coptic politician, Youssef Wahba, to turn it down a notch: “when you want something …you seem to carry a stick in one hand and a knife in the other”. The quotations in the book leave no doubt that Marcus Pasha was shadowed by anger. In the preface his grandson notes that unlike many other Coptic grandees he never turned his back on his people, or ignored their needs, after he achieved wider fame. That is exactly true of Simaika: he remained a passionate Copt, fully engaged in the affairs of the community. His greatest battles were with other Copts, usually the clerical hierarchy. A dynamic man in a time of rapid social change could not possibly avoid that predicament. It is not so much that he was a bridge between generations, but that he was a familiar and oft-repeated note in an endless fugue.
The Pasha was not an easy man, and he clashed with many of his contemporaries, especially the prelates of the Coptic Church. The book bills him as the “Father of Coptic Archaeology”, a richly deserved honor. The title of “Founder of Coptology” should be reserved for the intellectual Claudius Labib (1868-1918). He, and his son Pahor (who directed the Coptic Museum after Simaika’s death), tried and failed to revive Coptic as a spoken language, something all other Coptologists shied away from, preferring Arabic, English, French and German as their favored tongues. But Simaika should be counted as one of Coptology’s early founders and a prototype for many of its subsequent followers, even if he was more a man of power than of scholarship. His contemporary in that work, Prince Omar Toussoun, deserves equal honors. The Prince, a descendant of Muhammad Ali through both of his parents, was an accomplished scholar who studied the geography and history of Egypt. Although a Muslim, he too is a father of Coptology. The book features a rare photograph of the two of them at the Coptic Museum in 1942, a few years before both would pass away. By that time these two men were already passing the baton to a new generation of Coptologists cut from a different cloth, but with equal or greater ambitions. These men shared a curious feature. All would make major contributions to the revival of Coptic culture while denying any thought that there is a “Coptic nation”. Most saw the contradiction between their actions and words (as indeed did Simaika) but perhaps felt it was the price of gaining agency in a world beyond their control.
The book features many anecdotes so familiar that they seem apocryphal. There is the story of the strong-willed Marcus defying his father, who wanted him to be a priest, learning English and venturing out into the wider world. He was not the first Copt to do so, as many a Boutros, Murqus and Salama would try to transcend and outgrow their Coptic identity. The older man, made wiser by the buffeting of the world, returns to serve his people in ways far more important than those of a mere parish priest. This is a familiar story of many “founders”, whether they were secular Zionists who rejected the rabbinical ways of their families, or Brahmin Hindus who adopted the manners and language of the British they both loved and resented. There is a hesitant uncertainty about the world made by Western culture. The arms embrace it but the eyes betray a suspicion of it. In the case of Marcus Pasha the ironies and ambiguities loop on each other. He went to a school founded by Pope Kyrillos IV, the “Father of Reform”, which was open to Muslims and Copts alike, at a time when Copts could not attend state schools. The English Church Mission Society (CMS) made the Pope’s task easier, but he was an iconoclastic man, both figuratively and literally. Marcus the ambitious young man must have appreciated the “figurative” part, while the older Marcus, an art collector, resented the senseless destruction brought on by this Pope. Admiration and censoriousness have a common heart.
The foundation myth of the Coptic Museum, even if true, is also a familiar one. Marcus Pasha sees Pope Kyrillos V, the man he battled for years, about to melt ancient and beautiful silver bowls. He snatches them from his hands, and with these as the first artifacts builds the museum dedicated to the history and culture of the Copts. Since then the story has been repeated and retouched by many a Coptologist. Ragheb Moftah documented Coptic sacred music with Western musical notation to save it from the mouths of ignorant priests who mumbled it without understanding. Aziz Atiya would not let Pope Shenouda have a final say on the editors of the Coptic Encyclopedia lest it become a uselessly hagiographic paean. These stories, all true, share a common theme. The determined scholar, eager to use the tools of Western knowledge to serve “his” people, must face down the entrenched and sometimes ignorant official Church. The reality also contains additional notes. The majority of Copts at the time likely supported Kyrillos V and Shenouda III, and viewed these men as “fathers” necessary for their survival. Whatever these scholars did to guarantee the cultural survival of the common folk was likely to be under-appreciated by the beneficiaries. There were more than a few shades of gray to all the confrontations. Samir Simaika notes the difficulty of collecting old Coptic sacramental artifacts, since any item anointed by chrism must be destroyed once unusable lest it fall into profane hands. There is an echo of this in the tale of the Cairo Geniza records. European, or “advanced”, Jews wrested these documents from their rightful owners, the Egyptian or “backward” Jews, and sent them to Europe and the US for preservation and study. The act is either a perfidious theft or a heroic effort that documents the ways of a people now literally extinct. Individuals often pay a heavy price for communal reform. Conventional morality is a confused waif when it comes to the difficult work of preserving and building a nation’s culture.
Marcus was elected to the newly created Al Majlis Al Mili, or “Community Council”, at the tender age of 25, and for decades he was one of its most notable voices. As befitting a man of his temperament, his positions and views were unambiguous, until they changed. He favored the primacy of the lay Copts over the clerical hierarchy in running the affairs of the community, yet he paid homage to the very same bishops to pry items from their monasteries and churches. He favored exiling Pope Kyrillos V, and also bringing him back with honors. Many a man cut in Marcus’ mold would bend down and kiss the hand of a Bishop or a Pope that he believed to be an uneducated rube. The men in his party found themselves in paradoxical situations. The Church had been the backbone of the Copts for centuries, and the common folk loved Christ and their Church even while occasionally disapproving of the behavior of the men in black. But the Copts must be beaten out of these views if they are to be whipped into shape and made fit for the modern world, or so thought many men like Simaika. The century-long battle now seems to have been decided in favor of the Church, perhaps. The Church was reformed from within, by laymen who joined its ranks. The Coptic notables seem to have largely disappeared, victims of the various “isms” that haunt Egypt today. But listening closely one can hear the opening salvo of a renewal of that struggle. The old notables like the Simaika family, born and bred to serve Egypt’s despots, are gone, but new notables made of a different stock are coming on the scene. These are the figurative descendants of Marcus Pasha’s nephew Farid: Copts born outside Egypt and sometimes less than fully acquainted with its realities, but with an entirely different sense of entitlements and expectations. They expect the world to respect their individual and personal rights, they expect the state to serve them and not the other way around, and they expect the Church to minister to their spiritual needs but not to control their views and actions. These new notables are eager to belong and serve, but under a new compact. The shape of the future struggle, or even whether there will be one, is still unknown. Recently this author found himself in an audience with Pope Tawadros II and a number of young women. They were all Copts, most were not Egyptian, and a few were not even of Egyptian stock. They could just as easily have been in an audience with Oprah as with the Patriarch of an ancient Church. He listened to them with a great deal of fatherly love and some incomprehension. What came to heart were the twin feelings that underpin most religious experiences: hope and dread.
Marcus Simaika spent the last decades of his life collecting Coptic artifacts and building up his museum. The book is rich in telling details. He was not a man to take “No” or even “Yes” for an answer. He insisted on “Yes, Now!” (“Whenever I heard of some object worthy of being added to our collection, I began my attack. I never despaired if refused once … and obtained it when the possessor became tired of my visits”). There is a comic underside to such a man in Egypt, for “now” among the Egyptians often stretched to years, or to never. There was also a tragic underside. His searches proved beyond doubt that much of Coptic heritage was destroyed in the Mamluk pogroms of the 13th and 14th centuries. As his collection grew the state became interested in it, less because it supported Coptic culture than because it wished to appear solicitous of the welfare of the Copts, especially to outsiders. Marcus Pasha did nothing to expose the condescending sneer behind the smiling facades. In this manner he was a model for the men who followed him. Most ignored the painful realities that touched them in favor of a distant vision of a better country. Aziz Atiya, who was hounded out of his university professorship by Islamists, would later write that “Copts enjoy full citizenship rights in Egypt today”. Mirrit Ghali would serve the Free Officers as minister (briefly) and diplomat, even after he was certain they would destroy his vision of a genuinely liberal Egypt. Pope Tawadros II insists that Copts can trust their safety to the state, even as policemen watch idly while mobs ransack Coptic properties in Minya. A sympathetic American asked “Why do Copts do that?”, stopping short of repeating Lane-Poole’s charge. We can only look in vain for an answer among Marcus Simaika’s words. He was a nominal supporter of Lutfi El-Sayed’s brand of Egyptian nationalism, which time has shown to be inimical to the interests of the Copts, even as it developed an ideological framework for the violent suppression of Islamists. Yet he, and the majority of Coptic public men, remained faithful to it. Simaika, while building up the Christian portion of the Coptic identity, insisted that Copts attach themselves to the ancient Egyptian heritage. This seeming contradiction persists, even within the Church, where Egyptian nationalism has attached itself to theology, as a barnacle would to a magnificent ship. The Copts are full-fledged members of the fraternity of reviled minorities, yet have struck out differently from others. Unlike the Jews and the Kurds, for example, they never sought a geographic state fortified behind secure walls. And unlike the Christians of the Levant, they never sought communally based representation, nor attempted to secure special rights. Most even reject the label “minority”, a triumph of aspiration over arithmetic. These stands might be a product of nearly two centuries of sacralization of Egypt and a belief in its exceptionalism, or simply a realistic approach favoring the possible over the desirable. But whatever the reasons, these views have become problematic, and might set up new communal struggles as the percentage of Copts born outside Egypt grows.
For all its rewards, one can come to the last few pages of a book about a man who collected and preserved Coptic heritage without a satisfactory answer to “What is a Copt?”. For that we must look inward. A simple tribal definition that draws boundaries, defining who is in and who is out, seems unsatisfactory. If any attempt at preserving cultural identity is to succeed it must account for change and allow for a constant redefinition of that identity by future generations. No culture can thrive behind high walls, and no wall is high enough to protect and contain a thriving culture. What might work is a series of concentric definitions radiating outward. There are those born into the Coptic identity, then there are those who wish to join it. Others might earn a place of honor by their understanding and support. Still others might look at the trials and triumphs of the Copts and respect them as a retelling of the larger human condition. They are all Copts, and Copts would do well to embrace them without fear of dilution or loss of identity.
— Maged Atiya
Chlamydophilae are obligate intracellular parasites incapable of generating their own energy source; they depend on the host cell's adenosine triphosphate (ATP) for energy. Like bacteria, they can be eliminated by broad-spectrum antibiotics, but like viruses, they require living cells for multiplication. Chlamydophilae were once thought to be large viruses rather than bacteria because of their small size and dependence on the host cell. Some scientists suggested that Chlamydophilae, as 'primitive' bacteria, occupied the evolutionary space between traditional viruses and bacteria.
There are two distinct forms in the life cycle of Chlamydophilae: the intracellular form and the infective form. In the intracellular form, the Chlamydophilae reproduce within the host cell, filling it with as many organisms as it will hold; this occurs 35-40 hours after infection. Once full of Chlamydophilae organisms, the weakened cell bursts and dies. At this point the Chlamydophilae enter the infective form as they seek more host cells to invade. In this form, the organisms are especially resistant and are transmitted to other hosts.
The name Chlamydia derives from the Greek word "chlamys", denoting the cloak-like mantle worn by men in Ancient Greece. The name was given to Chlamydiae organisms because they were believed to be intracellular protozoan pathogens that cloaked the nucleus of an infected cell. Scientists have since discovered that Chlamydiae are prokaryotic organisms, and that what appeared to be a cloak was a cytoplasmic vesicle with countless individual organisms inside. Chlamydia trachomatis, the best-known Chlamydia species, is the leading cause of preventable blindness in the developing world; in the developed world, it is a sexually transmitted disease responsible for infertility and pelvic inflammatory disease in women.
What are two significant examples of Jewish resistance to the Holocaust?
The Jewish people resisted the Holocaust in two different ways: active resistance and passive resistance.
Active Resistance: Active resistance occurred throughout Germany and the German-occupied countries when Jewish citizens fought back against the Nazis.
- 1943 Uprising at Treblinka-- young men forged keys to sneak into the camp's arms storeroom. The inmates distributed weapons and grenades. The attack cost the lives of 1,500 prisoners, but it disrupted gassing operations at the camp for more than a month.
- 1943 Warsaw ghetto-- men and women revolted against the SS when it became clear that the Nazis were deporting the ghetto inhabitants to Treblinka.
Passive Resistance: Passive resistance also occurred; although not directly confrontational, it made a major impact in saving Jewish lives.
- Leaving and Hiding-- Many German-born Jews protested their treatment by leaving Germany before tensions escalated. Albert Einstein feared the direction in which Germany was headed and left for the United States. Other families, like the Franks, made famous by The Diary of Anne Frank, had friends who helped them hide from the Nazis.
- Sabotage-- Many of the imprisoned Jewish men in the concentration camp work programs chose to sabotage the machinery and equipment that they were forced to work on. Mike Jacobs, in his book Holocaust Survivor, recounts how he lied about being a machinist so he could sabotage the steering mechanisms on German aircraft when he was imprisoned in their work camp.
Mitochondria—created to energize us
General diagram of a mitochondrion. Mitochondria vary in size and shape, from nearly spherical to long threadlike filaments.
Mitochondria are small, membrane-bound organelles serving as energy generators in eukaryotic cells. Most cells have hundreds to thousands of them, depending on their energy needs. Mitochondria are very good at what they do—they generate about 95% of a cell’s energy in the form of adenosine triphosphate (ATP) by oxidizing pyruvate (the end-product of glycolysis) to CO2 and water. They are ovoid to filamentous in shape, generally ranging from one to seven micrometers in length (about the same size and shape as small bacteria). Since the discovery that mitochondria possess their own DNA, it has frequently been theorized that mitochondria evolved from ancient bacteria ingested by larger cells. This is known as the ‘endosymbiont theory’ of mitochondrial origin. Sometimes it is stated boldly:
‘More than a billion years ago, aerobic bacteria colonized primordial eukaryotic cells that lacked the ability to use oxygen metabolically. A symbiotic relationship developed and became permanent. The bacteria evolved into mitochondria, thus endowing the host cells with aerobic metabolism, a much more efficient way to produce energy than anaerobic glycolysis.’1
Sometimes it is stated more cautiously:
‘In the endosymbiont theory, the ancestor of the eukaryotic cell (we will call this organism a protoeukaryote) is presumed to have been a large, anaerobic, heterotrophic prokaryote that obtained its energy by a glycolytic pathway. Unlike present-day bacteria, this organism had the ability to take up particulate matter … . The endosymbiont theory postulates that a condition arose in which a large, particularly complex, anaerobic prokaryote took up a small aerobic prokaryote into its cytoplasm and retained it in a permanent state [emphasis added].’2
Whichever way it is stated, it is given an aura of authority and certainty by its frequent repetition in writings on cell biology. Many students find it convincing. However, like many evolutionary ideas, it may look solid from a distance, but gaps appear on close scrutiny.
The evidence for the endosymbiont theory revolves around selected similarities between mitochondria and bacteria, especially the DNA ring structure. However, these similarities do not prove an evolutionary relationship. There is no clear pathway from any one kind of bacteria to mitochondria, although several types of bacteria share isolated points of similarity. Indeed, the scattered nature of these similarities has left plenty of room for a less-publicized ‘direct evolution’ theory of mitochondrial origin, in which they never had any free-living stage.3 There is enough diversity among the mitochondria of protozoa to make evolutionists wonder if the endosymbiotic origin of mitochondria occurred more than once.4
The endosymbiont theory implies that there should be considerable autonomy for mitochondria. This is not the case. Mitochondria are far from self-sufficient even in their DNA, which is their most autonomous feature. Mitochondria actually have most of their proteins coded by nuclear genes, including their DNA synthesis enzymes. For example, human mitochondria have 83 proteins, but only 13 are coded by mtDNA (mitochondrial DNA). Even those proteins which are coded by mtDNA often have large subunits that are coded by nuclear DNA. These nuclear-coded mitochondrial proteins must be labelled and transferred from the cytoplasm across two membranes. This intricate, hand-in-glove working between mtDNA and nuclear DNA presents a major difficulty for evolutionists. They have yet to propose a reasonable mechanism by which so many genes could be transferred intact (along with appropriate labelling and control mechanisms) to the nucleus.
Plants and other ‘lower creatures’ may have more mitochondrial genes than the higher animals do, but they still fall far short of the number necessary for free-living existence. Plants have also been found to have much more non-coding mtDNA than the ‘higher’ animals. Referred to as ‘junk DNA’ by evolutionists, it is held to have been eliminated by evolution from the mitochondrial genomes of the higher animals, to the point that humans have virtually no non-coding mtDNA. Evolution seems to be remarkably unpredictable in its handling of ‘junk DNA’, allowing it to accumulate ‘haphazardly’ in the nuclear DNA of higher animals and man, but ‘efficiently’ eliminating it from mtDNA. It doesn’t seem reasonable for evolutionists to have it both ways.
There are more important differences between mtDNA and nuclear or prokaryotic DNA. The main one is that the genetic code for mtDNA differs from the standard DNA code in slight but significant ways. Why? Evolutionists make much of the universality of the genetic code, saying that it offers strong support for common descent of all living things. If this is true—if the code is so highly conserved in evolution through over a billion years and millions of species—then even a few exceptions to the rule are hard to explain. (On the other hand, from a design standpoint the answer may lie in the simpler protein synthetic machinery served by mtDNA, which uses fewer tRNAs, and is less specific in codon recognition.) Lack of introns is another important difference. The ‘higher’ mtDNA has no introns, whereas nuclear DNA and some ‘lower’ mtDNA do have them. Again, the bacteria from which mitochondria are supposed to have evolved also lack introns. Thus, we’re asked to believe that the pre-mitochondrial bacteria sporadically evolved introns as they became ‘primitive’ mitochondria, and then lost them again as eukaryotic evolution ensued. As evolutionists grapple with the biochemical details, the endosymbiont theory becomes more and more cumbersome and vague.5
As alluded to earlier, mitochondrial numbers are controlled within each cell by energy needs. They can also travel within cells on cytoskeletal microtubule ‘rails’ to wherever energy is needed (near the ribosomes in pancreatic zymogen cells, near the proton pumps in gastric acid-secreting cells, etc.).6 This complex intracellular control is highlighted by a common pathological abnormality in which certain body cells become bloated by an oversupply of mitochondria. These cells, known to medicine as ‘oncocytes’, are packed with malformed or malfunctioning mitochondria, in which various mutations have been detected.7,8 Also, when mutated mitochondria derived from a maternal oocyte populate all of the body’s cells, the results can be devastating. A whole spectrum of degenerative multisystem diseases associated with mitochondrial mutations has been described recently, with more being discovered.9,10 Such diseases tend to affect tissues most heavily dependent on aerobic metabolism, such as neural and muscular tissue. These observable phenomena underscore the harsh reality that random changes in mitochondria or microbes do not produce complex new structures and regulatory systems, but rather disease and death.
It should also be pointed out that the engulfing of bacteria by larger cells is one of the commonest phenomena in nature, happening countless times each hour. Yet, nothing really like the formation of mitochondria has ever been observed. There may be rare modern examples of endosymbiosis between two different types of cells, such as the Chlorella algae within ‘green’ paramecia. Also, infecting or parasitic microbes can persist for a time inside of larger host cells due to encapsulation or other protective factors. Still, these events are far from the radical biotransformation demanded by the endosymbiont theory, and no one untainted by evolutionary preconceptions would ever dream of classifying mitochondria as once-separate life forms, as some evolutionists have suggested. It is essentially an ‘evolutionary miracle’, assumed to have happened in the past, but never seen or duplicated in the present.
Furthermore, if we accept this ‘naturalistic miracle’ of mitochondrial origin we are forced to conclude that the same miracle happened repeatedly. Evolutionists also postulate an endosymbiotic origin for chloroplasts, the organelles of photosynthesis in higher plants. Chloroplasts have their own DNA, once again with a ring structure. They are similar in some respects to present-day photosynthetic bacteria. However, because of biochemical variety among chloroplasts (like the mitochondria), evolutionists are once again forced toward the unlikely conclusion that their endosymbiotic origin occurred more than once!
‘According to this endosymbiont hypothesis, eucaryotic cells started out as anaerobic cells without mitochondria or chloroplasts and then established a stable endosymbiotic relationship with a bacterium, whose oxidative phosphorylation system they subverted to their own use … . Plant and algal chloroplasts seem to have been derived later from an endocytic event involving an oxygen-evolving photosynthetic bacterium. In order to explain the different pigments and properties found in the chloroplasts of present-day higher plants and algae, it is usually assumed that at least three different events of this kind occurred [emphasis added].’11
Although it is correctly admitted here that the endosymbiont scenario is actually only a hypothesis, it is presented as the only possibility. However, as shown above, the fine print admits that assumption and speculation are major components of this idea.
Why do mitochondria and chloroplasts have their own DNA? Evolutionists believe that it is a source of cellular inefficiency, and that evolution has been slowly phasing out cytoplasmic DNA over time. (This raises the obvious question of why there is any mtDNA left at all, to which the evolutionary response is that the process of elimination is either incomplete or arrested.) However, viewing mtDNA as inefficient may just be a reflection of our own ignorance of the fine details of mitochondrial function. Deeper knowledge may show that manufacture of certain mitochondrial protein subunits ‘on-site’ is very efficient, just as the energy-harnessing chemistry of the mitochondrial enzymes has been shown to be.
Given the enormous leaps of biochemical and genetic integration which are demanded by the endosymbiont theory, creationist skepticism is entirely justified. There is no compelling reason to believe it unless one has already decided that evolution is true. The creationist model, holding that structures may look similar because they were designed to do similar jobs, is a more reasonable way to view the miracle of mitochondria.
- DiMauro, S. and Schon, E., Mitochondrial respiratory-chain diseases, New England Journal of Medicine 348:2656, 2003. Return to text.
- Karp, G., Cell Biology, 2nd edition, McGraw-Hill, New York, p. 773, 1984. Return to text.
- Karp, ref. 2, p. 775. Return to text.
- Alberts et al., Molecular Biology of the Cell, 3rd edition, Garland Publishing Inc., New York, p. 715, 1994. Return to text.
- Alberts et al., ref. 4, pp. 708, 709. Return to text.
- DiMauro et al., ref. 1, p. 2665. Return to text.
- Tallini, G., Oncocytic tumors, Virchow’s Archives 433:5, 1998. Return to text.
- Jih, D. and Morgan, M., Oncocytic metaplasia occurring in a spectrum of melanocytic nevi, American Journal of Dermatopathology 24(6):468, December 2002. Return to text.
- DiMauro et al., ref. 1, pp. 2656–2665. Return to text.
- Leonard, J. and Schapira, A., Mitochondrial respiratory chain disorders I: mitochondrial DNA defects, The Lancet 355:299–304, 2000. Return to text.
- Alberts et al., ref. 4, pp. 714–715. Return to text.
It wasn’t too long ago that parents believed exposing children to two languages while growing up brought more harm than good. They were concerned that being bilingual might mix up the languages in the children’s heads and that the children would turn out to be late talkers. They also feared that becoming bilingual might hinder their children’s academic and intellectual development, and that the children might grow up with poor communication skills.
Parents were worried that their kids might experience interference in processing language. This is true: bilingual brains work with two language systems even when using only one language, and one system can obstruct the other.
But this interference may actually have benefits. The fear that bilingual kids take longer to develop language skills is unwarranted, as previous studies suggest that the ability to speak two languages does not stunt overall development.
Also, many other scientific studies should ease concerns that knowing two languages may harm kids. On the contrary, these studies suggest that bilingualism actually has practical cognitive benefits and positive effects on kids:
The interference in language processing that bilingual kids experience equips them to switch from one language to another. Whenever bilinguals use language, their brains are busy choosing the right word while blocking the equivalent term from the other language. This forces the brain to resolve internal conflict, giving it a good workout. It is like exercise for the brain.
A number of studies suggest that being bilingual improves the brain’s executive function. Executive control is the command system in the brain that directs the attention processes we use for planning, solving problems and performing a variety of mentally demanding tasks. Bilinguals use this executive function more, and this makes it more efficient.
The bilingual brain adapts to the demands of juggling two languages by physically restructuring itself. The brains of older lifelong bilinguals, young early bilinguals and adult early bilinguals have been found to have thicker myelin (greater “myelination”) than those of monolinguals. This thickness makes the transfer of information faster and less lossy.
Bilingual kids exercise cognitive flexibility; that is, they are better at focusing attention on relevant information and ignoring unnecessary distractions, making the process of learning new rules much faster. Because their brains are active and flexible, bilinguals understand math concepts and solve word problems more easily.
Bilinguals have a heightened ability to monitor their environment. They are constantly looking out for the need to switch languages, for example talking to their parents in one language and their friends in another. Being bilingual requires keeping track of changes around you, the same way we monitor our surroundings when driving.
Bilinguals are less susceptible to egocentric bias and better at understanding other people’s beliefs, because they are able to block out what they already know and focus on another’s point of view.
A study suggests being bilingual “wedges open” a window for learning languages, making it easier to master languages throughout one’s life. Children inheriting a native language from their parents connect to their ancestors, family, culture and community. Bilingual kids are easily able to make friends with other kids who speak the same language. This is important in establishing social connections in a diverse society.
Bilingual kids are more culturally sensitive.
Bilinguals were shown to have a delayed onset of Alzheimer’s disease when compared to monolinguals. Individuals with a higher degree of bilingualism, those who are proficient in each language, were more resistant than others to the onset of dementia and other symptoms of Alzheimer’s disease. The higher the degree of bilingualism, the later the age of onset.
Bilinguals have advantages in the real world in terms of employment and other economic opportunities. They are able to read and appreciate literature written in another language, travel to other countries and talk to people in their native language.
Tips for raising a bilingual child
Children do not become bilingual naturally. Even though your child is exposed to two languages, at some point he or she might just stick to the majority language and forget the second one. As the parent, you need to do some planning and adopt strategies to successfully raise a child who knows two languages. You need to ask questions like: How do you expose your child equally to the two languages? Who will speak which language, and when? What materials can you use to promote your child’s language learning?
Below are some tips that can help you answer these questions:
Find the right balance for your child to learn the two languages. To accomplish this, conduct an audit of the patterns of language use in your home and your environment.
Try to balance the exposure of your child to both languages in terms of speaking, reading and writing.
To achieve a balance of exposure to both languages, you may want to consider having one parent speak one language and the other parent the other.
Another strategy would be to nurture your child in the weaker language in her early years at home. Then when she starts school, she can withstand the more powerful majority language.
Once you have a plan for how to use your languages, commit to it and be consistent.
Make your family commit to your effort to raise a bilingual child.
Immerse your child in language all day.
Integrate it into everyday routines and talk to each other a lot.
Don’t forget the influence of grandparents, caregivers and babysitters. Make them aware of your intention to make your child bilingual. Let them know that you would appreciate whatever help they can give you to achieve that goal.
Make language learning enjoyable.
Incorporate it into songs, games and activities.
Read books to your child in the second language.
For babies and toddlers, using media like TV, DVDs, apps and games is not nearly as effective as human interaction. If you have to use these, interact with your child while using them. For example, talk to him about what is happening in the show or game he is playing, ask simple questions or share ideas.
Babies, infants, and children (not to mention adults) learn best from interaction with other humans. It’s wired into us. In order to learn, children need language situations where the conversations are interactive, adaptive, and pitched at their level.
If the conversations are focused on the things they are interested in, it only helps.
This is true for learning in general, and also for language learning.
For older kids, media and entertainment help reinforce their second-language learning. Also consider iPods and digital music players loaded with second-language learning materials as gifts.
Treat your child to shows and other cultural events that involve the second language. Praise your child when they make an effort to communicate in the second language.
Socialise with other parents who are raising their children to speak the same language. You can encourage each other and share ideas and triumphs.
It also gives you an opportunity to create future play dates with the ultimate language teachers – other kids.
Be patient and stay strong when doubts about your success creep in!
original author: unknown / source: raise smart kids
People often ask: "Can Chernobyl happen here?" To answer, we have to examine why the accident occurred, and why the consequences were so severe. The more important causes are:
- The U.S.S.R.'s RBMK design of reactor, to which the Chernobyl one belonged, used flammable graphite (similar to barbecue briquettes) as moderator. During reactor operation the graphite runs hot: when the accident happened it became exposed to the air and started to burn. It was the resulting fire, lasting ten days, that was primarily responsible for the early deaths of 31 plant workers and for releasing so much radioactive material to the near and far environments. The CANDU design uses heavy water, not graphite, as moderator. Water, far from sustaining a fire, tends to wash out and hence retain the fission products of greatest concern, iodine and cesium.
- The Chernobyl reactor had a large "positive void effect" during operation with fuel at a high burn-up, such as existed at the time of the accident. A positive void effect means that if a void develops within the reactor core, e.g., by steam formation when cooling water hits hot graphite, the reactor power increases, making the accident more difficult to control. CANDU reactors have a positive void effect but this is maintained at a low level, well within the capacity of the shutdown systems to override it, by continuous on-power refuelling. Also, the absence of hot graphite reduces the likelihood of large void formation.
- The detailed design of the Chernobyl shutdown rods was such as to increase the reactor power first before having their intended effect, under the particular circumstances at the time of the accident. This design weakness does not apply to CANDU reactors.
- The building in which the Chernobyl reactor was located was totally inadequate as containment for radioactive material released as a result of the accident. CANDU reactors are contained in reinforced-concrete buildings with walls about a metre thick, designed to retain releases and suppress steam formation.
- There was generally a poor "safety culture" at Chernobyl. Safety culture, according to the IAEA, is "that assembly of characteristics and attitudes in organizations and individuals which establishes that, as an overriding priority, ... safety issues receive the attention warranted by their significance". Examples of the poor safety culture at Chernobyl were inadequate examination of a test program that had not been done during commissioning but was being conducted at the time of the accident; violation of operating procedures; and pressure on the operators to maintain production at the expense of safety. Unlike the previous causes, this one is not a simple question of yes or no, so the difference in Canadian utilities is only a matter of degree. Indeed, Ontario Hydro was criticized in the 1997 IIPA Report for a decline in its safety culture. However, Ontario Hydro at its worst would never have allowed the abuses that led up to the Chernobyl accident.
- A largely ignored root cause of the Chernobyl accident was the absence of a fully independent and effective regulatory body, a vital component of defence in depth. The Canadian regulator, the AECB, would not have licensed the RBMK design and would not have permitted operation under the conditions prevailing at the time of the accident. Regrettably, the international program to improve the safety of former-U.S.S.R. reactors has paid little attention to this aspect, concentrating on the design and operations.
To help readers understand the technical argument, the Commissioner of the Ontario Nuclear Safety Review, Professor F. Kenneth Hare, quoted as follows in his 1988 report:
"A well-known nuclear advocate, J.A.L. Robertson, wrote to me that he was appalled by the number of operator errors at Chernobyl and the difficulty of explaining them to the public. Under such circumstances, Robertson felt that it would be helpful to use more familiar analogies, where apt:
"'Let us suppose that a certain airline took into service a new design of jumbo jet on the assurance that it could, if necessary, land on automatic pilot. This was not, however, tested in commissioning. During a scheduled flight, with a full load of passengers and highly flammable fuel, the flight crew decided unilaterally to conduct this test, which inevitably had to be performed when the aircraft was in a highly unstable condition just on the point of stalling. To permit the test, the crew disabled the manual controls, disconnected a safety system and switched off some alarms. In doing all this somebody overlooked an altimeter adjustment.
"'If this had really happened, the wonder would not be that 31 people died but that anyone survived.'"
Old Europe was once a grouping of feudal societies that occasionally interacted with each other. As transport and communication developed, many of these feudal societies amalgamated to form larger societies known as nations, which built strings of alliances to preserve their interests.
If we fast-forward to the end of the 19th century, Europe had come to be dominated by two main groupings: the Triple Alliance of Germany, Austria, and Italy; and the Entente Cordiale between England and France. These were supplemented by the Franco-Russian Alliance and the Anglo-Russian Entente.
These alliances formed two military camps on European soil and hastened the slide into all-out war when Archduke Franz Ferdinand of Austria was assassinated in Sarajevo on 28th June, 1914. Again in 1938, aggression across Europe led to bloodshed, pain and suffering, destroying a major part of Europe. And after World War II, Europe was partitioned by an iron curtain that once again divided the continent.
The narratives within Europe were once full of delusions of racial and religious superiority, imposed dominance, and cultural diversity. Some pockets of Europe still hold these kinds of beliefs today, and groups there are still expressing aspirations for independence.
In Europe, there was a desperate need to find a way to co-exist; otherwise future conflicts would have devastating consequences similar to those witnessed a number of times through European history. The union had to unite a divided Europe of different histories and then stretch its arms out to most of Eastern Europe after the collapse of the Soviet Union, an almost unbelievable transformation which other regions of the world, like the ASEAN Economic Community, will find very hard to emulate.
We can see that the spirit of those old alliances is preserved, this time not for the wars that dragged Europe into destruction, but to bail out a member, Greece, in the quest to save the union, although the decisions to do this put extreme pressure upon the individual members of the Union.
Today many fundamental questions are arising as new challenges. Youth unemployment, freedom of domicile within the union, the influx of migrants, the Euro-crisis, soaring health costs, rising petroleum prices, food shortages, and terrorism all concern Europeans deeply. The answers don't appear to be there, and this is leading to great uncertainty.
The European phenomenon is still incomplete. We had the political revolution, symbolized by the blue European Union flag flying above European land. The second revolution is an economic one, symbolized by common regulation and the euro currency. This is currently presenting great challenges, as we are finding that common regulation is not as easy as anticipated due to the cultural diversity and situational issues that persist within the union member states. The euro and uniform financial regulation had unforeseen consequences. The perceived strength of the union, a common currency, also had a paradoxical weakness in that it severely limited the use of monetary policy, as the EU has now found. Relying almost solely on budgetary mechanisms for fiscal control along Keynesian lines is not enough for member states.
Undoubtedly the European Union's economic approach needs another mechanism. The euro currency is not the "Higgs Boson" particle that everybody anticipated, and another mechanism to financially drive Europe is needed. But the answer may come in a manner similar to that of the scientists at CERN, who discovered that quantum mechanics is extremely complex to truly understand, and that the deep fundamentals lie within the individual parts rather than the whole.
There is another revolution needed to create the great EU as originally dreamed about, and this revolution is the hardest of all to achieve. It is a mistake to believe that it will come from the committee rooms of the European Parliament. No revolution ever comes from a legislature.
This revolution is a spiritual one, about a vision for a new Europe, and it must come from the streets of Munich, the streets of Paris, villages in Romania, towers in Barcelona, and so on. The vanguard of this revolution will be the same kind of people who were involved in the Arab Spring, the uprisings in Burma and Iran, and the Occupy movement in the United States: the youth of Europe.
The European Union must find the right balance between debate and consensus on an overall vision. That vision must permeate into all aspects of society. Without this vision Europe cannot progress and may actually decline. The people of Europe need a new identity that carries both meaning and a sense of excitement about the future.
And what must be brought back by the European Union, once discarded in an attempt to create a pan-Euro culture, is the "hotch potch" of cultural diversity that exists within the member states. Uniformity does not bring strength; diversity brings strength, a fact which has gone unrecognized. Diversity is what makes Europe, yet the Commission has over the years tried to create a Europe of the lowest common denominator (LCD). Europe has actually been stripped of its very strength. The answer is not in the pan-Euro approach but in engaging the diversity within the Union, something many, if not the majority, feel in their hearts. A Euro-culture should take in both national and pan-Euro traits and slowly evolve into a single euro-identity.
Just like the euro debt crisis, the Euro cultural crisis is the result of legislators believing that regulation is the solution to everything. It is not: new approaches outside legislative frameworks are required here.
There is a great risk that the metaphor of blue may become a sea that lacks foresight and vision. The EU Council is fast becoming a transactional rather than the transformational identity it started out to be. The bureaucrats have replaced the dreamers and philosophers, setting into motion processes that inhibit rather than encourage growth in diversity and richness.
Blue is also symbolic of authority, and the EU must be aware of the need to develop an environment where the Commission is seen not as a top-down regulator but as truly concerned with what its citizenry thinks and feels about issues. The citizens of the EU must be encouraged to develop a sense of ownership in the whole process once again.
Parasitic Adaptations of Plants and Animals
What is meant by Adaptation?
Adaptation definition: ‘Any feature of an organism or its part which enables it to exist under the conditions of its habitat is called an adaptation.’ Adaptations mainly serve to withstand the adverse conditions of the environment and to draw the maximum benefit from it.
What are Parasites?
A parasite is an organism which lives in or on another organism (called the host) and benefits by deriving shelter and nutrients from it. Parasitism is a type of negative ecological (biological) interaction in nature in which one organism benefits (the parasite) and the other is harmed (the host). Parasites may be microbes such as bacteria, viruses and Mycoplasma; animals such as liver flukes, worms, nematodes and some insects; or plants such as Loranthus and Cuscuta. All types of parasites show peculiar adaptations that let them survive in or on the host system and get the maximum benefit from it.
Parasites show adaptations at three levels:
(1). Structural Adaptations (Morphological and Anatomical Adaptations)
(2). Physiological Adaptations
(3). Reproductive Adaptations
(1). Structural (Morphological and Anatomical) Adaptations of Parasites:
- Feeding organs are usually absent in endoparasites.
- Fluid-feeding insects such as aphids have highly specialized mouth parts for the easy absorption of cell sap from the host.
Women & Girls' Education: Issues in India
It is now well established that gender equality in education and enhancing the access of girls to basic education are influenced by three inter-locking sets of issues: systemic issues; the content and process of education; and economy, society and culture. This need not be reiterated now, as this approach has become an integral part of mainstream analysis.
There is almost unanimous acceptance of the fact that gender, as a category, needs to be seen within the larger social, regional and locational context. India is a land of rich diversity and it is also a country of sharp disparities. The interplay of socio-economic inequalities and gender relations creates a complex web that either promotes or impedes girls' ability to go through schooling. While economic disparities and social inequalities are certainly important, a number of researchers argue that cultural beliefs and practices and regional characteristics play an important role.
Big Differences of India
In the case of India, it is important to understand the intermeshing of poverty, social inequalities and gender relations. The three intersect in different ways in different regions of the country, with one reinforcing the others in some regions and offsetting them in others. Understanding and unravelling this is the biggest challenge today. In this context, there is a need to acknowledge the following:
- Rural-urban differences in enrollment, attendance and completion are greater than male-female differences;
- Backward-forward areas / regional differences are greater than gender and social group differences;
- Disparities between very poor households (below the poverty line) and the top quartile are much greater than gender, social and regional differences;
- Differences between social groups -especially between tribal communities, Muslims and specific sub-groups among the SCs on the one hand, and the forward castes / Christians and other religions on the other- are significant;
- Inter-community differences are often as severe as intra-community ones. For example, the literacy status of some tribes is better than others, and some Dalit groups better than others.
What Can We Do in India?
- Meaningful Access
Meaningful access means providing not just physical access (in terms of the number of schools, improved infrastructure etc.) to the formal education system, but, more importantly, an equitable opportunity for all children to engage with a quality education system. Meaningful access needs to happen at every single step of the education delivery system, right from bringing the child to the school or, for that matter, taking the school to the child. It ranges from ensuring that schools are available for all children from every social group to ensuring that once the child reaches the school, it is a safe haven of learning and growth where he or she can achieve his or her potential, instead of receiving a few skills thrown in a staccato manner. Meaningful access includes access to teachers who will provide differentiated support catering to varied learning styles and who will pay special attention to those who need an extra nudge to keep pace. And most importantly, meaningful access means providing a safe, gendered space with room to find and express one's own identity, shaped by membership in any social group, without fear of mockery or discrimination.
- Safe and Non-Discriminatory Environment
It is said that a school is a microcosm of the society in which we live. More often than not, the inter-personal and inter-group dynamics prevalent in the community are also reflected in the school. Teachers, if they are not adequately sensitized and trained, may simply transfer these behaviour patterns and prejudices to the school. Educational administrators and politicians give this as an excuse for persisting discrimination in schools. This is where we have a lot to learn from countries that have successfully combated this tendency and have insisted that schools and other publicly funded institutions adhere to constitutionally mandated rights and obligations. Given the right to equality and the right against discrimination enshrined in the Constitution of India, teachers and all educational administrators are duty bound to ensure a non-discriminatory environment in school. Teachers and headmasters do not have the freedom to discriminate on the basis of caste, religion, gender, ability or economic status.
Taking the Constitution of India as the guiding spirit, teachers, administrators and community leaders need to be told that any violation of the right to equality and the right against discrimination will invite strict penal action. A non-negotiable code of behaviour needs to be communicated to all those who are involved in school education. This needs to be done in writing and prominently displayed in all schools and educational institutions. Simultaneously, children, especially boys, need to be involved in activities that enable them to understand and appreciate diversity, respect differences and formulate school level norms of behaviour towards other children, and towards girls. Involving children in creating an egalitarian atmosphere could bring moral pressure on teachers, administrators and local leaders not to differentiate or discriminate.
Maybe a lot more can be said, and a longer list of issues that frame women's and girls' participation in education can be presented. In the last fifty years, several commissions and committees have brought out long laundry lists of issues and concerns, and many strategies have also been listed. Reflecting on why these recommendations and strategies have remained unimplemented, I realized that we need first and foremost to agree on a few non-negotiable maxims or principles. If they are adhered to, then the chances of other inputs falling into place are far higher. It is with this in mind that I have highlighted only three: meaningful access to education, non-discrimination, and foregrounding gender in the construction of knowledge. If we are able to push for these three, then maybe we can start moving towards greater gender equality and social justice.
|
For illustrations to accompany this article see Insect Life-Cycles
Macrotermes bellicosus is one of nearly 2000 species of termites, and the genus Macrotermes is widely distributed throughout Africa and South-East Asia. The species M. bellicosus is one of the largest termites, with a complex, highly evolved colonial organization. All termites live in colonies, those of M. bellicosus reaching a size of hundreds of thousands over many years. Although termites are sometimes called "white ants", they are not ants, nor are they closely related to them.
The nest. Some termites build nests in wood, some in trees and posts, and some below the ground but M. bellicosus, though it begins its nest below ground, forms large mounds or towers of soil. The nest is constructed of sand and clay. The workers burrow into the subsoil bringing back a sand grain and a "mouthful" of clay which becomes moistened with saliva. The sand particle is stuck in position in the nest and cemented with the mortar of clay and saliva. Building activities are most intense at the beginning of the wet season.
If clay is abundant in the subsoil and rainfall is low, the nest is a tapering column up to 9 metres high, but if clay is in short supply and the rainfall is high, the building material cannot withstand the eroding action of rain, and a dome-shaped nest, about 2 metres high, results. The mound does not appear until the colony is several years old but thereafter it grows rapidly, reaching perhaps one-third of its final height in the first year.
The main part of the colony, the "hive", is in the mound just above ground level and inside the mound is a maze of passages, some of which lead away under the soil to reach supplies of wood and vegetation up to 100 metres away. One or more vertical shafts run to the summit of the nest.
Inside the nest, the temperature is fairly constant at about 30 °C, often lower than outside, and the humidity also remains steady at about 90 per cent. This high humidity keeps the nest material permanently plastic, and the internal architecture of the nest is constantly being altered. The interior of the nest provides an almost unvarying environment, making the termites independent of changes in climate outside. Even when termites have to pass over the ground in search of food, they construct covered runways which protect them from drying out and from certain predators.
Food. On the whole, termites live on the cellulose of woody vegetation, many species burrowing into dead wood of trees and in buildings. There is no cellulose-digesting enzyme in the termite's body, and so indirect methods of digestion are important. Some species have single-celled organisms (protozoa) in their intestines and it is these protozoa which digest the cellulose, while the protozoa themselves are a source of food for the termite. Macrotermes bellicosus has no intestinal protozoa but constructs fungus combs in the nest.
The fungus combs consist of spongy masses of wood-pulp derived from the faeces, covered with a mycelium of fungal hyphae and sporangia. It seems very likely that the fungus digests the wood to a stage that can be utilized by the termites.
Life cycle. Termites undergo incomplete metamorphosis, but most of the individuals in a colony remain as nymphs and function as workers or soldiers. The workers collect food, chew wood-pulp, enlarge and repair the nest, make tunnels, and look after the queen, the eggs and young nymphs. The soldiers keep ants out of the nest by snapping their large jaws or blocking the tunnels with their heads.
Just before the rainy season, some of the nymphs continue their metamorphosis, developing reproductive organs and wings. Early in the rainy season, these mature termites (reproductives) swarm; that is, they emerge in their hundreds from the nest at night and fly off into the surrounding countryside. The males and females mate and start a new colony. As soon as there are enough worker nymphs to maintain the nest, the queen settles down to uninterrupted egg-laying. Her abdomen swells enormously and the workers wall her up in the nest, bring her food and take away the eggs as fast as she lays them.
Economic importance. Although termites normally eat dead vegetation, their tunnels may weaken plant stems, causing them to collapse or giving access to fungus and other diseases. Where bark is gnawed from trees, the phloem may be interrupted, causing the death of the tree. The mud runways with which some species cover the plants to reach dead wood may cause the plant, e.g. tea, to wither and collapse. Cocoa trees, sugar cane, young coconut trees, cotton and wheat plants are among crops that may be affected by termites. Macrotermes is not of very great importance in this respect but its nest-mounds get in the way of agricultural machinery and have to be blown up with explosives or levelled with bulldozers, thus increasing the cost of mechanized farming, road construction and building-site clearance. Perhaps the most familiar termite damage is to the wooden fabric of buildings and furniture. Various earth-dwelling species, though not M. bellicosus, tunnel underground from their nests and enter the building through its foundations or any wooden part in contact with the ground. To obtain the wood for their food they make extensive tunnels through the structures, weakening them and eventually causing their collapse. Since the insects make no openings to the outside, or do so only at an advanced stage of the invasion, the owner is frequently unaware of their presence until it is too late.
Prevention of termite damage. Buildings should be constructed on solid concrete or be raised on masonry piers. In either case, termites may still gain access by building their covered runways over the masonry to reach the wood. A projecting ledge round the concrete base or pier will prevent even this method of access. Great care must be taken to see that no unprotected wooden part of the building, (e.g. steps) is in contact with the ground.
The earth round the building can be impregnated with insecticides which may prevent the termites tunnelling through to the structures for several years. None of these precautions is proof against the species of termites which fly to buildings and establish nests in the woodwork. The only measure against this type of invasion is to use wood which is naturally resistant to termite attack or that has been soaked in chemicals which repel the insects.
Predators. Vast numbers of winged termites are eaten at the time of swarming. In the air they are snapped up by birds such as hawks and swallows, and on the ground by lizards, toads and spiders. In the nest, the termites are subject to attacks by "ant-eating" mammals such as the ant-bear or aardvark which burrows into the mound and licks out the termites with a long, thin tongue. The covered, over-ground runways are broken open to reach the termites within by birds of various kinds.
Humans and ants are also the enemies of termites. Ants carry off the workers that are working outside the nest and may invade and destroy a colony; humans capture and eat the swarming reproductives or dig out the queen as a special delicacy.
|
University of Iowa researchers are working with a California-based startup company to make clean energy from sunlight and any source of water.
Hydrogen power is arguably one of the cleanest and greenest energy sources because when it produces energy, the final byproduct is water instead of carbon emissions. Hydrogen power also can be stored in a fuel cell, making it more reliable than traditional solar cells or solar panels, which need regular sunlight to remain “on.”
HyperSolar’s lead scientist, Syed Mubeen, a chemical engineering professor at the UI, says although hydrogen is the most abundant element in the universe, the amount of pure hydrogen in the Earth’s atmosphere is very low (about 0.00005 percent), so it must be produced artificially.
Currently, most hydrogen power is made from fossil fuels in a chemical process called steam reforming, which emits carbon dioxide. Even though the end product is hydrogen, its inputs make it much less environmentally friendly and sustainable.
Hydrogen also can be made using electrolysis, which requires electricity and highly purified water to split water molecules into hydrogen and oxygen. Although this is a sustainable process (assuming the electricity is produced from a renewable energy source), the materials associated with the system are expensive–a major barrier to the affordable production of renewable hydrogen.
“Developing clean energy systems is a goal worldwide,” Mubeen says. “Currently, we understand how clean energy systems such as solar cells, wind turbines, et cetera, work at a high level of sophistication. The real challenge going forward is to develop inexpensive clean energy systems that can be cost competitive to fossil fuel systems and be adopted globally and not just in the developed countries.”
With HyperSolar, Mubeen and his team at the UI’s Optical Science and Technology Center are developing a more cost-effective and environmentally friendly way to manufacture hydrogen by drawing inspiration from plants. So far, the researchers have created a small solar-powered electrochemical device that can be placed in any type of water, including seawater and wastewater. When sunlight shines through the water and hits the solar device, the photon energy in sunlight takes the water (a lower energy state) and converts it to hydrogen (a higher energy state), where it can be stored like a battery. The energy is harvested when the hydrogen is converted back into its lower energy state: water. This is similar to what plants do using photosynthesis, during which plants use photons from the sun to convert water and carbon dioxide into carbohydrates–some of which are stored in fruits and roots for later use.
Mubeen says his team is currently working to lower costs even further and to make their process more robust so it can be produced on a mass scale. That way, it eventually could be used as renewable electricity or to power hydrogen fuel cell vehicles.
“Although H2 can be used in many forms, the immediate possibility of this renewable H2 would be for use in fuel cells to generate electricity or react with CO2 to form liquid fuels like methanol for the transportation sector,” he says. “If one could develop these systems at costs competitive to fossil fuel systems, then it would be a home run.”
|
CGI word problems
B1SUP-A3_AddSubNumLn_0709. Maths: Solving Problems: Word and Real Life Problems. Wordsort. Packet-3. MX_Problem_Solving. CountsMathProblemBooklet. Chapter 5 - Analyzing Students' Thinking. Analyzing Students' Thinking: In this chapter, we examine the type of professional development experience in which teachers analyze student thinking as revealed in students' written assignments, think-aloud problem-solving tasks, class discussions and clinical interviews.
In this kind of professional development session, teachers learn to observe various types of student mathematical activity and to interpret what they observe, with the ultimate goal of enhancing their students' learning opportunities. Theoretical rationale and empirical support: In Chapter 1, we discussed the research evidence that supports teachers learning about students' mathematical thinking. We argued that doing so can help teachers develop not only a knowledge base about students' conceptions and problem-solving strategies that they can use in planning instruction, but also skills for listening to students and interpreting their thinking.
|
Diagnosing Heart Attacks
When diagnosing heart attacks, doctors will ask questions about things such as current symptoms and heart disease risk factors, and perform a physical exam. Tests used in making a heart attack diagnosis include electrocardiograms (EKGs), blood tests, and nuclear heart scans.
In order to make a heart attack diagnosis, the healthcare provider will work quickly to find out if you are having or have had a heart attack. He or she will ask a number of questions about your:
- Current symptoms
- Heart disease risk factors
- Family history of medical conditions
- Current medications
- Other conditions.
Your healthcare provider will also perform a physical exam looking for signs or symptoms of a heart attack. He or she will also order certain tests or procedures. Initial tests will be quickly followed by heart attack treatment if you are having a heart attack.
Tests for diagnosing heart attacks include:
- Electrocardiogram (ECG or EKG)
- Blood tests
- Nuclear scans
- Cardiac catheterization with angiography.
Electrocardiogram (ECG or EKG)
An electrocardiogram is used to measure the rate and regularity of your heartbeat. A 12-lead EKG is used in diagnosing a heart attack.
|
It’s January, and many of you are likely scrambling to pick a diet in an attempt to lose some of the post-Christmas weight gain. Your body's conversion of all the excess sugar consumed into fat certainly didn't help, but a team of researchers from the University of Montreal may have found a way to regulate this. As reported in Proceedings of the National Academy of Sciences, a new enzyme has been discovered that can directly control how your body converts sugar and fats.
Mammalian cells use both sugar (glucose) and fatty acids as their main sources of energy. Much of this glucose is stored in the liver as glycogen, a dense compound that can be mobilized whenever the body requires it for energy production. Those in developed countries tend to have diets that are too sugar-rich, giving themselves far more glucose than their body needs at the time. An excess of carbohydrates will also produce too much sugar for the body to be able to immediately use. Any large glucose excess is converted and stored as fat, and a major build up can lead to obesity.
Insulin, a hormone produced by the pancreatic beta cells, causes the liver to convert glucose into glycogen. Those with type 2 diabetes do not produce enough insulin when required, or they produce ineffective insulin that isn’t able to interact with the glucose in the blood, meaning glucose remains in the bloodstream.
Excess glucose in the blood also leads to the over-generation of glycerol 3-phosphate (Gro3P) within cells. Normally, Gro3P participates in many cellular processes, including the formation of fats (lipids) and the breakdown of glucose into other useful compounds (glycolysis).
However, too much Gro3P is toxic to cells: tissues can be damaged, and the metabolic, glucose, and fat conversion processes are unable to operate properly. The derangement of these processes can lead to type 2 diabetes and even cardiovascular (heart) disease. Thus, excess glucose in the body is essentially toxic for a variety of reasons.
As this new study details, an enzyme called Gro3P phosphatase, or “G3PP,” has been discovered, hiding within all types of body tissue. This enzyme appears to be able to regulate both the conversion of glucose and fats into other compounds, and the production of adenosine triphosphate (ATP), the cell's "energy currency." This means that G3PP has direct influence over how glucose and fats are used within the body.
Using laboratory rats, the researchers showed that increasing the activity of G3PP within their livers ultimately lowers their weight gain and ability to produce glucose from the liver. Murthy Madiraju, a researcher at the University of Montreal Hospital Research Centre (CRCHUM), noted in a statement that “G3PP prevents excessive formation and storage of fat and it also lowers excessive production of glucose in liver, a major problem in diabetes.”
This offers a stepping stone for researchers hoping to manipulate this enzyme within humans. By using G3PP to alter how glucose and fats are absorbed and produced, those unable to control this themselves – such as those suffering from type 2 diabetes – could potentially be treated.
|
If you wear glasses or contact lenses, you probably have some degree of astigmatism. But how much do you really know about this all-too-common refractive error?
Below, we answer some of the most frequently asked questions about astigmatism and explain why scleral contact lenses are often prescribed to astigmatic patients.
1. What is Astigmatism?
Astigmatism is a common refractive error caused by a cornea that isn’t perfectly spherical. The cornea is the outer front covering of the eye and is partially responsible for refracting light onto the retina. When the cornea is misshapen, it refracts light incorrectly, creating two focus points of light entering the eye. Since the light is no longer focused on the retina, it results in blurred vision at all distances.
2. What are the Symptoms of Astigmatism?
The main symptom of astigmatism is blurred vision, but it can also cause symptoms like:
- Objects appearing wavy or distorted
- Poor night vision
- Frequent eye strain
3. How Common is Astigmatism?
Astigmatism affects approximately 1 in 3 individuals around the world. Most people with myopia (nearsightedness) or hyperopia (farsightedness) also have some level of astigmatism.
4. What’s the Difference Between Astigmatism, Nearsightedness and Farsightedness?
Although all 3 of these refractive errors negatively affect visual clarity, they are caused by different mechanisms.
Astigmatism is a result of a non-spherical cornea, which causes two focal points and blurry vision. Myopia occurs when the corneal focusing power is too high, so the light focuses in front of, rather than directly on, the retina. Hyperopia occurs when the corneal power is too weak, so the light rays focus behind the retina, not on it. Both myopia and hyperopia can occur with a spherical cornea.
5. How is Astigmatism Corrected?
In cases of mild to moderate astigmatism, the blurred vision can be easily corrected with prescription glasses or contact lenses. But for patients with high levels of astigmatism, standard contact lenses may not be an option. Toric contact lenses are a popular choice for patients with mild or moderate astigmatism due to their unique focusing features and oblong shape. Scleral contact lenses are suitable for moderate to severe astigmatism.
Refractive surgery is also an option, but comes with the risk of surgical complications.
6. Why Can’t Individuals With High Astigmatism Wear Standard Contact Lenses?
A highly astigmatic cornea has an irregularly shaped surface that isn’t compatible with standard soft contact lenses. Standard soft lenses are limited in the amount of astigmatism they can correct, as these lenses move around on the cornea due to the cornea’s irregular shape. This, in turn, reduces visual clarity and comfort.
Regular hard lenses can often correct astigmatism better than soft lenses, but they, too, have limitations: these lenses are smaller and may also move around too much.
7. Why are Scleral Lenses Ideal For Astigmatism?
Scleral contact lenses are customized to each patient. They have a larger diameter than standard lenses, and thus cover the entire front surface of the eye. These specialized rigid lenses gently rest on the white part of the eye (sclera) and don’t place any pressure on the sensitive cornea, making them suitable for even highly astigmatic eyes.
Furthermore, scleral contact lenses contain a nourishing reservoir of fluid that sits between the eye and the inside of the lens, providing the cornea with oxygen and hydration all day long. In fact, patients typically report that sclerals provide sharper vision than other types of contact lenses.
Have Astigmatism? We Can Help
If you’ve been told that you have astigmatism and that your current contacts or glasses just aren’t cutting it, ask your optometrist whether scleral contact lenses are right for you.
At Specialty Contact Lens Center At Advanced Eyecare Center, we provide a wide range of eye care services, including custom scleral lens fittings and consultations. Our goal is to help all patients achieve crisp and comfortable vision, no matter their level of astigmatism or corneal shape.
To schedule your appointment or learn more about what we offer, call Specialty Contact Lens Center At Advanced Eyecare Center today!
Specialty Contact Lens Center At Advanced Eyecare Center serves patients from Redondo Beach, Manhattan Beach, Torrance, and Palos Verdes, California and surrounding communities.
Q: Can a person outgrow astigmatism?
- A: About 20% of all babies are born with mild astigmatism, but only about one in five of those babies still has it by the age of 5 or 6, at which point it is unlikely to diminish or disappear. Astigmatism can continue to change and even progress as the child grows, but it tends to stabilize at around age 25.
Q: Can eye surgery cause astigmatism?
- A: Yes. For example, cataract surgery may cause or worsen astigmatism because the surgeon makes a tiny incision in the cornea to replace the lens. During the healing process, the cornea may change shape, causing astigmatism to develop.
|
Fluoride is the most effective agent available to help prevent tooth decay. It is a mineral that is naturally present in varying amounts in almost all foods and water supplies. The benefits of fluoride have been well known for over 50 years and are supported by many health and professional organizations.
Fluoride works in two ways:
Topical fluoride strengthens the teeth once they have erupted by seeping into the outer surface of the tooth enamel, making the teeth more resistant to decay. We gain topical fluoride by using fluoride containing dental products such as toothpaste, mouth rinses, and gels. Dentists and dental hygienists generally recommend that children have a professional application of fluoride twice a year during dental check-ups.
Systemic fluoride strengthens the teeth that have erupted as well as those that are developing under the gums. We gain systemic fluoride from most foods and our community water supplies. It is also available as a supplement in drop or gel form and can be prescribed by your dentist or physician. Generally, fluoride drops are recommended for infants, and tablets are best suited for children up through the teen years. It is very important to monitor the amounts of fluoride a child ingests. If too much fluoride is consumed while the teeth are developing, a condition called fluorosis (white spots on the teeth) may result.
Although most people receive fluoride from food and water, sometimes it is not enough to help prevent decay. Your dentist or dental hygienist may recommend the use of home and/or professional fluoride treatments for the following reasons:
- Deep pits and fissures on the chewing surfaces of teeth.
- Exposed and sensitive root surfaces.
- Fair to poor oral hygiene habits.
- Frequent sugar and carbohydrate intake.
- Inadequate exposure to fluorides.
- Inadequate saliva flow due to medical conditions, medical treatments or medications.
- Recent history of dental decay.
Remember, fluoride alone will not prevent tooth decay! It is important to brush at least twice a day, floss regularly, eat balanced meals, reduce sugary snacks, and visit your dentist on a regular basis.
|
Did two prehistoric supernovae exploding about 300 light years from Earth cause mutations to plant and animal life as recently as 2 million years ago? Scientists from the University of Kansas studying the effects of such explosions think perhaps so. Using computer modelling to ascertain the effects of two stars that went supernova approximately 1.7 to 3.2 million and 6.5 to 8.7 million years ago, Adrian Melott, professor of physics at the University of Kansas, suggests that along with disrupting animals' sleep patterns for a few weeks, due to the incredibly light nights that would have occurred as a result of the explosions, cosmic rays may have caused mutations to life on Earth at a cellular level.
"I was surprised to see as much effect as there was," said Melott. "I was expecting there to be very little effect at all," he said. "The supernovae were pretty far way – more than 300 light years – that's really not very close. The big thing turns out to be the cosmic rays. The really high-energy ones are pretty rare… [these] are the ones that can penetrate the atmosphere. They tear up molecules, they can rip electrons off atoms, and that goes on right down to the ground level. Normally that happens only at high altitude."
The research suggests that the supernovae might have caused a 20-fold increase in irradiation by muons – an unstable subatomic particle of the same class as an electron – at ground level on Earth. This boosted exposure to cosmic rays would be equivalent to one CT scan per year for every creature inhabiting land or shallower parts of the ocean.
Muons make up much of the cosmic radiation reaching the Earth's surface, and usually they just pass straight through us, but with a mass around 200 times greater than an electron's, they can penetrate hundreds of meters of rock. "Normally there are lots of them hitting us on the ground, but because of their large numbers they contribute about 1/6 of our normal radiation dose. So if there were 20 times as many, you're in the ballpark of tripling the radiation dose." Would this increased radiation be high enough to boost the mutation rate and the frequency of cancer? "Not enormously. Still, if you increased the mutation rate you might speed up evolution," explained Melott.
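As a rough sanity check of that estimate (our own arithmetic, not the study's): if muons normally contribute about 1/6 of the background dose and that component grows 20-fold while everything else stays fixed, the total becomes 5/6 + 20 × (1/6) = 25/6, roughly four times the normal dose, which is the same order of magnitude as the "ballpark of tripling" Melott describes.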
It is perhaps coincidental, but around 2.59 million years ago a minor mass extinction occurred. This may be connected to a cooling in Earth's climate, as the increased abundance of cosmic rays ionised Earth's troposphere – the lowest level of the atmosphere – to a level eight times higher than normal, subsequently causing an increase in cloud-to-ground lightning.
"There was climate change around this time," said Melott. "Africa dried out, and a lot of the forest turned into savannah. Around this time and afterwards, we started having glaciations – ice ages – over and over again, and it's not clear why that started to happen. It's controversial, but maybe cosmic rays had something to do with it."
|
Age-related macular degeneration (AMD) is the leading cause of vision loss in the elderly. Now, new research using 3D organoid models of the eye has uncovered clues as to what happens in AMD, and how to stop it.
In AMD, a person loses their central vision because the light sensitive cells in the macula, a part of the retina, are damaged or destroyed. This impacts a person’s ability to see fine details, recognize faces or read small print, and means they can no longer drive.
No one is quite sure what causes AMD, but in a study in the journal Nature Communications, German researchers used miniature human retina organoids to get some clues.
Building a better model for research
Organoids are 3D models made from human cells that are grown in the lab. Because they have some of the characteristics of a human organ—in this case the retina—they help researchers better understand what is happening in the AMD-affected eye.
In this study they found that photoreceptors, the light sensitive cells at the back of the retina, were missing but there was no sign of dead cells in the organoid. This led them to suspect that something called cell extrusion was at play.
Cell extrusion is a process in which living cells are squeezed out of the tissue layer in which they sit. In this case, it appeared that something was causing the photoreceptors to be extruded from the retina, leading to the impaired visual ability.
In a news release Mark Karl, one of the authors of the study, said, “This was the starting point for our research project: we observed that photoreceptors are lost, but we could not detect any cell death in the retina. Half of all photoreceptors disappeared from the retinal organoid within ten days, but obviously they did not die in the retina. That made us curious.”
Using snakes to fight AMD
Further research identified two proteins that appeared to play a key role in the process, triggering the degeneration of the retinal organoid. They also tested a potential therapy to see if they could stop the process and save the photoreceptors. The therapy they tried, a snake venom, not only stopped the photoreceptors from being ejected, but it also prevented further damage to the retinal cells.
Karl says this is the starting point for the next step in the research. “This gives hope for the development of future preventive and therapeutic treatments for complex neurodegenerative diseases such as AMD.”
CIRM’s fight against blindness
The California Institute for Regenerative Medicine (CIRM) has funded six clinical trials targeting vision loss, including one for AMD. We recently interviewed Dr. Dennis Clegg, one of the team trying to develop a treatment for AMD and he talked about the encouraging results they have seen so far. You can hear that interview on our podcast “Talking ‘Bout (re)Generation.”
|
A Vihara (Sanskrit: meaning "dwelling" or "house") was the ancient Indian term for a Buddhist monastery. Originally, viharas were dwelling places used by wandering monks during the rainy season but eventually they evolved into centers of learning and Buddhist architecture through the donations of wealthy lay Buddhists. Subsequent royal patronage allowed pre-Muslim India to become a land of many viharas that propagated university-like learning and were repositories of sacred texts. Many viharas, such as Nalanda, founded in 427 C.E., were world famous, and their Buddhist teachings were transmitted to other parts of Asia including China and Tibet, where Buddhism continued to flourish after its wane in India. The Indian viharas therefore were great catalysts in the fecundation and transmission of Buddhist religious knowledge, which slowly passed along trade routes and was shared through religious and diplomatic exchanges. While Europe was living in the Dark Ages, India, China and the Middle East were all flourishing centers of education, intellectual fermentation and discovery.
In the early decades of Buddhism the wandering monks of the Sangha had no fixed abode, but during the rainy season they stayed in temporary shelters. These dwellings were simple wooden constructions or thatched bamboo huts. As it was considered an act of merit not only to feed a monk but also to shelter him, monasteries were eventually created by rich lay devotees. These monasteries, called viharas, were located near settlements, close enough for monks to receive begging alms from the population but with enough seclusion to not disturb meditation.
Trade-routes were therefore ideal locations for a vihara and donations from wealthy traders increased their economic strength. From the first century C.E. onwards, viharas developed into educational institutions, due to the increasing demands for teaching in Mahayana Buddhism.
During the second century B.C.E., architectural plans for viharas were established such as the rock-cut chaitya-grihas of the Deccan. These plans consisted of a walled quadrangular court, flanked by small cells. The front wall was pierced by a door, and, in later periods, the side facing it often incorporated a shrine for the image of the Buddha. The cells were fitted with rock-cut platforms for beds and pillows. This basic layout was similar to that of the communal space of an ashrama ringed with huts in the early decades of Buddhism.
As permanent monasteries became established, the name "Vihara" was kept. Some Viharas became extremely important institutions, some of them evolving into major Buddhist Universities with thousands of students, such as Nalanda.
Life in "Viharas" was codified early on. It is the object of a part of the Pali canon, the Vinaya Pitaka or "basket of monastic discipline."
The northern Indian state of Bihar derives its name from the word "Vihara," probably due to the abundance of Buddhist monasteries in that area. The Uzbek city of Bukhara also probably takes its name from "Vihara."
In Thailand, "Vihara" has a narrower meaning, and designates a shrine hall.
A Buddhist Vihara or monastery is an important form of institution associated with Buddhism. It may be defined as a residence for monks, a center for religious work and meditation, and a center of Buddhist learning. Reference to five kinds of dwellings (Pancha Lenani), namely Vihara, Addayoga, Pasada, Hammiya and Guha, is found in the Buddhist canonical texts as fit for monks. Of these, only the Vihara (monastery) and Guha (cave) have survived.
Epigraphic, literary and archaeological evidence testify to the existence of many Buddhist Viharas in Bengal (West Bengal and Bangladesh) and Bihar from the fifth century C.E. to the end of the twelfth century. These monasteries were generally designed in the old traditional Kusana pattern, a square block formed by four rows of cells along the four sides of an inner courtyard. They were usually built of stone or brick. As the monastic organization developed, they became elaborate brick structures with many adjuncts. Often they consisted of several stories and along the inner courtyard there usually ran a veranda supported on pillars. In some of them a stupa or shrine with a dais appeared. Within the shrine stood the images of Buddha, Bodhisattva or Buddhist female deities. More or less the same plan was followed in building monastic establishments in Bengal and Bihar during the Gupta and Pala period. In the course of time monasteries became important centers of learning.
An idea of the plan and structure of some of the flourishing monasteries may be found in the account of Hsuan-Tsang, who referred to the grand monastery of po-si-po, situated about 6.5 km west of the capital city of Pundravardhana (Mahasthan). The monastery was famous for its spacious halls and tall chambers. General Cunningham identified this vihara with bhasu vihara. Hsuan-Tsang also noticed the famous Lo-to-mo-chi vihara (Raktamrittika Mahavihara) near Karnasuvarna (Rangamati, Murshidabad, West Bengal). The site of the monastery has been identified at Rangamati (modern Chiruti, Murshidabad, West Bengal). A number of smaller monastic blocks arranged on a regular plan, with other adjuncts like shrines, stupas and pavilions, have been excavated from the site.
One of the earliest viharas in Bengal was located at Biharail (Rajshahi district, Bangladesh). The plan of the monastery was designed on an ancient pattern, i.e. rows of cells round a central courtyard. The date of the monastery may be ascribed to the Gupta period.
A number of monasteries grew up during the Pala period in ancient Bengal. One of them was Somapura Mahavihara at Paharpur, 46.5 km to the northwest of Mahasthana. The available data suggests that the Pala ruler Dharmapala founded the vihara. It followed the traditional cruciform plan for the central shrine. There were 177 individual cells around the central courtyard. There were central blocks in the middle of the eastern, southern and western sides. These might have been subsidiary chapels. It was the premier vihara of its kind and its fame lingered till the eleventh century C.E.
The famous Nalanda Mahavihara was founded a few centuries earlier; Hsuan-Tsang speaks of its magnificence and grandeur. Reference to this monastery is found in Tibetan and Chinese sources. The fame of this monastery lingered even after the Pala period.
Reference to a monastery known as Vikramashila is found in Tibetan records. The Pala ruler Dharmapala was its founder. The exact site of this vihara is at Antichak, a small village in Bhagalpur district (Bihar). The monastery had 107 temples and 50 other institutions providing room for 108 monks. It attracted scholars from neighboring countries.
The name of the Odantapuri monastery is traceable in Pagsam jon zang (a Tibetan text), but no full-length description is available in the Tibetan source. Gopala I (?) built it near Nalanda. This was the monastery invaded by Bakhtiyar Khalji.
Very interesting and important structural complexes have been discovered at Mainamati (Comilla district, Bangladesh). Remains of quite a few viharas have been unearthed here and the most elaborate is the Shalvan Vihara. The complex consists of a fairly large vihara of the usual plan of four ranges of monastic cells round a central court, with a temple in cruciform plan situated in the center. According to a legend on a seal (discovered at the site) the founder of the monastery was Bhavadeva, a ruler of the Deva dynasty.
Other notable monasteries of the Pala period were Traikuta, Devikota (identified with ancient kotivarsa, 'modern Bangarh'), Pandita vihara and Jagaddala (situated near Ramavati). Excavations conducted from 1972 to 1974 yielded a Buddhist monastic complex at Bharatpur in the Burdwan district of West Bengal. The date of the monastery may be ascribed to the early medieval period. Recent excavations at Jagjivanpur (Malda district, West Bengal) revealed another Buddhist monastery of the ninth century C.E. Unfortunately, nothing of the superstructure has survived. However, a number of monastic cells facing a rectangular courtyard have been found. An interesting feature is the presence of circular corner cells. It is believed that the general layout of the monastic complex at Jagjivanpur is by and large similar to that of Nalanda.
Beside these, scattered references to some monasteries are found in epigraphic and other sources. They were no less important. Among them Pullahari (in western Magadha), Halud vihara (45 km south of Paharpur), Parikramana vihara and Yashovarmapura vihara (in Bihar) deserve mention.
List of Ancient Indian Viharas
Several sites on the Indian subcontinent were centers of learning in ancient times. Many were Buddhist monasteries. The following is a partial list of ancient center of learning in India:
- Taxila, in present-day Pakistan (seventh century B.C.E. - 460 C.E.)
- Nālandā, about 55 miles south east of present-day Patna in India (circa 450 – 1193 C.E.)
- Odantapuri, in Bihar (circa 550 - 1040 C.E.)
- Somapura, now in Bangladesh (from the Gupta period to the Muslim conquest)
- Jagaddala, in Bengal (from the Pala period to the Muslim conquest)
- Nagarjunakonda, in Andhra Pradesh
- Vikramaśīla, in Bihar (circa 800 - 1040 C.E.)
- Valabhi, in Gujarat (from the Maitrak period to the Arab raids)
- Varanasi, in Uttar Pradesh (eighth century to modern times)
- Kanchipuram, in Tamil Nadu
- Manyakheta, in Karnataka
- Sharada Peeth, in Kashmir
- Puspagiri, in Orissa
- D. Mitra, Buddhist Monuments (Sahitya Samsad: Calcutta, 1971).
- D.K. Chakrabarti, 1995, Buddhist sites across South Asia as influenced by political and economic forces. World Archaeology 27(2): 185-202.
- Mitra, 1971
- C. Tadgell, The History of Architecture in India (Phaidon: London, 1990).
- Anant Sadashiv Altekar, Education in Ancient India (Varanasi: Nand Kishore and Brothers, 1965).
References
- Altekar, Anant Sadashiv. 1965. Education in Ancient India. Varanasi: Nand Kishore & Brothers.
- Chakrabarti, D.K. 1995. Buddhist sites across South Asia as influenced by political and economic forces. World Archaeology 27(2): 185-202.
- Mitra, D. 1971. Buddhist Monuments. Sahitya Samsad: Calcutta. ISBN 0-89684-490-0
- Tadgell, C. 1990. The History of Architecture in India. Phaidon: London. ISBN 1-85454-350-4
- Khettry, Sarita. 2006. Buddhism in North-Western India: Up to C. A.D. 650. R.N. Bhattacharya: Kolkata.
|
Monitoring Alberta's Diversity
Alberta’s geographic diversity—from the boreal forests in the north through to the grasslands in the south—supports an estimated 80,000 species. The vast majority of these have not been well studied and some remain completely undiscovered.
The work of ABMI monitoring crews—collecting samples and information for a wide range of species of plants and animals from across the province—helps to address this knowledge gap. To understand, and sometimes even put a name to, these samples, however, they are delivered to the ABMI Processing Centre at the University of Alberta to be classified and identified.
Since 2007, ABMI taxonomists and technicians have processed over 480,000 specimens, including lichen, bryophyte (moss and liverwort), aquatic invertebrate and mite samples. Many of the identified species represent new scientific records for the province, and occasionally new records for Canada. The work has even led to the discovery and description of species new to science. In 2013, six new mite species were described, including Oribatella abmi.
Taxonomy at the ABMI
In addition to processing field-collected samples (specimen and sample tracking, sorting & processing), the dedicated team at the Processing Centre also:
- Curate specimens
- Build taxonomic tools such as specimen reference collections and keys
- Develop taxonomic expertise and train new taxonomists
- Generate the ABMI’s species-level datasets
ABMI scientists use the detailed species-level dataset generated by the Processing Centre to determine relationships between human land use, habitat, and species abundance, and to calculate the ABMI’s Biodiversity Intactness Index.
The ABMI specimen reference collections are curated as part of the Royal Alberta Museum’s provincial natural history collection, which is intended to be available in perpetuity. This collection is extremely valuable for both current and future researchers and is a catalogue of the province’s vast, natural heritage for all Albertans.
More questions? Feel free to contact:
Dr. Tyler Cobb
Director, Processing Centre
|
TURN YOUR DATA INTO VITAL INFORMATION
- Qualitative Data Analysis Help
- Content analysis. This refers to the process of categorizing verbal/behavioural data in order to group, summarize and tabulate the data.
- Narrative analysis. This method involves narrations given by respondents. It takes into account the context of the respondents and their different experiences. It is, in essence, a reworking of the primary data collected by the researcher.
- Discourse analysis. A method of analysis of naturally occurring talk and all types of written text.
- Grounded theory. This method of qualitative data analysis starts with the analysis of a single case, giving rise to evidence that helps to develop a theory. Subsequently, new cases are examined to see if they can contribute to the theory.
- Quantitative Data Analysis Help
- Exploratory Data Analysis
- Multivariate Data Analysis
- Univariate Data Analysis and Statistical inferences
Statistical packages used for analysis – SPSS, MINITAB, AMOS, NVivo, EndNote, SmartPLS
SOME COMMON QUANTITATIVE ANALYSIS TOPICS
TOPIC 1 : ANALYSIS OF COVARIANCE (ANCOVA)
The Analysis of Covariance, also referred to as ANCOVA, is a powerful statistical procedure used in educational research to remove the effects of pre-existing individual differences among subjects in a study.
WHAT IT DOES
ANCOVA reduces the error variance by removing the variance due to the relationship between the covariate (for example, age) and the dependent variable (for example, knowledge of current events).
ANCOVA also adjusts the group means on the covariate, leading to adjusted means of the dependent variable (knowledge of current events).
WHAT ARE ITS ASSUMPTIONS
- Homogeneity of Variance
- Homogeneity of Regression
- Reliability of the Covariate
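For readers who want to try the same analysis outside SPSS, here is a minimal sketch in Python using pandas and statsmodels. The data frame, group labels and scores are invented for illustration; only the structure (one factor, one covariate, one dependent variable) mirrors the running example above.

```python
# A minimal ANCOVA sketch with statsmodels; all data and names are invented.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "group":     ["A"] * 5 + ["B"] * 5,
    "age":       [23, 31, 28, 40, 35, 22, 30, 27, 41, 36],
    "knowledge": [14, 18, 16, 22, 20, 11, 15, 13, 19, 17],
})

# C(group, Sum) applies sum-to-zero coding, which Type III sums of
# squares (the default in SPSS's GLM procedures) assume.
model = ols("knowledge ~ C(group, Sum) + age", data=df).fit()
print(sm.stats.anova_lm(model, typ=3))

# Covariate-adjusted group means: predict every case at the mean age.
adjusted = model.predict(df.assign(age=df["age"].mean()))
print(adjusted.groupby(df["group"]).mean())
```

The second print shows the covariate-adjusted group means, which is exactly the adjustment step described above.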
SOME GENERATED SPSS PROCEDURES AND OUTPUT
SPSS PROCEDURES TO OBTAIN SCATTER DIAGRAM & REGRESSION LINE FOR EACH GROUP
- Select Graphs, then Scatter. If you're using SPSS 16, it is Graphs, then Legacy Dialogs, then Scatter/Dot
- Make sure Simple is selected and then choose Define
- Move the dependent variable (i.e. knowledge of current events) to the Y Axis box
- Move the grouping variable to the Set Markers by box
- Click OK
[Note that this will give you the scatter diagram of all the groups together]
- Once you have done the above, double-click on the graph, which opens the SPSS Chart Editor
- Choose Chart and then Options, which opens the Scatterplot Options dialogue box
- Check the Subgroups box
- Click on the Fit Options button, which opens the Fit Line dialogue box
- Click on Linear Regression and ensure the box is highlighted
- In Regression Prediction, check the Mean box
- Click Continue, then OK
[This will give you the regression line of each of the groups separately]
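If SPSS is unavailable, a comparable chart (a scatter diagram with a separately fitted regression line per group) can be sketched in Python with numpy and matplotlib; the ages, scores and group names below are invented.

```python
# Scatter diagram plus a separately fitted regression line per group;
# the data and group names are invented.
import numpy as np
import matplotlib.pyplot as plt

groups = {
    "group A": (np.array([20, 25, 30, 35, 40]), np.array([12, 14, 15, 18, 20])),
    "group B": (np.array([21, 26, 31, 36, 41]), np.array([10, 11, 13, 15, 16])),
}

for name, (age, knowledge) in groups.items():
    plt.scatter(age, knowledge, label=name)
    slope, intercept = np.polyfit(age, knowledge, 1)  # least-squares fit
    xs = np.linspace(age.min(), age.max(), 50)
    plt.plot(xs, slope * xs + intercept)              # per-group regression line

plt.xlabel("Age (covariate)")
plt.ylabel("Knowledge of current events")
plt.legend()
plt.show()
```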
TOPIC 2 : CORRELATIONAL STUDY
In general, a correlational study is a quantitative method of research in which you have TWO or more quantitative variables from the same group of subjects, & you are trying to determine if there is a relationship (or covariation) between the TWO variables. Theoretically, any TWO quantitative variables can be correlated (for example, midterm scores & number of body piercings!) as long as you have scores on these variables from the same participants; however, it is probably a waste of time to collect & analyze data when there is little reason to think these two variables would be related to each other.
Your hypothesis might be that there is a positive correlation (for example, between the number of hours of study & your midterm exam scores), or a negative correlation (for example, between your level of stress & your exam scores). A perfect correlation would be r = +1.0 or r = -1.0, while no correlation would be r = 0. Perfect correlations almost never occur in practice. Although correlation can't prove a causal relationship, it can be used to support a theory, to measure test-retest reliability, etc.
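As a quick illustration of the statistic itself, Pearson's r can be computed in Python with scipy; the study-hours and exam-score values below are made up for the example.

```python
# Pearson's r for two variables measured on the same participants (invented data).
from scipy.stats import pearsonr

hours_of_study = [2, 4, 5, 7, 8, 10, 12]
midterm_scores = [55, 60, 62, 70, 75, 82, 90]

r, p = pearsonr(hours_of_study, midterm_scores)
print(f"r = {r:.2f}, p = {p:.4f}")  # r close to +1.0 means a strong positive correlation
```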
TOPIC 3 : MANOVA
MANOVA is the multivariate analogue of various univariate ANOVA experimental designs.
There are three variations of MANOVA:
- Hotelling's T²: Analogue of the two-group t-test situation. Involves one dichotomous independent variable and multiple dependent variables.
- One-Way MANOVA: Analogue of the one-way F situation. Involves one multi-level nominal independent variable and multiple dependent variables.
- Factorial MANOVA: Analogue of the factorial ANOVA design. Involves multiple nominal independent variables and multiple dependent variables.
Two major situations where MANOVA is used:
- There are several correlated dependent variables, and the researcher desires a single, overall statistical test on this set of variables instead of performing multiple individual tests.
- Exploring how independent variables influence some patterning of response on the dependent variables.
ASSUMPTIONS FOR MANOVA
Normal distribution: The dependent variables should be normally distributed within groups. Overall, the F test is robust to non-normality if the non-normality is caused by skewness rather than by outliers. Tests for outliers should be run before performing a MANOVA, and outliers should be transformed or removed. For multivariate procedures, the important requirement is multivariate normality: the joint distribution of the dependent variables should be normal. There is no direct test for multivariate normality, so it is customary to provide evidence of univariate normality; although this does not guarantee multivariate normality, data that comply with this requirement give greater confidence.
WHAT ARE ITS ASSUMPTIONS
- Homogeneity of Variances
- Homogeneity of Variances and Covariances
Two special cases arise in MANOVA: the inclusion of within-subjects independent variables, and unequal sample sizes in cells.
- Unequal sample sizes – As in ANOVA, when cells in a factorial MANOVA have different sample sizes, the sum of squares for effect plus error does not equal the total sum of squares. This causes tests of main effects and interactions to be correlated. SPSS offers adjustments for unequal sample sizes in MANOVA.
- Within-subjects design – Problems arise if the researcher measures several different dependent variables on different occasions. This situation can be viewed as a within-subject independent variable with as many levels as occasions, or it can be viewed as separate dependent variables for each occasion.
SPSS PROCEDURES FOR ASSESSING NORMALITY
There are several procedures for obtaining the graphs and statistics needed to assess normality; the EXPLORE procedure, for example, is the most convenient when both graphs and statistics are required.
- From the main menu, select Analyse.
- Click Descriptive Statistics and then Explore… to open the Explore dialogue box.
- Select the variable you require and click the arrow button to move this variable into the Dependent List: box.
- Click the Plots… command pushbutton to open the Explore: Plots sub-dialogue box.
- Click the Histogram check box and the Normality plots with tests check box, and ensure that the Factor levels together radio button is selected in the Boxplots display.
- Click Continue.
- In the Display box, ensure that Both is activated.
- Click the Options… command pushbutton to open the Explore: Options sub-dialogue box.
- In the Missing Values box, click Exclude cases pairwise (if it is not selected by default).
- Click Continue and then OK.
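Outside SPSS, a rough Python analogue of this Explore-based check can be run with scipy and matplotlib; the scores below are simulated, so the Shapiro-Wilk test should not reject normality.

```python
# A rough Python analogue of the Explore-based normality check, on simulated scores.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

scores = np.random.default_rng(1).normal(loc=50, scale=10, size=200)

w, p = stats.shapiro(scores)  # Shapiro-Wilk, one of the tests Explore reports
print(f"Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")  # p > .05: no evidence against normality

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
ax1.hist(scores, bins=20)          # histogram, as requested via the Histogram check box
stats.probplot(scores, plot=ax2)   # normal Q-Q plot, as in Normality plots with tests
plt.show()
```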
SPSS PROCEDURES FOR MANOVA
- Select the ANALYZE menu
- Click on General Linear Model and then Multivariate… to open the Multivariate dialogue box.
- Select the dependent variables and move them into the Dependent Variables box.
- Select the independent variable and move it into the Fixed Factor(s) box.
- Click on the Model… command pushbutton to open the Multivariate: Model sub-dialogue box.
- In the Specify Model box, ensure the Full Factorial radio button is selected and that Type III is selected from the Sum of squares: drop-down list.
- Click Continue
- Click on the Options… command pushbutton to open the Multivariate Options sub-dialogue box.
- In the Estimated Marginal Means box, under the heading Factors and Factor Interactions, click on the independent variable and move the variable into the Display Means for: box
- In the Display box, select the Descriptive statistics and Homogeneity tests check boxes.
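For a non-SPSS route, statsmodels also provides a MANOVA class in Python. The sketch below runs a one-way MANOVA (one multi-level nominal factor, two dependent variables) on invented data.

```python
# A minimal one-way MANOVA sketch with statsmodels; the data are invented.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.DataFrame({
    "group": ["A"] * 4 + ["B"] * 4 + ["C"] * 4,
    "dv1":   [5, 6, 5, 7, 8, 9, 8, 10, 12, 11, 13, 12],
    "dv2":   [2, 3, 2, 4, 5, 6, 5, 7, 9, 8, 10, 9],
})

# Two dependent variables on the left, one multi-level nominal factor on the right.
mv = MANOVA.from_formula("dv1 + dv2 ~ group", data=df)
print(mv.mv_test())  # Wilks' lambda, Pillai's trace, Hotelling-Lawley, Roy's root
```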
|
The debates around renewable sources of energy have been going on for at least a decade. After more than a century of relying almost entirely on fossil fuels, changing this paradigm in favor of renewable energy sources may seem difficult and unjustified to some people. However, the situation in which fossil fuel was the most efficient and the cheapest source of energy is far in the past; nowadays, it is obvious that using oil or gas is not only expensive, but also causes tremendous damage to the planet we live on. Many countries, such as Germany and Sweden, have already made significant efforts to fix this situation, employing numerous power plants working on renewable sources of energy; the most effective among these sources is geothermal energy. Using it has a number of benefits which should be considered by governments globally.
Geothermal energy—and in particular, its price—does not depend on the world's economic and political situation as strongly as fossil fuels do. Besides, extracting and transporting fossil fuel adds to the price of energy produced from it. In its turn, geothermal energy is much cheaper than conventional sources, involving low running costs and saving up to 80% of costs compared with fossil fuels (CEF).
Environmental friendliness is another benefit of geothermal energy. Being renewable, it produces less waste and pollution than conventional energy sources; the exact figures, however, depend on the systems used for producing geothermal energy. In open-loop geothermal systems, carbon dioxide makes up about 10% of air emissions, and an even smaller percentage of emissions is methane. Overall, open-loop geothermal systems produce 0.1 pounds of carbon dioxide and other harmful gases per kilowatt-hour of energy produced. In closed-loop systems, greenhouse gases are not released into the atmosphere, although a relatively small amount of such emissions can be produced during a geothermal power plant's construction. For comparison, a power plant producing electricity from gas releases up to 2 pounds of carbon dioxide per kilowatt-hour into the atmosphere, and power plants that run on coal produce an astonishing 3.6 pounds of greenhouse gases per kilowatt-hour of energy produced. As can be seen, even the less advanced open-loop geothermal systems are much cleaner and safer ecologically than power plants working on conventional energy sources (UCS).
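To put those per-kilowatt-hour figures side by side, here is a small illustrative calculation in Python; the emission rates come from the paragraph above, while the 1 GWh annual output is an arbitrary assumption chosen only to make the comparison concrete.

```python
# Illustrative side-by-side of the per-kilowatt-hour CO2 figures quoted above.
# The 1 GWh annual output is an arbitrary assumption for the comparison.
EMISSIONS_LB_PER_KWH = {
    "open-loop geothermal": 0.1,
    "natural gas": 2.0,
    "coal": 3.6,
}

annual_output_kwh = 1_000_000  # 1 GWh

for source, lb_per_kwh in EMISSIONS_LB_PER_KWH.items():
    tons = lb_per_kwh * annual_output_kwh / 2000  # pounds -> short tons
    print(f"{source}: {tons:,.0f} tons of CO2-equivalent per GWh")
```

On these figures, a gas plant emits roughly 20 times, and a coal plant roughly 36 times, what an open-loop geothermal plant emits for the same output.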
Low maintenance costs make yet another reason why using geothermal power plants should be a priority for many countries. Geothermal heat pump systems require 25% to 50% less energy for work compared to the conventional systems for heating or cooling. Besides, geothermal equipment is less bulky, so it requires less space: due to the very nature of geothermal energy (which is extracted from the bowels of the planet), geothermal power plants have only a few moving parts, all of which can be easily sheltered inside a relatively small building. This is not to mention that the life span of geothermal equipment is rather long: up to 50 years for pipes, and up to 20 years for pumps (GreenMatch). All this makes geothermal power stations easy to build and maintain.
As can be seen, using geothermal energy is more effective than using energy produced from conventional sources. Geothermal energy is cheaper and less harmful to the environment, and the power plants producing it are easier to build and maintain. These factors make geothermal energy a reasonable and effective alternative to energy produced from fossil fuels, so the governments of the world should consider converting their industries to run on geothermal energy.
- “Advantages of Geothermal Energy.” ConserveEnergyFuture. N.p., 20 Jan. 2013. Web. 26 Sept. 2016.
- “Environmental Impacts of Geothermal Energy.” Union of Concerned Scientists. N.p., n.d. Web. 26 Sept. 2016.
- “Advantages and Disadvantages of Geothermal Energy.” GreenMatch. N.p., n.d. Web. 26 Sept. 2016.
|
Interoperability refers to the ability of diverse systems and organizations to work together (inter-operate). The term is often used in the context of technical systems engineering, or alternatively in a broader sense that accounts for social, political, and organizational factors that impact system-to-system performance. "Interop" is also the name of several annual networking product trade shows.
Interoperability makes communication and data exchange between systems possible, and it is vital for system integration or collaboration of sub-systems. Today, system integration is rapidly taking place at various levels and in various fields. Interoperability becomes an issue when economic interests of developers and the public are in conflict. For example, the European Commission accused Microsoft of abusing market power by developing systems that were not interoperable with non-Microsoft systems.
One successful example of interoperability is the development of MARC standards (standards for MAchine-Readable Cataloging) across library communities. In 1968, the Library of Congress (U.S.) developed the standards for electronic bibliographic information, which became the international standard for inter-library operations.
The IEEE (Institute of Electrical and Electronics Engineers) defines interoperability as the ability of two or more systems or components to exchange information and to use the information that has been exchanged.
To exchange information or data, two or more systems must be compatible at some level. If information or data was generated in one information system and another system cannot read or retrieve it in a meaningful way, those two systems are not interoperable. Communication and exchange of data are possible only when two or more systems are interoperable.
Interoperability exists at various levels: hardware, software, technology, metadata, and others. In today's information environment, it is important to set common standards that all system producers can comply with in a given field.
Many organizations are dedicated to interoperability. All have in common that they want to push the development of the world wide web towards the semantic web. Some concentrate on eGovernment, eBusiness or data exchange in general. In Europe, for instance, the European Commission and its IDABC (Interoperable Delivery of European eGovernment Services to public Administrations, Businesses and Citizens) program issue the European Interoperability Framework. They also initiated the Semantic Interoperability Centre Europe (SEMIC.EU). In the United States, the government CORE.gov service provides an environment for component development, sharing, registration, and reuse.
In telecommunication, the term can be defined as:
- The ability of systems, units, or forces to provide services to and accept services from other systems, units or forces and to use the services exchanged to enable them to operate effectively together.
- The condition achieved among communications-electronics systems or items of communications-electronics equipment when information or services can be exchanged directly and satisfactorily between them and/or their users. The degree of interoperability should be defined when referring to specific cases (Federal Standard 1037C and from the Department of Defense Dictionary of Military and Associated Terms in support of MIL-STD-188).
In two-way radio, interoperability is composed of three dimensions:
- Compatible communications paths (compatible frequencies, equipment and signaling)
- Radio system coverage or adequate signal strength
- Scalable capacity
With respect to software, the term interoperability is used to describe the capability of different programs to exchange data via a common set of exchange formats, to read and write the same file formats, and to use the same protocols. (The ability to execute the same binary code on different processor platforms is "not" contemplated by the definition of interoperability.) The lack of interoperability can be a consequence of a lack of attention to standardization during the design of a program. Indeed, interoperability is not taken for granted in the non-standards-based portion of the computing world.
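As a minimal sketch of this format-level idea (the schema, field names, and functions below are invented for illustration), two independently developed programs can exchange data so long as both honor one agreed format:

```python
# Two hypothetical systems that share nothing but an agreed JSON schema.
import json

def system_a_export(records):
    """System A writes its data in the agreed interchange format."""
    return json.dumps({"version": 1, "records": records})

def system_b_import(payload):
    """System B, written independently, relies only on the shared format."""
    data = json.loads(payload)
    if data.get("version") != 1:
        raise ValueError("unsupported schema version")
    return data["records"]

payload = system_a_export([{"id": 1, "name": "sensor"}])
print(system_b_import(payload))  # [{'id': 1, 'name': 'sensor'}]
```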
According to ISO/IEC 2382-01, Information Technology Vocabulary, Fundamental Terms, interoperability is defined as follows: "The capability to communicate, execute programs, or transfer data among various functional units in a manner that requires the user to have little or no knowledge of the unique characteristics of those units."
Note that the definition is somewhat ambiguous, because the user of a program can be another program and, if the latter is a portion of the set of programs that is required to be interoperable, it might well be that it does need to have knowledge of the characteristics of other units. This definition focuses on the technical side of interoperability. On the other hand, it has also been pointed out that interoperability is often more of an organizational issue. Interoperability raises issues of ownership, labor relations, and usability. In this context, a more apt definition is captured in the term "business process interoperability."
Interoperability can have important economic consequences, such as network externalities. If competitors' products are not interoperable (due to causes such as patents, trade secrets, or coordination failures), the result may well be monopoly or market failure. For this reason, it may be prudent for user communities or governments to take steps to encourage interoperability in various situations. In the United Kingdom, for example, there is an eGovernment-based interoperability initiative called e-GIF. As far as user communities are concerned, Neutral Third Party is currently creating standards for business process interoperability.
New technologies are being introduced in hospitals and labs at an ever-increasing rate, and many of these innovations have the potential to interact effectively if they can be integrated properly. The need for "plug-and-play" interoperability (the ability to take a medical device out of its box and easily make it work with one's other devices) has attracted great attention from both health care providers and the health care industry.
Conditions in the biomedical industry are still in the process of becoming conducive to the development of interoperable systems. A potential market of interested hospitals exists, and standards for interoperability are being developed. Nevertheless, it seems that current business conditions do not encourage manufacturers to pursue interoperability. Only sixteen to twenty percent of hospitals, for example, use electronic medical records (EMR). With such a low rate of EMR adoption, most manufacturers can get away with not investing in interoperability. In fact, not pursuing interoperability allows some of them to tout the inter-compatibility of their own products while excluding competitors. By promoting EMR adoption, companies such as Intel hope to create an environment in which hospitals will have the collective leverage to demand interoperable products.
eGovernment interoperability refers to the collaboration of cross-border services for citizens, businesses and public administration. Exchanging data can be challenging due to language barriers, different specifications of formats and varieties of categorizations.
If data is interpreted differently, collaboration is limited and inefficient. Thus eGovernment applications must exchange data in a semantically interoperable manner.
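One hedged sketch of what such semantic alignment can look like, with invented national labels and a made-up shared code list standing in for real controlled vocabularies:

```python
# Each administration keeps its own labels; exchange happens via shared codes.
NATIONAL_TO_SHARED = {
    "de": {"Bildung": "EDU", "Gesundheit": "HLT"},
    "fr": {"éducation": "EDU", "santé": "HLT"},
}

def normalise(country: str, label: str) -> str:
    """Map a national category label onto the shared code list."""
    return NATIONAL_TO_SHARED[country][label]

# The same real-world concept now compares equal across borders.
assert normalise("de", "Bildung") == normalise("fr", "éducation") == "EDU"
```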
Interoperability is an important issue for law enforcement, fire fighting, EMS (emergency medical services), earthquake early warning, and other public health and safety departments, because first responders need to be able to communicate during wide-scale emergencies. Traditionally, agencies could not exchange information because they operated on widely disparate, incompatible hardware. Agencies' information systems, such as computer-aided dispatch (CAD) systems and records management systems (RMS), functioned largely in isolation, as so-called "information islands." Agencies tried to bridge this isolation with inefficient, stop-gap methods while large agencies began implementing limited interoperable systems. These approaches were inadequate, and the nation's lack of interoperability in the public safety realm became evident during the September 11, 2001 attacks on the Pentagon and World Trade Center (New York City) structures. Further evidence of a lack of interoperability surfaced when agencies tackled the aftermath of the Hurricane Katrina disaster in the U.S. in 2005.
In contrast to the overall national picture, some states, including Utah, have already made great strides forward. The Utah Highway Patrol and other departments in Utah have created a statewide data-sharing network using technology from a company based in Bountiful, Utah, FATPOT Technologies.
The State of Washington seeks to enhance interoperability statewide. The State Interoperability Executive Committee (SIEC), established by the legislature in 2003, works to assist emergency responder agencies (police, fire, sheriff, medical, hazmat, and so on) at all levels of government (city, county, state, tribal, federal) to define interoperability for their local region.
The U.S. government is making a concerted effort to overcome the nation's lack of public safety interoperability. The Department of Homeland Security's Office for Interoperability and Compatibility (OIC) is pursuing the SAFECOM and CADIP programs, which are designed to help agencies as they integrate their CAD and other IT systems.
The OIC launched CADIP in August 2007. This project will partner the OIC with agencies in several locations, including Silicon Valley. This program will use case studies to identify the best practices and challenges associated with linking CAD systems across jurisdictional boundaries. These lessons will create the tools and resources public safety agencies can use to build interoperable CAD systems and communicate across local, state, and federal boundaries.
In countries such as Japan, Taiwan, and Mexico, where earthquakes are frequent, Earthquake Warning Systems have been established. An Earthquake Warning System is a system of accelerometers, communication links, computers, and alarms devised for regional notification of a substantial earthquake while it is in progress. This is not the same as earthquake prediction, which is currently incapable of producing actionable event warnings. Japan has some of the most advanced systems in the world, one of which was put to practical use in 2006; its ability to warn the general public was activated on October 1, 2007. When the first signs of an earthquake are detected, warnings are immediately issued by the Japan Meteorological Agency. The warning is then transmitted to all key agencies in the area and broadcast through TV, radio, cellular phones, the internet, home intercoms, and other communication media.
The purpose of an Earthquake Early Warning is to mitigate the damage caused by an earthquake. Warnings allow people to take measures to protect themselves at home or at work. Since time is a crucial factor, all networks must be interoperable with maximum speed and efficiency.
Library and information science
In the latter half of the twentieth century, libraries conducted massive data migration from the traditional card system (in which bibliographic information about a book is written on an index card) to computerized information systems. The Library of Congress (U.S.) took the lead in setting the standard by starting the MARC Standards (standards for MAchine-Readable Cataloging) project in 1965. The Library of Congress, led by project leader Henriette Avram, completed the project in 1968. MARC standards provide the protocol by which computers read, interpret, exchange, and use bibliographic information. Furthermore, MARC standards also became the international standard. Thus, billions of bibliographic items of information in tens of thousands of libraries now exist in computer-based information systems with a uniform format.
Software Interoperability is achieved through five interrelated ways:
- Product testing
- Products produced to a common standard, or to a sub-profile thereof, depend on clarity of the standards, but there may be discrepancies in their implementations that system or unit testing may not uncover. This requires that systems formally be tested in a production scenario—as they will be finally implemented—to ensure they actually will intercommunicate as advertised, that is, they are interoperable. Interoperable product testing is different from conformance-based product testing as conformance to a standard does not necessarily engender interoperability with another product which is also tested for conformance.
- Product engineering
- Implements the common standard, or a sub-profile thereof, as defined by the industry/community partnerships with the specific intention of achieving interoperability with other software implementations also following the same standard or sub-profile thereof.
- Industry/community partnership
- Industry/community partnerships, either domestic or international, sponsor standard workgroups with the purpose of defining a common standard that may be used to allow software systems to intercommunicate for a defined purpose. At times an industry/community will sub-profile an existing standard produced by another organization to reduce options and thus make interoperability more achievable for implementations.
- Common technology and IP
- The use of a common technology or IP may speed up and reduce complexity of interoperability by reducing variability between components from different sets of separately developed software products and thus allowing them to intercommunicate more readily. This technique has some of the same technical results as using a common vendor product to produce interoperability. The common technology can come through 3rd party libraries or open source developments.
- Standard implementation
- Software interoperability requires a common agreement that is normally arrived at via an industrial, national, or international standard.
Each of these has an important role in reducing variability in intercommunication software and enhancing a common understanding of the end goal to be achieved.
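A toy sketch of the testing point above: both classes below would pass a conformance check against the same notional text standard, yet only the pairwise round trip demonstrates that they actually interoperate (all names here are invented):

```python
# Two independently written "products" implementing the same text standard.
class CodecA:
    def encode(self, text: str) -> bytes:
        return text.encode("utf-8")
    def decode(self, data: bytes) -> str:
        return data.decode("utf-8")

class CodecB:
    def encode(self, text: str) -> bytes:
        return text.encode("utf-8")
    def decode(self, data: bytes) -> str:
        return data.decode("utf-8")

def interoperable(producer, consumer, samples) -> bool:
    # Conformance of each product alone does not guarantee this passes:
    # optional features and divergent interpretations only surface pairwise.
    return all(consumer.decode(producer.encode(s)) == s for s in samples)

print(interoperable(CodecA(), CodecB(), ["hello", "naïve", "日本語"]))  # True
```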
Interoperability as a question of power and market dominance
Interoperability tends to be regarded as an issue for experts and its implications for daily living are sometimes underrated. The case of Microsoft vs. the European Commission shows how interoperability concerns important questions of power relationships. In 2004, the European Commission found that Microsoft had abused its market power by deliberately restricting interoperability between Windows work group servers and non-Microsoft work group servers. By doing so, Microsoft was able to protect its dominant market position for work group server operating systems, the heart of corporate IT networks. Microsoft was ordered to disclose complete and accurate interface documentation, which will enable rival vendors to compete on an equal footing (“the interoperability remedy”). As of June 2005 the Commission is market testing a new proposal by Microsoft to do this, having rejected previous proposals as insufficient.
Recent Microsoft efforts around interoperability may indicate a shift in their approach and level of commitment to interoperability. These efforts include the migration of Microsoft Office file formats to ECMA Office Open XML and several partner interoperability agreements, most notably their recent collaboration agreement with Novell.
Interoperability has also surfaced in the software patent debate in the European Parliament (June/July 2005). Critics claim that because patents on techniques required for interoperability are kept under RAND (reasonable and non-discriminatory) licensing conditions, customers will have to pay license fees twice: once for the product and, in the appropriate case, once for the patent-protected program the product uses.
- Institute of Electrical and Electronics Engineers, IEEE Standard Computer Dictionary: A Compilation of IEEE Standard Computer Glossaries (New York, NY: 1990).
- ISO, IEC, Proposed Draft Technical Report for: ISO/IEC xxxxx, Information technology; Learning, education, and training; Management and delivery; Specification and use of extensions and profiles. Retrieved October 11, 2008.
- David Talbot, A technology review, Massachusetts Institute of Technology.
- Japan Meteorological Agency, What is the Earthquake Early Warning? Retrieved October 10, 2008.
- Microsoft, Microsoft and Novell Announce Broad Collaboration on Windows and Linux Interoperability and Support Companies also announce a patent agreement covering proprietary and open source products. Retrieved October 10, 2008.
References
- Branscomb, Lewis M., and James Keller. Converging Infrastructures Intelligent Transportation and the National Information Infrastructure. Cambridge, MA: MIT Press, 1996.
- Faith, G. Ryan, Vincent Sabathier, and Lyn Wigbels. Interoperability and Space Exploration A Report of the Human Space Exploration Initiative at the Center for Strategic and International Studies. Washington, D.C.: HSEI, 2007. Retrieved October 10, 2008.
- "Inside PC Labs—There's a Renaissance Going on in the Wireless LAN Arena, but Interoperability Among Product Manufacturers Isn't Perfect yet." PC Magazine. 20 (5): 61.
- Japan Meteorological Agency. What is the Earthquake Early Warning? Retrieved October 10, 2008.
- Leebaert, Derek. The Future of the Electronic Marketplace. Cambridge, MA: MIT Press, 1998.
- National Institute of Justice (U.S.). Public Safety Communications and Interoperability. In Short, Toward Criminal Justice Solutions. Washington, D.C.: U.S. Dept. of Justice, Office of Justice Programs, National Institute of Justice, 2007.
- Panetto, Herve. Interoperability of Enterprise Software and Applications. Paris: Hermes Science Publ, 2005.
- Searle, Jonathan, and John Brennan. General Interoperability Concepts. Ft. Belvoir: Defense Technical Information Center, 2006. Retrieved October 10, 2008.
- Searle, Jonathan, and John Brennan. Interoperability Architectures. Ft. Belvoir: Defense Technical Information Center, 2006. Retrieved October 10, 2008.
- Talbot, David. A technology review. MIT. Retrieved October 10, 2008.
- Tennant, R. 2003. "Digital Libraries: The Engine of Interoperability." Library Journal 128: 33.
All links retrieved March 4, 2018.
- Simulation Interoperability Standards Organization (SISO)
- InterOP V-Lab
- University of New Hampshire Interoperability Laboratory - premier research facility on interoperability of computer networking technologies
- Microsoft Openness
|
Conflict resolution is one of the most important skills a child can learn. Learning how to deal with conflict is a core skill that your students need now and will use throughout their lives. For kids, these are tough everyday problems that never go away, problems they must learn to handle in order to protect themselves and build lifelong conflict resolution skills.
BeCool®: Upper Elementary
In these modules, students are taught effective and ineffective strategies for dealing with difficult feelings and difficult people.
- 5 modules
- BeCool poster
- 5 Teacher's Guides
|
The Physics of Baseball, Adair, 2002
The following summary is condensed from the chapter on The Swing of the Bat:
The process described, from the release of a good fastball pitch to the moment the bat crosses the plate, takes about 400 milliseconds. For comparison, an eye-blink takes 150 milliseconds. The timing required to begin the swing in time to meet the fastball is as follows:
The initial assembly of the information in the eye takes about 25 milliseconds. The initial information within the brain, not really a picture yet, takes another 20 milliseconds from the batter's first look at the ball. The batter then fills in additional information stored in his brain (i.e., the infield, the background) and constructs the picture in another 30 milliseconds. So in the time the fastball has travelled nine feet (75 milliseconds), the batter has an initial “picture” to work with. Only then can the batter begin thinking about swinging.
The batter’s brain compares the information with learned and retained patterns from other parts of the brain: think of the choices as a deck of cards that might be shuffled with certain ones on top based on the pitcher’s reputation and the observations from the dugout or on-deck circle. The batter has about 25 milliseconds to select the correct reaction: swing, don’t swing, what kind of swing, or maybe hit-the-deck. The decision is then relayed to the muscles over a 100 millisecond period. The muscles react, depending on the type of muscle, in 10 to 50 milliseconds.
The swing takes about 180 milliseconds, of which the first 30 are shifting the weight to the front foot.
The batter can make significant changes in the first 50 milliseconds of the swing (after the weight shift). In making these modifications, he uses information he gathers by “looking” at the ball for another 50 milliseconds and then “thinking” and “acting” in response to that look. The exceptional athletes who play the game can probably make small adjustments for another 50 milliseconds though at the end of that time the bat is traveling at about three-fourths of its final velocity.
Even if the skilled batter can use this late data, none of the subsequent information from seeing the ball over the last half of its flight can be used at all. If it weren’t psychologically upsetting, the batter could just as well close his eyes after the ball is halfway to the plate or, if it was a night game, management could turn out the lights—the batter would hit the ball just as well. [end of summary]
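As a rough tally of Adair's figures as quoted above (my arithmetic, with the 10-50 millisecond muscle response taken near its midpoint):

```python
# Adding up the millisecond budget described in the summary above.
budget_ms = {
    "eye assembles the image": 25,
    "first rough image in the brain": 20,
    "brain fills in the picture": 30,
    "select the reaction": 25,
    "relay decision to the muscles": 100,
    "muscle response (10-50, midpoint)": 30,
    "the swing itself": 180,
}
total = sum(budget_ms.values())
print(f"total: {total} ms against ~400 ms of pitch flight")  # 410 ms
```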
So, what does that tell me about certain Nationals at the plate?
Some random observations by me, based on the above analysis and game observations:
1. LASIK may help the batter by giving him more accurate data during that first 45 milliseconds, which can then lead to a more accurate “shuffle” of the deck of possible pitches in that crucial 25 milliseconds.
2. The more muscles that are involved in the swing, the more difficult it is physically to send adjusted information to the muscles and adjust the swing once the initial signals have been sent.
3. Experience matters in pitch recognition and response... up to a point. At some point, the eye-to-brain-to-muscle reaction time slows down enough that the batter is no longer hitting at a major league level. Changing venues entirely, Jeopardy for Seniors is played separately because no matter how much a slightly older person knows, they cannot hit the button as fast as a younger competitor.
4. Jayson Werth might be right about hitting a major league pitch being the hardest job in the universe. Within the roughly 150 milliseconds of the swing proper, if the batter is 7 milliseconds late, the ball will be foul past first; 7 milliseconds early, foul past third. That's not much margin for error.
5. If the pitch moves enough in the last few feet, the information literally cannot get from eye to brain to muscle to action in time for the batter to react. Advantage pitcher—and that explains why occasionally the batter swings at pitches which arrive at the plate really far out of the strike zone, or swings so hard they fall down.
6. Certain players seem to do the “shuffle the deck of cards” very quickly, and don’t (or can’t) use that extra 50 milliseconds to adjust their swing: they stick with the initial “read” of the ball. That may or may not be accurate, depending on the player’s experience and the expertise of the pitcher.
7. The additional time that a curve ball (or knuckleball!) travels gives the batter additional time to evaluate the pitch and make a better decision, but that is offset by the potential for late movement on the pitches. That’s why we pay Scherzer $210M, and Dan Haren is retired.
|
Brand awareness is the extent to which a brand is recognized by potential customers, and is correctly associated with a particular product. Expressed usually as a percentage of the target market, brand awareness is the primary goal of advertising in the early months or years of a product’s introduction. Brand awareness is related to the functions of brand identities in consumers’ memory and can be reflected by how well the consumers can identify the brand under various conditions. Brand awareness includes brand recognition and brand recall performance. Brand recognition refers to the ability of the consumers to correctly differentiate the brand they previously have been exposed to. This does not necessarily require that the consumers identify the brand name. Instead, it often means that consumers can respond to a certain brand after viewing its visual packaging images. Brand recall refers to the ability of the consumers to correctly generate and retrieve the brand in their memory. A brand name that is well known to the great majority of households is also called a household name.
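Since awareness is usually expressed as a percentage of the target market, the framing above reduces to simple arithmetic; a tiny sketch with invented survey numbers:

```python
# Toy calculation of the "percentage of the target market" framing above.
surveyed = 500          # hypothetical sample drawn from the target market
recognized_brand = 310  # respondents who correctly identified the brand
brand_awareness = 100 * recognized_brand / surveyed
print(f"brand awareness: {brand_awareness:.0f}%")  # 62%
```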
|
Nutrients are usually divided into five classes: carbohydrates, protein, fats (including oils), vitamins and minerals. We also need water and fibre, but these are not usually counted among the main five nutrients. Most foods contain a mixture of nutrients (pure salt and sugar are the exceptions), but it is easier to classify them by the main nutrient they provide.
Meat supplies protein, fat, some B vitamins and minerals (mostly iron, potassium, zinc and phosphorus). Fish contains all of the above, plus vitamins A, D and E, as well as iodine. All these nutrients can be obtained by vegetarians from other sources.
Protein is easy to replace in a vegetarian diet. Nuts, seeds, pulses, grains, soya, dairy products (apart from butter and cream) and free-range eggs are all rich in protein. Women need 45g of protein a day and men need 55g (more if very active). You may have heard about vegetarians having to balance amino acids in their diet. Amino acids are the units from which protein is made, and there are 20 different ones in all. It is very involved, but as with everything, as long as your diet is varied and balanced there is no need to worry.
There are three main types of carbohydrate, which give us our main source of energy. Simple carbohydrates are found in fruit, milk, and table sugar. Refined sugar is best avoided as it provides energy without any fibre, vitamins or minerals. Complex carbohydrates are found in cereals, grains and some root vegetables like potatoes and parsnips. A healthy diet should contain plenty of these starchy foods, as a high intake of complex carbohydrates is now known to benefit health. Fibre can be found in unrefined or wholegrain cereals, fruit (fresh or dried) and vegetables. Foods rich in dietary fibre, like wholemeal bread and brown rice, are the best carbohydrates to eat, as they also contain essential B vitamins.
Fats and Oils
Too much fat can be bad for us, but a little is needed to keep our tissues in good repair, carry vitamins and make hormones. Fats are made of small units, and two essential fatty acids, linoleic and linolenic acid, are widely found in plant foods. A benefit of a vegetarian diet is the use of vegetable fat for cooking, as this tends to be unsaturated.
Vitamins are needed only in small quantities in the diet, but needed they are. Vitamin A is found in red, orange or yellow vegetables like carrots and tomatoes. It is also added to most margarine. B vitamins are found in yeasts and whole cereals, nuts and seeds, pulses and green vegetables. B12 is the only exception: it is not present in plant foods, but can be found in dairy products and free-range eggs. Vitamin C is found in fresh fruit, salad, green vegetables and potatoes, and helps the body absorb iron. Vitamin D is obtained from sunlight, but is also added to most margarine and is present in milk, cheese and butter. Vitamin D helps the body to absorb calcium. Vitamin E is in vegetable oils, cereals, eggs and wholegrains. Vitamin K is also in fresh vegetables and cereals.
Minerals perform a variety of jobs in the body. Calcium is important for healthy teeth and bones and is found in dairy produce, leafy green vegetables, bread, nuts and seeds, cheese and dried fruit. Iron is needed for red blood cells and is found in leafy green vegetables, eggs, molasses, lentils and pulses. Zinc helps with the immune system and enzyme reactions and again is found in green vegetables, cheese, sesame and pumpkin seeds, lentils and wholegrain cereals. Iodine is present in dairy products and some vegetables, though the amount depends on the richness of the soil in which they are grown.
You can see from this why research has shown that being a vegetarian can be the healthier option; so next mealtime, why not try a meat-free recipe?
|
The Chaturmukh temple in Madhya Pradesh
The Chaturmukh temple in Madhya Pradesh has one of the largest stone-carved lingas in the country. India has a rich history of architecture, expressed in heritage monuments, temples and palaces spread across its different states. Some are very popular, while others are lost; some are preserved and some are not. The architectural history of these temples is unusual and complex, and it plays a very significant role in Indian history. Today, we will explore the Chaumukhnath temple, one of the earliest surviving stone temples in central India.
The Chaumukhnath temple, also referred to as the Chaturmukh Mahadeva temple is situated in the Nachna village of Panna district of Madhya Pradesh, India. The temple illustrates a North Indian style of Hindu temple architecture.
The temple is named after the colossal linga inside it, whose surface is carved with four faces looking in the four cardinal directions; with the linga itself counted as the fifth, implicit face, these are believed to express the five aspects of Shiva, namely creation (Vamadeva), maintenance (Tatpurusha), destruction (Aghora), beyond space (Isana), and introspection (Sadyojata). The linga is approximately 4.67 feet (1.42 m) high.
The walls of the temple carry images of divine attendants, and each corner has the image of a Dikpalaka. There are five storeys, with carvings of ganas and river goddesses on the windows and doors.
The temple has a square plan and a door design similar to that of the Parvati temple, but is otherwise very different in style. The building consists of two concentric squares, 16.75 feet (5.11 m) on the outside and 11.75 feet (3.58 m) inside. The shikhara curves slightly as it rises towards the sky, to a total height of about 40 feet (12 m). This temple too stands on a jagati platform, but unlike the Parvati temple, it has stairs to enter from multiple directions.
Its dating is uncertain, but on the basis of comparisons with structures that can be dated, it has variously been assigned to the 5th- or 6th-century Gupta Empire era.
|
At the very beginning of this course, we learned of the Scythians, a group of non-Slavic people living in what we know as Russia. Russia's origins were never purely "Russian", for archeological evidence revealed many groups such as the Scythians in early Rus'. Fast forward four hundred years and Russia had expanded vastly. It controlled the Polish kingdom on its west side and, to the south, parts of Middle Asia. With this expansion of Russian territory came new demographics, in certain places outnumbering Russians in various towns and cities. It was at this time that the Census of 1897 started to collect demographic data on who lived in the Russian Empire.
The census revealed a multitude of ethnic, linguistic, and religious groups, such as the Poles, Jews, Muslims, Tartars, Finns, and Germans, not to mention persons of Ukrainian or Belarusian ethnicity who were considered to be Russian. Usually one's ethnicity was correlated with native tongue and religion; for example, Poles and Lithuanians were exclusively Roman Catholic. This allowed the Russian state to create a unified identity for many of these groups because of the correlation between language and religion. However, this was more difficult to determine among populations in Middle Asia because of how common bilingualism was in those areas.
For the most part, a large proportion of the Russian population was not affected by these various ethnicities and religions. Many Russians lived in rural areas while non-Russians were concentrated in cities. Even though Russia had control over kingdoms such as Poland and Finland, by replacing only the autocratic figures and the military it allowed the native nobility to live on in rural areas; thus the populations at the borders of Russia remained non-"russified". Hence the socio-economic standing many had held under the old rule persisted under Russian rule. For example, even if a noble Polish man were poor, the Russian empire allowed his status to remain, largely in areas concentrated with natives of the former kingdoms. As Russian modernization and industrialization occurred, Russians began to migrate and take on industrial jobs. At this point Russians were entering the spaces of foreigners and taking hold of the industrial jobs provided.
How is nobility seen as universal across ethnicities? Why do you think non-Russians were not subject to the Table of Ranks system?
Kappeler, Andreas. (Translated by Alfred Clayton) “The Late Tsarist Multi-Ethnic Empire between Modernization and Tradition.” Longman, 2001. Chapter 8
|
Auxiliaries
There are four verbs which are sometimes auxiliary verbs: be, do, get, and have. They are used mainly to add meaning to a main verb, for example by forming a continuous tense, a passive, a negative, or an interrogative. They are also used to add meaning to a clause, for example by helping to form question tags.
Like other verbs, auxiliaries have tenses, some of which are formed with other auxiliaries. For example, in the clause She has been singing for two hours, the auxiliary be is used in the pattern AUX -ing, that is, been singing. However, the auxiliary be itself has a tense formed by the auxiliary have in the pattern AUX -ed, that is, has been.
[DIAGRAM HERE]
Another example is the clause Food was being thrown across the room, where the auxiliary be is used in the passive pattern AUX -ed, that is, being thrown. However, that auxiliary itself has a tense formed by the auxiliary be in the pattern AUX -ing, that is, was being. The verb group in this clause therefore contains two forms of the auxiliary verb be.
[DIAGRAM HERE]
Looking at this from another point of view, when an auxiliary is followed by an '-ing' form, an '-ed' form, or a to-infinitive form, that form may itself be that of an auxiliary verb which is followed by another verb. For example, in the clause She has been arrested, the auxiliary have is used in the pattern AUX -ed, that is, has been. However, be is also an auxiliary, used here in the pattern AUX -ed, that is, been arrested.
[DIAGRAM HERE]
In this chapter, we use the terms '-ing' form, '-ed' form, and to-infinitive form to indicate either a single main verb with that form, such as liking, liked, or to like, or an auxiliary with that form together with the main verb following it, such as being followed, been followed, or to be followed.
Auxiliary verbs are made negative by putting not after them, as in She is not swimming, They did not know, or He has not written to you. In spoken English and informal written English, not is often contracted to n't and is added to the auxiliary: He hasn't written to you.
The interrogative of verb groups formed with auxiliary verbs is made by placing the Subject after the auxiliary verb, as in Is she swimming? or Has he not written to you? If the n't form of the negative is used, the Subject comes after that: Hasn't he written to you?
Auxiliary verbs have the following patterns:
- AUX -ing: He is swimming.
- AUX to-inf: She is to arrive at six.
- AUX neg inf: Don't go!
- AUX n inf: Did they remember?
- AUX inf: Do come in.
- AUX -ed: She got knocked down.
- AUX: She's probably earning more than I am.
- cl AUX n: She hasn't finished, has she?
- so/nor/neither AUX n: ...so do I.
- AUX n -ed: Had I known...
|
Early US History
September 18, 2018
Study Guide for EXAM 1
What motivated each nation to create colonies?
Spanish: colonized America in order to search for gold and silver, found silver and gold in the Inca and Aztec Empires
French: wanted access to the fur trade, wanted to spread Catholicism
Dutch: wanted access to the fur trade
English: freedom of religion, better access to trade, more natural resources
Reformation > Americans
French: middle ground/assimilation
2. Native Americans
*compete for resources
3. Geography/ Resources
New England warfare
*fur trade (inter waterways)
The 3 Captivity Narratives:
Treatment corresponded with gender and role
Religion used for survival
Finds way back to colony
Several months captive
Assimilation led to end of torture
Sondies (no burial)
Domestic work led to assimilation
11 weeks captive
Traded back to colony
Adopted Native American trends/practices
Short Answer Identifications
1) Briefly outline how humans were created in each of these stories.
CHEROKEE: The reader states, “Men came after the animals and plants. At first there were only a brother and sister until he struck her with a fish and told her to multiply, and so it was.” OTTAWA: According to the Ottawa story, “the great hare caused the birth of man from their corpses, as also from those of the fishes which were found along the shores of the rivers.” Not only was man created from the hare, but over time, man and man’s culture would be based off of the animal that created them.
HAUDDENOSAUNEE: The Hauddenosaunee story states that, “Sky Woman gave birth to twins. The first born became known as the Good Spirit.” Man was created by the Good Spirit by using red clay.
2) What common themes or elements do these stories share? In particular, what do you notice about the role of women? What roles do animals and the natural world play in the processes described? Who or what has power? Where is power coming from? According to the Cherokee story, “When the world grows old and worn out, the people will die and the cords will break and let the earth sink down into the ocean, and all will be water again.” The natural world started out flat and began with some higher being who creates the world. It took the creator 7 nights before animals were given power, and after the animals and plants were created, the men and women were created. This story has the women being used solely as a child maker, where she would birth a child every seven days. Over time, the woman’s rights were restricted and she was forced to only birth one child during a year. Both the Ottawa and Hauddenosaunee believed that man’s origins came from various animals.
3) What evidence do you find in these stories that European culture influenced Native American beliefs about creation? Cite one specific passage as evidence. According to the Ottawa story, “some of the savages derive their origin from a bear, others from a moose, and others similarly from various kinds of animals; and before they had intercourse with the Europeans they firmly believed this, persuaded that they had their being from those kinds of creatures whose origin was as above explained.”
Good and evil = God & Devil
4) Finally, from your analysis of these documents (which were all originally passed among Indian peoples orally and not as written texts), what would you say are the strengths and weaknesses of the oral tradition as historical source material? Can we learn anything about pre-European contact Indian life and thought from these stories? If so, what do these stories tell us?
The stories help show that, although they’re all from different backgrounds and cultures, their beginnings are all somewhat similar: each starts with the creation of the world, then the animals, and finally man.
Time and method of telling
1) When is this written and why is it written?
It was written in 1493 in order to tell the tales of his voyages, whom he met, and the places he went.
2) How does he describe the Native people he meets?
The natives were quick to take flight at first because they were fearful and timid by nature; however, they were also very honest, simple and humble. They weren’t materialistic at all. They were very generous.
3) What clues does he give about how he's categorizing those he meets or hears about? (Note some specific passages in the reading)
“they are by nature fearful and timid. Yet when they perceive that they are safe, putting aside all fear, they are of simple manners and trustworthy, and very liberal with everything they have.”
“These people practice no kind of idolatry; on the contrary they firmly believe that all strength and power, and in fact all good things are in heaven”
“Nor are they slow or unskilled, but of excellent and acute understanding” “In all these island there is no difference in the appearance of the people, nor in the manners and language, but all understand each other mutually… each man is content with only one wife, except the princes or kings, who are permitted to have twenty. The women appear to work more than the men.”
“island named Charis, people who are considered very warlike by their neighbors. These eat human flesh. The said people have many kinds of rowboats, in which they cross over to all the other Indian islands, and seize and carry away everything that they can.”
4) Why does he want to return to America?
To show proof of what he has seen and what others can’t imagine. He wants people to know that these are not fables, that these islands and people do in fact exist.
1) When is this written and why is it written?
This was written in 1520 in order to tell of the great city he planned on conquering.
2) How does he describe the Aztec capital city? What does he notice about it and why are those things important? (Note some specific passages in the reading)
“This great city contains a large number of temples, or houses, for their idols, very handsome edifices, which are situated in the different districts and the suburbs; in the principal ones religious persons of each particular sect are constantly residing, for whose use, besides the houses containing the idols, there are other convenient habitations.”
“This city has many public squares, in which are situated the markets and other places for buying and selling.”
“if the inhabitants of the city should prove treacherous, they would possess great advantages from the manner in which the city is constructed, since by removing the bridges at the entrances, and abandoning the place, they could leave us to perish by famine without our being able to reach the main land”
These are all very important and essential because he cares about religion, money and power.
3) Can you detect a tension or contradiction here? He calls Montezuma and the Aztecs barbarous, but notes how admirable their civilization is. How do you explain that? He believes them to be barbarous solely because of the fact that they are not knowledgeable of God. This is uncalled for. Just because they have different beliefs and aren’t aware of other religions and Cortes’ beliefs does not make them barbarous; instead it makes him judgmental. I believe he admires their civilization because of how large and wondrous it is. It’s filled with so many things that are new to him, he can only help but appreciate it.
Las Casas "Devastation of the Indies"
1) When is this written and why is it written?
This was written in 1552 in order to express his desire for better treatment of the Native Americans.
2) How does he describe Native peoples? Why does he return to the subject of their religious beliefs multiple times? (Note specific passages in the reading) He describes them as tender imbeciles that are incapable of doing hard work. “the Inhabitants of the Island of Hispaniola, in their own proper Idiom, term Hammacks. The Men are pregnant and docible. The natives tractable, and capable of Morality or Goodness, very apt to receive the instill'd principles of Catholick Religion; nor are they averse to Civility and good Manners”
3) What kind of atrocities does Las Casas report? How does he describe the Spanish? Why is he so critical of his own people? What's wrong with what they are doing? (Note specific passages in the reading)
They took babies from mothers, they bashed the brains of innocents, tore women and children alive into pieces, behead men, and took children and women as slaves. He describes them as violently brutal, godless, offensive, defensive.
“They snatcht young Babes from the Mothers Breasts, and then dasht out the brains of those innocents against the Rocks”
“the Spaniards first attempted, the bloody slaughter and destruction of Men first began: for they violently forced away Women and Children to make them Slaves”
1) What is the middle passage and how does Equiano describe it? Who is supposed to read these descriptions and what emotions are they supposed to feel as they read?
All of their women stayed behind. “The first object which saluted my eyes when I arrived on the coast, was the sea, and a slave ship, which was then riding at anchor, and waiting for its cargo.” The men in charge were white men with red faces, long hair and evil looks. They spoke a language different from his. This letter was written to tell readers of the horrors of slavery. “I even wished for my former slavery in preference to my present situation, which was filled with horrors of every kind.” It’s supposed to make them feel empathy for the slaves and dread that things like that happened. The men would be whipped or flogged if they didn’t eat or if they tried to jump overboard to escape slavery.
2) Thinking about the runaway slave ads, what information do these ads give us about the health, wellbeing, physical condition, and appearance of VA’s enslaved peoples? How does this information help us to better understand the lives of slaves? (JOHN STITH) mulattoe man slave,5’8 5’9, thin faced, bushy hair, grin when speaks, old clothes, tried escaping a lot and resulted in getting handcuffed, and getting an iron neck collar (ROBERT MUNFORD) slave in Mecklenburg county red eyes, bow legged, short, branded on the right cheek R and M on left
(GEORGE NOBLE) Gibb is a slave was 6’, knock kneed, flat footed, right knee bent, whipping scars on back Robin another slave 6’, stout, film over an eye, sore on a shin Dinah another slave old female, fat, almost 6’, stumpy thumb
(MARY CLAY) Jude, slave, mulatto, 30, only has one eye, long black hair, scar on elbow, scars on face
(JOSHUA JONES) slave Ben, 5’6, 35, carpenter, rotten teeth
(WILLIAM GREGORY) slave Peter, 44, black, slim, cut teeth
(PETERFIELD TRENT) Peter Brown, painter, 35-40 yrs, 5'8-5'9, dark complexion, slim, thin face, missing some front teeth; Walton, negro, 23, light complexion, smooth skin, decaying top teeth, short hair
(CHARLES GRYMES) Johnny, negro man, 22, 5’8
(JOHN SCOTT) tom, negro, short, full eyes, knock kneed
(GABRIEL JONES) sam, negro, 5’55’6, broad face
(DAVID WALKER) Jemmy, dark mulatto man, 5'9-5'10, large feet, long middle toes, part of a front tooth missing
(EDWARD CARY) Kate, mulatto negro woman, 18, 5'9-5'10, speaks smoothly
3) What evidence do you see of owners’ attitudes towards their slaves in these ads? What about in the other readings?
They will pay a hefty amount of money to anyone who finds their slaves. Some will pay to have the slave found and killed and the head brought back. The loss of a slave would cause them serious economic harm.
4) What skills did these runaways possess? Why might these skills be important during the colonial period?
Singers of hymns, preachers, worked with horses, raced horses, dancers, artful, writer, card playing, cockfighting, reading, painter, carpentry, maid, talkative, strong drinker, spin, weave, sew, iron, well, mechanics, coopers, masons, smiths, wheelwrights
5) What evidence do the ads offer of slave resistance?
“First, there is the pure fact that they ran away: this was a major act of resistance, and required a great deal of courage. Most were not successful.”
7) Turning to Fithian’s journal, how does he describe Virginia society, particularly its planter elite? What qualities characterize their daily lives and how do they contrast with the lives of slaves?
“The Colonel, invented this Day a method for finding the difference of the value of money in this Province and in Maryland.” In this society, those of high class have several servants. “The family is invited to dine with Mr Turburville—Mr & Mrs Carter, Miss Priscilla & Nancy with three Servants went from Church—Ben, Bob, Miss Fanny, Betsy & Harriot with two Servants cross'd the River—Miss Sally with Tasker & one Servant rode in a Chair—Dined with us Captain Dennis, of the Ship Peggy; Dr Steptoe; & Mr Cunningham.” Those with money were considered to be gentleman. The gentleman would conduct their business at church. While the poor class tended to be full of negroes, servants and slaves. They would celebrate on Saturdays because it was their day of pleasure and amusement, they would be dressed festively and be full of cheer. 8) According to Fithian, how is Virginia different than his home colony of New Jersey? “In this place I think it needful to caution you against hasty & illfounded prejudices. When you enter among a people, & find that their manner of living, their Eating, Drinking, Diversions, Exercise &c, are in many respects different from anything you have been accustomed to, you will be apt to fix your opinion in an instant, & (as some divines deal with poor Sinners) you will condemn all before you without any meaning or distinction what seems in your Judgment disagreeable at first view, when you are smitten with the novelty. You will be making ten thousand Comparisons.” On Sundays, they dress differently. Also he normally would be in bed by 10 on a Sunday. In NJ, gentleman associated with farmers and mechanics. They don't care about ranks, they actually want equality of wealth among all inhabitants.
|
Can you build a bridge that holds 100 pennies, using 1 sheet of paper and up to 5 paper clips?
A bridge must support its own weight (the dead load) as well as the weight of anything placed on it, like the pennies (the live load). Your paper bridge must span 20 centimeters (about 8 in.). The sides of your bridge will rest on two books and cannot be taped or attached to the books or the table.
What You Will Need
1 sheet of paper (plus a new sheet for each redesign)
5 paper clips
2 books or blocks
at least 100 pennies or other small weights
Make a Prediction
Describe how you think the bridge should be constructed in order to support its dead load plus the live load of the pennies.
Try It Out
1. Discuss possible ideas with your partner before you start building. What can you do to the paper to make it stronger? When you have decided on a design, construct your bridge.
2. Place the bridge across two supports that are 20 cm apart. Remember that the space below the bridge must be clear to allow boats to pass!
3. To test your bridge, load it with pennies one at a time, until it collapses. Record how many pennies your bridge supported.
Describe how well your bridge supported its dead load and the live load you placed on it. Was the bridge as strong as you thought it would be? Where did it fail?
Build on It
Redesign your bridge and test it again, using a new sheet of paper. How does your second attempt compare? How can engineers test their plans for building a full-size bridge?
Is there a difference in the load your bridge can hold if you put the load in the center of the bridge compared to spreading it out along the bridge? Make a prediction and test it.
|
When you think of play, the actual nature of playing, you don’t associate it with playmates who can weigh upwards of 70 kg and come sporting sharp claws and a mouthful of large canines. Yet this is what playing involves when you are part of a wolf pack. With such a high risk of injury, it is no wonder that scientists have long sought to understand exactly why wolves play. Recently, research out of the University of Veterinary Medicine, Vienna, has investigated that very question by recording the playful interactions of both puppy and adult grey wolves (Canis lupus).
Despite play being widespread throughout the animal kingdom, the true function of the behaviour is relatively unknown. The most accepted theory regarding animal play is that it is crucial both for the strengthening of social bonds and for the establishment and maintenance of dominance relationships between group members. A past investigation of gelada baboons and Tonkean macaques found that two individuals who engage in more playful interactions are more affiliative towards each other outside of the play context. Similarly, in spotted hyenas the frequency of playing was correlated with a decreased amount of displayed aggression, whilst an investigation of yellow-bellied marmots supported the dominance assessment hypothesis: that individuals within a hierarchy will use play to examine the physical abilities of opponents, allowing them to gain a competitive advantage without risking serious injury.
One such animal that lives in a hierarchical social group is the wolf. As packs are characterized by cooperation, high social cohesion and dominance relationships, they are in many ways the ideal species in which to study play — as the latest research demonstrates.
The wolves used in the latest study originated from North America but were born and raised at the Wolf Science Centre, Austria. A total of 26 hours of video footage was captured of a puppy wolf pack in 2009 and of a mixed-age group with a previously established hierarchy in 2012. This allowed researchers to analyse the benefits of playing in a puppy pack and then how play changed as juvenile wolves became part of an adult pack.
Two key questions were investigated: Do pairs that play more have fewer aggressive interactions outside the play context? Does more relaxed play lead to more affiliative behaviour? In addition, the scientists wanted to find out whether competitive playing was correlated with the rank of an individual within the pack’s hierarchy.
Researchers found the more time that each pair of wolves engaged in play, the fewer aggressive interactions were observed within the pack. Similarly, the more relaxed play that a wolf experienced the more affiliative their relationship was with others. It was also found that those individuals who were ranked closely in the group’s hierarchy tended to engage more in competitive play. These results allowed the researchers to begin to define and discuss the possible functions of play for both puppy and adult wolf packs.
One of the benefits of playing with different pack members is a reduction in aggressive interactions: this not only maintains group cohesion but also reduces the potential for serious injury. The argument that play strengthens social bonds between wolves is further supported by the finding that the longer a wolf spent in relaxed play, the friendlier its behaviour was.
This research also supported the dominance assessment hypothesis. Observed only in the puppy pack, aggressive play was positively correlated with aggressive interactions outside of play, and pairs with less clearly established dominance ranks spent more time in competitive play. Scientists have therefore suggested that play in wolf puppies may help to define their hierarchical rank and establish close social bonds between pack members. This is important for more effective future cooperation in hunting, territory defence and pup rearing.
Overall, this investigation found more significant results than previous studies because the sample included wolves of various age groups and observed play between siblings. This demonstrates the importance of studying play in terms of ontogeny, i.e. how behaviour develops as an animal matures. As for the function of play, the difference between relaxed and competitive play clearly matters within the wolf pack: play can strengthen the social bonds between individual wolves, and it can also help determine where a wolf sits in the group's hierarchy. This behavioural variety demonstrates the value of studying play across species, with fascinating results emerging about its true function.
Cafazzo, S., et al. (2018). "In wolves, play behaviour reflects the partners' affiliative and dominance relationship." Animal Behaviour, 141, 137–150.
|
A new approach has been successfully initiated at the pre-primary and primary levels.
- The alphabet and numbers are taught through phonics and songs in a competitive environment, with interactive multimedia techniques. Children are taken on outings, which help them understand concepts and express themselves better.
Teaching is tailored to the needs of the students by involving them in the learning process to maintain their interest. Some of these methods are as follows:
- Dramatization
Students are dressed up to enact various roles as per the demands of the lesson or chapter. The class that performs well is permitted to enact the scene on stage, while the other sections learn by watching them.
- Role Play
Students dress up to depict various characters according to the subject matter. For example, a student journalist interviews his colleagues dressed up as Maharashtrian farmers to explore their way of life and food habits.
- Experiments
Experience is the best form of learning. A very practical approach is taken by conducting simple experiments, which go a long way in imprinting never-to-be-forgotten concepts on young minds.
- Class activity
Children are involved in the learning process on a group or individual basis. They bring in various materials as needed and assemble them in class under the teacher's guidance, with the aim of helping them understand a concept.
- Yoga
Children begin their day with 10-15 minutes of yoga. Laughter therapy has also been used, with positive results.
- Home activity
These are simple independent activities for children, which can be accomplished without any intervention by parents.
The results are encouraging: children take greater interest in their studies, absenteeism is lower, the children are more confident, and their hidden talents are revealed. The burden of homework and exams is also reduced to a large extent.
|
Hepatitis A and B are caused by viral infections of the liver. The hepatitis A virus breeds in waste matter from the bowel and is common where sanitation is poor; it is passed mainly in contaminated food and drink, less usually by sexual contact, and more rarely by transfusions of infected blood. Hepatitis is on the increase, probably owing to more foreign travel. When visiting areas with poor sanitation, observe strict personal hygiene, drink bottled water, avoid ice cubes, and avoid anal and oral sexual contact.
The symptoms of both A and B are the same: fever, nausea, headache, fatigue, loss of appetite, and chills. Jaundice shows as a yellow tinge to the skin, fingernails, and whites of the eyes about a week later. Urine can be dark in colour and stools almost whitish. A few people are asymptomatic. With hepatitis A, the symptoms are usually mild; the immune system builds lasting immunity to the virus, although it can still be transmitted while it remains in the body.
The hepatitis B virus (HBV) produces severe symptoms, which start suddenly 1 to 6 months after contact. If liver damage is extensive, death occurs in 5 to 20 percent of cases. The B virus is transmitted in blood and blood products and during sexual contact: semen, vaginal secretions, saliva, and faeces are suspect. It is also passed by IV drug users sharing infected needles. The incidence of HBV is rising rapidly, perhaps due to more foreign travel and IV drug use. Male homosexuals, heterosexuals with multiple partners, travellers, and drug addicts are high-risk groups.
|
Implicit cognition refers to unconscious influences, such as knowledge, perception, or memory, that affect a person's behavior even though the person has no conscious awareness of those influences.
Implicit cognition is everything one does and learns unconsciously, without any awareness of doing it. An example is learning to ride a bike: at first the person is aware that they are acquiring the required skills. After having stopped for many years, they do not have to relearn the motor skills when they start riding again; their implicit knowledge of the motor skills takes over and they can ride as if they had never stopped. In other words, they do not have to think about the actions they are performing in order to ride the bike. As this example shows, implicit cognition is involved in many mental activities and everyday situations. Implicit memory operates in many processes, including learning, social cognition, and problem solving.
Implicit cognition was first described in 1649 by Descartes in his Passions of the Soul, where he observed that unpleasant childhood experiences remain imprinted on a child's brain for the rest of its life without any conscious memory of them. Although this idea was never accepted by his peers, in 1704 Gottfried Wilhelm Leibniz, in his New Essays Concerning Human Understanding, stressed the importance of unconscious perceptions: ideas we are not consciously aware of that still influence behavior. He claimed that people carry residual effects of prior impressions without any remembrance of them. In 1802 the French philosopher Maine de Biran, in The Influence of Habit on the Faculty of Thinking, became the first after Leibniz to discuss implicit memory systematically, stating that after enough repetition a habit can become automatic, completed without any conscious awareness. In 1870 Ewald Hering argued that it was essential to consider unconscious memory, which is involved in involuntary recall and in the development of automatic, unconscious habitual actions.
Implicit learning starts in early childhood. Children are not able to learn the explicit grammar and rules of a language until around the age of seven, yet most can talk by the age of four. One way this is possible is through implicit learning and association: children learn their first language from what they hear adults say and through their own talking activities. This suggests that the way children acquire language relies heavily on implicit learning.
Studies on implicit learning
A study was conducted with amnesiac patients to demonstrate that, although they were unable to recall a list of words or pictures when their memory was tested directly, they could still complete fragmented words and incomplete pictures. This proved to be the case: the patients performed better when asked to complete words or pictures than when asked to recall them. A possible explanation is that implicit memory is less susceptible to brain damage than explicit memory. In one case, a 54-year-old man with bitemporal damage, worse on the right side, had difficulty remembering events from his own life as well as famous events, names, and faces; yet he performed within normal limits on a word-completion task involving famous names and on judgments of famous faces. This is a prime example of how implicit memory can be less vulnerable to brain damage.
A famous study investigated the blindsight effect in individuals who had suffered damage to one half of the visual cortex and were blind in the opposite half of the visual field. When objects or pictures were shown in these blind areas, the participants reported seeing nothing, yet when asked to guess, a number of them identified the stimulus as either a cross or a circle at a rate considerably higher than chance. One explanation is that the information was processed through the first three stages of the perceptual cycle (selection, organization, and interpretation or comprehension) but failed only at the last stage, retention and memory, where the identified image enters awareness. Thus stimuli can enter implicit memory even when people are unable to consciously perceive them.
Implicit cognition also plays a role in social cognition. People tend to find objects and individuals more agreeable the more often they are exposed to them. An example is the false-fame effect. Graf and Masson (1993) describe a study in which participants were shown a list of both famous and non-famous names. Initially participants recalled the famous names better than the non-famous ones, but after roughly a 24-hour delay they began to judge the non-famous names as belonging to famous people. This supports implicit cognition because the participants had begun to associate the non-famous names with fame unconsciously.
Although the process is unconscious, implicit cognition influences how people view each other and how they interact. People tend to see those who look alike as belonging together or to similar groups, much like the social groups of their high-school years, which were made up of students perceived as similar to one another. In one study, participants were asked to place figures of individuals where they thought the figures should stand under given circumstances. Participants typically placed men and women close to each other, forming little families out of the figures of a woman, a man, and children. They did the same when asked to depict friends or acquaintances, placing the two figures relatively close together, whereas figures representing strangers were placed far apart.

The social relations view has two parts: liking relations, in which the ultimate goal is to be together, and disliking relations, in which the goal is separation from the other person. For example, someone walking down a hallway who sees a person they know and like is likely to wave and say hello; if instead they see someone they dislike, they will try to avoid them or get away as quickly as possible. There are likewise two views of social relations theory: one holds that people mainly seek dominance over those around them, the other that people mainly perceive relations in terms of belonging or liking. Males are described as mainly seeking dominance, being competitive and looking to outdo one another, whereas females are described as framing their social views and values more in terms of belonging and closeness.

Implicit cognition involves not only how people view each other but also how they view themselves: our self-image is constructed from what others see of us and from the times we compare ourselves to other people. All of this happens unconsciously; men do not consciously seek dominance over one another, and women do not consciously arrange their social values in terms of closeness. These are things people do without conscious knowledge of their actions, which ties them to implicit cognition.
Implicit attitudes (also called automatic attitudes) are mental evaluations that occur without the awareness of the person.
Although there is debate over whether implicit attitudes can be fully measured, they have been assessed with the Implicit Association Test (IAT). The test claims to measure people's implicit associations with certain groups or races. The controversy lies in whether it predicts future behavior: some claim that the IAT predicts whether someone will act differently towards a certain group, while others believe there is not enough evidence to support this.
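As a rough illustration of the kind of score the IAT yields, the sketch below computes a simplified D-score: the difference in mean response latencies between the two critical blocks, scaled by their pooled standard deviation. The latencies are invented, and the full scoring algorithm used in the literature includes additional steps (error penalties, latency trimming) that are omitted here.

```python
import statistics

# Hypothetical response latencies (ms) from the two critical IAT blocks.
compatible = [650, 700, 620, 680, 640, 710]      # e.g. flower + pleasant
incompatible = [820, 900, 780, 860, 840, 910]    # e.g. flower + unpleasant

# Simplified D-score: mean latency difference scaled by the pooled SD of
# all critical trials. Larger positive values indicate a stronger implicit
# association for the "compatible" pairing.
pooled_sd = statistics.stdev(compatible + incompatible)
d_score = (statistics.mean(incompatible) - statistics.mean(compatible)) / pooled_sd
print(f"D = {d_score:.2f}")
```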
It is not well known how implicit attitudes develop. Many believe they come from past experiences: pleasant or unpleasant experiences can shape a person's attitudes towards a specific thing, implying that an attitude can be negative if the earlier experience was negative; attitudes can also be formed in the early stages of life. Another explanation is that implicit attitudes stem from affective experiences; there is evidence that the amygdala is involved in affective or emotional reactions to stimuli. A third explanation involves cultural biases: a study by Greenwald, McGhee, and Schwartz (1998) showed that in-group bias was stronger when the in-group was more in tune with its ancestral culture (for example, knowing the language).
Evidence suggests that early and affective experiences shape implicit attitudes and associations more strongly than the other proposed sources.
There are scenarios in which we act on something and only afterwards think about how we might have handled the situation differently. That is implicit cognition coming into play: the mind draws on ethically similar past situations when processing a given thought. Implicit cognition and its automatic thought processes allow a person to decide something on impulse; it is often defined as an involuntary process in which tasks are carried out largely outside consciousness. Many factors influence behaviors and thought processes, from social learning to stigma, and two major aspects are implicit and explicit cognition. Implicit cognition is acquired through social experience and association, while explicit cognition is gained through propositional attitudes or beliefs. Implicit cognition can involve a mixture of attention, goals, self-association, and at times motivational processes. Researchers have used different methods to test these theories of how behavior correlates with implicit cognition; the Implicit Association Test (IAT) is the most widely used method, according to Fazio & Olson (2003) and Richetin & Richardson (2008), and in the decade or so since its publication it has strongly influenced research on implicit attitudes. Implicit cognition is a process based on automatic mental interpretations: it is what a person really thinks without being consciously aware of it. Behavior is then affected, often negatively; both theoretical and empirical reasons suggest that automatic cognitive processes contribute to aggressive behaviors.
Impulsive behaviors are often produced without awareness. Negativity is a common feature of implicit cognition, since it is an automatic response, and explicit cognition is rarely consulted when behavior is driven by these processes. Researchers again use IATs to probe a person's thoughts and how these automatic processes are incorporated; findings suggest that implicit cognition may direct which behaviors a person chooses when facing extreme stimuli. For example, death can be perceived as positive, negative, or a combination of the two, and depending on its attributes it can carry a general perspective or a "me" attribute. Nock et al. (2010) proposed that an implicit association with death or suicide marks a final step in deciding how to cope with such extreme circumstances. Self-harm is another behavior associated with implicit cognition: although we may think about it consciously, it is largely controlled subconsciously. IATs showed a stronger correlation between implicit cognition and death or suicide than with self-harm; the idea of pain may make a person think twice, while suicide may seem quick, showing how closely this negative behavior and implicit cognition go hand in hand. Automatic processes do not allow a person to make a thoroughly conscious choice, and can therefore push behavior in a negative direction. Another condition associated with implicit cognition is depression: whether a person takes a positive or negative outlook on a situation can indicate whether they are prone to depression. An implicit mindset is easier to identify precisely because it operates outside awareness. Implicit processes are considered critical in determining a person's reactions to a given schema, and implicit cognition often immediately colors those reactions. Implicit cognitions also include negative schemas, hidden cognitive frameworks, and the activation of stress; awareness is often misinterpreted, and implicit cognition emerges from these negative schemas. Behaviors that emerge through implicit cognition include a variety of addictive behaviors, problematic thinking, depression, aggression, suicide, and other negative outcomes. Certain life situations, whether stressful, sudden, or similar, add to these schemas, and aspects of implicit cognition are engaged and evaluated.
Implicit cognition can also be associated with mental illness and the way thoughts are processed. Automatic stigmas and attitudes may predict other cognitive and behavioral tendencies. A person with mental illness may carry a guilt-related self-association, and because such associations operate outside one's control and awareness, they show how implicit cognition is affected. A dual process can nevertheless be assessed across implicit and explicit cognition; agreement between the two can be a problem, since explicit processes may not be in contact with implicit ones. Mental illness can involve both implicit and explicit attitudes, but implicit self-concepts carry more negative consequences. Much of the research on implicit problems has concerned alcohol, but that is not the only route to describing a mental process in terms of implicit cognition. The mental illness most widely studied in association with implicit cognition is schizophrenia. Since a person with this illness has difficulty detecting what is real and what is not, implicit memory is often examined in these patients. However, because it cannot readily be determined whether the problem is emotional, mental, or a combination of both, some aspects of the illness are exercised uninterrupted and unconsciously. Since schizophrenia varies widely in its characteristics, the contribution of implicit cognition cannot be measured precisely.
Implicit cognition refers to perceptual, memory, comprehension, and performance processes that occur outside conscious awareness. For example, when a patient is discharged after surgery, the lingering effects of anesthesia can cause abnormal behaviors without any conscious awareness. According to Wiers et al. (2006), some scholars argue that implicit cognition is misinterpreted and could be used to improve behaviors, while others highlight its dangers. Research studies have shown implicit cognition to be a strong predictor of several problems, including substance abuse, misconduct, and mental disorders. These inherent thoughts are influenced by early adolescent experiences, particularly negative cultural ones. Adolescents who experience a rough childhood develop low self-esteem, so the cognition to act dangerously develops without their awareness. Research on implicit cognition has begun to grow, especially within mental disorders.
In mental disorders
Schemas describe the structures individuals use to make sense of their surroundings. This cognition happens either through an explicit process of routinely recalling an item or through an implicit process outside conscious control. A recent study suggests that individuals who have experienced a difficult upbringing develop schemata of fear and anxiety and react almost immediately when they feel threatened. Anxious people predominantly focus on peril-related stimuli because they are hypervigilant. For example, when an anxious individual is about to cross the street as a car approaches a stop sign, they will automatically assume the driver will not stop. This is recognition of threat through a semantic process that occurs instantaneously. Ambiguous cues are viewed as threats because there is no relevant knowledge with which to make sense of them; people have difficulty understanding them and respond negatively. This kind of behavior can explain how implicit cognition may contribute to pathological anxiety.
Psychotic patients with low self-esteem are prone to more serious illness. This idea was examined from both implicit and explicit perspectives by measuring the self-esteem of patients with paranoia and depression. Previous research suggests that negative implicit cognition is not a symptom of depression and paranoia but an antecedent of their onset. Current research proposes that high implicit self-esteem is linked to less paranoia, and it is important for patients with low self-esteem to be more open about these situations. Another study found a substantial association between adverse self-questioning in implicit cognition and depression: people who do not think highly of themselves are more likely to be depressed because of this involuntary implicit learning.
Implicit cognition is also an influential predictor of bipolar and unipolar disorder. Research proposes that patients with bipolar disorder show more implicit depressive self-referencing than unipolar patients. Implicit cognition plays a strong role in both conditions: these patients have dysfunctional self-schemata, which are viewed as a vulnerability for further illness. Patients with this vulnerability usually do not seek mental-health assistance, which can make the condition harder to treat later. Bipolar patients with low implicit self-esteem are more defensive, an unconscious, manic-protective reaction to feeling threatened. Given the growing association between implicit cognition and such abnormalities, researchers also looked for a connection between implicit neuroticism and schizophrenia. Indeed, there was a correlation: participants with schizophrenia were high in implicit neuroticism and low in implicit extraversion compared with mentally healthy people. Participants were given questionnaires with personality items such as "I enjoy being the center of attention". Implicit cognition is linked to low extraversion because these participants are known to avoid coping. Schizophrenia patients and healthy individuals differ in associative self-representations of neuroticism. People with schizophrenia develop an implicit, error-free learning style, meaning they never take feedback from anyone else.
Research on suicide can be difficult because suicidal patients are commonly covert about their intentions in order to avoid hospitalization. In one experiment, an implicit self-association task was applied to unveil suspicious behavior in people who might attempt suicide; it found that patients recently released from mental hospitals showed a significant implicit association with attempting suicide. The Implicit Association Task would predict whether a patient was likely to attempt suicide depending on how they responded. An individual's implicit cognition may lead to a behavior for coping with stress, whether suicide, substance abuse, or even violence. Implicit association with death, however, identifies those most at risk of attempting suicide, because such individuals see it as the best solution for ending their stress.
Implicit cognition is measured in different ways to obtain the most accurate outcomes. For patients with anxiety disorders, a modified Stroop task was used to observe attentional biases: participants named the color of each word, both risk-relevant and risk-irrelevant, and reaction times were measured, since a word's content can slow color naming. Colors such as red, known as aposematic because they imply a threat warning, were used to see whether anxious participants responded more slowly. Most studies used the Implicit Association Test, which varies from study to study. For example, implicit self-esteem can be tested by giving participants self-referent questionnaires with items such as "I am known to be suicidal"; depending on the responses, the researcher can assess each participant's current state. Patients rated at high risk of suicide immediately received psychiatric treatment. According to these experiments, implicit cognition may be a strong predictor of mental disorders.
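A minimal sketch of the attentional-bias index such a modified Stroop task yields: mean color-naming latency for threat-relevant words minus that for neutral words. All word lists and latencies below are invented for illustration and do not come from any of the studies cited here.

```python
import statistics

# Hypothetical color-naming latencies (ms) for one anxious participant.
threat_words = {"danger": 812, "crash": 845, "attack": 828, "collapse": 839}
neutral_words = {"table": 701, "window": 715, "garden": 690, "pencil": 708}

# Interference (bias) score: slower naming for threat words suggests the
# word's meaning is capturing attention and delaying the color response.
bias = statistics.mean(threat_words.values()) - statistics.mean(neutral_words.values())
print(f"Stroop interference = {bias:.0f} ms")
```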
- Reingold, Eyal M.; Ray, Colleen A. (2006). "Implicit Cognition". In Nadel, Lynn (ed.). Implicit Cognition. Encyclopedia of Cognitive Science. Hoboken, NJ: Wiley. doi:10.1002/0470018860.s00178. ISBN 9780470018866.
- Baddeley, Alan D. (1997). Human Memory: Theory and Practice (Revised ed.). Psychology Press. ISBN 9780863774317.
- Graf, Peter; Masson, Michael E. J., eds. (1993). Implicit Memory: New Directions in Cognition, Development, and Neuropsychology. Psychology Press. ISBN 9781317782322.
- Schacter, Daniel L. (1987). "Implicit memory: History and current status" (PDF). Journal of Experimental Psychology: Learning, Memory, and Cognition. 13 (3): 501–518. doi:10.1037/0278-7393.13.3.501. Archived from the original on 2016-03-06.
- Howes, Mary B. (2007). Human Memory: Structures and Images. SAGE Publications. ISBN 9781483316840.
- Baddeley, Alan; Aggleton, John; Conway, Martin, eds. (2002). Episodic Memory: New Directions in Research (Reprint ed.). Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780198508809.001.0001. ISBN 9780198508809.
- Wegner, Daniel M.; Vallacher, Robin R. (1977). Implicit Psychology: An Introduction to Social Cognition. Oxford University Press. ISBN 9780195022292.
- "How Do Attitudes Influence Behavior?". How do Attitudes Influence Behavior?. The Psychology of Attitudes and Attitude Change. SAGE Publications Ltd. 2010. pp. 67–86. doi:10.4135/9781446214299.n4. ISBN 978-1-4129-2975-2.
- Karpinski, Andrew; Hilton, James L. (2001). "Attitudes and the Implicit Association Test". Journal of Personality and Social Psychology. 81 (5): 774–788. doi:10.1037/0022-3514.81.5.774. ISSN 1939-1315. PMID 11708556.
- Rudman, Laurie A. (2004). "Sources of Implicit Attitudes". Current Directions in Psychological Science. 13 (2): 79–82. doi:10.1111/j.0963-7214.2004.00279.x. ISSN 0963-7214. S2CID 55154745.
- Greenwald, Anthony G.; Banaji, Mahzarin R. (1995). "Implicit social cognition: Attitudes, self-esteem, and stereotypes". Psychological Review. 102 (1): 4–27. doi:10.1037/0033-295x.102.1.4. ISSN 1939-1471. PMID 7878162.
- Phelps, Elizabeth A.; O'Connor, Kevin J.; Cunningham, William A.; Funayama, E. Sumie; Gatenby, J. Christopher; Gore, John C.; Banaji, Mahzarin R. (September 2000). "Performance on Indirect Measures of Race Evaluation Predicts Amygdala Activation". Journal of Cognitive Neuroscience. 12 (5): 729–738. doi:10.1162/089892900562552. ISSN 0898-929X. PMID 11054916. S2CID 4843980.
- Greenwald, Anthony G.; McGhee, Debbie E.; Schwartz, Jordan L. K. (1998). "Measuring individual differences in implicit cognition: The implicit association test". Journal of Personality and Social Psychology. 74 (6): 1464–1480. doi:10.1037/0022-3514.74.6.1464. ISSN 1939-1315. PMID 9654756.
- Graf, P., & Schacter, D. L. (1985).
- Gawronski, B., & Bodenhausen, G. V. (2006); Wilson, T. D., Lindsey, S., & Schooler, T. Y. (2000).
- Anderson, C. A., et al. (2002).
- Scher, C. D., Ingram, R. E., & Segal, Z. V. (2005).
- Dovidio, J. F., et al. (2002); Greenwald, A. G., et al. (2009).
- Haeffel, GJ; Abramson, LY; Brazy, PC; Shah, JY; Teachman, BA; Nosek, BA (2007). "Explicit and implicit cognition: a preliminary test of a dual-process theory of cognitive vulnerability to depression". Behaviour Research and Therapy. 45 (6): 1155–67. doi:10.1016/j.brat.2006.09.003. PMID 17055450.
- Ione, Amy. Implicit Cognition and Consciousness in Scientific Speculation and Development (Retrieved January 30, 2008)
- Jabben, Nienke; de Jong, Peter J.; Kupka, Ralph W.; Glashouwer, Klaske A.; Nolen, Willem A.; Penninx, Brenda W.J.H. (2014). "Implicit and explicit self-associations in bipolar disorder: A comparison with healthy controls and unipolar depressive disorder". Psychiatry Research. 215 (2): 329–334. doi:10.1016/j.psychres.2013.11.030. PMID 24365387. S2CID 40022666.
- Mano, Quintino R.; Brown, Gregory G. (2013). "Cognition–emotion interactions in schizophrenia: Emerging evidence on working memory load and implicit facial-affective processing". Cognition and Emotion. 27 (5): 875–899. doi:10.1080/02699931.2012.751360. PMID 23237406. S2CID 6030986.
- Nock, M. K.; Park, J. M.; Finn, C. T.; Deliberto, T. L.; Dour, H. J.; Banaji, M. R. (2010). "Measuring the Suicidal Mind: Implicit Cognition Predicts Suicidal Behavior" (PDF). Psychological Science. 21 (4): 511–517. doi:10.1177/0956797610364762. PMC 5258199. PMID 20424092. Archived from the original on 2016-05-17.
- Polaschek, Devon L. L.; Bell, Rebecca K.; Calvert, Susan W.; Takarangi, Melanie K. T. (2010). "Cognitive-behavioural rehabilitation of high-risk violent offenders: Investigating treatment change with explicit and implicit measures of cognition". Applied Cognitive Psychology. 24 (3): 437–449. doi:10.1002/acp.1688.
- Phillips, Wendy J.; Hine, Donald W.; Thorsteinsson, Einar B. (2010). "Implicit cognition and depression: A meta-analysis". Clinical Psychology Review. 30 (6): 691–709. doi:10.1016/j.cpr.2010.05.002. PMID 20538393.
- Sun, Ron (2001). Duality of the Mind: A Bottom-up Approach Toward Cognition. Psychology Press. ISBN 9781135646950.
- Suslow, Thomas; Lindner, Christian; Kugel, Harald; Egloff, Boris; Schmukle, Stefan C. (2014). "Using Implicit Association Tests for the assessment of implicit personality self-concepts of extraversion and neuroticism in schizophrenia". Psychiatry Research. 218 (3): 272–276. doi:10.1016/j.psychres.2014.04.023. PMID 24816120. S2CID 7545555.
- Teachman, Bethany A.; Woody, Sheila R. (2004). "Staying tuned to research in implicit cognition: Relevance for clinical practice with anxiety disorders". Cognitive and Behavioral Practice. 11 (2): 149–159. doi:10.1016/S1077-7229(04)80026-9.
- Underwood, Geoffrey (1996). Implicit Cognition. OUP Oxford. ISBN 9780198523109.
- Valiente, Carmen; Cantero, Dolores; Vázquez, Carmelo; Sanchez, Álvaro; Provencio, María; Espinosa, Regina (2011). "Implicit and explicit self-esteem discrepancies in paranoia and depression". Journal of Abnormal Psychology. 120 (3): 691–699. doi:10.1037/a0022856. PMID 21381800.
- Wiers, Reinout W.; Stacy, Alan W., eds. (2006a). Handbook of Implicit Cognition and Addiction. SAGE Publications. ISBN 9781452261669.
- Wiers, Reinout W.; Stacy, Alan W. (2006b). "Implicit Cognition and Addiction" (PDF). Current Directions in Psychological Science. 15 (6): 292–296. CiteSeerX 10.1.1.466.1766. doi:10.1111/j.1467-8721.2006.00455.x. PMC 3423976. PMID 20192786. Archived from the original on 2016-05-17.
|
IC 1101 is a supergiant elliptical galaxy located in the constellation Virgo. With a radius of about 2 million light years and home to 100 trillion stars, it is one of the largest galaxies known, as well as one of the most luminous. The galaxy has an apparent magnitude of 14.73 and lies at an approximate distance of 1.045 billion light years from Earth.
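For a sense of how luminous that is, the distance modulus relates the quoted apparent magnitude and distance to an absolute magnitude. The sketch below applies it, ignoring extinction and K-corrections, so the result is only a rough illustration rather than a published value.

```python
import math

# Distance modulus: M = m - 5 * log10(d_pc / 10), ignoring extinction
# and K-corrections (a careful measurement would include both).
apparent_mag = 14.73
distance_pc = 320.4e6          # 320.4 megaparsecs, from the data table below

absolute_mag = apparent_mag - 5 * math.log10(distance_pc / 10)
print(f"M ≈ {absolute_mag:.1f}")   # about -22.8, i.e. extremely luminous
```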
IC 1101 has the classification E/S0 (elliptical to lenticular galaxy) and its exact morphological type is not certain. It is probably an elliptical galaxy, but there has been some debate about the possibility that it may be shaped like a flat disc, which is characteristic of lenticular galaxies. If it is a lenticular galaxy, it is seen at its broadest dimensions when viewed from Earth. However, the sheer size of IC 1101 suggests an elliptical galaxy since most lenticulars are 50,000 to 120,000 light years across.
Both elliptical and lenticular galaxies are composed of old stars and contain very little interstellar matter that would feed star formation. However, lenticulars have visible disks and a prominent bulge like spiral galaxies, but they do not have the spiral arm structure. They are believed to be the transitional type between spiral and elliptical galaxies. Ellipticals, on the other hand, do not have a disk or any spiral structure. They have an ellipsoidal shape and appear featureless.
The halo of IC 1101 stretches about 2 million light years (600 kiloparsecs) from the core, making the galaxy one of the largest ones discovered to date, with a diameter of about 4 million light years.
IC 1101 has an angular size of 1.20 by 0.60 arcminutes and an effective radius of about 212,000 light years. Also called the half-light radius, the effective radius is the radius at which half of the galaxy’s light is emitted. It does not reflect the galaxy’s actual size, which is difficult to measure because galaxies do not have clear boundaries, but simply get fainter further from the core.
Additionally, a galaxy’s apparent size changes depending on the size and sensitivity of the telescope, as well as on the length of time for which the galaxy is observed. Longer exposures with larger telescopes will always reveal more than shorter ones in smaller instruments. For this reason, the half-light radius is used as a measurement.
The immense size of IC 1101 is believed to be the result of many smaller galaxies merging with each other. The many collisions have stripped the galaxies of star-forming gas and dust, resulting in very little active star formation taking place in IC 1101.
IC 1101 compared to the Milky Way
IC 1101 spans 4 million light years – some estimates even give the galaxy a diameter of 6 million light years – while the Milky Way has a visible diameter of 150,000 to 200,000 light years. The entire Local Group of galaxies, which includes the Milky Way, Andromeda and Triangulum galaxies, spans 9.8 million light years. If IC 1101 replaced our galaxy in the Local Group, it would engulf Andromeda, Triangulum, the Large and Small Magellanic Clouds, and everything in between.
Even spiral galaxies larger than Andromeda and the Milky Way pale in comparison to IC 1101. Malin 1 in Coma Berenices and the Condor Galaxy (NGC 6872) in Pavo, both among the largest spiral galaxies known, span 650,000 and 522,000 light years respectively. Messier 87 (Virgo A), the supergiant elliptical galaxy at the centre of the Virgo Cluster, has an estimated diameter of 240,000 light years.
IC 1101 contains 100 trillion stars, and they give it a staggering luminosity. In comparison, our galaxy hosts between 100 and 400 billion stars.
However, with very little star-forming activity taking place in IC 1101, the galaxy is left mostly with old, metal-poor stars that will reach the end of their lives in the relatively near astronomical future. The old stars give the galaxy a yellowish hue. They also give it a bleak future. As the stars reach the end of their lives, IC 1101 will gradually shrink and, unless it keeps colliding with younger galaxies, it will eventually fade away.
A study published in 2017 reported an exceptionally large core in IC 1101, about 2.77 arcseconds in apparent size. This corresponds to a physical size of 13,700 light years (4.2 kiloparsecs) across, the largest core size in any galaxy observed.
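The quoted physical size follows from the small-angle relation s ≈ D·θ. The sketch below reproduces the figure from the article's own numbers, using the light-travel distance and ignoring cosmological corrections, which is why it lands slightly above the quoted 13,700 light years.

```python
import math

ARCSEC_PER_RAD = 180 / math.pi * 3600   # ≈ 206265 arcseconds per radian
LY_PER_PC = 3.2616                      # light years per parsec

distance_mpc = 320.4                    # distance from the data table below
theta_arcsec = 2.77                     # apparent core size from the study

# Small-angle approximation: physical size = distance × angle (radians).
theta_rad = theta_arcsec / ARCSEC_PER_RAD
size_pc = distance_mpc * 1e6 * theta_rad
print(f"core ≈ {size_pc / 1000:.1f} kpc ≈ {size_pc * LY_PER_PC:,.0f} light years")
```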
IC 1101 has a bright radio source at the centre, catalogued as PKS 1508+059, which emits two jets and likely corresponds to a supermassive black hole with an estimated mass of 50 to 70 billion solar masses. This is one of the largest black holes ever detected.
IC 1101 was discovered by the German-born British astronomer William Herschel on June 19, 1790. However, Herschel did not know what he was observing. At the time, galaxies were still believed to be nebulae in the Milky Way. Their true nature was not proven until the 1920s, when Edwin Hubble measured the distance to the Andromeda Galaxy using a Cepheid variable.
IC 1101 is the brightest member of the Abell 2029 cluster of galaxies. The cluster has a diameter of 5.8 to 8 million light years and is one of the densest clusters in the sky. As the brightest member, IC 1101 has the designation A2029-BCG (BCG stands for “brightest cluster galaxy”).
A study published in 1990 reported that the galaxy emitted about 26% of the total light from the cluster, even though it is not the only exceptionally luminous galaxy in the group. Abell 2029 contains thousands of galaxies, including hundreds of giant galaxies.
The designation IC 1101 comes from the Index Catalogue of Nebulae and Clusters of Stars, first published in 1895 as a supplement to the New General Catalogue (NGC). Most brighter galaxies, nebulae and star clusters are still commonly referred to by their NGC or IC designations. Danish astronomer John Louis Emil Dreyer, who compiled the Index Catalogue, listed the galaxy as the 1101st entry.
IC 1101 is located in the constellation Virgo, near the border with Serpens. It lies in the same area of the sky as the globular cluster Messier 5 and the barred spiral galaxy NGC 5921 (also discovered by William Herschel). However, at magnitude 14.73, the galaxy is considerably fainter than these two objects and requires a larger telescope to be seen. Like all elliptical galaxies, it appears as a ball of light and does not get any more distinctive even in the largest of telescopes.
|Galaxy type||E/S0 (elliptical to lenticular)|
|Right ascension||15h 10m 56.100s|
|Declination||+05° 44′ 41.19″|
|Apparent size||1′.2 × 0′.6|
|Distance||1.045 ± 0.073 billion light years (320.4 ± 22.4 megaparsecs)|
|Size (effective radius)||212,000 ± 39,000 light years (64 ± 12 kiloparsecs)|
|Number of stars||100 trillion|
|Helio radial velocity||23,368 ± 26 km/s (14,520 ± 16 mi/s)|
|Galactocentric velocity||23,395 ± 26 km/s (14,537 ± 16 mi/s)|
|Designations||IC 1101, PGC 54167, UGC 9752, A2029-BCG, BWE 1508+0555, GALEX J151056.1+054439, 2MASX J15105610+0544416, NVSS J151055+054439, PKS 1508+05, PKS J1510+0544, PKS 1508+059, SDSS J151056.10+054441.1, RGB J1510+057A, RGB J1510+057B|