The electromagnetic force has an asymmetry: the magnetic field lags the electric field. The phase shift is 90 degrees. We can use complex notation to write the E and B vectors as functions of each other. Indeed, the Lorentz force on a charge is equal to: F = qE + q(v×B). Hence, if we know the electric field E, then we know the magnetic field B: B is perpendicular to E, and its magnitude is 1/c times the magnitude of E. We may, therefore, write:
B = –iE/c
The minus sign in the B = –iE/c expression is there because we need to combine several conventions here. Of course, there is the classical (physical) right-hand rule for E and B, but we also need to combine the right-hand rule for the coordinate system with the convention that multiplication with the imaginary unit amounts to a counterclockwise rotation by 90 degrees. Hence, the minus sign is necessary for the consistency of the description. It ensures that we can associate the a·e^(iEt/ħ) and a·e^(–iEt/ħ) functions with left and right-handed spin (angular momentum), respectively.
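To make the convention concrete, here is a one-line check, assuming we write the electric field as a complex phasor E = E₀·e^(iωt) (that phasor notation is our assumption for the illustration, not a quote from the referenced paper). Multiplying by –i shifts the phase by –90 degrees, which is precisely the statement that B lags E by a quarter cycle and has 1/c times its magnitude:

```latex
B = -\frac{i}{c}E = -\frac{i}{c}E_0 e^{i\omega t} = \frac{E_0}{c}\,e^{i(\omega t - \pi/2)}
\quad\Longrightarrow\quad
|B| = \frac{|E|}{c}, \qquad \arg B = \arg E - 90^{\circ}.
```

Whether that extra quarter turn counts as clockwise or counterclockwise then depends on the right-hand rules just mentioned, which is where the sign convention comes from.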
Now, we can easily imagine an antiforce: an electromagnetic antiforce would have a magnetic field which precedes the electric field by 90 degrees, and we can do the same for the nuclear force (EM and nuclear oscillations are 2D and 3D oscillations, respectively). It is just an application of Occam’s Razor: the mathematical possibilities in the description (notations and equations) must correspond to physical realities, and vice versa (one-to-one). Hence, to describe antimatter, all we have to do is to put a minus sign in front of the wavefunction. [Of course, we should also take the opposite of the charge(s) of its antimatter counterpart, and please note we have a possible plural here (charges) because we think of neutral particles (e.g. neutrons, or neutral mesons) as consisting of opposite charges.] This is just the principle which we already applied when working out the equation for the neutral antikaon (see Annex IV and V of the above-referenced paper):
Don’t worry if you do not understand too much of the equations: we just put them there to impress the professionals. 🙂 The point is this: matter and antimatter are each other’s opposite, literally: the wavefunctions a·e^(iEt/ħ) and –a·e^(iEt/ħ) add up to zero, and they correspond to opposite forces too! Of course, we also have light-particles, so we have antiphotons and antineutrinos too.
We think this explains the rather enormous amount of so-called dark matter and dark energy in the Universe (the Wikipedia article on dark matter says it accounts for about 85% of the total mass/energy of the Universe, while the article on the observable Universe puts it at about 95%!). We did not say much about this in our YouTube talk about the Universe, but we think we understand things now. Dark matter is called dark because it does not appear to interact with the electromagnetic field: it does not seem to absorb, reflect or emit electromagnetic radiation, and is, therefore, difficult to detect. That should not be a surprise: antiphotons would not be absorbed or emitted by ordinary matter. Only anti-atoms (i.e. think of an antihydrogen atom as an antiproton and a positron here) would do so.
So did we explain the mystery? We think so. 🙂
We will conclude with a final remark/question. The opposite spacetime signature of antimatter is, obviously, equivalent to a swap of the real and imaginary axes. This begs the question: can we, perhaps, dispense with the concept of charge altogether? Is geometry enough to understand everything? We are not quite sure how to answer this question but we do not think so: a positron is a positron, and an electron is an electron; the sign of the charge (positive and negative, respectively) is what distinguishes them! We also think charge is conserved, at the level of the charges themselves (see our paper on matter/antimatter pair production and annihilation).
We, therefore, think of charge as the essence of the Universe. But, yes, everything else is sheer geometry! 🙂
In my ‘signing off’ post, I wrote I had enough of physics but that my last(?) ambition was to “contribute to an intuitive, realist and mathematically correct model of the deuteron nucleus.” Well… The paper is there. And I am extremely pleased with the result. Thank you, Mr. Meulenberg. You sure have good intuition.
I took the opportunity to revisit Yukawa’s nuclear potential and demolish his modeling of a new nuclear force without a charge to act on. Looking back at the past 100 years of physics history, I now start to think that was the decisive destructive moment in physics: that 1935 paper, which started off all of the hype on virtual particles, quantum field theory, and a nuclear force that could not possibly be electromagnetic, plus – totally not done, of course! – an utter disregard for physical dimensions and the physical geometry of fields in 3D space or – taking retardation effects into account – 4D spacetime. Fortunately, we have hope: the 2019 fixing of SI units puts physics firmly back onto the road to reality – or so we hope.
Paolo Di Sia‘s and my paper shows one gets very reasonable energy and separation distances for nuclear bonds and inter-nucleon distances when assuming the presence of magnetic and/or electric dipole fields arising from deep electron orbitals. The model shows one of the protons pulling the ‘electron blanket’ from another proton (the neutron) towards its own side so as to create an electric dipole moment. So it is just like a valence electron in a chemical bond. So it is like water, then? Water is a polar molecule, but we do not necessarily need to start with polar configurations when trying to expand this model so as to inject some dynamics into it (spherically symmetric orbitals are probably easier to model). Hmm… Perhaps I need to look at the thermodynamical equations for dry versus wet water once again… Phew! Where to start?
I have no experience – I have very little math, actually – with modeling molecular orbitals. So I should, perhaps, contact a friend from a few years ago – living in Hawaii and pursuing more spiritual matters too – who did just that a long time ago: orbitals using Schroedinger’s wave equation (I think Schroedinger’s equation is relativistically correct – the problem is just a misinterpretation of the concept of ‘effective mass’ by the naysayers). What kind of wave equation are we looking at? One that integrates inverse-square and inverse-cube force field laws arising from charges and the dipole moments they create while moving. [Hey! Perhaps we can relate these inverse-square and inverse-cube fields to the second- and third-order terms in the binomial development of the relativistic mass formula (see the section on kinetic energy in my paper on one of Feynman’s more original renderings of Maxwell’s equations) but… Well… Probably best to start by seeing how Feynman got those field equations out of Maxwell’s equations. It is a bit buried in his development of the Liénard–Wiechert equations, which are written in terms of the scalar and vector potentials φ and A instead of the E and B vectors, but it should all work out.]
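For reference, the ‘binomial development’ referred to above is the standard expansion of the relativistic energy in powers of v²/c²; the terms beyond the rest energy are the second- and higher-order terms alluded to:

```latex
E = \gamma m_0 c^2
  = m_0 c^2 \left(1 - \frac{v^2}{c^2}\right)^{-1/2}
  = m_0 c^2 + \tfrac{1}{2} m_0 v^2 + \tfrac{3}{8}\, m_0 \frac{v^4}{c^2} + \tfrac{5}{16}\, m_0 \frac{v^6}{c^4} + \cdots
```

The expansion itself is standard textbook material; whether and how these terms map onto inverse-square and inverse-cube field contributions is the conjecture being floated here, not an established result.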
If the nuclear force is electromagnetic, then these ‘nuclear orbitals’ should respect the Planck-Einstein relation. So then we can calculate frequencies and radii of orbitals now, right? The use of natural units and imaginary units to represent rotations/orthogonality in space might make calculations easy (B = iE). Indeed, with the 2019 revision of SI units, I might need to re-evaluate the usefulness of natural units (I always stayed away from them because they ‘hide’ the physics in the math by abstracting away the physical dimensions).
Hey! Perhaps we can model everything with quaternions, using imaginary units (i and j) to represent rotations in 3D space so as to ensure consistent application of the appropriate right-hand rules always (special relativity gets added to the mix, so we probably need to relate the (ds)² = (dx)² + (dy)² + (dz)² – (d(ct))² signature to the modified Hamilton’s q = a + ib + jc – kd expression then). Using vector equations throughout and thinking of h as a vector (something with a magnitude and a direction) when using the E = hf and h = pλ Planck-Einstein relations should do the trick, right? [In case you wonder how we can write f as a vector: angular frequency is a vector too. The Planck-Einstein relation is valid for both linear and circular oscillations: see our paper on the interpretation of the de Broglie wavelength.]
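As a purely illustrative aside (a standard quaternion-rotation sketch, not the modified-signature quaternion suggested above), this is how the imaginary units i, j, k encode rotations in 3D space with a consistent right-hand rule. Python and NumPy are assumed, and the function names are made up for the example:

```python
import numpy as np

def q_mul(a, b):
    """Hamilton product of two quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(v, axis, angle):
    """Rotate vector v about a unit axis by angle (radians), right-hand rule."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    q = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    v_q = np.concatenate(([0.0], v))
    return q_mul(q_mul(q, v_q), q_conj)[1:]

# Rotating the x-axis by 90 degrees about z sends it to the y-axis,
# exactly as the right-hand rule demands.
print(rotate(np.array([1.0, 0.0, 0.0]), [0, 0, 1], np.pi / 2))  # ~[0, 1, 0]
```

Whether a modified signature of the kind hinted at above buys anything extra is left entirely open here.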
Oh – and while special relativity is there because of Maxwell’s equations, gravity (general relativity) should be left out of the picture. Why? Because we would like to explain gravity as a residual very-far-field force. And trying to integrate gravity inevitably leads one to analyze particles as ‘black holes.’ Not nice, philosophically speaking. In fact, any 1/rⁿ field inevitably leads one to think of some kind of black hole at the center, which is why thinking of fundamental particles in terms of ring currents and dipole moments makes so much sense! [We need nothingness and infinity as mathematical concepts (limits, really), but they cannot possibly represent anything real, right?]
The consistent use of the Planck-Einstein law to model these nuclear electron orbitals should probably involve multiples of h to explain their size and energy: E = nhf rather than E = hf. For example, when calculating the radius of an orbital of a pointlike charge with the energy of a proton, one gets a radius that is only 1/4 of the proton radius (0.21 fm instead of 0.82 fm, approximately). To make the radius fit that of a proton, one has to use the E = 4hf relation. Indeed, for the time being, we should probably continue to reject the idea of using fractions of h to model deep electron orbitals. I also think we should avoid superluminal velocity concepts.
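For what it is worth, here is a quick numerical sketch of where those numbers come from, assuming the ring-current relation a = c/ω together with E = nħω (so a = nħc/E). The constants are standard values, and the little script is only an illustration of the arithmetic, not something taken from the referenced papers:

```python
# Ring-current radius a = n * hbar * c / E for a pointlike charge whose total
# energy equals the proton rest energy. Constants in MeV and fm.
hbar_c = 197.327      # hbar * c in MeV*fm
E_proton = 938.272    # proton rest energy in MeV

for n in (1, 4):
    a = n * hbar_c / E_proton
    print(f"E = {n}hf  ->  a = {a:.3f} fm")

# E = 1hf  ->  a = 0.210 fm  (the 1/4 value mentioned above)
# E = 4hf  ->  a = 0.841 fm  (in the ballpark of the quoted ~0.82 fm proton radius)
```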
This post sounds like madness? Yes. And then, no! To be honest, I think of it as one of the better Aha! moments in my life. 🙂
Brussels, 30 December 2020
Post scriptum (1 January 2021): Lots of stuff coming together here! 2021 will definitely see the Grand Unified Theory of Classical Physics becoming somewhat more real. It looks like Mills is going to make a major addition/correction to his electron orbital modeling work and, hopefully, manage to publish the gist of it in the eminent mainstream Nature journal. That makes a lot of sense: to move from an atom to an analysis of nuclei or complex three-particle systems, one should combine singlet and doublet energy states – if only to avoid reducing three-body problems to two-body problems. 🙂 I still do not buy the fractional use of Planck’s quantum of action, though. Especially now that we got rid of the concept of a separate ‘nuclear’ charge (there is only one charge: the electric charge, and it comes in two ‘colors’): if Planck’s quantum of action is electromagnetic, then it comes in wholes or multiples. No fractions. Fractional powers of distance functions in field or potential formulas are OK, however. 🙂
A few days ago, I mentioned I felt like writing a new book: a sort of guidebook for amateur physicists like me. I realized that it is actually fairly easy to do. I have three very basic papers – one on particles (both light and matter), one on fields (QED), and one on the quantum-mechanical toolbox (amplitude math and all of that). But then there is a lot of nitty-gritty to be written about the technical stuff, of course: self-interference, superconductors, the behavior of semiconductors (as used in transistors), lasers, and so many other things – and all of the math that comes with it. However, for that, I can refer you to Feynman’s three volumes of lectures, of course. In fact, I should: it’s all there. So… Well… That’s it, then. I am done with the QED sector. Here is my summary of it all (links to the papers on Phil Gibbs’ site):
The last paper is interesting because it shows statistical indeterminism is the only real indeterminism. We can, therefore, use Bell’s Theorem to prove our theory is complete: there is no need for hidden variables, so why should we bother about trying to prove or disprove they can or cannot exist?
Jean Louis Van Belle, 21 October 2020
Note: As for the QCD sector, that is a mess. We might have to wait another hundred years or so to see the smoke clear up there. Or, who knows, perhaps some visiting alien(s) will come and give us a decent alternative for the quark hypothesis and quantum field theories. One of my friends thinks so. Perhaps I should trust him more. 🙂
As for Phil Gibbs, I should really thank him for being one of the smartest people on Earth – and for his site, of course. Brilliant forum. Does what Feynman wanted everyone to do: look at the facts, and think for yourself. 🙂
I ended my post on particles as spacetime oscillations saying I should probably write something about the concept of a field too, and why and how many academic physicists abuse it so often. So I did that, but it became a rather lengthy paper, and so I will refer you to Phil Gibbs’ site, where I post such stuff. Here is the link. Let me know what you think of it.
As for how it fits in with the rest of my writing, I already jokingly rewrote two of Feynman’s introductory Lectures on quantum mechanics (see: Quantum Behavior and Probability Amplitudes). I consider this paper to be the third. 🙂
Post scriptum: Now that I am talking about Richard Feynman – again! – I should add that I really think of him as a weird character. I think he himself got caught in that image of the ‘Great Teacher’ while, at the same time (and, surely, as a Nobel laureate), he also had to be seen to be a ‘Great Guru.’ Read: a Great Promoter of the ‘Grand Mystery of Quantum Mechanics’ – while he probably knew classical electromagnetism combined with the Planck-Einstein relation can explain it all… Indeed, his lecture on superconductivity starts off as an incoherent ensemble of ‘rocket science’ pieces, to then – in the very last paragraphs – manipulate Schrödinger’s equation (and a few others) to show superconducting currents are just what you would expect in a superconducting fluid. Let me quote him:
“Schrödinger’s equation for the electron pairs in a superconductor gives us the equations of motion of an electrically charged ideal fluid. Superconductivity is the same as the problem of the hydrodynamics of a charged liquid. If you want to solve any problem about superconductors you take these equations for the fluid [or the equivalent pair, Eqs. (21.32) and (21.33)], and combine them with Maxwell’s equations to get the fields.”
So… Well… Looks like he, too, is all about impressing people with ‘rocket science models’ first, and then he simplifies it all to… Well… Something simple. 😊
Having said that, I still like Feynman more than modern science gurus, because the latter usually don’t get to the simplifying part.
My very first publication on Phil Gibbs’ site – The Quantum-Mechanical Wavefunction as a Gravitational Wave – reached 500+ downloads. I find that weird, because I warn the reader in the comments section that some of these early ideas do not make sense. Indeed, while my idea of modelling an electron as a two-dimensional oscillation has not changed, the essence of the model did. My theory of matter is based on the idea of a naked charge – with zero rest mass – orbiting around some center, and the energy in its motion – a perpetual current ring, really – is what gives matter its (equivalent) mass. Wheeler’s idea of ‘mass without mass’. The force is, therefore, definitely not gravitational.
It cannot be: the force has to grab onto something, and all it can grab onto is the naked charge. The force must, therefore, be electromagnetic. So I now look at that very first paper as an immature essay. However, I leave it there because that paper does ask all of the right questions, and I should probably revisit it – because the questions I get on my last paper on the subject – De Broglie’s Matter-Wave: Concept and Issues, which gets much more attention on ResearchGate than on Phil Gibbs’ site (so it is more serious, perhaps) – are quite similar to the ones I try to answer in that very first paper: what is the true nature of the matter-wave? What is that fundamental oscillation?
I have been thinking about this for many years now, and I may never be able to give a definite answer to the question, but yesterday night some thoughts came to me that may or may not make sense. And so to be able to determine whether they might, I thought I should write them down. So that is what I am going to do here, and you should not take it very seriously. If anything, they may help you to find some answers for yourself. So if you feel like switching off because I am getting too philosophical, please do: I myself wonder how useful it is to try to interpret equations and, hence, to write about what I am going to write about here – so I do not mind at all if you do too!
That is too much already as an introduction, so let us get started. One of my more obvious reflections yesterday was this: the nature of the matter-wave is not gravitational, but it is an oscillation in space and in time. As such, we may think of it as a spacetime oscillation. In any case, physicists often talk about spacetime oscillations without any clear idea of what they actually mean by it, so we may as well try to clarify it in this very particular context here: the explanation of matter in terms of an oscillating pointlike charge. Indeed, the first obvious point to make is that any such perpetual motion may effectively be said to be a spacetime oscillation: it is an oscillation in space – and in time, right?
As such, a planet orbiting some star – think of the Earth orbiting our Sun – may be thought of as a spacetime oscillation too! Am I joking? No, I am not. Let me elaborate this idea. The concept of a spacetime oscillation implies we think of space as something physical, as having an essence of sorts. We talk of a spacetime fabric, a (relativistic) aether or whatever other term comes to mind. The Wikipedia article on aether theories quotes Robert B. Laughlin as follows in this regard: “It is ironic that Einstein’s most creative work, the general theory of relativity, should boil down to conceptualizing space as a medium when his original premise [in special relativity] was that no such medium existed [..] The word ‘ether’ has extremely negative connotations in theoretical physics because of its past association with opposition to relativity. This is unfortunate because, stripped of these connotations, it rather nicely captures the way most physicists actually think about the vacuum.”
I disagree with that. I do not think about the vacuum in such terms: the vacuum is the Cartesian mathematical 3D space in which we imagine stuff to exist. We should not endow this mathematical space with any physical qualities – with some essence. Mathematical concepts are mathematical concepts only. It is the difference between size and distance. Size is physical: an electron – any physical object, really – has a size. But the distance between two points is a mathematical concept only.
The confusion arises from us expressing both in terms of the physical distance unit: a meter, or a pico- or femtometer – whatever is appropriate for the scale of the things that we are looking at. So it is the same thing when we talk about a point: we need to distinguish between a physical point – think of our pointlike charge here – and a mathematical point. That should be the key to understanding matter-particles as spacetime oscillations – if we would want to understand them as such, that is – which is what we are trying to do here. So how should we think of this? Let us start with matter-particles. In our realist interpretation of physics, we think of matter-particles as consisting of charge – in contrast to, say, photons, the particles of light, which (also) carry energy but no charge. Let us consider the electron, because the structure of the proton is very different and may involve a different force: a strong force – as opposed to the electromagnetic force that we are so familiar with. Let me use an animated gif from the Wikipedia Commons repository to recapture the idea of such a (two-dimensional) oscillation.
Think of the green dot as the pointlike charge: it is a physical point moving in a mathematical space – a simple 2D plane, in this case. So it goes from here to there, and here and there are two mathematical points only: points in the 3D Cartesian space which – as H.A. Lorentz pointed out when criticizing the new theories – is a notion without which we cannot imagine any idea in physics. So we have a spacetime oscillation here alright: an oscillation in space, and in time. Oscillations in space are always oscillations in time, obviously – because the idea of an oscillation implies the idea of motion, and the idea of motion always involves the notion of space as well as the notion of time. So what makes this spacetime oscillation different from, say, the Earth orbiting around the Sun?
Perhaps we should answer this question by pointing out the similarities first. A planet orbiting around the sun involves perpetual motion too: there is an interplay between kinetic and potential energy, both of which depend on the distance from the center. Indeed, Earth falls into the Sun, so to speak, and its kinetic energy gets converted into potential energy and vice versa. However, the centripetal force is gravitational, of course. The centripetal force on the pointlike charge is not: there is nothing at the center pulling it. But – Hey! – what is pulling our planet, exactly? We do not believe in virtual gravitons traveling up and down between the Sun and the Earth, do we? So the analogy may not be so bad, after all! It is just a very different force: its structure is different, and it acts on something different: a charge versus mass. That’s it. Nothing more. Nothing less.
Or… Well… Velocities are very different, of course, but even there distinctions are, perhaps, less clear-cut than they appear to be at first. The pointlike charge in our electron has no mass and, therefore, moves at lightspeed. The electron itself, however, acquires mass and, therefore, moves at a fraction of lightspeed only in an atomic or molecular orbital. And much slower in a perpetual current in superconducting material. [Yes. When thinking of electrons in the context of superconduction, we have an added complication: we should think of electron pairs (Cooper pairs) rather than individual electrons, it seems. We are not quite sure what to make of this – except to note electrons will also want to lower their energy by pairing up in atomic or molecular orbitals, and we think the nature of this pairing must, therefore, be the same.]
Did we clarify anything? Maybe. Maybe not. Saying that an electron is a pointlike charge and a two-dimensional oscillation, or saying that it’s a spacetime oscillation itself, appears to be a tautology here, right? Yes. You are right. So what’s the point, then?
We are not sure, except for one thing: when defining particles as spacetime oscillations, we definitely do not need the idea of virtual particles. That’s rubbish: an unnecessary multiplication of concepts. So I think that is some kind of progress we got out of these rather difficult philosophical reflections, and that is useful, I think. To illustrate this point, you may want to think of the concept of heat. When there is heat, there is no empty space. There is no vacuum anymore. When we heat a space, we fill it with photons. They bounce around and get absorbed and re-emitted all of the time. In fact, we, therefore, also need matter to imagine a heated space. Hence, space here is no longer the vacuum: it is full of energy, but this energy is always somewhere – and somewhere specifically: it’s carried by a photon, or (temporarily) stored as an electron orbits around a nucleus in an excited state (which amounts to the same as saying it is being stored by an atom or some molecular structure consisting of atoms). In short, heat is energy but it is being ‘transmitted’ or ‘transported’ through space by photons. Again, the point is that the vacuum itself should not be associated with energy: it is empty. It is a mathematical construct only.
We should try to think this through – even further than we already did – by thinking how photons – or radiation of heat – would disturb perpetual currents: in an atom, obviously (the electron orbitals), but also perpetual superconducting currents at the macro-scale: unless the added heat from the photons is continuously taken away by the supercooling helium or whatever is used, radiation or heat will literally bounce the electrons into a different physical trajectory, so we should effectively associate excited energy states with different patterns of motion: a different oscillation, in other words. So it looks like electrons – or electrons in atomic/molecular orbitals – do go from one state into another (excited) state and back again but, in whatever state they are, we should think of them as being in their own space (and time). So that is the nature of particles as spacetime oscillations then, I guess. Can we say anything more about it?
I am not sure. At this moment, I surely have nothing more to say about it. Some more thinking about how superconduction – at the macro-scale – might actually work could, perhaps, shed more light on it: is there an energy transfer between the two electrons in a Cooper pair? An interplay between kinetic and potential energy? Perhaps the two electrons behave like coupled pendulums? If they do, then we need to answer the question: how, exactly? Is there an exchange of (real) photons, or is the magic of the force the same: some weird interaction in spacetime which we cannot meaningfully analyze any further, but which gives space not only some physicality but also causes us to think of it as being discrete, somehow. Indeed, an electron is an electron: it is a whole. Thinking of it as a pointlike charge in perpetual motion does not make it less of a whole. Likewise, an electron in an atomic orbital is a whole as well: it just occupies more space. But both are particles: they have a size. They are no longer pointlike: they occupy a measurable space: the Cartesian (continuous) mathematical space becomes (discrete) physical space.
I need to add another idea here – or another question for you, if I may. If superconduction can only occur when electrons pair up, then we should probably think of the pairs as some unit too – and a unit that may take up a rather large space. Hence, the idea of a discrete, pointlike, particle becomes somewhat blurred, right? Or, at the very least, it becomes somewhat less absolute, doesn’t it? 🙂
I guess I am getting lost in words here, which is probably worse than getting ‘lost in math‘ (I am just paraphrasing Sabine Hossenfelder here) but, yes, that is why I am writing a blog post rather than a paper here. If you want equations, read my papers. 🙂 Oh – And don’t forget: fields are real as well. They may be relative, but they are real. And it’s not because they are quantized (think of (magnetic) flux quantization in the context of superconductivity, for example) that they are necessarily discrete – that we have field packets, so to speak. I should do a blog post on that. I will. Give me some time. 🙂
Post scriptum: What I wrote above on there not being any exchange of gravitons between an orbiting planet and its central star (or between double stars or whatever gravitational trajectories out there), does not imply I am ruling out their existence. I am a firm believer in the existence of gravitational waves, in fact. We should all be firm believers because – apart from some marginal critics still wondering what was actually being measured – the LIGO detections are real. However, whether or not these waves involve discrete lightlike particles – like photons and, in the case of the strong force, neutrinos – is a very different question. Do I have an opinion on it? I sure do. It is this: when matter gets destroyed or created (remember the LIGO detections involved the creation and/or destruction of matter as black holes merge), gravitational waves must carry some of the energy, and there is no reason to assume that the Planck-Einstein relation would not apply. Hence, we will have energy packets in the gravitational wave as well: the equivalent of photons (and, most probably, of neutrinos), in other words. All of this is, obviously, very speculative. Again, just think of this whole blog post as me freewheeling: the objective is, quite simply, to make you think as hard as I do about these matters. 🙂
As for my remark on the Cooper pairs being a unit or not, that question may be answered by thinking about what happens if Cooper pairs are broken, which is a topic I am not familiar with, so I cannot say anything about it.
Philosophers usually distinguish between form and matter, rather than form and substance. Matter, as opposed to form, is then what is supposed to be formless. However, if there is anything that physics – as a science – has taught us, it is that matter is defined by its form: in fact, it is the form factor which explains the difference between, say, a proton and an electron. So we might say that matter combines substance and form.
Now, we all know what form is: it is a mathematical quality—like the quality of having the shape of a triangle or a cube. But what is (the) substance that matter is made of? It is charge. Electric charge. It comes in various densities and shapes – that is why we think of it as being basically formless – but we can say a few more things about it. One is that it always comes in the same unit: the elementary charge—which may be positive or negative. Another is that the concept of charge is closely related to the concept of a force: a force acts on a charge—always.
We are talking elementary forces here, of course—the electromagnetic force, mainly. What about gravity? And what about the strong force? Attempts to model gravity as some kind of residual force, and the strong force as some kind of electromagnetic force with a different geometry but acting on the very same charge, have not been successful so far—but we should immediately add that mainstream academics never focused on it either, so the result may be commensurate with the effort made: nothing much.
Indeed, Einstein basically explained gravity away by giving us a geometric interpretation for it (general relativity theory) which, as far as I can see, confirms it may be some residual force resulting from the particular layout of positive and negative charge in electrically neutral atomic and molecular structures. As for the strong force, I believe the quark hypothesis – which basically states that partial (non-elementary) charges are, somehow, real – has led mainstream physics into the dead end it finds itself in now. Will it ever get out of it?
I am not sure. It does not matter all that much to me. I am not a mainstream scientist and I have the answers I was looking for. These answers may be temporary, but they are the best I have for the time being. The best quote I can think of right now is this one:
‘We are in the words, and at the same time, apart from them. The words spin out, spin us out, over a void. There, somewhere between us, some words form some answer for some time, allowing us to live more fully in the forgetting face of nonexistence, in the dissolving away of each other.’ (Jacques Lacan, in Jeremy D. Safran (2003), Psychoanalysis and Buddhism: an unfolding dialogue, p. 134)
That says it all, doesn’t it? For the time being, at least. 🙂
Post scriptum: You might think explaining gravity as some kind of residual electromagnetic force should be impossible, but explaining the attractive force inside a nucleus between like charges was pretty difficult as well, until someone came up with a relatively simple idea based on the idea of ring currents. 🙂
According to Warder, culture refers to the behavior and belief characteristics of a particular society, community or ethnic group. Culture matters to the extent that it is normal for different experiences to be felt by the individuals in a given society. It is worth noting here that perspectives on cultural matters usually provide new insight into psychological processes. The experiences we go through in life are facilitated by the culture we live in, because culture provides, or is, the environment which allows all these experiences to take place (Warder, 1996).
Self-concept refers to all understanding and knowledge of oneself. The components of self-concept include psychological, physical and social attitudes, and the ideas and beliefs that one has. The greatest influence on self-concept is the family’s history, basically referring to the culture one has been brought up in and the experiences he or she has undergone.
Our notions of who we are are constant and are quite properly referred to as individual theories that we revise and test according to our own experiences. The implicit theories of oneself may differ systematically between cultures and time periods, which also differ in social roles and in the experiences provided for an individual. It follows that there may be differences in cross-cultural and cross-temporal consumer behavior that occur as a result of differing concepts of oneself (Wendt, 1994).
Various studies have been carried out concerning the impact culture may have on self-concept. One such study was carried out by Erdman (2006) using American and Chinese students, asking them to recall memories and events from their early years of childhood. In this study, Erdman found that early childhood memories were a big part of self-concept. The findings demonstrated that different cultural memories are formed in the early childhood years and persist into adulthood. The differences are shaped both by the extended cultural context, which defines the meaning of the self, and by the immediate family environment.
In conclusion, culture has a great influence on an individual’s life, contributing majorly to the self-concept of an individual. The influence might be either negative or positive depending on the type of culture that one has been brought up in. It is important that individuals study and appreciate their culture and its contribution to shaping their individual personalities.
Erdman (2006). Study of bisexual identity formation.
Warder, A. (1996). Consumption, identity formation and uncertainty. Manchester: Manchester University Press.
Wendt (1994). Collective identity formation and the international state. New York: Routledge.
A goiter is an enlargement of the thyroid. The thyroid is a gland. It produces hormones that help regulate your body’s metabolism. It is located on the front of the neck, right below the Adam’s apple. Goiters are seldom painful. They tend to grow slowly.
There are different types of goiters. This sheet focuses on nontoxic (or sporadic) goiter. It is a type of simple goiter that may be:
- Diffuse—enlarging the whole thyroid gland
- Nodular—enlargement caused by nodules, or lumps, on the thyroid
The development of nodules marks a progression of the goiter. It should be evaluated by your doctor.
The exact causes of nontoxic goiter are not known. In general, goiters may be caused by too much or too little thyroid hormones. There is often normal thyroid function with a nontoxic goiter. Some possible causes of nontoxic goiter include:
- Family history of goiters
- Regular use of medications such as lithium, propylthiouracil, phenylbutazone, or aminoglutethimide
- Taking a lot of substances (goitrogens) that inhibit production of thyroid hormone—common goitrogens include foods such as cabbage, turnips, Brussels sprouts, seaweed, and millet
- Iodine deficiency—though rare in the United States and other developed countries, it is a primary cause of goiter in other parts of the world, particularly in mountainous areas, or areas that experience heavy rainfall or flooding
Nontoxic goiter is more common in women and in people over age 40.
The following factors increase your chance of developing nontoxic goiter:
- A diet low in iodine
- Family history of goiter
- History of radiation therapy to head or neck, especially during childhood
Nontoxic goiters usually do not have noticeable symptoms, unless they become very large. Symptoms may include:
- Swelling of the neck
- Breathing difficulties, coughing, or wheezing with large goiter
- Difficulty swallowing with large goiter
- Feeling of pressure on the neck
You will be asked about your symptoms and medical history. A physical exam will be done. Your doctor may recommend a specialist. An endocrinologist focuses on hormone related issues.
Your body fluids and tissues may be tested. This can be done with:
- Blood tests
- Fine needle aspiration biopsy
Images may be taken of your body structures. This can be done with:
Nontoxic goiters usually grow very slowly. They may not cause any symptoms. In this case, they do not need treatment.
Treatment may be needed if the goiter grows rapidly, affects your neck, or obstructs your breathing.
If a nontoxic goiter progresses to the nodular stage, and the nodule is found to be cancerous, you will need treatment. Talk with your doctor about the best plan for you. Treatment options include the following:
Hormone Suppression Therapy
Thyroid hormone medication is used to suppress secretion of thyrotropin (TSH). TSH is the thyroid-stimulating hormone that causes growth. This therapy is most effective for early stage goiters that have grown due to impaired hormone production. It is less effective for goiters that have progressed to the nodular stage.
Radioactive iodine treatment is used to reduce the size of a large goiter. It is used in the elderly when surgical treatment is not an option.
Thyroidectomy is done to remove part or all of the thyroid gland. It is the treatment of choice if the goiter is so large that it makes it difficult to breathe or swallow.
Flying model rockets is a relatively safe and inexpensive way for students to learn the basics of forces and the response of vehicles to external forces. Like an airplane in flight, a model rocket is subjected to the forces of weight, thrust, and the aerodynamic forces, lift and drag. The relative magnitude and direction of these forces determine the flight trajectory of the rocket.
On this slide we show the events in the flight of a single stage model rocket. Throughout the flight, the weight of a model rocket is fairly constant; only a small amount of solid propellant is burned relative to the weight of the rest of the rocket. This is very different from full scale rockets, in which the propellant weight is a large portion of the vehicle weight. At launch, the thrust of the rocket engine is greater than the weight of the rocket and the net force accelerates the rocket away from the pad. Unlike full scale rockets, model rockets rely on aerodynamics for stability. During launch, the velocity is too small to provide sufficient stability, so a launch rail is used. Leaving the pad, the rocket begins a powered ascent. Thrust is still greater than weight, and the aerodynamic forces of lift and drag now act on the rocket. When the rocket runs out of fuel, it enters a coasting flight. The vehicle slows down under the action of the weight and drag since there is no longer any thrust present. The rocket eventually reaches some maximum altitude, which you can measure using some simple length and angle measurements and trigonometry. The rocket then begins to fall back to earth under the pull of gravity. While the rocket has been coasting, a delay "charge" has been slowly burning in the rocket engine. It produces no thrust, but may produce a small streamer of smoke which makes the rocket more easily visible from the ground. At the end of the delay charge, an ejection charge is ignited which pressurizes the body tube, blows the nose cap off, and deploys the parachute. The rocket then begins a slow descent under parachute to recovery. The forces at work here are the weight of the vehicle and the drag of the parachute. After recovering the rocket, you can replace the engine and fly again.
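As a side note, the altitude measurement mentioned above (simple length and angle measurements and trigonometry) can be sketched in a few lines. The numbers below are hypothetical, and the flight is assumed to be vertical over level ground:

```python
import math

# Single-observer altitude estimate for a model rocket at apogee.
# The observer stands a known distance from the pad and measures the
# elevation angle to the rocket at its highest point.
baseline_m = 100.0   # distance from launch pad to observer (hypothetical)
angle_deg = 55.0     # measured elevation angle at apogee (hypothetical)

altitude_m = baseline_m * math.tan(math.radians(angle_deg))
print(f"Estimated apogee: {altitude_m:.0f} m above launch level")

# Using two observers on opposite sides of the pad and averaging their
# estimates reduces the error introduced when the rocket weathercocks
# away from a purely vertical flight path.
```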
On the graphic, we show the flight path as a large arc through the sky. Ideally, the flight path would be straight up and down; this provides the highest maximum altitude. But model rockets often turn into the wind during powered flight because of an effect called weather cocking. The effect is the result of aerodynamic forces on the rocket and causes the maximum altitude to be slightly less than the optimum.
Middle School Extended Learning
Speech and Debate
Students learn the techniques for building and delivering a successful speech. We will engage in delivering speeches, as well as participating in formal and informal debates. In addition, students will have the necessary skills to critique, analyze and question speeches and debates delivered by others.
Creative writing offers young writers a chance to get their ideas down on paper, play with words, and celebrate their accomplishments. Students will create an original myth, tell a fairy tale from a different point of view, and create a picture book which is shared with kindergarten students, along with a few other projects.
Students learn the science behind some of our favorite things and activities! Is there a plant that can survive on nothing but Coke? Which bouncy ball bounces the highest? How can you make a lava lamp? These questions are answered and more in an exploration of easy, in-class science experiments.
Students will learn a variety of skills and information, including the following topics: social, emotional, and physical health, good decision making and skills practice, proper hygiene, self-esteem, stress management skills, growth and development, disease prevention and safety, drug use and abuse, and nutrition.
Design and Modeling (a Project Lead The Way class)
Students use solid modeling software and are introduced to the design process. Utilizing this design approach, students understand how design influences their lives. Students also learn sketching techniques and use descriptive geometry as a component of design, measurement and computer modeling. Students brainstorm, research, develop ideas, create models, evaluate ideas and communicate results.
Automation and Robotics (a Project Lead The Way class)
Students trace the history, development, and influence of automation and robotics. They learn about mechanical systems, energy transfer, machine automation, and computer control systems. A robust robotics platform is used to design, build and program a solution to solve an existing problem.
Students learn about the various Latin rhythms and Zumba fitness dances.
History of Dance
Students learn about the popular dances throughout history and the dance steps.
This course brings theatrical plays and stories to life through expressions and gestures, without the need for props and costumes. It assists students with their presentation skills in a fun learning environment.
3D Video Game Design
In this elective, students will learn to program video games in three dimensions using a drag-and-drop tool called AgentCubes, developed at the University of Colorado. Students will use creativity, innovation, communication, critical thinking, and problem-solving skills to create and program 3D objects and games they can share with friends.
This course is designed to develop basic skills in design, photography, editing, journalism, managing and marketing. Students create the yearbook and publish a monthly student newspaper.
Lise Whitfield, Bill McMillon, Judy Scotchmoor, Phil Stoffer, DLESE (Digital Library for Earth System Education)
Activity takes three 50-minute class periods. Additional materials are needed for one part of the activity.
Middle School: 1 Disciplinary Core Idea, 3 Science and Engineering Practices
The activity lists the grade level as 6-12; reviewers suggest grades 7-10.
About Teaching Climate Literacy
Addresses climate literacy principle 7a.
Excellence in Environmental Education Guidelines
Addresses: A) Processes that shape the Earth.
Notes From Our Reviewers
The CLEAN collection is hand-picked and rigorously reviewed for scientific accuracy and classroom effectiveness.
Read what our review team had to say about this resource below, or learn more about how CLEAN reviews teaching materials.
Teaching Tips
- A great extension activity in the supplemental lessons is the one on fresh water availability, how it will be affected by sea level rise, and the economic impact.
- Educator should try to discuss the homework from "Activity 3: Mapping Shorelines" to ensure student consideration of the impacts of ice melt and sea level rise. A class discussion may suffice.
- Activity 3 needs more explicit directions for the worksheet students fill out.
- The second activity should include safety precautions for working with sharp instruments.
About the Science
- Activity covers how sea level rise will affect humans, especially fresh water availability and tourism. Students follow multiple exercises that relate to the concept of sea level rise and its impacts.
- Activity offers great insight on how the landforms will change due to sea level rise.
- Google Earth offers some animations that can be used to simulate sea level rise http://services.google.com/earth/kmz/changing_sea_level_n.kmz.
- Comment from the scientist: Sea level rise is a very complex issue, which is heavily based on Earth’s gravity field. The ocean will, therefore, not rise equally in all locations. This should not be the highlight, but may be noted by the educator.
About the Pedagogy
- Lead-ins to activities are very engaging and well organized.
- Lesson is active and integrates thoughtful discussion questions.
- Working in groups and using visualizations and hands-on activities helps to accommodate a variety of learning styles.
- Contains list of linked resources, which are strong and could be incorporated in the lessons.
Technical Details/Ease of Use
- The link to the topographic map is filled with ads; bypass these by clicking (again) on "topofinder" tab. You should not need to register for "trails.com" to gain access to topographic maps.
- The animation of the increased sea level needs directions for use. Also, it should be called an "interactive" not an "animation." Directions for use are as follows: A click and hold will change the sea level when the mouse is moved up and down. Click, hold, and move left to right rotates the globe to view different continents. There is a command on the interactive toolbar that converts the mouse clicks and holds to zoom and lateral movement.
- List of needed materials is very California-centric (e.g. grocery store names).
- Educators should check links and test computer simulation (segment 1) before teaching.
Related URLs
These related sites were noted by our reviewers but have not been reviewed by CLEAN.
- This activity is part of a larger collection which can be found at http://www.teachingboxes.org/seaLevel/index.jsp.
Next Generation Science Standards
See how this Activity supports:
Disciplinary Core Ideas: 1
MS-ESS3.B1: Mapping the history of natural hazards in a region, combined with an understanding of related geologic forces can help forecast the locations and likelihoods of future events.
Science and Engineering Practices: 3
MS-P2.2: Develop or modify a model – based on evidence – to match what happens if a variable or component of a system is changed.
MS-P2.5: Develop and/or use a model to predict and/or describe phenomena.
MS-P4.2: Use graphical displays (e.g., maps, charts, graphs, and/or tables) of large data sets to identify temporal and spatial relationships.
Infographics engage students in non-fiction topics and inspire them to make a difference. Empower students to raise awareness by giving them the space to create their own infographics.
Exploring Mentor Texts
Inspire students about the power of infographics by doing a Google image search on infographics and exploring why people love infographics. Give students freedom to explore while guiding them to understand what makes infographics effective. Analyze the many different ways to visualize data and begin connecting with non-profit organizations to tell their stories.
13 Reasons Why Your Brain Craves Infographics
The Time You Have (in Jellybeans)
Google image search
Infographic Observations Doc
Wine Folly’s Basic Wine Infographic
33 Ways to Visualize data
Water Crisis Fact Sheet
Guide your students to find data for their infographics by helping them to write business letters to non-profit organizations. Continue the search by going to Google and searching the issue along with "statistics." Have students create a sketch of how they plan to visualize the data before moving on to the digital creation.
Elements of Design
How do the elements of design make the data meaningful in infographics? Explore Steve McGriff's "Graphic Design Tips for Creating Infographics" and guide students through putting these design principles into practice. Explore a variety of online tools for finding colors, icons, and text.
THE LEAST luminous star in the sky, a so-called brown dwarf, has been identified. The star is only one twenty-thousandth as bright as the Sun and may contain as little as a twentieth of its mass. It is 30 per cent less luminous than the faintest star yet found. Phillip Ianna of the University of Virginia and Mike Bessell of Mount Stromlo and Siding Spring Observatories in Australia have made new measurements of the star, which was first photographed by Mike Hawkins of the Royal Observatory in Edinburgh in 1988.

By working out the star’s position, about 68 light years from the Sun, Ianna and Bessell have measured its intrinsic brightness – that is, its actual luminosity as opposed to how bright it appears from Earth.

Brown dwarfs are ‘failed’ stars, containing so little mass that nuclear reactions cannot take place in their cores. They may make up the invisible ‘dark matter’ which is thought to compose between 90 and 99 per cent of the total mass of matter in the Universe.
Imagine playing with Lego bricks, each 5 nm in size (for the record, to build a regular Lego brick out of these nano-bricks you would need, roughly, 2 billion billion of them).
This is what researchers at the Cornell School of Chemical and Biomolecular Engineering are doing. In a paper published in Nature they report their results in creating a material through the assembly of nanocrystals (the nano-bricks). At this size you don't need any glue to keep the nano-bricks together: they simply stick to one another, creating a larger crystal. Each nano-brick is made of lead and selenium, and together they show electrical properties that are superior to those of any other semiconductor crystalline material (including silicon) created so far.
By manipulating the assembly of these nanocrystals it is possible to design a material with the desired energy bands. In lay terms, an atom has a bunch of electrons forming a cloud around it. Each electron has a very definite energy level at which it can operate (the Pauli exclusion principle tells us that two electrons cannot occupy the same state; by the way, this is the underlying reason why hitting an object is so painful: your foot's electrons cannot mix with the door jamb's electrons...). The energy bands of a material, however, are the result of the superposition of billions and billions of single-electron energy levels.
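As a toy illustration of that last point (a generic tight-binding model, not the model used in the Nature paper), coupling N identical "bricks", each with a single energy level, spreads that level into a band, and the band fills in as N grows. Python and NumPy are assumed:

```python
import numpy as np

# N identical sites, each with one level E0, coupled to nearest neighbours
# with strength t. Diagonalizing the Hamiltonian shows the single level
# spreading into a band of width approaching 4t as N grows.
E0, t = 0.0, 1.0
for N in (2, 10, 100):
    H = E0 * np.eye(N) + t * (np.eye(N, k=1) + np.eye(N, k=-1))
    levels = np.linalg.eigvalsh(H)
    print(f"N = {N:3d}: levels span [{levels.min():+.2f}, {levels.max():+.2f}]")

# As N grows, the levels fill the interval (E0 - 2t, E0 + 2t) ever more densely,
# which is the sense in which many single-electron levels become a band.
```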
These energy bands determine the electrical properties of the material, and by engineering the assembly of such small bricks you can finely tune the energy bands to suit your needs. Hence you can create materials with just the right characteristics to absorb certain radio frequencies (or light wavelengths...), increasing the efficiency of electronic components, something that is crucial in IoT, where energy efficiency can make the difference between cost affordability or not.
Again, another example of the power of nanotech that will change our world in the next decade.
- Jupiter is the largest planet in the Solar System and has at least 16 large moons and even more very small ones.
- Four of Jupiter’s moons, or satellites, can be seen using a small telescope or even a pair of binoculars.
- The four moons were first seen and described by Galileo four hundred years ago, in 1610.
- They were later given the names, Io, Europa, Ganymede and Callisto, all of which are names taken from Greek mythology.
- They cannot be seen with the naked eye.
Figure 1. Jupiter and the 4 Galilean moons seen through a small telescope
- Of these four moons Io is the moon which is closest to Jupiter and, like all the other moons, orbits around the planet.
- “Moon” is another word for “satellite”, which means a body in outer space which revolves, or turns around another body.
- Io is a little larger than our own moon, with a diameter of about 2262 miles (3643 kilometres) and is made of rock.
- Io orbits Jupiter at a distance of about 262,094 miles (421,800 kilometres) from the planet.
- Io orbits round Jupiter at a speed of 38,764 miles (62,423 kilometres) per hour, which is faster than the other moons. The speed of its orbit is due to both Europa and Ganymede exerting a pull of gravity on Io.
- Io orbits Jupiter four times for every single orbit that Ganymede makes and every two orbits that Europa makes.
- She travels 1,645,796 miles (2,650,236 kilometres) in her orbit and takes only 1.7 of our Earth days to complete a single orbit.
- As Io turns round the planet, she keeps the same face turned towards Jupiter, so in the course of one orbit, Io only turns once on her axis.
- Several NASA space missions to Jupiter have given close up photographs and information about these “Galilean moons”, such as this photograph taken from Apollo Mission 4.
Figure 2. Images of the four Galilean moons from the Apollo spacecraft.
- The edge of Jupiter can be seen in the bottom right. Io is the large moon in the bottom left, closest to Jupiter.
- Io is so close to Jupiter that the pull of Jupiter’s gravity causes continual disturbance to the moon’s surface.
- Volcanic eruptions constantly cover the surface of Io with new coatings of sulphuric lava.
- Io is the most volcanically active body in the whole solar system.
- The volcanoes are driven by silicate magma.
- With volcanoes constantly erupting and spilling out hot lava, Io’s surface is so hot that any moisture is driven off it.
- Because of the constant volcanic eruptions, the appearance of Io is constantly changing.
- NASA describes Io as “Looking like a giant pizza covered with melted cheese and splotches of tomato and ripe olives”.
Figure 3. A NASA image of Io's surface
- Io, in Greek mythology, was a Princess of Argos. She was loved by Zeus, the sky god of the Greeks (known to the Romans as Jupiter).
- Zeus’ wife, Hera (known to the Romans as Juno) was jealous of Io, so Zeus turned Io into a young cow to escape from Hera, but Hera set Argus, a herdsman with 100 eyes, to watch Io.
- Zeus (Jupiter) killed Argus and Hera (Juno) then sent a gadfly, an insect that stings cattle, to drive Io away.
- In the heavens, however, the moon Io no
Significant immigration from the Caribbean began in 1948. It accelerated during the 1950s at a time when immigrants from India and Pakistan were also arriving in large numbers. Around a quarter of a million black and Asian immigrants arrived in this period. 136,000 immigrants from the 'New' Commonwealth entered Britain in 1961.
During the 1960s, because of high levels of discrimination and exclusion, and relatively low housing costs in inner cities, black and Asian immigrants settled mainly in inner city areas - in particular London, the West Midlands and West Yorkshire. This had consequences for relations with the indigenous population and led to race riots occurring in London (Notting Hill) and Nottingham during 1958. A number of sociological studies subsequently demonstrated that black and Asian immigrants were subject to discrimination in employment and housing. Race relations therefore became an essential element of social policy in the 1960s.
Before its victory in the 1964 general election, Labour opposed immigration controls. However, poor results (in particular when the Labour shadow Foreign Secretary, Patrick Gordon Walker, was defeated at Smethwick in the West Midlands) highlighted the strength of the anti-immigration lobby. Labour subsequently decided to tighten immigration controls. |
Most folks have heard about global warming and what it is doing to our world, including our oceans. Global warming is a form of climate change: a slow and steady rise in the temperature of the earth's atmosphere, land, and oceans that is believed to be changing the earth's climate.
Climate change involves rapidly shifting temperatures and unpredictable weather patterns on a huge scale. These changes are driven by rising concentrations of gases that trap heat in the atmosphere, known as greenhouse gases. The most abundant of these emissions is carbon dioxide.
The increased volumes of carbon dioxide and other gases released by the burning of fossil fuels, land clearing, agriculture, and other human activities are thought to be the most important sources of the global warming that has occurred over the past fifty years.
Ocean acidification has dangerous and harmful effects on the earth's marine environment. The absorption of carbon dioxide by the earth's oceans is increasing their acidity, causing damaging, long-term destruction to the oceans' coral reefs, which dissolve as their calcification is reduced.
Changes in the earth's ocean environment are not often seen or felt, so it is essential to discuss the importance of this process for coral reefs and the dangerous consequences of global warming. Coral reefs are the most biodiverse ecosystems in the oceans. They are estimated to shelter around one-third of all marine species, and about 500 million people rely on coral reefs for food, income, and medicines. Coral reefs also act as barriers during severe weather.
Human activity is causing the earth to get warmer and warmer, especially through the burning of fossil fuels and deforestation, the clear cutting of forests. When we dig up and burn fossil fuels, like coal and petroleum, we release carbon dioxide and other gases into the atmosphere. Clearing forests also releases large amounts of carbon dioxide around the world. The future of coral reefs is threatened by both human activity and natural disruptions.
Typical ocean pH levels vary with the surrounding environment. When the pH of water falls below 5.0 or rises above 9.6, harmful effects become noticeable, and pH levels below 7.6 may cause coral reefs to dissolve as a result of the lack of calcium carbonate. Efforts to ease global warming and ocean acidification by reducing emissions have so far been unsuccessful, so researchers have become more interested in climate engineering as a way to prevent the dangerous consequences of climate change.
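Because pH is a logarithmic scale, even a small drop represents a large relative increase in hydrogen-ion concentration. The sketch below illustrates this in plain Python; the example values (a fall from pH 8.2 to 8.1, roughly the change in average surface-ocean pH often cited since pre-industrial times) are illustrative assumptions, not measurements from any particular reef.

```python
def hydrogen_ion_concentration(ph: float) -> float:
    """Convert pH to hydrogen-ion concentration in moles per litre."""
    return 10 ** (-ph)

before, after = 8.2, 8.1   # illustrative pre-industrial vs. present-day surface-ocean pH
increase = hydrogen_ion_concentration(after) / hydrogen_ion_concentration(before) - 1
print(f"A pH drop from {before} to {after} means about {increase:.0%} more hydrogen ions.")
# Prints roughly 26% -- which is why a 'small' pH change matters so much to calcifying corals.
```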
Artificial ocean alkalization has been studied as a way to reduce local ocean acidification and to protect coral reef ecosystems. Several studies have focused on the possibility of changing ocean pH by increasing water alkalinity. In these studies, alkalizing agents such as calcium carbonate or calcium hydroxide were used to raise the oceans' alkalinity and assess their potential for safeguarding coral reefs against ocean acidification. This approach is plausible but uncertain because of the constant variation in carbon dioxide levels from season to season and from day to night, and because of the variety of species and their differing capacities to adapt. Also, increasing the ocean's surface pH stimulates additional absorption of carbon dioxide.
In another study, a team of international experts, including a Texas A&M University researcher, examined One Tree Reef, off the Australian coast. The team added sodium hydroxide to the water to reduce its acidity and increase its alkalinity. With the increase in water pH, the reef grew more quickly over the course of the experiment. The scientists concluded that it is possible to increase the growth of coral reefs if ocean acidification is reversed.
In addition, according to a new study published in the journal Environmental Science & Technology, blowing tiny bubbles through seawater could remove carbon dioxide from the water and help offset (counteract) ocean acidification. However, installing bubblers everywhere that coral reefs are found would be expensive.
The notion of increasing the alkalinity of ocean water to protect and maintain coral reefs is like turning back the clock more than 100 years. Back then, the level of carbon dioxide in the atmosphere was lower, and the oceans were much healthier. The best solution would be to stop emitting carbon dioxide and prevent ocean acidification. |
Everything that is alive is made of one or more cells. The cell is the smallest unit of life, and there are two main types:
Prokaryotes = bacteria and their bacteria-like cousins Archaea
Eukaryotes = the more complex cells of animals, plants, fungi, protozoans, algae, and slime and water molds

Article Summary: The cells of plants are eukaryotic, with a nucleus, a vacuole, membrane-bound organelles and a cell wall. Here's a summary of the structure and function of plant cells.

Plant Cell Parts, Functions & Diagrams
Although eukaryotic cells share many characteristics, there are also specializations that make plant cells unique. The following is a rundown of the main features differentiating plant cells from other eukaryotes.
Structures Present in Plant Cells & Absent in Animal Cells
Cell wall: Plant cells have protective cell walls, composed mainly of structural carbohydrates. The cell wall provides support, helps maintain cell shape, and prevents the cell from taking on too much water and bursting. The cell wall is not a feature unique to plants; bacteria, fungi and some protists also have cell walls. But unlike the cell walls of bacteria and fungi, plant cell walls are composed of different types of carbohydrates—cellulose and hemicellulose—and structurally consist of three layers: an outer primary cell wall, a sticky pectin layer called the middle lamella, and a secondary cell wall, closest to the plasma membrane.
Central vacuole: The central vacuole takes up most of the space within a plant cell. Defined by a membrane called the tonoplast, the central vacuole functions as a holding tank for water and other molecules used by the cell. When full of water, the vacuole presses the other cell contents against the boundary of the cell.
Chloroplasts: These double-membrane-bound organelles contain the green pigment chlorophyll, which captures sunlight energy so that the cell can produce its own food, a process called photosynthesis. Chloroplasts are just one type of plastid organelle common to plant cells. Some plastids function in food storage; others house different types of pigments that impart colors other than green to plants.
Turgor pressure, or turgidity, is the pressure of the cell contents against the cell wall in plant cells. It is determined by the water content of the vacuole and results from osmotic pressure.
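The turgor pressure described here is driven by osmosis, and its rough size can be estimated with the van 't Hoff relation Π = iMRT. The sketch below is illustrative only: the solute concentration and the assumption of non-dissociating solutes are round-number guesses, not measured values for any particular plant cell.

```python
R = 0.08206          # ideal gas constant, in L·atm/(mol·K)
temperature_k = 298  # room temperature, in kelvin
molarity = 0.3       # assumed total solute concentration inside the vacuole, in mol/L
van_t_hoff_i = 1     # assume non-dissociating solutes for simplicity

# van 't Hoff relation: osmotic pressure = i * M * R * T
osmotic_pressure_atm = van_t_hoff_i * molarity * R * temperature_k
print(f"Estimated osmotic pressure: {osmotic_pressure_atm:.1f} atm")
# Roughly 7 atm -- several times atmospheric pressure, which is why the rigid cell wall
# is needed to keep the cell from bursting when the vacuole is full of water.
```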
The most dramatic images available from the foundry are those of melting and pouring. Huge furnaces, glowing with heat, transform chunks of metal into a flowing fiery liquid. When ready, their contents are poured into waiting ladles amid a shower of sparks. Workers guide the flow of metal from furnace to mold behind heat shields, guarding against the dangers of the temperature and materials. The foundry floor is where design becomes actual, in an extraordinary process that creates everyday objects.
Metal melting furnaces
The manufacture of cast metal relies on furnace technologies, since metals usually melt at very high temperatures. The first metals smelted from ore in human history were lead and tin, which can be melted in the heat of a cookfire. After that, metallurgy needed something more than a wood flame. The first copper may have been smelted accidentally in a pottery kiln, which runs at least 200°C hotter than a campfire, but the lack of a written record makes it hard to be sure.
Blast furnaces, which are very tall furnaces injected with pressurized gases, are commonly associated with metal working, but they are used for extracting iron and some other metals from their ores. Usually, a blast furnace casts only ingots of an intermediate alloy, which is then shipped to foundries involved in manufacturing.
Manufacturing foundries take metal alloys and additives and melt them to make specific grades of cast metal. Traditionally, cupola and crucible furnaces were the most common ways to melt metals for casting; in the modern day, electric arc and induction furnaces are common.
Crucible furnaces are the most basic form of metal furnace. A crucible is a vessel made of material that can handle incredibly high temperatures, often ceramic or another refractory material. It is placed into the source of heat much as a pot might sit in a fire. The crucible is filled, or charged, with metal and additives. In the modern era, crucible furnaces are still in use by jewelry makers, backyard hobbyists, some non-ferrous foundries, and foundries doing very small batch work. Crucibles can range from a very small cup in which metals are melted by blowtorch, as might be done at a jeweler’s, to large vessels that hold 50 lbs of material. Larger crucibles are often put inside a kiln-like furnace and can be lifted out for pouring, or have material ladled off the top.
Cupola furnaces are long, chimney-like furnaces which are filled with coal-coke and other additives. The fuel inside the cupola is lit, and when the furnace is sufficiently hot, pig iron and scrap iron are added directly. The process of melting the iron around the coke and additives adds carbon and other elements and produces different grades of iron and steel. Cupola furnaces are no longer commonly used in production, as electric arc and induction methods are more efficient at producing the needed heat. However, there are some places where tradition keeps the cupola furnaces running, as in this video of Da Shu Hua, where Chinese foundry workers throw molten iron against a wall to create dramatic sparks to welcome in the New Year.
Electric arc furnaces
(EAFs) came into use in the late 1800s. Electrodes run electrical current through the metal inside the furnace, which is more effective than adding external heat when melting high volumes at one time. A large EAF used in steel production can hold up to 400 tons. A “charge” of this steel is often made of heavy iron like slabs and beams, shredded scrap from cars and other recycling, and pig iron ingots from a smelter.
After the tank is filled, electrodes are placed into the metal, and an arc of electricity passes between them. As the metal begins to melt, the electrodes may be pushed farther into the mix or pulled apart to create a larger arc. Heat and oxygen might be added to speed the process. As molten metal starts to form, the voltage can be turned up, as the slag created on top of the metal acts like a protective blanket for the roof and other components of the EAF.
When everything is melted, the whole furnace is tilted, to discharge the liquid metal to a ladle below. Sometimes the ladles themselves can be smaller EAF furnaces, tasked with keeping the metal hot before pouring.
Induction furnaces work with magnetic fields rather than with electrical arcs. Metal is charged into a crucible surrounded by a powerful electromagnet made of coiled copper. When the induction furnace is turned on, the coil creates a rapidly reversing magnetic field by the introduction of an alternating current. As the metal melts, the electromagnet creates eddies within the liquid that self-stir the material. The heat in an induction furnace is created by the excitation of the molecules in the iron itself, meaning that whatever goes into the crucible is exactly what comes out: there is no addition of oxygen or other gases to the system. This means fewer variables to control during melting, but it also means that an induction furnace cannot be used to refine steel. What goes in comes out. Like an EAF, induction furnaces often discharge by tilting into ladles below.
Induction furnaces are very common and are simple to operate when given high quality input. Common models can produce 65 tons of steel at each charge.
All furnaces on the foundry floor face a fatal enemy: steam. Water, even in small amounts, can cause splashing or explosions, and so all scrap and ferroalloys, as well as every tool used in production, must be dry before use. Scrap metal must not have any closed areas in which water or vapor may have been trapped. Even the tools used by the foundry workers must be free of condensation or moisture. Many foundries have a drying oven to make sure that scrap and tools are bone dry before anything touches the casting furnace.
After metal is melted, it must be introduced to the mold. In smaller foundries, this may all happen in one stage: a tilting or lift-out crucible may take metal from the furnace to the sand. However, this is impractical when a furnace holds many tons of metal. Typically, in ferrous manufacturing, ladles transfer smaller portions of the melt from the main furnace.
In these systems, a ladle may bring metal straight to the mold. However, a transfer ladle might take the liquid to a holding tank or secondary furnace. Treatment ladles are another available type, used to break the melt into portions, like a baker might separate a basic dough to use it as the base for other recipes. For example, liquid cast iron may have agents added in the treatment ladle to make the carbon within it spherical in shape, rather than flaked, creating a more malleable metal called ductile iron.
Ladles can be very small and lifted by foundry workers or they can hold many tons of metal and need mechanical support. The largest ladles are moved through a foundry by ladle-car or by an overhead crane or track system.
Ladles of all sorts are designed to protect the worker from splash, flames, or sparks while pouring. Some ladles pour over the top lip, or a pour spout, and need to be tilted: these often have gears that allow the foundry worker to carefully control the rate of pouring. Other ladles have their pouring spout at the bottom of the bucket and the pour is controlled by removing and replacing a plug.
Metal alloys are made of mixtures of elements which are standardized by a formula that specifies the percentages of each type as well as the steps taken in its manufacture. The melting furnaces and treatment ladles of a foundry are where these alloy types are created for castings.
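To make the idea of a standardized alloy formula concrete, here is a minimal sketch of how such a specification might be represented and checked in code. The grade name and the percentage ranges are invented for illustration and do not correspond to any published standard.

```python
from dataclasses import dataclass

@dataclass
class AlloySpec:
    """A hypothetical alloy grade: element symbol -> (min %, max %) by mass."""
    name: str
    composition: dict[str, tuple[float, float]]

    def check(self, batch: dict[str, float]) -> bool:
        """Return True if every specified element in the batch falls inside its allowed range."""
        return all(lo <= batch.get(element, 0.0) <= hi
                   for element, (lo, hi) in self.composition.items())

# Invented example grade, loosely in the spirit of a grey cast iron
example_grade = AlloySpec(
    name="EXAMPLE-CI-01",
    composition={"C": (3.0, 3.6), "Si": (1.8, 2.4), "Mn": (0.5, 0.9), "Fe": (92.0, 95.0)},
)

melt_sample = {"C": 3.3, "Si": 2.1, "Mn": 0.7, "Fe": 93.9}
print(example_grade.check(melt_sample))   # True: this melt meets the invented specification
```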
Foundries often specialize in either ferrous alloys, which contain iron, or specific non-ferrous alloys, like precious metals, copper-based, or aluminum-based alloys.
Non-ferrous alloys include all other metals, so it is not surprising there is further specialization in non-ferrous foundries. Some places specialize in zinc, some in aluminum; others work primarily with copper-based alloys like brass and bronze. However, there is crossover. If a particular foundry works with both bronze and aluminum, for example, they will likely specialize in certain grades of each.
Whatever alloys a foundry works with, the premise of making molten metal and casting into voids to shape it is the same. An idea becomes actual the moment that metal flows into a mold. |
Relationships: Lesson Plan for Ages 2-6
How to Teach Relationships to Preschoolers
When you were a kid, do you wish someone had taught you how to protect and support your personal space? Yes? Well, here's a lesson to pass on to the children you know. Littles will learn about "Superhero Me" and will practice being the superhero of their space. Other concepts they will be introduced to are who their family members are, how to be respectful, and who the supporters and protectors are from their community.
This article includes an Inform section that is the instruction the teacher will provide the students. The Explore and Activity sections elaborate on the lesson but allow the students some hands-on learning.
This section of the lesson plan contains the basics of the lesson. There is a slideshow video that you can access at the link below which contains the rest of the slides and a video. The linked page also has additional content that goes along with the lesson.
A relationship is between you and people who protect and support you.
Family and Friends
Support means that you get help when needed. Protect means to keep something safe. You are at the center of the circle, and all the people around you are there to protect and support you. Family can protect and support you. Friends can protect and support you. Neighbors and people in your community can protect and support you.
YOU are in charge of the space around you!
Everything in your space is yours to support and protect. This means you are the superhero of your space. How can you protect your space? You tell people your feelings! You use your words to say how you feel. Superhero Me will sometimes need help protecting and supporting their personal space. Family, friends, and your community can help protect and support you.
Family Protects and Supports You
Superhero Me Shares with Family
The people in your family can be your mom, dad, brother or sister. You can also have stepfamily. Other family members are uncles, aunts, grandma, grandpa, and cousins.
While you are growing up with your family, you will learn how to share your feelings. Feelings or emotions you might feel are happy, sad, surprised, angry, disgusted, or afraid.
Neighbors and Friends Protect and Support You
Neighbors are the people who live around you. Friends are people you meet who you like to be with. We should be respectful of our families, friends, and neighbors. Ways to be respectful are to say “please” when you want something and to say “thank you” if you get it. You can show respect by saying “excuse me” when you need attention. You can show respect by saying, “I’m sorry” if you hurt someone.
People in Your Community Support and Protect You
Teacher, Coach, Police Officer, and Doctor
Sometimes you will need other people to protect and support you. Your teacher will help you learn. A police officer will help you if you are in danger. A doctor will help you if you are sick. These people are helpers in your community. There are many more. If you are in need, look for a helper. They will protect or support you.
Look for people in your friends, neighbors, or community who will support and protect you.
This part of the lesson takes the students a little further into what they've been learning. In this case, they will be able to better identify family members.
Make a finger family video with little puppets.
Grampa Finger, where are you?
Hide the finger family members in plastic eggs to do an egg reveal. Kids get to name who is coming out of the egg. When the whole family is out, do the finger family song. Catch the whole thing on video and re-play it. Kids love watching themselves, and it also reinforces this part of the lesson.
The activity gets the kids creating and moving. Take a moment to imagine a room full of superhero preschoolers with capes and hula hoops!
Use a hula hoop to teach personal space, and make a superhero cape to teach kids how to be the “Superhero Me” of their space.
Make a superhero cape from felt. Cut some lightning bolts and stars out of different colors of felt and then use fabric glue to attach them to a larger piece of felt shaped like a cape. You can also get a foam mask kit to make the superhero mask, too. Move a hula hoop up and down while the child stands in the middle to show them where their personal space is. Let them know that “Superhero Me” protects this space, and if they aren’t able to, they need to ask an adult for support. |
2A + B = 18
B + C = 12
3A = 15
My third graders can! However it looks more like this:
My students think this is great fun. They have no idea they are exploring linear functions or algebraic relationships. All they know is that these problems make them think and they seem to like that.
I usually introduce algebraic thinking problems to third grade students during our unit on multiplication and division. As you know, this topic does go on for quite some time and it can get a little, dare I say, dull. Algebraic reasoning problems give young students a chance to apply their knowledge of basic math facts to fairly complex problems. Problems like this inspire young minds and satisfy their need for a greater challenge. My students are incredibly proud when they are able to solve one of these math problems successfully.
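For anyone who wants to check the scale problems at the top of this post, the same substitution steps the students use can be written out in a few lines. A minimal sketch in Python:

```python
# 3A = 15  ->  A is 15 split into 3 equal parts
A = 15 // 3            # A = 5

# 2A + B = 18  ->  take away the two A's to find B
B = 18 - 2 * A         # B = 8

# B + C = 12  ->  take away B to find C
C = 12 - B             # C = 4

assert (2 * A + B, B + C, 3 * A) == (18, 12, 15)
print(A, B, C)         # 5 8 4
```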
To make things even more interesting, I ask my students to create their own scale problems. We begin with two scales which I improvise with pieces of plain copy paper. I then give the students a variety of objects such as base ten blocks, colored cubes, and geometric tiles. They choose two types of objects to work with and begin creating their scale problems. They have to decide upon a value for each scale and then check it to make sure it works. After that, the students switch places and try to solve the problem. It's one of their favorite activities and it gives me great joy to see them so actively engaged in problem solving.
Give it a try. You won't be disappointed! |
Catching the Blue Streak: Following the Cerulean Warbler on its Trans-continental Migration
Birds often reveal their presence to us through their vocalizations, before we even have a chance to see the bird itself. This is especially true of the Cerulean Warbler, whose buzzy song is delivered emphatically from high in the canopy of mature trees. Craning your neck to catch a glimpse of a singing male is well worth it: the Cerulean Warbler takes its name from the male’s rich blue plumage. In Virginia, this tiny warbler breeds in large forests in the mountains, making an annual long-distance migration from its wintering grounds in Latin America.
Unfortunately, populations of Cerulean Warblers have declined dramatically over the last few decades. According to Partners in Flight, their populations fell by 72% between 1970 and 2014. The Virginia Wildlife Action Plan lists Cerulean Warbler as a Species of Greatest Conservation Need, tying its declines to loss of breeding habitat. Nationally, Cerulean Warblers breed in hardwood forests on slopes, ridge tops and along rivers in the Appalachians and Midwestern states. However, given that these birds spend their winters in Central and South America, it is worth investigating whether factors there are also impacting the bird’s populations. A team of researchers working across ten states is beginning this important study by first determining exactly where Cerulean Warblers’ wintering grounds are located. Here in Virginia, the research team consists of a partnership between the Department of Game & Inland Fisheries (DGIF), Virginia Commonwealth University (VCU Department of Biology and Center for Environmental Studies), The Nature Conservancy (TNC) and the Virginia Society of Ornithology (VSO).
The Cerulean Warbler study relies heavily on technology; researchers outfit the birds with light-level geolocator units, tiny data-collecting devices that allow researchers to roughly track the bird’s whereabouts over a year. Because the units cannot transmit data, researchers must retrieve the geolocators in order to download and analyze the information collected. VCU leads the hands-on field work in Virginia, with the other partners (DGIF, TNC, and VSO) contributing funding for the purchase of the geolocators and support of the field work. The project took place at DGIF’s Gathright Wildlife Management Area in Bath County, where strategic forest management over the past few years has contributed to maintaining an abundance of Cerulean Warblers. In spring of 2017, 13 geolocators were deployed and an additional 14 individual (control) birds were color banded over the course of a short but intense few weeks. This past May and June, an effort was made to collect the units by recapturing the birds, fresh from their return from their wintering grounds. Here in Virginia, over 100 unbanded birds were observed throughout Gathright and the surrounding area. However, only one bird bearing a geolocator was found (and successfully captured in its same territory from the previous season) by VCU; and only two banded birds without geolocators were observed. Similar low numbers were reported from other states involved in the larger study. We do not know why so few birds captured in 2017 returned to the study area. However, the single geolocator unit retrieved in Virginia holds a wealth of data on that bird’s movements over the course of a year, and we eagerly await results from the analysis. A similar Virginia project involving Golden-winged Warblers, another declining long-distance migrant, has yielded impressive results and some insights into the causes of its declines.
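Light-level geolocators work from a simple idea: the sunrise and sunset times recorded by the tag give day length, which constrains latitude, and the time of local solar noon, which constrains longitude. The sketch below shows only the longitude half of that idea, with an invented solar-noon reading; real geolocator analysis uses specialized software and careful calibration, so treat this as an illustration of the principle rather than the project's actual method.

```python
from datetime import datetime, timezone

def longitude_from_solar_noon(solar_noon_utc: datetime) -> float:
    """Rough longitude estimate: the sun crosses 15 degrees of longitude per hour,
    so a local solar noon later than 12:00 UTC implies a position west of Greenwich."""
    hours_after_noon_utc = (solar_noon_utc.hour + solar_noon_utc.minute / 60.0) - 12.0
    return -hours_after_noon_utc * 15.0   # degrees; negative values are west of Greenwich

# Invented example reading: the tag's light curve puts local solar noon at 17:12 UTC,
# which works out to roughly 78 degrees west -- the right ballpark for western Virginia.
reading = datetime(2017, 6, 1, 17, 12, tzinfo=timezone.utc)
print(f"{longitude_from_solar_noon(reading):.0f} degrees")
```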
How to help the Cerulean Warbler
- Participate in Virginia’s Second Breeding Bird Atlas, now in its third of 5 years, to help document the breeding status and distribution of Cerulean Warblers and many other bird species in the Commonwealth.
- Consider donating to DGIF’s Non-Game Fund, so that we can continue funding projects, such as this, that contribute to cutting edge research on Virginia’s Species of Greatest Conservation Need.
- Drink shade-grown coffee. Shade coffee plants in Latin America are grown under tall trees, which provides habitat for the Cerulean Warbler (and many other species) on its wintering grounds. The alternative, ‘sun coffee’, is grown as a row crop and is contributing heavily to deforestation. |
coffin, closed receptacle for a corpse. Its purpose is usually to protect and to aid preservation of the body, although in the past some have believed that it may confine the spirit of the deceased. Bark, skins, and mats were commonly used in primitive societies to wrap the body prior to burial. Peoples living near rivers or oceans often buried their dead in canoes, and hollowed oak coffins have been found in Bronze Age barrows. The Chaldaeans and the early Greeks enclosed a corpse in clay, sealing the coffin by firing it. The largest known stone coffins (see sarcophagus) are Egyptian. Wood and papier-mâché were also used in Egypt for mummy chests. Coffins lined with metal, usually lead, came into use in the Middle Ages. Most coffins used in the Western world today are made of elm or oak and are lined with bronze, copper, lead, or zinc.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
Discoid lupus erythematosus (DLE) is a chronic skin condition of sores with inflammation and scarring, favoring the face, ears, and scalp, and at times other body areas. Lupus is called a multisystem disease because it can affect many different tissues and organs in the body; sometimes it can cause disease of the skin, heart, lungs, kidneys, joints, and/or nervous system. When only the skin is involved, the condition is called discoid lupus. When internal organs are involved, the condition is called systemic lupus erythematosus (SLE). SLE affects people of all ages, but females are affected more often than males, and young women represent a large group of SLE sufferers. The autoimmune disease lupus is thought to affect up to 60,000 people in the UK, mostly women, and people of Afro-Caribbean and Asian origin are more likely to develop SLE than Caucasians.
It is likely that a combination of genetic, environmental, and possibly hormonal factors work together to cause the disease. Scientists are making progress in understanding lupus, as described here and in the "Current Research" section of this booklet. It is not known why this inflammatory reaction begins, but it probably occurs because of some combination of inborn or hereditary predispositions and environmental factors. Recent research suggests that people affected by lupus may have a defect in the normal biological process of clearing old and damaged cells from the body, which then causes an abnormal stimulation of the immune system.
There are many possible symptoms.
- Skin rash
- Muscle aches
- Vomiting and diarrhea
- Kidney problems (protein leak)
- Central nervous system problems
- Blood problems (anemia)
Patients on Plaquenil need an eye exam once a year to check for damage to the retina, as well as periodic blood work. Closely related drugs (Aralen, Quinacrine) may be more effective but have more side effects. Other drugs, such as Accutane and Soriatane, can also be used.
In dogs, cutaneous lupus erythematosus (CLE) is treated with relatively low doses of steroids plus vitamin E and fatty acid supplements. Treatment generally needs to be lifelong, and dogs usually do well on it.
Other drugs called immunosuppressives (including azathioprine and cyclophosphamide) may be used to treat disease affecting the major organ systems and to reduce the amount of steroids required. If you are taking this combination of drug therapy you will have regular safety screening tests, eg blood tests. More information on immunosuppressive treatment is available.
A combination of rest, especially during flares, and exercise for joints and muscles is important and should be supervised by the treating physician and physical therapists. |
Using Technology Designed for Mars to Look at Earth's Deserts
An international team led by JPL has used radar sounding technology developed to explore the subsurface of Mars to create high-resolution maps of freshwater aquifers buried deep beneath a desert on Earth.
Over a span of two weeks, the researchers flew a helicopter equipped with a radar sounding prototype provided by Caltech and the Institut de Physique du Globe de Paris over two well-known subsurface aquifers in northern Kuwait. The radar successfully located the aquifers and was able to probe variations in the depth of the water table and identify locations where water flowed into and out of the aquifers.
"By mapping desert aquifers with this technology, we can detect layers deposited by ancient geological processes and trace back paleoclimatic conditions that existed thousands of years ago, when many of today's deserts were wet," says Essam Heggy, the team's leader and a research scientist at JPL.
The radar sounding prototype shares similar characteristics with two instruments flying on Mars-orbiting spacecraft: Mars Advanced Radar for Subsurface and Ionospheric Sounding (MARSIS), on the European Space Agency's Mars Express, and Shallow Radar (SHARAD), on NASA's Mars Reconnaissance Orbiter. Both instruments have found evidence of ice in the Martian subsurface, but have not yet detected liquid water. The Kuwait results may lead to revised interpretations of data from these two instruments.
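The basic quantity a radar sounder measures is two-way travel time: the delay between the transmitted pulse and the echo from a subsurface reflector such as a water table. Converting that delay to a depth requires assuming how fast the radar wave travels in the ground, which depends on the material's relative permittivity. The sketch below shows the standard conversion with illustrative numbers only; they are not values from the Kuwait survey.

```python
C_VACUUM = 3.0e8   # speed of light in vacuum, m/s

def reflector_depth(two_way_time_s: float, relative_permittivity: float) -> float:
    """Depth of a subsurface reflector from two-way radar travel time.
    The wave speed in the ground is c divided by the square root of the relative
    permittivity, and the pulse travels down and back, hence the division by two."""
    wave_speed = C_VACUUM / relative_permittivity ** 0.5
    return wave_speed * two_way_time_s / 2.0

# Illustrative numbers: a 2-microsecond echo delay in dry sand (relative permittivity ~ 4)
print(f"{reflector_depth(2e-6, 4.0):.0f} m")   # about 150 m
```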
The study was cofunded by Caltech's Keck Institute for Space Studies and the Kuwait Institute for Scientific Research (KISR), in Kuwait City. |
1. Classroom Activities
A basic skill in teaching English as a foreign language is being able to prepare, set up and run a single classroom activity, for example a game, a communication task or a discussion. This chapter looks at some typical activities and considers one in detail.
The following activities would all be possible to use:
a. A whole-class discussion of ideas and answers
b. Individual written homework
c. A dictation
d. Students prepare a short dramatic sketch.
Each of these activities is possible using the same material in different ways, for example:
- The class discuss the problems and possible solutions.
- The students write their feelings about the situations at home, or perhaps turn them into a story.
- The teacher dictates a situational description to the students and then invites one student to invent and dictate the first line of the dialogue; then another student does line two, and so on. Students make up dialogues in pairs and perform them.
2. Four kinds of Lesson
A complete lesson may consist of a single long activity, or it may have a number of shorter activities within it. These activities may have different aims; they may also, when viewed together, give the entire lesson an overall objective.
Here is a description of the four basic lesson types.
a. Logical Line
In this lesson there is a clear attempt to follow a ‘logical’ path from one activity to the next. Activity A leads to activity B, which leads to activity C. Activity C builds on what has been done in activity B, which itself builds on what has been done in activity A.
In work on language skill, the sequence of activities often moves from overview towards work on specific details. For example, the learners move gradually from a general understanding of a reading text to detailed comprehension and study of items within it.
b. Topic Umbrella
In this kind of lesson, a topic (e.g. rainforests, education, weather or good management) provides the main focal point for student work. The teacher might include a variety of separate activities (e.g. on vocabulary, speaking, listening, grammar, etc.) linked only by the fact that the umbrella topic remains the same.
The activities can often be done in a variety of orders without changing the overall success of the lesson. In some cases activities may be linked; for example, when the discussion in one activity uses vocabulary studied in a preceding activity. There may be a number of related or disparate aims in this lesson, rather than a single main objective.
c. Jungle Path
An alternative approach would be not to predict and prepare so much, but to create the lesson moment by moment in class, the teacher and learners working with whatever is happening in the room, responding to questions, problems and options as they come up, and finding new activities, materials and tasks in response to particular situations.
The starting point might be an activity or a piece of material, but what comes out of it will remain unknown until it happens.
The essential difference between this lesson and the previous lesson types is that the teacher is working more with the people in the room than with her material or her plan.
d. Rag Bag
This lesson is made up of a number of unconnected activities. For example: a chat at the start of the lesson, followed by a vocabulary game, a pairwork speaking activity and a song. The variety in a lesson of this kind may often be appealing to students and teacher. There can, however, be a ‘bittiness’ about this approach that makes it unsatisfactory for long-term use.
There will be no overall language objective for the lesson (though there might be a group-building aim). Each separate activity might have its own aims.
3. Using a course book
A coursebook can be a good source of useful, exploitable material, and it will also sequence the activities. Sadly, not all coursebooks are equally helpful, but as a starting point I’d certainly recommend finding out whether your book is usable or not.
You do not necessarily need to be a slave to the book: you can adapt and vary the activities if you wish, and you can do them in a different order.
The coursebook writer is often a more experienced teacher than you, who knows something of the problems learners have, provides a useful syllabus for them to follow, and has devised a course to help them learn.
Coursebooks are written:
- To give less experienced teachers support, guidance and the control of a well-organized syllabus.
- To give more experienced teachers materials to work from.
When using a coursebook as a resource, you should:
- Select, choose what is appropriate for you and your students
- Reject, leave it out if it’s not appropriate
- Teach, the book is a resource to help and inform your work
- Exploit, find interesting ways to adapt or exploit the material
- Supplement, use teachers' recipe books, magazines, pictures, etc. |
There are usually three qualities that make up a sense of nationhood: a founding myth that explains the origins and purpose of the nation; a compelling narrative that tells a story about the core values that emanate from that myth and the heroes and heroines of that story; and finally, rituals that remind the people of that story, embodying the myth, its values, its heroes and heroines, and symbols.
Founding myths, accounting for the origins of a nation and explaining its destiny, usually are tribal (based on genealogy or blood) as with the Abraham story in the Old Testament (God told Abraham to create a new nation) or the tale of Theseus, the mythical Athenian king who defeated the horrible Minotaur of Crete and united twelve small independent states of Attica, making Athens the capital of the new state. The foundation of Rome was attributed to Romulus, son of Mars, the god of war, who, after having been raised with his brother Remus by a she-wolf, conquered the Sabines and built a new city on the Tiber River on the spot where their lives had been saved. Japan, according to its traditional founding myth, was established because a favorite descendant of the Sun Goddess created the Japanese islands and became the first Emperor, from whom all emperors are descended.
Early in its history, spokesmen for the new American nation explained that the U.S. was created as a nation in which individual liberty, opportunity, and reward for individual achievement would prosper. This powerful new myth provided an ideological rationalization for the selfish interest early settlers had in recruiting European immigrants to claim the land, fight Indians, and later to work in the mines and factories. It became the founding myth of a new political culture, uniting white Americans from different religious and national backgrounds, and later others who were not white, in a sense of shared American nationhood. Belief in the myth motivated Americans to create new political institutions and practices that Alexis de Tocqueville, an early nineteenth century French visitor, saw as encouraging a patriotism that grew "by the exercise of civil rights." What he called the "patriotism of a republic" was based on the premise that it is possible to interest men and women in the welfare of their country by making them participants in its government and by so doing to enlist their enthusiastic loyalty to a national community. Here, the feeling of "we-ness" that is usually based on similar physical characteristics, language, or religion (or a combination of them) was replaced by a belief in the myth itself and the values, heroes and heroines, symbols and rituals revolving around the idea of freedom. It was this civic culture that replaced a more tribal culture as a basis of nationhood and appeared to teach men and women to work together with those "who otherwise would have always lived apart."
The myth and its values were embodied in a narrative, beginning with the American Revolution. The narrative tells of the struggle to enlarge freedom for an increasing number of persons regardless of their national origin, color, or religion. It recalls the Civil War, the Second World War against Nazism, the civil rights revolution of the 1960s and 1970s, the continuing immigrant saga, and the successful defense of freedom and democracy against the Soviet Union and its totalitarian system of government.
This American story about the struggle for freedom-freedom sought, freedom thwarted, freedom won, and freedom enlarged-includes oppression, exploitation, and terrible harm against those whom the insiders thought of as outsiders. But the main story line is clear: a long-term expansion of freedom-a continual rebirth of freedom (to use Lincoln's words), although often interrupted. That sense of continuing renewal is reflected in part in the slogans of the twentieth century activist presidents, such as "The New Freedom" (Wilson), "The New Deal" (Franklin D. Roosevelt), "The Fair Deal" (Truman), "The New Frontier" (Kennedy), "The Great Society" (Johnson), and "The New Beginning" (Reagan). The American story relies heavily on texts that are often treated as though they are sacred: the Declaration of Independence; the Constitution, and especially its Bill of Rights and Fourteenth Amendment; Lincoln's Gettysburg Address and Second Inaugural Address; and Martin Luther King's "I have a dream" speech.
The preamble to the Constitution calls upon Americans to "form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity. . . ." More than any other nation, Americans have emphasized liberty as their central value. Liberty was grounded in what they called the equality of every person under God, a belief asserted in the Declaration of Independence:
We hold these truths to be self-evident, that all men [and women] are created equal, that they are endowed by their Creator with certain unalienable rights, that among these are Life, Liberty, and the Pursuit of Happiness.
By emphasizing equal rights in a nation that authorized slavery, the founders introduced a profound moral ambiguity that some argue has not yet been entirely resolved. The idea of equality became as compelling as that of liberty in American political discourse, but its meaning is less clear. Liberty meant freedom from government interference to the founders and still retains essentially that meaning. But what does equality mean? Does it mean social equality, equality under the laws, equality of opportunity, equality of condition-or some combination of them? Equality, in all its many meanings, took on great significance in the rhetoric of public life and social discourse and even in family life in America.
The American narrative is also about the tension between the values of liberty and equality. Much of the debate in American domestic politics is about society's obligation to promote equality of opportunity for those who are born to inherently unequal conditions. Those who are suspicious about public policies that attempt to do that point out that equality of opportunity implies the opportunity to compete with and rise above others and to be rewarded for one's successful efforts. The opposite view holds that without government intervention, equality of opportunity can have little meaning for those born to economically or otherwise disadvantaged circumstances. However much Americans differ in the debate as to how they should promote equality of opportunity, most of them deeply cherish equality of opportunity as an American value.
In recent years, the political debate has focused increasingly on the role of government in providing equality of opportunity through policies that give particular recognition to the claims of members of groups who have suffered (and many say continue to suffer) restricted opportunity because they are women, African-Americans, or members of other designated minority groups. The very fact that disadvantage is presumed to be a condition that inheres in membership in such groups regardless of one's individual circumstances (health, wealth, education of parents, family situation) has given new meaning to the question of the relationship of the pluribus to the unum. Group membership has become an increasingly powerful way of defining individual identity, and public policies now go beyond equality under the law in recognizing gender, color, and national origin as a basis for what is usually called affirmative action regardless of individual circumstances. In addition, some members of groups designated as disadvantaged claim a status of inherited victimization that deepens their sense of grievance against the political and economic system that they see as dominated by white males. As a result, much of the current debate about multiculturalism revolves around their insistence on having their separateness acknowledged and affirmed in public policy generally, including education. Some, including even white males, reject the idea of a common culture, even a common political culture, as long as members of certain groups cannot show aggregate (group) results in economic, educational, and political attainment equal to that of white males.
Others see in this view of American life a danger to the value of liberty itself. They ask what freedom means if not the freedom to assert one's individuality regardless of inherited group status. What does it mean unless one is free to cross group boundaries regardless of one's color or inherited religion or nationality? What, they ask, will happen under the strains of increasing diversity if those who live, work, and vote in this country begin to think of themselves first and foremost as members of separate groups and not as Americans?
My own view is more optimistic than these questions imply. I agree with those who believe that the tension between the claims of the pluribus and the requirements of the unum can be resolved in a nation that values both liberty and equality, but only if we pay at least as much attention to the requirements of the unum as to the histories, sensibilities, and claims of the pluribus.
I think that serious scholars of this subject of Americanization are in virtually unanimous agreement that civic virtue and good citizenship in the United States have nothing to do with race or ethnicity, despite the burgeoning claims of polemicists to the contrary. That conclusion is demonstrated overwhelmingly by the evidence during these past 250 years. To argue that whites or Anglo-Americans have been more devoted to principles of liberty and justice for all flies in the face of the facts.
American values are accessible to anyone, regardless of race, religion, or ancestral background, precisely because our most important principle is the equal protection of the laws for all, regardless of race, religion, or national origin. That is the genius of the system, which must be protected and nourished.
A robust idea of American citizenship depends on a widespread understanding and appreciation-and even celebration-of the American constitutional system, its symbols and rituals, its heroes and heroines.
I believe there are several things that can be done to encourage that robust sense of citizenship for native-born and naturalized citizens. Here are six of them:
What do I mean by a commitment to civic education? I believe the Secretary of Education should call a conference of state educational leaders to examine the possibility of developing a common core civic curriculum. I don't mean a curriculum just for one grade level in a civics course, although that could and should be a part of it. I mean that our schools should teach the essentials of American history and constitutional principles repeatedly at different grade levels in appropriate ways. I also mean that the Pledge of Allegiance should be recited and discussed. What does the goal of liberty and justice for all mean? What do we mean by majority rule and individual rights?
I mean putting the pictures of George Washington, Abraham Lincoln, and Franklin Delano Roosevelt back on the schoolroom walls and teaching how their roles in the Revolution, the Civil War, and World War II relate to the Declaration of Independence, Lincoln's Gettysburg Address, and FDR's Four Freedoms in the American search to expand the meaning of liberty and justice for all.
I mean a curriculum that nourishes civic virtue in action by including community service, as a number of schools now do. I mean a curriculum that encourages all Americans, not just those in the schools, to think about the meaning of Independence Day, Thanksgiving, Memorial Day, and Martin Luther King's birthday. And whatever happened to "I Am An American Day?" Let's bring it back. What a wonderful occasion for naturalization ceremonies, community service, and local competition for young essayists to say what being an American means to them.
Throughout most of American history, Independence Day, July 4th, was the most important national holiday. Different ethnic groups often combined their celebration of American independence and the values of American life in their own particular way. In the 1880s in Worcester, Massachusetts, the Ancient Order of the Hibernians, an Irish nationalist society, held a picnic on July 4th in which the exuberance of the Irish served both as a preservation of Irish customs and a defense of American freedoms. Independence Day in Worcester in the 1890s attracted a large proportion of the Swedish-American population, who began with services at one of eight Swedish Protestant churches and ended at the picnic with patriotic speeches, sometimes in Swedish. All the ethnic groups of Worcester used Independence Day as an opportunity to express their own ethnic identity even as they celebrated American freedoms.
As new immigrant-ethnic groups claimed an American identity for themselves, a great many Anglo-Saxon Protestants, particularly in New England, thought of Independence Day as their special holiday. When President Ulysses S. Grant and key members of his cabinet joined the centennial celebration of the beginning of the American revolution at Concord and Lexington, Massachusetts, on April 19, 1875, they listened to speeches made only by illustrious Anglo-Americans, including some of the great poets of New England. The master of ceremonies of the festive day in Lexington reminded the audience that all of the foreign heroes at Lexington and Concord had English names.
At that time, the Anglo-Americans, especially in New England, thought of themselves as the charter members of the Republic. Americans from other backgrounds were relative newcomers, and persons of color were still treated essentially as outsiders by those who held governmental and economic power, despite the Thirteenth, Fourteenth, and Fifteenth Amendments to the Constitution. A few years after the centennial, their position as outsiders would be more sharply defined. Blacks in the South would be relegated, for the most part, to a segregated rural working class of sharecroppers. Chinese laborers would be excluded from immigrating to the United States, and an Act was passed in 1887 by the Congress to break up Native American Indian lands and assimilate the Indians.
One hundred years later, the bicentennial of the Revolution was celebrated by emphasizing American diversity. That was also true of the centennial of the Statue of Liberty, the bicentennial of the Constitution, and the celebration of the restoration of Ellis Island. Ethnicity had become central to the American story-to the way Americans looked at themselves and presented themselves to the world. On the Mall in the nation's capital on July 4th, 1983, the National Symphony was conducted by a Russian refugee, Mstislav Rostropovich, who played the distinctive American music of Jewish-American composers George Gershwin and Aaron Copland. Arias from Porgy and Bess were sung by the great American black singer, Leontyne Price. "A Lincoln Portrait" was narrated by the black American baseball player Willie Stargell. Newcomer Americans from at least two dozen Asian, African, and Latin American countries tapped and drummed to "The Stars and Stripes Forever," composed by the Portuguese-American, John Philip Sousa.
The Fourth of July emphasizes national independence and personal freedom. Thanksgiving, which has become the other major holiday of America's civic culture, also offers an opportunity for celebrating e pluribus unum. The theme of Thanksgiving, proclaimed by George Washington in 1789 and again by President Lincoln in 1863 as a national holiday memorializing the 1621 feast of thanks given by Pilgrims (who did not call themselves Pilgrims or wear tall hats or black suits with wide collars or eat turkey) at Plymouth, still retained a more or less religious appreciation of the benefits of freedom and opportunity in the U.S. But in the 1970s and 1980s, Thanksgiving became another occasion for the celebration of ethnic diversity. In 1976, The New York Times told of an Italian family who ate a Thanksgiving dinner as they imagined Columbus might have had. A Russian-American family featured a Russian dessert made from cranberries; a Chinese-American family ate Peking duck instead of turkey; and an Austrian-American family feasted on braised turkey and white beans. A Boston Globe story in 1985 told of Cambodians, Vietnamese, and Laotians celebrating Thanksgiving at a feast sponsored by the Jewish Vocational Service. There, the refugee families ate fiery nuk chau sauce and cha gio egg rolls along with their roast turkey, cranberry sauce, and pumpkin pie.
By 1986, a writer for The New York Times concluded that "as an American holiday, Thanksgiving's universality must lie in its ability to welcome succeeding generations of immigrants to these shores." She wrote of Haitians, Barbadians, Jamaicans, Panamanians, and Trinidadians sitting down with family members for dinners that merged the culinary traditions of their homeland cultures with those of the more traditional Thanksgiving. "For me," reported one second-generation American of West Indian background, "Thanksgiving is a mixing of the black-American traditions with the Caribbean."
A civic education curriculum does not denigrate ethnic and religious diversity in the United States. Far from it. It honors it and even celebrates it. But it would not permit, as now occurs in many universities and some high schools, the encouragement of ethnocentrism in the name of multiculturalism.
I will skip a discussion of my second point-the question of group rights-as I have written about it extensively elsewhere. There has been a tendency in American public discourse to speak of group rights as though they were civil rights. Civil rights apply to individuals. We have no place in our constitutional system for group rights, except for Native American Indians and possibly ethnic Hawaiians, Aleuts, and Eskimos.
The importance of English seems self-evident. The more linguistically capable Americans are, the better. But English is a must for anyone to participate substantially in the national political community or to enter the competition for opportunities in a vast continental and global economy. English is an important sign of national identity. My immigrant, orphan, illiterate grandmother could not write English or any other language until the day she died, and she was a magnificent human being who raised eight dedicated, patriotic Americans. But her limited knowledge of English restricted her chances-she never held any job except that of maid-and cut her off from many aspects of American life. We need a national volunteer effort not just to teach children English, as called for by President Clinton, but also to expand English teaching resources for adult immigrants and refugees.
The next recommendation-improving the naturalization test-is one I have not written or testified about before. The present naturalization oath includes archaic language that takes away from its meaning. It reads:
I hereby declare, on oath, that I absolutely and entirely renounce and abjure all allegiance and fidelity to any foreign prince, potentate, state, or sovereignty, of whom or which I have heretofore been a subject or citizen; that I will support and defend the Constitution and laws of the United States of America against all enemies, foreign and domestic; that I will bear true faith and allegiance to the same; that I will bear arms on behalf of the United States when required by the law; that I will perform noncombatant service in the armed forces of the United States when required by the law; that I will perform work of national importance under civilian direction when required by the law; and that I take this obligation freely without any mental reservation or purpose of evasion; so help me God.
It is amazing that the oath has held up as long as it has. But surely we can do something about such archaic language as "abjure" and "fidelity to any foreign prince, potentate." One possibility would be:
I, (name), take this solemn oath (or "make this solemn affirmation) freely and without mental reservation or purpose of evasion. My allegiance is to the United States of America above any other nation. I promise to support and honor the Constitution and laws of my new country and their principles of liberty and justice for all. I pledge to defend them by force of arms, noncombatant military service, or civilian work of national importance if necessary.
One might suspect that I am in favor of making naturalization easier by recommending a change in language. That is not the case. I want the naturalization oath to be understood. I am concerned about any tendency to reduce further the civic education and English language requirements for naturalization. There is considerable evidence that the naturalization ceremony, when done with dignity, stimulates feelings of patriotic loyalty for newcomer citizens around basic values of liberty and equality of opportunity. Newspaper accounts of naturalization swearing-in ceremonies repeatedly tell of the enthusiasm with which these new citizens embrace them. A Russian-Jewish refugee from Kiev, who was one of ninety-seven immigrants from twenty-eight countries sworn in as U.S. citizens at the Monticello home of Thomas Jefferson, told a television reporter after his naturalization ceremony: "I believe the most important thing that brought me to this country is the dream about the future of my kids, to grow them in a free country, to be independent, to be whatever they want. . . . The United States. My land of opportunity." Another immigrant, this one from Vietnam, told a reporter after the same ceremony: ". . . this is the best place . . . this is the best opportunity, in America." In El Paso, Texas, a new Trinidadian-American said: "We feel there are more opportunities here for us and our family . . ."
Freedom to work and make a living is one of the inducements to become an American citizen. But the speeches made by those who preside over naturalization ceremonies usually stress the importance of other freedoms. At many ceremonies, naturalized citizens are given a copy of the Bill of Rights. And many new citizens respond. A newly naturalized Cuban-American told reporters that becoming an American is the greatest thing "because here we can have what everyone should have, and that means our human rights. In short, freedom."
We need to mobilize volunteer resources to support the naturalization work of the INS. I believe that, with presidential leadership and the cooperation of governors, mayors, civic and service organizations, universities, corporations, and labor unions, we can process naturalization expeditiously without demeaning its significance. Panels of distinguished Americans from various walks of life can be enlisted as accredited volunteers to participate in managing naturalization ceremonies.
I also believe we should consider requiring a variety of standardized written civics and history tests in English for passing the naturalization exam. This would cut down the time used in oral interviews and elevate the significance of passing the exam by making standards more uniform. Exceptions could be made for compassionate reasons, as they are now.
I will conclude by saying that if people living in poverty had heard my remarks up to this point, they would be likely to think them utterly irrelevant to their own lives. Jacob Riis examined the relationship of civic virtue and citizenship to poverty in 1902 in his book The Battle with the Slum. He wrote that where the slum flourishes unchallenged in the cities, "citizen virtue," as he called it, is starved. It is not enough, he wrote, to repeat that all men are created equal.
So let us remember that citizenship does not flourish in mean streets where unemployment, drive-by shootings and crack cocaine are widespread. Nor is civic virtue helped by a hostile reception to immigrants. It does nothing to cultivate a robust ideal of citizenship to categorically deny safety net welfare benefits to legal immigrants who need them through no fault of their own or of their sponsors. Nor will civic virtue be promoted by denial of a public school education to the children of illegal aliens or by the modification of birthright citizenship.
Why do we care so much about citizenship in the U.S.? I think it is because we were the first nation to say that citizenship is not a question of complying with the wishes of the sovereign or a matter of blood. It is entirely voluntary. No government can force it on you or take it away unless you lied to get it. It is a matter of our free will. That revolutionary idea is at the heart of our experiment in self-government. We believe that ordinary women and men, regardless of their ancestry, can make a democratic republic work. This is not just an abstract issue: too much blood has been spilled in order to make this idea a reality for everyone born in this country, regardless of race, ancestry, religion, or economic circumstances.
Some of my friends are extremely worried about the fact that our constitutional system permits dual citizenship. I urge them to keep in mind that loyalty cannot be compelled. The loyalty of subjects may be compelled, but not that of totally free citizens. The power to win loyalty in this culture of voluntary citizenship has been demonstrated many times in American history. Witness the extraordinary record of Japanese-Americans in the 442nd Regimental Combat Team in World War II. Note the story of Sergeant Jimmy Lopez, one of the American hostages held by Iran in 1980, who wrote on the wall where he was imprisoned: "Viva el rojo, blanco e azul!" (Long live the red, white, and blue). Tell your grandchildren the story of Guy Gabaldón, a Mexican-American who won the Silver Star in the Second World War. Raised in East Los Angeles by a Japanese-American family who taught him to speak Japanese fluently, he won the medal for persuading one thousand Japanese soldiers to surrender during the battle for the island of Saipan.
These stories illustrate the strength of our civic culture. But they do not mean we can be complacent. The civic culture must be nourished. Attention must be paid. And Senator Simpson, you should be congratulated for doing just that.
Review of Chapter 17 - China Develops a New Economy
Changes in agriculture were a major reason for China's economic growth. During this period China saw a large increase in rice production as well as new and better farming methods.
Reasons for Agriculture Changes
The movement of farmers to the fertile basin of the Chang Jiang river in southern China. The climate was warm and wet, ideal for cultivating rice plants, which needed a lot of water.
Population in Southern China in 1207
About 65 million people lived in this area.
Population in Northern China in 1207
About 50 million people lived in this area.
Northern China was the wealthiest and most populous part of the country . . . wars and attacks by Mongolia drove many landowners to move south.
a farm tool used to break up and even out plowed ground
a pump with containers attached to a loop of chain to lift water and carry it where it is wanted.
Characteristics of the New Agriculture
Small farms covered all the suitable land. Terraced hillsides spread across the land. Rice grew on the terraces in flooded fields called paddies. Elaborate irrigation systems crisscrossed the paddies, bringing water where it was needed.
Harvest time of rice during the 11th century
a new type of rice resistant to drought matured in two months.
13th century Agriculture additions
Peasant farmers also grew tea, cotton, and sugar. They also grew mulberry trees to feed silkworms
Importance of tea
Tea was used mainly for medicine; by the ninth century tea was the national drink. Later it became a social custom and teahouses became popular. The demand grew larger.
Results of Agriculture Changes
Growing rice increased food production. The population grew to over 100 million people. Peasants took interest in other business opportunities, making silk, cotton cloth, and other products to sell or trade. Commerce increased as people bought and sold these goods along with luxury items.
Growth of Trade and Commerce
Tang emperors eased restrictions on merchants and actively promoted trade. Products like rice, silk, tea, jade and porcelain travelled to many Asian countries. Under the Song, businesses grew even more.
a long boat with a flat bottom
the form of money used in a country. Copper coins were used at first; in the 11th century, paper money began to be produced in large quantities.
Reasons for Growth in Trade and Commerce
Wealthy landowners were eager to buy luxuries, encouraging the demand for silk and other goods. Commerce was helped by water transportation across rivers and the Grand Canal, which was cheaper than road transportation. The introduction of paper currency increased commerce.
Two large surveys of the human genome indicate that it may be much harder than scientists once thought to map out all the genetic mutations that underpin common human diseases, complicating the potential development of personalized, gene-based treatments.
The studies were from GlaxoSmithKline and the University of Washington in Seattle and were both published in the journal Science. The two reports showed that there are many different ‘rare’ mutations in the human genome associated with diseases like cancer, coronary artery disease, Alzheimer’s disease and schizophrenia.
The study from UW looked at 202 genes in 14,002 patients. There are approximately 3 billion base pairs in the entire human genome, and the scientists studied 864,000 of them.
"Our results suggest there are many, many places in the genome where one individual, or a few individuals, have something different," study senior author John Novembre, an assistant professor of ecology and evolutionary biology and of bioinformatics at UCLA, wrote in a press release. "Overall, it is surprisingly common that there is a rare variant in the population.”
The scientists found there was one genetic variant for every 17 bases – a dramatically higher rate than they expected, Novembre said.
The majority of the time, only one person had a specific genetic variant, while the 14,001 others did not, meaning further research into each variant would likely be costly and require a very large population of people.
"We saw lots of that," Novembre wrote. "We discovered there are many places in these 202 genes where there is variation and only a few individuals differ from the whole group, or only one differs. We also see evidence that a substantial fraction of these rare genetic variants appear to be deleterious in a long-term evolutionary sense and might impact disease."
The GlaxoSmithKline researchers who conducted the other study agreed with the UW researchers in attributing the large number of variants to human population growth.
"Because the human population has grown so much, the opportunity for mutations to occur has also grown,” Novembre wrote. “Some of the variants we are seeing are very young, dating to population growth since the invention of agriculture…the growth has created many opportunities for mutation in the genome because there are so many transmissions of chromosomes from parent to child in large populations." |
By Dean Nernberg
Many horticultural varieties of native plants have been developed in recent years. However, these varieties, which are commonly sold in nurseries and greenhouses, have been greatly selected or genetically altered. If you want pure native stock, it is best to seek information from local gardening or native plant groups to locate reputable sources of seed or plants. Digging up and removing plants from the wild can deplete their natural populations and, for some species in certain areas, it is illegal.
Adventurous types might want to consider growing wildflowers from seed. Some species grow quite easily with this method, which can produce a lot of plants for less money. The trade-off is that it takes at least two years for plants to develop and flower; for some species, it can take several years.
It is important to remember that plants grown from seeds usually stay very small the first year. During that first growing season, plants are putting most of their growth into roots, so they can withstand drought. If you get 25 to 50 mm (1 to 2 in.) of growth the first year, consider yourself lucky.
Most species can be seeded in the spring, summer or fall, and will grow at the first opportunity. However, some species may need to be planted in the fall, allowing seeds to over-winter in the ground. If these species are planted in the spring, they usually won’t germinate until the following spring. Other varieties, like spring-flowering prairie crocuses and three-flowered avens, should be seeded as soon as their seed has ripened and is ready to fall or blow away; these plants germinate very well at this time. If stored until fall or the following spring, some seeds might take a year or more to germinate.
Regardless of timing, it is important to control fast-growing weeds that can quickly shade and out-compete native seedlings. Pulling the weeds is an option; however, if they are large and well rooted, it might be best to simply clip them close to the ground level. Yanking out a well-established root might dislodge and kill the surrounding small native wildflowers you are trying to establish.
Serious native plant enthusiasts avoid ‘shake in a can’ or other commercially packaged wildflower mixes. In most cases, these seeds are not native to their region and are quite weed-like in nature. In fact, most come from California or another U.S. state and usually don’t perform well. Many gardeners with visions of wildflower meadows dancing through their heads scatter these types of seeds expecting miracles; in most cases, they are simply left with weed-infested yards.
This is 4 of 5 in A Guide to Gardening with Native Plants
Physically, the visual cortex is at the back of the brain in the occipital lobe.
David Hubel and Torsten Wiesel did research on the visual cortex for many years. They won the 1981 Nobel Prize in Physiology or Medicine for their discoveries about information processing in the visual system.
- Their work in the 1960s and 1970s on how the visual system developed. They worked on parts of the visual cortex of the brain which get signals from the right or left eye.
- Their work describing how signals from the eye are processed by the brain to generate edge detectors, motion detectors, stereoscopic depth detectors and colour detectors. These are building blocks of the visual scene.
Primary visual cortex
Research on the primary visual cortex can involve recording action potentials from electrodes within the brain of cats, ferrets, rats, mice, or monkeys. Alternatively, signals can be recorded outside the animal by EEG, MEG, or fMRI. These techniques gather information without invading the brain.
- Part 1 - Installation, Interface, Symbols, Remote/Local Debugging, Help, Modules, and Registers
- Part 2 - Breakpoints
- Part 3 - Inspecting Memory, Stepping Through Programs, and General Tips and Tricks
Breakpoints
Breakpoints are markers associated with a particular memory address that tell the CPU to pause the program. Because programs can contain millions of assembly instructions, manually stepping through each of those instructions would take an incredibly long time. Breakpoints help speed up debugging time by allowing you to set a marker at a specific function, which allows the CPU to automatically execute all the code leading up to that point. Once the breakpoint is reached, the program is paused and the debugging can commence.
Breakpoints can be set in software and within the CPU (hardware); let's take a look at both:
Software Breakpoints
Programs get loaded into memory and executed, which allows us to temporarily modify the memory associated with a program without affecting the actual executable. This is how software breakpoints work. The debugger records the assembly instruction where the breakpoint should be inserted, then silently replaces it with an INT 3 assembly instruction (0xcc) that tells the CPU to pause execution. When the breakpoint is reached, the debugger looks at the current memory address, fetches the recorded instruction, and presents it to the user. To the user it appears that the program paused on that instruction; however, the CPU actually had no idea it ever existed.
Software breakpoints are set within WinDBG using the bp (Break Point) command, which is arguably the most used breakpoint command. In its most basic use, its only argument is the address at which a breakpoint should be set:
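For illustration, a hypothetical invocation looks like this (the address is just a placeholder):

0:000> bp 00523689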
With bp, the address should be a memory location where executable code exists. While bp works on locations where data is stored, it can cause issues since the debugger is overwriting the data at that address. To be safe, Microsoft suggests that if you want to break on a memory location where data is stored, you should use a different breakpoint command (ba, discussed below).
Let's take a look at setting a software breakpoint. Here we'll launch notepad.exe with WinDBG. By default, when the program is launched with WinDBG, it will insert a breakpoint before the entry point of the program is executed and pause the program. First we'll get the location in memory where notepad.exe was loaded:
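One way to do this (a sketch, not the original output) is with the lm command filtered to the notepad module; the start column gives the load address, which will differ between runs:

0:000> lm m notepad
start    end        module name
00520000 00560000   notepad    (deferred)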
Next we'll determine the program's entry point by using !dh with the image load address:
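A sketch of how that might look; !dh dumps the image headers, and the optional header includes the entry point RVA (the values here are illustrative and most of the output is omitted):

0:000> !dh -f 00520000
...
    3689 address of entry point
...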
Now we'll set a breakpoint at its entry point (load address + 0x3689):
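Something along these lines, assuming the (hypothetical) load address 00520000 from above; WinDBG treats the numbers as hex, so this resolves to 00523689:

0:000> bp 00520000+3689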
Finally we'll tell the program to run until it encounters a breakpoint using the g command (more on this later). When the breakpoint is hit, we'll get a notice:
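The notice looks roughly like this (a sketch of typical WinDBG output, with the register dump and disassembly details omitted):

0:000> g
Breakpoint 0 hit
notepad!WinMainCRTStartup:
00523689 ...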
Most of your debugging will likely use software breakpoints, however there are certain scenarios (read-only memory locations, breaking on data access, etc.) where you need to use hardware breakpoints.
Hardware Breakpoints
Within most CPUs there are special debug registers that can be used to store the addresses of breakpoints and specific conditions on which the breakpoint should be triggered (e.g. read, write, execute). Breakpoints stored here are called hardware (or processor) breakpoints. There is a very finite number of registers (usually 4), which limits the number of total hardware breakpoints that can be set. When the CPU reaches a memory address defined within the debug register and the access conditions are met, the program will pause execution.
Hardware breakpoints are set within WinDBG using the ba (Break on Access) command. In its most basic usage, it takes 3 attributes:
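In outline, the three attributes are the access type, the size, and the address:

0:000> ba <access> <size> <address>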
This command would (we'll see soon why it doesn't) accomplish the same thing as the previous bp example; however, now we're setting a hardware breakpoint. The first argument, e, is the type of memory access to break on (execute), while the second is the size (always 1 for execute access). The final argument is the address. Let's take a look at setting a hardware breakpoint; keep in mind our load addresses are different because of the whole ASLR thing.
Due to the way Windows resets thread contexts and the place where WinDBG breaks after spawning a process, we won't be able to set a breakpoint in the same way we did in our earlier example. Previously we set our breakpoint on the program's entry point; however, if we try to do that with WinDBG we get an error:
So in order to get around this, we'll need to use that g command and tell it to run the program until it reaches a specific memory address. This is sort of like setting a software breakpoint in behavior but isn't exactly the same. So we'll tell WinDBG to execute until we enter the program's initial thread context, which will then allow us to set hardware breakpoints.
Now we can set our hardware breakpoint:
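A sketch of what that might look like, with a hypothetical entry-point address for this run:

0:000> ba e 1 00dd3689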
To confirm we actually set the breakpoint in the CPU's registers, we can use the r command (discussed later). We'll use the M attribute to apply a register mask so that only the debug registers are displayed:
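If the mask value is uncertain on your build, an equivalent (and simpler) sketch is to name the debug registers explicitly:

0:000> r dr0,dr1,dr2,dr3,dr6,dr7
dr0=00000000 dr1=00000000 dr2=00000000
dr3=00000000 dr6=00000000 dr7=00000000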
You'll notice something doesn't look right here: all of the registers contain 0! This is because WinDBG hasn't actually set them yet. You can single step (discussed below) with the p command. Once we do, the dr0 register will have our breakpoint defined:
In this specific example, we probably will never hit our breakpoint because it is in the entry point of the program that we've already reached. However if our breakpoint was on a function that was called a variety of times in the life of the program, or on a memory address where an often used variable was stored, we'd get a "Breakpoint Hit" message when the memory was accessed just as we would with a software breakpoint.
Common Commands
Now that you have the basics of setting breakpoints, there are a handful of other breakpoint related commands that will be useful. Let's look at a couple:
Viewing Set Breakpoints
To view each of the breakpoints that have been set, you can use the bl (Breakpoint List) command.
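Reconstructed from the column breakdown that follows, the bl output for our breakpoint would look roughly like this (the address is specific to this run):

0:000> bl
 0 e 00523689     e 1 0001 (0001)  0:**** notepad!WinMainCRTStartup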
Here we have one breakpoint defined; the entry is broken into a few columns:
- 0 - Breakpoint ID
- e - Breakpoint Status - Can be e (enabled) or d (disabled)
- 00523689 - Memory Address
- e 1 - Memory address access flags (execute) and size - For hardware breakpoints only
- 0001 (0001) - Number of times the breakpoint is hit until it becomes active, with the total passes in parentheses (this is for a special use case)
- 0:**** - Thread and process information; this indicates it is not a thread-specific breakpoint
- notepad!WinMainCRTStartup - The corresponding module and function offset associated with the memory address
Deleting Breakpoints
To remove a breakpoint, use the bc (Breakpoint Clear) command. The only attribute to bc is the Breakpoint ID (learned from bl). Optionally you can provide * to delete all breakpoints.
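For example (a sketch; the ID comes from the bl listing, and * clears everything):

0:000> bc 0
0:000> bc *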
Breakpoint Tips
There are a couple of simple tips that I commonly use when setting breakpoints. Here are a few of them, please share any you have in the comments below!
Calculated Addresses
The simplest breakpoint tip is just something that you'll learn when dealing with memory addresses within WinDBG: you can have WinDBG evaluate expressions to calculate addresses. For instance, in the above examples, we knew the module load address of notepad.exe and that the entry point was at offset 0x3689. Rather than calculating that address ourselves, we can have WinDBG do it for us:
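For instance, using the load address from this run (WinDBG evaluates the expression for us):

0:000> bp 00770000+0x3689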
Name and Offset Addresses
One of the great things about Symbols (covered in part 1 of this post) is that they give us the locations of known functions. So we can use the offsets to those known functions as addresses in our breakpoints. To figure out the offset, we can use the u (Unassemble) command within WinDBG. u will take a memory address, interpret the data at that memory address as assembly, and display the corresponding mnemonics. As part of its output, u will also provide the offset to the nearest symbol:
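A sketch of the command (the L2 range argument limits the output to two instructions; the actual disassembly is omitted here):

0:000> u 00770000+3689 L2
notepad!WinMainCRTStartup:
00773689 ...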
Now we know that notepad!WinMainCRTStartup is a friendly name for 00770000 + 3689. Since there isn't a numeric offset at the end of this friendly name, we can also infer that Symbols exist for this function. Look what happens when we check out the second instruction in this function:
This time we got a function name, notepad!_initterm_e, plus an offset (+0x61). I'm not entirely sure why WinDBG gave the offset relative to notepad!_initterm_e rather than notepad!WinMainCRTStartup - probably a symbol search order thing. Nonetheless, we could have used a notepad!WinMainCRTStartup offset to reference the same location:
The point is that now we can use this offset as a breakpoint and those offsets are always valid even if ASLR is enabled - so we don't have to waste time calculating addresses at every launch.
Breaking On Module Load
There may be some occasions when you'd like to set a breakpoint when a module is being loaded. Unfortunately, there doesn't appear to be an obvious way within the standard breakpoint commands to do this (let me know if you know of a way in the comments). Instead, a sort of "hacky" way to do this is by defining that an exception be raised when a particular module is loaded using the sxe command:
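A sketch of that command, using the module named in the explanation below (this is the syntax as I recall it; check the WinDBG help for sx if your version differs):

0:000> sxe ld:IMM32.DLL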
Here we've set up a first chance exception (sxe) when a module is loaded (ld) and defined IMM32.DLL as the specific module which triggers the exception.
We can use sx (Set Exceptions) to view the configured exceptions. If we look under the Load Module list, we'll see that we have a break on IMM32.DLL:
To clear it we can use the sxi (Set Exception Ignore) command:
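A sketch; sxi switches the load-module event back to being ignored (depending on how the exception was set, you may need to include the module name again):

0:000> sxi ld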
Executing Commands
There may be certain commands that we execute every time a breakpoint is reached. For instance, say we're always interested in what values are on the stack. We can automate this with WinDBG by building a list of commands and appending it to our breakpoint. In our example, we'll print out some information and use the dd command (discussed later) to show the stack. Notice how our command is referenced in the bl output as well:
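A hypothetical version of such a breakpoint; note the escaped inner quotes and the semi-colon separating the chained commands:

0:000> bp notepad!WinMainCRTStartup ".echo \"Here are the values on the stack\"; dd esp"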
Let's see what happens when we hit our breakpoint:
As expected, the commands were executed, showing the "Here are the values on the stack" message and the stack. Commands are chained together with a semi-colon, and be sure to escape quotes within the outer-most quotes that contain the entire command. You can even append the g command to have the commands be executed and the program just continue. This allows you to inspect the state of the program as it runs rather than manually interrupting it every time a breakpoint is hit.
Giant pandas are becoming extinct due to extensive habitat loss and destruction by hunters. Habitat loss destroys bamboo, which is the giant panda's sole food source. It also isolates pandas, causing a reduction in the rate that mating and reproduction occurs. Poachers kill only a few pandas each year, but hunters of other animals in the area accidentally kill pandas on a more regular basis, further reducing their numbers.
All wild giant pandas live in the Yangtze River basin in China. Rapid industrialization in China has destroyed much of the forest in this area. Each giant panda eats about 28 pounds of bamboo a day, and a dwindling supply of bamboo due to fewer bamboo forests leads to malnourishment and fewer pandas.
Pandas are solitary animals each living in their own territory. Individual pandas only meet up briefly during the spring for mating. Growing numbers of roads and miles of railway track increasingly prevent pandas from finding suitable mates during the brief period when females are fertile.
Chinese laws provide some protection for the endangered giant panda population. Preserves provide protection for some of the population, and anti-poaching laws are strictly enforced. Many pandas live in zoos, and captive breeding programs have become increasingly successful.
How To Be A Spy Kid - Lesson 2 of 3
Communication and Collaboration
Students use digital media and environments to communicate and work collaboratively, including at a distance, to support individual learning and contribute to the learning of others.
Interact, collaborate, and publish with peers, experts, or others employing a variety of digital environments and media.
Does being smart include the ability to observe your surroundings and recall details with a short or long term memory? (Be prepared to defend your answer!)
Developed to complement the Middle/High School teaching guide, this student study guide was created as reproducible support for extension and self-directed study of A History of US: The New Nation. Every chapter is covered by a lesson, which includes activities to reinforce the following areas: access, vocabulary, map skills, comprehension, critical thinking, working with primary sources and further writing. The student study guide contains reproducible maps and explanations of graphic organizers, as well as suggestions on how to do research and special projects.
About the Series: Master storyteller Joy Hakim has excited millions of young minds with the great drama of American history in her award-winning series A History of US. Recommended by the Common Core State Standards for English Language Arts and Literacy as an exemplary informational text, A History of US weaves together exciting stories that bring American history to life. Hailed by reviewers, historians, educators, and parents for its exciting, thought-provoking narrative, the books have been recognized as a break-through tool in teaching history and critical reading skills to young people. In ten books that span from Prehistory to the 21st century, young people will never think of American history as boring again.
C14 dating archaeology
Thus, BP means years before A.D. 1950, the conventional "present" used in radiocarbon dating.
What is radiocarbon dating?
Many laboratories adopted this method, which produced a gelatin presumed to consist mainly of collagen. This assumption is now known to be incorrect, meaning that radiocarbon years are not equivalent to calendar years. When the fuels are burned, their carbon is released into the atmosphere as carbon dioxide and certain other compounds.
During the lifetime of an organism, the amount of c14 in the tissues remains at an equilibrium since the loss through radioactive decay is balanced by the gain through uptake via photosynthesis or consumption of organically fixed carbon.
These so-called "solid-carbon" dates were soon found to yield ages somewhat younger than expected, and there were many other technical problems associated with sample preparation and the operation of the counters.
Since carbon is fundamental to life, occurring along with hydrogen in all organic compounds, the detection of such an isotope might form the basis for a method to establish the age of ancient materials. Working with several collaborators, Libby established the natural occurrence of radiocarbon by detecting its radioactivity in methane from the Baltimore sewer.
What is radiocarbon?
Some laboratories impose a minimum value on their error terms. It works very well for the last 30,000 years, but becomes more and more inaccurate for older samples.
Corrections for isotopic fractionation in commonly dated materials are summarized below. They are most likely to err on the young side, but it is not possible to predict their reliability. However, their association with cultural features such as house remains or fireplaces may make organic substances such as charcoal and bone suitable choices for radiocarbon dating. Indeed, it was believed, apparently by analogy with elemental charcoal, that bone was suitable for radiocarbon dating "when heavily charred" (Rainey and Ralph). Berger, Horney, and Libby published a method of extracting the organic carbon from bone.
This means that half of the c14 has decayed by the time an organism has been dead for about 5,730 years, and half of the remainder has decayed by about 11,460 years after death, etc. And there is a last important drawback.
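As a rough worked example of the decay described above (this assumes the modern half-life value of about 5,730 years; as noted below, laboratories still report ages using Libby's original figure), the fraction of c14 remaining after t years is:

N(t) = N0 × (1/2)^(t / 5730)

so roughly 50% remains after 5,730 years, 25% after 11,460 years, and 12.5% after 17,190 years.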
Radio Carbon Dating
Yet another change occurs in carnivores whose bone collagen is enriched by an additional 1 part per mil. The standards offer a basis for interpreting the radioactivity of the unknown sample, but there is always a degree of uncertainty in any measurement.
Grasses that are adapted to arid regions, such as buffalo grass (Bouteloua) and maize (Zea), are known as C4 plants, because they create a molecule with four carbon atoms using the Hatch-Slack cycle.
However, to avoid confusion all radiocarbon laboratories continue to use the half-life calculated by Libby, sometimes rounding it to 5,570 years. It is believed that all organisms discriminate against c14 about twice as much as against c13, and the ratio between the stable c13 and c12 atoms can be used to correct for the initial depletion of c14. Many sites in Arctic Canada contain charcoal derived from driftwood that was collected by ancient people and used for fuel.
Very few laboratories are able to measure ages of more than 40,000 years. Every laboratory must factor out background radiation that varies geographically and through time.
C14 dating is a very useful dating method with some important drawbacks. For example, to demonstrate a secure association between bones and artifacts is often easier than to demonstrate a definite link between charcoal and artifacts.
Bases may be used to remove contaminating humic acids. Most flowering plants, trees, shrubs and temperate zone grasses are known as C3 plants, because they create a molecule with three carbon atoms using the Calvin-Benson photosynthetic cycle.
Targets tuned to different atomic weights count the number of c12, c13, and c14 atoms in a sample. Most laboratories consider only the counting statistics. Pre-treatment seeks to remove from the sample any contaminating carbon that could yield an inaccurate date.
Medical and social aspects play into our society's treatment of aging individuals.
As people age, vision, hearing, and other sensory capacities gradually decline. Although each individual ages at a different rate, older persons typically experience some level of dimming eyesight, fading hearing, loss of memory, decreased comprehension rate, and physical impairment. These impairments can diminish older persons' abilities to perform daily tasks, decrease mobility, and can affect elders' communication with others. Statistics show that the elderly comprise a significant proportion of physiologically impaired Americans:
- Approximately 22 percent of Americans over 65 reported trouble seeing (even when wearing glasses)
- Roughly 31 percent of Americans over 65 indicated difficulty hearing (even when wearing hearing aids)
- Approximately 27 percent of adults over 60 have severe memory problems
- About 48 percent of Americans over 65 years of age have at least one chronic health condition.
The U.S. Census Bureau estimated that almost 85.3 million Americans were disabled in 2014, or about one of every four Americans. However, this rate may be skewed when considering the rates present in nursing facilities or other assisted living facilities (96.7 percent). This figure is expected to rise as the number of older Americans living longer lives increases. Most common forms of disability included difficulties crouching, standing, pushing/pulling, and walking.
Mental Health Issues
Some of the more common mental health issues impacting the elderly are depression, dementia, and substance abuse. Clinical depression in the elderly is common, affecting about six million Americans age 65 and older. Depression in the elderly tends to be undertreated, often because symptoms are frequently confused with the effects of multiple illnesses and the medications used to treat them.
Dementia is a decline in mental ability that affects memory, thinking, problem-solving, concentration, perception, and behavior. Some forms of dementia, such as Alzheimer's disease, are degenerative. People with dementia can become confused. Some people also become restless or display repetitive behavior and may also seem irritable, tearful, or agitated. People with dementia often need a longer time to make decisions, may need an advocate to speak on their behalf, and are likely to experience varying levels of mental functioning based on the day and time of day.
Substance abuse, including alcohol abuse and addiction to prescription medications, differentially impacts the older population. Benzodiazepines and narcotics are the two types of prescribed drugs most commonly abused by the elderly, while alcohol and over-the-counter sleep remedies are the two most commonly abused non-prescription drugs. Even regular use of some prescribed medication can be the source of mental health problems, including delirium, anxiety, and late-onset schizophrenia.
Key Medical Concepts
Alzheimer's disease (AD) - the most common form of dementia, a neurologic disease characterized by loss of mental ability severe enough to interfere with normal activities of daily living, lasting at least six months, and not present from birth. AD usually occurs in old age, and is marked by a decline in cognitive functions such as remembering, reasoning, and planning.
Chronic Disease – one lasting 3 months or more, by the definition of the U.S. National Center for Health Statistics. Chronic diseases generally cannot be prevented by vaccines or cured by medication, nor do they just disappear.
Clinical Depression – a mental disorder characterized by an all-encompassing low mood accompanied by low self-esteem, and loss of interest or pleasure in normally enjoyable activities. Symptoms last two weeks or more and are so severe that they interfere with daily living.
Dementia - a descriptive term for a collection of symptoms that can be caused by a number of disorders that affect the brain. People with dementia have significantly impaired intellectual functioning that interferes with normal activities and relationships. They also lose their ability to solve problems and maintain emotional control, and they may experience personality changes and behavioral problems such as agitation, delusions, and hallucinations. Dementia is diagnosed if two or more brain functions, such as memory, language skills, perception, or cognitive skills including reasoning and judgment, are significantly impaired without loss of consciousness.
Disability – The Americans with Disabilities Act (ADA) defines disability as "a physical or mental impairment that substantially limits one or more major life activities of such individual, a record of that impairment or being regarded as having such an impairment." Other definitions, such as those espoused by the United Nations, are more social in nature and distinguish between impairments, disabilities, and handicaps.
"Ageism" is a term used to describe stereotyping of and discrimination against persons based on their age. In American society, the term refers most commonly to negative attitudes about aging and the elderly. Media and marketing have been criticized for their promotion of a culture of youth and their unfavorable depiction of aging and the elderly. Studies have shown that widespread exposure to negative images of the elderly and media messages that marginalize elders can have harmful effects on the mental and physical health of older people. On the other hand, a recent study of happiness in the United States shows that aging tends to be associated with increased levels of happiness.
The potential for social isolation is another significant concern related to aging. Life course events such as retirement and the death of a spouse can reduce social connections, while physical impairments and reduced mobility can present barriers to social interaction as well as access to information and resources. These conditions can increase the vulnerability of older persons to abuse, neglect, and financial exploitation. Social integration and support networks therefore are considered to be critical to the health and well-being of older persons.
Past theories held that social isolation is an inevitable aspect of aging, which painted a bleak view of older age. This concept was challenged by newer theories and research indicating that older adults tend to maintain their accustomed social roles and activities as they age and that they can be resilient to the impact of later life transitions such as retirement and bereavement. A recent study examined social integration from the perspective of social networks (Social Connectedness: What Matters to Older People?, originally published in Aging and Society, 2019; 41(5), 1126-1144). The researchers found that, although the size and closeness of social networks may decline with age, the frequency of higher quality social contacts may increase as older adults have more time for community involvement (e.g., religious participation and volunteering).
Several demographic trends point to the potential for diminished family support networks for older persons. Increases in the divorce rate, a decline in the birth rate, and geographic mobility may lead to lower social connection and higher levels of social isolation. As fewer family members are available or capable of caring for elderly relatives, additional pressures are likely to be placed on social service agencies to provide some level of assistance. Bringing social, religious and civic institutions into a coordinated community response to elder abuse may help prevent social isolation of older adults, mitigate the negative impact of media portrayals of elders, and reduce their vulnerability to abuse, neglect and exploitation.
Printed circuit boards are everywhere. Think of the electronic products that you have come across: they all have a printed circuit board as the fundamental piece in their makeup. Unfortunately, the relatively high volume of electronic scrap also means that many PCBs go to waste. Still, waste circuit boards contain valuable materials that can be recovered and used to assemble new electrical components. Thus, it's essential to have a recycling process for these valuable materials. In that regard, our article will delve into the essence of recycling PCBs. Furthermore, it'll expound on the PCB recycling process.
Why Recycle PCB?
People casually disposed of electronic waste and other hazardous materials in the past. However, the electronics industry has grown much bigger, which means more waste and more environmental damage than ever. Thus, the recovery of materials is more important than ever.
Rather than the waste circuit board finding its way into pits, we can use it as a raw material for other electronics.
How to Recycle Circuit Boards?
Green Printed PCB without components.
It is possible to recycle PCB boards, but you will not get everything back. The primary materials in PCB are copper and FR-4. You can use this copper as an electrical conductor. However, you can only obtain pure copper as the fiberglass will degrade in the recycling process. Still, fiberglass is essential in several low-tech applications.
Here are the ways to recycle PCB:
As the name suggests, the technique involves subjecting the PCB to high temperatures. Primarily, it's handy in the recovery of metals as the FR-4 cannot survive high temperatures. Also, it's easy to implement, but it releases hazardous fumes of lead and dioxin.
Firstly, you need to place the PCB in a bed of acid. The acid solution will digest the FR-4, making it impossible to recover. Like the technique mentioned above, you can only contain metals from this recovery process.
Also, the process will release wastewater which you can reuse after treatment.
Computer parts ready for recycling.
It involves the physical separation of metals from nonmetals, primarily by first smashing and shredding the PCB. Recycling waste printed circuit boards using this process is safe as it doesn't contribute to environmental pollution.
However, it has several hazards for the operators. First, the machine involved is extremely noisy. Secondly, it emits dust particles laden with heavy metals and other intoxicants. Lastly, the high temperature causes the PCB to produce an irritant odor.
Circuit Board Recycling Process
Printed circuit board with components
The primary process involves cutting and sorting the PCB components via the following steps.
- First, you need to remove the components attached to the green boards via drilling.
- Next, cut the PCB board into tiny pieces.
- Also, you need to separate ferromagnetic materials from non-ferrous metals. Thus, you'll require magnetic separators to isolate the non-magnetic waste.
- Lastly, you need to sort out other materials such as ceramics and fiberglass.
After cutting and sorting, companies in the recycling industry use three main procedures.
Pyrometallurgical processes involve heating the PCB to approximately 1400 to 1600 degrees. Subsequently, metals such as lead, tin, and oxides will change to a liquid slurry. Additionally, the heating will change organic materials to hydrogen and carbon monoxide.
First, you need to dissolve the PCB in a solution of aqueous leaching agents. Next, you need to add a precipitation agent to convert the slurry from liquid to solid waste. Lastly, add the materials to an ion exchange system to remove the metal ions.
Primarily, this procedure involves the separation of precious metals via electrolysis from the waste stream.
What You Can Recycle from Printed Circuit Boards
Copper is the main product recovered from PCBs.
Essentially, recycling aims to acquire as many metal scraps as possible. The precious primary materials that you'll recover are copper and tin. You'll obtain copper from:
- Treatment sludge
- Edge trim
- Etching solution
- Rack stripping
- Solder stripping
Furthermore, in the extraction of copper, you can also obtain copper hydroxide, specifically in the hole process. Lastly, you can generate tin during the hot air leveling procedure.
How to Make PCBs More Recyclable?
A PCB board without components
There is an ever-increasing number of PCBs in the electronics industry. Thus, we need to recycle more waste electrical materials. However, at the moment, there's little that we can do to improve PCB recyclability.
Earlier, there was a notion that using SMD components in place of THT could improve PCB recyclability. Nonetheless, this has proven to be ineffective.
Environmental protection is a critical concern in the present world. Thus, all stakeholders in the PCB industry need to join hands to ensure that there is little electronic waste. It will be handy in limiting the energy consumption in the PCB production process.
At OurPCB, we are leading advocates in PCB recycling. Thus, if you have any queries on this process, talk to us at any time.
Fertilizers and the Environment (Grades 6-8)
In this lesson students will recognize that fertile soil is a limited resource to produce food for a growing population, describe the role fertilizer plays to increase food productivity, distinguish between organic and commercial fertilizers, and recognize how excess nutrients are harmful to the environment. Grades 6-8
Activity 1: The Big Apple
- 1 apple
- 1 knife
- Apple Land Use Model, available for purchase from agclassroomstore.com (optional)
- Lesson Handouts:
- Master 5.1, Newspaper Articles (Prepare an overhead transparency.)
- Master 5.2, Population and Land Use Graphs (Make 1 copy for each group of 3 students.)
- Master 5.3, Needs of the Future (Make 1 copy for each group of 3 students.)
- Lesson Handouts:
- Master 5.4, Thinking about Fertilizers (Make 1 copy for each group of 3 students.*)
- Master 5.5, Pros and Cons of Different Fertilizers (Make 1 copy for each group of 3 students.)
- Master 5.6, Nutrient Pollution (Make 1 copy for each group of 3 students.*)
- Master 5.7, Nutrient Pollution Discussion Questions (Make 1 copy for each group of 3 students.*)
* Half of the groups receive Masters 5.4, Thinking about Fertilizers and 5.5 Pros and Cons of Different Fertilizers, and the other half receive Masters 5.6, Nutrient Pollution and 5.7, Nutrient Pollution Discussion Questions.
non point source: nutrient pollution that results from runoff and enters surface, ground water, and the oceans from widespread and distant activities
nutrient pollution: the presence of excessive amounts of nutrients such as nitrogen and phosphorus in surface water, groundwater, air, and non-agricultural land; these excess nutrients stimulate the growth of algae and phytoplankton, which eventually depletes the waters of oxygen and impacts many aquatic organisms
nutrient toxicity: the presence of an excessive amount of a specific nutrient, which is harmful to the organism
point source pollution: nutrient pollution that comes from a specific source that can be identified such as a factory or a wastewater treatment plant
Background Agricultural Connections
Nourishing Plants with Fertilizers
Additional information can be found in the Background section of the lesson Plant Nutrient Deficiencies.
Fertilizers and the Environment
No one disputes the fact that proper application of organic and commercial fertilizers increases the yield of crop plants. The concern over their use is that plants may be exposed to larger quantities of nutrients than they can absorb, especially when applied improperly. In such cases, the excess nutrients run off the farmers’ fields with the rain and enter rivers, streams, lakes, and oceans, where they are not wanted. Excess nutrients in aquatic environments promote the growth of algae and similar organisms, leading to a general degradation of water quality. They can also enter groundwater and the atmosphere where they can contribute to human health problems and global warming. Some nutrients are a natural part of the environment and enter the biosphere from weathering and erosion processes. Nutrient sources from humans include agriculture, sewage and waste water treatment plants, coal-burning power plants, and automobile exhaust. The relative importance of these pollutants varies greatly between urban and rural areas. Controlling nutrient pollution means identifying its various sources and implementing policies that limit contact between nutrients and the environment.
As discussed earlier, organisms require essential nutrients to survive, but they must be present in the proper amounts. Either too little or too much can adversely affect health. A similar situation exists with regard to the environment. The U.S. EPA estimates that 12 percent of the nation’s waters are impaired either by nutrients or by sediment, which also may represent nutrient-related impairments such as oxygen depletion. It has been estimated that more than 60 percent of rivers and bays found in coastal states are moderately to severely degraded by nutrient pollution.

Nutrient pollution, especially from nitrogen, can lead to explosive growth of aquatic organisms through a process called eutrophication. The resulting blooms of organisms such as phytoplankton and algae reduce the amount of sunlight available to aquatic vegetation. Their metabolism depletes the bottom waters of oxygen, which can suffocate organisms that cannot move away from oxygen-depleted areas. Scientists have shown that the area of oxygen-depleted bottom water is increasing in estuaries and coastal zones worldwide.

Excess nitrate in water supplies can cause human health concerns at high concentrations. The most severe acute health effect is methemoglobinemia, often called ‘blue baby’ syndrome. Recent evidence suggests that there is not a simple association between nitrate and blue baby syndrome, rather that nitrate is one of several interrelated factors that lead to methemoglobinemia. The disease is uncommon in the United States because potential exposure to high levels of nitrate is limited to a portion of the population that depends on groundwater wells, which are not regulated by the Environmental Protection Agency (EPA). Public drinking water systems should contain nitrates at a level safe for consumption as nitrates can be removed by water filtration. Nitrogen pollution from cultivated soils, industry and other sources contributes to global warming because a portion is released into the atmosphere as nitrous oxide (N2O), a powerful greenhouse gas.
These excess nutrients enter the environment through both natural and human-induced mechanisms. Sources of nutrient pollution are classified as being either point sources or non point sources. Point sources typically are factories, power plants, and wastewater treatment plants,whereas non point sources are general sources, such as farms, cities, and automobiles. A major non point source of nutrient pollution is urban development. For example, clearing of land for housing and industry creates sealed surfaces that do not absorb water and increase nutrient-laden runoff. A related non point source of nutrient pollution is the septic systems that have proliferated as the suburbs extend beyond the reach of urban sewer systems. Another non point source is automobile exhaust. Nitrogen is released first into the atmosphere, but returns to the surface with the rain. Although definitive information is hard to come by, it has been estimated that up to 40 percent of the nitrogen entering aquatic environments in some areas can come from nitrogen in the air. Agriculture is also a non point source for nutrient pollution. Use of fertilizers can send excess nutrients into the environment, particularly when they are applied in excess of the plant’s needs or can quickly move into waterways. Increasingly, farmers are adopting nutrient management and precision agriculture measures that limit the amount of this pollution.
Point sources of nutrient pollution can be tied to specific locations. Most such sources come from wastewater treatment facilities and industrial plants. In urban areas,wastewater treatment facilities can be the largest contributors to nutrient pollution. For example, in Long Island Sound off the East Coast, an estimated 60 percent of the nitrogen that enters the water comes from sewage discharge leaving NewYork City. For many estuaries, however, non point sources contribute more to nutrient pollution than wastewater. In the Mississippi River, point sources account for just 10 to 20 percent of nitrogen and 40 percent of phosphorus entering the system.
During the past 40 years, antipollution laws have been enacted to reduce the amounts of toxic substances released into our waters. Water-quality standards are set by states, territories, and tribes. They classify a given water body according to the human uses the water quality will allow—for example, drinking water supply, contact recreation (swimming), and aquatic life support (fishing)—and the scientific criteria to support those uses. The federal Clean Water Act mandates that if a water body is impaired by a pollutant, a total maximum daily load (TMDL) must be created. Total maximum daily load is a calculation of the maximum amount of a pollutant that a water body can receive and still meet water quality standards, and an allocation of that amount to the pollutant’s sources. A TMDL is the sum of the allowable loads of a single pollutant from all contributing point and non point sources. The calculation must include a margin of safety to ensure that the water body can be used for the purposes the state has designated—such as swimming and fishing. The calculation must also account for seasonal variation in water quality.
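In symbols, the calculation described above is commonly written as a simple sum, where WLA is the wasteload allocation for point sources, LA is the load allocation for non point sources and natural background, and MOS is the margin of safety:

TMDL = Σ WLA + Σ LA + MOS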
Today, scientists and policy makers are working with farmers to develop more-effective and extensive nutrient management strategies. Solving the nutrient pollution problem will involve establishing emission regulations, compliance incentives, and federal oversight of designated water quality uses.
Managing Lawn Fertilizers
Growing concern about algae in surface waters has led some local municipalities to begin regulating lawn fertilizers. Areas in Florida, Maine, Michigan, Minnesota, Missouri, Washington, and Wisconsin have enacted ordinances limiting the phosphate in lawn fertilizers. In Ontario, Canada, the township of Georgian Bay recently passed a bylaw banning the application of fertilizer. The merit of such legislation is still under debate. However, manufacturers are responding by offering fertilizer grades with lower amounts of phosphate. Will these approaches be effective in improving water quality in our rivers, lakes, and reservoirs? The principles of nutrient management that have been developed for agricultural fertilizers also apply to lawn fertilizers. With soil testing and wise application, such as more frequent applications at lower doses, nutrient losses can be reduced.
Perhaps surprisingly, fertilizers can have a positive impact on the environment with regard to land use. Land is a finite resource, and human societies use it for a variety of purposes. We need land for residential living, for industries, for recreation, for wildlife habitats, and of course, for growing food and fiber. Land cultivation worldwide has remained about the same for the past 50 years. Although subsistence farmers in developing countries have brought some additional land into production, land has also been lost to expanding cities in the developed countries. Even so, starting in the 1960s, farmers were able to increase food production about 400 percent. The Green Revolution was made possible largely by three innovations: better crop varieties, use of commercial fertilizers, and better water management practices. The economist Indur Goklany calculated that if we needed to feed today’s population of over 6 billion people using the organic methods in use before the 1960s, it would require devoting 82 percent of Earth’s land to farming.
The United States produces a surplus of food, but the world doesn’t. By 2050, the world’s population is expected to number well over 8 billion people. Food production will need to keep pace. If the world’s farmland were used evenly by the world’s population, then each person would use 1.8 hectares. Instead, each person in North America uses 9.6 hectares and each European uses 5.0 hectares.
Technology and Nutrient Management
Clearly, if we are going to produce adequate food for our growing population, then crop yields will need to further increase. Strategies will have to be developed to meet the challenges of the future. Some farmers are using technology in a variety of ways to increase crop yields. While the utilization of these new technologies is growing, it is not occurring today on most of the nation’s farms. The rest of this section describes some of these technologies.
Geographic information systems (GIS) allow farmers to use map-based information about natural resources, soils, water supplies, variability in crop conditions throughout the year, and crop yields to ensure that the amount of nutrients being used matches crop needs. Even information about the amount of crop residue (which still contains nutrients) left at the end of the year and the amounts of nutrients removed by the crop can be “mapped” and stored in a GIS database. Once this information is gathered into one database, it can be integrated with other GIS databases such as rainfall records (taken from Doppler radar).
The global positioning system (GPS) is critical to the development of GIS databases and is used to identify the locations of equipment and people in the field. GPS is also useful in assessing general crop conditions and for scouting fields for problems such as nutrient deficiencies. GPS can help farmers return to the same field sites when problems are being addressed.
Autoguidance is a feature of mechanized agriculture. It ties together GPS, GIS, and robotics technologies, allowing a driver to sit and watch as the machine does the work. This technology is being used in various types of farm equipment such as tractors, combines, sprayers, and fertilizer applicators. For example, by using autoguidance systems, farmers can ensure that applications of fertilizers are not on overlapping tracks. The best of these systems can apply fertilizer to an accuracy of less than one inch.
Remote sensing uses satellite images of fields to help farmers know what is happening to their crops. The satellite images can be analyzed to detect variability in the reflection of visible, infrared, and other wavelengths of light. Some images show thermal (heat) radiation from the ground below, which helps estimate soil moisture conditions. These images and data, linked with the GIS data mentioned earlier, offer a means of detecting problems developing in the field and comparing successive images over time. The rate of change can be determined to illustrate how a problem is spreading.
Enhanced efficiency fertilizers help reduce nutrient losses and improve nutrient-use efficiency by crops while improving crop yields. These products provide nutrients at levels that more closely match crop demand, leaving fewer nutrients exposed to the environment. Slow- and controlled-release fertilizers are designed to deliver extended, consistent supplies of nutrients to the crop. Stabilized nitrogen fertilizers incorporate nitrification inhibitors and nitrogen stabilizers, which extend the time that nitrogen remains in a form available to plants and reduce losses to the environment.
Gene modification technology is another strategy with potential implications for the future. One of the main factors that limit crop growth is the efficiency of nitrogen uptake and usage by the plant. If crop plants can be made to more efficiently use nitrogen, more fertilizer will be converted into biomass. This means less fertilizer will run off into the environment.
The ultimate goal of this research is to give non-legume plants the ability to obtain their own nitrogen from the atmosphere (i.e. to ‘fix’ nitrogen from the atmosphere) rather than relying as heavily on added fertilizers. However, giving a corn plant the ability to fix nitrogen would involve adding a large number of genes, not only from nitrogen-fixing bacteria, but also from an appropriate host plant. The prospect of achieving this anytime soon is remote. Scientists have succeeded in helping plants better use nitrogen by increasing the expression of a single gene. For example, plants that highly express the enzyme glutamate dehydrogenase have been shown to grow larger than unmodified plants. Of course, genetic scientists aren’t limiting their efforts to nitrogen fixation. A wide variety of crop plants have been engineered to grow faster, tolerate unfavorable environments, resist pests, and have increased nutritional value.
- Ask your students if they think we have adequate land to grow and produce enough food for a growing population. Can every acre of farm land be used to grow food crops or raise animals? Students may picture areas where there is a lot of open space. However, do they realize that not all land is suitable for growing crops?
- After completing this lesson, students will be able to:
- recognize that farmland is a finite resource,
- appreciate that the world’s growing population demands an increase in food productivity,
- describe the role fertilizer plays in increasing food productivity,
- distinguish between organic and commercial fertilizers,
- describe how excess nutrients are harmful to the environment, and
- identify different sources of nutrient pollution.
Explore and Explain
Activity 1: The Big Apple
Tip from the field test: This activity uses an apple as a model of Earth. The Apple Land Use Model can be used as an alternative demonstration option. Students discuss the various ways people use land and make predictions about what percentage of Earth’s land is needed to grow our food. After discussing the ways in which land is used (Step 2), you may consider having the students create their own pie charts where they predict the percentages associated with different land uses, especially farming. Later, their predictions can be compared with the actual values revealed by the apple demonstration.
- Explain to the class that this activity is concerned with how we as a society use land. The amount of land on Earth stays the same, so as the world’s population gets larger, it becomes even more important that we make wise decisions about how it is used.
- Explain that land is used for many different reasons. Ask, “What are some of the most important uses for land?” Write students’ responses on the board or an overhead transparency. Students’ responses may include the following:
- Industries or places where we work
- Pastures or land for livestock.
- Parks, sports, and recreation.
- Wildlife habitat (wetlands, mountain ranges, forests, deserts, beaches, and tundra).
- If one of these uses is not mentioned by a student, ask guiding questions to bring it out. A student may point out that some land such as a desert has no use. Of course, any land that is not being used by humans can be considered a habitat for wildlife and provides a variety of other economic services for people. For example, wetlands help remove nutrient pollution from rivers, lakes and estuaries.
- Call attention to the apple and the knife. Explain that the apple represents Earth. Ask, “How much of the total Earth’s surface do you think is devoted to farming?” Students’ responses will vary. Some may remember that about 70 percent of the surface is water.
- Use the knife to cut the apple into 4 equal parts. Set 3 parts aside and hold up 1 part. Explain that the surface of the world is about 70 percent water, so this 1 piece represents that part of the surface that is land. Remind students of the many different uses for this relatively small amount of land.
- Use the knife to cut the 1/4 piece of apple in half 3 more times, each time discarding 1/2. Finally, hold up 1 of the smallest pieces and explain that it represents 1/32 of the surface of Earth, or 1/8 of the land where we live. This is the amount of land available for farming. Point out that the skin on this small piece of apple represents the tiny layer of topsoil that we depend on to grow food.
- Explain that because we put land to so many different uses, the amount devoted to farming has hardly changed during the past 50 years. Scientists are worried about how we will feed the world’s growing population in the next 50 years.
Activity 2: Using Land Wisely
- Display a transparency of Master 5.1, Newspaper Articles and cover the bottom portion so that only the top article can be read. Ask for a student volunteer to read the article aloud.
- Explain to students that they will continue in their roles as agricultural experts concerned with increasing crop yields on farms. Ask students to summarize the content of the article.
- Try to focus the discussion on the world. Most students in the United States do not have direct experience with severe hunger. Help them understand that in addition to human suffering, hunger can also lead to political instability. It is in everyone’s best interest to eliminate world hunger. The article mentions that population growth contributes to the problem of world hunger. Although population growth is an important societal issue, please remind students that the scope of this module is limited to discussions related to agricultural practices. The article also mentions the availability of freshwater and increasing temperatures due to global warming as challenges for growing more food. If they don’t understand why increasing temperatures cause lower crop yields, explain that it takes more energy and water for plants (and people) to maintain themselves at higher temperatures. Using humans as an example, you can point out that marathon records are usually set at cooler temperatures.
- Now uncover the bottom article and ask for a second volunteer to read it aloud.
- Once again, ask students to summarize the article. Students should recognize that there are many factors that influence world hunger and that addressing the problem requires the skills of many different types of people, including social scientists, climatologists, ecologists, water management experts, and agricultural experts.
- Divide the class into groups of 3 students. Explain that their first task is to investigate how land use is expected to affect farming in the future.
- Pass out to each group a copy of Master 5.2, Population and Land Use Graphs and Master 5.3, Needs of the Future. Instruct groups to use the graphs on Master 5.2, Population and Land Use Graphs to help them perform a calculation on Master 5.3, Needs of the Future about how much farmland will be needed in the year 2050. Give groups 5 to 10 minutes to perform their calculations.
- The numbers needed to perform the calculation are indicated on the population graph.
- For an explanation of calculations, see Teacher's Note
- Ask each group to report the results of their calculations. Write their answers on the board or on an overhead transparency.
- If any answers are out of the expected range, go through the calculation step by step, identify the mistake, and correct it.
- Review the land use for the class. If crop yields stay the same over the next 50 years, then an extra 10 billion acres of farmland will need to be set aside and cultivated.
- Ask the students to remember the different uses of land that they described in Activity 1: The Big Apple, Step 2. Point to the list of land uses on the board or display the transparency where they are listed.
- Ask, “If billions of acres of extra farmland are needed to feed people, where should it come from?” “What are you willing to sacrifice?”
- Students likely will believe that people must have adequate land for the places where they live and work. They may suggest taking the land from parks or wildlife habitats. Some may suggest that if more people became vegetarians, the extra farmland could come from pastures where livestock graze. These questions are not intended to settle the issue. Instead, they are intended to prompt a discussion that helps students see the scope of the problem and to consider some of the difficult decisions that may lie ahead.
- Explain that in the next activity, they will consider how farming practices can influence land use and crop yields.
Activity 3: Fertilizers and the Future
Teacher note: In this activity, students read about organic and commercial fertilizers (Master 5.4, Thinking about Fertilizers) and nutrient pollution (Master 5.6, Nutrient Pollution). In both masters, the information is a brief introduction to the topics. The information is not meant to be comprehensive. Rather, it is designed to challenge students’ critical-thinking skills.
- Remind students that in Activity 2: Using Land Wisely they calculated that 10 billion extra acres of farmland would be needed to feed the world’s population in 2050. Ask, “What assumption was made in reaching this conclusion?”
- Students’ answers will vary. Some may focus on assumptions associated with the rate of population growth. This is a good answer, but you should guide the discussion to remind students that their calculations assumed that the food yields on farms would remain the same during the next 50 years.
- Ask, “What will be the effect of increasing the amount of food that an acre of farmland can produce?”
- Students should realize that if farmland becomes more productive, then fewer acres will be required to meet the world’s food needs.
- Explain that in their roles as agricultural experts, they are going to make recommendations to the Earth Food Bank about how to farm in the future. Explain to students that when considering the proper use of fertilizer, they want to increase crop yields, while at the same time minimizing harm to the environment. Proper application of fertilizer means the following:
- Fertilizer is added at the right time. Fertilizers should be applied during that part of the plant’s life cycle when the nutrients are needed.
- Fertilizer is added at the right place. Fertilizers should be applied in a location where the nutrients can be taken up by the plant’s root system. This can also mean not adding fertilizer to land that is too close to waterways.
- Fertilizer is added at the right rate. Fertilizers should be applied at the rate at which the plant can use the nutrients.
- Explain that students need to learn more about fertilizers and their effects on the environment.
- Pass out to half of the groups a copy of Master 5.4, Thinking about Fertilizers and a copy of Master 5.5, Pros and Cons of Different Fertilizers.
- Pass out to the other groups a copy of Master 5.6, Nutrient Pollution and a copy of Master 5.7, Nutrient Pollution Discussion Questions.
- Instruct the groups to read the information found on the first handout (either Master 5.4, Thinking about Fertilizers or Master 5.6, Nutrient Pollution) and to discuss within their groups their understanding. Students should relate the ideas of “right time, right place, and right rate” when considering the use of fertilizers and their impacts on the environment.
- Students should use the second handout (either Master 5.5, Pros and Cons of Different Fertilizers or Master 5.7, Nutrient Pollution Discussion Questions) to record their conclusions.
- Students reading about fertilizers should be able to identify three or four advantages and disadvantages of each type of fertilizer. Students reading about nutrient pollution should be able to describe how excess nutrients can produce algal blooms that use up oxygen in the water, leading to suffocation of other plants and animals. They should be able to identify wastewater treatment facilities and industrial plants as point sources of nutrient pollution. They should identify agriculture, urban development, septic systems, and the burning of fossil fuels as non point sources of nutrient pollution. Student suggestions for limiting non point sources of nutrient pollution will vary. There is no simple correct answer. Look for logical responses that students can defend using evidence. The idea is to get them thinking about the multiple sources of nutrient pollution and for them to realize that limiting its effects will require a complex set of regulations, incentives, and government oversight.
- After the groups have completed their tasks, ask for volunteers to read their conclusions.
- Make a list of the advantages and disadvantages of each type of fertilizer on the board or on an overhead transparency.
- Discuss answers to the questions about nutrient pollution.
- Ask, “Why do you think that some farmers use organic fertilizers and others use commercial fertilizers?”
- Student responses will vary. Try to bring out in the discussion that the farmers in the United States have more options than farmers in poorer countries, who may have no choice and must use organic fertilizers that they produce for themselves. A consequence is that farmers in poorer countries obtain lower crop yields as compared with farmers in the United States. However, farmers in the United States often choose to use organic fertilizers for a variety of other reasons.
Teacher note: Try to avoid getting bogged down in debating whether or not food that is organically grown is safer or tastes better than food grown using commercial fertilizers. This is not the focus of the lesson. Scientific studies have not been able to consistently find taste, health, or safety differences between food grown using the two types of fertilizers.
Optional Homework Assignment 1 Instruct students to research and write a short paper describing the advantages and disadvantages of organic and commercial fertilizers. For each type of fertilizer, students should include information about the fertilizer’s composition, the fertilizer’s application, its influence on crop yields, its impacts on the environment, and its role in agriculture, both in North America and globally.
Optional Homework Assignment 2 Instruct students to involve their parents or guardians in this activity. Using the world population graph on Master 5.2, Population and Land Use Graphs, ask students to determine the world’s population when their parents or guardians were their age. Have students calculate the population increase from then until now. Have students ask their parents or guardians: “What is the world’s population today?” “How much of Earth is used for farmland?” Have students, with their parents or guardians, come up with 3 ways of increasing the world’s food supply. Instruct students to turn in a summary of the activity. It should contain the world’s population when the parents or guardians were the same age as the student, the calculation showing the increase in population between then and now, the parents’ or guardians’ answers to the population and farmland questions, the 3 proposed ways of increasing the world’s food supply, and the parents’ or guardians’ signatures.
This lesson is the last in a series of five related lessons. Refer to the following lessons for further depth.
- Lesson 1: In Search of Essential Nutrients
- Lesson 2: Properties of Soil
- Lesson 3: Plant-Soil Interactions
- Lesson 4: Plant Nutrient Deficiencies
- Lesson 5: Fertilizers and the Environment
Watch the Fertilizers and the Environment video clip.
After conducting these activities, review and summarize the following key concepts:
- Fertile soil with an adequate climate for plant growth is a limited resource.
- Soil and water are natural resources that need to be managed and conserved.
- Various fertilizers need to be used correctly to avoid negative environmental impacts.
Recommended Companion Resources
In Sixth Grade
In sixth grade, students converted units within a measurement system using proportions and unit rates. Students extended their knowledge about triangles to find the sum and measures of angles and the lengths of sides to form a triangle. Students modeled and solved area formulas for parallelograms, trapezoids, and triangles as well as wrote equations. Graphing using the four quadrants of a coordinate plane was also introduced.
In Seventh Grade
In seventh grade, students will solve real-world problems related to similar shape and scale drawings, convert between measurement systems using proportions and unit rates as well as write and solve equations related to the sum of angles in a triangle. Students will also solve problems related to finding the volume and lateral and total surface area of rectangular prisms, triangular prisms, rectangular pyramids, and triangular pyramids as well as determine the circumference and areas of circles and area of composite figures that contain various combinations of figures.
In Eighth Grade
In eighth grade, students will solve problems to find the volume of cylinders, cones, and spheres as well as use the formulas for lateral and total surface area to determine the solutions for problems involving rectangular prisms, triangular prisms, and cylinders. Students will also use the Pythagorean theorem including models and diagrams to solve problems and explain the effects of translations, reflections, and rotations on a coordinate plane.
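As a quick illustration of two of the formulas mentioned above, here is a short Python sketch; the measurements are made-up example values, not taken from any of the grade-level materials.

```python
import math

# Volume of a cylinder: V = pi * r^2 * h (radius and height are example values).
radius, height = 3.0, 5.0
cylinder_volume = math.pi * radius**2 * height

# Pythagorean theorem: c = sqrt(a^2 + b^2) for the legs of a right triangle.
a, b = 6.0, 8.0
hypotenuse = math.sqrt(a**2 + b**2)

print(f"Cylinder volume: {cylinder_volume:.1f} cubic units")
print(f"Hypotenuse: {hypotenuse:.1f} units")  # 10.0 for legs of 6 and 8
```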
Kangaroo - Birth and Infancy
The Australian kangaroo as a marsupial
One of the strangest and most fascinating birthing processes in the animal kingdom is that of the Australian grey kangaroo. Like possums, koalas, wombats and wallabies, kangaroos are marsupials that are found mainly in Australia. Marsupials represent about 6% of all mammals, and their most unique characteristic is that they all give birth to undeveloped offspring who spend time inside of their mother’s pouch until they are fully developed and ready to go out into the world. This video displays exactly what occurs during the kangaroo’s birthing process.
After the egg descends from the ovary into the uterus and is fertilized, the neonate takes about 33 days to develop inside the uterus. Once the 33 days are over, the female kangaroo is ready to give birth; she will usually give birth to only one baby at a time. From this point on, the baby kangaroo is called a joey.
Kangaroo delivery process
The kangaroo’s delivery process is very gentle, in order to account for the still-undeveloped state of the newborn, who is born approximately the size of a jellybean and weighs only two grams. The joey is born blind, hairless and with undeveloped back legs, organs and central nervous system. It instinctively uses its sense of smell and forelegs to make its way out of one of the mother’s two uteri and into her pouch by climbing through her thick fur. This process takes the baby joey about three to five minutes to complete.
Once the joey makes its way into the pouch, the mother’s sexual cycle restarts almost immediately. A second egg descends from the ovary into the uterus and she becomes sexually receptive. If she does mate and the egg becomes fertilized, the process will be temporarily put on hold in order to allow the first joey to grow to maturity.
What happens inside the kangaroo’s pouch?
Meanwhile, inside the mother’s pouch, the joey latches onto one of the mother’s teats and begins feeding. The joey receives vital nutrients from the mother’s milk, allowing it to grow and develop properly. After about 190 days the joey is ready to leave the mother’s pouch for the first time. From that point on, the little kangaroo will spend more and more time outside of the pouch, but will come back if it senses danger or is in need of protection.
As the joey begins spending time outside of its mother’s pouch, the mother’s body begins preparing for the next baby. Her body will begin to develop the fertilized egg that has been waiting in her womb, and after 33 days a second joey will find its way into the mother’s pouch. The second joey will begin to suckle from a different one of the mother’s teats than the first joey. Remarkably, the mother kangaroo’s mammary glands are able to produce two different types of milk at the same time. Each type of milk has a different chemical composition. This allows the older joey to indulge in milk with a much higher fat content than that of the younger joey, enabling both joeys to receive suitable and proper nourishment according to their respective needs.
The little kangaroos’ life outside of the pouch
Approximately 235 days later the mature joey will be completely ready to leave the pouch for good, leaving the mother room to carry her next baby. For a while after the joey leaves the pouch, the mother will stay close to it and not let it stray too far. Eventually, the joey will become independent of its mother and learn to be a part of the mob. After the kangaroo is off on its own, its lifespan can range between six and 20 years, depending on its living conditions. Most kangaroos in the wild, however, do not have a long lifespan.
In jazz and blues, a blue note is a note that—for expressive purposes—is sung or played at a slightly different pitch from standard. Typically the alteration is between a quartertone and a semitone, but this varies depending on the musical context.
The blue notes are usually said to be the lowered third, lowered fifth, and lowered seventh scale degrees. The lowered fifth is also known as the raised fourth. Though the blues scale has "an inherent minor tonality, it is commonly 'forced' over major-key chord changes, resulting in a distinctively dissonant conflict of tonalities". A similar conflict occurs between the notes of the minor scale and the minor blues scale, as heard in songs such as "Why Don't You Do Right?", "Happy" and "Sweet About Me".
In the case of the lowered third over the root (or the lowered seventh over the dominant), the resulting chord is a neutral mixed third chord.
Blue notes are used in many blues songs, in jazz, and in conventional popular songs with a "blue" feeling, such as Harold Arlen's "Stormy Weather". Blue notes are also prevalent in English folk music. Bent or "blue notes", called in Ireland "long notes", play a vital part in Irish music.
Music theorists have long speculated that blue notes are intervals of just intonation not derived from European 12-tone equal temperament tuning. Just intonation musical intervals derive directly from the harmonic series. Humans naturally learn the harmonic series as infants. This is essential for many auditory activities such as understanding speech (see formant) and perceiving tonal music. In the harmonic series, overtones of a fundamental tonic tone occur as integer multiples of the tonic frequency. It is therefore convenient to express musical intervals in this system as integer ratios (e.g. 2:1 = octave, 3:2 = perfect fifth, etc.). The relationship between just and equal temperament tuning is conveniently expressed using the 12-tone equal temperament cents system. Just intonation is common in music of other cultures such as the 17-tone Arabic scale and the 22-tone Indian classical music scale. In African cultures, just intonation scales are the norm rather than the exception. As the blues appears to have derived from a cappella field hollers of African slaves, it would be expected that its notes would be of just intonation origin closely related to the musical scales of western Africa.
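The cents values quoted in the following paragraphs can be checked directly: a frequency ratio converts to cents as 1200 × log2(ratio). A short Python sketch, using a sample of the intervals discussed below:

```python
import math

def cents(ratio: float) -> float:
    """Convert a frequency ratio to cents (1200 cents per octave)."""
    return 1200 * math.log2(ratio)

# A few just-intonation intervals discussed in this section.
intervals = {
    "octave (2:1)": 2 / 1,
    "perfect fifth (3:2)": 3 / 2,
    "just minor third (6:5)": 6 / 5,     # ~316 cents
    "just major third (5:4)": 5 / 4,     # ~386 cents
    "eleventh harmonic (11:8)": 11 / 8,  # ~551 cents
    "harmonic seventh (7:4)": 7 / 4,     # ~969 cents
}

for name, ratio in intervals.items():
    print(f"{name}: {cents(ratio):.0f} cents")
```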
The blue "lowered third" has been speculated to be from (267 cents) to 350 cents above the tonic tone. It has recently been found empirically to center at (316 cents, a minor third in just intonation, or a slightly sharp minor third in equal temperament) based on cluster analysis of a large number of blue notes from early blues recordings. This note is commonly slurred with a major third justly tuned at (386 cents) in what Temperley et al. refer to as a "neutral third". This bending or glide between the two tones is an essential characteristic of the blues.
The blue "lowered fifth" has been found to be quite separate from the perfect fifth and clusters with the perfect fourth with which it is commonly slurred. This "raised fourth" is most commonly expressed at (583 cents). The eleventh harmonic (i.e. or 551 cents) as put forward by Kubik and Curry is also possible as it is in the middle of the slur between the perfect fourth at and .
The blue "lowered seventh" appears to have two common locations at (969 cents) and (1018 cents). Kubik and Curry proposed as it is commonly heard in the barbershop quartet harmonic seventh chord. The barbershop quartet idiom also appears to have arisen from African American origins. It was a surprising finding that was a much more common tonal location although both were used in the blues, sometimes within the same song.
It should not be surprising that blue notes are not represented accurately in the 12-tone equal temperament system, which is made up of a cycle of very slightly flattened perfect fifths (i.e. fifths of 700 cents rather than the just 3:2 at about 702 cents). The just intonation blue note intervals identified above all involve prime numbers greater than 3, which are not equally divisible by 2 or 3. Prime-number harmonics greater than 3 are all perceptually different from 12-tone equal temperament notes.
The blues has likely evolved as a fusion of an African just intonation scale with European 12-tone musical instruments and harmony. The result has been a uniquely American music which is still widely practiced in its original form and is at the foundation of another genre, American jazz.
Skin cancer is the most common type of cancer. Prevention and early diagnosis play critical roles in the fight against skin cancer, regardless of the type a person develops.
Skin cancer has many causes, but most have to do with a history of sunburn and exposure to sun as a child. Individuals with fair skin, light eyes, and freckles are at a greater risk of developing skin cancer, as are those who have more than the average number of moles. A family history of this form of cancer also increases a person’s risk of developing a melanoma or carcinoma.
Individuals with chronic, repeated, or prolonged sun exposure, such as those who reside in warmer climates, work outdoors, or frequently engage in outdoor activities, as well as those who use tanning booths, are also at a greater risk of developing skin cancer.
Skin cancer is treatable when detected early and has a high cure rate. Seek medical attention when suspicious lesions appear or a change in a mole is detected. A total body exam is suggested annually, and more frequently for individuals at high risk or with a history of skin cancer. Treatment depends on the type and severity of the skin cancer and may include, but is not limited to, freezing, surgery, radiation therapy, and chemotherapy.
There are three main types of skin cancer, and each has different symptoms. Below, we’ve provided brief descriptions of these cancers and how they may manifest.
A rise in average global temperatures causes an increase in the time it takes the Earth to rotate, according to a new paper. The finding could solve a 20-year-old mystery known as Munk’s Enigma.
It’s already established that the length of a day increases over time because the moon is gradually getting further away. That affects tidal patterns and in turn causes friction on the floor of the seas, which slows down the Earth’s rotation. It’s this pattern that means we occasionally have to add a leap second at the end of June or December.
But while the moon is having this effect over time, it’s also believed that the rate of this effect varies in relation to sea levels. In 2002, oceanographer Walter Munk tried to explain the connection between the two but ran into a mystery.
Munk’s figures showed that the Earth’s rotation slows over time and that sea levels changed over time. He could account for a specific relationship between the two going back to the Ice Age, but couldn’t make the figures match up for the 20th century, despite knowledge that increased global temperatures were melting glaciers and in turn causing sea levels to rise.
The mystery was so profound that Munk even developed a theory that melting glaciers might in fact speed up the Earth’s rotation, thus shortening days. He pondered the idea that with the ice melting, the rock beneath could “spring up” and thus add more weight near the surface of the poles. However, Munk openly admitted this idea was uncertain.
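Both Munk’s original argument and his counter-theory rest on conservation of angular momentum: redistributing mass toward or away from the rotation axis changes Earth’s moment of inertia, and the rotation rate must change to compensate, lengthening or shortening the day. The sketch below only illustrates that arithmetic; the fractional change in the moment of inertia is a made-up number, not a value from the paper.

```python
# Conservation of angular momentum: I1 * w1 = I2 * w2, so a fractional
# increase in the moment of inertia slows rotation and lengthens the day.
# The fractional change below is hypothetical, purely for illustration.

seconds_per_day = 86400.0
fractional_inertia_increase = 1e-9   # hypothetical: +1 part per billion

new_day_length = seconds_per_day * (1 + fractional_inertia_increase)
lengthening_ms = (new_day_length - seconds_per_day) * 1000

print(f"The day lengthens by about {lengthening_ms:.3f} milliseconds")
```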
Now six researchers led by Harvard geophysics professor Jerry Mitrovica believe they have solved Munk’s Enigma. They’ve produced a paper arguing that Munk made three errors which could have prevented him from establishing the expected “higher sea levels/slower rotation” relationship as having continued throughout the 20th century.
- The figures Munk used for the sea level rise in the 20th century were too big, thus distorting the relationship with day length.
- The figures Munk used for the Ice Age were incorrect, partly because they didn’t account for the Earth not being a perfect sphere and partly because they didn’t take account of just how much glaciers can deform the rock beneath them.
- Munk’s calculations didn’t take account of the fact that the Earth has a liquid core which “spins” in the opposite direction to the planet and thus slows down its overall rotation.
According to the researchers, adjusting for these three errors makes it possible to establish a model for a constant sea level-Earth rotation relationship that holds true for both the Ice Age and 20th century and, in turn, allows more confident predictions about future effects.
However, William Peltier, a University of Toronto physics professor quoted by the Washington Post, disputes the findings of the paper. He argues there’s no proof that the liquid core has the effect the researchers assumed when producing their calculations.
The Atlanta Science Festival encourages real-world problem solving in the classroom. By building the capacity of students to apply creativity, perseverance, leadership and teamwork alongside STEM conceptual understanding, we prepare students to address tomorrow’s challenges. Here are several lesson plans that incorporate engineering design challenges and review key science concepts for the end of the year.
- Kindergarten – Physical Properties, Gravity. Humpty Dumpty. [Student Journal PPT]
- 1st grade – Weather. A Windy Day.
- 2nd grade – Bridge Building. Three Billy Goats Gruff Engineering.
- 3rd grade – Heat Energy. How do you keep an ice cube from melting?: The Penguin Problem
- 4th grade – Seed Engineering
- 5th or 6th grade – Protecting Our City With Levees.
- 5th or 8th grade – Physical Science, electricity, magnetism. Scribble Bots.
- 6th or 7th grade – fossils in sedimentary rocks. Evidence of Plate Tectonics (or Evolution).
- 7th grade – Engineering Seed Helicopters.
- Middle or High School – Physical Science. Penny Boats: An Exploration of Density.
- 8th or 9th grade – Physical Science. Life Jacket Engineering (and Inverse Relationships).
- High School – An Introduction to Biometrics.
- High School – Chemistry/Physical Science. Engineering: Pill Coatings.
What is computer memory?
Memory is one of the fundamental components for the proper functioning of our PC, since it allows the computer to start, data to be processed, and the instructions of the different programs to be executed.
In addition, the greater the amount of memory a PC has, the better the computer's overall performance.
In fact, a computer works with four different types of memory, each serving different functions: RAM memory, ROM memory, SRAM or cache memory, and virtual or swap memory.
The most important is the so-called RAM (Random Access Memory), since our computer could not function without its existence.
Different types of information are stored in RAM, from temporary processes such as file modifications, to the instructions that enable the execution of the applications that we have installed on our PC.
For this reason, it is constantly used by the microprocessor, which accesses it to find or temporarily save information regarding the processes carried out on the computer.
Within RAM memories there are different technologies that differ mainly in their access speed and their physical form. Among them we find DRAM, SDRAM and RDRAM, among others.
The so-called DRAMs (Dynamic Random Access Memory) have been used in computers since the early 1980s, and they continue to be used today. DRAM is one of the cheapest types of memory, although its greatest disadvantage is its speed, since it is one of the slowest, which has led manufacturers to modify the technology to offer a better product.
SDRAM technology, derived from DRAM, began to be marketed in the late 1990s, and thanks to this type of memory processes were significantly streamlined, since it can operate at the same speed as the motherboard to which it is attached.
For its part, RDRAM technology is one of the most expensive due to its manufacturing complexity, and it was only used with high-end processors such as the Pentium 4 and later.
Another difference between the various RAM memories is found in the type of module in question, which can be SIMM (Single In-line Memory Module), DIMM (Dual In-line Memory Module) or RIMM (Rambus In-line Memory Module), depending on the number of pins it contains and the physical size of the module.
In addition to RAM, computers work with a memory called ROM, Read Only Memory, which, as its name indicates, is a read-only memory, since most of these memories cannot be modified because they do not allow writing.
The ROM is built into the motherboard and is used by the PC to run the BIOS, which is basically a program containing the instructions that guide the computer during boot.
Among its functions, the BIOS begins with a process called POST (Power-On Self Test), during which it inspects the entire system to verify that all its components are working properly before booting.
To do this, the BIOS consults a register containing all the information about the hardware installed in our PC, to verify that everything is in order. This register is called the CMOS Setup.
Although we mentioned that in many cases the ROM memory cannot be modified, nowadays a large number of motherboards incorporate new models of ROM that allow it to be written, so that the user can make changes to the BIOS in order to improve its operation.
The fundamental difference between RAM and ROM lies in speed: ROM, being a type of sequential memory, needs to go through the data until it finds the information it is looking for, while RAM works randomly, which allows it to access specific information directly.
This factor makes the speed of RAM noticeably higher. Likewise, its capacity is greater than that of ROM memory, and unlike the latter, RAM is not integrated into the motherboard, which allows the user to expand the amount of RAM on their PC.
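The random-versus-sequential distinction drawn above can be illustrated with a toy example; this is only a conceptual analogy in Python, not a model of how the chips themselves are wired.

```python
# Conceptual illustration only: "random access" jumps straight to an index,
# while a sequential scan must walk through entries until it finds a match.

data = [f"value_{i}" for i in range(1000)]
target_index = 742

# Random (direct) access: one step, regardless of position.
random_access_result = data[target_index]

# Sequential access: walk the list from the start, counting steps.
steps = 0
for i, value in enumerate(data):
    steps += 1
    if i == target_index:
        sequential_result = value
        break

print(random_access_result, sequential_result, f"sequential steps: {steps}")
```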
Another type of memory used by computers is called SRAM, better known as cache memory.
The processor, the hard disk and the motherboard each have their own cache memory, which basically stores addresses and data that are used frequently in order to speed up different functions, such as running the programs installed on the PC.
The process carried out by the cache is to save the locations on the disk occupied by programs that have been executed, so that when they are started again, access to the application is faster.
There are three different types of cache:
– The L1 cache, which is inside the processor and works at the same speed as it, and in which instructions and data are stored.
– The L2 cache, which is usually of two types: internal and external. The first is inside the processor itself, while the second sits outside it on the motherboard, which makes it slower than the L1 cache.
– The L3 cache, which is only incorporated into some of the most advanced microprocessors and results in a higher processing speed.
In some computers, especially those that have Microsoft Windows or Linux operating systems, we will also find the so-called virtual or swap memory.
This type of memory, which works in a similar way to the cache, is created by Windows or Linux to be used exclusively by the operating system. In the case of Linux this so-called swap memory is generally located in a different partition of the disk, while in the Microsoft system it is a file within the operating system itself.
On many occasions, virtual memory tends to produce problems that cause the PC to hang, since this type of memory is created by the system on the hard disk and can sometimes exceed the processing capacity.
When programs are executed through virtual memory, the only result is that our PC becomes slower, since everything is limited by the much lower access speed of the hard disk.
The best way to avoid this problem is to expand the amount of RAM in our PC, so that the system does not need to create extra virtual memory and slow down processes during our work.
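One practical way to see how much physical memory and swap a system is actually using is with the third-party psutil package (assuming it is installed; on Linux the free -h command reports similar figures). A minimal sketch:

```python
# Requires the third-party psutil package (pip install psutil).
import psutil

ram = psutil.virtual_memory()
swap = psutil.swap_memory()

print(f"RAM : {ram.total / 2**30:.1f} GiB total, {ram.percent}% in use")
print(f"Swap: {swap.total / 2**30:.1f} GiB total, {swap.percent}% in use")

# Heavy swap usage alongside nearly full RAM is the slowdown described above;
# adding physical RAM is usually the cure.
```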
RAM memory types
Depending on the type and age of the motherboard we use in our PC, it will provide different types of memory sockets, and it may use DDR, DDR2, DDR3 or DDR4 RAM.
The acronym DDR stands for “Double Data Rate”, that is, double transfer rate memory. These modules are composed of synchronous memories (SDRAM), and although they are the same size as SDRAM DIMMs, DDR-SDRAM modules have more connectors: while normal SDRAM has 168 pins, DDR-SDRAM has 184.
DDR memories work by transferring data on two different channels simultaneously in the same clock cycle, moving a volume of 8 bytes of information in each clock cycle. This also makes them a better match for more powerful processors in terms of clock cycles.
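The peak transfer figures quoted for these modules follow from a simple calculation: bus clock × transfers per clock × bus width in bytes. The sketch below uses the nominal figures of a DDR-200 (PC-1600) module as an assumed example; the module name and clock are illustrative, not taken from the text above.

```python
# Peak module bandwidth = bus clock (MHz) * transfers per clock * bus width (bytes).
# Example figures for a DDR-200 (PC-1600) module, used here only as an illustration.

bus_clock_mhz = 100        # I/O bus clock
transfers_per_clock = 2    # "double data rate": one transfer on each clock edge
bus_width_bytes = 8        # standard 64-bit module

bandwidth_mb_s = bus_clock_mhz * transfers_per_clock * bus_width_bytes
print(f"Peak bandwidth: {bandwidth_mb_s} MB/s")   # 1600, the minimum DDR figure quoted above
```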
With regard to DDR2 memory, it is basically the second generation of DDR SDRAM, which has managed to improve certain aspects by providing faster simultaneous processes.
Being a more modern technology, DDR2 has notable differences from its predecessor, the most significant of which has to do with the minimum transfer value: while in traditional DDR it is 1600 Mbps, in DDR2 it is doubled to 3200 Mbps.
This allows greater bandwidth in the processes: DDR2 memories work with 4 bits per cycle (2 going and 2 returning) within the same cycle and at the same frequency as a conventional DDR, although they also have higher latency.
Unfortunately, DDR and DDR2 are not compatible, so if you have a PC whose motherboard has sockets for DDR, you will not be able to use DDR2 memories, since the latter have 240 pins, which allows their voltage to be reduced to 1.8V, while DDRs use a voltage of 2.5V.
The voltage reduction in the second generation of DDR memories is a great improvement, because it considerably reduces energy consumption and, therefore, heat generation.
Further development of this type of RAM technology produced the DDR3 modules, whose most important manufacturer so far has been the company Samsung Electronics.
DDR3 incorporates important improvements in the field of DDR SDRAM memories, most notably the fact that it can transfer data at an effective clock rate of 800-1600 MHz, greatly surpassing the previous generations: DDR2 has a rate of 533-800 MHz and DDR of 200-400 MHz.
This allows greater bandwidth in the processes, which is significantly noticeable in the operation of the PC, in addition to doubling the number of bits handled per cycle to 8 in order to increase performance, and doubling the minimum transfer rate to 6400 Mbps, compared with the 3200 Mbps of DDR2.
DDR3 modules operate at only 1.5V, thanks to the implementation of 80-nanometer manufacturing technology. This change reduces energy consumption and heat generation, thus increasing the speed of the processes.
Regarding the physical aspect, although DDR3 modules have 240 pins, that is, the same number as DDR2, the two types of memory are incompatible, since the pins are located differently.
DDR4 memories have a speed of 2,667 MHz and a transfer rate of 21,300 Mbps.
In the market, in addition to the typical DDR type RAM, we can also find a variant of it, called GDDR SDRAM (Graphics Double Data Rate Synchronous Dynamic RAM), which is a type of memory specifically designed to be used for video rendering, usually working together with the GPU of our graphics card.
With this type of memory, we will be able to create very complex 3D graphic structures, for which we need a large amount of memory. However, with GDDR memories, which are much faster, the amount of memory required for these processes is reduced, which means less money and space, although the price of GDDR memories puts them out of reach of the average user on a tight budget for home builds, as they are much more expensive to produce than DDRs, which translates into a much higher price.
Although GDDR-type memories share many technical characteristics with DDR-type memories, the truth is that they are not completely the same. GDDR memories, being optimized for use in video rendering, prioritize bandwidth, not latency. GDDR memories also comply with the DDR standard specified by JEDEC, which is why they are capable of sending two or four bits per clock cycle, although in this case GDDR memory is optimized to achieve higher frequencies and a larger bus width, which minimizes the access time to the instructions stored in memory.
GDDR memories, like DDRs, have evolved over time, which is why we can find multiple variants. Below are the different types of GDDR memory on the market.
GDDR: The first type of GDDR on the market. Its effective working frequency was between 166 and 950 MHz with a latency of 4 to 6 ns.
GDDR2: In this type, the operating frequency was improved, ranging from 533 to 1000 MHz, and it could offer a bandwidth of between 8.5 and 16 GB/s.
GDDR3: Especially used by some models of ATI and Nvidia graphics cards, these memories can operate between 166 and 800 MHz.
GDDR4: Quickly replaced by GDDR5, they were only used by some AMD models.
GDDR5: One of the most widespread GDDR memory types in recent years. It is used in mid-range and high-end video cards from manufacturers such as Nvidia and AMD (Radeon), among others. These memories are capable of offering a bandwidth close to 20 GB/s on 32-bit buses and 160 GB/s on 256-bit buses, and the operating frequency can reach up to 8 Gbps. It should be noted that this type of memory is also installed in game consoles such as the Xbox One and the PS4.
GDDR5X: This memory is basically an evolution of the GDDR5 technology that is used in some models of video cards. It offers an operating frequency of 11 Gbps and a bandwidth of 484 GB/s over a 352-bit bus.
GDDR6: At the moment, this is the latest version of GDDR memory available. It is capable of offering an operating frequency of up to 14 Gbps with a bandwidth of 672 GB/s over a 384-bit bus. This type of memory is used in high-end video cards such as the Nvidia Titan RTX.
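The GDDR6 figure quoted above follows from the same per-pin arithmetic used for the DDR example earlier: bandwidth = per-pin data rate × bus width in bits ÷ 8.

```python
# Bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8 bits per byte.
per_pin_gbps = 14        # GDDR6 rate quoted above
bus_width_bits = 384     # bus width quoted above

bandwidth_gb_s = per_pin_gbps * bus_width_bits / 8
print(f"GDDR6 bandwidth: {bandwidth_gb_s:.0f} GB/s")   # 672 GB/s
```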
Annex 1: ROM memory
ROM memory is perhaps the most important hardware element of computers and portable devices such as cell phones, smartphones and tablets, among many others, since this small electronic component stores all the information necessary for the device to start up and fulfill its function.
The term ROM is an abbreviation of “Read Only Memory”, and as its name indicates, this type of memory stores information that can only be read: it cannot be written with new data, except through special procedures such as updating a BIOS.
What is ROM memory?
Basically, a ROM memory is a chip that stores the information needed to start an electronic device such as a computer or a smartphone, and whose main characteristic is the ability to preserve the data it contains even when there is no power supplying it, unlike RAM memories, which immediately lose their content when not energized.
The term ROM is nowadays used by convention; it dates from when ROM memories left the factory with their data already stored in them and there was no way to write to them.
Nowadays it is possible to find memories that fulfill the same function as the old ROMs but can be written, called EPROM and Flash EEPROM. However, writing to this type of memory is a complicated task that cannot be done directly, except with special tools and procedures, which most of the time are not available to the average user.
EPROM and Flash EEPROM memories can be written many times, which means, for example, that updating the BIOS of a computer can be a routine task that does not present problems. This type of memory has been so widely adopted in the role of ROM that, since the end of the first decade of the 21st century, devices containing the oldest type of ROM have practically disappeared from the market.
What is ROM for?
ROM memories in devices fulfill the important function of storing the code needed to start the different modules that make up a computer, that is, everything required to start working with it. Likewise, the ROM memory is responsible for starting the operating system of the PC on which it is installed.
In addition to being used for managing the PC boot process, ROM is used for the initial system check and various input and output device control routines.
The ability of ROM memory to preserve data even when it is not energized makes it ideal for the job of starting a computer, since the data stored in ROM does not alter or degrade in the absence of electricity to feed it; it is always the same, so the device it manages will always behave in the same way.
ROM memory types
Over the years, ROM memories have evolved to adapt to new technologies. Currently, there are three basic types of ROM memory.
ROM (Read Only Memory)
This type of ROM, or “Read Only Memory”, was the first to be developed and manufactured, and the information to be stored in it was recorded using a procedure that involved a silicon wafer and a mask. This type of ROM memory is no longer used, having been replaced by the memories detailed below.
PROM (Programmable Read Only Memory)
PROM memories, also known as “Programmable Read Only Memory”, appeared at the end of the 1970s, and their programming, that is, the loading of the data they had to contain, was carried out by burning certain electronic components, called diodes, with a voltage overload using a device known as a “ROM programmer”. The diodes burned by the overload correspond to “0”, while the others correspond to “1”.
EPROM (Erasable Programmable Read Only Memory)
EPROM memories, also known as “Erasable Programmable Read Only Memory”, are basically PROM-type memories with the particularity of being erasable. These memories are erased by ultraviolet light that penetrates the circuit through a window in the chip’s encapsulation: as soon as the chip is exposed to UV light, all bits return to their “1” state.
EEPROM (Electrically Erasable Programmable Read Only Memory)
EEPROM memories, also known as “Electrically Erasable Programmable Read-Only Memory”, are, like EPROM memories, erasable; however, the procedure is simpler in EEPROM memories, since it can be carried out by applying a certain electrical current.
It should be noted that EEPROM memories offer a variant called Flash EEPROM, which uses fewer components, specifically a single transistor, instead of the 2 or 3 that EPROM memory uses. It also offers the ability to read record by record, instead of a full page reading like EEPROM memory.
Differences between RAM and ROM memories
As we know, there are two types of memory in a computer, ROM and RAM, and each of them fulfills a very different function. RAM memory, or random access memory, is the memory accessed by the operating system to find the data that both the user and the operating system are using, since this is much faster than searching for it on the hard disk.
RAM memory can be read and written multiple times; however, RAM is temporary, since the data it contains is erased immediately when it loses power.
In contrast, ROM memory is not affected by the loss of power, which makes this type of memory the ideal medium for storing the data necessary for a device to function. In addition, not being writable, at least by the usual means available to the average user, guarantees that it will keep the data it contains in any situation, so the device will always turn on and follow the same routine.
Annex 2: Cache memory
Cache memory was born when it was discovered that memories were no longer capable of keeping up with the speed of the processor, often causing the latter to sit “waiting” for the data that the RAM memory had to deliver in order to complete its tasks, losing a lot of performance.
If back in the days of the 386, around 1991, the speed of the memories was already a limiting factor, imagine the problem today with the processors we now have.
To solve this problem, cache memory began to be used: an ultra-fast type of memory that stores the data most frequently used by the processor, avoiding, most of the time, having to resort to the comparatively slow RAM.
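The policy of keeping the most recently and frequently used data close at hand can be sketched with a tiny least-recently-used (LRU) cache in Python; this is a conceptual model of the idea only, not of the actual hardware or of any specific replacement policy used by a given processor.

```python
from collections import OrderedDict

class TinyLRUCache:
    """Conceptual model of a cache: keep the most recently used items."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, address, load_from_ram):
        if address in self.entries:
            self.entries.move_to_end(address)      # cache hit: fast path
            return self.entries[address]
        value = load_from_ram(address)             # cache miss: slow RAM access
        self.entries[address] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)       # evict the least recently used entry
        return value

cache = TinyLRUCache(capacity=2)
slow_ram = {0x10: "A", 0x20: "B", 0x30: "C"}
for addr in (0x10, 0x20, 0x10, 0x30, 0x10):
    cache.get(addr, slow_ram.__getitem__)
print([hex(a) for a in cache.entries])   # the two most recently used addresses remain
```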
Without the cache memory, the performance of the system would be limited by the speed of the memory, and could drop by up to 95%!
The types of cache memory
Two types of cache memory are used, called primary cache, or L1 cache (level 1), and secondary cache, or L2 cache (level 2). The primary cache memory is embedded in the processor itself and is fast enough to keep up with the processor's speed. Whenever a new processor is developed, a faster type of cache memory must also be developed to accompany it. As this type of memory is extremely expensive (it is hundreds of times more expensive than conventional RAM memory), only a small amount of it is used. To complement it, a slightly slower but much cheaper type of cache memory, called the secondary cache, is also used, which allows a greater quantity to be installed.
How to install cache memory?
First, you must make sure that the motherboard allows the installation of cache memory. Motherboards that allow installation have a socket called COAST where the cache memory module is placed. Generally, you need to change the cache size configuration jumpers; the correct position of the jumpers must be looked up in the motherboard's manual. If after this configuration the PC does not turn on, it means that the cache memory module is faulty or incompatible with the motherboard. In this case, the module must be changed. When everything is working, the cache memory should be enabled in the PC's BIOS.
How to check for cache memory on the PC?
There are several programs for this purpose. One of them is called PC-Config; it is shareware and can be downloaded for free from the internet. In addition to testing the cache, this program provides other important information about the PC, such as the type of memory installed and the type of chipset.
The 2022 Yellowstone flood inundated communities and swiftly eroded the land beneath this cabin that housed park employees. (Courtesy Gina Riquier/Public domain).
By Frances Davenport
Heavy rain and melting snow can be a destructive combination.
In mid-June 2022, storms dumped up to 5 inches of rain over three days in the mountains in and around Yellowstone National Park, rapidly melting snowpack. As the rain and meltwater poured into creeks and then rivers, it became a flood that damaged roads, cabins and utilities and forced more than 10,000 people to evacuate.
The Yellowstone River shattered its previous record and reached its highest water levels recorded since monitoring began almost 100 years ago.
Although floods are a natural occurrence, human-caused climate change is making severe flooding events like this more common. I study how climate change affects hydrology and flooding. In mountainous regions, three effects of climate change in particular are creating higher flood risks: more intense precipitation, shifting snow and rain patterns and the effects of wildfires on the landscape.
Warmer air leads to more intense precipitation
One effect of climate change is that a warmer atmosphere creates more intense precipitation events.
This occurs because warmer air can hold more moisture. The amount of water vapor that the atmosphere can contain increases by about 7% for every 1.8 degrees Fahrenheit (1 degree Celsius) of increase in atmospheric temperature.
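A back-of-the-envelope calculation shows what that scaling implies, since the roughly 7% increase compounds with each additional degree of warming. The short sketch below is illustrative only and simply reuses the rounded 7%-per-degree figure quoted above.

```python
# Approximate Clausius-Clapeyron scaling: ~7% more water vapour per 1 degree C of warming.
RATE_PER_DEG_C = 0.07

def capacity_increase(warming_deg_c: float) -> float:
    """Fractional increase in the atmosphere's water-holding capacity (compounded)."""
    return (1 + RATE_PER_DEG_C) ** warming_deg_c - 1

for dt in (1, 2, 3, 4):
    print(f"+{dt} C  ->  about {capacity_increase(dt) * 100:.0f}% more water vapour")
# +1 C -> ~7%, +2 C -> ~14%, +3 C -> ~23%, +4 C -> ~31%
```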
Research has documented that this increase in extreme precipitation is already occurring, not only in regions like Yellowstone, but around the globe. The fact that the world has experienced multiple record flooding events in recent years – including catastrophic flooding in Australia, Western Europe and China – is not a coincidence. Climate change is making record-breaking extreme precipitation more likely.
The latest assessment report published by the Intergovernmental Panel on Climate Change shows how this pattern will continue in the future as global temperatures continue to rise.
More rain, less snow
In colder areas, especially mountainous or high-latitude regions, climate change affects flooding in additional ways.
In these regions, many of the largest historical floods have been caused by snowmelt. However, with warmer winters due to climate change, less winter precipitation is falling as snow, and more is falling as rain instead.
This shift from snow to rain can have dramatic implications for flooding. While snow typically melts slowly in the late spring or summer, rain creates runoff that flows to rivers more quickly. As a result, research has shown that rain-caused floods can be much larger than snowmelt-only floods, and that the shift from snow to rain increases overall flood risk.
The transition from snow to rain is already occurring, including in places like Yellowstone National Park. Scientists have also found that rain-caused floods are becoming more common. In some locations, the changes in flood risk due to the shift from snow to rain could even be larger than the effect from increased precipitation intensity.
Changing patterns of rain on snow
When rain falls on snow, as happened in the recent flooding in Yellowstone, the combination of rain and snowmelt can lead to especially high runoff and flooding.
In some cases, rain-on-snow events occur while the ground is still partially frozen. Soil that is frozen or already saturated can’t absorb additional water, so even more of the rain and snowmelt run off, contributing directly to flooding. This combination of rain, snowmelt and frozen soils was a primary driver of the Midwest flooding in March 2019 that caused over US$12 billion in damage.
While rain-on-snow events are not a new phenomenon, climate change can shift when and where they occur. Under warmer conditions, rain-on-snow events become more common at high elevations, where they were previously rare. Because of the increases in rainfall intensity and warmer conditions that lead to rapid snowmelt, there is also the possibility of larger rain-on-snow events than these areas have experienced in the past.
In lower-elevation regions, rain-on-snow events may actually become less likely than they have been in the past because of the decrease in snow cover. These areas could still see worsening flood risk, though, because of the increase in heavy downpours.
Compounding effects of wildfire and flooding
Changes in flooding are not happening in isolation. Climate change is also exacerbating wildfires, creating another risk during rainstorms: mudslides.
Burned areas are more susceptible to mudslides and debris flows during extreme rain, both because of the lack of vegetation and changes to the soil caused by the fire. In 2018 in Southern California, heavy rain within the boundary of the 2017 Thomas Fire caused major mudslides that destroyed over 100 homes and led to more than 20 deaths. Fire can change the soil in ways that allow less rain to infiltrate into the soil, so more rain ends up in streams and rivers, leading to worse flood conditions.
With the uptick in wildfires due to climate change, more and more areas are exposed to these risks. This combination of wildfires followed by extreme rain will also become more frequent in a future with more warming.
Global warming is creating complex changes in our environment, and there is a clear picture that it increases flood risk. As the Yellowstone area and other flood-damaged mountain communities rebuild, they will have to find ways to adapt for a riskier future.
Frances Davenport is a Postdoctoral Research Fellow in Atmospheric Science at Colorado State University. She wrote this piece for The Conversation, where it first appeared.
Our stories may be republished online or in print under Creative Commons license CC BY-NC-ND 4.0. We ask that you edit only for style or to shorten, provide proper attribution and link to our web site. Please see our republishing guidelines for use of photos and graphics.
In All You Need Is Biology we often make reference to fossils to explain the past of living beings. But what exactly is a fossil and how is it formed? What is the utility of fossils? Have you ever wondered how science knows the age of a fossil? Read on to find out!
WHAT IS A FOSSIL?
If you think of a fossil, surely the first thing that comes to your mind is a dinosaur bone or a petrified shell that you found in the forest, but a fossil is much more. Fossils are remnants (complete or partial) of living beings that lived in the past (thousands or millions of years ago), or traces of their activity, generally preserved in sedimentary rocks. So, there are different types of fossils:
- Petrified and permineralized fossils: those corresponding to the classical definition of a fossil, in which organic or hollow parts are replaced with minerals (see next section). Their formation can leave internal or external molds in which the original material may disappear.
- Ichnofossils (trace fossils): traces of the activity of a living being that are recorded in the rock and give information about the behavior of the species. They may be changes in the environment (nests and other structures), traces (footprints), droppings (coprolites), eggs… and other marks such as scratches and bites.
- Amber: fossilized resin more than 20 million years old. The intermediate state is called copal (less than 20 million years old). Before becoming amber, the resin can trap insects, arachnids, pollen… in which case it is considered a double fossil.
- Chemical fossils: fossil fuels like oil and coal, which are formed by the accumulation of organic matter at high pressures and temperatures along with the action of anaerobic bacteria (bacteria that don't use oxygen for metabolism).
- Subfossil: when the fossilization process is not complete, the remains are known as subfossils. They are no more than about 11,000 years old. This is the case of our recent ancestors (Chalcolithic).
- Living fossils: the name given to living organisms that are very similar to extinct species. The most famous case is the coelacanth, which was believed to have been extinct for 65 million years until it was rediscovered in 1938, but there are other examples such as the nautilus.
- Pseudofossils: rock formations that look like the remains of living beings but are in reality formed by geological processes. The best-known case is pyrolusite dendrites, which look like plants.
Obviously fossils became more common after the appearance of hard parts (shells, teeth, bones…), 543 million years ago (the Cambrian Explosion). The fossil record prior to this period is very scarce. The oldest known fossils are stromatolites, rocks that are still formed today by the precipitation of calcium carbonate due to the activity of photosynthetic bacteria.
The science of fossils is Paleontology.
HOW IS A FOSSIL FORMED?
Fossilization can occur in five ways:
- Petrifaction: the replacement of organic material by minerals in the remains of a buried living being. An exact copy of the body is obtained in stone. The first step of petrifaction is permineralization: the pores of the body are filled with mineral but the organic tissue is unchanged. It is the most common way bones fossilize.
- Freezing: the body becomes embedded in ice and does not undergo transformation.
- Compression: the dead body lies on a soft layer of soil, such as clay, and is covered by layers of sediment.
- Inclusion: organisms trapped in amber or petroleum.
- Impression: organisms leave impressions in the mud and the trace is preserved when the clay hardens.
UTILITY OF FOSSILS
- Fossils give us information on how living things were in the past, providing evidence of biological evolution and helping to establish the lineages of today's living things.
- They allow the analysis of cyclical phenomena such as climate change, atmosphere-ocean dynamics and even orbital perturbations of the planets.
- Those of a known age can be used to date the rocks in which they are found (guide fossils).
- They give information about geological processes such as the movement of the continents, the presence of ancient oceans, the formation of mountains…
- Chemical fossils are our main source of energy.
- They give climate information about the past, for example through the study of growth rings in fossilized trunks or the deposition of organic matter in glacial varves.
To determine the age of fossils there are indirect methods (relative dating) and direct methods (absolute dating). As there is no perfect method and accuracy decreases with age, sites are often dated with more than one technique.
Fossils are dated according to the context in which they are found: whether they are associated with other fossils (guide fossils) or objects of known age, and which stratum they are found in.
In geology, strata are the different levels of rock, ordered by depth: according to stratigraphy, the oldest are found at greater depths, while the more modern ones are more superficial, since their sediments have had less time to be deposited on the substrate. Obviously, if there have been geological disturbances, dating based on this method alone would be wrong.
The direct (absolute dating) methods are more accurate and are based on the physical characteristics of matter.
They are based on the rate of decay of radioactive isotopes in rocks and fossils. Isotopes are atoms of the same element with different numbers of neutrons in their nuclei. Radioactive isotopes are unstable, so they are transformed into more stable ones at a rate known to scientists, emitting radiation. By comparing the amount of unstable isotopes to stable ones in a sample, scientists can estimate the time that has elapsed since the fossil or rock formed.
- Radiocarbon (Carbon-14): in living organisms, the ratio between C12 and C14 is constant, but when they die this ratio changes: the uptake of C14 stops and it decays with a half-life of 5,730 years. Knowing the C14/C12 ratio of the sample, we can date when the organism died (see the sketch after this list). The upper limit of this method is about 60,000 years, so it only applies to recent fossils.
- Aluminium-26/Beryllium-10: it has the same application as C14, but with a much longer decay period, allowing dating of up to 10 million years, and even up to 15 million years.
- Potassium-Argon (40K/40Ar): used to date rocks and volcanic ash older than 10,000 years. This was the method used to date the Laetoli footprints, the first traces of bipedalism in our lineage, left by Australopithecus afarensis.
- Uranium series (Uranium-Thorium): various techniques using uranium isotopes. They are used on mineral deposits in caves (speleothems) and on calcium carbonate materials (such as corals).
- Calcium-41: allows the dating of bones in a time interval from 50,000 to 1,000,000 years.
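As a rough numerical illustration of the radiocarbon method in the list above (simplified: it ignores calibration curves and variations in atmospheric C14), the sketch below converts the fraction of C14 remaining in a sample into an age using the 5,730-year half-life.

```python
import math

HALF_LIFE_C14 = 5730.0  # years

def radiocarbon_age(remaining_fraction: float) -> float:
    """Uncalibrated age in years from the fraction of the original C14 still present."""
    return -HALF_LIFE_C14 * math.log(remaining_fraction) / math.log(2)

# Half the C14 left means one half-life has passed; 0.1% left is roughly 57,000 years,
# close to the ~60,000-year practical limit mentioned above.
for fraction in (0.5, 0.25, 0.05, 0.001):
    print(f"{fraction:.3f} of C14 left  ->  about {radiocarbon_age(fraction):,.0f} years")
```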
The magnetic north pole has changed throughout the history of Earth and its geographical coordinates are known in different geological eras.
Some minerals, for example clays, have magnetic properties and orient themselves towards the magnetic north pole when in aqueous suspension. But when they settle on the ground, they are fixed in the position the magnetic north pole occupied at that time. If we look at the coordinates towards which such minerals are oriented at a site, we can associate them with a particular time.
This dating is used on clay remains and, as the magnetic north pole has been at the same geographical coordinates several times, you get more than one date. Depending on the context of the site, some dates can be discarded to reach a final dating.
THERMOLUMINESCENCE DATING AND OPTICALLY STIMULATED LUMINESCENCE (OSL)
Certain minerals (quartz, feldspar, calcite…) accumulate changes in their crystal structure due to radioactive decay in the environment. These changes are cumulative, continuous and dependent on the time of exposure to radiation. When subjected to an external stimulus, the mineral emits light because of these changes. This luminescence is weak and differs depending on whether heat (TL), visible light (OSL) or infrared (IRSL) is applied.
Only samples that were protected from sunlight and from heating above 500 °C can be dated; otherwise the "clock" is reset as the energy is released naturally.
ELECTRON PARAMAGNETIC RESONANCE (ESR)
ESR (electron spin resonance) involves irradiating the sample and measuring the energy it absorbs, which depends on the amount of natural radiation to which it has been subjected during its history. It is a complex method about which you can find more information here.
- Cover photo by Mireia Querol (Tarbosaurus)
Students engage with materials developed as part of a partnership between the Smithsonian American Art Museum and the National Endowment for the Humanities to analyze the photographs captured during the original survey projects of the 1970s and create their own interpretations of places near and far to them.
Using primary sources and an inquiry-based approach, students will research a civil rights movement and then share their findings in a small group, with the goal of learning about the complexities of civil rights activism that has shaped our efforts to form a more perfect union.
"Veterans Speak: War, Trauma, and the Humanities" is the culmination of Governors State University's 2017 NEH Dialogues on the Experience of War project. This collection of clips from a discussion with scholars and Veterans is moderated by Kevin Smith, Director of Veterans Affairs at Governors State University.
Crafting Freedom is a comprehensive NEH-funded resource on the African American experience during the early 19th century. The companion site includes short, classroom ready videos of reenactments based on primary sources and standards aligned lesson plans for grades 3-5 and 6-8 in social studies, language arts, and other humanities subjects.
Bringing in primary sources, such as oral histories, to supplement the textbook is essential; oral histories are a particularly valuable tool for cultivating historical empathy and nurturing a sense of caring among students.
This video of Elizabeth Alexander reading the poem “Praise Song for the Day” that she composed for President Barack Obama’s 2009 inauguration ceremony is the seventh in the “Incredible Bridges: Poets Creating Community” series. The companion lesson contains a sequence of activities for use with secondary students before, during, and after reading and listening to the poem.
Did you realize that the humanities (understood as the study and interpretation of languages, history, literature, jurisprudence, philosophy, comparative religion, history of art, and culture, along with the fine and performing arts) are considered worthy of support by two federal agencies?
Mission US is a multimedia project that immerses players in U.S. history content through free interactive games.
In Mission 2: “Flight to Freedom,” players take on the role of Lucy, a 14-year-old slave in Kentucky. As they navigate her escape and journey to Ohio, they discover that life in the “free” North is dangerous and difficult. In 1850, the Fugitive Slave Act brings disaster. Will Lucy ever truly be free?
Rain refers to water droplets that fall to earth under the pull of gravity. The droplets form when atmospheric water vapour condenses into water. Rain is essential for human, plant, and animal life; for instance, rainwater irrigates crops and drives hydroelectric power. Torrential rain refers to a heavy downpour of rain. There is no formal definition of it other than the one provided by the National Weather Service (NWS), which defines torrential rain as rain that accumulates at a rate of three tenths of an inch or more per hour. Several idioms also convey the idea of heavy rainfall, including "raining cats and dogs" and "raining pitchforks."
What Causes Torrential Rain?
Moisture moving along weather fronts is the major cause of torrential rain. Convective clouds produce precipitation when enough moisture is carried upward. Narrow torrential rainbands form from cumulonimbus clouds. In mountainous regions, torrential rain tends to fall on one side of the mountain: most of the moist air condenses and falls as heavy rain on the windward side, while dry air descends and blows down the other side. The urban heat island effect can also trigger torrential rain. Research suggests that torrential rain on other planets can consist of substances such as iron, methane, sulphuric acid, and even neon, as well as water.
Formation of Torrential Rain
The atmosphere always contains some amount of water vapour. Relative humidity describes the amount of moisture in the air as a percentage of the maximum amount of water vapour the air can hold at a given temperature. When the air becomes saturated with water vapour, clouds form; they are suspended in the air and are visible from the surface of the earth. Cool air saturates at a lower water-vapour content than warm air. The main mechanisms by which air cools to its dew point are radiational cooling, adiabatic cooling, evaporative cooling, and conductive cooling. Convection, or physical barriers such as a hill, cause air to rise, and condensation then turns the water vapour into clouds. The type of cloud formed depends on the amount of condensation that occurs; in the case of torrential rain, dark cumulonimbus clouds form.
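As a small numerical sketch of the saturation idea, the code below estimates relative humidity from the air temperature and the dew point using the Magnus approximation for saturation vapour pressure; the formula and the example numbers are standard textbook approximations rather than anything taken from this article. When the air cools to its dew point, relative humidity reaches 100% and condensation into cloud droplets can begin.

```python
import math

def saturation_vapour_pressure_hpa(temp_c: float) -> float:
    """Magnus approximation for saturation vapour pressure over water, in hPa."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def relative_humidity(temp_c: float, dew_point_c: float) -> float:
    """Relative humidity (%) = actual vapour pressure / saturation vapour pressure."""
    return 100 * saturation_vapour_pressure_hpa(dew_point_c) / saturation_vapour_pressure_hpa(temp_c)

print(round(relative_humidity(30, 20)))  # warm air with some moisture: roughly 55%
print(round(relative_humidity(20, 20)))  # air cooled to its dew point: 100%, cloud can form
```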
Coalescence and Fragmentation
During coalescence, individual water droplets fuse to form larger droplets. Small droplets remain suspended in the cloud because of air resistance. Larger droplets are produced when air turbulence causes droplets to collide, and coalescence continues as these larger droplets fall. Eventually the drops become heavy enough to overcome air resistance, making the rain continuous. The process of coalescence is temperature dependent: the temperature difference between the earth's surface and the clouds makes ice crystals melt as they fall, so the precipitation reaches the ground as torrential rain.
GCSE: H.G. Wells
The novel The Time Machine is centred on the events which take place when a man of science-whose name is not given- journeys forward into the future
However, new ideology was emerging at this time, thanks to revolutionary new ideas from people such as Karl Marx, who founded the Marxist school of thought. This was based upon the idea that the lower and working classes were being exploited and alienated by their social superiors. While the Time Traveller is in the future, he uses his knowledge of the 1900s to evaluate what he sees of the people that he meets. This knowledge is based on the theory of evolution, an idea which was presented by Charles Darwin not long before the book's publication.
- Word count: 1669
This war has taught us pity - pity for those witless souls that suffer our domination What does the War of the Worlds tell us about human nature?
We are with the narrator as he learns and we learn from him. Wells puts a man that could well be you or I in an extreme situation to exemplify the problems mankind could face and its weaknesses. The narrator recounts the events with the benefit of hindsight, "It is curious to recall some of the mental habits of those departed days", and is surprisingly objective in his account. He details how men, "went to and fro over this globe about their little affairs, serene in their assurance of their empire over matter".
- Word count: 2930
Therefore the time machine is an illustration of the Victorian era. Wells was also influenced by Darwin's theory of evolution as in his novel it is an example of how the world around him would be if the human race divided into two new species. Morlocks were the examples of the working classes, they lived underground and maintained machines, whilst the Eloi are examples of the educated classes; they live above ground and indulge in leisure activities like the idle rich of Victorian England. During the Victorian era there were two notable classes, the "Upper Class" and the "Working Class".
- Word count: 2232
He ignores whatever rumour he may hear about this room, and he dismisses the fact of a ghost trying to scare him. Although the complete opposite happens; the ghost attacks the narrator because he's the person that is fearless and that must be stopped. The build up to reaching 'The Red Room' and the narrator getting attacked creates suspense throughout the story, which reflects on the question to show it's a typical ghost story. There are three more characters in 'The Red Room' too.
- Word count: 1522
Half roasted to death! Trying to escape!" (pg.15) This shows the way that the people immediately assume that it is manmade and could not be anything of alien origin. The arrogance of mankind is further shown many times during the course of the novel. One of the major themes in this novel is the possible submission of mankind. This is obviously shown by the Martian taking men from their homes to use for injection. There is a sense of helplessness as all attempts to resist the Martians fail.
- Word count: 2433
Yet technology brought a dark side as well. Writers were starting to use sci-fi more. On a more positive note, the nineteenth century was the period when modern science developed for the first time. However, it was also a start of new concepts one of them was classes; it affected everyone and included everyone. Herbert George Wells (1866-1946), English author and political philosopher, most famous for his science fiction romances that variously depict alien invasion, terrifying future societies, and transformed states of being; Author of ''the Time Machine''. H.G. Wells was very much a free thinker, although born into 'Victorian society' he rebelled against many of the accepted norms and values of that society.
- Word count: 2215
Firstly I'll be reviewing and commenting on The Red Room; The title "Red Room" immediately attracts the reader's attention; it is symbolic but leaves unanswered questions like, "What is the Red Room? Why is it red?" In my own opinion I think that red is also associated with fear and danger. Overall, the title raises so much curiosity that it has an overwhelming effect, and wanting the reader to read on; and to find out the answers to their questions.
- Word count: 1420
Also the actual settings of the future include 'bare hillsides' and 'shrubs and long grass' which gives it rural scenery which is the opposite of the expectation of more progress in development in buildings and an urban landscape. One of the newly modified beings that the time traveller encounters in the future are called the Eloi - who are initially believed to be the dominant descendants of the upper class. Wells describes their physical appearance as 'Dresden china type of prettiness', page 29.
- Word count: 2147
This makes us think that this is something to do with the Red Room and we think about what might happen to the narrator if he goes in the Red Room. This makes the story more mysterious and sets the scene for a gothic story. It could show the narrator's view of the old people at this point. The 'old woman' tries to force the narrator in not going to the room and she repeats 'This night of all nights!'.
- Word count: 3416
The Red Room How successfully did HG Wells create an atmosphere of mystery and suspense in his story?
Because of the narrator's commitment to being rational and clear-headed, he looks down upon anything that seems superstitious or fantastic. This disdain comes across in his dismissal of the "fanciful suggestion" of the room. Or the old people, who he says are prey to "fashions born in dead brains". In spite of his claims to being rational, a nervousness and sense of foreboding does creep into the narrator's tone as the story progresses. We see this first in the unease and mysterious suggestiveness of some of his descriptions, as when he says the shadows in the red room make "that odd suggestion of a lurking, living thing".
- Word count: 1544
The UK is well known as a 'multicultural country' because there are a variety of different cultures. In the Victorian era there were different genres of books, including romance, comedy and fantasy; H.G. Wells differed from these because he wrote sci-fi books, and he was known as 'The man who invented tomorrow' and is well remembered. Jules Verne wrote stories about space travel which led Victorian readers to wonder about other planets and whether there might be other creatures, like aliens, living on them. Science fiction authors were middle class; publishers were suspicious of sci-fi because it challenged God and society's order, and people thought it was dangerous, like the big bang theory, which challenges the existence of God.
- Word count: 2151
He suggested we were to continue. As the carriage bumped along the cobbly road which was in tandem to the windy river to my right. As we made the mysteriously quiet journey up the path with the only noise coming from the trees creaking and murmuring to each other. The weather seemed to change instantaneously, from being a bright crisp day to a dreary windswept one. I shivered, a chill went up my spine as I pulled my dress coat over my shoulder and did up the buttons. I wondered what it was that made the weather so melancholy and dejected so quickly, but then I realised.
- Word count: 587
In 1996 'Independence Day' was filmed using Wells' ideas. Then in 2005, Steven Spielberg made the film 'The War Of The Worlds' starring Tom Cruise. Its release was delayed because 9/11 had made people so fearful that terrorism would overthrow our way of life. For this to happen the book must be very popular. When the novel was written in 1898, Britain ruled over 25% of the world. The British people were used to invading different countries and winning battles and wars. In the novel London is being overpowered by the Martians, who landed in Horsell Common from Mars.
- Word count: 559
Maybe the old women means sorrow for the young duke who had to die. She might have also meant that so many people have tried to come out of 'the red room' alive and abolish the myth of the room being haunted, but much sorrow is felt each time when they don't make it out alive. A sense of suspicion is built-up by the old folks in the castle, for the boy suspects them of enhancing the 'spiritual terrors' of the house by using their repetitive insistence.
- Word count: 2346
These ideas all originated from the Book of War of the Worlds. The use of various settings have interested the reader a lot more as it includes realism into the book. The novel uses a number of settings in and around Surrey and London. "At about three o'clock there began a thud of a gun at measured intervals from Chertsey or Addlestone." This quote expresses the fact that it might actually be happening which makes it very realistic. H.G. Wells was very interested in science and by using real places to increase the accuracy of the writing that makes it very scientific.
- Word count: 967
The Red Room is a short story written by H.G. Wells in 1896. It is written in the 1st person to make us feel and think like the narrator, as he doesn't know everything.
To attract the reader and keep their attention throughout, fear is essential to the story. In order to keep it interesting, tension is built through the text. To construct the suspense, the story has anticlimaxes along the way and the climax is revealed towards the end. TRR seems like a supernatural story due to its elements and the language used by the author, but there is no ghost in the whole story. It is filled with clichés, as a typical ghost story is, and makes the reader think that it is supernatural. TRR takes place in a creepy Lorraine castle, which has been abandoned for quite a while and is said to be haunted.
- Word count: 1086
What is the effect of the juxtaposition of the ordinary and the extra-ordinary in the War of the Worlds?
In the first paragraph of that chapter, the narrator gives us an account of how the star was 'rushing', indicating purpose, over Winchester. If we are to believe that it is just a falling star then it would not make sense for it to have a sense of purpose in its movement. This leads us to think that it is actually something more and that the humans portrayed in the book are rather ignorant if they think that it isn't.
- Word count: 2508
To take the question from all angles, you have to look at everything he does or has done and how he feels for example, political and religious views, his emotions, attitude and past. It is important to also explore the context of the time in which the Time Traveller and the author lived. The religion at the time and the scientific knowledge support both sides of the argument, for example a Victorian Everyman would have been bonded to religion but would have promoted science on the basis of furthering knowledge.
- Word count: 3086
The story the man who could work miracles by H.G. Wells is a powerful warning about the impact that humans can have on their environment. Discuss the methods used by H.G. Wells to convey this message
Things were beginning to change, they were slightly limited but it appeared human's were taking more control. Populations of towns and cities increased, laws changed, children had to go to school meaning more people were now educated. Machines being invented, like the plane or the Hoover, interested a lot of people and were considered as being miracles, in a different sense to what Wells suggest. I think Wells reflected on changes and wrote about them in more detail, such as in this story, humans using this control but getting carried away, and being the dominant species, beginning to control their environment.
- Word count: 1463
Compare how the authors of The red Room(TM) and The Signalman(TM) create a sense of tension in their texts
This is one similarity between the two stories which creates an edge of mystery contributing to tension. 'The Red Room' is set in a deserted castle that is dark and isolated (Lorraine Castle) with deformed characters who are 'grotesque'. The setting in 'The Signalman' is in a dark, lonely, damp location, in a steep, foreboding cutting and Dickens makes sinister descriptions here of the 'solitary' and 'dismal' post of the signalman which immediately creates tension due to the vulnerable image created of the signalman's daily work life .
- Word count: 1103
Light and colours used and other senses that the story plays on will also be examined. When the young man in The Red Room asks to go to the Red Room he is told, 'You go along the passage a bit... through that is a spiral staircase... down the corridor at the end, and the red room is on your left up the stairs.' The fact that the instructions to get to the room are so complex, and very lengthy, shows that the room is very far away.
- Word count: 2025
Analysis and comparison of two gothic short stories: The Monkey(TM)s Paw(TM) by W.W. Jacobs and The Red Room(TM) by H.G Wells
The creation of tension is achieved with the use of typical features, for instance setting or use of characters. When using the setting to create tension, writers often set the main location in an isolated area. This is apparent in both stories: 'The Monkey's Paw' is set in a cut-off house during a storm and 'The Red Room' is set in a remote castle. The setting is established straight away in 'The Monkey's Paw', for instance: "without the night was cold and wet". Jacobs uses pathetic fallacy to illustrate to the reader what the story will be like and already creates tension.
- Word count: 1000
Examine the ways in which HG Wells creates atmosphere in The War of the Worlds by close reference to key episodes.
He is most famous for creating a dramatic effect of horror which he does consistently through War of the Worlds in places where the reader feels as though it is a real situation. There are many different techniques needed which HG Wells uses in his novel such as lots of adjectives and adverbs, alliteration, repetition and onomatopoeia. In the beginning extract, HG Wells shows the reader the horror, alarm and revulsion which is being displayed by the characters in the scene.
- Word count: 3001
However they make him feel a little uncomfortable because of their age, unattractiveness and belief in supernatural beings. "The Red Room" and "The Signalman" are of the same genre. They are both ghost stories. Furthermore, no one is named in either story, which adds to the tension and suspense of the story. This is a similarity between the two stories. They are both set at night, which is typical of a gothic, horror genre. They both effectively build up tension and suspense.
- Word count: 2601
H.G. Wells describes the room with this sentence: "large sombre room, with its shadowy window bays". This helps the reader imagine the room in their own way. I believe the two most powerful words in the opening paragraph are "sombre" and "shadowy". These two words have a big impact on the reader; they give the feeling of a dark, gothic-style room. "Sombre" means dark and gloomy, which gives the reader the image that there are either shadows in the room or only a faint light source. This creates an effect of darkness: people are not scared of darkness itself, but they are frightened of what could be lurking in the dark.
- Word count: 2343
Another possibility is that there was a slowing of thermohaline circulation. The circulation could have been interrupted by the introduction of a large amount of fresh water into the North Atlantic, possibly caused by a period of warming before the Little Ice Age known as the Medieval Warm Period. There is some concern that a shutdown of thermohaline circulation could happen again as a result of the present warming period.
The Little Ice Age (LIA) was a period of cooling that occurred after the Medieval Warm Period (Medieval Climate Optimum). While not a true ice age, the term was introduced into the scientific literature by François E. Matthes in 1939. It is conventionally defined as a period extending from the 16th to the 19th centuries, though climatologists and historians working with local records no longer expect to agree on either the start or end dates of this period, which varied according to local conditions. NASA defines the term as a cold period between 1550 AD and 1850 AD and notes three particularly cold intervals: one beginning about 1650, another about 1770, and the last in 1850, each separated by intervals of slight warming. The Intergovernmental Panel on Climate Change (IPCC) describes areas affected by the LIA:
There is still a very poor understanding of the correlation between low sunspot activity and cooling temperatures. During the period 1645–1715, in the middle of the Little Ice Age, there was a period of low solar activity known as the Maunder Minimum. The Spörer Minimum has also been identified with a significant cooling period between 1460 and 1550. Other indicators of low solar activity during this period are levels of the isotopes carbon-14 and beryllium-10.
Throughout the Little Ice Age, the world experienced heightened volcanic activity. When a volcano erupts, its ash reaches high into the atmosphere and can spread to cover the whole earth. This ash cloud blocks out some of the incoming solar radiation, leading to worldwide cooling that can last up to two years after an eruption. Also emitted by eruptions is sulfur in the form of SO2 gas. When this gas reaches the stratosphere, it turns into sulfuric acid particles, which reflect the sun's rays, further reducing the amount of radiation reaching Earth's surface. The 1815 eruption of Tambora in Indonesia blanketed the atmosphere with ash; the following year, 1816, came to be known as the Year Without a Summer, when frost and snow were reported in June and July in both New England and Northern Europe. Other volcanoes that erupted during the era and may have contributed to the cooling include Billy Mitchell (ca. 1580), Mount Parker (1641), Long Island (Papua New Guinea) (ca. 1660), and Huaynaputina (1600).
Was a change in thermohaline circulation responsible for the Little Ice Age?
Originally posted by Vitchilo
reply to post by CaticusMaximus
it could be enough to displace enough oxygen so that most air breathing creatures die,
Come on. That is NOT possible.
Anyway, some humans would survive... people in closed facilities...
"Between AD 1200 to 1300, we see a decrease in stomata and a sharp rise in atmospheric carbon dioxide, due to deforestation we think," says Dr van Hoof, whose findings are published in the journal Palaeogeography, Palaeoclimatology, Palaeoecology.
But after AD 1350, the team found the pattern reversed, suggesting that atmospheric carbon dioxide fell, perhaps due to reforestation following the plague.
The researchers think that this drop in carbon dioxide levels could help to explain a cooling in the climate over the following centuries.
The Black Death is estimated to have killed 30–60 percent of Europe's population, reducing world population from an estimated 450 million to between 350 and 375 million in the 14th century. The aftermath of the plague created a series of religious, social and economic upheavals, which had profound effects on the course of European history. It took 150 years for Europe's population to recover. The plague returned at various times, killing more people, until it left Europe in the 19th century.
The Gulf Stream, so named by Benjamin Franklin in 1762, is a mighty river in the ocean that flows from the Gulf of Mexico around the southern tip of the Florida peninsula and along the East Coast, bending off to the northeast when it reaches the vicinity of Cape Hatteras. It continues on to Iceland,the British Isles, and Norway. In the Straits of Florida the Gulf Stream is about forty miles wide and flows at a speed of about five miles an hour. As it progresses into the North Atlantic, it expands to several hundred miles in width and slows to about three miles an hour.
The first European to recognize the Gulf Stream as a discrete oceanic current was Ponce de Leon, following his landing in 1513 at what is now St. Augustine, Florida, in search of the Fountain of Youth. When he tried to return to Puerto Rico he found the current to be more powerful than the fair wind before which he was sailing, and his ships were driven to the north instead of southward, his intended course.
Originally posted by Melbourne_Militia
reply to post by Erno86
I've said it before and I'll say it again... the main reason for the ice melting in northern latitudes is NOT global warming... the world is actually cooling.
What melts the ice is soot from the pollution that our civilisations spew into the air. Soot settles on the ice and is much more heat-absorbing than carbon dioxide or anything else the propaganda machines try to tell you about.
This physical proof of ice melting is what these propagandists use to justify their cry that the world is warming - "surely it must be warming if the ice is melting" - which could not be further from the truth.
Yes, ice is melting; no, the world is not warming... soot is the cause... do your research, there have been great scientific write-ups on this. The mainstream media will not discuss this as it goes against the Green Agenda being pounded into everyone's head.
Originally posted by Fractured.Facade
reply to post by Melbourne_Militia
If that is all true, what caused the extreme ice melt 55 million years ago, along with a period of catastrophic global warming??
Did we have pollution issues then?
Previous research into this period, called the Palaeocene-Eocene Thermal Maximum, or PETM, estimates the planet's surface temperature blasted upwards by between five and nine degrees Celsius (nine and 16.2 degrees Fahrenheit) in just a few thousand years.
The Arctic Ocean warmed to 23 C (73 F), or about the temperature of a lukewarm bath.
How PETM happened is unclear but climatologists are eager to find out, as this could shed light on aspects of global warming today.
What seems clear is that a huge amount of heat-trapping "greenhouse" gases -- natural, as opposed to man-made -- were disgorged in a very short time.
In 2050, they and their 19-year-old daughter Molly move to New York City by car, passing desperate Texans begging for rides north. One pulls a gun on Molly, but fortunately others in the car/truck convoy point automatic weapons at the desperate man, who backs down. While the others in the convoy make it to Canada, New York City is a marvel of clean power, clean transit, and community gardening. Josh sets to work building a flood barrier to hold back the ocean, but the CO2 warming unleashes trapped methane in the Arctic, which causes even faster, non-linear warming. An effort to use sulfur dioxide as a last resort to cool the planet is called off when it is found to destroy the ozone layer. Lucy finds and helps quarantine and neutralize a strange new disease, and Molly moves upstate to an agricultural community. During a storm at high tide in 2075, Josh is killed trying to fix a stuck gate, and New York City is flooded. Lucy refuses Molly's offer to live with her, her husband and son. Starving people among the rotting flood damage set the stage for the return of the disease Lucy saw, now called "Caspian Fever."
Originally posted by Fractured.Facade
reply to post by poet1b
There have been much warmer periods on this planet... Methane is a greenhouse gas, so maybe these "eruptions" and global warming are much more related than CO2 emissions from mankind.
Could be at the beginning, middle or end of a millions of years long cycle.
The earth has been warm enough in its distant past to have had the ocean levels 100ft+ higher than now.
How hot can it get?
Does anyone really want the answer to that?
What is Spectroscopy?
Spectroscopy pertains to the dispersion of an object's light into its component colors (i.e. energies). By performing this dissection and analysis of an object's light, astronomers can infer the physical properties of that object (such as temperature, mass, luminosity and composition).
But before we hurtle headlong into the wild and woolly field of spectroscopy, we need to try to answer some seemingly simple questions, such as what is light? And how does it behave? These questions may seem simple to you, but they have presented some of the most difficult conceptual challenges in the long history of physics. It has only been in this century, with the creation of quantum mechanics that we have gained a quantitative understanding of how light and atoms work. You see, the questions we pose are not always easy, but to understand and solve them will unlock a new way of looking at our Universe.
To understand the processes in astronomy that generate light, we must realize first that light acts like a wave. Light has particle-like properties too, so it's actually quite a twisted beast (which is why it took so many years to figure out). But right now, let's just explore light as a wave.
Picture yourself wading around on an ocean beach for a moment, and watch the many water waves sweeping past you. Waves are disturbances, ripples on the water, and they possess a certain height (amplitude), with a certain number of waves rushing past you every minute (the frequency) and all moving at a characteristic speed across the water (the wave speed). Notice the distance between successive waves? That's called the wavelength.
Keeping this analogy in mind, let's leave the ocean beach for a while and think about light like a wave. The wave speed of a light wave is simply the speed of light, and different wavelengths of light manifest themselves as different colors! The energy of a light wave is inversely-proportional to its wavelength; in other words, low-energy waves have long wavelengths, and high-energy light waves have short wavelengths.
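Because energy and wavelength are tied together by E = hc/λ, a short calculation makes the scale of the electromagnetic spectrum concrete. The sketch below computes approximate photon energies in electron-volts for a few representative wavelengths; the example wavelengths are just illustrative picks from each band.

```python
PLANCK_H = 6.626e-34      # J*s
LIGHT_SPEED = 2.998e8     # m/s
EV_PER_JOULE = 1 / 1.602e-19

def photon_energy_ev(wavelength_m: float) -> float:
    """Photon energy in eV: E = h*c / wavelength (longer wavelength -> lower energy)."""
    return PLANCK_H * LIGHT_SPEED / wavelength_m * EV_PER_JOULE

examples = {
    "FM radio (3 m)":        3.0,
    "visible red (700 nm)":  700e-9,
    "visible blue (400 nm)": 400e-9,
    "X-ray (1 nm)":          1e-9,
}
for name, wavelength in examples.items():
    print(f"{name:24s} {photon_energy_ev(wavelength):.3g} eV")
```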
Physicists classify light waves by their energies (wavelengths). Labeled in increasing energy, we might draw the entire electromagnetic spectrum as shown in the figure below:
The Electromagnetic Spectrum. Notice how small the visible region of the spectrum is, compared to the entire range of wavelengths.
Notice that radio, TV, and microwave signals are all light waves, they simply lie at wavelengths (energies) that your eye doesn't respond to. On the other end of the scale, beware the high energy UV, x-ray, and gamma-ray photons! Each one carries a lot of energy compared to their visible- and radio-wave brethren. They're the reasons you should wear sunblock, for example.
When we look at the Universe in a different "light", i.e. at "non-visible" wavelengths, we probe different kinds of physical conditions -- and we can see new kinds of objects! For example, high-energy gamma-ray and X-ray telescopes tend to see the most energetic dynamos in the cosmos, such as active galaxies, the remnants from massive dying stars, accretion of matter around black holes, and so forth. Visible light telescopes best probe light produced by stars. Longer-wavelength telescopes best probe dark, cool, obscured structures in the Universe: dusty star-forming regions, dark cold molecular clouds, the primordial radiation emitted by the formation of the Universe shortly after the Big Bang. Only through studying astronomical objects at many different wavelengths are astronomers able to piece together a coherent, comprehensive picture of how the Universe works!
Typically one can observe two distinctive classes of spectra: continuous and discrete. For a continuous spectrum, the light is composed of a wide, continuous range of colors (energies). With discrete spectra, one sees only bright or dark lines at very distinct and sharply-defined colors (energies). As we'll discover shortly, discrete spectra with bright lines are called emission spectra, while those with dark lines are termed absorption spectra.
Continuous spectra arise from dense gases or solid objects which radiate their heat away through the production of light. Such objects emit light over a broad range of wavelengths, thus the apparent spectrum seems smooth and continuous. Stars emit light in a predominantly (but not completely!) continuous spectrum. Other examples of such objects are incandescent light bulbs, electric cooking stove burners, flames, cooling fire embers and... you. Yes, you, right this minute, are emitting a continuous spectrum -- but the light waves you're emitting are not visible -- they lie at infrared wavelengths (i.e. lower energies, and longer wavelengths than even red light). If you had infrared-sensitive eyes, you could see people by the continuous radiation they emit!
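That last claim follows from Wien's displacement law, which relates the temperature of a thermal emitter to the wavelength at which its continuous spectrum peaks. The sketch below uses the standard constant b ≈ 2.898×10⁻³ m·K to compare the Sun's surface with a human body; it is a rough illustration, not part of the original text.

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_microns(temperature_k: float) -> float:
    """Wavelength (in microns) at which a thermal emitter's continuous spectrum peaks."""
    return WIEN_B / temperature_k * 1e6

print(f"Sun   (~5800 K): peak near {peak_wavelength_microns(5800):.2f} microns (visible light)")
print(f"Human (~310 K):  peak near {peak_wavelength_microns(310):.1f} microns (far infrared)")
```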
Discrete spectra are the observable result of the physics of atoms. There are two types of discrete spectra: emission (bright-line) spectra and absorption (dark-line) spectra. Let's try to understand where these two types of discrete spectra come from.
Emission Line Spectra
Unlike a continuous spectrum source, which can have any energy it wants (all you have to do is change the temperature), the electron clouds surrounding the nuclei of atoms can have only very specific energies dictated by quantum mechanics. Each element on the periodic table has its own set of possible energy levels, and with few exceptions the levels are distinct.
Atoms will also tend to settle to the lowest energy level (in spectroscopist's lingo, this is called the ground state). This means that an excited atom in a higher energy level must `dump' some energy. The way an atom `dumps' that energy is by emitting a wave of light with that exact energy.
In the diagram below, a hydrogen atom drops from the 2nd energy level to the 1st, giving off a wave of light with an energy equal to the difference of energy between levels 2 and 1. This energy corresponds to a specific color, or wavelength of light -- and thus we see a bright line at that exact wavelength! ...an emission spectrum is born, as shown below:
An excited Hydrogen atom relaxes from level 2 to level 1, yielding a photon. This results in a bright emission line.
Tiny changes of energy in an atom generate photons with small energies and long wavelengths, such as radio waves! Similarly, large changes of energy in an atom will mean that high-energy, short-wavelength photons (UV, x-ray, gamma-rays) are emitted.
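For the hydrogen example in the figure above, the level energies are well approximated by the Bohr formula E_n = −13.6 eV / n², so the wavelength of the emitted photon follows directly from the difference between levels. The sketch below computes it for the 2 → 1 transition shown, and for 3 → 2 for comparison.

```python
PLANCK_H_EV = 4.136e-15   # Planck constant in eV*s
LIGHT_SPEED = 2.998e8     # m/s

def hydrogen_level_ev(n: int) -> float:
    """Energy of hydrogen level n (Bohr model): E_n = -13.6 eV / n^2."""
    return -13.6 / n**2

def emitted_wavelength_nm(n_upper: int, n_lower: int) -> float:
    """Wavelength of the photon emitted when the atom drops from n_upper to n_lower."""
    energy_ev = hydrogen_level_ev(n_upper) - hydrogen_level_ev(n_lower)
    return PLANCK_H_EV * LIGHT_SPEED / energy_ev * 1e9

print(f"2 -> 1: {emitted_wavelength_nm(2, 1):.0f} nm (ultraviolet)")
print(f"3 -> 2: {emitted_wavelength_nm(3, 2):.0f} nm (visible red)")
```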
Absorption Line Spectra
On the other hand, what would happen if we tried to reverse this process? That is, what would happen if we fired this special photon back into a ground state atom? That's right, the atom could absorb that `specially-energetic' photon and would become excited, jumping from the ground state to a higher energy level. If a star with a `continuous' spectrum is shining upon an atom, the wavelengths corresponding to possible energy transitions within that atom will be absorbed and therefore an observer will not see them. In this way, a dark-line absorption spectrum is born, as shown below:
A hydrogen atom in the ground state is excited by a photon of exactly the `right' energy needed to send it to level 2, absorbing the photon in the process. This results in a dark absorption line.
Kids And Trees Grow With The Environmental Three R’s
Every day in America, each man, woman and child generates nearly four pounds of trash. That adds up to more than a billion pounds of solid waste a day, or over 365 billion pounds each year. It's a staggering statistic when you consider the environmental effect that much garbage has on our fragile ecosystem.
As adults, it’s easy to forget the importance of the 3 R’s our world depends on-reducing, reusing and recycling-for the health and safety of future generations. It’s those future generations–our children–that will bear the consequences of today’s environmental mismanagement, unless an effort is made to improve upon current behaviors.
For the third year, one hotel company is stepping up to the task, helping kids to think globally and act locally by educating them on how to properly care for the environment. With help from The National Arbor Day Foundation, Doubletree Hotels is distributing an environmentally focused lesson plan that provides the framework for taking would-be waste and recycling it into artistic treasures to thousands of elementary school students in the U.S. and Canada.
The education initiative is an extension of the hotel’s Teaching Kids to CARE program, a community outreach initiative that pairs hotel properties with elementary schools and youth groups to educate children about making conscious decisions about environmental care. This spring, Teaching Kids to CARE volunteers and children will create “litter critters,” a reduced, reused and recycled representation of animals in the world hurt by litter, and will plant more than 10,000 seedling trees across the U.S. and Canada.
For those parents (and mentors) wanting to engage their kids (or nieces, nephews and grandkids) in environmentally conscious activities, here are a few tips:
1. Recycling is Fun-Pass it On – Recycling isn’t all about aluminum cans and old newspapers. Encourage your kids to start their own recycling program in which they share old toys, books and games with their friends and classmates. One child’s trash is another child’s treasure and by “passing it on,” kids will learn that they can reduce waste by recycling their old things so that others can reuse them.
2. Become a Habitat Hero – Challenge your children to gather up all their friends and classmates to help clean up a park or schoolyard (with parental supervision). Whoever collects the most trash wins the “Habitat Hero” award and prize (as decided upon by you).
3. Plant a “Family Tree” – Take your kids to a garden or home store and allow them to help pick out a young tree. (Make sure to check that it can survive in your climate region.) Plant the tree in a special location as a family, assigning a different task (digging, planting, watering) to each family member. Make sure to document the activity with a photo, so kids can remember how small the tree was when they planted it.
4. You CAN Make a Difference – Encourage your children to save empty aluminum cans, then take a weekly trip to a nearby “Cash for Cans” drop-off location. Decide with your kids how best to use the money they’ve collected from their recycling efforts to better the environment. Options to consider include volunteering for tree planting projects, adopting a local stretch of highway to be beautified and maintained or donating the money to a local environmental organization.
5. Pulp to Paper – This fun, hands-on project shows kids how old newspapers are recycled back into fresh newspapers. Have your child tear a half page of newspaper into small, one-inch pieces. Fill buckets or bowls with one-part newspaper and two-parts water and let soak for several hours. Using a hand mixer, “pulp” the fibers in the paper until the mixture looks like mush. Take a handful of pulp and place it on a piece of felt, molding it to the size of the piece of paper you want to make, and press it firmly to squeeze out excess water. Let the paper dry for one or two days and voilà.
Remember, proper waste management not only helps save the environment, it also helps save energy, reduce pollution and protect animals around the world. A small effort from your kids today can guarantee a healthier, greener tomorrow.
integrity:
- 1. Steadfast adherence to a strict moral or ethical code.
- 2. The state of being unimpaired; soundness.
- 3. The quality or condition of being whole or undivided; completeness.
- Source: The American Heritage Dictionary of the English Language, Fourth Edition. Copyright © 2000 by Houghton Mifflin Company.
The data within our databases must constantly follow the rules and constraints placed within the data models. Without constraints, the data would hold no meaning. The reliability and integrity of that data would always be in question, and the users of such data would always question its validity. If a database were to neglect the rules and overstep the boundaries placed upon it, there would be mayhem within the database, and that database would cease to hold any value.
Database Integrity Constraints
Within the definition of a table, we at times might allow data to contain a NULL value. This NULL value is not really a value at all and is considered to be an absence of value. The constraint of NOT NULL forces a value to be given to a column.
Uniqueness for a column or set of columns means that the values in that column or set of columns must be different from those in all other rows of the table. A unique key stands on its own and has the power to drive other information in the database through foreign keys. A unique key may contain NULL values, since they are, by definition, a unique non-valued value.
Primary Key Values
Primary key values are much like unique keys except that they are designed to uniquely identify a row in a table. They can consist of a single column or multiple columns. The primary key cannot contain NULL values.
You can also place on a column an integrity constraint that requires certain conditions to be met before data is inserted or modified. If the check is not satisfied, the transaction is not allowed to finish.
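To make these column-level constraints concrete, here is a minimal sketch using Python's built-in sqlite3 module. The table and column names are invented for the example; other database systems express the same NOT NULL, UNIQUE, PRIMARY KEY and CHECK keywords with their own syntax.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# NOT NULL requires a value, UNIQUE forbids duplicates, PRIMARY KEY uniquely
# identifies each row, and CHECK rejects rows that fail a condition.
conn.execute("""
    CREATE TABLE employee (
        emp_id    INTEGER PRIMARY KEY,            -- unique row identifier
        email     TEXT UNIQUE,                    -- may be NULL, but no duplicates
        last_name TEXT NOT NULL,                  -- a value must be supplied
        salary    NUMERIC CHECK (salary >= 0)     -- condition checked on insert/update
    )
""")

conn.execute("INSERT INTO employee VALUES (1, 'a@example.com', 'Smith', 50000)")

try:
    # Violates the CHECK constraint, so the statement is rejected.
    conn.execute("INSERT INTO employee VALUES (2, 'b@example.com', 'Jones', -10)")
except sqlite3.IntegrityError as err:
    print("Rejected:", err)
```

The second insert raises sqlite3.IntegrityError: the database refusing to let data overstep the boundaries placed upon it.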
Any table in the database that has a primary key or unique key can be referenced by another table by setting up a rule that relates those tables and governs the relationship. In this relationship there is what is known as a parent table and a child table. The child table uses a foreign key to reference the parent table's primary key or unique key. Through this relationship, parent tables and child tables can affect each other in both positive and negative ways.
Types of Relationships
Restrict or No Action
If there is an attempt to alter or delete data in the parent table and there are rows in the child table that reference it, the transaction is not allowed.
Set to Null
If a delete were to happen on the parent table, the foreign key columns of the child table rows that referenced the parent would be set to NULL.
Set to Default
When a delete or modification to the parent table breaks the relationship for the child table, the child table's columns are set to a predefined default value.
Cascade
If the parent data is modified, then all the matching child data is also modified. If the parent is deleted, then all the matching children are deleted.
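As a rough illustration of these referential actions, the sketch below (again sqlite3, with hypothetical table names) declares the child table's foreign key with ON DELETE CASCADE; substituting SET NULL, SET DEFAULT or RESTRICT gives the other behaviours described above. Note that SQLite enforces foreign keys only once the pragma is switched on.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces foreign keys only when enabled

conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE child (
        id        INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES parent(id) ON DELETE CASCADE
    )
""")

conn.execute("INSERT INTO parent VALUES (1)")
conn.execute("INSERT INTO child VALUES (10, 1)")

conn.execute("DELETE FROM parent WHERE id = 1")   # the delete cascades to the child row
print(conn.execute("SELECT COUNT(*) FROM child").fetchone()[0])   # prints 0
```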
Database integrity is the thread that holds database objects together to satisfy business objectives and rules. It is unfortunate that data integrity doesn't always cascade into the real world and hold us all together when times get tough.
Electromagnetic Radiation (EMR) is a byproduct generated by electricity traveling through electronic devices. Electronic devices develop a field of radiation, or Electromagnetic Field (EMF), that is strongest closest to the source and drops in intensity with distance from the source. Increasing the current within the source directly increases the strength of the field.
Industry experts recommend that exposure to these fields be kept below 2 milligauss (mG), the unit used to measure EMFs. For devices such as laptops, these levels can reach up to 175 mG while the device is processing and computing.
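The two proportionalities described above, field strength rising with current and falling with distance, can be illustrated with the textbook formula for the field around a long straight conductor, B = mu0*I/(2*pi*r). The currents and distances below are made up purely to show the scaling; this is not a model of any particular laptop.

```python
import math

MU_0 = 4 * math.pi * 1e-7        # permeability of free space, in T*m/A
TESLA_TO_MILLIGAUSS = 1e7        # 1 tesla = 10^4 gauss = 10^7 milligauss

def field_milligauss(current_amps, distance_m):
    """Field around a long straight conductor: B = mu0 * I / (2 * pi * r)."""
    b_tesla = MU_0 * current_amps / (2 * math.pi * distance_m)
    return b_tesla * TESLA_TO_MILLIGAUSS

# Made-up currents and distances: the field grows with current and falls with distance.
for amps in (0.5, 2.0):
    for metres in (0.05, 0.5):
        print(f"I = {amps} A, r = {metres} m -> {field_milligauss(amps, metres):.1f} mG")
```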
EMF radiation can be dangerous because it can often change field direction, creating a vibration within the tissues of the body.
EMF Vibration Impacts
A vibrating Electromagnetic Field (EMF) touching the body generates heat and cell mutation. In fact, this phenomenon works the same way as a microwave by vibrating molecules together and generating enough heat to cook food. Long term exposure to EMF vibrations can lead to serious health concerns.
With increased temperatures, both male and female fertility can be affected. Studies over the past 20 years show that long-term exposure to high levels of EMFs can mutate cells, burn skin, cause DNA cellular breakdown and many other health problems including cancer.
Most EMFs are fields that vibrate the human body’s cells. This vibration can not only heat cells, but can also affect them in other potentially dangerous ways.
Researchers from Northwestern University, University of Toronto and the University of Toledo have developed an all-perovskite tandem solar cell with extremely high efficiency and "record-setting" voltage.
“Further improvements in the efficiency of solar cells are crucial for the ongoing decarbonization of our economy,” says U of T Engineering Professor Ted Sargent (ECE). “While silicon solar cells have undergone impressive advances in recent years, there are inherent limitations to their efficiency and cost, arising from material properties. Perovskite technology can overcome these limitations, but until now, it had performed below its full potential. Our latest study identifies a key reason for this and points a way forward.”
Perovskite solar cells are built from nano-sized crystals that can be dispersed into a liquid and spin-coated onto a surface using low-cost, well-established techniques. Another advantage of perovskites is that by adjusting the thickness and chemical composition of the crystal films, manufacturers can selectively ‘tune’ the wavelengths of light that get absorbed and converted into electricity, whereas the traditionally-used silicon always absorbs the same part of the solar spectrum.
The researchers in the recent study used two different layers of perovskite, each tuned to a different part of the solar spectrum, to produce what is known as a tandem solar cell.
“In our cell, the top perovskite layer has a wider band gap, which absorbs well in the ultraviolet part of the spectrum, as well as some visible light,” says Chongwen Li, a postdoctoral researcher in Sargent’s lab and one of five co-lead authors on the new paper. “The bottom layer has a narrow band gap, which is tuned more toward the infrared part of the spectrum. Between the two, we cover more of the spectrum than would be possible with silicon.”
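A quick way to relate a band gap to the light it can absorb is the standard conversion: the cutoff wavelength in nanometres is roughly 1240 divided by the band gap in electron volts. The band-gap values in the sketch below are assumed round numbers, not figures from the study, and serve only to illustrate how a wide-gap top layer and a narrow-gap bottom layer split the spectrum.

```python
def cutoff_wavelength_nm(band_gap_ev):
    """Longest wavelength a material can absorb: lambda ~ 1240 nm.eV / E_gap."""
    return 1240.0 / band_gap_ev

# Assumed, illustrative band gaps for the wide-gap top layer and narrow-gap bottom layer.
for name, gap_ev in [("top layer (wide gap)", 1.8), ("bottom layer (narrow gap)", 1.2)]:
    print(f"{name}: absorbs wavelengths up to ~{cutoff_wavelength_nm(gap_ev):.0f} nm")
```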
The tandem design enables the cell to produce a very high open-circuit voltage, which in turn improves its efficiency. But the key innovation came when the team analyzed the interface between the perovskite layer, where light is absorbed and transformed into excited electrons, and the adjacent layer, known as the electron transport layer.
“What we found is that the electric field across the surface of the perovskite layer — we call it the surface potential — was not uniform,” says PhD student Aidan Maxwell, another co-lead author.
“The effect of this was that in some places, excited electrons were moving easily into the electron transport layer, but in others, they would just recombine with the holes they left behind. Those electrons were being lost to the circuit.”
To address this challenge, the team coated a substance known as 1,3-propanediammonium (PDA) onto the surface of the perovskite layer. Though the coating was only a few nanometers in thickness, it made a big difference.
“PDA has a positive charge, and it is able to even out the surface potential,” says postdoctoral fellow Hao Chen, another of the co-lead authors. “When we added the coating, we got much better energetic alignment of the perovskite layer with the electron transport layer, and that led to a big improvement on our overall efficiency.”
The team’s prototype solar cell measures one square centimeter in area, and produces an open-circuit voltage of 2.19 volts, which is a record for all-perovskite tandem solar cells. Its power conversion efficiency was measured at 27.4%, which is higher than the current record for traditional single-junction silicon solar cells. The cell was also independently certified at the National Renewable Energy Laboratory in Colorado, delivering an efficiency of 26.3%.
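For readers curious how such figures fit together, a cell's power conversion efficiency is the electrical power out divided by the solar power in, commonly written PCE = (Voc x Jsc x FF) / Pin. The sketch below takes the reported open-circuit voltage but assumes illustrative values for the short-circuit current density and fill factor, since neither is quoted in this article.

```python
def power_conversion_efficiency(voc_volts, jsc_ma_per_cm2, fill_factor,
                                incident_mw_per_cm2=100.0):
    """PCE = (Voc * Jsc * FF) / Pin, with Pin = 100 mW/cm^2 for standard AM1.5 sunlight."""
    output_mw_per_cm2 = voc_volts * jsc_ma_per_cm2 * fill_factor
    return output_mw_per_cm2 / incident_mw_per_cm2

# Voc is the reported value; Jsc and FF are assumed round numbers for illustration.
print(f"{power_conversion_efficiency(2.19, 15.0, 0.83):.1%}")   # about 27%
```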
The team used industry standard methods to measure the stability of the new cell and found that it maintained 86% of its initial efficiency after 500 hours of continuous operation.
The researchers will now focus on further enhancing efficiency by increasing the current that runs through the cell, improving stability, and enlarging the area of the cell so that it can be scaled up to commercial proportions.
The identification of the key role played by the interfaces between layers also points the way toward potential future improvements.
“In this work, we’ve focused on the interface between the perovskite layer and the electron transport layer, but there is another important layer that extracts the ‘holes’ those electrons leave behind,” says Sargent.
“One of the intriguing things in my experience with this field is that learning to master one interface doesn’t necessarily teach you the rules for mastering the other interfaces. I think there’s lots more discovery to be done.”
Maxwell says that the ability of perovskite technology to hold its own against silicon, even though the latter has had a multi-decade head start, is encouraging.
“In the last ten years, perovskite technology has come almost as far as silicon has in the last 40,” he says. “Just imagine what it will be able to do in another ten years.” |
An ovarian cyst is any collection of fluid, surrounded by a very thin wall, within an ovary. Any ovarian follicle that is larger than about two centimeters is termed an ovarian cyst. An ovarian cyst can be as small as a pea, or larger than an orange. Most ovarian cysts are functional in nature and harmless (benign).
Ovarian cysts affect women of all ages. They occur most often, however, during a woman's childbearing years.
Some ovarian cysts cause problems, such as bleeding and pain. Surgery may be required to remove cysts larger than 5 centimeters in diameter.
Classification of Cysts
Functional cysts, or simple cysts, are part of the normal process of menstruation. They have nothing to do with disease and can be treated. These types of cysts occur during ovulation: if the egg is not released, the ovary can fill up with fluid. Usually these cysts will go away after a few menstrual cycles.
Follicular cyst of ovary
The most common type of ovarian cyst is the graafian follicle cyst, or follicular cyst.
Corpus luteum cyst
A corpus luteum cyst may rupture around the time of menstruation and can take up to three months to disappear entirely.
Theca lutein cyst
The term "hemorrhagic cyst" is used to describe cysts where significant quantities of blood have entered. "Hemorrhagic follicular cyst" is classified under N83.0 in ICD-10, and "hemorrhagic corpus luteum cyst" is classified under N83.1.
There are several other conditions affecting the ovary that are described as types of cysts, but are not usually grouped with the functional cysts. (Some of these are more commonly or more properly known by other names.) These include:
Chocolate cyst of ovary
An endometrioma, endometrioid cyst, endometrial cyst, or chocolate cyst is caused by endometriosis, and formed when a tiny patch of endometrial tissue (the mucous membrane that makes up the inner layer of the uterine wall) bleeds, sloughs off, becomes transplanted, and grows and enlarges inside the ovaries.
A polycystic-appearing ovary is diagnosed based on its enlarged size — usually twice normal — with small cysts present around the outside of the ovary. It can be found in "normal" women, and in women with endocrine disorders. An ultrasound is used to view the ovary in diagnosing the condition. Polycystic-appearing ovary is different from the polycystic ovarian syndrome, which includes other symptoms in addition to the presence of ovarian cysts.
Ovarian cyst Treatment
About 95% of ovarian cysts are benign, meaning they are not cancerous.
Treatment for cysts depends on the size of the cyst and symptoms. For small, asymptomatic cysts, the wait and see approach with regular check-ups will most likely be recommended.
Pain caused by ovarian cysts may be treated with:
Pain relievers, including acetaminophen (Tylenol), nonsteroidal anti-inflammatory drugs such as ibuprofen (Motrin, Advil), or narcotic pain medicine (by prescription) may help reduce pelvic pain. NSAIDs usually work best when taken at the first signs of the pain.
A warm bath, or heating pad, or hot water bottle applied to the lower abdomen near the ovaries can relax tense muscles and relieve cramping, lessen discomfort, and stimulate circulation and healing in the ovaries. Bags of ice covered with towels can be used alternately as cold treatments to increase local circulation.
Combined methods of hormonal contraception such as the combined oral contraceptive pill -- the hormones in the pills may regulate the menstrual cycle, prevent the formation of follicles that can turn into cysts, and possibly shrink an existing cyst. (American College of Obstetricians and Gynecologists, 1999c; Mayo Clinic, 2002e)
Cysts that persist beyond two or three menstrual cycles, or occur in post-menopausal women, may indicate more serious disease and should be investigated through ultrasonography and laparoscopy, especially in cases where family members have had ovarian cancer. Such cysts may require surgical biopsy. Additionally, a blood test may be taken before surgery to check for elevated CA-125, a tumor marker, which is often found in increased levels in ovarian cancer, although it can also be elevated by other conditions resulting in a large number of false positives.
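To see why a marker that can be elevated by other conditions yields many false positives, it helps to work through Bayes' rule using a test's sensitivity and specificity. The numbers in the sketch below are invented for illustration; they are not published performance figures for CA-125.

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(disease | positive test), computed with Bayes' rule."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Invented numbers: when the disease is rare and the marker is imperfect,
# most positive results are false positives.
ppv = positive_predictive_value(prevalence=0.001, sensitivity=0.80, specificity=0.90)
print(f"{ppv:.1%}")   # under 1% of positives actually reflect the disease
```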
For more serious cases where cysts are large and persisting, doctors may suggest surgery. Some surgeries can be performed to successfully remove the cyst(s) without hurting the ovaries, while others may require removal of one or both ovaries. |
This month in the news: is there no place like home?
NASA's Kepler satellite is undeniably an impressive mission. Having been running continuously now since 2009, it has analysed over 150,000 stars in a 100 square degree patch of sky (for comparison, the Moon covers only 0.2 square degrees) and has identified more than 3800 planetary candidates. However, if you are only interested in news of Earth-like habitable planets, then the discoveries of Kepler may appear to be infuriatingly sparse. During Kepler's five years of operation there have been no planets which precisely fit the narrow criteria of being Earth-like. This is because an Earth-like planet must orbit a Sun-like star within the habitable zone (the distance at which liquid water can exist on a planet's surface) and be of a similar radius to the Earth. There have been several planets that have come close to being Earth-like, such as Kepler-20e, which orbits a Sun-like star and has a radius about 80% of Earth's, but whose small orbit gives it a surface temperature of several hundred degrees Celsius. There has also been Kepler-22b, which orbits another Sun-like star and is within the habitable zone, but has almost twice the radius of the Earth, making it unlikely to have a rocky surface.
Perhaps, though, the search for new Earths around Sun-like stars is too restricting. After all, every star will have a habitable zone around it, and it is known that stars smaller than the Sun evolve more slowly, which is perhaps more conducive to the long timescales required for life to develop. Stars smaller than the Sun have other advantages when it comes to finding new Earths. For one, smaller stars are more abundant: within 30 light years of the Earth there are only 5 Sun-like stars but hundreds of small Class M red dwarf stars, which provides a far larger sample of potential planetary systems to study. Also, smaller stars have habitable zone orbits that are much smaller than those of Sun-like stars, meaning that habitable planets require less time to orbit the host star, so the Kepler satellite can build up the picture of the orbiting planets over a much shorter span of time. However, most of the planets discovered around red dwarf stars seem to have very tiny orbits and, because the stars are so dim, follow-up observations from Earth-based telescopes, which are required to determine the properties of exoplanets, are far more difficult, if not impossible, at the present time.
With all that said, though, the announcement last month by the Kepler team of the newly discovered planet Kepler-186f is enormously exciting. It is the first planet to be discovered that has an estimated radius almost exactly the same as the Earth's and is nestled snugly within the important habitable zone, meaning liquid water on its surface is possible. Of course, at the present time Kepler-186f is the only planet of its kind that has been found around a red dwarf star, but due to the way Kepler detects planets (by measuring the slight dimming of the star when a planet passes directly in front) even one detection greatly increases the probability that there are similar planets around other red dwarf stars. Unfortunately, for the time being, detections of such planets with Kepler will not tell us anything about the likelihood of life on these planets. However, discovering them will provide a catalogue that future, more sophisticated space telescopes, such as the James Webb Space Telescope, can study to reveal the properties of their atmospheres, which may tell us whether or not life exists on these distant worlds.
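Two quantities touched on above lend themselves to back-of-the-envelope estimates: the fractional dimming Kepler measures during a transit is roughly the square of the planet-to-star radius ratio, and the distance of the habitable zone scales roughly with the square root of the star's luminosity. The parameters in the sketch below are rough, illustrative values for a small red dwarf, not the published Kepler-186 figures.

```python
import math

def transit_depth(planet_radius_earths, star_radius_suns):
    """Fractional dimming during a transit: (R_planet / R_star) ** 2."""
    earth_radius_in_suns = 1.0 / 109.0        # Earth's radius is roughly 1/109 of the Sun's
    ratio = planet_radius_earths * earth_radius_in_suns / star_radius_suns
    return ratio ** 2

def habitable_zone_au(luminosity_suns):
    """Rough habitable-zone distance, scaling as sqrt(L), with ~1 AU for the Sun."""
    return math.sqrt(luminosity_suns)

# Illustrative red-dwarf parameters, not the published Kepler-186 values.
print(f"transit depth ~ {transit_depth(1.1, 0.5):.5f}")        # a dip of a few hundredths of a percent
print(f"habitable zone ~ {habitable_zone_au(0.05):.2f} AU")    # roughly 0.22 AU
```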
Interview with Dr Chris Hales
We talk to Dr. Chris Hales about his work on detecting weakly polarised radio sources, which he did for his PhD. He explains the issues with detecting weak polarised sources and the work he did to improve the validity of the results. He then goes on to talk about his current work at the NRAO, where they are attempting to map the magnetic field of the universe.
The Night Sky
Ian Morison tells us what we can see in the northern hemisphere night sky during May 2014.
Gemini is setting in the west as twilight ends, with Canis Minor and its bright star Procyon to its lower left. Cancer is further to the south, with the Beehive Cluster at its heart. Leo the Lion is even further round, with the star Regulus and some galaxies nearby that are visible with binoculars or a small telescope. Over to the east is Boötes and the star Arcturus, with Corona Borealis beside them. Continuing to move around the sky, the four stars of the Keystone in Hercules can be found, and the globular cluster M13 is two-thirds of the way up its right-hand side. The summer constellations are rising in the north-east, with the stars Vega in Lyra and Deneb in Cygnus. To the south-east, below Boötes, are Virgo and the bright star Spica, as well as Libra, Serpens Caput and the zodiacal constellation of Ophiuchus.
- Jupiter is still at 45° elevation in the west at sunset at the beginning of May, setting around 01:00 BST (British Summer Time, 1 hour ahead of Universal Time). It dims from -2.0 to -1.9 in magnitude during the month, while shrinking from 35 to 33" in angular diameter. It is only 20° above the horizon at sunset by month's end, and sets at around 23:00 BST (although you may see it for up to an hour longer if you have a very low north-western horizon). Jupiter lies in Gemini, moving from beside the fourth-magnitude star Mekbuda towards the bright stars Castor and Pollux, and passing Wasat on the way. You can see the Galilean moons with binoculars, and the Great Red Spot at certain times with a telescope, with the best chance to observe coming early in the month.
- Saturn reaches opposition (opposite the Sun in the sky) on the 10th, so it is in the sky all night. It shines at magnitude +0.1 quite low down in Libra, and its disc is 18.6" in diameter. It is moving west (retrograde) towards the star Alpha Librae. The rings are now 22° from the line of sight and about 43" across, so a small telescope can pick out Cassini's Division.
- Mars is receding from us and dims from -1.2 to -0.5 this month, while shrinking from 14.5 to 11.8" in angular size. It moves little in the sky during May because its retrograde motion ends midway through the month, and it lies close to the star Porrima in Virgo. Mars is best seen early in the month, at around 23:00 BST.
- Mercury has its best apparition of the year this month. It is low in the evening twilight as May begins, but rises higher each evening, appearing 15° above the horizon at sunset in the second half of the month. It reaches elongation (its furthest from the Sun in the sky) on the 25th, shining at magnitude +0.4 and presenting a crescent disc 8" across and 40% illuminated. It is visible for 2 hours after sunset at this point, but by the end of May it is only 5° above the horizon 1 hour after sunset, with a brightness of magnitude +1.2.
- Venus lies in Pisces and shines at magnitude -4. It is some 12° above the eastern horizon half an hour before sunrise. Its gibbous disc drops from 17 to 15" in angular size this month, but its illuminated fraction increases from 67 to 77%, keeping its brightness almost constant.
- It is a good time to observe Mars this month as it appears larger than usual in the sky.
- It is also a good time to observe Saturn this month, since it, too, appears large in the sky and is up throughout the night. It can be located by following the arc of the Plough's handle towards orange Arcturus, then continuing the curve down to white Spica, and finally looking for the slightly brighter yellow object to its lower left - this is Saturn. As well as its ring system, binoculars or a small telescope allow you to spot its largest moon, Titan, at magnitude 8.2.
- Jupiter is coming to the end of its current apparition, but can still be easily seen this month. It is high in the sky just after sunset, allowing surface features to be seen with a telescope.
- Comet C/2012 K1 (PANSTARRS) passes below the Plough this month. With a magnitude of +7 to +8, it can be seen with binoculars. It is near the leftmost star of the Plough's handle on the 1st and 2nd, passes by Chi Ursae Majoris on the 18th and 19th and approaches Psi Ursae Majoris on the 26th and 27th.
- The Eta Aquariid meteor shower peaks before dawn on the 6th. With a low south-eastern horizon, up to 10 meteors per hour may be visible. The meteors come from dust particles released by Comet Halley when it approached the Sun 4000 years ago.
- Jupiter is near a thin crescent Moon on the 3rd and 4th.
- Mars is near a waxing Moon on the 11th. Mercury is close to a thin crescent Moon on the 30th.
John Field from the Carter Observatory in New Zealand speaks about the southern hemisphere night sky during May 2014.
Orion the Hunter is low in the west, with three stars forming his Belt. To Māori, they form part of the Bird Snare. The blue star Rigel marks one of Orion's feet, while red Betelgeuse forms one of his shoulders. Above the Belt are the three stars of Orion's Sword, the middle member of which is actually the Orion Nebula. Slightly fuzzy to the naked eye, it is a bat-shaped cloud in binoculars or a small telescope and can be seen to be a beautiful star-forming region with a large telescope. The Belt and Sword are sometimes described as the Pot or the Saucepan by southern hemisphere observers. Following Orion is Canis Major, one of his hunting dogs, with Sirius, the brightest star in the night sky, forming its head. It is commonly known as the Dog Star, but to Māori it is Takarua, the Winter Star, and in Ancient Egypt it was called Sothis, and heralded the annual flooding of the River Nile. Procyon, the brightest star in Canis Minor, is lower down. Both Sirius and Procyon have faint white dwarf companions, but these are not easily observed. Following a line from Rigel through Betelgeuse leads to the planet Jupiter, near to Castor and Pollux, the heads of the Gemini Twins. Bands and belts on Jupiter's surface can be seen on a dark night using a telescope, while binoculars show its four largest moons.
The constellations of Scorpius and Sagittarius are rising in the east in the evening, reaching high into the sky later and showing off many beautiful objects. Crux, the Southern Cross, is high overhead after sunset, and near the star Beta Crucis is a star cluster called the Jewel Box, which appears as a hazy star to the unaided eye and as a pretty group of stars in binoculars or a telescope. Between Crux and Sirius, Carina the Keel and Vela the Sails sit along the Milky Way and contain the asterisms of the False and Diamond Crosses. They contain a wealth of bright stars, clusters and nebulae, many of which can be observed with no equipment. The Carina Nebula is the brightest of these, and appears larger than the Orion Nebula. Binoculars reveal its bright star clusters and glowing clouds of gas, intertwined with dark lanes. Within it, the star Eta Carinae is bright and orange.
The planets Mars and Saturn are in the north and east respectively after sunset. Mars is in Virgo, near the blue-white star Spica, and is now receding from the Earth and shrinking in apparent size. Lying away from the Milky Way, many galaxies can be spotted in Virgo using a medium-sized telescope. Saturn is a yellowish object in Libra the Scales. It reaches opposition (opposite the Sun in the sky) on the 10th, and is occulted by the nearly-full Moon for observers in New Zealand and Australia at around midnight NZST (New Zealand Standard Time, 12 hours ahead of Universal Time) on the night of the 14th-15th. The event is visible to the unaided eye, but its progress will be spectacular when viewed through binoculars.
Autumn is a prime time to observe the Aurora Australis, or Southern Lights. Caused by the interaction between the solar wind and the Earth's atmosphere, the phenomenon can sometimes be seen from southerly parts of New Zealand, Australia and South America, consisting of a red glow, or even moving sheets of red and green light, on the southern horizon. With a high level of activity on the Sun so far this year, it is worth checking the several websites on which you can find current information and short-term forecasts of aurorae.
The planet Venus is in the morning sky, but rises later each day as it moves closer to the Sun from our perspective.
Odds and Ends
A group of university students from around the world have started a project called "Mars Time Capsule" which aims to send three miniature spacecraft to Mars, carrying messages, pictures and possibly video to the Red Planet as a sort of time capsule for future astronauts. More information on the project can be found on their website.
Back in April, the European Southern Observatory added to its online image gallery a photo of a rock with petroglyphs of llamas photographed near the La Silla site by Hakon Dahle. This photo reminds us that astronomers were not the first people to visit many of the mountaintops where modern observatories now stand. In other llama-related news, workers at the ALMA observatory rescued a very young vicuna (the wild ancestor of the alpaca) that was separated from its herd by a pack of Chilean foxes. The fawn was eventually brought to a wildlife rehabilitation centre where it will be taken care of until it is ready for release back into the wild. More information can be found here.
A group at the University of Edinburgh has been researching the possibility of life on the moons of exoplanets - "exomoons". The study, by Duncan Forgan and Vergil Yotov, examines the various factors that could influence the habitability of exomoons. An article on the study can be read on Ars Technica here.
Interview: Dr Chris Hales and Chris Wallis
Night sky: Ian Morison and John Field
Presenters: George Bendo, Fiona Healy and Indy Leclercq
Editors: Indy Leclercq and Chris Wallis
Segment Voice: Iain McDonald
Website: Sally Cooper and Stuart Lowe
Cover art: Llamas at La Silla. CREDIT: ESO/H. Dahle
What Does Molecule Mean?
A molecule is the name applied to a bonded group of atoms. These form the building blocks of everything, from rocks and soil to trees and human beings. In living things, molecules join together to form cells. In plants, those cells eventually form an entire plant. Plant molecules differ from those of other living things in many ways, including the ability to manufacture their own food.
Maximum Yield Explains Molecule
While atoms might be considered the building blocks of all things, only when they are bonded together do we get the ingredients for life. Molecules (groups of bonded atoms) come in many different forms. When they are combined, molecules form cells. Plant cells are similar to those of fungi and other eukaryotes, but they have several key differences. For instance, plant cells lack cilia and intermediate filaments. They also lack centrioles and flagella, as well as lysosomes.
The term plant molecules, used more broadly, also applies to components beyond the cells that make up a plant’s physical body. For instance, growth regulators like auxins, which are molecules naturally found in plants, are manufactured within their cells but can also be synthetically derived molecules that are applied to seeds and plants to support stronger growth. Auxin is manufactured and more heavily concentrated within certain parts of the plant, including the meristems and new growth shoots.
There are many other molecules within plants, such as macromolecules composed of multiple smaller molecules. These smaller components are most often hydrogen, oxygen, and carbon. You’ll find a number of macromolecules at work within plants, including carbohydrates, fats, and oils.
The agricultural revolution
The conversion of our predecessors from hunter-gatherers to herder-farmers circa 8000 BC is the great dichotomy of the human experience. From that point on, the dominance of our species, at least to the present, was assured. The growing of food crops and the domestication of food producing animals changed human activity overnight, in geologic terms. It is important to note, however, that this revolution took many thousands of years to unfold and never reached large parts of the world.
The importance of agriculture
Agriculture removed much of the uncertainty in obtaining food. People no longer had to search it out over large areas—they found places where it could be produced in abundant quantities year after year and fixed themselves there. Instead of relying on the environment’s natural bounty, they could direct and manipulate the provision of that bounty. Abundant and dependable food supplies allowed population to grow and set the stage for the rise of civilization.
The human population on Earth two million years ago has been estimated at 100,000. At the beginning of the agricultural revolution this number had risen to perhaps five million, thanks to better adaptation, technology, and abundant resources that became available as the ice sheets receded. By 3000 BC, the time of the first Egyptian dynasty, world population had increased to approximately 100 million. By the birth of Christ, world population was well over 200 million.
The beginnings of agriculture
Agriculture was a gradual discovery. It is believed that early gatherers first learned the relationship between plants, the foods they produced, and their growing cycle. At some point the gatherers learned how to encourage the plants they depended on and inhibit those of no use. Then came the steps of gathering seeds, planting seeds, and nurturing the plants. By selecting seeds from the strongest and most productive plants for replanting, the early planters interrupted and redirected the process of natural selection to improve the yield of the useful plants. For example, researchers in Mexico have found evidence of the ancient corn plant with only a few kernels that became the much more productive corn plant of ancient America through selection over many thousands of years.
The first domesticated grain is believed to have been a wild wheat that grew in southern Turkey. To domesticate this plant, the early gatherers had to learn how to harvest the grain seeds, extract the wheat kernel, grind it, and bake it, all before they learned how to grow the plant and select it so that it increased in kernel size. Until fertilization or crop rotation were developed, fields had to be moved regularly or placed near a river whose annual floodings brought new soil. All this was a complicated learning process that took time. Less is known about how and when the rice plant was domesticated, but it was clearly as important in Asia as wheat was in the Middle East and corn in America.
The agricultural revolution accelerated as innovations increased arable land, crop yields, and farmer productivity. Land was cleared by grazing or fire. Soil was prepared by digging first, and then plowing. Irrigation insured adequate water. Fertilization and crop rotation increased yields. Specific tools like the sickle, scythe, plow, and hoe increased farmer efficiency.
Domestication of animals
Dogs were domesticated from wolves perhaps 15,000 years ago in both America and the Near East, but dogs were useful mainly as companions, guards, and hunting aids. The first domesticated food producing animal was probably the goat, a source of meat, milk, and waterproof hides. This breakthrough occurred in the hills of modern Iraq. Domestication of the goat was followed by sheep (meat and wool), cattle (meat, milk, hides, power), horses (meat, milk, transport), pigs (meat and fat), chickens (meat and eggs), and others.
Cattle are considered the most significant domestication. In addition to providing meat, milk, and hides, they were also valuable as beasts of burden. They pulled wagons, greatly improving land transport. They pulled ploughs, greatly improving agriculture. The existence of domesticated cattle is thought to have been primarily responsible for the doubling of population in the Near East between 5000 and 4000 BC.
The process of domestication must have involved obtaining young animals and raising them in captivity over enough generations that they grew less wild and more tolerant of being managed. It is unclear why some animals could be domesticated but not others. Why the cow but not the buffalo? Predators would be unlikely choices for domestication, but the wolf was a predator and so was the cat, which was domesticated much later.
Domestication of animals brought many important advantages. They were dependable sources of meat. Cattle and goats converted grass, of little use to humans, into milk and milk products like cheese. Vast grasslands that could previously support only a few hunters could now support much larger populations of herders. Sheep and vicuñas produced wool each year. Animals could graze on broken lands of little use for farming.
Horses, oxen, llamas, and other animals provided power for pulling, ploughing, and carrying. Military uses were eventually found for onagers, asses, and horses, and to some extent, elephants, although elephants are not considered domesticated. The horse was domesticated first in the north Asian steppes and its use spread from there, probably in the wake of Asian migrations to the south and west. Paleontologists believe the horse actually evolved in the Americas but went extinct there, perhaps due to hunting pressure.
Mounted Asian barbarians are thought to have overrun the first towns rising in Asia Minor and the Middle East around 6000 BC. The people that were overrun may not have seen a domestic horse previously, much less one on which a warrior could ride.
And speaking of the first towns...
Birth of Cities
People have always lived in groups; all the way back to primitive fish struggling in the ocean, our ancestors have lived in some form of community. However, it wasn't until around 6000 BC that humans began to settle down in clusters of permanent structures to till the land more efficiently: the first cities.
The earliest towns were founded in places where several of many different factors came together to create a favorable spot. Many towns had rivers that ran through or nearby them, providing a place to gather drinking water and dump waste; most had naturally rich soil, promoting intensive farming and allowing a surplus of goods to support non-farmers; some had flat pastures, good for raising a wide variety of animals; a few had valuable resources in the earth nearby, such as metal ores (especially copper, for the first cities), gems, or some other valuable material; and several were conveniently located between other towns, serving as a welcome stop for trade caravans and growing rich off the money those caravans spent. |
Mercury is one of the most toxic non-radioactive, naturally occurring substances on the planet. There is no established safe level of mercury, because even very small exposures can harm the body. It is found in the air, water, and soil. According to the website of Abel Law Firm, whether in its liquid or gaseous form, mercury exposure can cause serious illness in humans.
Mercury is found in the fish we eat, whether caught in a lake or bought in a grocery store. It is also found in some of the products we use in the home, at the dentist, or at school. Fish and shellfish are the main source of methylmercury exposure for humans. The level of mercury contamination in fish and shellfish depends on what they eat, their lifespan, and their position in the food chain.
High-level mercury exposure can harm the brain, heart, kidneys, lungs, and immune system. Methylmercury in the bloodstream of unborn babies and young children can harm the developing nervous system, which can slow down the child’s ability to think and learn. Aside from that, mercury poisoning can impair peripheral vision and may result in lack of coordination; impairment of speech, hearing, and walking; and muscle weakness.
The National Institutes of Health reveals that the effect of mercury poisoning is a slow process that takes months or years. For this reason, most people do not immediately know that they are being poisoned. Some of the effects of mercury may include neurological and chromosomal problems such as:
- Uncontrollable shaking or tremor
- Numbness or pain in certain parts of the skin
- Blindness and double vision
- Inability to walk well
- Memory problems
- Deaths with large exposures
The health effects of mercury depend on several factors, which may include the following:
- The form of mercury
- Amount of mercury in the exposure
- Age of the person exposed
- How long the exposure lasts
- The manner of exposure such as breathing, eating, skin contact, etc
- Health of the person exposed
When you or a loved one has been exposed to mercury, consult a doctor right away. |
|This page documents a Sinhala Wikipedia policy adopted from the English Wikipedia. Because it is a widely accepted standard on the English Wikipedia, and has been adopted as the standard here until a different consensus emerges on the Sinhala Wikipedia, all editors should normally follow it. Changes to it should be made through consensus and not arbitrarily.|
|This page in a nutshell: Consensus is Wikipedia's fundamental model for editorial decision-making.|
Consensus describes the fundamental method by which editorial decisions are made on Wikipedia. Although there is no single definition of what consensus means on Wikipedia, within articles consensus is generally applied to ensure neutrality and accuracy. Editors usually reach consensus as a natural and inherent product of editing; generally someone makes a change or addition to a page, then everyone who reads it has an opportunity to leave the page as it is or change it. When editors cannot reach agreement by editing, the process of finding a consensus is continued by discussion on the relevant talk pages.
- 1 What consensus is
- 2 Consensus-building
- 3 See also
- 4 External links
- 5 Related information
What consensus is
Consensus is a decision that takes account of all the legitimate concerns raised. All editors are expected to make a good-faith effort to reach a consensus aligned with Wikipedia's principles.
Sometimes voluntary agreement of all interested editors proves impossible to achieve, and a majority decision must be taken. More than a simple numerical majority is generally required for major changes.
Consensus is a normal and usually implicit and invisible process on articles across Wikipedia. Any edit that is not disputed or reverted by another editor can be assumed to have consensus. Should that edit later be revised by another editor without dispute, it can be assumed that a new consensus has been reached. In this way the encyclopedia is gradually added to and improved over time without any special effort. Even where there is a dispute, often all that is required is a simple rewording of the edit to make it more neutral or incorporate the other editor's concerns. Clear communication in edit summaries can make this process easier.
When reverting an edit you disagree with, it helps to state the actual disagreement rather than citing "no consensus". This provides greater transparency for all concerned, and likewise acts as a guide so that consensus can be determined through continued editing.
When there is a more serious dispute over an edit, the consensus process becomes more explicit. Editors open a section on the article's talk page and try to work out the dispute through discussion. Consensus discussion has a particular form: editors try to persuade others, using reasons based in policy, sources, and common sense. The goal of a consensus discussion is to reach an agreement about article content, one which may not satisfy anyone completely but which all editors involved recognize as a reasonable exposition of the topic. It is useful to remember that consensus is an ongoing process on Wikipedia. It is often better to accept a less-than-perfect compromise - with the understanding that the article is gradually improving - than to try to fight to implement a particular 'perfect' version immediately. The quality of articles with combative editors is, as a rule, far lower than that of articles where editors take a longer view.
Some articles go through extensive editing and discussion to achieve a neutral and a readable product. Similarly, other articles are periodically challenged and/or revised. This is a normal function of the ongoing process of consensus. It is useful to examine the article's talk page archives and read through past discussions before re-raising an issue in talk - there is no sense in forcing everyone to rehash old discussions without need.
When editors have a particularly difficult time reaching a consensus, there are a number of processes available for consensus-building (Third opinions, requests for comment, informal mediation at the Mediation Cabal), and even some more extreme processes that will take authoritative steps to end the dispute (administrator intervention, formal mediation, and arbitration). Keep in mind, however, that administrators are primarily concerned with policy and editor behavior and will not decide content issues authoritatively. They may block editors for behaviors that interfere with the consensus process (such as edit warring, socking, or a lack of civility). They may also make decisions about whether edits are or are not allowable under policy, but will not usually go beyond such actions.
Level of consensus
Consensus among a limited group of editors, at one place and time, cannot override community consensus on a wider scale. For instance, unless they can convince the broader community that such action is right, participants in a WikiProject cannot decide that some generally accepted policy or guideline does not apply to articles within its scope.
Policies and guidelines reflect established consensus, and their stability and consistency are important to the community. As a result, Wikipedia has a higher standard of participation and consensus for changes to policy than on other kinds of pages. Substantive changes should be proposed on the talk page first, and sufficient time should be allowed for thorough discussion before being implemented. Minor changes may be edited in, but are subject to a higher level of scrutiny. The community is more likely to accept edits to policy if they are made slowly and conservatively, with active efforts to seek out input and agreement from others.
Consensus can change
Consensus is not immutable. Past decisions are open to challenge and are not binding. Moreover, such changes are often reasonable. Thus, "according to consensus" and "violates consensus" are not valid rationales for accepting or rejecting proposals or actions. While past "extensive discussions" can guide editors on what influenced a past consensus, editors need to re-examine each proposal on its own merits, and determine afresh whether consensus either has or has not changed.
Wikipedia remains flexible because new people may bring fresh ideas, as it grows it may evolve new needs, people may change their minds over time when new things come up, and we may find a better way to do things.
A representative group might make a decision on behalf of the community as a whole. More often, people document changes to existing procedures at some arbitrary time after the fact. But in all these cases, nothing is permanently fixed. The world changes, and Wikipedia must change with it. It is reasonable and indeed often desirable to make further changes to things at a later date, even if the last change was years ago.
Some exceptions supersede consensus decisions on a page.
- Declarations from the Wikimedia Foundation Board, or the Developers, particularly for copyright, legal issues, or server load, have policy status.
- Office actions are outside the policies of the English Wikipedia.
- Some actions, such as removal of copyright violations and certain types of material about living persons, do not normally require debate or consensus, primarily because of the risk of real harm inherent in them.
- A decision of the Arbitration Committee may introduce a process which results in temporary binding consensus. For example, Ireland article names.
Editors who maintain a neutral, detached and civil attitude can usually reach consensus on an article through the process described above. However, editors occasionally find themselves at an impasse, either because they cannot find rational grounds to settle a dispute or because they become emotionally or ideologically invested in 'winning' an argument. What follows are suggestions for resolving intractable disputes, along with descriptions of several formal and informal processes that may help.
Consensus-building in talk pages
Be bold, but not foolish. In most cases, the first thing to try is an edit to the article, and sometimes making such an edit will resolve a dispute. Use clear edit summaries that explain the purpose of the edit. If the edit is reverted, try making a compromise edit that addresses the other editors' concerns. Edit summaries are useful, but do not try to discuss disputes across multiple edit summaries - that is generally viewed as edit warring, and may incur sanctions. If an edit is reverted and further edits seem likely to meet the same fate, create a new section on the article's talk page to discuss the issue.
In determining consensus, consider the quality of the arguments, the history of how they came about, the objections of those who disagree, and existing documentation in the project namespace. The quality of an argument is more important than whether it represents a minority or a majority view. The argument "I just don't like it", and its counterpart "I just like it", usually carry no weight whatsoever.
Limit talk page discussions to discussion of sources, article focus, and policy. The obligation on talk pages is to explain why an addition/change/removal improves the article, and hence the encyclopedia. Other considerations are secondary. This obligation applies to all editors: consensus can be assumed if editors stop responding to talk page discussions, and editors who ignore talk page discussions yet continue to edit in or revert disputed material may be guilty of disruptive editing and incur sanctions.
Consensus-building by soliciting outside opinions
When talk page discussions fail - generally because two editors (or two groups of editors) simply cannot see eye to eye on an issue - Wikipedia has several established processes to attract outside editors to offer opinions. This is often useful to break simple, good-faith deadlocks, because uninvolved editors can bring in fresh perspectives, and can help involved editors see middle ground that they cannot see for themselves. The main resources for this are as follows:
- Third Opinions
- 3O is reserved for cases where exactly two editors are in dispute. The editors in question agree to allow a third (uninvolved) volunteer to review the discussion and make a decision, and agree to abide by that decision.
- Most policy and guideline pages, and many Wikipedia projects, have noticeboards for interested editors. If a dispute is in a particular topic area or concerns the application of a particular policy or guideline, posting a request to the noticeboard may attract people with some experience in that area.
- Requests for Comment
- A formal system for inviting other editors to comment on a particular dispute, thus allowing for greater participation and a broader basis for consensus. This is particularly useful for disputes that are too complex for 3O but not so entrenched that they need mediation.
- Informal Mediation by the (purported) Cabal
- More complex disputes involving multiple editors can seek out mediation. This is a voluntary process that creates a structured, moderated discussion - no different from an article talk page discussion, except that the mediator helps keep the conversation focused and moving forward, and prevents it from degenerating into the type of heated conflicts that can occur on unmoderated pages.
- Village pump
- For disputes that have far-reaching implications - mostly ones centered on policy or guideline changes - placing a notification at the pump can bring in a large number of interested editors. This ensures broad consensus across the project.
Many of these broader discussions will involve polls of one sort or another, but polls should always be regarded as structured discussions rather than voting. Consensus is ultimately determined by the quality of the arguments given for and against an issue, as viewed through the lens of Wikipedia policy, not by a simple counted majority. Responding YES/NO/AGREE/DISAGREE is not useful except for moral support; responding (DIS)AGREE per user X's argument is better, and presenting a novel explanation of your own for your opinion is best. The goal is to generate a convincing reason for making one choice or another, not to decide on the mere weight of public expressions of support.
Administrative or community intervention
In some cases, disputes are personal or ideological rather than mere disagreements about content, and these may require the intervention of administrators or the community as a whole. Sysops will not rule on content, but may intervene to enforce policy (such as wp:BLP) or to impose sanctions on editors who are disrupting the consensus process inappropriately. Sometimes merely asking for an administrator's attention on a talk page will suffice - as a rule, sysops have large numbers of pages watchlisted, and there is a likelihood that someone will see it and respond. However, there are established resources for working with intransigent editors, as follows:
- Wikiquette alerts
- Wikiquette is a voluntary, informal discussion forum that can be used to help an editor recognize that they have misunderstood some aspect of Wikipedia standards. Rudeness, inappropriate reasoning, POV-pushing, collusion, or any other mild irregularity that interferes with the smooth operating of the consensus process are appropriate reasons for turning to Wikiquette. The process can be double-edged - expect Wikiquette respondents to be painfully objective about the nature of the problem - but can serve to clear up personal disputes.
- As noted above, policy pages generally have noticeboards, and many administrators watch them.
- Administrator's intervention noticeboard and Administrator's noticeboard
- These are noticeboards for administrators - they are high-volume noticeboards and should be used sparingly. Use AN for issues that need eyes but may not need immediate action; use ANI for more pressing issues. Do not use either except at need.
- Requests for comment on users
- A more formal system designed to critique a long-term failure of an editor to live up to community standards.
- Requests for arbitration
- The final terminus of intractable disputes. Arbiters make rulings designed to eliminate behavior that is disrupting the progression of the article, up to and including banning or restricting editors.
Consensus-building pitfalls and errors
The following are common mistakes made by editors when trying to build consensus:
- Too many cooks. Try not to attract too many editors into a discussion. Fruitful discussions usually contain less than ten active participants; more than that strains the limits of effective communication on an online forum of this sort. Where large-scale consensus is needed then it should be sought out, otherwise the input of one or two independent editors will give far better results.
- Off-wiki discussions. Discussions on other websites, web forums, IRC, by email, or otherwise off the project are generally discouraged. They are not taken into account when determining consensus "on-wiki", and may generate suspicion and mistrust if they are discovered. While there is an occasional need for privacy on some issues, most Wikipedia-related discussions should be held on Wikipedia where they can be viewed by all participants.
- Canvassing, Sock puppetry, and Meatpuppetry. Any effort to gather participants to a community discussion that has the effect of biasing that discussion is unacceptable. While it is perfectly fine - even encouraged - to invite people into a discussion to obtain new insights and arguments, it is not acceptable to invite only people favorable to a particular point of view, or to invite people in a way that will prejudice their opinions on the matter, and it is certainly unacceptable to fake wider participation by simply using other accounts of your own. Neutral, informative messages to Wikipedia noticeboards, WikiProjects, or editors are permitted, but actions that could reasonably be interpreted as an attempt to "stuff the ballot box" or otherwise compromise the consensus-building process would be considered disruptive editing.
- Tendentious editing. The continuous, aggressive pursuit of an editorial goal is considered disruptive, and should be avoided. The consensus process works when editors listen, respond, and cooperate to build a better article. Editors who refuse to allow any consensus except the one they have decided on, and are willing to filibuster indefinitely to attain that goal, destroy the consensus process. Issues that are settled by stubbornness never last, because someone more pigheaded will eventually arrive; only pages that have the support of the community survive in the long run.
- Forum shopping, admin shopping, and spin-doctoring. Raising the same issue repeatedly on different pages or with different wording is confusing and disruptive. It doesn't help to seek out a forum where you get the answer you want, or to play with the wording to try and trick different editors into agreeing with you, since sooner or later someone will notice all of the different threads. You can obviously draw attention to the issue on noticeboards or other talk pages if you are careful to add links to keep all the ongoing discussions together, but best practice is to choose one appropriate forum for the consensus discussion, and give (as much as possible) a single neutral, clear, and objective statement of the issue. See also Wikipedia:Policy shopping.
|This page is referenced from the Wikipedia:Glossary.|
Wikipedia essays and information pages concerning consensus:
- Wikipedia:What is consensus?
- Wikipedia:How to contribute to Wikipedia guidance
- Wikipedia:Don't revert due to "no consensus"
- Wikipedia:IPs are human too
- Wikipedia:No consensus
- Wikipedia:Silence and consensus; cf. Wikipedia:Silence means nothing
- Wikipedia:Staying cool when the editing gets hot
- Wikipedia:Method for consensus building
- Wikipedia:Closing discussions
- Wikipedia:Consensus doesn't have to change
Articles concerning consensus: |
Discontinued, Limited Supply Available!
Interior: Shamisen - 3 stringed lutelike instrument played by geisha
Geisha Concisus Genus
Geisha are traditional Japanese entertainers. The word consists of two kanji - 芸 (gei) meaning "art" and 者 (sha) meaning "person" or "doer". Like all Japanese nouns, there are no distinct singular or plural variants. The most direct translation of geisha is "performing artist". Geisha were originally men, who served a purpose much like the travelling minstrels of medieval Europe. As the number of males studying the arts declined, females took over.

Most geisha lived in a house called an okiya, owned by a woman who was typically a former geisha. Geisha attended local schools that specialized in every area of training: music, dance, poetry and tea ceremony. As young girls approached apprenticeship age, the okiya would negotiate for a mature geisha to become a mentor. The "older sister" helped promote the apprentice and taught her the art of entertaining, from how to make witty conversation to how to pour sake. Their clothing was made up of several layers of kimono and undergarments, as many as 15, and an obi or sash was worn around the waist and tied in back. Dressing could take over an hour, even with professional help.

A popular view of geisha is that they were prostitutes. During the Edo period, courtesans, known as oiran, wore elaborate hairstyles and white makeup like geisha, but they tied their obi in front, an important distinction. Geisha were, first and foremost, entertainers. They attended parties, playing drinking games with the men, dancing and singing. A geisha's presence was considered essential to the success of a party. Several geisha meant the host was of great wealth and status. Some geisha had a personal patron or danna. The danna was a wealthy man who could afford to pay the geisha's expenses for school, lessons, private recitals and even clothing. With a wealthy danna, a geisha could afford to break with an okiya and live independently.

Today, young women who want to become geisha begin their training after completing junior high or high school or even college, with many starting in adulthood. Geisha still study traditional instruments like the shamisen, shakuhachi, flute and drums, as well as traditional dance, tea ceremony, literature and poetry. In the 1920s there were over 80,000 geisha, but now there are far fewer. The exact number is unknown, but it is estimated between 1,000 and 2,000, with Kyoto maintaining the strongest geisha tradition. |
Septennial Act, 1716. This Act prolonged the life of Parliament from a maximum of three years (as the 1694 Triennial Act required) to seven years. Its pretext was the Jacobite uprising in 1715. But by delaying the next election until 1722 the new Whig ministers succeeded in evading electoral judgement until they had consolidated themselves in power and weakened their Tory opponents. Following the ‘rage of party’ of Queen Anne's day, the longer periods between elections did much to quieten political life and entrench the Whigs in government for the next three decades. The Parliament Act of 1911 shortened the duration of parliaments to five years.
This article was featured on NeuroscienceNews.
Researchers train brains to use different regions for same task.
Practice might not always make perfect, but it’s essential for learning a sport or a musical instrument. It’s also the basis of brain training, an approach that holds potential as a non-invasive therapy to overcome disabilities caused by neurological disease or trauma.
Research at the Montreal Neurological Institute and Hospital of McGill University (The Neuro) has shown just how adaptive the brain can be, knowledge that could one day be applied to recovery from conditions such as stroke.
Researchers Dave Liu and Christopher Pack have demonstrated that practice can change the way that the brain uses sensory information. In particular, they showed that, depending on the type of training done beforehand, a part of the brain called the middle temporal area (MT) can be either critical for visual perception, or not important at all.
Previous research has shown the area MT is involved in visual motion perception. Damage to area MT causes “motion blindness”, in which patients have clear vision for stationary objects but are unable to see motion. Such deficits are somewhat mysterious, because it is well known that area MT is just one of many brain regions involved in visual motion perception. This suggests that other pathways might be able to compensate in the absence of area MT.
Most studies have examined the function of area MT using a task in which subjects view small dots moving across a screen and indicate how they see the dots moving, because this has been proven to activate area MT. To determine how crucial MT really was for this task, Liu and Pack used a simple trick: They replaced the moving dots with moving lines, which are known to stimulate areas outside area MT more effectively. Surprisingly, subjects who practiced this task were able to perceive visual motion perfectly even when area MT was temporarily inactivated.
On the other hand, subjects who practiced with moving dots exhibited motion blindness when MT was temporarily deactivated. The motion blindness persisted even when the stimulus was switched back to the moving lines, indicating that the effects of practice were very difficult to undo. Indeed, the effects of practice with the moving dot stimuli were detectable for weeks afterwards. The key lesson for brain training is that small differences in the training regimen can lead to profoundly different changes in the brain.
This has potential for future clinical use. Stroke patients, for example, often lose their vision as a result of brain damage caused by lack of blood flow to brain cells. With the correct training stimulus, one day these patients could retrain their brains to use different regions for vision that were not damaged by the stroke.
“Years of basic research have given us a fairly detailed picture of the parts of the brain responsible for vision,” says Christopher Pack, the paper’s senior author. “Individual parts of the cortex are exquisitely sensitive to specific visual features – colors, lines, shapes, motion – so it’s exciting that we might be able to build this knowledge into protocols that aim to increase or decrease the involvement of different brain regions in conscious visual perception, according to the needs of the subject. This is something we’re starting to work on now.”
Grounded in the belief we are all unique beings, we begin each new client with a meticulous bio-mechanical evaluation, assessing each joint in its relationship to the movement of the body as a whole. Our therapists are skilled at reading the unique story your body tells, and treating everything from the bottom of your foot to the top of your head.
Bodywise Physical Therapy is located in Portland, Oregon. The Bodywise approach is holistic, individualized, and can benefit people of all fitness levels. While Bodywise has always specialized in general orthopedics, spine rehabilitation, and sports medicine, they have evolved into a truly holistic practice integrating hands-on treatments with Mindfulness, Pilates, Trauma Release Exercise, Women's Health and Lymphedema.
What to do during a tsunami warning?
Afflicting coastal communities around the world, tsunamis are large wave surges that are caused by disturbances such as volcanic eruptions, landslides or earthquakes. Given that they’re typically caused by hard-to-predict events, it’s difficult to know exactly when a tsunami might strike. So, if you happen to live in a coastal region that could potentially be subjected to tsunamis, then it’s important that you prepare in advance. Here are a few suggestions on how to prepare for a possible tsunami.
What is a tsunami?
Tsunamis are large wave surges that are caused by events such as earthquakes or volcanic eruptions. When a large event, like an earthquake, displaces a large amount of water, that displaced water will travel until it hits land. Derived from the Japanese words for harbor (tsu) and wave (nami), tsunamis are found throughout the world (though they're most common in the Pacific), and they've been observed and recorded throughout history. In fact, according to NOAA, one of the first recorded tsunamis occurred in 2000 B.C.E., just off the coast of Syria.
How to prepare for a tsunami?
Preparing for a tsunami is relatively straightforward. If you live in a coastal region, you need to prepare an evacuation route ahead of time. Most coastal community governments have an evacuation route pre-planned, so make sure to study up on the route as soon as possible. Generally speaking, tsunamis are made up of a series of large wave surges: these aren't cleanly breaking waves that you might see someone surf. Instead, a tsunami is a wave surge, a chaotic wall of water that grows larger as it approaches land. And they can be fairly large. In fact, during the 2004 tsunami in Indonesia, some of the initial wave surges reached a height of over 50 meters (over 160 feet). Also, tsunamis generally consist of a series of waves, not just one, so once the first surge passes, additional waves could follow shortly thereafter. Because of the size and strength of a tsunami, it's inadvisable to remain in a home during a tsunami warning. Unless your home is outside of the predicted affected area, it's best to evacuate immediately. Make sure your car is packed with supplies such as dehydrated meals and water, and make sure that you take an emergency or survival kit (one that includes medical supplies) with you in case you don't have a chance to reach a shelter before the tsunami strikes.
What to do during a tsunami?
However, if don’t have the ability to relocate, or if you’re caught on the beach when a tsunami strikes, then it’s absolutely critical that you seek higher ground immediately. Though tsunami can’t be predicted, there are telltale signs that note that a tsunami is quickly approaching. If you’re standing on the beach, and the waters recede dramatically, exposing rocks, fish and large swaths of ocean floor, then that means that a tsunami is coming. You may only have a few minutes to seek higher ground before it strikes. If you can’t seek out higher ground, don’t seek shelter within a home or residence—most homes would not survive a direct tsunami strike. Instead, as a last resort, find a tall building with reinforced concrete walls, like an office building or hotel, and start climbing floors quickly.
What to do after a tsunami?
As noted previously, tsunamis don’t consist of a single wave surge—there are usually multiple waves, and the surges can last for hours. If you’ve found a safe spot, then stay there until an all-clear message is sounded. Then, follow any evacuation directions provided by NOAA or any government agencies. Don’t return to your home or any building within the impact zone unless the danger has completely passed.
If you hear a tsunami warning for your area, do your best to evacuate immediately. It's impossible to be truly prepared for an event like a tsunami, but even by studying up on evacuation routes and purchasing an emergency kit for your car, you're increasing your and your loved ones' chances of successfully surviving a tsunami.
Some of the differences between fertilization and double fertilization in flowering plants are as follows:

Fertilization:
1. It is the union of two compatible gametes.
2. It occurs in almost all eukaryotes.
3. Fertilization produces a diploid zygote.

Double fertilization:
1. It is the union of one male gamete with the egg and of the other male gamete with the secondary nucleus of the same embryo sac.
2. It is restricted to angiosperms only.
3. Double fertilization produces a diploid zygote and a triploid primary endosperm cell.
The Kongo language, or Kikongo, is the Bantu language spoken by the Bakongo and Bandundu people living in the tropical forests of the Democratic Republic of the Congo, the Republic of the Congo and Angola. It is a tonal language and formed the base for Kituba, a Bantu creole and lingua franca throughout much of west central Africa. It was spoken by many of those who were taken from the region and sold as slaves in the Americas. For this reason, while Kongo is still spoken in the above-mentioned countries, creolized forms of the language are found in the ritual speech of African-derived religions in Brazil, Jamaica, Cuba and especially Hispaniola (Haiti and the Dominican Republic). It is also one of the sources of the Gullah people's language and the Palenquero creole in Colombia. The vast majority of present-day speakers live in Africa. There are roughly seven million native speakers of Kongo, with perhaps two million more who use it as a second language.
NASA's Hubble Space Telescope was launched into Earth orbit in 1990. It is one of NASA's most versatile missions, focused not on a single question but on many aspects of astronomy. After twenty-seven years of service, the Hubble Space Telescope keeps making discoveries both within and beyond our Solar System.
A team of international astronomers led by the Max Planck Institute for Solar System Research made a striking discovery using the Hubble Space Telescope: a unique object, designated 288P, inside the Main Asteroid Belt between Mars and Jupiter. The asteroid is known for behaving like a comet and was first observed in September 2016.
The comet-like tails are a result of sublimation (a process in which a solid turns directly into vapour without passing through the liquid stage), which occurs when the asteroid comes close to the Sun. The object has been out there for more than 5,000 years.
The study was published in Nature under the title 'A Binary Main Belt Comet'. Various astronomers were involved in the research, including those from the University of California, the Lunar and Planetary Laboratory (University of Arizona), the Space Telescope Science Institute, and the Johns Hopkins University Applied Physics Laboratory.
The images sent by Hubble indicated that there wasn't just one asteroid but two, orbiting each other. The distance between the two is said to be about 100 km, and the asteroids are of similar mass and size. The object is aptly classified as a main-belt comet, since it is the first asteroid of its kind to show such signs of tail formation. The heat from the Sun caused the water ice there to sublimate.
This asteroid is truly unique because of the combination of features that sets it apart from others: no other known binary asteroid shows components of similar mass and size orbiting at such a distance, comet-like tails, and such an eccentric orbit. The history of binary asteroids can be clearly traced through 288P. It is thought to have contained ice since the early days of the solar system. Its unique properties cannot be explained as a one-time occurrence, nor as common to all binary comets.
To find out more, scientists need more 288P-type asteroids, but the discovery of 288P is already an opportunity to learn more about the origin of Mars and Jupiter.
This unique object might be the key to solving some of the mysteries of our solar system. It is another step in understanding the origin and evolution of the solar system and of life on Earth.
You may not be familiar with a disease called psittacosis. If you have pet birds such as parrot-like birds, you should know something about it. Psittacosis, or ornithosis, is a respiratory tract infection caused by the Chlamydia (or Chlamydophila) psittaci organism.
The sources of psittacosis include parakeets, parrots, macaws, and cockatiels, especially those that may have been smuggled into the country. Pigeons and turkeys are other sources of the disease. In most cases, this disease is spread to humans when they breathe in airborne dust particles from dried bird feces. Birds do not have to be sick to transmit the disease.
Transmission from person to person is very uncommon. Fortunately, this infection occurs rarely in children. The incubation period is a week or two but may be longer.
Signs and Symptoms
Children with psittacosis have mild flu-like symptoms that often include:
- A nonproductive cough
- A general sense of not feeling well and tiredness
Some patients develop pneumonia. On rare occasions, complications such as inflammation of the heart (myocarditis), lining of the heart (pericarditis), liver (hepatitis), and brain (encephalopathy) may occur.
When To Call Your Pediatrician
If your child has symptoms associated with psittacosis that don’t improve over several days and has been around pet birds, call your pediatrician.
How Is the Diagnosis Made?
Psittacosis is usually diagnosed by taking a medical history of the child, inquiring about exposure to birds, and evaluating the youngster’s symptoms. The diagnosis can be confirmed by blood tests that detect increases of antibodies to the bacteria.
Children with psittacosis are usually treated with azithromycin if they are younger than 8 years, and doxycycline if they are older.
What Is the Prognosis?
With proper treatment, the overwhelming majority of children recover fully from the infection.
If you have birds as pets, clean their cages frequently so their feces do not build up and become airborne. Only purchase birds from a trustworthy breeder or importer. Birds that are believed to be the source of a human infection need to be evaluated and treated by a veterinarian and may need antibiotics. Cages, food bowls, and water bowls that may be contaminated should be disinfected thoroughly, using a household disinfectant such as a 1:100 dilution of bleach or detergent, before they are used again.
During World War I, the United States felt they were lagging behind Europe in terms of airplane technology. Not to be outdone, Congress created the National Advisory Committee for Aeronautics [NACA]. They needed to have some very large propellers built for wind tunnel testing. Well, they had no bids, so they set up shop and trained men to build the propellers themselves in a fantastic display of coordination and teamwork. This week’s film is a silent journey into [NACA]’s all-human assembly line process for creating these propellers.
Each blade starts with edge-grained Sitka spruce boards that are carefully planed to some top-secret exact thickness. Several boards are glued together on their long edges and dried to about 7% moisture content in the span of five or so days. Once dry, the propeller contours are penciled on from a template and cut out with a band saw.
Alongside the common chimpanzee, the bonobo is the closest surviving relative of humans. Since the two species are not capable swimmers, the formation of the Congo River 1.5–2 million years ago probably led to the speciation of the bonobo. Bonobos live south of the river and were consequently separated from the ancestors of the common chimpanzee, which live north of it.
Fossils of Pan species were not described until 2005. Existing chimpanzee populations in West and Central Africa do not overlap with the major human fossil sites in East Africa. However, Pan fossils have now been reported from Kenya. This would indicate that both humans and members of the Pan clade were present in the East African Rift Valley during the Middle Pleistocene. According to A. Zihlman, bonobo body proportions closely resemble those of Australopithecus.
The bonobo is generally considered more gracile than the common chimpanzee. Although large male chimpanzees can exceed any bonobo in mass and weight, the two species actually overlap considerably in body size. Adult female bonobos are somewhat smaller than adult males. Body mass in males ranges from 34 to 60 kg (75 to 132 lb), against an average of 30 kg (66 lb) in females. The total length of bonobos (from the nose to the rump while on all fours) is 70 to 83 cm (28 to 33 in). When adult bonobos and chimpanzees stand up on their legs, they can both reach a height of 115 cm (45 in). The bonobo's head is relatively smaller than that of the common chimpanzee, with less prominent brow ridges above the eyes.
Most studies show that females have a higher social status in bonobo society. Aggressive encounters between males and females are rare, and males are tolerant of infants and juveniles. A male derives his status from the status of his mother. The mother–son bond often remains strong and continues throughout life. While social hierarchies do exist, rank plays a less prominent role than in other primate societies.
The limited research on bonobos in the wild has been taken to suggest that these matriarchal behaviors may be exaggerated by captivity, and by food provisioning by scientists in the field.
Bonobo males occasionally engage in various forms of male–male genital behavior, the non-human analogue of frotting engaged in by human males. In one form, two bonobo males hang from a tree limb face-to-face while penis fencing. This may also occur when two males rub their penises together while in a face-to-face position. Another form of genital interaction (rump rubbing) occurs to express reconciliation between two males after a conflict, when they stand back-to-back and rub their scrotal sacs together. Takayoshi Kano observed similar practices among bonobos in their natural habitat.
Observations in the wild indicate that males among the related common chimpanzee communities are extraordinarily hostile to males from outside the community. Parties of males "patrol" for neighboring males that might be travelling alone, and attack those single males, often killing them. This does not appear to be the behavior of bonobo males or females, which seem to prefer sexual contact over violent confrontation with outsiders. In fact, the Japanese scientists who have spent the most time working with wild bonobos describe the species as extraordinarily peaceful, and de Waal has documented how bonobos may often resolve conflicts with sexual contact (hence the "make love, not war" characterization for the species).
As the bonobos' habitat is shared with people, the ultimate success of conservation efforts will depend on local and community involvement. The issue of parks versus people is salient in the Cuvette Centrale, the bonobos' range. There is strong local and broad-based Congolese resistance to establishing national parks, as indigenous communities have often been driven from their forest homes by the creation of parks. In Salonga National Park, the only national park in the bonobo habitat, there is no local involvement, and surveys undertaken since 2000 indicate that the bonobo, the African forest elephant, and other species have been severely devastated by poachers and the thriving bushmeat trade. In contrast, areas exist where the bonobo and biodiversity nevertheless thrive without any established parks, due to indigenous beliefs and taboos against killing bonobos.
To celebrate the launch of Project Fourth edition, author of the pronunciation SIG journal, Robin Walker explores the place of pronunciation in the upper primary classroom.
A few years ago I was crossing the playground in Spain, on my way to a training session with local teachers. As I was going past two young girls I heard one of them say ¿Jugamos al inglés? (Let's play English). The idea of ‘playing English’ roused my curiosity, and I stopped and eavesdropped. What followed was a stream of sh- and z-like sounds with not a word of actual English among them. But the rhythm was very English, and very un-Spanish.
By the time they get to the 9-15 age group, young learners are usually very aware that English feels and sounds different to their mother tongue. This makes this a great age for working on pronunciation, and offers us an opportunity to sow seeds that will produce very tangible benefits. We know from experience, for example, that poor pronunciation means poor fluency – you can’t be fluent if you can’t get your tongue around a sound, or get a short phrase out of your mouth. In fact, learners actually avoid words or grammatical structures that they find difficult to pronounce, and as teachers we are sometimes guilty of misinterpreting these ‘gaps’ in production as gaps in a learner’s knowledge or understanding.
But poor fluency isn’t the only outcome of poor pronunciation. Listening is a nightmare for students with limited pronunciation skills, either because they simply don’t recognise key sounds or words in their spoken form, or because they have to concentrate so hard when listening that their brains very quickly overload and ‘block’. When we spot problems with listening we are tempted to respond by doing more listening work, and are frustrated when this has no effect. What is needed, of course, is focused pronunciation work.
Although problems with speaking and listening are obvious to us, poor pronunciation can also badly affect reading and writing. At the level of writing, for example, students might write coffee instead of copy, or berry instead of very. My tourism students used to write Festival at the beginning of a series of points in favour of an argument. At first I didn’t understand where this was coming from. Then they told me that I said this a lot in class. What do you think I was saying? (Answer below*)
More important than writing, however, is the dramatic impact of poor pronunciation on reading. At the end of her talk at the 2008 IATEFL Conference, researcher and OUP author Catherine Walter told the audience that if they wanted their learners to read better, they would have to improve their pronunciation. She was basing this invaluable piece of advice on academic research into how we read in English as an additional language.
Speaking, listening, writing, reading – competence in all four skills is closely related to competence in pronunciation. The same is obviously true for learning vocabulary, where doubts about the pronunciation of words make it very difficult for learners to remember them. Even grammar is related to good pronunciation, which is why the Oxford English Grammar Course is accompanied by a pronunciation CD.
What can we do on a daily basis to help our students with pronunciation? Well festival first of all, show your learners that pronunciation matters. Don’t skip the pronunciation exercises in your coursebook because of lack of time. They are too important. At the same time, don’t do exercises that aren’t relevant to your students. The difference between /b/ and /v/ matters for Spanish students of English, but not for students of many other first language backgrounds. Stick to what matters.
Integrate pronunciation into normal lessons. Don’t leave it for Friday afternoons because we all know that what we do then isn’t important. (This may not be true for you but it’s what learners often think). Integrate pronunciation into learning new vocabulary, or learning a new structure. If you are teaching advice with ‘If I were you’, insist on good sentence stress and rhythm, so that students say if I were you and not if I were YOU. If you are teaching frequency adverbs make sure that your learners are saying SOMEtimes or OFten as opposed to someTIMES and ofTEN. In other words, insist on correct word stress.
Insist on accuracy but don’t demand perfection. Insisting on good pronunciation is the first way of showing that it matters. Demanding perfection is the best way of failing, since many learners lose interest in pronunciation on seeing that they can never get it right. And what is perfect, anyway? The identical imitation of the voice on the CD? Out of necessity, coursebooks model pronunciation using a standard accent, but we mustn’t confuse the CD model with our learners’ goal, which is to be intelligible. Intelligibility is something that you can achieve in many different accents, both native speaker and non-native speaker.
Work on pronunciation, then. And enjoy working on it. But most of all, make sure your learners enjoy working with you.
(*Answer. In class I often begin a sequence of instructions by saying ‘First of all’, which my students heard as ‘Festival’.)
Learn and practice Verbal Ability, aptitude preparation, and error correction questions and answers with explanations for interviews, placement tests, online tests, competitive examinations and entrance tests.
The Sentence Correction section includes different types of questions. These questions are designed to test your ability to identify written English that is grammatically correct. They also test your ability to understand the essential message conveyed in a sentence. Therefore, understanding the essential and discarding the unimportant or non-essential is the key point to focus on while attempting these types of questions.
When we analyze placement papers, we find that there are different patterns employed to test these questions on Sentence Correction.
Choosing the Grammatically Correct Sentences
In this type of question, four sentences are given and we are asked to choose the grammatically correct sentence. There is no underlined part so you have to observe the entire sentence for its accuracy and grammar.
Choosing the best alternative.
This is a different type of question where a part of the sentence is highlighted or underlined. You have to choose the best alternative from among the four given sentences.
Identifying the incorrect sentence or Sentences.
In this type of question, four sentences are given, usually connected to one another. You have to identify the incorrect sentences. At times, out of the four given sentences, three may be incorrect, and at times one or two may be incorrect, so you have to study the sentences with concentration.
Identifying the incorrect or inappropriate usage.
Here, the different usages are tested. It may be a particular use of a word, or a particular usage of phrases. You have to choose the option in which the usage is inappropriate or incorrect.
To score well in the above sections, you need to know standard English grammar. You must be able to recognize the various parts of speech and identify the ways they are used incorrectly in test questions.
Mainly, your attention should be focused on the tenses of verbs, word order, word form, agreement of the verb with the subject, the difference between principal verbs and auxiliary verbs, the usage of infinitives and gerunds, and the proper usage of prepositions. You must also have a solid understanding of the different idiomatic phrases and the link between one clause and another, i.e. the principal clause and the subordinate clause.
Strategies to solve questions on choosing grammatically correct sentences
The first thing to do is to go through all four sentences quickly. A common mistake committed by examinees is that the moment they find one error, they immediately choose that option. There may be multiple errors in a sentence. Therefore, while choosing the correct sentence, you have to be careful: the correct answer must correct all the errors. Intelligent reading will help you to make a judicious selection.
While reading the options you may find one or two sentences with glaring grammatical mistakes. Obviously, what you should do is shortlist your options, then closely concentrate on the one or two shortlisted options out of the four given.
Do not look for spelling errors or errors in the use of capital letters and punctuation marks. In this type of question, you can take it for granted that errors pertaining to spelling, the use of capital letters and punctuation marks are never included.
Look out for grammatical errors. There are different types of grammatical errors. You have to concentrate chiefly on the following kinds of errors.
1) Errors of subject verb agreement or concord of the verb with the subject.
2) Errors based on the wrong usage of certain words or groups of words.
3) Errors in the use of pronoun
4) Errors in the use of Tenses
5) Errors in the use of Certain Nouns, Adjectives, Adverbs.
6) Errors in the use of Infinitives and gerunds.
1. a. I am not one of those who believe everything they hear.
b. I am not one of these who believes everything I hear.
c. I am not one of those who believes everything he hears.
d. I am not one of those who believes in everything one hears.
2. a. Cannot one do what one likes with one’s own?
b. Cannot one do that one likes to do with his own?
c. Cannot one do that one likes with his own?
d. Cannot one do what he likes with his own?
3. a. There’s Mr. Som, whom they say is the best singer in the country.
b. There’s Mr. Som, who they say is the best singer in the country.
c. There is Mr. Som, whom they say is the best singer in the country.
d. There is Mr. Som who, they say is the best s inger in the country.
4. a. Each of the students has done well.
b. Each of the student has done well.
c. Each of the students have done well.
d. Each of the student have done well.
5. a. Today we love, what tomorrow we hate; today we seek, what tomorrow we shun,
today we desire, what tomorrow we fear.
b. Today, we love what tomorrow we hate, today, we seek what tomorrow we shun,
today, we desire what tomorrow we fear.
c. Today we love what tomorrow we hate, today we seek what tomorrow we shun,
today we desire what tomorrow we fear.
d. Today we love what tomorrow we hate; today we seek what tomorrow we shun;
today we desire what tomorrow we fear.
Directions for Questions 6 to 8. In each question, the word given is used in four different ways, numbered I to 4. Choose the option in which the usage of the word is incorrect or inappropriate
a. Nagasaki suffered from the fallout of nuclear radiation.
b. People believed that the political fallout of the scandal would be insignificant.
c. Who can predict the environmental fallout of the WTO agreements?
d. The headmaster could not understand the fallout of several of his good students at
the Public examination.
a. She did not have passing marks in mathematics
b. The mad woman was cursing everybody passing her on the road.
c. At the birthday party all the children enjoyed a game of passing the parcel.
d. A passing taxi was stopped to rush the accident victim to the hospital
a. The shopkeeper showed us a bolt of fine silk.
b. As he could not move , he made a bolt for the gate.
c. Could you please bolt the door?
d. The thief was arrested before he could bolt from the scene of the crime.
1.a; 2.a; 3.b; 4.a; 5.d; 6.d; 7.a; 8.b
Compare proportions for two or more groups in the data
The compare proportions test is used to evaluate if the frequency of occurrence of some event, behavior, intention, etc. differs across groups. The null hypothesis for the difference in proportions across groups in the population is set to zero. We test this hypothesis using sample data.
We can perform either a one-tailed test (i.e., less than or greater than) or a two-tailed test (see the Alternative hypothesis dropdown). A one-tailed test is useful if we want to evaluate whether the available sample data suggest that, for example, the proportion of dropped calls is larger (or smaller) for one wireless provider compared to others.
We will use a sample from a dataset that describes the survival status of individual passengers on the Titanic. The principal source for data about Titanic passengers is the Encyclopedia Titanica. One of the original sources is Eaton & Haas (1994) Titanic: Triumph and Tragedy, Patrick Stephens Ltd, which includes a passenger list created by many researchers and edited by Michael A. Findlay. Let's focus on two variables in the database: survived, which records whether a passenger survived the sinking (yes or no), and pclass, the passenger's class (1st, 2nd, or 3rd).
Suppose we want to test if the proportion of people that survived the sinking of the Titanic differs across passenger classes. To test this hypothesis we select pclass as the grouping variable and calculate proportions of yes (Choose level) for survived (Variable (select one)). In the Choose combinations box, select all available entries to conduct pair-wise comparisons across the three passenger class levels. Note that removing all entries will automatically select all combinations. Unless we have an explicit hypothesis for the direction of the effect we should use a two-sided test (i.e., two.sided). Our first alternative hypothesis would be ‘The proportion of survivors amongst 1st class passengers was different compared to 2nd class passengers’.
The first two blocks of output show basic information about the test (e.g., selected variables and confidence levels) and summary statistics (e.g., proportions, standard errors, etc. per group). The final block of output shows the following:
- Null hyp. is the null hypothesis and Alt. hyp. the alternative hypothesis
- diff is the difference between the sample proportion for two groups (e.g., 0.635 - 0.441 = 0.194). If the null hypothesis is true we expect this difference to be small (i.e., close to zero)
- p.value is the probability of finding a value as extreme or more extreme than diff if the null hypothesis is true

If we check Show additional statistics the following output is added:

- chisq.value is the chi-squared statistic associated with diff that we can compare to a chi-squared distribution. For additional discussion on how this metric is calculated see the help file in Basics > Tables > Cross-tabs. For each combination the equivalent of a 2X2 cross-tab is calculated.
- df is the degrees of freedom associated with each statistical test (1).
- 2.5% 97.5% show the 95% confidence interval around the difference in sample proportions. These numbers provide a range within which the true population difference is likely to fall.
There are three approaches we can use to evaluate the null hypothesis. We will choose a significance level of 0.05.1 Of course, each approach will lead to the same conclusion.
Because the p.values are smaller than the significance level for each pair-wise comparison we can reject the null hypothesis that the proportions are equal based on the available sample of data. The results suggest that 1st class passengers were more likely to survive the sinking than either 2nd or 3rd class passengers. In turn, the 2nd class passengers were more likely to survive than those in 3rd class.
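Outside of Radiant, the p.value approach can be sketched with a two-proportion z-test in Python. The counts below are made-up placeholders, not the actual Titanic sample, so the numbers will differ from those quoted above:

```python
# Minimal sketch of the p.value approach with assumed counts (illustration only).
from statsmodels.stats.proportion import proportions_ztest

survivors = [120, 80]    # survivors in 1st and 2nd class (assumed counts)
passengers = [190, 180]  # passengers in each class (assumed counts)

z_stat, p_value = proportions_ztest(survivors, passengers, alternative="two-sided")
print(f"z = {z_stat:.3f}, p.value = {p_value:.4f}")

# Reject the null hypothesis at the 5% significance level when p_value < 0.05.
```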
Because zero is not contained in any of the confidence intervals we reject the null hypothesis for each evaluated combination of passenger class levels.
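The confidence-interval approach can be illustrated the same way. This is a minimal sketch that computes a Wald interval for the difference in proportions from the same assumed counts; it only approximates the interval that prop.test reports:

```python
# Wald confidence interval for a difference in proportions (assumed counts).
import numpy as np
from scipy.stats import norm

x1, n1 = 120, 190   # survivors / passengers in 1st class (assumed)
x2, n2 = 80, 180    # survivors / passengers in 2nd class (assumed)

p1, p2 = x1 / n1, x2 / n2
diff = p1 - p2
se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z = norm.ppf(0.975)                      # two-sided 95% interval

lower, upper = diff - z * se, diff + z * se
print(f"diff = {diff:.3f}, 95% CI = [{lower:.3f}, {upper:.3f}]")

# Reject the null hypothesis if zero falls outside the interval.
```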
Because the calculated chi-squared values (20.576, 104.704, and 25.008) are larger than the corresponding critical chi-squared value, we reject the null hypothesis for each evaluated combination of passenger class levels. We can obtain the critical chi-squared value by using the probability calculator in the Basics menu. Using the test for 1st versus 2nd class passengers as an example, we find that for a chi-squared distribution with 1 degree of freedom (see df) and a confidence level of 0.95, the critical chi-squared value is 3.841.
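The chi-squared approach can be checked directly as well. The sketch below builds the equivalent 2x2 cross-tab for two hypothetical passenger classes, computes the statistic without a continuity correction (prop.test applies Yates' correction by default, so results will differ slightly), and compares it to the 3.841 critical value:

```python
# Chi-squared test on an assumed 2x2 cross-tab (rows = class, columns = survived yes/no).
import numpy as np
from scipy.stats import chi2, chi2_contingency

table = np.array([[120, 70],    # 1st class: survived, did not survive (assumed)
                  [80, 100]])   # 2nd class: survived, did not survive (assumed)

chisq, p_value, df, expected = chi2_contingency(table, correction=False)
critical = chi2.ppf(0.95, df=1)          # 3.841 for df = 1 and a 0.95 confidence level

print(f"chisq.value = {chisq:.3f}, df = {df}, p.value = {p_value:.4f}")
print(f"critical value = {critical:.3f}, reject H0: {chisq > critical}")
```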
In addition to the numerical output provided in the Summary tab, we can also investigate the association between pclass and survived visually (see the Plot tab). The screenshot below shows two bar charts. The first chart has confidence interval (black) and standard error (blue) bars for the proportion of yes entries for survived in the sample. Consistent with the results shown in the Summary tab, there are clear differences in the survival rate across passenger classes. The Dodge chart shows the proportions of survived side-by-side for each passenger class. While 1st class passengers had a higher proportion of yes than no, the opposite holds for the 3rd class passengers.
The underlying calculations use the prop.test function to compare proportions. When one or more expected values are small (e.g., 5 or less) the p.value for this test is calculated using simulation methods. When this occurs it is recommended to rerun the test using Basics > Tables > Cross-tabs and evaluate if some cells may have an expected value below 1.
For one-sided tests (i.e., Less than or Greater than), critical values must be obtained by using the normal distribution in the probability calculator and squaring the corresponding Z-statistic.
The more comparisons we evaluate the more likely we are to find a “significant” result just by chance even if the null hypothesis is true. If we conduct 100 tests and set our significance level at 0.05 (or 5%) we can expect to find 5 p.values smaller than or equal to 0.05 even if there are no associations in the population.
A Bonferroni adjustment ensures the p.values are scaled appropriately given the number of tests conducted. This XKCD cartoon expresses the need for this type of adjustment very clearly.
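As a minimal sketch of such an adjustment (the three p-values below are placeholders, not the values from the output discussed above), statsmodels can apply a Bonferroni correction across all pair-wise comparisons:

```python
# Bonferroni adjustment across multiple pair-wise comparisons (placeholder p-values).
from statsmodels.stats.multitest import multipletests

p_values = [0.0004, 0.000001, 0.00002]   # e.g., 1st-2nd, 1st-3rd, 2nd-3rd comparisons

reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="bonferroni")
for raw, adj, rej in zip(p_values, p_adjusted, reject):
    print(f"raw p = {raw:.6f}, adjusted p = {adj:.6f}, reject H0: {rej}")
```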
This is a comparison of proportions test of the null hypothesis that the true population difference in proportions is equal to 0. Using a significance level of 0.05, we reject the null hypothesis for each pair of passengers classes evaluated, and conclude that the true population difference in proportions is not equal to 0.
The p.value for the test of differences in the survival proportion for 1st versus 2nd class passengers is < .001. This is the probability of observing a sample difference in proportions that is as or more extreme than the sample difference in proportion from the data if the null hypothesis is true. In this case, it is the probability of observing a sample difference in proportions that is less than -0.194 or greater than 0.194 if the true population difference in proportions is 0.
The 95% confidence interval is 0.112 to 0.277. If repeated samples were taken and the 95% confidence interval computed for each one, the true difference in population proportions would fall inside the confidence interval in 95% of the samples.
1 The significance level, often denoted by \(\alpha\), is the highest probability you are willing to accept of rejecting the null hypothesis when it is actually true. A commonly used significance level is 0.05 (or 5%) |
The Anglo-Austrian Alliance connected the Kingdom of Great Britain and the Habsburg monarchy during the first half of the 18th century. It was largely the work of the British statesman Duke of Newcastle, who considered an alliance with Austria crucial to prevent the further expansion of French power.
It lasted from 1731 to 1756 and formed part of the stately quadrille by which the Great Powers of Europe continually shifted their alliances to try to maintain the balance of power in Europe. Its collapse during the Diplomatic Revolution ultimately led to the Seven Years' War.
In 1725 Austria had signed the Treaty of Vienna, offering material support to the Spanish in their efforts to try to take back Gibraltar from the British. Britain was then allied to France, but their relationship was slowly declining, and by 1731, they would be considered enemies again. When, in 1727, the Spanish besieged Gibraltar during the Anglo-Spanish War, British diplomats persuaded the Austrians not to assist the Spanish by offering a number of concessions. A humiliated Spain was forced to break off the siege and make peace.
A number of prominent Austrophiles had for some time been advocating a British alliance with Austria, as the Austrians were seen as the only country with land forces that could match the French on the Continent. They received a boost when the greatest opponent of Austria, Lord Townshend was forced to resign from office in 1730. That cleared the way for a full rapprochement between London and Vienna and gave the Duke of Newcastle more control over British foreign policy. He was strongly convinced that an alliance with Austria was essential.
In 1727, the Austrians had agreed to suspend the Ostend Company, whose overseas trading had been a constant source of tension with the British. That laid the groundwork for the Treaty of Vienna, which instituted a formal alliance between the two powers. It was signed on 16 March 1731 by Count Zinzendorf and the Earl of Chesterfield. One immediate result was the complete disbandment of the Ostend Company, which delighted the British government. Britain and Austria gave each other a reciprocal guarantee against aggression.
The British gave material support to the Austrians in the War of the Austrian Succession in the form of British troops and providing large financial subsidies that allowed Maria Theresa to secure the Austrian throne, in defiance of Salic Law. By 1745, Austria had appeared to be in serious danger of being completely overrun and partitioned by Prussia and France, but a British campaign against the French in Flanders drew away crucial French manpower, allowing the Austrians to counterattack.
The British had also applied diplomatic pressure to persuade Prussia's Frederick the Great to agree a ceasefire at the Treaty of Dresden so the Austrians could turn their full attention against the French.
The Alliance was sometimes severely strained. The Austrians believed the British had done little to prevent France from occupying Brussels in 1746, which led to a further increase of conflicts. The worst was during the Congress of Breda, aimed at negotiating an end to the war and leading to the eventual settlement at Aix-la-Chapelle in 1748.
The British, hoping for a swift conclusion, were annoyed by Austria's slow progress in agreeing the terms. They eventually threatened to sign the treaty alone if Austria did not agree to it within three weeks. Austria reluctantly signed the treaty. It was particularly disturbed to have gained little materially for its efforts in the war, although the British considered the terms Austria received from the French to be very generous.
In spite of that, the omens looked good for the alliance. The Austrians had an enthusiastic supporter in Newcastle and apparently had no other major ally to turn to. The British regarded the alliance as part of the Newcastle System to maintain the security of Germany by creating an alliance between Britain, Hanover, Austria and the Dutch Republic.
In Austria there remained amongst some nagging suspicion that the British were not fully committed to the alliance. They highlighted Britain's absence from the War of the Polish Succession and its failure to insist on a return of Silesia to Austria at the Treaty of Aix-la-Chapelle, as signs of Britain's bad faith. Essentially, they believed, Britain was interested in the alliance only when it suited their own goals. One of the leading anti-British influences was Wenzel Anton Graf Kaunitz, who became Minister for Foreign Affairs in 1753.
In 1756, suspecting that Prussia was about to launch an invasion of Bohemia and fearing that the British would do nothing to help them because of a preoccupation with a dispute with France over the Ohio Country, Austria concluded an alliance with its traditional enemy, France. Britain, left out in the cold, made a hasty alliance with Prussia, hoping that the new balance of power would prevent war.
Unable to control its Prussian ally, Frederick the Great, who attacked Austria in 1756, Britain honoured its commitment to the Prussians and forged the Anglo-Prussian alliance. Although Britain and Austria did not declare war against each other, they were now aligned in opposing coalitions in a major European war. During the Capture of Emden in 1758, British and Austrian forces came close to open warfare. In spite of its efforts during the war, Austria was ultimately unable to retake Silesia, and the 1763 Treaty of Paris confirmed Prussian control of it.
Britain had been growing increasingly less favourable to Austria, and the Austrophiles in Britain saw their influence decrease during and after the Seven Years' War. Austria was by now seen as increasingly autocratic and resistant to the spread of British liberal democracy.
In 1778, when France entered the American War of Independence to try to assist the American colonists in gaining their independence, Britain sought Austrian support for its efforts to put down the rebellion. Austria's entry into the war, it was believed, would have drawn off French troops that were being sent to America. However, Austria refused to even seriously consider the proposal.
Britain and Austria later again became allies during the Napoleonic Wars, but they were both part of a broader anti-French coalition, and the relationship was nowhere near as close as it had been during the era of the Alliance. Once again, British subsidies became crucial to putting Austrian armies in the field, such as during the Flanders campaign of 1793–1794, when they received £1 million.
- Browning p.48
- Simms p.215-221
- Browning p.55
- Simms p.219
- Simms p.338
- Browning p.154
- Browning p.56
- Anderson p.128-29
- Anderson, Fred. Crucible of War: The Seven Years' War and the Fate of Empire in British North America, 1754-1766. Faber and Faber, 2001
- Browning, Reed. The Duke of Newcastle. Yale University Press, 1975.
- McLynn, Frank. 1759: The Year Britain Became Master of the World. Pimlico, 2005.
- Murphy, Orville T. Charles Gravier, Comte de Vergennes: French Diplomacy in the Age of Revolution. New York Press, 1982.
- Simms, Brendan. Three Victories and a Defeat: The Rise and Fall of the First British Empire. Penguin Books, 2008.
- Whiteley, Peter. Lord North: The Prime Minister who lost America. The Hambledon Press, 1996.
Stroke is the 5th leading cause of death in the U.S. and a major cause of long-term disability. It occurs when the blood supply to the brain is interrupted or reduced, depriving the brain of oxygen and causing brain tissue to die. The most common type of stroke is ischemic, which is caused by blockages or narrowing of the arteries. Hemorrhagic stroke is caused by arteries in the brain either leaking blood or bursting open.

In the U.S., approximately 40% of stroke deaths occur in men, and 60% in women. African-American individuals have nearly twice the risk of stroke and a much higher death rate. These are sobering statistics, but the good news is that 80% of strokes can be prevented. Even if a stroke occurs, its damage can be minimized if the patient is treated quickly.

As a designated Primary Stroke Center, our hospital meets the highest standards for treatment of stroke patients, including speed of care and innovative procedures that prevent death and minimize brain damage.
Signs of Stroke
When a stroke occurs, time is of the essence. For this reason, everyone should learn how to recognize the signs of stroke. One of the most effective ways to do this is by memorizing the acronym F.A.S.T.
F = FACE: Ask the person to smile. Does one side of their face droop?
A = ARMS: Ask the person to raise both arms. Does one arm drift downward?
S = SPEECH: Ask the person to repeat a simple sentence. Does their speech sound slurred?
T = TIME: If you observe any of these signs (independently or together), call 911.
Patients who have suffered a stroke sometimes report other symptoms, including:
- Sudden loss of balance
- Severe headache
- Paralysis of the face, arm or leg
- Changes in vision
Reduce Your Risk
You can reduce your risk of stroke by living a healthy lifestyle and getting regular checkups with your physician. Certainly, if you have a family history of stroke or cardiovascular disease, you should talk to your doctor about preventive measures.
You can significantly reduce your risk of stroke if you:
- Eat a healthy diet
- Get regular exercise
- Don’t smoke
- Limit your alcohol intake
- Control your cholesterol and blood pressure levels
Systems, in one sense, are devices that take input and produce an output. A system can be thought to operate on the input to produce the output. The output is related to the input by a certain relationship known as the system response. The system response usually can be modeled with a mathematical relationship between the system input and the system output.
Physical systems can be divided up into a number of different categories, depending on particular properties that the system exhibits. Some of these system classifications are very easy to work with and have a large theory base for analysis. Some system classifications are very complex and have still not been investigated with any degree of success. By properly identifying the properties of a system, certain analysis and design tools can be selected for use with the system.
The initial time of a system is the time before which there is no input. Typically, the initial time of a system is defined to be zero, which will simplify the analysis significantly. Some techniques, such as the Laplace Transform require that the initial time of the system be zero. The initial time of a system is typically denoted by t0. The value of any variable at the initial time t0 will be denoted with a 0 subscript. For instance, the value of variable x at time t0 is given by:
x(t0) = x0

Likewise, any time t with a positive subscript is a point in time after t0, in ascending order:

t0 ≤ t1 ≤ t2 ≤ ··· ≤ tn

So t1 occurs after t0, and t2 occurs after both points. In a similar fashion, a variable with a positive subscript (unless specifying an index into a vector) also occurs at that point in time:

x(t1) = x1
x(t2) = x2

This is valid for all points in time t.
A system satisfies the property of additivity, if a sum of inputs results in a sum of outputs. By definition: an input of x3(t)=x1(t)+x2(t) results in an output of y3(t)=y1(t)+y2(t). To determine whether a system is additive, use the following test: Given a system f that takes an input x and outputs a value y, assume two inputs (x1 and x2) produce two outputs:
y1 = f(x1)
y2 = f(x2)

Now, create a composite input that is the sum of the previous inputs:

x3 = x1 + x2

Then the system is additive if the following equation is true:

y3 = f(x3) = f(x1 + x2) = f(x1) + f(x2) = y1 + y2
A system satisfies the condition of homogeneity if an input scaled by a certain factor produces an output scaled by that same factor. By definition: an input of ax1 results in an output of ay1. In other words, to see if function f() is homogeneous, perform the following test: Stimulate the system f with an arbitrary input x to produce an output y:
y = f(x)

Now, create input x1, scale it by a multiplicative factor C (C is an arbitrary constant value), and produce a corresponding output y1:

y1 = f(Cx1)

Now, assign x to be equal to x1:

x1 = x

Then, for the system to be homogeneous, the following equation must be true:

y1 = f(Cx) = Cf(x) = Cy
A system is considered linear if it satisfies the conditions of Additivity and Homogeneity. In short, a system is linear if the following is true: Take two arbitrary inputs, and produce two arbitrary outputs:
y1 = f(x1)
y2 = f(x2)

Now, a linear combination of the inputs should produce a linear combination of the outputs:

f(Ax + By) = f(Ax) + f(By) = Af(x) + Bf(y)

This condition of additivity and homogeneity is called superposition. A system is linear if it satisfies the condition of superposition.
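As a rough numerical sketch of this test (the example system, signals, and tolerance below are arbitrary assumptions, not part of the text above), superposition can be checked by sampling a candidate system on test inputs:

```python
import numpy as np

def system(x):
    """Candidate system under test: y(t) = 3*x(t), which is linear."""
    return 3 * x

def satisfies_superposition(f, x1, x2, a=2.0, b=-1.5, tol=1e-9):
    """Check f(a*x1 + b*x2) == a*f(x1) + b*f(x2) on sampled signals."""
    lhs = f(a * x1 + b * x2)
    rhs = a * f(x1) + b * f(x2)
    return np.allclose(lhs, rhs, atol=tol)

t = np.linspace(0.0, 1.0, 100)
x1 = np.sin(2 * np.pi * t)
x2 = np.cos(2 * np.pi * t)

print(satisfies_superposition(system, x1, x2))            # True: linear system
print(satisfies_superposition(lambda x: x**2, x1, x2))    # False: squaring is nonlinear
```

A numerical check like this cannot prove linearity for every possible input, but a single failed check is enough to show that a system is not linear.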
A system is said to have memory if the output from the system is dependent on past inputs (or future inputs!) to the system. A system is called memoryless if the output is only dependent on the current input. Memoryless systems are easier to work with, but systems with memory are more common in digital signal processing applications. Systems that have memory are called dynamic systems, and systems that do not have memory are static systems.
Causality is a property that is very similar to memory. A system is called causal if it is only dependent on past and/or current inputs. A system is called anti-causal if the output of the system is dependent only on future inputs. A system is called non-causal if the output depends on past and/or current and future inputs. A system design that is not causal cannot be physically implemented. If the system can't be built, the design is generally worthless.
A system is called time-invariant if the system relationship between the input and output signals is not dependent on the passage of time. If the input signal x(t) produces an output y(t) then any time shifted input, x(t+δ), results in a time-shifted output y(t+δ) This property can be satisfied if the transfer function of the system is not a function of time except expressed by the input and output. If a system is time-invariant then the system block is commutative with an arbitrary delay.
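A discrete-time sketch of this test compares shifting the input before the system with shifting the output after it. The circular shift and the two example systems below are illustrative assumptions only:

```python
import numpy as np

def is_time_invariant(f, x, shift):
    """Compare f applied to a shifted input with a shifted version of f(x).

    np.roll performs a circular shift, which is adequate for this illustration
    but is not a true delay at the edges of a finite signal.
    """
    y_shifted_input = f(np.roll(x, shift))
    y_shifted_output = np.roll(f(x), shift)
    return np.allclose(y_shifted_input, y_shifted_output)

n = np.arange(200)
x = np.sin(0.05 * n)

print(is_time_invariant(lambda s: 2 * s + 1, x, 10))   # True: gain and offset do not depend on time
print(is_time_invariant(lambda s: n * s, x, 10))       # False: the gain changes with time
```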
It is the job of a control engineer to analyze existing systems, and to design new systems to meet specific needs. Sometimes new systems need to be designed, but more frequently a controller unit needs to be designed to improve the performance of existing systems. When designing a system, or implementing a controller to augment an existing system, we need to follow some basic steps:
An external description of a system relates the system input to the system output without explicitly taking into account the internal workings of the system. The external description of a system is sometimes also referred to as the Input-Output Description of the system, because it only deals with the inputs and the outputs to the system
If the system can be represented by a mathematical function h(t, r), where t is the time that the output is observed and r is the time that the input is applied, then we can relate the system function h(t, r) to the input x and the output y through the use of an integral:

y(t) = ∫ h(t, r) x(r) dr

where, for a linear system, the integral is taken over the times r at which the input is applied.
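As a small numerical sketch (the impulse response, input, and step size are assumed examples, not taken from the text), the integral can be approximated with a Riemann sum:

```python
import numpy as np

# Discretize y(t) = integral of h(t, r) * x(r) dr with a simple Riemann sum.
t0, t_end, dt = 0.0, 5.0, 0.01
t = np.arange(t0, t_end, dt)

def h(t_obs, r):
    """Example system function: a causal, decaying exponential response."""
    return np.exp(-(t_obs - r)) * (t_obs >= r)

x = np.where(t >= 1.0, 1.0, 0.0)   # unit step input applied at t = 1

y = np.array([np.sum(h(ti, t) * x) * dt for ti in t])
print(y[-1])   # the step response settles toward 1
```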
We provide online assignment help services in control systems at the graduate and engineering level through assignmenthelp.net. These services are affordable, easy to access, and available all over the world.
You will get answers to the problems in your assignments and homework through our Assignment Help services, at any level from school to university. Getting control systems assignment help is quick: send your assignment to us by e-mail and state the date by which you need it back. Control systems is a complex subject in engineering, but we will make it easy for you with the help of our experts. Control Systems Assignment Help also assists students with control systems lesson plans and worksheets.
Most of us are probably familiar with the term engine braking, but have you ever stopped to think just how it works?
Some of you that are a little more mechanically minded probably have a pretty good idea, but there are actually 3 different methods of engine braking that can be used depending on the vehicle. Most commonly, engine braking is used on larger transport trucks or vehicles that would otherwise be hard to slow down with rotary brakes.
However, there is a method of engine braking in gasoline-powered engines, as well as two other methods for diesel engines. Essentially, engine braking is using the retarding forces inside an engine (friction, compression, etc.) to slow down the movement of the rotor, and, as a result, the car. You can check out a super in-depth explanation of all of the different kinds of engine braking in the video below from the Engineering Explained YouTube channel.
The first method to discuss is gasoline engine braking, which utilizes the formation of a vacuum to slow down the vehicle. When you let your foot off of the throttle, the throttle body closes, meaning that as the pistons retract into the cylinders, a small vacuum is formed, creating forces that inhibit the continued motion of the car.
The first diesel method does the exact opposite of creating a vacuum: it creates excess compression in the cylinder. As the piston moves up to push the exhaust out, the exhaust valve closes, creating back pressure in the cylinder and thus slowing the car down.
The last diesel method of engine braking is a little more complicated, and it is dubbed the "jake brake" after the company that created it. The video above explains this method the best, but essentially you are releasing some of the compression built up in the engine so it is not as effective in moving the pistons.
You are "wasting energy" in the combustion process by simply combusting gasses, then immediately releasing them, resulting in a negative network. There are quite a few valves and solenoids used to make each of these systems work properly, and all of them are timed and regulated perfectly to make everything work.
Hopefully, you now understand engine braking a little better, and you can impress your non-engineer friends with your knowledge of how cars are able to slow down without the driver pressing the brakes. |
Thursday, 28 February 2013
Improving the Students’ Reading Comprehension through Sustained Silent Reading Method (Classroom Action Research)
The goals of teaching English in Indonesia are mainly to enable the students to use English for communication and to read books and references written in English. The students are expected to have skills of the English language such as reading, writing, listening, speaking, and other elements of language that must be taught to the students through the chosen themes.
Among the four skills above, reading gets greater attention than the other three, because reading is one of the most important skills. Reading can be defined as an active cognitive process of interacting with print and monitoring comprehension to establish meaning. Through reading we can gain knowledge, learn new words, comprehend ideas, study how words are used and how grammatical rules are applied, and gather information.
Problems mostly occur when students read a book. Sometimes students face a book but do not really read it at all: they can only name the written words without getting any idea from the book. The researcher herself has experienced that reading a book without any comprehension tends to make her feel sleepy.
As explained above, the observation data indicate that the students of MTs. Muhammadiyah Tallo Makassar face the same problems. Most of them are not yet able to comprehend English texts well. Many students can read the words in a passage perfectly but are unable to answer the questions: they can say the words but cannot gain the meaning from them, and they find it hard to comprehend reading materials. The writer also observed that the teacher only asked the students to read and then answer the questions, without first giving an explanation of the text, so the students did not understand what they read. As a result, they could not answer all of the questions correctly. The students' achievement in reading is still below the target: the mean score is about 5.5, while the target score is 7.00. In this case, the students have to read critically, and the teacher must select a suitable technique or strategy to teach it.
A teacher's most important task is to design the reading course with strategies and techniques that help the students comprehend the concepts in the author's mind as expressed in the text. Many techniques and strategies dealing with reading comprehension have been discussed by experts. One of the techniques offered here is Sustained Silent Reading, in which the students learn how to interact with the text they read.
With the SSR method, the students read silently for a given period of time. This does not mean reading completely without sound in the mind: a reader may respond to the words internally, but silent reading does not require saying each word aloud, nor attending to pronunciation, stress or intonation. In addition, the students can choose books, magazines and other materials that they are interested in.
To learn how to construct and use an electromagnet. To learn that electromagnets are temporary magnets and work only when electricity passes through the coil of wire.
People use the power of magnets in many ways. Magnetism and electricity are closely related. In an electric generator, an electric current is set up in a coil of wire that moves through a magnetic field. An electric current moving through a wire coil wrapped around an iron core produces magnetism. The close interrelationship between magnetism and electricity has many applications.
By exploring magnets, students are indirectly introduced to the idea that there are forces that occur on earth which cannot be seen. This idea can then be developed into an understanding that objects, such as the earth or electrically charged objects, can pull on other objects. It is important that students get a sense of electric and magnetic force fields (as well as of gravity) and of some simple relations between magnetic and electric currents. (Benchmarks for Science Literacy, p. 93.) This lesson continues the exploration of magnetism begun in Science NetLinks lessons Magnets 1: Magnetic Pickups, Magnets 2: How Strong is Your Magnet?, and Exploring Magnetic Fields. Before doing this activity, students should have built simple electric circuits with batteries and flashlight bulbs.
In this lesson, students will make a simple electromagnet by wrapping a wire around a nail and attaching the ends of the wire to a battery to make an electric circuit. As current flows through the coil, a magnetic field is produced and the nail is magnetized. Lessons such as this help to build a foundation upon which students can develop their ideas about gravitational force and how electric currents and magnets can exert a force as well.
Begin with a brief discussion in which students can review concepts about magnetism, using questions such as these:
- What is a magnet?
- What is a magnetic field?
- Can you make a magnet?
If students' responses indicate that they need to review magnetism, you can refer them to How Electromagnets Work for a brief refresher.
Then say to students, “Electromagnets are temporary magnets that let us turn magnetic fields on and off so we can control the magnetic energy.” Then ask students to speculate on why it is advantageous to turn the fields on and off. Tell students that they will conduct an activity to explore how electromagnetic fields work.
Pass out the Build an Electromagnet student sheet and have students do the activity in pairs. Students will build an electromagnet and test its strength. To save time, you can pre-strip the ends of the wire for each pair of students. You can use a wire stripper, scissors, or a sharp knife to remove the insulation.
Before students begin to work on their own, make sure that each group has the needed materials to build their electromagnet. To help students, you can ask questions such as the following before they begin:
- Do you think an electromagnet will be attracted to the same things as a regular magnet?
- Will it be attracted to all metal things?
- Will it be attracted to other magnets?
As students are building the electromagnet, walk around the class to make sure that they are on track. Ask questions such as:
- What happens to the electromagnet if you disconnect one of the wires from the battery?
- How many turns of the wire does it take to pick up a paper clip?
- Are more turns better?
After students have built their electromagnet and tested it, you could ask questions such as the following to extend their ideas:
- What happens if you build another electromagnet using a different size battery?
- How many paper clips will this new electromagnet pick up?
- Does using a different size battery (“A” versus “C”) make a difference?
- What things are attracted to a permanent magnet, such as a refrigerator magnet? Are these the same things that are attracted to the electromagnet?
- Are there any differences between what the permanent magnet and the electromagnet can do?
After students have completed the activity, discuss the questions on their student sheets:
- What is traveling through the wires? Where does the electricity come from?
- Is an electromagnet a temporary magnet or a permanent magnet? Why is it a temporary magnet?
- How can you measure the strength of your electromagnet? How can you make your electromagnet stronger?
Do not dismantle the electromagnets until you are finished with electromagnet activities, but be sure they are disconnected from the batteries at the end of your class.
When you are finished with electromagnet activities, unwrap the wires and be sure the electromagnets are not connected to the batteries. Students can label their electromagnets with tape and use them over and over.
Next, using the Electromagnets student esheet, students will explore Using Electromagnets. After students have explored the site, review how electromagnets are used in each of these items:
- electric bell
- electric motor
To assess student understanding, instruct your students to write several paragraphs to summarize some of the uses of electromagnets described in the Web resource. They should explain the function of the electromagnet in the devices described. They also should discuss how the electromagnets in those devices are like the ones they built.
After doing the activity and exploring the website, students should understand that when an electric current flows through a wire, a magnetic field is produced around it. Winding the wire into a coil concentrates this magnetic field, especially if an iron core is placed in the center of the coil. They also should understand that there are, broadly, two ways to increase the strength of an electromagnet: adding more turns of wire to the coil, or increasing the amount of current flowing through the wire. By increasing these two things, engineers have developed very powerful electromagnets, such as the enormous ones used in junkyards to lift large piles of metal.
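For teachers who want a rough quantitative feel for those two factors, the Python sketch below uses the idealized long-solenoid formula B = μ₀(N/L)I. This formula is not derived in the student activity, and the numbers (turns, coil length, current) are made-up examples; it simply shows that doubling the number of turns or the current doubles the field.

import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space (T·m/A)

def solenoid_field(turns, length_m, current_a, relative_permeability=1.0):
    # Approximate field inside a long solenoid: B = mu_r * mu_0 * (N / L) * I.
    return relative_permeability * MU_0 * (turns / length_m) * current_a

# Hypothetical nail electromagnet: 50 turns over 6 cm of nail, about 0.5 A of current.
print(solenoid_field(turns=50, length_m=0.06, current_a=0.5))   # baseline
print(solenoid_field(turns=100, length_m=0.06, current_a=0.5))  # twice the turns
print(solenoid_field(turns=50, length_m=0.06, current_a=1.0))   # twice the current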
More detailed information about magnets and electromagnets can be found at Magnetic Fields.
More activities to try with electromagnets can be found at How Electromagnets Work: Experiments to Try. |
Plans to abolish Human Rights Act 1998
After the Second World War, nations came together in order to prevent another war and the Universal Declaration of Human Rights was created. These standards were intended to protect the individual from the state, to uphold the rights of minorities and to provide support for the vulnerable.
The Human Rights Act 1998 came into force in 2000 and was Britain's way of implementing the European Convention on Human Rights. The Act sets out the fundamental rights and freedoms that individuals in the UK have access to. All public bodies, including schools and the police, must therefore comply with the Human Rights Act. If an individual wishes to bring a claim under the Human Rights Act, they can do so in a British court; they do not have to make a claim in the European courts.
The rights of individuals under the Human Rights Act include:-
- The right to life – for instance the state is required to conduct investigations into suspicious deaths and deaths in custody;
- The prohibition of torture and inhuman treatment – nobody should be tortured or treated in an inhuman or degrading way
- The right to liberty and freedom –everyone has the right to be free and this liberty will only be taken away if for instance you are convicted of a crime;
- The right to a fair trial and no punishment without law – the concept that everyone is innocent until proven guilty.
- Respect for privacy and family life and the right to marry – this protects against unnecessary surveillance or intrusion into your life. You have the right to marry and raise a family;
It is clear that the Human Rights Act is very much the ‘people’s law’.
The Conservatives proposal to abolish this Act coupled with the impending legal aid cuts will severely restrict people’s access to justice and it is no surprise that people are feeling uneasy about it.
Instead, they plan to introduce a British Bill of Rights as an alternative, which would mean that British people would no longer be able to rely on the Convention rights in a British court but would need to go to the European Court of Human Rights in Strasbourg. If Britain were to opt out of the European Convention on Human Rights, it would be the only European country other than Belarus not signed up to the Convention. |
In 3D computer graphics, 3D modeling is the process of developing a mathematical representation of any three-dimensional surface of object (either inanimate or living) via specialized software. The product is called a 3D model. It can be displayed as a two-dimensional image through a process called 3D rendering or used in a computer simulation of physical phenomena. The model can also be physically created using 3D printing devices.
Models may be created automatically or manually. The manual modeling process of preparing geometric data for 3D computer graphics is similar to plastic arts such as sculpting. Recently, new concepts in 3D modeling that depart from traditional techniques have started to emerge, such as curve-controlled modeling, which emphasizes modeling the movement of a 3D object rather than the traditional modeling of its static shape. |
Classification of Caves
Definition: A cave is an air-filled underground void, large enough to be examined in some way by man.
There are several ways to classify caves:
- by the rocks they are in
- By the morphology of the cave, the geometric structure
- horizontal caves consist of some nearly horizontal tubes.
- fissure caves consist of a single fissure in the rock.
- vertical caves consist of shaft(s) and short links in between.
- cave systems are rather large and contain many different features.
The distinction between horizontal and vertical caves is useful only in areas with rather small caves.
These caves normally consist of a single tunnel or shaft.
In other karst areas with larger caves any cave is a cave system.
- by the time they were formed, in relation to the forming of the rocks they are in:
- primary caves formed together with the surrounding rocks.
These are typically lava tubes, gas bubbles or tufa caves.
- most caves are secondary caves.
After the formation of the rock there is a time when part of the rock is removed.
This secondary stage formed the cave.
The mechanism of transport of the removed material is not relevant for this classification.
- tertiary caves are the result of the collapse of other caves.
- by the way they were formed:
- solutional caves or Karst caves:
Most caves are in rocks which can be dissolved by a weak natural acid (usually carbonic acid).
This acid forms when rainwater absorbs CO2 from the air and the upper layers of the soil.
The formation of gypsum caves does not require CO2; gypsum has a very high solubility.
- lava caves or lava tubes:
First a crust hardens on a lava flow.
When the crust gets thick enough, the lava flow is underground.
When the eruption ends, the lava keeps flowing out and an empty tunnel-like passage remains.
The length of these tubes depends on the distance from the lava source to the drain, a depression or the sea.
It can be hundreds or even thousands of meters long.
Example: Hana Cave
- tufa caves:
When limestone-rich water emerges from a spring, the dissolved limestone precipitates as tufa; the growing deposits can enclose hollow spaces, which become caves.
Example: Olga Cave
- sea caves:
These caves are created by the erosion of waves.
The waves force water into cracks in the rock, breaking off pieces of rock and forming caves.
Often these caves follow less resistant rock layers.
Example: Sea Lion Cave
- talus (ta'les) caves:
Huge rockfalls from cliffs can create large spacious chambers within the resulting boulder piles.
Example: Polar Caves Park
- earthquake cave:
Formed by the movement of rock along a fault.
It is just a natural crack in the rock, and big ones are very rare.
Example: Seneca Caverns
- glacier caves:
Melting water moving through glaciers creates glacier caves.
These caves are formed inside the ice.
Ice caves, on the other hand, are caves that are filled with ice, but the cave itself is formed in rock.
Most ice caves are formed as solutional caves in limestone!
Examples can be found in Canada, Alaska, and high on Mt. Rainier in Washington.
- soil tubes:
In desert areas, flash floods can move through the soil and hollow out openings.
Examples can be found in the Mojave Desert in California.
- by the age of the rock:
This is useful for limestone caves.
Limestone is a sedimentary rock and is characterized by the time it was formed.
The most common limestone formations are:
- Recent Limestone or Tufa is found all over the world.
- Jurassic Limestone.
- Devonian Limestone. |
Scientists have produced amazing three-dimensional images of a prehistoric mite as it hitched a ride on the back of a 50 million-year-old spider.
At just 176 micrometres long and barely visible to the naked eye, University of Manchester researchers and colleagues in Berlin believe the mite, trapped inside Baltic amber (fossil tree resin), is the smallest arthropod fossil ever to be scanned using X-ray computed tomography (CT) scanning techniques.
They say their study published in the Royal Society journal Biology Letters today also sets a minimum age of almost 50 million years for the evolution among these mites of phoretic, or hitchhiking, behaviour using another animal species.
"CT allowed us to digitally dissect the mite off the spider in order to reveal the important features on the underside of the mite required for identification," said Dr David Penney, one of the study's authors based in the Faculty of Life Sciences. "The specimen, which is extremely rare in the fossil record, is potentially the oldest record of the living family Histiostomatidae.
"Amber is a remarkable repository of ecological associations within the fossil record. In many cases organisms died instantaneously and were preserved with lifelike fidelity, still enacting their behaviour immediately prior to their unexpected demise. We often refer to this as 'frozen behaviour' or palaeoethology and such examples can tell us a great deal about interactions in ecosystems of the past. However, most amber fossils consist of individual insects or several insects together but without unequivocal demonstrable evidence of direct interaction. The remarkable specimen we describe in this paper is the kind of find that occurs only once in say a hundred thousand specimens."
Fellow Manchester biologist Dr Richard Preziosi said: "Phoresy is where one organism uses another animal of a different species for transportation to a new environment. Such behaviour is common in several different groups today. The study of fossils such as the one we described can provide important clues as to how far back in geological time such behaviours evolved. The fact that we now have technology that was unavailable just a few years ago means we can now use a multidisciplinary approach to extract the most information possible from such tiny and awkwardly positioned fossils, which previously would have yielded little or no substantial scientific data."
Co-author Professor Phil Withers, from Manchester's School of Materials, said: "We believe this to be the smallest amber inclusion scanned anywhere to date. With our sub-micron phase contrast system we can obtain fantastic 3D images and compete with synchrotron x-ray systems and are revealing fossils previously inaccessible to imaging. With our nanoCT lab systems, we are now looking to push the boundaries of this technique yet further."
Dr Jason Dunlop, from the Humboldt University, Berlin, added: "As everyone knows, mites are usually very small animals, and even living ones are difficult to work with. Fossil mites are especially rare and the particular group to which this remarkable new amber specimen belongs has only been found a handful of times in the fossil record. Yet thanks to these new techniques, we could identify numerous important features as if we were looking at a modern animal under the scanning electron microscope. Work like this is breaking down the barriers between palaeontology and zoology even further."
More information: 'A minute fossil phoretic mite recovered by phase contrast X-ray computed tomography,' by Jason A. Dunlop et al. Paper online: doi: 10.1098/rsbl.2011.0923 |
A-level Physics (Advancing Physics)/Circular Motion
Very rarely, things move in circles. Some planets move in roughly circular orbits. A conker on a string might move around my head in a circle. A car turning a corner might, briefly, move along the arc of a circle. The key thing to note about circular motion is that there is no force pulling outwards from the circle, and there is no force pulling the moving object tangential to the circle. Centrifugal force does not exist. There is only one force acting in circular motion, which is known as centripetal force. It always acts towards the centre of the circle. The object does not follow a circular path because two forces are balanced. Instead, the centripetal force accelerates the object with a constant magnitude in an ever-changing direction. The object has a velocity, and will continue moving with this velocity unless acted on by the centripetal force, which is perpetually adding velocity towards the centre of the circle.
If you were to subject a stationary object to the centripetal force, it would simply fall. If you gave it a little bit of velocity, it would still fall, but it would not land directly beneath its starting position. If you kept increasing the velocity and dropping it, there would come a point when it would land infinitely far away - it would go into orbit. The relationship between this 'magic' velocity and the magnitude of the centripetal force is as follows: F = mv² / r
where m is the mass of the object in circular motion, v is the magnitude of its velocity, and r is the distance from the centre of the circle to the object. Since F = ma, the centripetal acceleration is: a = v² / r
The centripetal force may manifest itself as many things: the tension in a string, friction, gravity or even an electric or magnetic field. In all these cases we can equate the equation for centripetal force with the equation for the force it really is.
Velocity is the rate of change of displacement. Angular velocity is the rate of change of angle, commonly denoted ω and measured in radians per second: ω = dθ/dt
In circular motion: ω = 2π / T = 2πf
where T is the time for one revolution and f is the frequency of rotation. However, the object travels a distance of 2πr in each revolution, so: v = 2πr / T
Therefore, the relationship between velocity and angular velocity is: v = ωr
If we substitute this into the formula for centripetal acceleration, we get: a = v²/r = ω²r
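As a worked illustration of these formulas (not part of the original text), the short Python sketch below applies F = m(2πf)²r to question 1, treating the circle as horizontal and ignoring gravity, which the question appears to intend.

import math

def centripetal_force(mass_kg, radius_m, frequency_hz):
    # Tension needed to keep a mass moving in a circle: F = m * (2*pi*f)**2 * r.
    omega = 2 * math.pi * frequency_hz   # angular velocity in rad/s
    return mass_kg * omega**2 * radius_m

# Question 1: a 10 g ball on a 0.75 m string swung at 1.5 Hz.
print(round(centripetal_force(0.010, 0.75, 1.5), 2))  # roughly 0.67 N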
1. A tennis ball of mass 10g is attached to the end of a 0.75m string and is swung in a circle around someone's head at a frequency of 1.5Hz. What is the tension in the string?
2. A planet orbits a star in a circle. Its year is 100 Earth years, and the distance from the star to the planet is 70 Gm from the star. What is the mass of the star?
3. A 2000kg car turns a corner, which is the arc of a circle, at 20kmh-1. The centripetal force due to friction is 1.5 times the weight of the car. What is the radius of the corner?
4. Using the formulae for centripetal acceleration and gravitational field strength, and the definition of angular velocity, derive an equation linking the orbital period of a planet to the radius of its orbit. |
Learn About Thinking Activities
Thinking activities are used to tap learners’ prior knowledge by giving them an opportunity to create lists, make predictions, and use analogies. By using these activities, learners are put in control of their learning and allowed to make personal connections to new content.
Making organized lists that rank items in an order that makes sense to the learner is one way to help learners organize what they know as a way of tapping prior knowledge. The Visual Ranking Tool*:
- Is an online thinking tool for ordering and prioritizing items in a list
- Helps learners analyze and evaluate criteria for their decisions
- Compares reasoning visually to promote collaboration and discussion
With the use of this tool, learners can use prior knowledge at the beginning of a project or lesson to rank items and then see how their new knowledge expands their viewpoint over the course of learning.
Example Visual Ranking List:
Working in pairs, learners are given a list of animals and asked which one most resembles a human. They use Visual Ranking to put the animals into order, ranking them on their human-like qualities.
This Visual Ranking list comes from the Project Idea: Classify Animals*.
Compared to adults, teenagers can show different behaviour and side effects when they drink alcohol, because teenagers' brains are still developing; this can result in negative effects in both the short and the long term.
When young people drink alcohol they can experience short-term effects. These can include:
1. General impairment of ability [1]
2. Increased risk taking [1, 2]
3. Mood changes [1]
There are also long-term effects that can result from young people drinking alcohol.
Evidence has shown that binge drinking can impact the white matter within the brain. White matter is responsible for passing information quickly along a nerve. [3]
It is recommended that for under 18's, no alcohol is the safest choice.
1 White J. Adolescence, Alcohol and Brain Development, What is the impact on well-being and learning? [Presentation] Drug and Alcohol Services, South Australia.
2 Directorate for Education and Human Resources of the American Association for the Advancement of Science. Alcohol and your brain. [online] 2013 [cited 2013 Jan 14]. Available from: http://sciencenetlinks.com/student-teacher-sheets/alcohol-and-your-brain/
3 Hickie IB, Whitwell BG. (2009). Alcohol and The Teenage Brain: Safest to keep them apart. BMRI Monograph 2009-2. Brain & Mind Research Institute, Sydney.
4 Bava S, Tapert S. (2010). Adolescent Brain Development and the Risk for Alcohol and Other Drug Problems. Neuropsychology Review 2010; 20(4):398-413.
Call the Alcohol and Drug Support Line on (08) 9442 5000 or 1800 198 024 toll free for country callers.
For emergencies call the 000 emergency line. |
What happened to Pluto?
New telescope technologies began to reveal far-off objects even larger than Pluto – were these planets, or not? The International Astronomical Union (IAU) appointed a panel to decide on a new definition.
The panel proposed:
A planet is a celestial body that
(a) is in orbit around a star, and
(b) has sufficient mass for its self-gravity to overcome rigid body forces so that it assumes a hydrostatic equilibrium (nearly round) shape, and
(c) is neither a star nor a satellite of a planet.
This gives us 12 "planets" in the current Solar System, adding Ceres (which originally was a planet), Charon (Pluto-Charon would be a "double-planet") and 2003 UB313 (also known as Xena). And who knows how many other "planets" remain to be discovered?
About 2,500 scientists meeting in Prague in August 2006 had the deciding vote. On the last day of the meeting, they added another condition for a celestial body to qualify as a planet:
(a) it must be in orbit around the Sun (or another star);
(b) it must be large enough that it takes on a nearly round shape;
(c) it has cleared its orbit of other objects.
Pluto, which was discovered in 1930 by the American Clyde Tombaugh, was automatically disqualified because its highly elliptical orbit overlaps with that of Neptune. It will join a new category of “dwarf planets.” Pluto is further away and much smaller than the eight other "traditional" planets. At just 2,360km (1,467 miles) across, it is smaller even than some moons in the Solar System. Since the early 1990s, astronomers have found several objects of similar size in an outer region of the Solar System called the Kuiper Belt. Xena, 3,000km (1,864 miles) across, is larger than Pluto.
The demotion is likely to upset the public, who have become accustomed to a particular view of the Solar System.
"I have a slight tear in my eye today, yes; but at the end of the day we have to describe the Solar System as it really is, not as we would like it to be," said Professor Iwan Williams, chair of the IAU panel.
3 September 2006 |
A block (mass = 2.9 kg) is hanging from a massless cord that is wrapped around a pulley (moment of inertia = 1.4 x 10-3 kg·m2), as the figure shows. Initially the pulley is prevented from rotating and the block is stationary. Then, the pulley is allowed to rotate as the block falls. The cord does not slip relative to the pulley as the block falls. Assume that the radius of the cord around the pulley remains constant at a value of 0.043 m during the block's descent. Find (a) the angular acceleration of the pulley and (b) the tension in the cord. |
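One way to work this problem (a standard approach, not something given in the source) is to combine Newton's second law for the block, the torque equation for the pulley, and the no-slip condition. The Python sketch below does the algebra numerically, taking g ≈ 9.80 m/s².

# Newton's second law for the block:  m*g - T = m*a
# Torque on the pulley:               T*r = I*alpha
# No-slip condition:                  a = alpha*r
# Combining these gives a = g / (1 + I/(m*r**2)), alpha = a/r, T = m*(g - a).

m = 2.9        # kg, mass of the block
I = 1.4e-3     # kg·m², moment of inertia of the pulley
r = 0.043      # m, radius at which the cord wraps around the pulley
g = 9.80       # m/s²

a = g / (1 + I / (m * r**2))   # linear acceleration of the block
alpha = a / r                  # (a) angular acceleration of the pulley
T = m * (g - a)                # (b) tension in the cord

print(f"angular acceleration: {alpha:.0f} rad/s^2")  # about 180 rad/s^2
print(f"tension: {T:.1f} N")                         # about 5.9 N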
A person with a kidney infection, or pyelonephritis, may experience vomiting, nausea, back pain, fever or confusion. Medical treatment should be sought immediately to avoid permanent kidney damage.
Kidney infections are often the result of a urinary tract infection that has spread to the upper urinary system. When bacteria, often E. coli, enter the body through the urethra, they travel up the urinary tract and into the bladder. The bacteria may then find their way to one or both kidneys, resulting in a kidney infection.
Pyelonephritis is a serious medical condition that spreads quickly through the blood, making a patient severely sick. Women are more vulnerable to kidney infections than men due to females having a shorter urethra. Fortunately, kidney infections can be treated using antibiotics as long as the symptoms are detected early on, before the infection spreads and becomes more severe.
Pyelonephritis may cause several changes in the urine, including pain during urination, smelly urine, blood in the urine and frequent urination. Young women have a higher risk of getting a kidney infection because they tend to be more sexually active, and the chances of contracting a kidney infection are higher for people who have regular intercourse. Keeping the urethra free from bacteria is one way of preventing a kidney infection. |
Today, it is more important than ever for children to be financially literate and understand entrepreneurship basics. With the economy constantly changing and the cost of living rising, it is essential that kids learn how to be fiscally responsible from a young age. The following ten books teach financial literacy and entrepreneurial skills in an engaging and accessible way for kids.
Kidpreneurs: Young Entrepreneurs With Big Ideas
Surely, we have all heard this common statement: “It’s never too late.” Well, Adam and Matthew Toren, authors of Kidpreneurs, offer a different perspective. They say “It’s never too early!” It’s no surprise that kids can grasp fundamental business concepts and the entrepreneurial advantages. Laying out some standard strategies children can utilize is what Kidpreneurs aims to accomplish. Specifically, this book details the invaluable wisdom they can achieve by beginning, leading, and expanding a prosperous company. By taking a simple and innovative means of introduction, this book describes the crucial systems that can significantly improve children’s entrepreneurial knowledge. It was created with kids in mind, which is clearly exhibited through easily-understandable illustrations. Consequently, this simplifies the entrepreneurship essentials to make the learning more fun for children. In addition to this, children can boost their managerial abilities through trying out easy businesses as they grow. Today commences our future. Consider introducing your children to this book to strengthen their future money skills. For parents who want to further educate their kids on money management through interactive content, Kidpreneurs Academy incorporates fun games for them to play in the learning process.
Heads Up Money
There are a lot of choices to make when it comes to money, and it can be hard to know what to do. Heads Up Money can help your kids make sense of it all. It explores topics like global banking, ethical trade, and how to run a successful business. Collaboratively, Marcus Weeks, a renowned author, and Derek Braddon, Professor of Economics at UWE Briston Business School, wrote this ideal book to help teenagers and young adults gain a better understanding of wealth and economics. This book discusses various other topics as well. The worldwide marketplace, current market landscapes, and concealed costs are just a few. Likewise – even if you’re considering investing, spending, and saving – this book can guide you through it.
If You Made a Million
Do your kids have dreams of becoming millionaires? Marvelosissimo the Mathematical Magician can show you the way by explaining the nitty-gritty while still maintaining the secrets and magic of making money. This book teaches kids how to invest and make a profit through dividends and interest while also showing how their savings can grow. The advice in this book offers endless possibilities. As noted in its description, If You Made a Million received much recognition in children's literature: it was named a Horn Book Fanfare Selection, an ALA Notable Book, a Teachers' Choices Selection, and a School Library Journal Best Book of the Year.
The Everything Kids’ Money Book: Earn it, save it, and watch it grow!
Do you want to help your kids understand money and become financially responsible adults? Then The Everything Kids’ Money Book is perfect for you! Saving money has evolved past a piggy bank. It is common for children today to invest money, begin their own businesses, and observe the interest earned in their savings. This book is a one-stop-shop in teaching children everything they should know about money management in order to make sound financial decisions. With updated information on digital banking, starting a bank account, and saving their earnings, this edition is sure to successfully teach kids better money management skills. Topics include: How coins and bills are made; What money can buy–from school supplies to fun and games; How credit cards work; Ways to watch money grow–from savings to stocks; Cool financial technology; And more!
One Cent, Two Cents, Old Cent, New Cent
Numismatics is the study of money and its history, and this book gives a super simple look at the topic. Children will be engaged from the very first page of One Cent, Two Cents, Old Cent, New Cent as the Cat in the Hat takes them on a journey through the history of money. The book starts with explaining the ancient practice of bartering before delving into various forms of currency used in different cultures. These forms include shells, feathers, leather, jade, metal ingots, and coins. The book also looks at banking, from the use of temples as the first banks to the concept of gaining or paying interest. Finally, there is a step-by-step guide to minting coins. This fascinating introduction is bound to change young reader’s appreciation for change!
Finance 101 for Kids: Money Lessons Children Cannot Afford to Miss
In a world where it seems like everything comes with a price tag, it’s more important than ever to teach children the value of money and how to manage it responsibly. Finance 101 for Kids is the perfect tool to help kids understand the basic concepts of financial literacy in a way that is fun, engaging, and relatable. With charming illustrations and easy-to-understand language, this book is a must-have for parents wanting the best possible financial foundation for their children.
The Steady Road to a Million Dollars (Bradley Jr’s Investing Adventures)
Many people think that becoming a millionaire is impossible, but with the right mindset and a little bit of effort, it can be done! Bradley Jr. is just a kid, but with the help of his dad, he’s already on the road to becoming a millionaire. How? Read The Steady Road to a Million Dollars to find out how Bradley Jr.’s dad helps set him up for success. Along the way, Bradley Jr. learns about different investment opportunities and how to make his money grow. Time is your most important tool when investing, so don’t wait to get started!
Grandpa’s Fortune Fables: Fun Stories to Teach Kids About Money
Did you know that kids form most of their financial habits by the age of 7? Grandpa’s Fortune Fables is a great way to start talking to your kids about money in a fun and interactive way that they will love! Do you want to start teaching your kids about money but are not sure how to talk to them about it? Luckily, there is an easy solution. Introducing Grandpa’s Fortune Fables – Fun Stories To Teach Kids About Money, by Will Rainey. This easy-to-read book will teach your children about money management, investing and starting a business. Teaching your children about money from an early age is one of the most important things you can do to ensure their future success. However, it can be a very difficult task, especially if you were never taught yourself. In this book, children can read stories recounted by Gail, a 13-year old girl. Gail’s stories describe the adventures her Grandpa embarked on to a distant island, where he became very wealthy due to gaining invaluable money management skills there.
The Lemonade War
By combining unique skills – like magical math skills and essential business skills – with humor, The Lemonade War is sure to pique your child’s interest. In exhibiting an uncommon brother-sister bond, this heartbreaking story briefly details how disagreements can lead to detrimental and unintended results. Evan Treski, who is skilled at communicating with others, always says the right thing. And, he knows just what to do to get people – even the adults – on his side. On the contrary, his little sister Jessie is skilled at math. While she understands equations and numbers better than just about anyone else her age, she lacks the interpersonal skills to understand people and their emotions. With school starting in a mere five days, Evan and Jessie start a competitive challenge to see who can sell more lemonade before summer vacation ends. With such high stakes, the true duration of their battle – let alone a winner – is anyone’s guess!
Mulani Moneybags Starts a Business
In Mulani Moneybags Starts a Business, nine-year-old Mulani Moneybags is inspired to start a business of her own after learning about entrepreneurship. With help from her mom, Mulani sets up a lemonade stand and quickly learns many lessons on the journey to becoming an entrepreneur. Overcoming challenges and celebrating triumphs, Mulani discovers that one of the most invaluable tools for any entrepreneur is love and support from their family. As we follow Mulani’s journey, we watch as she learns many lessons along the way to becoming an entrepreneur. With help from her Mom, Mulani learns both the advantages and difficulties of being her own boss.
We all want what’s best for our children, and that includes helping them to develop strong financial literacy skills. Entrepreneurship is a great way to learn about managing money, developing business strategies, and taking risks. The books on this list are some of the best options out there for teaching kids about entrepreneurship and financial literacy. They provide valuable lessons that can help set your child up for success in life.
Spelling can be fast-tracked by building a solid foundation. This process can be likened to building a house: builders make certain that the base of the structure is solid, and this is checked by councils and engineers so that the building stands firm through all seasons. A similar process applies to spelling. Checking children's basic spelling skills makes certain that the spelling program is being built on a solid base that supports growth through sequential lessons and explicit teaching. Fast-tracking spelling is simple with these easy steps.
Foundation of Spelling Success
The cement and gravel foundation for Spelling is Phonological Awareness. Consider the results of a Year Three child, I tested recently. She spelt the word van as ven, jam as jem and plan as plen. This looks like a simple problem. BUT being the detective that I am I knew that the problem would be deeper. AND I was right. In the phonological awareness testing, this student was unable to identify the middle sound in a word. This means that when I said the word, jam, the child was unable to tell me the middle sound /a/, not the letter’s name, the sound.
Further, this is not the only problem. The child is having vowel discrimination difficulties, e.g. being able to identify the difference between the sounds /a/ and /e/ and then being able to write the letter that matches the sound. Spelling will not improve unless this problem is corrected. The child must be able to hear the difference between the vowel sounds before writing or making words. I call this tuning the ears. Here is a simple trick to address this child's difficulty.
Use the Letter Box vowel chart to support this progress http://www.letterboxlearntoread.com/products
The chin goes down bit by bit as the vowel sounds are said in this order on the above chart. The best trick is to work on two vowel sounds at a time. For the student above these vowel sounds would be the /e/ and the /a/ sound. Remember there are no letters only sounds. I use the vowel chart to help the student. Firstly, have the child practise saying the sounds /e/ and /a/.If they place their hand under the chin and watch in a hand mirror as they say these two sounds they can see as well as hear and feel the difference between the /e/ and the /a/ sounds. The position of the chin can be correlated to the positioning of the vowels on the vowel chart. This will all require practice and more practice. Using the mirror, the vowel chart and the hand positioned under the chin helps with vowel discrimination when difficulties identifying sounds is recognised.
Success follows practice in any area of learning so, now, we must move to daily practice to improve this child’s spelling. Included below is a list of one syllable words that have the /e/ and the /a/ sounds. Remember, don’t show any letters or expect the child to write the words. This exercise is only about the ears. Firstly, we stretch out the words so that the child can identify the vowel easily through their ears. Then as the child improves in this skill, the word is spoken and the child identifies the sound.
Step by step progress
1. The teacher says one of the two sounds, /e/ or /a/, at a time and asks the child to point to the letter on the vowel chart. Make sure that the child uses his hand under his chin, the vowel chart and a hand mirror. Practise this strategy until the child is competent.
2. The teacher says the word, bet.
3. The teacher stretches the word: bet … b-e-t.
4. Have the child place his hand under his chin and look in the mirror.
5. The child stretches out the word.
6. The child points to the corresponding letter on the vowel chart.
7. Continue with a few words in this way each day.
This simple process leaves the complexity of handwriting and spelling separate and allows the child to focus totally on the spelling strategies.
This process can be used with any vowels, BUT remember that this problem can be fixed and you can fast-track spelling with this one strategy. The problem is intimately related to tuning the ears. If the foundation is not solid, the building will collapse. Working with phonological awareness on a daily basis will build solid spelling progress, and spelling knowledge will increase as basic skills are learnt.
List of /a/ words: lap, tap, bat, bad, ram, rat, mad, rag, bag, pat
List of /e/ words: bed, beg, bet, led, set, met, let, web, yet, vet, den
Ann Foster is a teacher with a unique talent to provide back to basics step by step programs/products and tutoring for students in Australia and overseas.
Her programs and products help children, teachers and parents to achieve extraordinary results quickly. She has a track record of bringing into action programs that are easy to follow and that achieve results. They are tried and proven and bring clarity out of chaos.
Ann has been working online, teaching students and adults successfully for the last four years, and has taken children from average to well above average results in spelling, reading and writing.
Letter Box staff solve problems and puts wings onto dreams. |
Hot Air Balloon
- Hot air in the balloon has lower density than the surrounding air.
- As a result, when the buoyant force produced is greater than the weight of the balloon, the balloon starts to rise (a rough lift calculation is sketched after this list).
- The altitude of the balloon can be controlled by varying the temperature of the air in the balloon.
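As a back-of-the-envelope illustration of the buoyancy argument above, the Python sketch below estimates the available lift as F = (ρ_outside − ρ_inside) V g, using ideal-gas air densities. The envelope volume and the two temperatures are assumed example values, not figures from the text.

G = 9.81  # m/s²

def air_density(temp_c, pressure_pa=101325.0):
    # Approximate density of dry air from the ideal gas law (kg/m^3).
    R_SPECIFIC = 287.05  # J/(kg·K) for dry air
    return pressure_pa / (R_SPECIFIC * (temp_c + 273.15))

def net_lift(volume_m3, t_outside_c, t_inside_c):
    # Buoyant force minus the weight of the hot air inside the envelope (N).
    rho_out = air_density(t_outside_c)
    rho_in = air_density(t_inside_c)
    return (rho_out - rho_in) * volume_m3 * G

# Hypothetical 2800 m^3 envelope, 20 °C outside, 100 °C inside:
print(round(net_lift(2800, 20, 100)))  # available lift in newtons, roughly 7000 N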
A submarine uses ballast tanks to control its movement up and down.
To submerge, water is pumped into the ballast tanks to increase the weight of the submarine.
To surface, the water is pumped out to reduce the weight of the submarine.
Q & A
Q: The diagram shows a picture of a hydrometer. What is the function of the lead shot at the bottom of the hydrometer?
To lower the centre of gravity of the hydrometer. The hydrometer would topple if its centre of gravity were above the surface of the liquid. |
RIVERSIDE, Calif. -- Astronomers at the University of California, Riverside, have discovered that powerful winds driven by supermassive black holes in the centers of dwarf galaxies have a significant impact on the evolution of these galaxies by suppressing star formation.
Dwarf galaxies are small galaxies that contain between 100 million to a few billion stars. In contrast, the Milky Way has 200-400 billion stars. Dwarf galaxies are the most abundant galaxy type in the universe and often orbit larger galaxies.
The team of three astronomers was surprised by the strength of the detected winds.
"We expected we would need observations with much higher resolution and sensitivity, and we had planned on obtaining these as a follow-up to our initial observations," said Gabriela Canalizo, a professor of physics and astronomy at UC Riverside, who led the research team. "But we could see the signs strongly and clearly in the initial observations. The winds were stronger than we had anticipated."
Canalizo explained that astronomers have suspected for the past couple of decades that supermassive black holes at the centers of large galaxies can have a profound influence on the way large galaxies grow and age.
"Our findings now indicate that their effect can be just as dramatic, if not more dramatic, in dwarf galaxies in the universe," she said.
Study results appear in The Astrophysical Journal.
The researchers, who also include Laura V. Sales, an assistant professor of physics and astronomy; and Christina M. Manzano-King, a doctoral student in Canalizo's lab, used a portion of the data from the Sloan Digital Sky Survey, which maps more than 35% of the sky, to identify 50 dwarf galaxies, 29 of which showed signs of being associated with black holes in their centers. Six of these 29 galaxies showed evidence of winds -- specifically, high-velocity ionized gas outflows -- emanating from their active black holes.
"Using the Keck telescopes in Hawaii, we were able to not only detect, but also measure specific properties of these winds, such as their kinematics, distribution, and power source -- the first time this has been done," Canalizo said. "We found some evidence that these winds may be changing the rate at which the galaxies are able to form stars."
Manzano-King, the first author of the research paper, explained that many unanswered questions about galaxy evolution can be understood by studying dwarf galaxies.
"Larger galaxies often form when dwarf galaxies merge together," she said. "Dwarf galaxies are, therefore, useful in understanding how galaxies evolve. Dwarf galaxies are small because after they formed, they somehow avoided merging with other galaxies. Thus, they serve as fossils by revealing what the environment of the early universe was like. Dwarf galaxies are the smallest galaxies in which we are directly seeing winds -- gas flows up to 1,000 kilometers per second -- for the first time."
Manzano-King explained that as material falls into a black hole, it heats up due to friction and strong gravitational fields and releases radiative energy. This energy pushes ambient gas outward from the center of the galaxy into intergalactic space.
"What's interesting is that these winds are being pushed out by active black holes in the six dwarf galaxies rather than by stellar processes such as supernovae," she said. "Typically, winds driven by stellar processes are common in dwarf galaxies and constitute the dominant process for regulating the amount of gas available in dwarf galaxies for forming stars."
Astronomers suspect that when wind emanating from a black hole is pushed out, it compresses the gas ahead of the wind, which can increase star formation. But if all the wind gets expelled from the galaxy's center, gas becomes unavailable and star formation could decrease. The latter appears to be what is occurring in the six dwarf galaxies the researchers identified.
"In these six cases, the wind has a negative impact on star formation," Sales said. "Theoretical models for the formation and evolution of galaxies have not included the impact of black holes in dwarf galaxies. We are seeing evidence, however, of a suppression of star formation in these galaxies. Our findings show that galaxy formation models must include black holes as important, if not dominant, regulators of star formation in dwarf galaxies."
Next, the researchers plan to study the mass and momentum of gas outflows in dwarf galaxies.
"This would better inform theorists who rely on such data to build models," Manzano-King said. "These models, in turn, teach observational astronomers just how the winds affect dwarf galaxies. We also plan to do a systematic search in a larger sample of the Sloan Digital Sky Survey to identify dwarf galaxies with outflows originating in active black holes."
The research was funded by the National Science Foundation, NASA, and the Hellman Foundation. Data was obtained at the W. M. Keck Observatory, and made possible by financial support from the W. M. Keck Foundation.
For reading up on some basics, see Chi-Square Independence Test - Quick Introduction.
Null Hypothesis for the Chi-Square Independence Test
A chi-square independence test evaluates if two categorical variables are associated in some population. We'll therefore try to refute the null hypothesis that
two categorical variables are (perfectly) independent in some population.
If this is true and we draw a sample from this population, then we may see some association between these variables in our sample. This is because samples tend to differ somewhat from the populations from which they're drawn.
However, a strong association between variables is unlikely to occur in a sample if the variables are independent in the entire population. If we do observe this anyway, we'll conclude that the variables probably aren't independent in our population after all. That is, we'll reject the null hypothesis of independence.
A sample of 183 students evaluated some course. Apart from their evaluations, we also have their genders and study majors. The data are in course_evaluation.sav, part of which is shown below.
We'd now like to know: is study major associated with gender? And -if so- how? Since study major and gender are nominal variables, we'll run a chi-square test to find out.
Assumptions Chi-Square Independence Test
Conclusions from a chi-square independence test can be trusted if two assumptions are met:
- independent observations. This usually -not always- holds if each case in SPSS holds a unique person or other statistical unit. Since this is the case for our data, we'll assume this assumption has been met.
- For a 2 by 2 table, all expected frequencies > 5. If you've no idea what that means, you may consult Chi-Square Independence Test - Quick Introduction. For a larger table, no more than 20% of all cells may have an expected frequency < 5 and all expected frequencies > 1.
SPSS will test this assumption for us when we'll run our test. We'll get to it later.
Chi-Square Independence Test in SPSS
In SPSS, the chi-square independence test is part of the CROSSTABS procedure which we can run as shown below.
In the main dialog, we'll enter one variable into the Row(s) box and the other into the Column(s) box. Because sex has only 2 categories (male or female), using it as our column variable results in a table that's rather narrow and high. It will fit more easily into our final report than a wider table resulting from using major as our column variable. Anyway, both options yield identical test results.
Under Statistics we'll just select Chi-square. Clicking Paste results in the syntax below.
SPSS Chi-Square Independence Test Syntax
CROSSTABS
/TABLES=major BY sex
/STATISTICS=CHISQ
/COUNT ROUND CELL.
You can use this syntax if you like but I personally prefer a shorter version shown below. I simply type it into the Syntax Editor window, which for me is much faster than clicking through the menu. Both versions yield identical results.
crosstabs major by sex
/statistics chisq.
Output Chi-Square Independence Test
First off, we take a quick look at the Case Processing Summary to see if any cases have been excluded due to missing values. That's not the case here. With other data, if many cases are excluded, we'd like to know why and if it makes sense.
Next, we inspect our contingency table. Note that its marginal frequencies -the frequencies reported in the margins of our table- show the frequency distributions of either variable separately.
Both distributions look plausible, and since there are no "no answer" categories, there's no need to specify any user missing values.
First off, our data meet the assumption of all expected frequencies > 5 that we mentioned earlier. Since this holds, we can rely on our significance test for which we use Pearson Chi-Square.
Right, we usually say that the association between two variables is statistically significant if Asymptotic Significance (2-sided) < 0.05 which is clearly the case here.
Significance is often referred to as “p”, short for probability; it is the probability of observing our sample outcome if our variables are independent in the entire population. This probability is 0.000 in our case. Conclusion: we reject the null hypothesis that our variables are independent in the entire population.
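For readers who want to cross-check the same kind of test outside SPSS, the Python sketch below runs a chi-square independence test with scipy. The major-by-gender counts are hypothetical illustration values, not the actual course_evaluation.sav data, so the statistic will not match the χ2(4) = 54.50 reported later; the point is only to show the mechanics.

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: rows are study majors, columns are male / female.
observed = np.array([
    [10, 30],   # psychology
    [25,  6],   # economy
    [20, 18],   # law
    [15, 14],   # business
    [22, 23],   # other
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
print("all expected frequencies > 5:", (expected > 5).all())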
Understanding the Association Between Variables
We conclude that our variables are associated but what does this association look like? Well, one way to find out is inspecting either column or row percentages. I'll compute them by adding a line to my syntax as shown below.
set tvars labels tnumbers labels.
*Crosstabs with frequencies and row percentages.
crosstabs major by sex
/cells count row.
Adjusting Our Table
Since I'm not too happy with the format of my newly run table, I'll right-click it and select Edit Content → In Separate Window.
In the pivot table editor we drag the statistics (count and row percentage) right underneath “What's your gender?”. We'll then close the pivot table editor.
Roughly half of our sample is female. Within psychology, however, a whopping 87% is female. That is, females are highly overrepresented among psychology students. Like so, study major “says something” about gender: if I know somebody studies psychology, I know she's probably female.
The opposite pattern holds for economy students: some 80% of them are male. In short, our row percentages describe the association we established with our chi-square test.
We could quantify the strength of the association by adding Cramér’s V to our test but we'll leave that for another day.
Reporting a Chi-Square Independence Test
We report the significance test with something like “an association between gender and study major was observed, χ2(4) = 54.50, p = 0.000”. Further, I suggest including our final contingency table (with frequencies and row percentages) in the report as well, as it gives a lot of insight into the nature of the association.
So that's about it for now. Thanks for reading! |
One of the most dazzling displays that nature offers on a rainy day is a full rainbow arcing across the sky. These ephemeral daytime occurrences have captured the human imagination for centuries — leading to a wide array of myths and legends as to what causes them.
While we know that there are solid scientific principles involved, many of us would probably struggle to describe exactly what those are. Here's a quick guide to the science behind rainbows, so you can understand what's going on in the sky the next time you see one.
To understand how a rainbow occurs in a natural environment, we need to understand what happens during the process of light refraction. To illustrate that, we need to take a closer look at a basic science class tool — the prism.
A prism is a triangular piece of glass that can produce a rainbow-like display of color from an ordinary beam of white light in a controlled indoor setting. It does this because glass has a different refractive index than air does. The refractive index measures how much a medium slows down light passing through it. When two materials have different refractive indexes, light bends as it moves from one to the other.
Different wavelengths of light
A beam of white light, or sunlight, is actually made up of many different colors. These colors — red, orange, yellow, green, blue, and violet — each have their own wavelength. When light is refracted, each wavelength has its own angle of bending.
This means that during the split second that a white light enters and exits a prism, each color is separated slightly. When they leave the prism and are projected onto a flat surface, each color hits a different spot because it has been bent at a slightly different angle. This allows you to see each color side by side.
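To make the idea of wavelength-dependent bending concrete, here is a small Python sketch that applies Snell's law (n1 · sin θ1 = n2 · sin θ2). The refractive indices used for red and violet light in glass are rough assumed values for illustration, not figures taken from this article.

import math

def refraction_angle(incidence_deg, n_air=1.000, n_medium=1.5):
    """Angle of the refracted ray in degrees, from Snell's law: n1*sin(t1) = n2*sin(t2)."""
    t1 = math.radians(incidence_deg)
    return math.degrees(math.asin(n_air * math.sin(t1) / n_medium))

# Approximate (assumed) refractive indices of crown glass for two wavelengths.
n_red, n_violet = 1.515, 1.532

incidence = 45.0  # degrees
print(f"red:    {refraction_angle(incidence, n_medium=n_red):.2f} degrees")
print(f"violet: {refraction_angle(incidence, n_medium=n_violet):.2f} degrees")
# Violet ends up at a smaller refracted angle, i.e. it is bent more than red,
# which is what spreads white light into a spectrum.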
On a wet yet still sunny day, each raindrop acts as an individual prism. As sunlight enters a raindrop it is refracted, reflected off the back inner surface of the drop, and refracted again as it exits, separating the different wavelengths of light.
This process explains why rainbows always have the same color pattern. Violet has the shortest wavelength, which causes it to bend the most. Red has the longest wavelength, which causes it to bend the least. This is why violet is always the bottom color of the rainbow and why red is always on the top. All of the other colors have wavelengths that are in between those two and will always be in the middle.
One question still remains though. When light is refracted in a prism, it comes out in a straight line. However, rainbows are always curved. Why does this happen?
The answer is in the shape of the object that causes the refraction. Raindrops are spherical and refract light in a circle. Because of our angle relative to the sun and the horizon, we typically only see half of that circle, the arc that we associate with normal rainbows. However, if you are in a plane flying high above the horizon and observe a rainbow from the right angle, you might have the opportunity to see the full, circular rainbow effect.
Rainbows are a seemingly mystical phenomenon, but, like most things, they can be explained with some basic observations and repeated under the correct circumstances. So, the next time you see a rainbow after a stormy day, enjoy the effect and appreciate the fascinating science behind these marvelous bands of color in the sky. |
The syntax of the __import__() function is:
__import__(name, globals=None, locals=None, fromlist=(), level=0)
- name - the name of the module you want to import
- globals and locals - determine how to interpret name
- fromlist - objects or submodules that should be imported by name
- level - specifies whether to use absolute or relative imports
Use of __import__() is Discouraged
The __import__() function is not necessary for everyday Python programs. It is rarely used and often discouraged.
Because the import statement calls this function internally, replacing __import__() can be used to change the semantics of the import statement. If you need that, it is better to use import hooks instead.
And, if you want to import a module by name, use importlib.import_module().
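As a quick sketch of that recommended alternative, using the same math.fabs() example that appears below:

import importlib

# Recommended way to import a module whose name is only known at runtime.
math_module = importlib.import_module("math")
print(math_module.fabs(-2.5))  # 2.5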
Example: How __import__() works
mathematics = __import__('math', globals(), locals(), [], 0)
print(mathematics.fabs(-2.5))
The fabs() method is defined in the math module. You can call this function using the following syntax:
import math
math.fabs(x)
However, in the above program we imported the math module under the name mathematics, so we can now also access fabs() using the following syntax:
mathematics.fabs(x) |
Graham’s number is a very, very big number that was introduced by the mathematician Ronald Graham. It arose as an upper bound for a problem in an area of mathematics called Ramsey theory, and it is one of the biggest numbers ever used in a serious mathematical proof.
Even if every digit of the number were written in the tiniest writing possible, it would still be too big to fit in the part of the universe that scientists have seen so far; in other words, the observable universe is simply too small a place to write this number down in.
Graham’s number is connected to the following problem in the branch of mathematics known as Ramsey theory: (Note that the symbol ^ is used to denote “to the power”)
Consider an n-dimensional hypercube, and connect each pair of vertices to obtain a complete graph on 2^n vertices. Then colour each of the edges of this graph either red or blue. What is the smallest value of n for which every such colouring contains at least one single-coloured complete subgraph on 4 coplanar vertices?
Graham and Rothschild proved in 1971 that this problem has a solution, N*, and gave as a bounding estimate 6 ≤ N* ≤ N, with the upper bound N a particular, explicitly defined, very large number. In other words, in 1971 the answer was only known to lie somewhere between 6 and that enormous upper bound. The lower bound has since been improved: N* is now known to be at least 11, and may well be larger, so the answer lies between 11 and Graham’s Number (G).
To convey the difficulty of appreciating the enormous size of Graham’s number, it may be helpful to express—in terms of exponentiation alone—just the first term (g1) of the rapidly growing 64-term sequence.
(I) 3×3×3 = 3^3 = 27.
(II) 3^^3 = 3^(3^3) = 3^27 = 7,625,597,484,987. You can think of this as 3 multiplied by itself 3^3 = 27 times: 3×3×3×…×3 with 27 threes.
(III.a) 3^^^3 = 3^^(3^^3) is a power tower of 3s that is 7,625,597,484,987 levels tall. Even a tower only four levels tall, 3^3^3^3 = 3^7,625,597,484,987, already has 3,638,334,640,025 decimal digits; the full tower is far too large for its digits to fit in the observable universe, and this is only the start.
(III.b) The first term of Graham’s sequence, g1, goes one step further still: g1 = 3^^^^3 (four arrows), which is unimaginably larger than 3^^^3.
(IV) g2 is equal to 3^^^…^^^3, where the number of arrows (^) is g1. In other words, there are 3^^^^3 arrows, or levels of hyper-exponentiation, in g2.
(V) g3 is equal to 3^^^…^^^3, where the number of arrows is g2, and so on.
(VI) g64 is equal to G, Graham’s number.
Although Graham’s number cannot be written out, its last ten digits can be computed using modular arithmetic on towers of 3: they are …2464195387. The rest of the number (all of the digits preceding these last ten) is unimaginably longer.
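Those trailing digits can be checked with a short computation. Graham’s number unwinds into an enormously tall power tower of 3s, and the last ten digits of such a tower stop changing once the tower is tall enough, so they can be found with Euler’s theorem applied to a chain of shrinking moduli. The Python sketch below is our own illustration; the helper functions are not from any standard library.

def phi(n):
    """Euler's totient of n, via trial-division factorisation (fine for n up to about 10**12)."""
    result = n
    p = 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result

def tower_of_threes_mod(height, m):
    """A power tower of `height` threes (3^3^...^3), reduced modulo m."""
    if m == 1:
        return 0
    if height == 1:
        return 3 % m
    # Every modulus in this chain has the form 2^a * 5^b, so gcd(3, m) = 1 and
    # Euler's theorem lets us reduce the exponent modulo phi(m).
    return pow(3, tower_of_threes_mod(height - 1, phi(m)), m)

# A tower of 64 threes is already tall enough for the last 10 digits to have stabilised,
# and Graham's number is a vastly taller tower of 3s, so its last digits are the same.
print(str(tower_of_threes_mod(64, 10**10)).zfill(10))  # should print the digits quoted above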
The point of Graham’s number (G) is that it serves as an upper bound for N* in this particular hypercube problem. The exact smallest value of N* remains unknown; it probably lies somewhere between 11 and G, which leaves quite a margin of error. |
In a new paper published in the Proceedings of the Royal Society B, scientists from the Queen Mary University of London argue that insects most likely have central nervous control of nociception (detection of painful stimuli); such control is consistent with the existence of pain experience, with implications for insect farming, conservation and their treatment in the laboratory.
Modulation of nociception allows animals to optimize chances of survival by adapting their behavior in different contexts.
In mammals, this is executed by neurons from the brain and is referred to as the descending control of nociception.
Whether insects have such control, or the neural circuits allowing it, has rarely been explored.
“Nociception is the detection of potentially or actually damaging stimuli, which is mediated by specialized receptors: nociceptors,” said Queen Mary University of London’s Professor Lars Chittka and colleagues.
“It can be accompanied by the feeling of pain, which is a negative subjective experience generated by the brain.”
“Nociception and/or pain can be inhibited or facilitated (modulated) by descending neurons from the brain (including the brainstem in vertebrates) called the descending pain controls.”
Based on behavioral, neuroscientific and molecular evidence, the authors argue that insects probably have descending controls for nociception.
“Behavioral work shows that insects can modulate nocifensive behavior,” the researchers said.
“Such modulation is at least in part controlled by the central nervous system since the information mediating such prioritization is processed by the brain.”
“Central nervous system control of nociception is further supported by neuroanatomical and neurobiological evidence showing that the insect brain can facilitate or suppress nocifensive behavior, and by molecular studies revealing pathways involved in the inhibition of nocifensive behavior both peripherally and centrally.”
The presence of descending nociception controls in insects is important and interesting for many areas of insect and human neuroscience.
The descending control of nociception in humans can also affect pain perception, so it is conceivable that a form of pain perception exists in insects, and can be similarly modulated.
“Mammalian researchers quantify pain through measuring non-reflexive, complex and long-lasting changes to the animal’s natural behavior, which are likely mediated by descending controls,” the scientists said.
“For example, in rodents, reduced feeding, locomotion and burrowing behaviors are used as pain indicators.”
“Thus, the examples of insects performing these kinds of behaviors may support the idea of pain in insects.”
“For example, insects show reduced attraction to appetitive stimuli if they have to also experience nociceptive stimuli. Further, recent evidence demonstrating sentience-linked cognitive abilities in some insects supports this idea, as well as studies indicating pain perception in other invertebrates.”
“This is important morally, as insects are often subjected to potentially painful stimuli in research and farming,” they said.
“The possibility of pain sensations in insects is also an important consideration for modeling human pain disorders.”
“The fruit fly Drosophila melanogaster is currently used as a model organism for human pain research, because of similarities in the genetics and behavioral responses to human nociception.”
“The abnormal and persistent pain states in humans seem to occur due to dysfunction of descending pain controls, so, if insects have descending nociception controls, they could potentially be viable models for human pain disorders.” |
Carbon Monoxide Poisoning
Carbon monoxide (known by the chemical formula CO) is a colorless and practically odorless gas. It is poisonous to people and animals, because it displaces oxygen in the blood. It is produced by the incomplete burning of solid, liquid, and gaseous fuels. Appliances fueled with natural gas, liquefied petroleum, oil, kerosene, coal, or wood may produce CO. Burning charcoal and running car engines also produce CO.
Every year, thousands of people are treated for CO poisoning in emergency rooms, and more than 200 people in the U.S. die from it.
What are the symptoms of CO poisoning?
Carbon monoxide can have different effects on people based on its concentration in the air that people breathe. Because you can’t smell, taste, or see it, you cannot tell that CO gas is present. The health effects of CO depend on the level of CO and length of exposure, as well as each individual’s health condition.
The initial symptoms of CO poisoning are similar to the flu (but without fever). They include headache, fatigue, shortness of breath, nausea and dizziness. Many people with CO poisoning mistake their symptoms for the flu. Because CO replaces oxygen in the blood, it can make people feel sleepy, or prevent those who are asleep from waking up.
At higher concentrations, people can experience impaired vision and coordination, headaches, dizziness, confusion, and nausea. In very high concentrations, CO poisoning can cause death.
What should you do if you experience symptoms of CO poisoning?
If you think you or your family are experiencing any of the symptoms of CO poisoning, get fresh air immediately. Open windows and doors for more ventilation, turn off any combustion appliances, and leave your home. Then call your fire department and report your symptoms. You could lose consciousness and die if you do nothing. It is also important to contact a doctor immediately for a proper diagnosis. Tell your doctor that you suspect CO poisoning is causing your problems.
Prompt medical attention is important if you are experiencing any symptoms of CO poisoning when you are operating fuel-burning appliances. Before turning your fuel-burning appliances back on, make sure a qualified serviceperson checks them for malfunction. |
A labelled map and questions for learning about the world’s continents and oceans.
In this activity, students learn about maps and scale.
In this activity, students learn how to make a legend for a map.
A worksheet for learning about how to follow directions.
An activity sheet for learning about Canada’s national borders.
Learn about Canada’s time zones with this activity sheet.
Learn more about Canada with this labelled map showing provinces and territories. |
The research began with a genetic disease called tuberous sclerosis complex, or TSC. Half of the patients afflicted with TSC also have some form of autism, and many have severe mental retardation. Even in mild cases of the disease, some learning disabilities and short-term memory problems often arise.
The UCLA researchers created a mouse model for TSC and tested Rapamycin, which is commonly used to fight tissue rejection in patients after organ transplants. Rapamycin was chosen because it is known to target an enzyme involved in the production of proteins needed for memory. The same enzyme is influenced by TSC proteins and accordingly, the TSC mice suffer from learning disabilities.
Dr. Alcino Silva and Dan Ehninger, both from the David Geffen School of Medicine at UCLA, ran the experiment on TSC mice. First of all, they verified that the TSC mice did have learning problems. Then they discovered that the learning problems were caused by biochemical changes which disrupted healthy function of the hippocampus (a brain structure that plays a key role in memory). These findings suggest that the mice have a specific problem with making the distinction between important and unimportant data while learning.
Three days after the beginning of Rapamycin treatment, the TSC mice’s learning was up to the level of healthy mice’s. Rapamycin corrected the biochemistry and restored hippocampal function, thus reversing the learning deficits and allowing the mice to create proper memories. This reversal of a learning dysfunction in adult mice suggests that the changes in brain function in human TSC patients do not result from a structural abnormality, but from a reversible biochemical disruption.
It’s too early to say that a cure for autism has been found. Even if the drug works in humans as it does in mice, it will probably only improve autism sufferers’ learning problems, not normalize them. Rapamycin clinical trials are currently being conducted by Dr. Petrus de Vries at England’s University of Cambridge. More research is still needed to see whether Rapamycin can improve short-term memory in TSC patients or other patients suffering from autism spectrum disorders.
TFOT has recently covered the story of a transgenic mouse model that explains the phenomenon of autistic savants. We also covered several stories on the mechanism of memory creation, such as research conducted at the Weizmann Institute, in which scientists discovered an enzyme capable of erasing memories by disrupting synapse maintenance, and the story of a mechanism allowing for memory deletion, which was discovered at Bristol University.
More information on the UCLA rapamycin research can be found at UCLA’s website. |
The Lungs and Pulmonary Capillaries
This section will delve into the anatomy of the lungs themselves, including the layout of the organ, the nature of the microscopic structures that exchange gases, and several of the terms used to describe how air moves through the lungs.
Inhaled air is split at the carina into the left and right mainstem bronchi; each of these pathways then subdivides into smaller paths that serve each individual lung lobe. These lobes are filled with countless alveoli and capillaries, which exchange gases in the blood for gases in the inhaled air.
The Structure of the Lung
The left lung has two lobes and the right lung has three lobes. The left lung is smaller than the right because it shares a large portion of the left chest cavity with the heart.
The left lung has an upper and lower lobe
The right lung has an upper, middle, and lower lobe.
Remember that the angle of the right mainstem bronchus encourages inhaled solids/fluids to lodge in the right lung.
Other Structures of the Lung
PLEURA: The lungs are surrounded by delicate membranes named pleura.
- The visceral pleura is the inner membrane that covers the surface of each lung and dips into the spaces between the lobes.
- The parietal pleura is the outer membrane which is attached to the inner surface of the thoracic (chest) cavity. This pleura also separates the pleural cavity from the mediastinum (which houses the heart and the great vessels).
Serous fluid lies between the two membranes and allows them to easily slide over each other without friction. This ensures that the lungs can easily inflate and deflate with minimal resistance.
HILUS: Each lung also has a hilus (also called the hilum), an indentation in the surface where the blood vessels, bronchi, and nerve fibers enter and exit. The pulmonary hila (the plural form) are located on the mediastinal (centermost) surface of each lung.
Gas Exchange Structures
PULMONARY CAPILLARIES: Deoxygenated blood enters into the pulmonary arteries from the right side of the heart and is delivered to the pulmonary capillaries, the smallest blood vessels inside of the lungs, attached to the walls of the alveoli.
ALVEOLI: The alveoli collect oxygen from inhaled air and transfer it to the deoxygenated and carbon dioxide-rich blood at the pulmonary capillaries. Simultaneously, the carbon dioxide waste is transferred from the deoxygenated blood into the lungs, to be exhaled.
The now oxygen-rich and carbon dioxide-depleted blood travels from the pulmonary capillaries through the pulmonary veins and into the left atrium of the heart for transit to the left ventricle and the systemic circulation.
SURFACTANT: Alveoli are aided by a thin film called surfactant, made up of lipids and proteins, which covers their surface and prevents them from collapsing upon exhalation; it reduces surface tension, keeping the alveolar walls from sticking together or collapsing. (In very premature babies, surfactant hasn't yet been produced, causing severe respiratory compromise.)
Pulmonary Function Terms/Definitions
The ability of the lungs to move air is far more complicated than it initially appears. Many interacting forces work together to keep lung volume closely regulated. Several complex terms and definitions have been included below for completeness. The most important ones to know are tidal volume, minute respiratory volume, and lung compliance.
TIDAL VOLUME: the amount of air that enters the lungs during a normal breath at rest. The average tidal volume is approximately 500 ml in adults. The same amount of air leaves the lungs during resting exhalation.
MINUTE RESPIRATORY VOLUME: the volume of air that enters or exits the lungs per minute (tidal volume × respiratory rate); a short worked example follows these definitions.
VITAL CAPACITY: the maximum amount of air that can be moved by the lungs in one breath cycle, i.e. the volume moved when you breathe out as much as possible and then take the largest breath possible.
RESIDUAL VOLUME (often confused with reserve volume): the amount of air that remains in the lungs after a maximal exhalation; it cannot be exhaled. This air reserve keeps the alveoli from collapsing.
LUNG COMPLIANCE: Lung compliance is defined as the ability of the lungs to stretch in response to movement of the diaphragm and chest wall. Abnormal compliance is generally seen only in long-term smokers and in patients with severe lung diseases that slowly damage the tissue.
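As promised under minute respiratory volume, here is a tiny worked example in Python; the respiratory rate is an illustrative resting value, not patient data.

tidal_volume_ml = 500     # average resting tidal volume in adults (ml)
respiratory_rate = 12     # illustrative resting breathing rate (breaths per minute)

minute_volume_ml = tidal_volume_ml * respiratory_rate
print(f"Minute respiratory volume: {minute_volume_ml / 1000:.1f} L/min")  # 6.0 L/min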