The World's Smallest Double Slit Experiment: Breaking up the Hydrogen Molecule
Contact: Paul Preuss, (510) 486-6249, [email protected]
BERKELEY, CA The big world of classical physics mostly seems sensible: waves are waves and particles are particles, and the moon rises whether anyone watches or not. The tiny quantum world is different: particles are waves (and vice versa), and quantum systems remain in a state of multiple possibilities until they are measured — which amounts to an intrusion by an observer from the big world — and forced to choose: the exact position or momentum of an electron, say.
On what scale do the quantum world and the classical world begin to cross into each other? How big does an "observer" have to be? It's a long-argued question of fundamental scientific interest and practical importance as well, with significant implications for attempts to build solid-state quantum computers.
Researchers at the Department of Energy's Lawrence Berkeley National Laboratory and their collaborators at the University of Frankfurt, Germany; Kansas State University; and Auburn University have now established that quantum particles start behaving in a classical way on a scale as small as a single hydrogen molecule. They reached this conclusion after performing what they call the world's simplest — and certainly its smallest — double slit experiment, using as their two "slits" the two proton nuclei of a hydrogen molecule, only 1.4 atomic units apart (a few ten-billionths of a meter). Their results appear in the November 9, 2007 issue of Science.
The double slit experiment
"One of the most powerful ways to explore the quantum world is the double slit experiment," says Ali Belkacem of Berkeley Lab's Chemical Sciences Division, one of the research leaders. In its familiar form, the double slit experiment uses a single light source shining through two slits, side by side in an opaque screen; the light that passes through falls on a screen.
If either of the two slits is closed, the light going through the other slit forms a bright bar on the screen, striking the screen like a stream of BBs or Ping-Pong balls or other solid particles. But if both slits are open, the beams overlap to form interference fringes, just as waves in water do, with bright bands where the wavecrests reinforce one another and dark bands where they cancel.
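For readers who want the textbook math behind those fringes: the idealized two-slit pattern has relative intensity 4·cos²(πd·sinθ/λ), where d is the slit separation and λ the wavelength. Here is a minimal sketch of that formula, with illustrative values that are not taken from the experiment described in this article:

```python
import math

wavelength = 500e-9  # 500 nm visible light (illustrative value)
slit_gap = 50e-6     # 50 micrometer slit separation (illustrative value)

# Idealized two-slit interference, ignoring the single-slit envelope:
# relative intensity I/I0 = 4 * cos^2(pi * d * sin(theta) / wavelength)
for theta_mrad in range(0, 25, 5):
    theta = theta_mrad * 1e-3  # viewing angle in radians
    rel = 4 * math.cos(math.pi * slit_gap * math.sin(theta) / wavelength) ** 2
    print(f"theta = {theta_mrad:2d} mrad -> I/I0 = {rel:.2f}")
```

The bright bands fall where d·sinθ is a whole number of wavelengths, which is exactly the reinforcement of wavecrests described above.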
So is light particles or waves? The ambiguous results of early double slit experiments (the first on record was in 1801) were not resolved until well into the 20th century, when it became clear from both experiment and the theory of quantum mechanics that light is both waves and particles — moreover, that particles, including electrons, also have a wave nature.
"It's the wave nature of electrons that allows them to act in a correlated way in a hydrogen molecule," says Thorsten Weber of the Chemical Sciences Division, another of the experiment's leading researchers. "When two particles are part of the same quantum system, their interactions are not restricted to electromagnetism, for example, or gravity. They also possess quantum coherence — they share information about their states nonlocally, even when separated by arbitrary distances."
Correlation between its two electrons is actually what makes double photoionization possible with a hydrogen molecule. Photoionization means that an energetic photon, in this case an x-ray, knocks an electron out of an atom or molecule, leaving the system with net charge (ionized); in double photoionization a single photon triggers the emission of two electrons.
"The photon hits only one electron, but because they are correlated, because they cohere in the quantum sense, the electron that's hit flies off in one direction with a certain momentum, and the other electron also flies off at a specific angle to it with a different momentum," Weber explains.
The experimental set-up used by Belkacem and Weber and their colleagues, being movable, was employed on both beamlines 4.0 and 11.0 of Berkeley Lab's Advanced Light Source (ALS). In the apparatus a stream of hydrogen gas is sent through an interaction region, where some of the molecules are struck by an x-ray beam from the ALS. When the two negatively charged electrons are knocked out of a molecule, the two positively charged protons (the nuclei of the hydrogen atoms) blow themselves apart by mutual repulsion. An electric field in the experiment's interaction region separates the positively and negatively charged particles, sending the protons to one detector and the electrons to a detector in the opposite direction.
"It's what's called a kinematically complete experiment," Belkacem says, "one in which every particle is accounted for. We can determine the momentum of all the particles, the initial orientation and distance between the protons, and the momentum of the electrons."
What the simplest double slit experiment reveals
"At the high photon energies we used for photoionization, most of the time we observed one fast electron and one slow electron," says Weber. "What we were interested in was the interference patterns."
Considered as particles, the electrons fly off at an angle to one another that depends on their energy and how they scatter from the two hydrogen nuclei (the "double slit"). Considered as waves, an electron makes an interference pattern that can be seen by calculating the probability that the electron will be found at a given position relative to the orientation of the two nuclei.
The wave nature of the electron means that in a double slit experiment even a single electron is capable of interfering with itself. Double slit experiments with photoionized hydrogen molecules at first showed only the self-interference patterns of the fast electrons, their waves bouncing off both protons, with little action from the slow electrons.
"From these patterns, it might look like the slow electron is not important, that double photoionization is pretty unspectacular," says Weber. The fast electrons' energies were 185 to 190 eV (electron volts), while the slow electrons had energies of 5 eV or less. But what happens if the slow electron is given just a bit more energy, say somewhere between 5 and 25 eV? As Weber puts it, "What if we make the slow electron a little more active? What if we turn it into an 'observer?'"
As long as both electrons are isolated from their surroundings, quantum coherence prevails, as revealed by the fast electron's wavelike interference pattern. But this interference pattern disappears when the slow electron is made into an observer of the fast one, a stand-in for the larger environment: the quantum system of the fast electron now interacts with the wider world (e.g., its next neighboring particle, the slow electron) and begins to decohere. The system has entered the realm of classical physics.
Not completely, however. And here is what Belkacem calls "the meat of the experiment": "Even when the interference pattern has disappeared, we can see that coherence is still there, hidden in the entanglement between the two electrons."
Although one electron has become entangled with its environment, the two electrons are still entangled with each other in a way that allows interference between them to be reconstructed, simply by graphing their correlated momenta from the angles at which the electrons were ejected. Two waveforms appear in the graph, either of which can be projected to show an interference pattern. But the two waveforms are out of phase with each other: viewed simultaneously, interference vanishes.
If the two-electron system is split into its subsystems and one (the "observer") is thought of as the environment of the other, it becomes evident that classical properties such as loss of coherence can emerge even when only four particles (two electrons, two protons) are involved. Yet because the two electron subsystems are entangled in a tractable way, their quantum coherence can be reconstructed. What Weber calls "the which-way information exchanged between the particles" persists.
Says Belkacem, "For researchers who are trying to build solid-state quantum computers this is both good news and bad news. The bad news is that decoherence and loss of information occur on the very tiny scale of a single hydrogen molecule. The good news is that, theoretically, the information isn't necessarily lost — or at least not completely."
"The Simplest Double Slit: Interference and Entanglement in Double Photoionization of H2," by D. Akoury, K. Kreidi, T. Jahnke, Th. Weber, A. Staudte, M. Schöffler, N. Neumann, J. Titze, L. Ph. H. Schmidt, A. Czasch, O. Jagutzki, R. A. Costa Fraga, R. E. Grisenti, R. Díez Muiño, N. A. Cherepkov, S. K. Semenov, P. Ranitovic, C. L. Cocke, T. Osipov, H. Adaniya, J. C. Thompson, M. H. Prior, A. Belkacem, A. L. Landers, H. Schmidt-Böcking, and R. Dörner, appears in the 9 November issue of Science and is available online to subscribers at http://dx.doi.org/10.1126/science.1144959.
Berkeley Lab is a U.S. Department of Energy national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California. Visit our website at http://www.lbl.gov.
1471: Why did Edward IV win his crown back?
This activity is not concerned with the details of the events of 1471 (at some stage I hope to add a guided role-play on the events to the website). It’s really here to exemplify one way of beginning work on an A level topic – using the enquiry process to help students build their confidence and independence in finding their way through a topic, secure in the knowledge that they’re on the right track to an effective explanation, in this case of Edward’s success. So, even if you don’t teach this topic, reading on might well be useful. It also includes ways of using ‘washing-line’ and ‘Diamond 9’ techniques for building explanations.
A common problem at A level is that students think they can't begin to make sense of a topic or start to put answers together until they have read and learned a great deal. But without hooks (the question and hypothesis of the early stages of the enquiry process) there's nothing to hang this learning on as it develops and that creates the ever-present danger of lots of reading and note-taking without any clear sense of direction. That in turn leads to frustration and reduced motivation – 'I've done loads of reading but haven't retained much/can't make sense of it/don't seem to be making progress.' Having a question and hypothesis to work on from the beginning makes a huge difference, guiding reading constructively and making for much more effective learning. This process also combats the fear of 'not knowing', making explicit that it's OK to know little or nothing at the outset and that uncertainty is a natural and accepted part of getting to grips with a topic.
The purposes of this activity are to
a) show students how to go about planning their way through a topic, using a question to generate a hypothesis and then to use those first ideas to guide their reading.
b) boost students’ confidence in working and reading independently.
c) build up an outline knowledge of the factors that explain Edward's success in 1471 and to suggest the factors that may have been most important.
You just need the 8 factor cards plus the two cards to mark the ends of the ‘washing line’. If you work as a class you only need one set of cards. If it’s being used as a group activity each group needs a set of cards.
1. The key to students tackling this effectively is to move straight into the question and to construct a hypothesis, a possible answer in the first lesson. So, begin by presenting students with the question – Why did Edward regain his throne in 1471?
EITHER ask them for ideas – after all, they'll have looked at why he gained the crown in 1461 (military qualities, Henry's incompetence etc.) and why he lost it (Warwick's role, French support for his opponents etc.) – but you can't rely on this as some students won't have the confidence to make the mental leaps across topics.
OR give them the 8 factor cards which list the range of reasons involved. In effect you’re giving them the elements of the answer. Their task is to organize them. This doesn’t mean you’re ignoring the prior knowledge mentioned in the ‘Either’ paragraph above. You’re just bringing it in via a different route, using the factor cards as stimuli to students’ memories of what they’ve done before.
2. Armed with the factor cards, the task for students is to organize them into groups:
a) reasons which seem to link to Edward’s own strengths and qualities
b) reasons which seem to be his opponents’ mistakes or weaknesses
c) other factors which don’t seem to fit into (a) or (b)
Note the use of the word ‘seem’, very hypothetical, very reassuring that you’re not meant to be certain or to know the answer at this stage.
The best way to do this is to set up a continuum or 'washing line', one end marked "Edward's strengths" and the other "Opponents' weaknesses" (see the 'Washing Line End Cards' in the Support section). This line needs to be long enough to create three clear groups of cards – one at each end and one in the middle. This washing line has a cunning relationship to an essay plan! In doing this, students have taken the first stage in creating a hypothesis by organizing the cards into groups. Leaving students to sort out the cards for themselves is important because this engenders discussion amongst them – they may need some guidelines such as 'It's OK to say you're not sure or you don't know' but discussion here will help create more effective writing later. Discussion is where you try out ideas – better tried out loud than battling to transfer half-formed ideas straight from brain to paper. The completed washing-line will look like the chart below.
3. Having sorted the cards on the line, they now need to move on to the second stage of creating their hypothesis by suggesting which factors were likely to have been more important. This can be done by creating a 'Diamond 9' pattern with the cards, even if there are only 8 of them! This physically shows which factors are at the top of a tree of importance. Students need to draw a sketch of the 'Diamond' pattern for reference and adjustment while they're reading. Discussion of the 'Diamond' pattern is one place where their knowledge of events before 1471 can come into play. The completed pattern could look like that shown in the chart below but several different patterns are possible.
It’s also vital at this stage to explain why they’re using this enquiry process and how the activities will help their reading – otherwise some at least will be nervous about creating hypotheses on the basis of minimal knowledge.
4. Now students have a list of factors, organized into a pattern, but they know hardly any detail. However, the factor headings – 'Edward's military leadership', 'Clarence changes sides' etc – are enough to enable them to start reading with a strong sense of direction. They know the question but, more importantly, they have the shape of an answer. The first part of their dual task is to look for detail on these factors in their books – exactly how was Edward a good military leader, when exactly did his leadership play a part, was this a factor at critical moments? The second part is to reflect on their hypothesis as they read – is it standing up to the evidence? Do they want to sketch a different 'Diamond' shape, moving the cards around into a different pattern?
Thus the initial identification of a question and hypothesis helps students read much more effectively because they've got a focus for that reading. The pages of their books no longer comprise an obstacle course full of completely unfamiliar material. The benefits of the approach are confidence, a sense of direction, improved motivation and more focussed reading. And, in the long run, better writing because (a) knowledge is more secure, having been built up in layers, from outline to depth and (b) the card sorting activities have created good discussion and effective discussion is a key contributor to good writing.
5. Creating a hypothesis at the outset of a sequence of work provides a structure that students can follow, realising that it's OK to know little or nothing at the outset but that they can build that knowledge and understanding as they go – and you're giving them the tools to become more and more independent in their learning. At some stage take away the scaffolding – the list of possible reasons – and insist they come up with their own ideas. Creating independent thinkers and learners is what A level should be about.
While you will want to end the unit by summarising the key reasons for Edward’s success, discussing the impact of this activity is important for helping students develop independence and confidence.
You could, for example, discuss:
Was the activity a success in helping you plan and structure your work?
Did it help direct your reading so you read more effectively?
Did it help your confidence in tackling this topic?
What have you learned about structuring how you learn about a topic that you can use again?
1. Did students understand the reasons for using the transferable enquiry process – does this need to be more explicit next time?
2. How much more independence will you give students next time? Which students can run with the technique and which need further support?
3. At what stage in A level do you introduce this idea and how does it build on earlier work at KS3 and GCSE?
You will see these artifacts or similar artifacts when you visit the museum. See how many of the questions you can answer now and find out if they are correct when you visit.
This is a special doll given to children, but they do not play with it. What is the purpose of this doll? What tribes use it?
What is this? What is it made of and how was it used?
Native Americans used these. A form of this is used today. What is it?
What do we call this type of shoe worn by Native Americans? What are they made of? Do they all look the same? Can you tell which tribe might have worn these?
There are some clues on this doll to tell us what culture area it is from. What area is this from and how do we know?
What You Will See...
Here is an example of one of our exhibits.
Native Peoples of Illinois
Wigwam- A wigwam is a dome-shaped dwelling used by the Native Americans of the Northeast Woodlands. Women made the wigwam by gathering all of the plant materials needed throughout the warmer seasons. One wigwam typically held one family unit.
Birch Bark- The birch bark tree is native to the Northeast Woodlands. This tree is unique because the outer bark can be easily removed from the trunk by making a vertical slit down the length of the tree, causing the bark to "pop off." If the stripping is done correctly, the tree will re-seal itself. These birch bark sheets were placed like shingles to create the outer wall of the wigwam. Birch bark was an ideal material for the outer wall, because it is both insect and water resistant.
Weegoob- Weegoob is the Ojibwa name for the inner bark of the basswood tree. This inner bark is the tree's transportation system that carries water and other plant products from the leaves to the rest of the tree. Because of this, the weegoob is flexible, strong, and water resistant. Therefore, it was great material for ropes and twine. Small strips of weegoob were tied to the sapling frame of a wigwam to hold the structure in place.
Braided Weegoob- Weegoob strips were also braided together to form a solid rope or handle. This handle could then be attached to a birch bark basket and used to hold a cooking basket over a fire.
Birch bark basket- A birch bark basket was handmade using the bark from a birch bark tree. The bark was first stripped from the tree and then a pattern was placed upon it. The pattern was then traced and cut. The ends were then folded over, and stitched together with weegoob to make its basket shape. These baskets were used to gather food and cook over fires. As long as there was water in the bottom of the basket, the basket would not burn. A well-made basket could last a family a lifetime.
Wigwam stitching- When creating the walls of the wigwam, pieces of birch bark were stitched together with weegoob to hold them in place. Before the weegoob was stitched through the pieces of bark, holes were punched through using a bone awl.
Deer hide- Deer hides were used in many ways in the Northeast Woodlands. Many tanned hides were used as blankets and clothing for the Native Americans. They were also hung over the entranceway of a wigwam to protect the inhabitants from cold and rain.
Beaver fur- Beaver furs were a very popular source of outerwear among the Native Americans due to their water resistant nature. They also became a very valuable commodity to both Native Americans and Europeans during the Fur Trade. Most of the upper Midwest tribes entered into the trade agreements, first with the French, and then with the British, to supply beaver pelts to the insatiable European market.
Japanese language: the language spoken on the island nation of Japan, in East Asia. Japanese has two alphabets, katakana and hiragana. Katakana is for words from outside of Japan. Hiragana is for words from inside Japan. Each alphabet has letters that you say as sounds, or syllables. Katakana have more straight edges and jagged corners than hiragana. Hiragana is more curvy than katakana.
There is a third way to write, called kanji, where every word or idea has a picture character. Thousands of kanji are needed to read. Many kanji are made from smaller, simpler kanji. Each kanji may be pronounced differently when used in a different way.
Japanese has only five vowel sounds. They are ah, E, oo, eh, and O. Japanese has a single sound between the English L and R. That is why it may be difficult for Japanese speakers to pronounce the English L. Japanese also has a sound not found in English, which is usually written Tsu.
Japanese has no spaces between words, so kanji help separate words in a sentence.
Japanese can be written in 2 ways.
- From left to right, to the bottom of the page.
- From top to bottom, to the left of the page.
.jp is a domain for Japanese web sites.
In Japanese, Japan is called Nihon, and Japanese is called Nihongo.
Close to 70 percent of emerging viral diseases, such as HIV/AIDS, West Nile, Ebola, SARS, and influenza, are zoonoses. But until now, there has been no good estimate of the actual number of viruses that exist in any wildlife species, according to a release from Columbia University's Mailman School of Public Health.
Scientists there have reported on a novel study that has estimated a minimum of 320,000 viruses in mammals awaiting discovery. That is a manageable number, they believe, and identifying and collecting information on those viruses could be a cost-effective strategy for early detection and mitigation of disease outbreaks in humans.
“Historically, our whole approach to discovery has been altogether too random,” says lead author Simon Anthony, D.Phil, a scientist at the Center for Infection and Immunity (CII) at Columbia University’s Mailman School of Public Health. “What we currently know about viruses is very much biased towards those that have already spilled over into humans or animals and emerged as diseases. But the pool of all viruses in wildlife, including many potential threats to humans, is actually much deeper. A more systematic, multidisciplinary, and One Health framework is needed if we are to understand what drives and controls viral diversity and following that, what causes viruses to emerge as disease-causing pathogens.”
To develop an estimate of total mammal viruses, the team studied flying foxes, the world’s largest bat species, in Bangladesh. The bats are known as a source of several outbreaks of Nipah virus in humans.
The team collected 1,900 biological samples from the bats and used polymerase chain reaction to identify 55 viruses in nine viral families. Of these, only five were previously known, including two human bocaviruses, an avian adenovirus, a human/bovine betacoronavirus, and an avian gammacoronavirus. The other 50 viruses were previously unknown, including 10 in the same family as Nipah. The researchers also used a statistical technique to estimate there were another three rare viruses unaccounted for in the samples, upping the estimate of viruses in the flying fox to 58. They used that figure to extrapolate to all 5,486 known mammals, yielding a total of at least 320,000 viruses.
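The headline figure follows from simple linear extrapolation; a short sketch of the arithmetic (assuming, as the estimate does, that other mammal species host roughly as many viruses as the flying fox):

```python
viruses_per_species = 58   # estimated viral richness of the flying fox
mammal_species = 5_486     # known mammal species at the time of the study

print(viruses_per_species * mammal_species)  # 318,188 -> "at least 320,000"
```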
Based on the flying fox study, the researchers were able to develop cost estimates for collecting virus information on a broader scale. They estimated their cost for surveillance, sampling, and discovery of all 58 flying fox viruses at $1.2 million, and used that figure to extrapolate a total cost of $6.3 billion for all mammals. Given the disproportionate cost of discovering rare viruses, they estimate that limiting discovery to 85 percent of estimated viral diversity would bring the cost down to $1.4 billion.
“By contrast, the economic impact of the SARS pandemic is calculated to be $16 billion,” says Anthony. “We’re not saying that this undertaking would prevent another outbreak like SARS. Nonetheless, what we learn from exploring global viral diversity could mitigate outbreaks by facilitating better surveillance and rapid diagnostic testing.”
The team plans to repeat the process in two follow-up studies: one in a species of primates in Bangladesh, to see if their viral diversity is comparable to the flying fox's, and another in Mexico, analyzing samples from six species of bats that share the same habitat to determine the extent to which they share viruses. With additional resources, they hope to expand the investigation to other species and viral families.
The paper is published in the journal mBio.
The Enigma Machine
In 1915, two Dutch naval officers invented a machine to encrypt messages. This became known as the Enigma machine. In 1918, Arthur Scherbius, a German businessman, patented the Enigma machine. In the mid-1920s, mass production of Enigma machines began, with 30,000 machines sold to the German military over the next two decades. The Poles set up a world-leading cryptanalysis bureau and hired leading mathematicians such as Marian Rejewski, who built his own model of the Enigma machine without ever having seen one.
In 1931, a German traitor told Rejewski that the Germans routinely changed the daily key indicator setting for the codes. To find the daily key, Rejewski built six replicas of the Enigma machine and connected them.
A morwong is a type of fish that belongs to the order Perciformes, which means "perch-like." These ocean-dwellers make up the family Cheilodactylidae, which contains 18 species of fish divided into five genera: Cheilodactylus, Chirodactylus, Dactylophora, Goniistius, and Nemadactylus. Other names for the morwong include butterfish, fingerfin, jackassfish, and moki. They are sometimes known as snappers, but this is a misnomer, as snappers make up an entirely separate family of fish called the Lutjanidae.
This type of fish has one single, continuous dorsal fin that contains 14 to 22 spines, and an anal fin that usually contains three spines. They can reach up to 3.28 feet (1 m) in length and 2.64 pounds (1.2 kg) in weight. They are sometimes characterized by a comical appearance, which is due to their small mouths and thick lips. Some species can also have bony protrusions above their eyes. Coloring can range from reddish-orange and white, to silvery blue, to brownish, depending on the species.
Morwongs are typically found in the Southern Hemisphere, especially in Australian waters, but they have also been known to inhabit the oceans near Japan, China, and the Hawaiian Islands. They thrive in tropical to temperate waters and prefer areas close to the shore. They often dwell in areas with reefs because they hide in small holes at night and frequently lay their eggs in beds of sea grass.
Small invertebrates that dwell on the ocean floor are the primary food source for morwongs. Their diet can include crustaceans, mollusks, and echinoderms. When feeding, they use their large lips to scoop up sea-floor sediments into their mouths and then filter out their prey.
Some species of morwongs are harvested as food by both commercial and recreational fishers. Around 1915, commercial fishing for morwong first began in earnest through the use of demersal otter trawls, the most common capture tool. They are sold whole or as fish filets in domestic fish markets and are said to have only a mild fishy flavor with medium to firm texture.
When fishing for morwong recreationally, anglers may find it best to drift over a reef area, because they are solitary hunters and do not feed in schools. Bait can include bits of fish, prawns, and squid, and should be firmly attached to the hook because morwongs like to suck at their food. Anglers should be prepared to struggle when the morwong bites, because they are tough fish that can put up quite a fight.
RC Circuits: Series and Parallel

An RC circuit is one where you have a capacitor and a resistor in the same circuit. Series and parallel combinations of resistors and capacitors are commonly explored with an oscilloscope, which makes it easy to watch how the voltages across the components change over time.

In a series RC circuit driven by a DC source, the capacitor charges rapidly toward the supply voltage: the capacitor voltage rises in the familiar exponential step response while the resistor voltage decays, both governed by the time constant τ = RC. When a circuit consists of only a charged capacitor and a resistor, the capacitor discharges its stored energy through the resistor with the same time constant.

In a parallel RC circuit, the resistor and capacitor share the same pair of nodes, so the output voltage is equal to the input voltage; this is largely why the parallel RC circuit is generally of less interest than the series circuit. Adding a resistor in parallel provides another path through which current can flow. The circuit also behaves very differently when AC is applied than when DC is applied: with DC the capacitor charges and then blocks further current, while with AC the source current divides between the two branches according to their impedances. Current leads voltage in a parallel capacitive circuit, and voltage leads current in a parallel inductive circuit.

The impedance of a resistor and capacitor in parallel is Z = 1/(1/R + jωC), where ω is the angular frequency of the source. The same methods extend to series and parallel RL and RLC circuits, where the inductor impedance jωL must also be included. In series-parallel circuits, the parallel portion is first reduced to an equivalent impedance before it is combined with the series elements.
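A minimal numeric sketch of these relationships (the component values and source frequency below are arbitrary examples):

```python
import cmath
import math

R = 1_000   # resistance in ohms (example value)
C = 1e-6    # capacitance in farads (example value)
f = 60.0    # source frequency in hertz (example value)
w = 2 * math.pi * f  # angular frequency

# Parallel RC impedance: Z = 1 / (1/R + jwC)
Z = 1 / (1 / R + 1j * w * C)
print(f"|Z| = {abs(Z):.1f} ohms, phase = {math.degrees(cmath.phase(Z)):.1f} deg")

# Series RC step response: capacitor charging toward a 1 V DC source
tau = R * C  # time constant in seconds
for t in (0.0, tau, 3 * tau, 5 * tau):
    v_c = 1.0 * (1 - math.exp(-t / tau))
    print(f"t = {t:.4f} s -> Vc = {v_c:.3f} V")
```

The negative phase angle shows the parallel combination is capacitive: the total current leads the applied voltage, consistent with the phase relationships noted above.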
Robert Roy Britt of Live Science reports on the research of a team led by University of Wisconsin-Madison professor John Valley that shows increased dryness in the Eastern Mediterranean between 100 and 700 of the Common Era (CE), with dramatic dips in rainfall in 100 CE and 400 CE. It raises questions about whether climate is somehow implicated in the decline of the Roman Empire (traditionally considered to have fallen in 476 CE) and the weakening of the Byzantine Empire in the 600s-700s CE.
Britt does not mention that the 600s were the era in which the Orthodox Caliphs of Islam took greater Syria and Egypt away from Byzantium; these lands were later ruled by the Umayyad Empire. Indeed, within the first century after Islam was founded, its adherents spread out with lightning speed to take over the southern third of the old Roman Empire, as well as the entirety of the Sasanid Empire of Iran. The Muslim conquests after 632 CE are rivaled in history for their speed and extent only by the 13th-century Mongol expansion. The Muslim empire, however, retained its civilizational identity and it was adopted by the conquered, whereas the Mongols were absorbed.
Since the Arab Muslims were from desiccated Western Arabia, they may have been better at dealing with a dry climate; Muslim water-management techniques were superior to those of other civilizations in that era. They may also have had advantages in logistics and fighting technique. The Bedouin tribesmen of Arabia that were the core of the Arab Muslim army had been used to raiding across arid territory. Camels need less water than horses and can cover more territory per day, so in dry conditions a camel cavalry has advantages over a horse cavalry. Bedouin had been probing Byzantine defenses in Syria all along; why were they suddenly able to over-run Damascus in 634 CE? Many historians have focused on the esprit de corps and unifying ideology they derived from the new religion of Islam, but other explanations should continue to be considered.
Institutions and social arrangements – how people deal with climate change – are more important than the change itself. Note that pastoral nomads, who take their herds to pasturage wherever it pops up, have advantages over farming peasants in dry eras. Peasants and urban people defect to tribes, or engage in migrations to regain access to water. Since the Bedouin were such an important social element in early Islam, a shift in social and economic power toward pastoralists would have benefited the new religion.
‘The work involved geochemical analysis of a stalagmite from Soreq Cave in the Stalactite Cave Nature Reserve near Jerusalem. Rain flushed organic matter from the surface into the cave, and it was trapped in mineral deposits that formed layers on the stalagmite. Geology graduate student Ian Orland determined annual rainfall levels for the years the stalagmite was growing, from approximately 200 B.C. to 1100 A.D.’
A lot of climate history is done from tree ring analysis, but it has been difficult to pursue in the Middle East because the arid conditions there are not conducive to long-lived trees like the California redwoods. Some analysis has been done for medieval Turkey by using antique wood from surviving buildings, churches, and ships, but getting a long data series that has wide implications has been difficult.
Richard Bulliet at Columbia University has used rainfall data for Mongolia in trying to understand medieval Iran’s climate history. But obviously Middle Eastern data would be preferable where it can be gotten.
Some scientists have suggested that rainfall can be a proxy for temperature, but that relationship is not accepted by everyone (warm weather might be associated with dry periods, cold weather with increased rainfall).
Climate history enjoyed a vogue a hundred years ago in areas like Roman history, but became discredited because its practitioners tried to explain too much by it and discounted other important explanations. We should avoid these temptations as new climate information allows another run at weather explanations in history. |
Q. How do you pronounce the Japanese "r"?
A. The Japanese "r" is different from the English "r". The sound is sort of between the English "r" and "l". To make "r" sound, start to say "l", but make your tongue stop short of the roof of your mouth, almost in the English "d" position. It is more like the Spanish "r".
The Japanese have trouble pronouncing and telling the difference between the English "r" and "l" because these sounds don't exist in Japanese.
Don't get too frustrated trying to pronounce it right. When you say words, there is no point in focusing on one syllable. Please listen carefully to how a native speaker pronounces it and repeat it the way you hear it.
If you can't manage it, "l" is a better option than the English "r", because the Japanese don't roll their tongue when speaking.
The Idran system was an uninhabited star system. This system was located approximately 40,000 light years from the galactic core in the Gamma Quadrant and approximately 70,000 light years from the Bajoran system in the Alpha Quadrant.
In the latter half of the 22nd century, the Quadros-1 probe conducted a stellar survey of stars in the Gamma Quadrant. One of the systems surveyed was Idran (FGC-1215). This ternary system consisted of the primary supergiant Idran and its twin spectral class O-type companions. There were no M-class planets in the system.
In 2369, the Danube-class runabout USS Rio Grande traveled through the Bajoran wormhole into the Gamma Quadrant. Upon entering into the quadrant, the runabout's computer identified the nearest system as Idran based on a hydrogen-alpha spectral analysis from the Quadros-1 probe survey. Idran was located 4.7234 light years away from the terminus of the wormhole. (DS9: "Emissary"; Star Trek: Voyager, Season 7 production art)
Solutions To Mathematics Textbooks/Algebra (9780817636777)/Exercises 26-50
If a and b had a nontrivial common factor k >= 2, then a = k*a' and b = k*b', so (ad - bc) = k*(a'd - b'c) = ±1. But then k would divide ±1, which is impossible for k >= 2, so a and b share no nontrivial common factor.
Alternatively, you must essentially show that a and b are coprime; that is, the numerator and denominator share no common factor. Another way of saying this is to say that gcd(a, b) = 1.
Let g = gcd(a, b), and write a = g*a' and b = g*b'. We can write (ad - bc) = ±1 as g*(a'd - b'c) = ±1. Thus g must be either -1 or 1, and thus a and b are coprime.
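As a quick numerical sanity check of this fact (an illustration, not part of the textbook's solution):

```python
import math
import random

# If ad - bc = +/-1, then a and b must be coprime.
random.seed(0)
for _ in range(100_000):
    a, b, c, d = (random.randint(-50, 50) for _ in range(4))
    if a * d - b * c in (1, -1):
        assert math.gcd(a, b) == 1, (a, b, c, d)
print("No counterexamples: ad - bc = +/-1 implies gcd(a, b) = 1.")
```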
- What makes DNA an ideal replacement for silicon-based chips?
- Microsoft takes a leap into the DNA computing world
- The development of DNA-based computers is held back by a cash squeeze
50 years ago, the smallest computers in the world weren’t that small. In fact, they were the size of an average room. Not only did they use a lot of electricity, but they also generated a lot of heat, which often caused problems. Then came a new generation of tiny electronic components, known as microprocessors, which allowed computers to shrink in size and grow in power. This made them far more useful and encouraged their rapid adoption. When these microprocessors first appeared, they were built on conventional silicon chips. And though you’d expect that they’ve changed a lot over the years, microprocessors today rely on the same material and the same basic tech. They’ve been refined nearly to the limits of physics, but today’s chips are still silicon-based processors. And although silicon has been a great material for microprocessors, it’s not the best – and it’s certainly not the most reliable, either.
In fact, a group of researchers from The University of Manchester proposed a far better alternative. As strange as it seems, the team thinks that DNA is the secret to a super powerful computer that “grows as it computes”. The project, which was published in the Journal of the Royal Society Interface, is based on the theory that DNA strands can be used to store and compute data, just like regular microprocessors. And the researchers believe that computers operated by DNA microchips could solve complex problems much faster than traditional ones.
What makes DNA an ideal replacement for silicon-based chips?
What makes this approach so exciting is that it’s potentially much, much faster than those we already have on the market. That’s because DNA-based computers can work on two completely different problems at the same time. Professor Ross D. King, the project leader, describes it as something akin to the process of finding information in a maze. Imagine that your computer is searching for a specific piece of information in a labyrinth. Once it gets to a crossroad where it can go either left or right, a typical computer tries one path first. Then, if it fails to find the information there, it’ll go back and try the second. This is because traditional computers rely on binary code, 1s and 0s. There are only these two possibilities at the most elemental level. But DNA offers four: G, T, C, and A. And those additional possibilities allow it to run much, much faster, especially for complex calculations.
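Here is a small sketch of that maze analogy (the maze layout and names are invented purely for illustration; this is our toy model, not the researchers' code). The first search tries one branch at a time and backtracks on failure; the second keeps every open branch alive at each step, the way the article describes a DNA computer taking both paths at once:

```python
# A toy acyclic maze as a graph: each junction lists the junctions it reaches.
maze = {
    "start": ["left", "right"],
    "left": ["dead_end_1", "goal"],
    "right": ["dead_end_2", "dead_end_3"],
    "dead_end_1": [], "dead_end_2": [], "dead_end_3": [], "goal": [],
}

def sequential_search(node, target):
    """Classical style: try one path, backtrack on failure, try the next."""
    if node == target:
        return True
    return any(sequential_search(child, target) for child in maze[node])

def parallel_style_search(start, target):
    """DNA-computer analogy: expand every open path at each junction."""
    frontier = [start]
    while frontier:
        if target in frontier:
            return True
        frontier = [child for node in frontier for child in maze[node]]
    return False

print(sequential_search("start", "goal"))      # True
print(parallel_style_search("start", "goal"))  # True
```

Both searches find the goal, but the sequential one may explore many dead ends first, while the breadth-style search reaches the goal in a number of steps equal only to the maze depth.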
For instance, for a conventional binary processor, really complex mathematical problems are simply too much. It’s estimated that it would take hundreds of years for a conventional computer to solve the really hard ones, whereas a DNA-based computer could potentially solve them in a couple of hours. And this is where DNA computers truly shine. Unlike conventional computers, DNA-based computers can take more than one path at the same time, as if they were replicating themselves at each point in the maze. Moreover, “All electronic computers have a fixed number of chips,” he explains. “Our computer’s ability to grow as it computes makes it faster than any other form of computer, and enables the solution of many computational problems previously considered impossible.”
Another advantage of DNA computers is their capacity to store large amounts of data. Silicon-based microprocessors enable computers to store a maximum of a few terabytes of data. That’s a lot, but a single gram of DNA, for example, could store 100 billion terabytes of data, and since DNA microprocessors are much smaller in size, an average desktop computer could be equipped with more than one DNA microprocessor, achieving even greater speeds and storage capacity. As Professor King puts it, due to the miniscule size of DNA, a “desktop computer could potentially utilize more processors than all the electronic computers in the world combined – and therefore outperform the world’s current fastest supercomputer, while consuming a tiny fraction of its energy”.
DNA storage is super-safe, too. As George Church, a geneticist and expert on DNA from Harvard University, emphasised, DNA is an awesome storage medium because of its durability and stability. For example, at sub-zero temperatures, DNA can last for thousands of years. Most digital data today is usually stored on media with limited lifespans. Take conventional storage solutions such as SD cards and flash drives as an example. If taken care of correctly, these can last you up to 10 years, while more traditional tools such as CDs or DVDs have a lifespan of between two to five years. Clearly, DNA is the better storage option.
Microsoft takes a leap into the DNA computing world
Scientists from The University of Manchester weren’t the only ones who were intrigued by DNA computers. In 2017, Microsoft announced it’ll start developing a DNA-based computer in the next three years. The first model, once fully developed, is expected to store only information such as medical records and police video footage. And it’ll still be larger than conventional desktop computers. According to Doug Carmean, a partner architect at Microsoft Research, the computer will be the size of a Xerox machine from the 1970s. But this isn’t the first time that Microsoft tapped into the potential of DNA as a data storage solution. In 2016, they collaborated with the University of Washington, and set a record by storing 200 megabytes of data onto DNA. Since then, they improved their system, and today it stores 400 megabytes of DNA-encoded data.
The development of DNA-based computers is held back by a cash squeeze
Despite all these breakthroughs, we’re still using traditional computers and we rely on their limited storage capacity. If you’re wondering why, just keep in mind that storing even a tiny piece of information in DNA form can cost a small fortune. Only one megabyte of data stored biologically is estimated to cost $12,500. However, compared to previous years, the price of this technology has dropped significantly, and it’ll continue to decrease over time. So, maybe DNA-based computers will undergo the same transformation as human genome sequencing did earlier. When first developed, sequencing an entire genome cost $2.7 billion. Today, the same process costs less than $1,000. In fact, some genome sequencing cases cost as little as $280.
DNA computers offer enormous potential for the future of computing, but there's still a lot of work that needs to be done in this field. Although our existing computers serve us well, once they reach their absolute maximum speed and storage capacity, the idea of having a super-efficient computer based on DNA won't seem so bizarre.
Angle of Elevation Calculator
Calculate the angle of elevation and angle of depression by entering the vertical height (rise) and horizontal distance (run) below.
How to Calculate the Angle of Elevation
Angle of elevation is the positive, or upwards, angle of a line of sight from an observer to an object. For instance, if you were standing outside looking up at the top of a tree, the angle of elevation is the angle your head would need to tilt in order to look at the top of the tree.
You can calculate the angle of elevation using trigonometry. Considering that the line of sight and the horizontal baseline form a right triangle, you can use a simple trig function to calculate the angle of elevation.
Angle of Elevation Formula
The formula to calculate the angle of elevation is:
tan(θ) = opposite ÷ adjacent
The tangent of the angle θ is equal to the length of the opposite side divided by the length of the adjacent side. If you substitute the vertical height (opposite) and horizontal distance (adjacent) of the object into this formula, you can calculate the angle of elevation.
Put another way, the equation becomes:
tan(angle of elevation) = rise ÷ run
The tangent of the angle of elevation is equal to the vertical height (rise) divided by the horizontal distance (run). In other words, the tangent of the angle is equal to the slope of the line.
Using the inverse tangent, you can isolate the angle of elevation on one side of the formula:
angle of elevation = atan(rise ÷ run)
Thus, the angle of elevation is equal to the inverse tangent of the vertical height (rise) divided by the horizontal distance (run).
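A minimal sketch of this formula in code (the 10 m rise and 20 m run are arbitrary example values; atan2 is used instead of a bare arctangent so the sign of the rise carries through):

```python
import math

def angle_of_elevation(rise, run):
    """Angle in degrees from vertical rise and horizontal run."""
    return math.degrees(math.atan2(rise, run))

# Example: an object 10 m above the observer, 20 m away horizontally.
print(f"{angle_of_elevation(10, 20):.1f} degrees")  # about 26.6 degrees
```

A negative rise (an object below the observer) returns a negative angle, which matches the angle-of-depression convention described in the next section.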
Angle of Elevation vs. Angle of Depression
If the vertical height of the object is negative, that is, the object is below the observer, then the angle is referred to as the angle of depression. A practical example of an angle of depression is the angle of an airline pilot looking down at a runway while landing an airplane.
In this case, the formula to solve for the angle of depression is the same as the angle of elevation. Note that when the vertical height is negative, the resulting angle will be negative as well. This simply means the angle spans downward relative to the horizontal axis or horizontal line of sight.
If you are interested in further reading, you’ll probably also be interested in our elevation grade calculator.
Frequently Asked Questions
How do you find the height of an object using the angle of elevation?
To find the height h of an object using the angle of elevation θ, you must also know the horizontal distance d between the observer and the object. If you know both of these, then you can calculate the height of the object as:
h = d × tan(θ)
Note: the units of the height will be the same as the distance. For example, if the distance is measured in meters, then the resulting height will also be in meters.
How do you calculate distance using the angle of elevation?
To find the distance d to an object of height h using the angle of elevation θ, you can use the following formula:
d = h ÷ tan(θ)
Note: the units of the distance will be the same as the height. For example, if the height is measured in meters, then the resulting distance will also be in meters.
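Both FAQ formulas in one small sketch (the example numbers are arbitrary):

```python
import math

def height_from_angle(distance, angle_deg):
    """h = d * tan(theta)"""
    return distance * math.tan(math.radians(angle_deg))

def distance_from_angle(height, angle_deg):
    """d = h / tan(theta)"""
    return height / math.tan(math.radians(angle_deg))

# Example: a 30-degree angle of elevation measured 50 m from an object's base.
h = height_from_angle(50, 30)
print(f"height = {h:.1f} m")                             # about 28.9 m
print(f"distance = {distance_from_angle(h, 30):.1f} m")  # recovers 50.0 m
```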
What is the maximum angle of elevation?
The maximum angle of elevation occurs when the object is directly above the observer. If this is the case, then the maximum angle of elevation is 90 degrees.
How do you measure the angle of elevation?
If you don’t know either the distance to the object or its height but still want to measure the angle of elevation, you can use specific tools to do so, such as a theodolite, which is commonly used to measure angles in surveying.
Osteoporosis is a bone disease that literally translates “porous bone.” If you look at the healthy bone under a microscope, it looks similar to a honeycomb. With osteoporosis, there are larger holes and spaces in between the bone, meaning you’ve lost bone density or mass. As your bones become less dense, they become weaker and are more likely to break. In seniors, this poses the threat of kyphosis (curving of the spine) and a potentially fatal hip fracture.
1. PREVALENCE: The National Osteoporosis Foundation estimates about 53 million Americans have osteoporosis. Discovery Health reports that approximately 71% of women with osteoporosis don’t even know they have it, and 86% who have osteoporosis are not being treated.
2. CALCIUM: Young adults should be consuming between 1,000 to 1,200 mg of calcium daily through food, and if needed, supplements to help keep your bones strong. Women 50+ should be getting 1,200 to 1,300 mg of calcium a day. Good sources of calcium include low fat milk, yogurt, and cheese. Your doctor may prescribe a calcium + vitamin D supplement based on your specific needs.
3. MENOPAUSE: Your risk for developing osteoporosis increases after menopause because your body’s natural production of the hormone estrogen declines. Estrogen helps keep bones strong. Because post-menopausal hormone therapy increases the risk for breast cancer, heart attack, stroke, and blood clots, your doctor will discuss if hormone therapy is right for you. Women taking estrogen products are urged to have yearly breast exams, perform monthly breast self-exams and receive periodic mammograms.
4. BONE MASS: Without treatment, women lose as much as 25-30% of their bone mass in the first five to seven years following menopause. Bone-loss rates can be slowed by regular weight-bearing and muscle-strengthening exercises. Activities such as walking, gardening, jogging, and playing tennis help to strengthen bones and connective tissue.
5. BONE DENSITY TEST: A bone density test (dual energy x-ray absorptiometry or DEXA) measures the mineral density in your hip bones and spine to determine your risk of developing osteoporosis. This test takes about 20 minutes and is not usually performed until after menopause, unless you have an unusually high risk for osteoporosis. It is quick, painless and a non-invasive procedure (no needles).
6. PREVENTION & TREATMENT: While there is no cure for osteoporosis, it is treatable. Medications are available to help either slow bone loss or increase the rate of bone formation. Your doctor can discuss medication options with you, but you can help prevent bone loss and fractures from osteoporosis with proper nutrition, exercise, and by not using tobacco products.
Carbs, protein and fat are all nutrients to be familiar with. This way, you’ll know exactly how much of each one you should be eating as part of a healthy diet. Find out what role they play in your body and how to balance your intake. There are many diet mistakes to avoid, but with some basic guidance, you’ll dodge any diet traps.
As part of a balanced and healthy diet, getting familiar with carbs, protein and fat is key if you want to put together examples of fitness meal plans that are right for your needs.
Carbohydrates are the primary source of energy for the human body. Your brain consumes over 25% of your glucose intake. Muscles also consume large amounts of carbs, which they use after they’re stored as glycogen. Carbs and exercise go hand in hand. That’s why many foods that contain carbs are in the top 10 foods to eat for athletic performance.
There are two types of carbs: simple and complex carbs. Glucose, fructose, galactose, lactose, sucrose are all simple carbs, whereas starch, cellulose and glycogen are all complex carbs. These are also often mistakenly called “slow sugars.” Carbs raise your blood sugar. This is known as a blood sugar spike that’s measured using the glycemic index (GI). The lower a food’s glycemic index value is, the slower the blood sugar spike will occur. These carbs are then digested slowly, which is why they’re called “slow sugars.”
To get enough carbohydrates to fuel your workouts, your daily carbohydrate intake should range from 40 to 55%.
Fats, also known as lipids, contribute to your nervous system and cell membrane structure. They’re also the precursors of hormones and molecules that contribute to a healthy immune system.
These essential nutrients are made up of triglycerides and phospholipids, which are both made up of saturated and unsaturated fat. Each one has a different chemical structure. Not all of them can be produced by the human body, so getting these nutrients from your diet is vital.
Omega-3 and omega-6 are unsaturated fatty acids that should make up most of your fat intake. However, eating high amounts of saturated fat found in meat and dairy products can increase your risk of cardiovascular disease. Your daily fat intake ranges from 35 to 40%.
Protein is one of the macromolecules that make up all living things. It plays many roles throughout your body and is so important for healthy muscle fiber regeneration. Your diet as someone who exercises must contain enough protein. A person’s needs vary from 10 to 15% of their daily intake. You don’t need to start eating a high-protein diet in order to get results.
Controlling your portion sizes and getting the right nutrient intake is important when you exercise regularly with three 20-minute workouts a week.
Carbs are fuel when you exercise. That’s why you need to eat 4 to 5 g of carbs per kilogram of body weight per day to get results if you want to lose weight or gain muscle.
Fat is actually good for your health. During an intense workout, your body relies on it as fuel, so don’t overlook this nutrient intake. Eat 1.2 g per kilogram of body weight per day.
It’s important to watch your protein intake, whether it’s animal or plant-based. If you want to lose weight, you need to eat between 0.8 to 1.2 g of protein per kilogram of body weight per day. If you want to gain muscle, increase your intake between 1.8 and 2 g of protein per kilogram of body weight per day. If you eat a vegetarian diet, paying special attention to your protein intake in order to gain muscle is key.
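To make these per-kilogram guidelines concrete, here is a minimal sketch (the 70 kg body weight is an arbitrary example; the ranges are the ones quoted in this article):

```python
def daily_macro_targets(weight_kg, goal):
    """Daily gram targets from the per-kilogram guidelines in this article."""
    protein_ranges = {"lose_weight": (0.8, 1.2), "gain_muscle": (1.8, 2.0)}
    low, high = protein_ranges[goal]
    return {
        "carbs_g": (4 * weight_kg, 5 * weight_kg),  # 4 to 5 g/kg/day
        "fat_g": 1.2 * weight_kg,                   # 1.2 g/kg/day
        "protein_g": (low * weight_kg, high * weight_kg),
    }

# Example: a 70 kg person training to gain muscle.
print(daily_macro_targets(70, "gain_muscle"))
# carbs 280-350 g, fat 84 g, protein 126-140 g per day
```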
80% of your results depend on your eating habits, which is why getting familiar with carbs, protein and fat will help you start eating a healthy diet that gets you to your fitness goal. Still, knowing these tips isn't always enough to make balanced meals. You can rely on the FizzUp Nutrition Guide, which includes over 150 tips, tricks and recipes, to cook wholesome food and build great eating habits into your everyday life.
The bones of the body form a framework called the skeleton. This framework supports and protects the softer tissues. All the higher animals have an internal skeleton (endoskeleton) with a central spine, or backbone. Many lower animals, such as insects and shellfish, carry their skeletons on the outside (exoskeleton). Other creatures of still lower types have no skeleton. The jellyfish, squid, and octopus, for example, are supported primarily by the water in which they live (see invertebrates).
There are between 200 and 212 bones in the human skeleton. Whether it serves as a framework for the attachment of muscles or as a protection for delicate organs, each bone is shaped with exactness and precision. Some bones are knit solidly together, others are loosely connected. Each, however, is designed to meet its particular needs.
The human skeleton is divided into two main parts—the axial skeleton and appendicular skeleton. The axial skeleton consists of the head, neck, and trunk. The appendicular skeleton is made up of the arms and legs.
In infants the spine consists of 33 irregular bones, called vertebrae. In adults the nine bones at the lower end of the column have fused into two masses, the upper five uniting to form the sacrum and the remaining four the coccyx. Thus during the greater part of a person’s life the backbone consists of 24 vertebrae (seven cervical, in the neck; 12 thoracic, in the chest; five lumbar, in the lower back), one sacrum, and one coccyx.
Each vertebra in a human’s backbone is constructed like a ring. These vertebral rings, piled one upon the other with a padding of cartilage between, are studded with bony projections, called processes, which serve for the attachment of muscles and for articulation with other bones. The spinal canal, which is the hollow inside the backbone, contains the fragile spinal cord. Between each pair of vertebrae are openings through which the many spinal nerves pass.
If a person’s vertebral column were a straight pillar, that person would be jarred into a nervous wreck. To help prevent injury to the spinal cord and brain, nature has given the backbone four curves.
Jointed to the thoracic vertebrae are 12 pairs of ribs, but only the upper seven opposing pairs are attached in front to the sternum, or breastbone. Three of the remaining five pairs are attached by cartilage to the rib immediately above, and the last two are unattached. The breastbone, situated in the midline of the chest wall, is shaped like a blade. The sternum, the ribs, and the 12 thoracic vertebrae make up the framework of the thoracic cavity.
Whatever the length of the neck, it is composed of the seven cervical vertebrae. The upper two are known as the atlas and axis. The atlas supports the head and rotates with it on a pivotlike process (the odontoid process) of the axis.
The skull is composed of cranial and facial bones. Eight bones unite to enclose the brain within a strong box, the cranium, and to form sockets for the eyes and ears. At the back of the cranium is the occipital bone, perforated by the foramen magnum, a passage for the spinal cord. Two parietal bones form the roof and the principal part of the sides. Below each parietal bone is one of the temporal bones, which contain sockets for the ears and bear the knoblike cellular parts called the mastoid processes. The frontal bone shapes the forehead. The sphenoid and ethmoid form the eye sockets, separate the brain from the nose, and serve as the base of the cranium. In each of the two temporal bones are three tiny bones of the middle ear—the malleus (hammer), incus (anvil), and stapes (stirrup)—which are capable of making extremely fine movements. The hyoid is a U-shaped bone in the front of the neck at the root of the tongue.
The face has 14 bones: a lower jawbone, known as the inferior maxillary, or mandible; 2 superior maxillaries, which make up the upper jaw and part of the roof of the mouth; 2 nasal bones, which form the bridge of the nose; the vomer, which forms the back and lower part of the nasal septum; 2 inferior nasal conchae (also called inferior turbinated bones); 2 malar, or zygomatic, bones, which form the cheeks; 2 lacrimal bones; and 2 palatine bones.
Some bones of the skull contain sinuses, or spaces filled with air. The sinuses connect with the nose and are lined with a mucous membrane similar to that found in the nose.
Jointed to the axial skeleton are the bones of the upper and lower extremities. These constitute the appendicular skeleton. The arms are supported by a shoulder girdle, which has on each side a collarbone, or clavicle, and a scapula, or shoulder blade. The humerus is the bone of the upper arm, and the ulna and radius form the forearm. The hand has 8 carpals, or wristbones; 5 metacarpals, which form the palm; and 14 phalanges, which make up the fingers.
The bony framework of the lower extremity is built on the same plan as the upper extremity. Each of the two innominate bones, or hip bones, consists of three parts—the ilium, the ischium, and the pubis. The hip bones unite with the sacrum and coccyx of the vertebral column to form the pelvic girdle, which supports the legs. The femur is the thighbone, the patella forms the kneecap, and the tibia and fibula are the bones of the lower leg, with the tibia being the shinbone. The skeleton of the foot consists of three parts. The ankle has 7 small tarsal bones; 5 metatarsal bones form the arch; and 14 phalanges make up the toes.
The bones are all smoothly jointed and firmly held together by flexible ligaments that keep the bones aligned during movement. The ends of the bones in each typical joint are padded with cartilage, covered with a thin sheath called the synovial membrane, and oiled with a lubricating, or synovial, fluid so that they can be used constantly and yet be protected against wear and tear. The degree of movement possible in a joint varies. Joints, therefore, are classed as immovable, yielding, or having free motion. For example, the joints of the cranium are immovable; the vertebrae are yielding; and the shoulder joint has free motion. The muscles in general are attached to the bones across the joints so that movements are brought about by the shortening, or contraction, of opposing pairs of muscles. (See also anatomy, human; bone; joint.)
Children And Young People’s Mental Health: How Do I Know If My Child Has Mental Health Issues?
The children’s mental health dilemma: Something seems wrong but you can’t be sure what it is. Perhaps you’re noticing changes in your child’s behaviour or moods. You may struggle to recognise the child you know so well. Or perhaps you’ve long wondered if something is wrong but haven’t been able to decipher if your child is just quiet or hot-headed, or is suffering from a mental health issue. So what are the signs of children’s mental health problems and how might you approach the issue? Let’s take a look.
What is a children’s mental health problem?
Mental health is the overall health of how a person thinks, behaves and manages their emotions. A person has a mental health problem when shifts or patterns in emotions, thoughts and behaviour create distress and disturb their ability to go about daily life.
Just as adults can have mental health disorders, so can children, although the symptoms may vary.
Mental illness in children is usually defined as delays or disturbances to the cultivation of age-appropriate cognition, social skills, behaviour and management of emotions. Such problems are very upsetting to children and impair their capacity to function well in school, in the home and in other social settings.
Mental illnesses in children
Mental health disorders in children (including the neurodevelopmental disorders that mental health professionals deal with) might include:
- Anxiety disorders – Incessant fears, alarm and worries that disturb a child’s capacity to take part in education, play or common age-appropriate activities with others.
- Autism spectrum disorder (ASD) – A neurodevelopmental disorder that begins to manifest itself early on, most often before a child turns three. Autism is a spectrum that differs in severity from child to child, but a person with autism will find communication and interaction with others challenging.
- Depression and other mood disorders – Depression involves persistent feelings of unhappiness and a flat mood, usually accompanied by a loss of interest and motivation, which obstructs a child's capacity to perform well in school and socialise with others. Bipolar disorders involve severe mood swings, alternating between highs in emotion and behaviour (which may involve risky acts) on the one hand and depression on the other.
- Eating disorders – Eating disorders (like bulimia, anorexia, and binge-eating disorder) encompass rumination over one’s own body image and pursuit of an ideal body type, disordered and harmful patterns of diet and eating, and dysfunctional thinking about weight and weight loss. They can create enormous mental, emotional, social and educational disruption in a child’s life, and lead to physical repercussions that can be life-threatening.
- Attention-deficit hyperactivity disorder (ADHD) – This is a neurodevelopmental disorder that begins in childhood. Compared with the majority of their peers, children with ADHD struggle more with focus, impulsivity, and hyperactivity. The combination of these struggles varies, and there are different types of ADHD (hyperactive-impulsive type, inattentive type, and combined type), with girls more often presenting as inattentive, and boys more likely to be hyperactive-impulsive. The hyperactive-impulsive type is much more of a presence in popular culture and public imagination and as a result, ADHD may be more difficult to identify in girls.
- Schizophrenia – This is a serious mental health disorder of thinking, perception and behaviour that leads someone to become disconnected from reality (psychosis). It most commonly appears in late adolescence and early adulthood. Schizophrenia creates delusions, hallucinations and disordered behaviours, perceptions and thought patterns.
- Post-traumatic stress disorder (PTSD) – PTSD involves sustained emotional disturbance, agitation, upsetting memories and flashbacks, anxiety, nightmares and dysfunctional behaviours resulting from traumatic experiences such as injury, abuse or violence.
The challenge of mental illness in children
Dr Radha Bhat, Consultant Child and Adolescent Psychiatrist says: “The problem with identifying children’s mental health issues is that healthy childhood development is by definition a process of change. Every stage in childhood development can present with a variety of mental health difficulties but the presentation can vary based on a child’s age. Furthermore, it is often more challenging for a child to identify and discuss their feelings and behaviours.
"The changes we might identify as mental health issues in a young person can cross over with healthy development – particularly in adolescence. The teenage years are a time of immense physical, mental and emotional upheaval purely in terms of growth and hormonal changes, before you even begin to factor in social pressures, increasing awareness of the world and its problems, and the academic pressures of exams. Even the most lovingly nurtured and well-adjusted child can struggle."
So how do you know if the changes your child is presenting with are just growing pains or signs of children's mental health issues? By observing and following your instincts as a parent, you can usually tell if something isn't right. If a change is not in keeping with your child's age and maturity, you need to seek help.
Of course, no one knows your child better than you but even so, don’t try to diagnose them. Diagnoses can be complex, and mistakenly attaching a label to a child can in itself affect them emotionally.
Children’s mental health problems: signs and symptoms
It can be difficult to ascertain if a child's behaviour is 'just a phase'. Particularly if your child is entering their teenage years, they may be more reclusive as a matter of course, and you may worry about drawing them further into themselves if you 'push it'. But still, this is your child and you need to know. So, what can you do and what should you look out for?
Common signs and symptoms of children’s mental health problems include:
- Abrupt shifts in behaviour, personality or mood
- Unexplained physical changes, such as losing or gaining weight
- A sudden decline in school reports and academic performance
- Self-harm (you may notice unusual marks on your child's arms or legs, for example) or talking about self-harm
- Trouble sleeping
- Changes in social behaviour (for example, avoiding spending time with friends and family)
- Continual sadness for a fortnight or more
- Talk of death or suicide
- Unusual eating habits
- Severe irritability and/or outbursts
- Uncontrolled, potentially harmful conduct
- Problems focusing on school work and everyday routines
- Skipping or trying to avoid school
What to do if you think your child may have a mental health problem
One of the things children often want most is to be listened to and have their struggles taken seriously by their parents. They might want support in changing something, they might need practical assistance, or they might just need a hug. That is, if they are actually willing to talk. It can be very difficult for a parent to get a child to open up about what’s troubling them, not least because the child may not have the words themselves.
Yes, young people's emotional struggles and unhappiness most often shift and pass. But if your child is experiencing ongoing problems lasting for weeks, it's advisable to seek help and support. Early intervention can make all the difference, so if you are worried, talk to your child's healthcare provider and describe the signs and symptoms you're seeing. It can also be a good idea to speak with other family members and care providers, as well as your child's teacher, to find out whether they are noticing changes too, and tell your doctor about anything they report.
In the UK, the NHS runs Child and Adolescent Mental Health Services (CAMHS) for children’s mental health problems, and if necessary, your GP can refer your child. CAMHS are made up of a range of different professionals (for example child and adolescent psychiatrists, therapists and nurses) who work together to assess, diagnose and treat children’s mental health problems such as depression, eating disorders, anxiety and so on. CAMHS does incredible work, but their waiting lists are often extensive, so in the interests of urgency, some parents opt to go private.
Here at The London Psychiatry Centre, our CAMHS team has decades of experience in effectively diagnosing and treating children's mental health problems. We take self-referrals and referrals from GPs and other professionals, and can provide video and telephone appointments too. We work with you therapeutically as a family and individually, in order to understand the biopsychosocial aspects, get to the root of the problem, and help put your child back on the path of healthy development and happiness.
Agricultural Literacy Curriculum Matrix
Beef: Making the Grade
9 - 12
Students will evaluate the USDA grading system for whole cuts of beef and discuss consumer preferences and nutritional differences between grain-finished and grass-finished beef. Students will also distinguish various labels on beef products and discuss reasons for the government’s involvement in agricultural production, processing and distribution of food.
- Glue sticks
- USDA Quality Grade Puzzle, 1 copy per student
- Grading Beef PowerPoint
- Blank piece of cardstock/construction paper, 1 per student
- Colored pencils or markers (optional)
- Completed USDA Quality Grade Puzzle (from Interest Approach)
- Grading Beef PowerPoint
- Beef Labels handout, 1 copy per student
- Grass-finished or Grain-finished beef? infographic
Essential Files (maps, charts, pictures, or documents)
Vocabulary
concentrate: animal feed that contains low amounts of fiber and high amounts of energy
finishing weight: the weight cattle reach (usually between 1,200 and 1,500 pounds) when they are ready for harvesting
forage: animal feed that contains high amounts of fiber and low amounts of energy
grain: a general term used to describe a mixture of specific grains such as corn, barley, and oats
harvest: to kill or slaughter an animal for human use
marbling: the white streaks of fat found within the meat
regulate: to control or supervise by means of rules and regulations
USDA: United States Department of Agriculture
USDA quality grade: a grade given to whole cuts of beef based on the age of beef and degree of marbling
weaning: when a young mammal no longer receives milk from its mother
Did You Know? (Ag Facts)
- U.S. farmers and ranchers produce 18% of the world’s beef with only 8% of the world’s cattle.1
- Monounsaturated fat—the fat found in avocados and olive oil—makes up about half of all fat found in beef.2
- Not all grass-finished beef is organic. In order to be organic, the beef must meet the USDA’s organic regulations, which require cattle to exclusively graze on certified organic pastures.2
- Grain-finished beef has a lower carbon footprint than grass-finished beef. Cattle that are fed grain produce less methane and reach market weight more quickly, thus using fewer natural resources.2
Background Agricultural Connections
Beef can be found on dinner tables and in restaurants worldwide. As consumers select beef entrées from a menu or purchase steaks at the grocery store, they may see a wide variety of product labels and beef terminology that raise questions. What does USDA Prime mean? Why is this steak more expensive than the others? Do I want grass-finished beef or grain-finished beef?
There are many factors to consider when purchasing beef, including the USDA quality grade and food production system labels. When there is high demand for a specific type of beef, cattle ranchers typically produce products that meet consumer needs and preferences. In response to consumer demand, some cattle producers choose to raise their cattle to meet the requirements for labeling the beef as "organic," while others raise grass-fed beef for consumers who prefer lean meat. It is important for consumers to understand each of these factors so they can make informed decisions and select beef that meets their needs.
USDA Quality Grade
Whole cuts of beef are given a quality grade by USDA meat graders. Quality grades act as a common language within the beef industry and are based on the degree of marbling found within the meat, as well as the age and maturity of the beef.
- Prime: Prime beef comes from young, well-fed cattle. The meat has abundant marbling and can typically be found in restaurants and hotels.
- Choice: Choice beef is considered high quality, but has less marbling than Prime. Roasts and steaks from the rib and loin will be very juicy, tender, and full of flavor.
- Select: Select beef is leaner than Prime and Choice. It may still be tender, but will lack flavor and juiciness due to less marbling. It is suggested that many Select cuts should be marinated before cooking to maximize flavor.
- Standard and Commercial: Standard and Commercial grade beef is typically sold as ungraded or store brand meat.
- Utility and Canner: Utility and Canner beef is rarely sold at retail, but can be used to make ground beef and other processed products.
Grass-finished Beef vs. Grain-finished Beef
Some consumers are increasingly interested in knowing how cattle were fed before the beef was harvested. Grass-fed or grass-finished labels can be found on beef products in grocery stores. Grain-fed labels do exist; however, they are typically not put on beef packages because most beef is raised that way. What many consumers don’t realize is that all cattle spend the majority of their lives eating grass and forage products. Calves are raised with their mothers on pasture or grass until they are between 6 and 12 months of age. After weaning, cattle are fed to a finishing weight for harvesting, and this is where cattle are considered either grass-finished or grain-finished.

Grass-finished cattle take longer to reach a finishing weight; however, it is cheaper to raise cattle on grass and pasture than to feed grain. Without grain and concentrates in the diet, the meat will be leaner, which some consumers prefer. Cattle that are finished on a grain diet fatten up and reach a finishing weight faster than grass-finished cattle, and the marbling that comes from a grain diet makes a steak juicy, tender, and flavorful. Cuts graded USDA Prime or USDA Choice are typically more expensive because they have the most marbling. There are slight nutritional differences between grass-finished beef and grain-finished beef; however, all beef is packed with protein, and a 3-oz serving of either kind will supply consumers with 50% of their recommended Daily Value of protein.1 Grass- and grain-finished beef are both nutritious, and consumers should purchase meat that best fits their needs and preferences.
Farm Production System Labels
Production system labels used on beef products indicate how cattle were raised and fed before harvesting. Organic and Naturally Raised labels are commonly found among grass- and grain-finished beef products. Consumers should make sure that organic beef includes the green and white “USDA Organic” label. Refer to the Decoding Labels infographic produced by the Beef Council for specifications of each label.
Interest Approach - Engagement
- Project slide one from the Grading Beef PowerPoint on the board.
- Ask students the following questions to lead a class discussion:
- Do cattle really receive grades?
- What kind of grades do we give beef cattle?
- How do grades affect consumers?
- Pass out a USDA Quality Grade Puzzle to each student or each pair of students.
- Instruct students to cut out each of the puzzle pieces.
- Have each student or pair of students race to put the puzzle together.
- Instruct students to glue their puzzle together onto the blank sheet of paper. Students may also color-code each section on the scale for easier reading. (A-E should be placed above the scale, and the degrees of marbling should be placed to the side of the scale.)
- Ask students to examine the puzzle once they are finished. Ask the following questions to lead a class discussion:
- What are the words on the puzzle referring to?
- Have you noticed any of these words somewhere? Try to lead students to recall seeing “prime” and “choice” in a restaurant or grocery store.
- Inform students that they are going to use this puzzle (grading scale) to learn more about the beef they eat and how cuts of meat receive grades.
Activity 1: Grading Beef
- Explain to students that after cattle are harvested, the carcass and whole cuts of meat are given a grade. Ask students the following questions:
- How is the scale used to grade beef?
- Why is beef graded?
- What factors affect the grade given to cuts of meat?
- Explain to students that in order to be harvested, cattle should be fed to a specific weight, called a finishing weight. When the meat is harvested, meat inspectors will grade the beef based on two things: the maturity (age) of the beef carcass and how much marbling it has. Columns A through E on the chart refer to the age of the beef, and the rows from "slightly abundant" through "practically devoid" refer to the amount of marbling. (A simplified version of this grading logic is sketched in code after this activity.)
- Instruct students to make note of the following beef ages on their puzzles:
- A: 9-30 months
- B: 30-42 months
- C: 42-72 months
- D: 72-96 months
- E: >96 months
- Ask students the following questions to lead a class discussion:
- What is marbling?
- What causes marbling?
- Project the Grading Beef slide (slide two) on the board.
- Ask students to compare the degree of marbling in each of the photos.
- Using the next six slides, instruct students or pairs of students to grade each photo based on its age and the degree of marbling. Allow students to use their puzzles to determine the quality grades of each photo.
- Once students have determined a grade for a photo, have them place the corresponding number on their puzzles where it belongs. (Each photo is numbered.)
- Discuss students’ results on each slide.
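For teachers who want to show how the grading chart behaves as a simple decision rule, the following minimal Python sketch maps a maturity class and marbling degree to a quality grade. The grade boundaries below are a rough approximation of the chart's broad pattern for discussion purposes, not the official USDA standard.

```python
# Rough approximation of the USDA quality-grade chart from Activity 1.
# Maturity runs from "A" (youngest) to "E" (oldest); marbling degrees
# are listed from most to least abundant.
MARBLING = ["abundant", "moderately abundant", "slightly abundant",
            "moderate", "modest", "small", "slight", "traces",
            "practically devoid"]

def quality_grade(maturity: str, marbling: str) -> str:
    """Return an approximate quality grade for a carcass."""
    rank = MARBLING.index(marbling)  # 0 = most marbling
    if maturity in ("A", "B"):       # young carcasses
        if rank <= 2:                # slightly abundant or better
            return "Prime"
        if rank <= 5:                # small through moderate
            return "Choice"
        if rank == 6:                # slight
            return "Select"
        return "Standard"            # traces or practically devoid
    # Older carcasses (C-E) are limited to the lower grades.
    return "Commercial" if rank <= 5 else "Utility/Canner"

print(quality_grade("A", "moderate"))  # -> Choice
print(quality_grade("D", "slight"))    # -> Utility/Canner
```

Students can check the sketch's output against the grades they assigned to the photos in this activity.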
Activity 2: Grain-finished vs. Grass-finished
- Ask students how the diet of cattle could affect the USDA grade.
- Allow students to brainstorm possible answers.
- Project the Beef Labels slide (slide nine) from the PowerPoint on the board.
- Explain to students that many of the labels they see on beef packages give consumers an idea of how that beef was fed and raised.
- Pass out a Beef Labels handout to each student.
- Ask students to examine each of the labels and write their own interpretations of each label.
- What kind of diet did this animal have?
- How was it raised?
- What does this label mean?
- Discuss each of the labels with students and their interpretations.
- Project the next four slides (10-13) on the board.
- Ask students to write the facts about each label below their interpretations.
- Were any of their interpretations correct?
- What surprised students about the label facts?
- What did students assume about the diet of cattle from beef labeled organic or natural?
- Is all grass-fed beef considered organic?
- Can grain-fed cattle be organic?
- Project the Cattle Lifecycle slide (slide 14) on the board. Point out to students that all cattle spend the majority of their lives eating grass, but they can be fed (finished) differently to reach a harvesting weight.
- Ask students to make a connection between the diet of cattle and USDA grades.
- After students have brainstormed possible answers, refer to the Background Agricultural Connections section to discuss the questions below:
- Does grain or grass produce more marbling?
- Why are Prime and Choice cuts of beef more expensive?
- Ask students to create a T-chart comparing grain-finished beef vs. grass-finished beef.
- Once students have made their own comparisons, use the Grass-Finished or Grain-finished Beef infographic provided by the Beef Council to discuss similarities and differences.
- Is there a nutritional difference between grass-finished beef and grain-finished beef?
- Do consumer needs/preferences affect what product farmers and ranchers produce?
- Why do some consumers prefer grass-finished beef vs grain-finished beef?
- What are the health benefits of eating beef?
Infographic created by the Beef Council
Activity 3: Label Regulations: Who's in Charge?
- Ask students what "regulate" means.
- Allow students to discuss the words "regulate" and "regulations". Where have they heard these words before?
- Refer to the labels in Activity 2 (slide 15).
- Ask students to think about these beef labels and other labels they see on food products and ask them, "Who regulates the use of labels on our food? Is it farmers? Ranchers? Nutritionists?" Allow students to brainstorm and discuss answers.
- Explain to students that the Food and Drug Administration (FDA) and United States Department of Agriculture (USDA) each regulate and oversee specific labels on food products.
- (Slide 16) Allow students to read and analyze the bulleted lists in each column. Which government program regulates the labels on the left? Which government program regulates the labels on the right? Reveal the answers on the PowerPoint.
- Explain to students that the FDA regulates labels on all prepared food items such as breads, cereals, canned and frozen food, snacks, etc. The USDA uses a shield-shaped label and regulates labeling on all meat products including beef, pork, and poultry. This includes the beef quality grades students explored in Activity 1. Refer to the USDA Labeling Terms to read more about specific meat label terminology.
- Inform students that the USDA also oversees regulations for all certified organic products*.
- Explain to students that some prepared food items might have an FDA label as well as a USDA Certified Organic label. If that is the case, the product must meet both FDA's and USDA's regulations.
- Promote critical thinking among your students by asking the following questions:
- Why do the FDA and USDA regulate food labels?
- How does food labeling affect farmers and ranchers?
- Does food labeling affect consumer choices?
- What does it mean if a product is labeled "organic" but does not carry the USDA organic label?
- Can labels be misleading or confusing?
- Encourage students to explore food labels on products at home or the grocery store.
*Some operations are exempt from certification, including organic farmers whose gross income is $5,000 or less. People who sell or label a product "organic" when they know it does not meet USDA standards can be fined up to $11,000 for each violation.8
Students should recognize that some labels are regulated by the USDA or FDA and have strict guidelines to qualify for their use. The USDA Organic seal and beef quality grades are examples. Other labels (such as "Grass-fed") are either not regulated or are regulated by private organizations who set their own criteria. Consumers should use critical thinking when determining food choices based on labels.
Concept Elaboration and Evaluation
After conducting these activities review and summarize the following key points:
- Food labels impact consumer choices.
- Following the laws of supply and demand, as consumers choose specific products and market demand increases, farmers and ranchers respond by increasing production according to demand.
- Quality grades of beef are outlined by the USDA.
- Grass-finished or Grain-finished infographic provided by the Beef Council
- Decoding Labels infographic provided by the Beef Council
Suggested Companion Resources
Utah Agriculture in the Classroom
What Does Phosphorus Do for the Heart?
Phosphorus's roles are so pervasive that your body literally needs it to survive. Second only to calcium in abundance, it is a component of every cell of your body, primarily in the phosphate form, and assists with multiple biological functions. Some of those functions involve your heart. As with other essential minerals, however, the key to health is balance. Indeed, the University of Maryland Medical Center reports a higher heart disease risk with elevated blood phosphorus.
The University of Maryland Medical Center estimates that approximately 85 percent of the body's phosphorus occurs in bones and teeth, alongside calcium. Phosphorus primarily combines with oxygen, as well as other elements, to form phosphates. Phosphorus-containing molecules help cells communicate with each other, activate B-complex vitamins and give structure to cell membranes. Many enzymes and hormones depend on phosphates for their activation, and your body needs them to maintain a normal acidity. More importantly, all energy production and energy storage activities require phosphorus.
Phosphorus and Heart Function
Phosphorus's mere involvement in all energy production makes it an indispensable partner to your heart since, like all organs, your heart needs energy in order to function. Phosphorus also helps regulate blood calcium, and your heart depends on calcium for proper function. Additionally, blood acidity affects your heart rate, and phosphorus acts as a buffer to help maintain normal acid-base balance. Finally, a phosphorus-containing compound called 2,3-DPG helps red blood cells deliver oxygen to your body's tissues, including heart tissue.
Phosphorus and Heart Disease
In a landmark 2007 analysis of data from the Framingham Heart Study, Dr. Ravi Dhingra and colleagues showed that high levels of blood phosphorus increase heart disease risk. One reason this can happen is that elevated phosphorus levels reduce your body's ability to make vitamin D, which in turn leads to calcification in your heart's blood vessels. A second possibility is that high blood phosphorus directly causes mineral buildup in blood vessels, leading to blockage and heart problems. High phosphorus levels might also promote heart disease by triggering the release of parathyroid hormone, which puts the body in a state of inflammation.
RDA and Food Sources
According to Nutrition 411, blood levels of phosphorus normally range from 2.5 to 4.5 milligrams per deciliter. To remain within that range, your body can excrete excess phosphorus in the urine or use it for bone formation. It can also regulate blood phosphorus levels by controlling its absorption from food and moving phosphorus-containing compounds in and out of cells. The current recommended daily allowance, or RDA, for phosphorus is 700 milligrams per day for adults. Vitamin and mineral supplements rarely provide more than 15 percent of the RDA, but most foods contain phosphorus. Some of the richest phosphorus sources include dairy products, meat and fish. You may also obtain it from nuts, legumes and cereals.
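To make the RDA concrete, here is a minimal sketch in Python that totals a day's phosphorus intake against the 700-milligram adult RDA. The per-serving milligram values are illustrative placeholders, not authoritative nutrient data.

```python
# Checking a hypothetical day's phosphorus intake against the adult RDA.
RDA_MG = 700

intake_mg = {
    "yogurt, 1 cup": 230,     # placeholder values for illustration
    "salmon, 3 oz": 210,
    "lentils, 1/2 cup": 180,
}

total = sum(intake_mg.values())
print(f"total: {total} mg ({100 * total / RDA_MG:.0f}% of RDA)")
# -> total: 620 mg (89% of RDA)
```

Since supplements rarely provide more than 15 percent of the RDA (about 105 milligrams), food remains the practical way to meet the daily requirement.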
Low blood phosphorus, or hypophosphatemia, can also be dangerous. It sometimes manifests as chest pain and heart rhythm dysfunction. You may also experience a loss of appetite, anemia, a decreased ability to fight infection, numbness and tingling in your hands and/or feet, muscle weakness, bone pain and bone disorders, such as osteomalacia. If the deficiency is severe enough, death can result. Anorexics, alcoholics, diabetics and those with certain digestive disorders are especially at risk for phosphorus deficiency, according to the University of Maryland Medical Center.
Suzanne Fantar has been writing online since 2009 as an outlet for her passion for fitness, nutrition and health. She enjoys researching and writing about health, but also takes interest in family issues, poetry, music, Christ, nature and learning. She holds a bachelor's degree in biological sciences from Goucher College and an MBA in healthcare management from the University of Baltimore.
The term elites refers to a small number of actors who are situated atop key social structures and exercise significant influence over social and political change. Much of the power of elites stems from their economic resources, their privileged access to institutions of power, and their ability to exercise moral or intellectual persuasion. At the same time, however, elites embody the values and represent the interests of particular groups in society. This can limit their autonomy, complicate efforts to cooperate with each other, and narrow the support they elicit from the public. It is this contradictory aspect of elites—simultaneously empowered and constrained by their positions as leaders in society—that defines their role in the political system.
While traditional notions of elites have typically focused on members of an aristocracy (or oligarchy), whose positions were based on claims to hereditary title and wealth, elites today comprise key figures across various sectors of society. In and around government, they include political leaders within the executive and legislative branches of government, those in command of the bureaucracy and military, and leading representatives of organized interests in society (such as labor unions or corporate lobbying groups). Within the economy, elites reside at the pinnacle of finance, banking, and production. In the cultural sphere, elites include major patrons of the arts, cultural icons (including pop culture), writers, academics, religious leaders, and prominent figures within the mass media. Most recently, transnational elites have arisen within emergent supranational institutions, such as corporate actors in the World Economic Forum, technocrats working in the United Nations system, and the heads of international nongovernmental organizations.
Although the idea of elites can be traced back to the writings of Aristotle and Plato, the term elites was first used in modern social science by the Italian theorists Vilfredo Pareto (1902–1903) and Gaetano Mosca (1939) in the early twentieth century. In contrast to class theories, in which the sources of societal power inhered in institutions of property and class relations in society, early elite theories saw power concentrated among a minority of the population who were able to rule over the rest of the population with little accountability to them. As a result, elites were often conceptualized as “ruling elites,” by virtue of their authority over the masses. As critics noted, however, the origins of elite power were underspecified. It was not clear, for example, if elites were inevitable products of modern organization, or if their position was contingent on their ability to control vital resources in society and mobilize the public.
In 1915, in his book Political Parties, the German sociologist Robert Michels introduced the “iron law of oligarchy.” Michels contended that the existence of elites sprang from an inherent tendency of all complex organizations to delegate authority to a ruling clique of leaders (who often take on interests of their own). Accordingly, even the most radical organizations will develop a self-interested elite. In a prominent 1956 study of the United States, C. Wright Mills proposed that elite power was defined by its institutional origins. Mills argued that the place of a “power elite” was maintained by their positions in government, the military, and major corporations, which enabled them to command the organized hierarchies of modern society. While these and other works of the time, including Joseph Schumpeter’s 1942 “competitive elitist” account of democracy, demonstrated the importance of the organizational bases of elite power, the origins of elites are more socially contingent on factors such as patronage and factionalism, leadership, and social structure than on institutional structure. Nonetheless, this classic work has heavily influenced elite studies, particularly scholars studying intra-elite political struggles within East bloc countries (through work termed “Kremlinology”).
Distinguishing themselves from these classical theorists, scholars since the 1960s have begun to differentiate elites and recognize their diverse roles. Major works, such as Suzanne Keller’s Beyond the Ruling Class (1963), have traced elites’ sociological origins, examined their varied social functions, and engaged in empirical studies of a range of actors at the apex of almost any area of human activity. In contrast to classical approaches, these authors have highlighted ways in which elites conveyed societal claims upon the state. While this opened new avenues of research, their tendency to rely on the social profile of elites (such as age, education and occupation, and region or country of birth) at times produced inaccurate predictions of elite behavior. Though influential in shaping latent political attitudes, empirical research has shown that background characteristics are mediated by personal beliefs and values. As scholars such as Robert D. Putnam (1976) have concluded, the attitudes and political styles of elites do affect political outcomes, but behavioral patterns must be placed in a context of elite linkages to different social strata.
There has also been considerable cross-national variation in the openness of elites. In many societies, the elite manipulation of political patronage and the organization of political parties have perpetuated elites’ positions. In some countries, however, government programs have been designed to desegregate elites (though the success of these programs has been limited). As Richard L. Zweigenhaft and G. William Domhoff demonstrated in Diversity in the Power Elite (1998), affirmative action initiatives within the United States have led to some openness along racial, gender, and class lines. However, they also showed that minorities and women absorbed into the elite often minimize their differences and, paradoxically, strengthen the existing system. Thus, government reforms (in the United States and elsewhere) seeking to enhance the diversity of elites have not produced the expected or hoped for results.
As suggested in foundational studies of elites, the importance of elites to the political system is heavily affected by struggles within ruling cliques and by elites’ relationships to social structures. Although elites influence the political system in numerous ways, the focus here will be on their effects on political regimes and democracy, the politics of state development, and incidences of violent conflict.
The nature of competition and compromise among ruling elites carries major implications for democracy. Although pluralist theory suggests that the dispersion of power in democratic systems across interest groups and institutions leaves elites in charge of different sectors of democratic politics, elites have a coordinated effect in mobilizing public opinion and ushering in political change. In The Nature and Origins of Mass Opinion (1992), John Zaller describes how, even in established democracies, elites attempt to construct a political world through messages delivered via media outlets to the mass public. In nondemocratic regimes, concentrations of power within ruling circles mean that stability and prospects for political change hinge on the skill and engineering of elites, who can negotiate compromises between competing factions. Indeed, it has long been held that elite failures to rise above societal divisions can contribute to the rise of extremist politics, as typified by the rise of Nazism in interwar Germany. As Dankwart A. Rustow (1970) and more recently John Higley and Michael Burton (1989) have argued, democratic elites must not only establish a language of compromise across factions, but also accept the boundaries of political competition and become habituated to the rules of the game. Recent studies, however, have shown that extremist popular mobilization can coexist with elite negotiations, and that the success of democratic transition depends not on moderation per se, but on elite calculations and projections of whether the forces of political change—moderate or extremist—will threaten their interests after they cede power (Bermeo 1997).
In addition to power struggles within ruling circles, the struggle between rulers and local elites has been crucial in centuries-long efforts to complement states’ juridical sovereignty with empirical statehood. As much of western European history attests, nobles, magnates, and landlords (among others), supported by property holdings and large armies, posed substantial challenges to the centralization of state power. Initially, future sovereign rulers were little more than members of the elite, as illustrated by Perry Anderson’s reprint of the famous oath of allegiance among Spanish nobility: “We who are as good as you swear to you who are no better than we to accept you as our king and sovereign lord, provided you observe all our liberties and laws; but if not, not.” (Anderson 1974, p. 65) Such diffuse systems of authority under local societal elites are also found in many “weak states” in contemporary Asia, Africa, and post-Communist Eurasia. Both historically and today, therefore, the emergence of effective state infrastructures depends on whether mixtures of coercion and patronage dispensed by rulers convince entrenched elites to cede political authority.
A final realm of politics in which elites play a critical role is violent conflict within society. In particular, intra-elite politics and elite-mass linkages reside at the center of civil wars, and elite power-sharing models have been applied across a diversity of contexts. Among the most well-known is Arend Lijphart’s “consociational” model (1977), which claims that a coalition of elites, drawn from the conflicting sides, can mitigate violence through a system of elite consensus built on mutual veto power, proportional allocation of offices, and granting each group partial autonomy. The success of such negotiated pacts has been variable, deterring violence in the Netherlands and in post-apartheid South Africa but failing to prevent an explosion of intra-state conflicts in the immediate post–cold war period. Ultimately, the prevention or cessation of violence is causally related to how elites interact with one another and how effectively they channel societal claims through political institutions.
SEE ALSO Aristocracy; Campaigning; Elections; Elitism; Power; Power Elite; Public Opinion
Anderson, Perry. 1974. Lineages of the Absolutist State. London: Verso.
Aron, Raymond. 1950. Social Structure and the Ruling Class. British Journal of Sociology 1 (1): 1–16, 126–143.
Bermeo, Nancy. 1997. Myths of Moderation: Confrontation and Conflict during Democratic Transitions. Comparative Politics 29 (3): 305–322.
Bottomore, Thomas B. 1964. Elites and Society. London: C.A. Watts.
Higley, John, and Michael G. Burton. 1989. The Elite Variable in Democratic Transitions and Breakdowns. American Sociological Review 54 (1): 17–32.
Keller, Suzanne. 1963. Beyond the Ruling Class: Strategic Elites in Modern Society. New York: Random House.
Michels, Robert. 1915. Political Parties: A Sociological Study of the Oligarchical Tendencies of Modern Democracies. Trans. Eden and Cedar Paul. New Brunswick, NJ: Transaction Publishers, 1999.
Mosca, Gaetano. 1939. The Ruling Class. Trans. Hannah D. Kahn. New York: McGraw-Hill. Originally published as Elementi di scienza politica (1896).
Pareto, Vilfredo. 1902–1903. Les systèmes socialistes. 2 vols. Paris: Giard.
Putnam, Robert D. 1976. The Comparative Study of Political Elites. Englewood Cliffs, NJ: Prentice-Hall.
Rustow, Dankwart A. 1970. Transitions to Democracy: Toward a Dynamic Model. Comparative Politics 2 (3): 337–363.
Schumpeter, Joseph. 1942. Capitalism, Socialism, and Democracy. London: Harper & Brothers.
Zaller, John. 1992. The Nature and Origins of Mass Opinion. Cambridge, U.K.: Cambridge University Press.
Lawrence P. Markowitz
The concept of elites is used to describe certain fundamental features of organized social life. All societies—simple and complex, agricultural and industrial—need authorities within and spokesmen and agents without who are also symbols of the common life and embodiments of the values that maintain it. Inequalities in performance and reward support this arrangement, and the inequality in the distribution of deference acknowledges the differences in authority, achievement, and reward. Elites are those minorities which are set apart from the rest of society by their pre-eminence in one or more of these various distributions. We shall concentrate here on the elites of industrial society.
In modern societies of the West, there is no single comprehensive elite but rather a complex system of specialized elites linked to the social order and to each other in a variety of ways. Indeed, so numerous and varied are they that they seldom possess enough common features and affinities to avoid marked differences and tensions. Leading artists, business magnates, politicians, screen stars, and scientists are all influential, but in separate spheres and with quite different responsibilities, sources of power, and patterns of selection and reward. This plurality of elites reflects and promotes the pluralism characteristic of modern societies in general.
For virtually every activity and every corresponding sphere of social life, there is an elite: there are elites of soldiers and of artists, as well as of bankers and of gamblers. This is the sense in which Pareto (1902–1903) used the term. There is, however, an important factor that differentiates these various elites, apart from their different skills and talents: some of them have more social weight than others because their activities have greater social significance. It is these elites—variously referred to as the ruling elite, the top influentials, or the power elite—which arouse particular interest, because they are the prime movers and models for the entire society. We shall use the term strategic elites to refer to those elites which claim or are assigned responsibilities for and influence over their society as a whole, in contrast with segmental elites, which have major responsibilities in subdomains of the society.
Strategic elites are those which have the largest, most comprehensive scope and impact. The boundaries that separate strategic and segmental elites are not sharply defined because of the gradations of authority and the vagueness of the perceptions that assign positions to individuals. The more highly organized elites are, the easier it is to estimate their boundaries and membership. Thus, the more readily identifiable elites in Western societies are those of business, politics, diplomacy, and the higher civil and armed services. Elites in the arts, in religion, and in moral and intellectual life are more vaguely delimited and hence also more controversial.
The differentiation of elites. Even the earliest-known human societies had leading minorities of elders, priests, or warrior kings, who performed elite social functions. A chief in a primitive society, for example, enacted one complex social role in which were fused several major social functions, expressed through the following activities: organization of productive work; propitiation of, and communication with, supernatural powers; judgment and punishment of lawbreakers; coordination of communal activities; defense of the community from enemy attack; discovery of new resources and of new solutions to the problems of collective survival; and encouragement or inspiration of artistic expression. As societies expand in size and in the diversity of their activities, such activities also expand, and more elaborate, specialized leadership roles emerge. Following are some of the major forms of societal leadership.
(1) Ruling caste. One stratum performs the most important social tasks, obtains its personnel through biological reproduction, and is set apart by religion, kinship, language, residence, economic standing, occupational activities, and prestige. Religious ritual is the main force that supports the position of this ruling stratum [see CASTE].
(2) Aristocracy. A single stratum monopolizes the exercise of the key social functions. The stratum consists of families bound by blood, wealth, and a special style of life and supported by income from landed property.
(3) Ruling class. A single social stratum is associated with various key social functions, and its members are recruited into its various segments on the basis of wealth and property rather than of blood or religion. Historically, ruling classes have held economic rather than political power, but their influence tends to extend to all important segments and activities of society. Although various differentiated and specialized sectors may be distinguished, they are bound together by a common culture and by interaction across segmental boundaries.
(4) Strategic elites. No single social stratum exercises all key social functions; instead, these functions and the elites associated with them are specialized and differentiated. The predominant justification for holding elite status is not blood or wealth as such but, rather, merit and particular skills. Accordingly, these elites are recruited in various ways adapted to their differentiated tasks and are marked by diversity as well as by impermanence.
In general it appears that where the society as a whole is relatively undifferentiated, elites are few in number and comprehensive in their powers; where social differentiation is extensive, elites are many and specialized. The principal social forces underlying the change from societal leadership based on aristocracy or ruling class to that based on strategic elites are population growth, occupational differentiation, moral heterogeneity, and increased bureaucratization. In a large, industrialized mass society, marked by innumerable ethnic, regional, and occupational differences and stratified as to work, wealth, prestige, style of life, and power, leadership cannot be entrusted to a single ruler, be he chief, warrior, or priest, or to a single stratum marked by hereditary exclusiveness and traditionalism. Instead, the elites of this society will tend to be varied, specialized, and differentiated as to skill, style, background, and rewards. In this way the characteristic attributes of the larger society are mirrored in the strategic elites through whom that society tries to realize its main goals and projects. The division of a society into many groups and strata is therefore paralleled by its reunification around a symbolic center, or core, that signifies the common and enduring characteristics of the differentiated whole. The shape of this center is determined by the complexity and variety of the whole. In this way a society, consisting of a multitude of individuals and groups, can act in concert despite its moral, occupational, and technological diversity and can maintain the sense of unity necessary for collective achievements.
The functions of strategic elites. In every differentiated society, there are patterns of beliefs and values, shared means of communication, major social institutions, and leading individuals or groups concerned with the maintenance and development of the society and its culture. These leading elements, by focusing attention and coordinating action, help keep the society in working order, so that it is able to manage recurrent collective crises.
The best efforts at classifying elites are still those of Saint-Simon (1807) and Mannheim (1935), whose approaches, although separated by a century, have much in common. Saint-Simon divided elites into scientists, economic organizers, and cultural-religious leaders. This classification parallels Mannheim’s distinction between the organizing and directing elites, which deal with concrete goals and programs, and the more diffuse and informally organized elites, which deal with spiritual and moral problems.
Elites may also be classified according to the four functional problems which every society must resolve: goal attainment, adaptation, integration, and pattern maintenance and tension management. Goal attainment refers to the setting and realization of collective goals; adaptation refers to the use and development of effective means of achieving these goals; integration involves the maintenance of appropriate moral consensus and social cohesion within the system; and pattern maintenance and tension management involve the morale of the system’s units—individuals, groups, and organizations.
Accordingly, four types of strategic elites, which may include a far larger number of elites, may be identified: (1) the current political elite (elites of goal attainment); (2) the economic, military, diplomatic, and scientific elites (elites of adaptation); (3) elites exercising moral authority—priests, philosophers, educators, and first families (elites of integration); and (4) elites that keep the society knit together emotionally and psychologically, consisting of such celebrities as outstanding artists, writers, theater and film stars, and top figures in sports and recreation (pattern-maintenance elites).
Thus, the general functions of elites appear to be similar everywhere: to symbolize the moral unity of a collectivity by emphasizing common purposes and interests; to coordinate and harmonize diversified activities, combat factionalism, and resolve group conflicts; and to protect the collectivity from external danger.
Societies differ, however, in the way they incorporate these functions into living institutions. In some societies, usually at simpler stages of development, one agent assumes responsibility for all four system functions; in others, several specialized agents emerge. In advanced industrial societies the tendency is clearly toward several elites whose functional specialization is accompanied by a growing moral and organizational autonomy among them. At the same time, however, the overriding goals of these elites are, as they have always been, the preservation of the ideals and practices of the societies at whose apex they stand.
Recruitment of strategic elites. Elite replacement, which occurs in all societies, involves both the attraction of suitable candidates and their actual selection. What is considered suitable depends on the structure of the elite groups and on whether these elites assume comprehensive or specialized functional responsibilities. Recruitment mechanisms, however varied in practice, reflect only two fundamental principles: recruitment on the basis of biological (and, implicitly, social) inheritance and recruitment on the basis of personal talents and achievements. Although these two systems are not mutually exclusive, one or the other tends to prevail, depending on the system of social stratification, on the values placed on ascription and achievement, and on the magnitude of demand for elite candidates in relation to the supply. Broadly stated, these principles reflect the general tendencies within a social system toward expansion or toward consolidation. Under conditions of expansion, recruitment on the basis of personal achievement is likely to be the rule; under consolidation, recruitment based on inheritance of status. Each principle, moreover, has profound social repercussions on social mobility, on the stimulation of individual ambitions and talents, and on levels of discontent among different social strata. Each, furthermore, affects not only the composition of the elites but also their spiritual and moral outlook.
In modern industrial societies recruitment and selection patterns reflect the changes toward differentiation and autonomy among the elites. According to available evidence from a number of such societies, recruitment based on social inheritance is giving way to recruitment based on individual achievement. This is true for England (Cole 1955; Guttsman 1963; Thomas 1959), Germany (Deutsch & Edinger 1959; Stammer 1951; Dreitzel 1962), France (Aron 1950), the United States (Warner & Abegglen 1955; Mills 1956; Matthews 1960; Keller 1963), and the Soviet Union (Fainsod 1953; Crankshaw 1959), among others. Nonetheless, taking the elite groups as a whole, we note the simultaneous operation of several recruitment and selection principles. Some elites stress ancestry; others, educational attainments; still others, long experience and training. Some elites are elected by the public, others are appointed by their predecessors, and still others are born to their positions. The members of some elites have relatively short tenure, while that of others is lifelong. This is a dramatic contrast to other types of societies with relatively small leadership groups that have diffuse and comprehensive functional responsibilities and comprise individuals trained for their status from birth on.
Of course, looking at modern developments at a single point in time, we note that the hold of the past, with its emphasis on property or birth, is still very strong among some elites. Conspicuous achievements are still often facilitated, if not determined, by high social and economic position, since wealth and high social standing open many doors to aspiring candidates and instill in them great expectations for worldly success. From a long-range perspective, however, it is clear that the link between high social class and strategic elite status has, in many modern societies, become indirect and informal. Ascribed attributes, such as birth, sex, and race, although they play a greater role in some elites than in others, have decreased in importance in comparison with achieved attributes. This is in line with the general modern trend toward technological and scientific specialization, in which individual skill and knowledge count more than does a gentlemanly upbringing in the traditions and standards of illustrious forebears.
Rewards of strategic elites. The process of selection or allocation is facilitated by the system of rewards offered to individuals assuming leadership positions in society. Some rewards are tangible material benefits, such as land, money, cattle, or slaves, and others are intangible, such as social honor and influence. The specific rewards used to attract potential recruits to elite positions depend on the social definition of scarce and desirable values and the distribution of these values.
Rewards play a twofold role in the recruitment of elites: they motivate individuals to assume the responsibilities of elite positions, and they maintain the high value placed on these positions. They thus serve as inducements to individuals, as well as indicators of rank.
Rewards, too, have become specialized in modern industrial societies. Some elites enjoy large earnings; others, popularity or fame; and still others, authority and power. Not all elites are equally wealthy, not all have equal prestige; only some have much more power than others, and none have influence in all spheres. The assumption of elite positions thus also involves the acceptance of specific rewards associated with them. Responsibilities and rewards form parts of a whole and may be discussed jointly. And each is linked to recruitment, for rewards are the spur to the expenditure of effort that the duties of strategic positions demand.
The process of recruiting elites and the manner of rewarding them must not be confused with their purposes and status. For although recruitment and rewards affect the composition and performance of elites, they do not alter their functions. As Mosca (1896) clearly demonstrated, democratically and hereditarily recruited elites differ in many important ways, but they nonetheless function as elites.
The tendency toward a pluralization of elites is likely to conflict with the older tendency toward the monolithic exercise of power and leadership. This is a problem in totalitarian as well as in liberal societies. In totalitarian societies, the problem is how to permit the desired flexibility and variety without corroding social stability. Conversely, in liberal pluralist systems, the problem is how to achieve the necessary degree of social cohesion and moral consensus among partly autonomous, highly specialized, yet functionally interdependent elites. The cohesion and consensus are necessary if the society is to pursue common goals and is to be unified in more than name only.
These recent tendencies and trends are neither absolute nor inevitable. They are clearly manifested today in a wide variety of contexts and reflect the tempo of social change in a technologically expanding world. Should this tempo slow down markedly or cease altogether, the impulses toward rigidity and ascription may well come to the fore once again, albeit within a social structure shaped by centuries of industrialism. Some security and stability will be gained, but at the price of adventure and novelty—a familiar exchange in the annals of history and one bound to be reflected in the character and stamp of the strategic elites.
Aron, Raymond 1950 Social Structure and the Ruling Class. British Journal of Sociology 1:1–16, 126–143.
Bottomore, Thomas B. 1964 Elites and Society. London: Watts.
Cole, G. D. H. 1955 Studies in Class Structure. London: Routledge. → See especially pages 101–146 on “Elites in British Society.”
Crankshaw, Edward 1959 Khrushchev’s Russia. Harmondsworth (England): Penguin.
Deutsch, Karl W.; and Edinger, Louis J. 1959 Germany Rejoins the Powers: Mass Opinion, Interest Groups, and Elites in Contemporary German Foreign Policy. Stanford (Calif.) Univ. Press.
Dreitzel, Hans P. 1962 Elitebegriff und Sozialstruktur: Eine soziologische Begriffsanalyse. Stuttgart (Germany): Enke.
Fainsod, Merle (1953) 1963 How Russia Is Ruled. Rev. ed. Russian Research Center Studies No. 11. Cambridge, Mass.: Harvard Univ. Press.
Guttsman, Wilhelm L. 1963 The British Political Elite. London: MacGibbon & Kee.
Hunter, Floyd 1959 Top Leadership, U.S.A. Chapel Hill: Univ. of North Carolina Press.
Jaeggi, Urs 1960 Die gesellschaftliche Elite: Eine Studie zum Problem der sozialen Macht. Bern (Switzerland) and Stuttgart (Germany): Haupt.
Keller, Suzanne 1963 Beyond the Ruling Class: Strategic Elites in Modern Society. New York: Random House.
Lasswell, Harold D. 1936 Politics: Who Gets What, When, How? New York: McGraw-Hill.
Mannheim, Karl (1935) 1940 Man and Society in an Age of Reconstruction: Studies in Modern Social Structure. New York: Harcourt. → First published as Mensch und Gesellschaft im Zeitalter des Umbaus.
Matthews, Donald R. 1960 U.S. Senators and Their World. Chapel Hill: Univ. of North Carolina Press.
Mills, C. Wright 1956 The Power Elite. New York: Oxford Univ. Press.
Mosca, Gaetano (1896) 1939 The Ruling Class. New York: McGraw-Hill. → First published as Elementi di scienza politica.
Pareto, Vilfredo 1902–1903 Les systèmes socialistes. 2 vols. Paris: Giard.
Parsons, Talcott; Bales, R. F.; and Shils, E. A. 1953 Working Papers in the Theory of Action. Glencoe, Ill.: Free Press.
Saint-Simon, Claude Henri De (1807) 1859 Oeuvres choisies. Volume 1. Brussels: Meenen & Cie.
Sereno, Renzo 1962 The Rulers. New York: Praeger; Leiden (Netherlands): Brill.
Stammer, Otto 1951 Das Elitenproblem in der Demokratie. Schmollers Jahrbuch für Gesetzgebung, Verwaltung und Volkswirtschaft 71, no. 5:1–28.
Thomas, Hugh (editor) 1959 The Establishment: A Symposium. London: Blond.
Warner, W. Lloyd; and Abegglen, James C. 1955 Occupational Mobility in American Business and Industry: 1928–1952. Minneapolis: Univ. of Minnesota Press.
Small but powerful minorities with a disproportionate influence in human affairs.
Both tribal society and Islam have a strong egalitarian component, but early Islamic writers assumed a distinction between the few (khassa) and the many (amma) not unlike that in modern Western elite theory between the elite and the masses. Like the term elite, khassa had vague and various meanings. It was applied on occasion to the following: the early (661–750) Arab aristocracy under the Umayyads; the whole ruling class; the inner entourage of a ruler; educated people generally; and philosophers who pursued a rational (and sometimes a mystical) road to truth.
In the 1960s and 1970s, elite analysis—pioneered by V. Pareto and G. Mosca early in the twentieth century, partly as an alternative to Marxist class analysis—attracted many Western scholars of the Middle East. National political elites received much of the attention, although anthropologists continued their special interest in local elites. Economic, social, and cultural elites attracted notice particularly when they overlapped with political elites. Elite studies examine the background, recruitment, socialization, values, and cohesiveness of elites. They probe elite-mass linkages, circulation into and out of the elite, the effects of elite leadership on society, and the evolution of all these factors over time.
The Ottoman Empire, which ruled loosely over most of the Middle East in the late eighteenth century, conceived of society as divided into a ruling class of askaris (literally, "soldiers" but also including "men of the pen"—ulama [Islamic scholars] and scribal bureaucrats) and a ruled class of reʿaya (subjects). "Ottomans" were the core elite among the askaris, presumed to be Muslim, available for high state service, and familiar with the manners and language (Ottoman Turkish, which also entailed a knowledge of Arabic and Persian) of court. The recruitment of slaves into the elite was one mechanism that made for extreme upward social mobility.
Ever shifting social realities rarely match prescriptive theories. Although theoretically excluded from the askari elite, merchants, Coptic scribes, Jewish financiers, and Greek Orthodox patriarchs wielded considerable power in some times and places. Women attained such great informal power during one seventeenth-century period that the Ottomans called it "the sultanate of women." When central control weakened, as in the Fertile Crescent provinces in the eighteenth and early nineteenth centuries, a "politics of notables" mediated between the center and the provincial masses. Notable status often ran in families; the notables could include ulama, tribal shaykhs, merchants, large landowners, and local military forces.
Since 1800, the Middle East and its elites have greatly changed under the impact of the Industrial Revolution, European conquest and rule, the breakup of the Ottoman Empire, nationalism and independence struggles, the Arab–Israel conflict, the petroleum and oil bonanza, secularist and Islamic ideologies, and the frustrations of continuing military, cultural, and economic dependency. Yet there has been continuity too.
In the countries where colonialism prevailed, foreign elites forced the partially displaced indigenous elites to make the painful choice of collaboration or resistance. Collaboration was particularly tempting to some religious and ethnic minorities. In the Fertile Crescent, tribal shaykhs and large landowners functioned as notables mediating between the colonial power and the people, as they had once done with the Ottomans. Whether one collaborated or not, knowledge of the West and of a Western language became a career asset for officials and the emerging professional class. In the milieu of mandates and of party and parliamentary politics between the two world wars, lawyers flourished in both government and opposition. After World War II, as most Middle Eastern countries regained control of their affairs, landed elites and reactionary politicians in many cases still frustrated serious social reform. Pressure built, and army officers of lower-middle-class origin overthrew one regime after another. Was it a return to the praetorian politics of the Ottoman Janissaries and the Mamluks—the armed forces that early nineteenth-century rulers had destroyed to clear the way for Western-style armies? The new armies remained on the political sidelines for most of the nineteenth century, reemerging briefly in Egypt during Ahmad Urabi's vain attempt to resist colonial control.
After 1900, armies reentered politics first in countries that had escaped colonial rule—Turkey with the Young Turks and Mustafa Kemal Atatürk and Iran with Reza Shah Pahlavi. Military coups in the Arab countries began later, following independence from colonial rule: Iraq in the 1930s and again in 1958, Syria in 1949, and Egypt in 1952. The regime of Gamal Abdel Nasser—with its Soviet alliance, single-party authoritarianism, and Arab socialism—became a prototype for many others. Hopes that the new military elites and their civilian technocratic allies—economists, engineers, scientists—represented the progressive vanguard of a new middle class soon proved to be overblown.
Patrilineal monarchies in Morocco, Jordan, Saudi Arabia, and elsewhere in the Arabian Peninsula weathered the revolutionary Arab socialist challenge. Oil wealth helped rulers purchase political acquiescence, but it did not save the monarchs of Iraq, Libya, or Iran. In both the monarchies and their revolutionary challengers, patterns of authoritarian rule persisted. Family connections, old-boy networks, and patron-client relations still figure prominently in elite recruitment and perpetuation despite the widespread longing for a fair and open system.
Unlike the military, the ulama have lost much of the influence they had in 1800. During the nineteenth century, reforming rulers appropriated revenues from religious endowments, tried to turn the ulama into bureaucrats, and bypassed them with Western-style courts and state-school systems. By their willingness to provide legitimization for almost any regime in power, the ulama have jeopardized their moral authority. Engineers and others associated with the state schools, not the ulama, have been in the forefront of Islamic and Islamist protest since the late 1960s. Yet in contrast to the turbulent 1950s and 1960s, most Middle Eastern regimes proved remarkably durable in the 1970s and 1980s. In Iran, however, the distinctive tradition of Shiʿism enabled a counterelite of ulama to lead a revolution against the shah and to consolidate its power as the core of a new ruling elite. Attempts to export the revolution to Sunni-dominated countries have met little success.
See also Atatürk, Mustafa Kemal; Colonialism in the Middle East; Fertile Crescent; Janissaries; Mamluks; Nasser, Gamal Abdel; Nationalism; Pahlavi, Reza; Shiʿism; Ulama; Urabi, Ahmad; Young Turks.
Binder, Leonard. In a Moment of Enthusiasm: Political Power and the Second Stratum in Egypt. Chicago: University of Chicago Press, 1978.
Hourani, Albert. "Ottoman Reform and the Politics of Notables." In The Emergence of the Modern Middle East. Berkeley: University of California Press, 1981.
Hunter, F. Robert. Egypt under the Khedives, 1805–1879. Pittsburgh, PA: University of Pittsburgh Press, 1984.
Zartman, I. William, et al. Political Elites in Arab North Africa. New York: Longman, 1982.
Donald Malcolm Reid
In computing, object code or object module is the product of a compiler. In a general sense object code is a sequence of statements or instructions in a computer language, usually a machine code language (i.e., binary) or an intermediate language such as register transfer language (RTL). The term indicates that the code is the goal or result of the compiling process, with some early sources referring to source code as a "subject program".
Object code is a portion of machine code that has not yet been linked into a complete program. It is the machine code for one particular library or module that will make up the completed product. It may also contain placeholders or offsets, not found in the machine code of a completed program, that the linker will use to connect everything together. Whereas machine code is binary code that can be executed directly by the CPU, object code has the jumps partially parameterized so that a linker can fill them in.
An assembler is used to convert assembly code into machine code (object code). A linker links several object (and library) files to generate an executable. Assemblers can also assemble directly to machine code executable files without the object intermediary step.
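To make the pipeline concrete, here is a minimal sketch in C (the file names and the generic Unix `cc` driver are illustrative assumptions, not tied to any particular toolchain). Compiling with `-c` stops after the object-code stage; the final command invokes the linker to resolve the cross-file reference:

```c
/* greet.c -- one translation unit. "cc -c greet.c" produces greet.o,
 * an object module: machine code, but not yet a runnable program. */
#include <stdio.h>

void greet(void) {
    printf("hello from an object module\n");
}

/* main.c -- a second translation unit that references greet(). Its
 * object file, main.o, records "greet" as an unresolved symbol, one
 * of the placeholders the linker later fills in. */
void greet(void);   /* declaration only; the definition is elsewhere */

int main(void) {
    greet();
    return 0;
}

/* Illustrative build, showing the object-code intermediate step:
 *   cc -c greet.c                # -> greet.o (object code)
 *   cc -c main.c                 # -> main.o  (unresolved ref to greet)
 *   cc main.o greet.o -o hello   # linker joins the modules
 *   ./hello
 */
```

Before the final step, running `nm main.o` lists `greet` with a `U` (undefined) marker, which is exactly the kind of placeholder described above.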
- "Compiler". TechTarget. Retrieved 1 September 2011.
Traditionally, the output of the compilation has been called object code or sometimes an object module.
- Aho, Alfred V.; Sethi, Ravi; Ullman, Jeffrey D. (1986). "10 Code Optimization". Compilers: principles, techniques, and tools. Computer Science. Mark S. Dalton. p. 704. ISBN 0-201-10194-7. |
1 Million Free Worksheets Kids Ten Worksheet
It includes unlimited math practice. Practice math problems like "make a ten" using numbers and addition with interactive worksheets for first graders, with easy-to-understand, fun math lessons aligned with the Common Core. Below, you will find a wide range of our printable worksheets in the chapter Make Ten with Three Addends of the addition section. These worksheets are appropriate for first grade math. We have crafted many worksheets covering various aspects of this topic, and many more, along with solutions and videos to help students learn how to count on to make ten and to take from ten.
According to the U.S. National Library of Medicine (NLM) and the National Institutes of Health (NIH), potassium is a very important mineral in the human body. It is involved in both electrical and cellular functions, and is necessary for healthy heart activity, proper carbohydrate metabolism, building muscle and much more. Here are some potassium-rich foods that you can easily add to your diet:
Vegetables. Healthy amounts of potassium can be found in broccoli, peas, winter squashes, potatoes (especially the skins), sweet potatoes and lima beans. The United States Department of Agriculture (USDA) notes that eating potassium-rich vegetables may lower blood pressure, reduce the risk of developing kidney stones and decrease bone loss.
Fruits. Not only is fruit delicious, it gives your body important doses of potassium. Try citrus fruits, bananas, prunes, kiwi and cantaloupe. Interestingly, dried apricots contain more potassium than fresh apricots, and they make great snacks at school, work or on the go.
Milk and yogurt. The USDA lists an eight-ounce container of plain, non-fat yogurt as having 579 mg of potassium and only 127 calories. One cup of non-fat milk has 83 calories and 382 mg of potassium. Both are what the NLM and NIH describe as “excellent sources” of potassium.
Nuts and seeds. According to the Mayo Clinic, nuts are good sources of potassium and they contain valuable magnesium, fiber, protein and healthy fats. Almonds and sunflower seeds both offer good amounts of potassium.
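For a rough sense of how these numbers add up, here is a small worked sketch in C. The yogurt and milk figures reuse the USDA numbers quoted above; the banana figure and the 4,700 mg adult daily value are common reference numbers included here as illustrative assumptions, not values taken from this article:

```c
/* Tally potassium from a few servings against the adult daily value. */
#include <stdio.h>

int main(void) {
    int yogurt = 579;   /* mg, 8-oz plain non-fat yogurt (USDA figure) */
    int milk   = 382;   /* mg, 1 cup non-fat milk (USDA figure)        */
    int banana = 422;   /* mg, one medium banana (assumed)             */
    int total  = yogurt + milk + banana;
    const int daily_value = 4700;  /* mg, assumed adult reference value */

    printf("total: %d mg (%.0f%% of the %d mg daily value)\n",
           total, 100.0 * total / daily_value, daily_value);
    return 0;
}
```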
It’s important to note that the human body can have too much (hyperkalemia) or too little (hypokalemia) potassium. These imbalances can be caused by a variety of diseases, medications, conditions and more, so talk to your doctor before increasing your potassium levels or drastically changing your diet in any way. For more information about potassium and other nutrients, visit the USDA at www.choosemyplate.gov. |
This Is Why Bees Make Honey
In case you weren't paying attention that day in school.
Some facts in life are just a given: Babies cry, dogs bark, bees make honey. But wait, why do bees do that? While we're happy to reap the benefits of their hard work—namely in the form of delicious honey-flavored candies and sugary spreads—the truth is that few of us actually know why bees make honey.
Turns out, bees make honey because they need to eat it! During the summer, the insects collect nectar, which they then use to create honey. And they're producing a ton of the stuff—essentially because whatever they produce (or, more importantly, don't produce) is what they've got to sustain themselves on during the long, cold, flowerless winter.
Here's how the process goes down: First, a hive's worker bees visit nearby flowers to collect nectar. The bees store this nectar in their second stomachs and then head back to their hive. Once there, they begin the process of converting nectar into honey. To do that, a bee will regurgitate the nectar they collected into another bee's mouth. That bee will chew on the nectar for about a half hour and then pass it onto another bee. The bees will repeat this process until the nectar becomes honey. Finally, they store their final product in the hive's honeycomb cells.
"Honeycomb cells are like tiny jars made of wax," writes journalist and beekeeper Bill Turnbull for the The Guardian. "The honey is still a bit wet, so the bees fan it with their wings to make it dry out and become more sticky. When it's ready, they seal the cell with a wax lid to keep it clean." That way, it can be stored indefinitely and eaten should other sources of food (bees can eat honey, pollen, honeydew, and plant spores) become scarce—which in the winter is practically guaranteed.
Producing enough honey in the summer is crucial, and depending on the size and location of a hive, it could take anywhere between 40 and 60 pounds of honey to sustain a hive through winter. If the bees fail to harvest enough, they might have to resort to cannibalism. That means feasting on their own larvae and eggs. Not the most appetizing meal!
So there you have it. While you might have thought bees just disappear during the winter months, they're actually gorging themselves on the honey they produced during the summer.
Hearing tests are used to determine the type and severity of the hearing loss. Several measurements are made during this test for each of the right and left ears.
The first test is called air-conduction audiometry. Headphones are used to find the lowest volume at which you can hear a sound. This represents the normal way sound enters the ear.
The second type of hearing test is called bone conduction audiometry. For this test, a probe is placed behind your ear on the mastoid bone. Vibrations from the probe get transmitted directly to the cochlea, bypassing the ear canal, eardrum, and ossicles. The lowest level of sound you can detect is recorded. This gives us a measurement of how well the actual hearing nerves are working.
If there is a difference between the air-conduction and the bone-conduction tests, it is called a conductive hearing loss. Reasons for a conductive loss include wax in the ear, fluid in the ear, a hole in the eardrum or middle ear bones that are not working normally. Such causes are often treatable.
If there is no difference between the two types of hearing tests, the hearing loss is considered a nerve, or sensory, type of hearing loss. This is a common type of hearing loss that occurs with age or can be noise-related. In some rare instances, this type of hearing loss may recover if it was sudden in onset.
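As a rough illustration of how the air-conduction and bone-conduction results are compared, here is a toy sketch in C. The 25 dB "normal" cutoff and the 10 dB air-bone-gap criterion are illustrative assumptions, not values taken from this article, and real audiograms compare thresholds across many frequencies:

```c
#include <stdio.h>

/* Classify one test frequency from its two thresholds (in dB HL,
 * where higher numbers mean worse hearing). Cutoffs are illustrative. */
const char *classify(int air_db, int bone_db) {
    int gap = air_db - bone_db;               /* the air-bone gap */
    if (air_db <= 25)                return "within normal limits";
    if (gap >= 10 && bone_db <= 25)  return "conductive loss";
    if (gap >= 10)                   return "mixed loss";
    return "sensorineural (nerve) loss";
}

int main(void) {
    printf("%s\n", classify(50, 15));  /* large gap: conductive    */
    printf("%s\n", classify(50, 45));  /* no gap: sensorineural    */
    printf("%s\n", classify(15, 10));  /* quiet thresholds: normal */
    return 0;
}
```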
Another test performed will determine how well your ear can understand speech. Some people can understand everything if the volume is loud enough. Other people cannot understand a word no matter how loud the volume is. This test is important because hearing aids are only useful if you can still understand spoken words.
The final part of the test checks the pressures in the middle ear. It can be used to document a hole in the ear drum, fluid in the ear, or an ear with pressure problems – essentially one which cannot pop.
These are the most common types of hearing tests. There are others that can be done to give more detailed information. Comparing the results of all the types of hearing tests allows the doctor to accurately determine the cause of the hearing loss. |
November 4, 2020
Researchers discover a new way to produce hydrogen using microwaves
A team of researchers from the Polytechnic University of Valencia and the Spanish National Research Council (CSIC) has discovered a new method that makes it possible to transform electricity into hydrogen or chemical products solely using microwaves—without cables and without any type of contact with electrodes. This represents a revolution in the field of energy research and a key development for the process of industrial decarbonisation, as well as for the future of the automotive sector and the chemical industry, among many others. The study has been published in the latest edition of Nature Energy, where the discovery is explained.
The technology developed and patented by the UPV and CSIC is based on the phenomenon of the microwave reduction of solid materials. This method makes it possible to carry out electrochemical processes directly without requiring electrodes, which simplifies the process and significantly lowers its cost, as it provides more freedom in the design of the structure of the device and in choosing the operation conditions, mainly the temperature. "It is a technology with great practical potential, especially for its use in storing energy and producing synthetic fuels and green chemical products. This aspect has significant importance today, as both transportation and industry are immersed in a transition to decarbonise, meaning they have to meet very demanding goals between 2030 and 2040 to decrease the consumption of energy and substances from fossil sources, mainly natural gas and oil," highlights José Manuel Serra, research lecturer of the CSIC at the Chemical Technology Institute.
Green hydrogen for industrial and transportation uses
The main use of this revolutionary technology is the production of green hydrogen (produced without emitting greenhouse gasses) from water for industrial and transportation uses. As noted by the ITQ and ITACA team, it is a technology with great potential for the automotive sector, specifically for cars fuelled by fuel cells and hybrids, or for large vehicles such as trains or ships. It also holds promise for the chemical industry, metallurgy, the ceramic sector and the production of fertilizers, among many other sectors.
"This method will make it possible to transform renewable electricity, typically of solar or wind origin, into added value products and green fuels. It has countless uses and we hope that new uses emerge for the storing of energy, developing new materials and chemical production," highlights José Manuel Catalá, researcher at the ITACA institute of the UPV.In the article published in Nature Energy, the researchers also provide a technical and economic study that shows that this technology would make it possible to obtain high energetic efficiency, and that the cost of the facilities to carry out the hydrogen production process are very competitive compared to conventional technologies.
Ultra-fast charging of batteries… and space exploration
The UPV and CSIC team is studying other future uses for this technology, and is currently focusing its efforts on its use for the ultra-fast charging of batteries. "Our technology could enable a practically instantaneous decrease in the size of the electrode (metallic anode) that stores energy. In other words, we would go from a layer-based progressive charging process, which can take hours, to a simultaneous process in the entire electrolyte, which would make it possible to charge a battery in a few seconds," says José Manuel Catalá.
Another use would be the direct generation of oxygen with microwaves, which opens a broad spectrum of new uses. "One specific use would be the direct production of oxygen from extra-terrestrial rocks, which could have a key role in the future exploration and colonization of the moon, Mars or other rocky bodies of the solar system," concludes José Manuel Serra.
A short history of the discovery
The team of researchers observed that when ionic materials were being treated with microwaves, the materials displayed unusual changes in their properties, especially their electronic conductivity, changes that did not happen when they were heated conventionally. "Our curiosity to understand these sudden changes in their electrical properties made us dig deeper, designing new experiments and new microwave reactors, and applying other analytical techniques," explains José Manuel Catalá.
The team from the ITACA and ITQ institutes verified that microwaves interact with these materials by accelerating the electrons and triggering the release of oxygen molecules from their structure (which is also called reduction). This change manifested itself specifically as sudden alterations in conductivity at relatively low temperatures (approximately 300 °C). "This semi-stable state is maintained while microwaves are applied, but tends to revert by way of reoxygenation (reoxidation) when microwaves cease being applied. We soon realized the great practical potential of this discovery, especially at a juncture such as the one we are in today, of progressive decarbonisation, which is required to reach the goal of the European Union being climate neutral in 2050, an economy with zero net greenhouse gas emissions," concludes José Manuel Serra.
Bitter and Sour Worksheets
Kindergarten Bitter and Sour Worksheets is a series of worksheets that helps kids practice identifying bitter and sour flavors. This is a great activity to support the Kindergarten curriculum. These worksheets are great for teaching about the senses and encouraging children to experiment and explore the world around them.
This bitter and sour worksheet can be used to identify the different tastes from a choice of three pictures. Children can then think of reasons to explain their choice.
The Green Frog is a common North American amphibian. Close your eyes and picture a typical frog: you now know what this species looks like! They live throughout the eastern United States, and researchers recognize two different subspecies throughout that range. Read on to learn about the Green Frog.
Description of the Green Frog
It’s a frog, and it’s green! Just kidding, this species comes in a variety of different colors. While most individuals have green skin, some have predominantly brown or olive colored skin. A rare handful even have blue skin!
Adults typically measure between three and five inches long. The largest individuals reach about three ounces, though most weigh just an ounce or two.
Interesting Facts About the Green Frog
You can easily find this species throughout the eastern United States. Learn what makes this common frog so interesting below.
- Tympanum – When you look at this species, you might notice an oddly round patch of skin on either side of the head. This disk is actually the frog’s eardrum, also known as a tympanum!
- Sexual Dimorphism – The tympanum is also an easy way to distinguish between male and female frogs. On females, these are about the size of their eye. However, the tympanum in males is significantly larger than their eye.
- Communication – Those eardrums come in handy when it is time to mate. Males defend a territory, and use a variety of calls to communicate with other frogs. He uses a specific call to entice females, chase away rival males, and more.
Habitat of the Green Frog
If a habitat has water in it, you can probably find a frog in it! This species lives in an incredibly vast variety of habitats. They occupy lakes, ponds, swamps, streams, riverbanks, bogs, marshes, and more. However, because their skin is semi-permeable, they cannot live in saltwater habitats and they are sensitive to pollution.
Distribution of the Green Frog
You can find this frog from the southernmost reaches of eastern Canada to northern Florida. In the westernmost extent of their range, these amphibians extend from Canada through Minnesota and south to Texas.
The northern subspecies lives throughout the northeast. The bronze subspecies inhabits all of the southern states from Texas to southern North Carolina.
Diet of the Green Frog
This species is primarily carnivorous, but typically only eats insects and other invertebrates. Some common prey items include flies, slugs, moths, spiders, caterpillars, snails, and even crayfish. The largest individuals even hunt small snakes and other frogs.
Their primary method of hunting is ambush. They sit quietly near the edge of a body of water and wait for prey to stray too close. When it moves within range, the frog hops forward and snatches it up.
Green Frog and Human Interaction
Human activity can be incredibly detrimental to amphibians. Fertilizer runoff and other pollution severely impact these creatures because their skin acts as a semi-permeable membrane. This means that the toxins can penetrate their skin just from them swimming through the water.
However, this species has a large population and a wide distribution. Though populations in certain regions might face decline, the species overall is stable. Because of this, the IUCN lists this frog as Least Concern.
Humans have not domesticated this species in any way.
Does the Green Frog Make a Good Pet
This frog species can potentially make a good pet, but you probably shouldn’t go catching one any time soon. Even though these creatures can be quite docile and friendly, it’s important to remember that they are wild animals. In some regions, they might even be protected by law.
Green Frog Care
Some zoos and aquariums keep these frogs in their collections. As amphibians, they make great ambassadors for the fragile state of the aquatic ecosystems that they live in.
Zookeepers and aquarists house these creatures in relatively large enclosures with controlled humidity and a fresh water source. They feed them commercially produced feed for amphibians, as well as crickets, mealworms, and minnows.
Behavior of the Green Frog
These frogs are both diurnal and nocturnal. This means that they forage throughout both the day and the night. During the colder months they dig burrows and hibernate underground. After they emerge in the spring, they congregate in large numbers to breed. Males use their calls to attract females.
Reproduction of the Green Frog
These amphibians reproduce via amplexus. In amplexus, the male uses his legs to hold onto the top of the female. The female releases her eggs, and the male fertilizes them outside of the body.
They breed and release eggs in a body of water. Clutches contain up to 7,000 eggs. It takes about a week for the eggs to hatch into tadpoles. |
History and Geography of the Danube River, by Rick Price - Tuesday, July 27, 2010
By Rick Price, Ph.D.
You probably learned in 7th grade geography that the three most important rivers of Europe are the Rhine, the Rhone, and the Danube – but that is likely the extent of your knowledge. Unfortunately, for most Americans our understanding of world geography stopped with those kinds of simple, unconnected facts.
So this brief essay is a refresher course on the importance of the Danube River. On our bicycle tour along the Danube you’ll learn that the Danube River is something like 2880 km (1780 miles) long. It is not only Europe’s second-longest river (after the Volga) – more than twice the length of the Rhine and nearly three and a half times the length of the Rhone – but it flows through or forms the boundary with eight different countries.
The Danube rises in Germany’s Black Forest, flows through the heartland of Austria, forms the border with Austria and Slovakia, then Slovakia and Hungary, before flowing through Hungary, into Croatia and Serbia, to then form the boundary between Serbia and Romania, then Romania and Bulgaria, where it finally empties into the Black Sea.
Over the centuries the Danube has been less important as an economic corridor than the Rhone or the Rhine, but it has been one of the most significant cultural and historic boundaries in Europe. During Roman times the Danube River was the northernmost boundary of the Roman Empire, and as late as 454 AD the full length of the Danube formed the boundary between a crumbling Roman Empire and the barbarian invaders from the steppe lands of Ukraine and Central Asia.
The Roman legacy bestowed upon the Danube its importance as a Medieval trade route, whether by boat on the river or along its banks. This role created important trade and transportation centers all along it, including Regensburg and Ulm in Germany, Linz and Vienna in Austria, Bratislava, the capital of Slovakia, Budapest, capital of Hungary, and Belgrade, the former capital of Yugoslavia and now the capital of Serbia. Beyond Belgrade, the Danube enters the “Iron Gates,” a great corridor through the Carpathian Mountains and the Balkan Mountains and then it spills into the plains of the ancient Roman province of Wallachia. Flooding has been a problem there since Roman times (and still is) – because of the floods, no major cities developed on the Danube downstream of Belgrade. Bucharest, the capital of Romania, is 80 km (50 miles) uphill from the Danube, well protected from the spring floods.
Historically this trade corridor along the Danube gave rise to two major empires, the Austrian and the Hungarian, which merged under Austrian leadership to become the Austro-Hungarian Empire in 1867. Further downstream the flood-prone plains of Wallachia effectively formed a significant boundary between modern Romania and Bulgaria, allowing a significant cultural divide to develop between these two regions. In fact, the lower Danube became a critical cultural border region between Austria and the Ottoman Empire. To this day Romania and Bulgaria reflect their respective and separate histories, with Romania having a Romance language and Bulgaria demonstrating key historic affinities in architecture and religion with the Ottoman Turks.
A bicycle ride down the Danube River takes you through seven countries and at least as many language regions, not counting local dialects. You’ll experience four distinct language groups: Germanic (German), Slavic (Slovak is West Slavic; Croatian, Bosnian, Serbian, and Bulgarian are South Slavic), Hungarian (a family all its own), and Romance (Romanian, related to French, Italian and Spanish).
Imagine the variety in food, architecture, and history that goes with each of these languages and cultures and you have a real panorama of two thousand years of European history as you travel the length of the Danube. Join us, won’t you?
Danube (Panther) by Claudio Magris (Author) – A comprehensive history and travel book about the Danube by Italian scholar, Claudio Magris. Sometimes ponderous, in the style of Italian writers, it is worth a read BEFORE you depart. Don’t try to carry it with you! |
What is Tinnitus?
Tinnitus refers to an auditory perception (sound) in the ear that is not a sound in the environment. Tinnitus is commonly described as a ringing, roaring, hissing or whooshing sound.
The sounds can range from high pitch to low pitch and can vary in loudness. Tinnitus can be isolated to one ear, both ears, or even to the center of the head.
Tinnitus is NOT an auditory hallucination or an illusion.
Tinnitus has been related to the following conditions:
- Excessive noise exposure
- High blood pressure
- Wax or fluid in the ear
- In rare cases, a tumor on the auditory nerve
What exactly causes the sounds in my ears?
There are many possible reasons for tinnitus. Some of these reasons are:
- Problems with the little hairs in the inner ear (cochlea)
- Problems with the functioning of the nerve of hearing (auditory nerve)
- Inability of the brain to perform normal reduction (inhibition) of a tinnitus sound
What can I do about it?
Schedule appointments to see a medical doctor, preferably an ENT (Ear, Nose and Throat physician), and a certified audiologist.
The ENT and the audiologist will combine the results of your medical and hearing evaluations to determine further follow-up.
Depending on the results of your testing, your doctor and audiologist may recommend further diagnostic tests (i.e. MRI, CT scan) or may simply recommend that you pursue treatment for your tinnitus.
What can be done for Tinnitus?
Once it is determined that your tinnitus is not attributed to a treatable medical condition (such as fluid or wax in the ear), you may wish to pursue one of the many tinnitus management options. Some of these include:
- Tinnitus retraining therapy
- Tinnitus maskers
- Hearing aids (for those with hearing loss)
- Combination hearing aids/tinnitus maskers
- Biofeedback therapy
- Certain medications (please note: there is no medication to reduce the tinnitus itself, but some medications may be able to reduce your strong emotional reaction to the tinnitus). |
Chapter 1 Before Civilization
The Rise of Humanity
Stones that have been chipped and shaped, slivers of sharpened bone, bits and pieces of old pots: these are the kinds of clues that scholars and scientists try to put together to understand humanity's deep past. As might be expected, with so little evidence, there are frequent disagreements among experts on how the pieces fit together and what the puzzle means. The lack of evidence should not surprise us, however, for we are talking about creatures who lived and died between about 4,000,000 years ago and 30,000 years ago.
[Image taken from http://www.talkorigins.org/faqs/comdesc/hominids.html]
The origins of humanity are much disputed. As far as we can tell now, the first human-like creatures, or hominids, began to walk upright on the face of the earth between three and four million years ago. Anthropologists, scientists who specialize in investigating the origins and development of the human species, tell us that in those faraway days at the dawn of human history several different types of hominids roamed the African savanna, grasslands dotted with trees and scattered underbrush. The earliest remains of such creatures have been found in east, northeast and southern Africa, and are members of the species called Australopithecus, or Southern Ape.
There seem to have been two types of Australopithecus, one that averaged about four feet in height and a larger one that averaged about five feet. Both types walked upright, but their brains were only about one-third as big as that of a modern human being. Many scientists believed that the smaller version may have been the basic stock from which early human beings developed.
In 1975, however, a leading anthropologist, Mary Leakey, discovered the jaws and teeth of what appeared to be an early human near the Olduvai Gorge in Tanzania, East Africa. Different from Australopithecus, these human remains dated from about 3.75 million years ago, the oldest ever found. The same year, two other scientists working in Ethiopia, an American, Donald Johanson, and a Frenchman, Maurice Taieb, announced a find they had made the previous year: the oldest Australopithecine remains yet found, those of a young female whom Johanson promptly named "Lucy" (after the hit song by the Beatles, "Lucy in the Sky with Diamonds"). Lucy too was over 3 million years old. These two finds taken together suggested that Australopithecus was not a direct ancestor of humanity, but rather a species that survived alongside the predecessors of modern human beings.
Smaller than modern human beings, and with considerably smaller brain capacities, these early species were nevertheless similar to us in some important ways. For example, they learned to use simple tools of stone and wood. They apparently learned to cooperate with one another in finding food, particularly in hunting small animals. They also probably developed some form of language as a means of communication. It is even possible, though the evidence is sketchy, that some of them may have learned to use and control fire.
These hominids lived on the earth much longer than modern humankind, for the traces we have found of them span several million years. By about 250,000 years ago, however, when the first biologically modern types of human beings had begun to appear in small, scattered hunting bands, these earlier hominids had begun to disappear from the planet. The reason, as with the periodic disappearance of other species, seems to have been a failure to adapt rapidly enough to a changing environment.
All life depends upon its ability to draw sustenance from its surroundings. For land animals this means air, water and food, and perhaps shelter. Creatures survive only when they have the physical characteristics and skills needed to obtain these requirements. If the environment changes suddenly, then the characteristics and skills may have to change as well.
For example, animals living in a warm climate do not need heavy fur to keep them warm. But if the weather changes and becomes much colder they must adjust to the new temperature or risk freezing to death. Changing climate may also prevent the foods on which they have depended from being able to grow. They must either learn to eat different kinds of foods, which can grow in the new climate, or move away in search of climates where their natural food supply does still grow. The process of making these adjustments to the changing environment in order to survive is called environmental adaptation.
What really separated modern humans from their hominid predecessors were the means by which they practiced environmental adaptation.
Early hominids, and perhaps even early human populations, depended for survival primarily on evolution, a process of biological adaptation to their environments. Biological adaptation is the process by which the physical characteristics of a species change over time. Much misunderstood, and still disputed by many scholars, the idea of how species, including humanity, change was the subject of Charles Darwin's investigations in the 1800s.
As far as scientists can tell now, evolution occurs through sudden changes, called mutations, in the genetic structure of particular individuals. Genes are tiny particles within the physical body, organized in structures called chromosomes, and carried in the reproductive cells of the body. Genes provide the blueprint from which the body itself develops through life. If the genetic structure of a body is somehow changed, then so too will be the information that governs its physical development. When such genetic changes occur, they show up in the form of new physical or mental traits in the next generation. Genetic mutations can be caused by many things. Malnutrition, for example, can cause chromosomal damage. Also, we know that exposure to certain types of radiation, such as extreme sunlight or radioactive materials like uranium and plutonium, as well as exposure to certain man-made and even some naturally occurring chemicals may often cause genetic damage. Even so, we still do not fully understand all the causes of genetic mutation from one generation to the next. Consequently, whether such changes are entirely random or subject to some larger pattern in the universe is a matter of considerable debate.
Although most mutations seem to have negative results, often lessening the chances for survival of those that are affected by them, occasionally the opposite is true. Sometimes, these new, inherited characteristics may give individuals a better chance of adapting to a changing environment than the other members of their species. Being able to extract the necessities of life more efficiently from the environment than their fellows, these individuals may live longer and have more children. The children will carry the new mutation in the genetic codes they have inherited. With a greater ability to adapt to the environment, these new individuals will gradually replace the older population simply by outliving and out-reproducing them.
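As a toy illustration of this process (all numbers are invented for the example; real population genetics involves far more), here is a minimal simulation in C of a mutation with a small reproductive advantage spreading through a population:

```c
/* Minimal toy model: carriers of a mutation leave slightly more
 * offspring each generation, so the trait's share of the population
 * grows. Population size, advantage, and time span are illustrative. */
#include <stdio.h>

int main(void) {
    double p = 0.01;              /* fraction carrying the mutation    */
    const double fit_new = 1.05;  /* carriers leave 5% more offspring  */
    const double fit_old = 1.00;  /* baseline reproductive success     */

    for (int gen = 0; gen <= 200; gen++) {
        if (gen % 40 == 0)
            printf("generation %3d: %5.1f%% carry the mutation\n",
                   gen, p * 100.0);
        double mean = p * fit_new + (1.0 - p) * fit_old;
        p = p * fit_new / mean;   /* standard replicator update        */
    }
    return 0;
}
```

Even a 5 percent advantage carries the trait from 1 percent of the population to nearly all of it within a couple of hundred generations: quick by evolutionary standards, yet still far slower than the learned, cultural responses discussed next.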
By the time of the emergence of modern human beings, however, the importance of biological adaptation to environmental changes as the key to survival had begun to be overtaken by that of cultural adaptation. Cultural adaptation is the means by which human beings adapt to their environment not through their inherited traits but through learned skills and techniques of survival. Particularly by sharing learned skills with one another -- in other words, through social interaction -- people greatly expanded their capacity for environmental adaptation. Such cooperative activity allows a combination of effort to achieve not only individual survival, but group survival as well.
Cooperative group efforts are generally more efficient than individual efforts in extracting the necessities of life from the environment. For example, a single human being is unlikely to be able to hunt an elephant. A group of people working together, however, may do so with great success. By combining forces, individuals may find that they need not change their behavior to suit the environment. Instead, together they may change the environment to suit themselves.
The ability for cultural adaptation, of course, is at least partly due to the genetic mutations that led to larger and larger brain sizes in both hominids and early human beings. For above all, cultural interaction and exchange require an expanded capacity for memory. It is memory that allows us to store the knowledge we gain from experience. When confronted with new experiences, we may then call up the stored memory of past experiences and compare or contrast the two. We probably all remember the lesson of the hot stove: once burned we remember not to repeat the painful experience. In essence, this is the process of learning - which is dependent upon our capacity for memory.
Equally important, however, is the ability to communicate what we have learned both to other individuals and especially to succeeding generations. Consequently, perhaps the most important development that arose out of such cooperative social interaction was language. Language would establish cultural adaptation as the primary force in human development.
The development of language provides a good example of how learned traits and inherited traits interact with each other. Human beings may have learned the importance of cooperation within their groups particularly on the hunting trail. In fact, many scholars believe that language itself probably first developed out of long-distance signals and calls used to coordinate the hunt. Yet even this development was only possible because of genetic changes that had resulted in the development of the human vocal box, a much more flexible tool for making sounds than that of many other species.
Moreover, as early humans became better hunters they also increased the amount of protein, calcium, and other elements essential to the growth of brain cells in their diets. The improved diets stimulated brain development and thus the capacity for greater and greater intelligence. With greater intelligence, humans became even better hunters.
In other words, evolutionary developments often made possible cultural developments—which in turn stimulated further evolutionary developments. In fact, the interactions between biological and cultural means of adaptation have made humanity one of the most flexible species on the planet. This flexibility has been the most important element in both the survival of humanity in the face of all challenges, and in its present ability to transform its own environment in ways unknown to any other species.
Language was a major step forward in cultural terms, for it could soon be used for more than hunting. Personal communications provided opportunities for emotional and intellectual sharing. This must have contributed greatly to the ability of human beings to develop and express their individual sense of identity and to relate it to the larger group identity. Above all, perhaps, language made it possible to share learned experiences among individuals, and from generation to generation.
Cultural adaptation has one tremendous advantage over biological adaptation to a changing environment -- speed. Biological evolution is a long, drawn out process. If there are sudden violent changes in the environment, mutation is far too slow a process to insure individual survival, much less species survival. With cultural adaptation, however, individuals may respond instantly and with much greater flexibility to changes in the environment. This capacity for rapid change guarantees a higher probability that both individuals and the species as a whole will survive and reproduce. The proper beginning of human history might well be seen as the emergence of cultural adaptation as the primary means by which the human species learned to adapt to the environment.
THE EMERGENCE OF MODERN HUMANITY
No one knows exactly when cultural adaptation became more important than biological adaptation in human development, but it certainly took a very long time: from roughly 3.75 million years ago to about 100,000 years ago. Over the intervening span of time several types of creatures appeared and disappeared that seemed to be coming closer to the kind of human beings we are today.
Those that seem to have exhibited behavior characteristic of human beings, but whose physical forms were certainly not those of modern human beings, have been called Homo Habilis, or "skilled man," by many scientists. Campsites from about a million and a half years ago have been found that contain early stone tools, usually in the shape of chipped and sharpened pebbles. Other anthropologists, however, dispute whether all of these creatures were truly human, believing that some may be examples of Australopithecus. By about 1.2 million years ago, a closer relative of modern human beings had appeared called Homo Erectus, or upright man.
Standing a bit over five feet tall, with a sloping forehead and virtually no chin, Homo Erectus had a brain twice the size of all his predecessors - but still only about two-thirds the size of ours. His tools were more complex and highly developed than those of earlier populations. He created and used chopping stones and hand axes. He probably first began to wear clothing, at first loose animal skins for warmth, and later perhaps clothes made of plant materials. Homo Erectus was also probably the first species to discover the use, though not necessarily the control, of fire. Like earlier hominids, Homo Erectus spread beyond the confines of Africa, moving into Europe and even Asia. Fossil remains of Homo Erectus, for example, have been found on the island of Java in Southeast Asia (Java Man), and near modern-day Peking in China (Peking Man).
By about 100,000 B.C. the earliest modern human beings, Homo Sapiens, or "thinking man," had appeared in Africa. Over the next sixty thousand years or so they too spread out of Africa and into all the areas previously occupied by the early hominids. Sometime after 40,000 B.C. they even moved into northern Eurasia and Australia. The earliest evidence of Homo Sapiens in North America also dates from about 40,000 B.C., although they apparently did not spread south into Mesoamerica and South America until sometime after about 30,000 B.C.
For a more detailed interactive analysis of human migrations out of Africa based on the most recent genetic evidence see also: http://www.bradshawfoundation.com/journey/
There were apparently two principal strains of Homo Sapiens: Neanderthal, which emerged
earlier; and Cro-Magnon,
which emerged later. Cro-Magnon represents the first truly modern human
population, known as Homo
Sapiens Sapiens, or "thinking thinking man." Whether
Cro-Magnon competed with the earlier Neanderthal, or even hunted them out
of existence, is unclear. Evidence from the Middle East suggests that
communities of both sometimes lived near each other, apparently in
harmony. Cro-Magnon was clearly the more adaptive, however, for by 30,000
B.C. the Neanderthal record disappears. Yet even as cultural adaptation
replaced biological evolution as the primary adaptive technique among
human beings, biological development of the species continued. This
development can be seen in the minor biological differences that developed
among groups of humans after they had spread to various regions of the world.
BEGINNINGS OF RACIAL VARIATIONS
While the actual process by which different types of modern humans emerged is
not fully understood, the differences seem to be a result of local
adaptation to particular environments. Virtually all modern geneticists,
those scientists who specialize in the knowledge of genes and genetic
structure, agree that all human beings today, regardless of their
different appearance, come from a common ancestry. The differences, they
believe, developed over long periods of time in which groups of humans
were separated from others. As each group adapted to its local physical
environment, its members developed unique, biologically inherited
characteristics that soon distinguished them from all other groups. These
characteristics have provided the basis for what most people call race. Given the nature of the human experience, however, the
story is a bit more complicated than isolated groups simply developing distinct traits on their own.
For scientists, the number of races may vary widely, depending upon
the different classifications each scholar uses. In general, however,
anthropologists talk about large groups of populations, which they call geographical
races, and smaller population units called local races. In effect,
geographical races are made up of several local races that may have
slightly different genetic characteristics, but that are more alike to
each other than they are to other groups of local races. [The accompanying
map shows the general distribution of geographical races as they
appeared before about 1500 A.D., when European expansion began to bring
more and more different peoples into contact with each other.] Even
these geographical races, of course, did not exist until long after
the spread of humanity had separated different groups, and their
subsequent history had brought them back into contact in ever new ways.
In fact, there seem to be four basic ways in which different
peoples can develop different racial characteristics. Two of them are
related aspects of biological adaptation: genetic mutation and natural
selection. The other two are due to cultural and social factors: genetic
drift, a term that refers to chance genetic changes only within
small populations (for example when one male fathers most of the children
of a group, most of the descendants of the group will carry his genes);
mixing, in which different racial groups begin to intermarry.
It might be argued that all of human history has been a process in
which smaller, isolated groups of humans have gradually mixed together in
larger and larger groups. Such mixing, of course, has been greatest among
groups that live next to one another, and is generally least among
populations that live the farthest from each other. As humanity has filled
up the planet, however, advances in technology have brought more and more
populations into contact, resulting in increased levels of racial mixing.
Consequently, as we shall see, nomadic warriors from central Asia,
conquering settled areas from China to Europe, contributed considerably to
racial mixing. So too did the later European migrations around the world,
as have all migrations of large groups of people from one area to another.
Perhaps the most difficult questions involving race have come from people using the term incorrectly. It is probably fair to say that most people identify someone else's racial background on the basis of physiognomy, or the visual physical appearance of the person - such as skin color or the shape of body parts like eyes, nose and head. Such a visual method of racial identification, however, is often extremely misleading. For example, from a genetic standpoint it is as wrong to speak of a single Negro race as it is to refer to a single Caucasian race or a single Asian race. The people of Nigeria differ genetically from those of Madagascar or Angola, just as people in Ireland differ genetically from those of Greece and people in Mongolia differ from those in China or Japan.
On the other hand, many people associate race with culture, identifying people on the basis of their common history and shared customs. Some have even identified race solely on the basis of language, as for example the "English-speaking race." Still others confuse the idea of race with that of nationality, meaning what country someone comes from, for instance the "German race" or the "Spanish race." Just as confusing is the use of the word race to refer to ethnicity, which is more properly a reference to a combination of genetic and cultural features. Whichever definitions people use, however, in the long history of Humanity the perception of such differences in appearance, in language, in culture, and even in national origin has often led to fear, suspicion, hatred and even war, particularly in times of rising insecurity and competition among different groups of people for vital resources.
As you drive along the Overseas Highway through the Keys, look for iron cannons and anchors in front of restaurants, strip malls, marinas, and in roadside parks. These artifacts were salvaged from 1733 fleet wrecks and other ships before the consequences of removing waterlogged objects from the marine environment were realized. If you stop to inspect the artifacts you’ll see the effects of long-term corrosion. Large pieces of the metal are literally peeling off and rusty flakes and chunks pile up under the remains of the cannons and anchors. Eventually, they will crumble away to nothing. The reason for this deterioration is lack of proper conservation when the object was taken from the sea.
Iron and other metals react with seawater forming, over many years, a hard covering of corrosion products called concretion that often includes sand, shell, and coral from the surrounding environment. If the object is taken from the water and not conserved to remove salts and stop the corrosion process, it will quickly begin to fall apart. Ceramic, glass, and organic materials such as leather, rope, wood, and bone also require conservation to remove absorbed salts, to preserve their appearance, and to stabilize for curation and exhibition.
Conservation methods depend on the type of artifact and include soaking in fresh water and treatments with various chemicals to prevent warping, crumbling, and shrinking. Metal artifacts often are treated by electrolytic reduction to remove concretions and restore the metal. In the case of large iron objects such as cannons and anchors, conservation treatment can take years and become extremely costly. For these reasons, archaeologists often prefer to leave cannons and anchors on shipwreck sites. Over time the artifacts reach a state of equilibrium with their environment and, if not disturbed, will last for centuries. Wouldn't you rather see a shipwreck looking as it did when it wrecked with its cannons in place instead of rotting on the roadside?
The golden trout, the state fish of California, is native to the Kern Plateau, which is typically characterized by high altitude and prolonged winters.
The golden trout (Oncorhynchus mykiss aguabonita) is a subspecies of the rainbow trout (Oncorhynchus mykiss), which is found in the Golden Trout Creek and South Fork Kern River.
Also known as the California golden trout, it was previously thought to be a separate species, mainly because of its bright, distinctive color pattern. Owing to its magnificent color, it is also called the 'Fish from Heaven'. It was declared the state fish of California by the state legislature in 1947.
It is a small-sized fish with orange or red cheeks, olive-green back, and golden lower sides. Parr marks are present along the lateral line, while larger spots grace its fins and tail. Its pectoral, pelvic, and anal fins are bright orange in color. The size of an adult may range from 19 - 20 cm in streams and 35 - 43 cm in lakes.
Of the several species of trout, the golden trout is the least productive, which is evident from the specific conditions it requires for spawning.
The favorable conditions for spawning include warm temperature (at least 10 °C, preferably 16 - 18 °C), fine substrate, and minimum water velocity. When this fish finds suitable substrate, it lays eggs on or under it.
However, hatching of the eggs is disturbed by many factors, including flooding of habitat and/or drying of water.
Habit and Habitat
Adult golden trout feed on insects and other small invertebrates – mayflies, stoneflies, ants, spiders, worms, and beetles – along with insect larvae. They also rely on plankton, plant detritus, small fish, and the eggs of other trout.
These fish prefer high-elevation watersheds, as those are basically very clear and cold. The species is native to the watersheds of Sierra Nevada Mountain range, where food is scarce due to high altitude and prolonged winter conditions.
Various conservation methods have been implemented at the state and federal level in order to preserve this rainbow trout subspecies. In 1978, for instance, 300,000 acres of land were converted into the Golden Trout Wilderness. In 1991, it was listed on the U.S. Fish and Wildlife Service's Endangered Species List and the Forest Service's Sensitive Species List.
Despite these measures, there has been no increase in their population, the main reason being hybridization with the rainbow trout. Studies have revealed that this trout species is highly prone to interbreeding with other trout species, especially the rainbow trout.
The rate of hybridization is increased by poor land management plans, particularly in the Inyo National Forest. In order to minimize the chances of hybridization, a management strategy is implemented whereby the water of the South Fork Kern River is chemically treated to remove other non-native trout species.
Other reasons for the decline in the golden trout population include the destruction and/or modification of their natural habitat and competition with non-native species. Despite the state and federal regulatory activities, the population of this species has decreased to such an extent that it is on the verge of extinction.
What makes a password weak?
A weak password is one that can be easily guessed or broken.
This might be because it’s made up of public information associated with you. For example:
- Your or your family members' dates of birth
- Names of your family members
- Your pets' names
- Your nickname
- Your car
- Your favourite football team
Your password might be a known default password.
Many items of computer hardware which connect to the Internet have factory default usernames and passwords. These are often variations of the words admin and password.
Recently installed, but unconfigured software or content management systems will often use a default password which is publicly known and published in online manuals.
So far these are examples of public information being used as passwords.
For passwords made up of secret information, brute-force methods can be used to guess a password.
You might think your password is so easy to remember and type but so obscure, that no-one else would have ever thought of it, but you’re probably wrong.
Security researchers regularly publish the top 100 most popular passwords that crop up on the leaked lists.
In 2017, it was estimated that almost 10% of people used at least one of the 100 most popular passwords and almost 3% of people have used 123456 as their password.
These lists are regularly used for brute-forcing passwords, so anything on this list should be avoided.
You can check whether your password is on one of the leaked lists using this website: https://haveibeenpwned.com/Passwords
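If you'd rather not type a password into a web page, the same check can be scripted. Here's a minimal sketch in R — assuming the httr and digest packages are installed, and relying on the site's published k-anonymity API, which only ever sends the first five characters of your password's SHA-1 hash:

```r
library(httr)    # HTTP requests
library(digest)  # SHA-1 hashing

# Returns how many times a password appears in the leaked-password corpus.
pwned_count <- function(password) {
  sha1   <- toupper(digest(password, algo = "sha1", serialize = FALSE))
  prefix <- substr(sha1, 1, 5)    # only this 5-char prefix leaves your machine
  suffix <- substr(sha1, 6, 40)
  resp   <- GET(paste0("https://api.pwnedpasswords.com/range/", prefix))
  lines  <- strsplit(content(resp, as = "text", encoding = "UTF-8"), "\r\n")[[1]]
  parts  <- strsplit(lines, ":")   # each line is "SUFFIX:COUNT"
  for (p in parts) if (p[[1]] == suffix) return(as.integer(p[[2]]))
  0L
}

pwned_count("123456")   # a very large number indeed
```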
The more complex a password is, the more difficult it will be for brute-force methods to succeed.
Password complexity can be improved by doing one or more (or all) of the following:
- Avoid using a single word from a dictionary as your password. This will be found straight away when a list of dictionary words is tried one after another.
- Increase the number of characters in the password. A four-character password is much weaker than an eight-character password, for example (the sketch just after this list shows the arithmetic).
- Include upper and lower case characters in the password. Don’t just use a single uppercase letter followed by all lowercase letters.
- Include numbers and symbols in the password.
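To see why length and a larger character set matter, it's worth doing the arithmetic. A quick sketch in R, counting the worst-case number of guesses a brute-force attack needs:

```r
# Search space = (alphabet size) ^ (password length).
# 26 = lowercase only; 62 = letters + digits; ~94 = printable ASCII with symbols.
search_space <- function(alphabet_size, length) alphabet_size ^ length

search_space(26, 4)    # ~4.6e5  - lowercase, 4 chars: cracked instantly
search_space(26, 8)    # ~2.1e11 - same alphabet, double the length
search_space(94, 8)    # ~6.1e15 - same length, full character set
search_space(94, 16)   # ~3.7e31 - both: far beyond practical brute force
```

Each extra character multiplies the attacker's work by the full alphabet size, which is why length is the single most powerful lever.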
It used to be popular to replace letters with numbers that look like their alphabetic counterparts. For example, replace O (oh) with 0 (zero), L with 1 (one), A with 4, S with 5 etc. to create words like:
Baseball = b455b411
password = pa55w0rd
secret = s3cr3t
However, the brute-force algorithms have long been wise to this, so this sort of character replacement is one of the first things they try.
The most secure passwords
The most secure form of password is a long string of random uppercase and lowercase letters, numbers and symbols like this:
zKa4zD#5 (8 chars)
$f4qX6rxBU&B (12 chars)
1!^B5qUA$t0iU7l% (16 chars)
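Passwords like these are exactly what generators produce. A minimal sketch in R — note that sample() is fine for illustration, but for real secrets you'd want a cryptographically secure source of randomness:

```r
# Sample 'length' characters from a pool of letters, digits and symbols.
random_password <- function(length = 16) {
  pool <- c(LETTERS, letters, 0:9, strsplit("!$%^&*#@?", "")[[1]])
  paste(sample(pool, length, replace = TRUE), collapse = "")
}

random_password(8)    # e.g. something like "zKa4zD#5"
random_password(16)
```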
The disadvantage of these un-guessable, context-free, complex passwords is that they're almost impossible to remember and, as a result, end up written down – which completely defeats their purpose.
Passwords are often found written on Post-it notes and stuck under keyboards, in front or back covers of notebooks or on computer monitors.
Using a Password Manager
I would always recommend the use of long, strong, complex passwords in conjunction with a password manager. A password manager will generate, remember and enter long, strong, complex passwords for you, so you don't need to remember them or write them down.
Of course, you'll still need at least one strong, complex, memorable password to protect your password manager, so read on.
Choosing a strong complex memorable password
- Think of 3 or 4 random words. Look around you and get some inspiration. Don’t choose words that can be guessed by someone else or could be associated with you.
- Imagine a silly or weird situation in your mind that can be described using those words. This image is the key to memorising your password.
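If you have a large word list to hand, this style of password can also be generated rather than imagined. A diceware-style sketch in R — 'words.txt' is a placeholder for any word list with one word per line:

```r
# Pick n random words from a word list and join them into a passphrase.
passphrase <- function(n_words = 4, wordlist = "words.txt") {
  words <- readLines(wordlist)
  paste(sample(words, n_words), collapse = "")
}
```

With a 7,776-word diceware list, four words give 7776^4 ≈ 3.7 × 10^15 combinations — comparable to an 8-character fully random password, and far easier to remember.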
If you’re forced to use special characters by someone’s password policy:
- Choose where to put your capital letters. Don’t use a capital letter as the first character. Maybe the start of the 2nd and/or 3rd words?
- Can one of your words be a number? Change it to its numeric version.
- Pick one or more symbol characters and put them somewhere in the middle of the password. Don’t use them as the 1st or last characters.
Here’s a fun cartoon from xkcd.com
This is a really popular cartoon, so please don't use correcthorsebatterystaple as your password, as I'm certain it's now in every password-cracking dictionary 🙂
In this activity, students demonstrate how substances are dissolved and transported by water through the soil.
By the end of this activity, students should be able to:
- demonstrate the leaching of a substance through sand (soil)
- explain how irrigation and rain can affect nitrate and phosphate leaching.
Download the Word file (see link below) for:
- introduction/background notes
- what you need
- what to do
- discussion questions
- student handout.
Snapshots – The Constitution
Video duration: 2 min 29
Opening credits showing images of the parliament at work.
Title: Snapshots of Parliament: The Constitution
A book with the title 'Australia's Constitution'.
Narrator: A constitution is a set of rules for how a nation is governed. It's a bit like a guide book for running a country.
A graphic showing a map of Australia in a frame. Lines showing the borders between the colonies appear. The map of Australia now contains a union jack.
A black and white photograph showing the delegates at a constitutional convention, with Henry Parkes in the centre.
Narrator: Before 1901, Australia was not a nation, but rather six British colonies. These colonies were under the law-making power of the British Parliament. During the 1890s, representatives from the colonies met to discuss the idea of joining together to form a new nation. A written constitution was developed to set out the rules for how this new nation would work.
A different black and white photograph showing the delegates at another constitutional convention.
Narrator: Special meetings called “constitutional conventions” were held to work on a draft of the new constitution. Each colony held referendums to allow their people to vote yes or no on the new constitution.
The tally board for a referendum held in Western Australia.
A photograph of Westminster Palace in London, the home of the British Parliament.
The front page of the Commonwealth of Australia Constitution Act 1901.
Narrator: It took a few years, and many changes, but eventually the new constitution was approved by the Australian people. It was then sent to the British Parliament and passed as the Commonwealth of Australia Constitution Act. The Constitution came into effect on 1 January 1901 and Australia became a nation.
Front views of Parliament House in Canberra.
The High Court of Australia building.
Narrator: The Constitution describes how the federal Parliament works, what it can make laws about and how it shares its power with the states. It also describes the roles of the government and the High Court.
The main chamber of the High Court of Australia, showing the full bench of justices as well as lawyers and clerks.
Narrator: Sometimes there are disagreements over the issues relating to the Constitution. The High Court of Australia is responsible for providing the official interpretation of the Constitution and deciding on these disagreements.
Graphic of a book showing the eight chapters of the Constitution: The Parliament, The Executive Government, The Judicature, Finance and Trade, The States, New States, Miscellaneous, and Alteration of the Constitution.
Narrator: The Constitution is divided into eight chapters. Each of these chapters is divided into sections which describe the different powers in detail.
Graphic of a book showing the double majority necessary to change the Constitution.
Text: 44 proposed changes. 8 successful changes since 1901.
Narrator: Changing the Constitution requires a nation-wide referendum. A majority of Australian voters, and a majority of voters in at least four states, must agree to the changes. The Constitution has had a total of eight changes since 1901.
Graphic of a book closing.
Title: Parliamentary Education Office. Copyright Commonwealth of Australia 2015.
Parliamentary Education Office logo
Parliamentary Education Office website: www.peo.gov.au
Narrator: More than 100 years on, the Constitution continues to guide how Australia is governed and how laws are made. It is the framework for our democracy.
Black holes don't erase information, scientists say
The "information loss paradox" in black holes—a problem that has plagued physics for nearly 40 years—may not exist.
Shred a document, and you can piece it back together. Burn a book, and you could theoretically do the same. But send information into a black hole, and it's lost forever.
That's what some physicists have argued for years: That black holes are the ultimate vaults, entities that suck in information and then evaporate without leaving behind any clues as to what they once contained.
But new research shows that this perspective may not be correct.
"According to our work, information isn't lost once it enters a black hole," says Dejan Stojkovic, PhD, associate professor of physics at the University at Buffalo. "It doesn't just disappear."
Stojkovic's new study, "Radiation from a Collapsing Object is Manifestly Unitary," appeared on March 17 in Physical Review Letters, with UB PhD student Anshul Saini as co-author.
The paper outlines how interactions between particles emitted by a black hole can reveal information about what lies within, such as characteristics of the object that formed the black hole to begin with, and characteristics of the matter and energy drawn inside.
This is an important discovery, Stojkovic says, because even physicists who believed information was not lost in black holes have struggled to show, mathematically, how this happens. His new paper presents explicit calculations demonstrating how information is preserved, he says.
The research marks a significant step toward solving the "information loss paradox," a problem that has plagued physics for almost 40 years, since Stephen Hawking first proposed that black holes could radiate energy and evaporate over time. This posed a huge problem for the field of physics because it meant that information inside a black hole could be permanently lost when the black hole disappeared—a violation of quantum mechanics, which states that information must be conserved.
Information hidden in particle interactions
In the 1970s, Hawking proposed that black holes were capable of radiating particles, and that the energy lost through this process would cause the black holes to shrink and eventually disappear. Hawking further concluded that the particles emitted by a black hole would provide no clues about what lay inside, meaning that any information held within a black hole would be completely lost once the entity evaporated.
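For reference — these are Hawking's standard results, not calculations from the new paper — the temperature and evaporation time of a black hole of mass M are:

$$T_H = \frac{\hbar c^3}{8\pi G M k_B}, \qquad t_{\mathrm{evap}} \approx \frac{5120\,\pi\,G^2 M^3}{\hbar c^4}$$

Lighter black holes are hotter and evaporate faster; a black hole with the Sun's mass would take roughly 10^67 years to disappear.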
Though Hawking later said he was wrong and that information could escape from black holes, the subject of whether and how it's possible to recover information from a black hole has remained a topic of debate.
Stojkovic and Saini's new paper helps to clarify the story.
Instead of looking only at the particles a black hole emits, the study also takes into account the subtle interactions between the particles. By doing so, the research finds that it is possible for an observer standing outside of a black hole to recover information about what lies within.
Interactions between particles can range from gravitational attraction to the exchange of mediators, such as photons. Such "correlations" have long been known to exist, but many scientists discounted them as unimportant in the past.
"These correlations were often ignored in related calculations since they were thought to be small and not capable of making a significant difference," Stojkovic says. "Our explicit calculations show that though the correlations start off very small, they grow in time and become large enough to change the outcome." |
A variety of free printable preschool writing patterns to help young children develop early writing skills.
Use these writing pattern worksheets to encourage your young preschoolers to write across a page from left to right, while developing their fine motor skills at the same time. Young children should use thick wax crayons, thick triangular pencils or felt-tipped markers, which are easy for their little fingers to grip. Always ensure that they are gripping the writing tool correctly as a bad grip is hard to correct later. Once the children have completed each pattern row, you could show them how to decorate them – for example, with the zig-zag pattern, add ice-creams with cherries on top or draw a face underneath a pointed hat for each zig-zag. Teach left-handed children to write and color correctly from the start and avoid having to correct bad writing habits later. Click on the image below to download the pdf file containing the writing patterns. You will need to have Adobe Reader installed in order to read the file.
This page lists various printable alphabet pages, writing patterns, numbers, printable math activities, coloring pages, Bible memory verses and more! There is a list of free printable Bible coloring pictures to accompany children's Bible stories and children's Bible lessons. Other pages contain the numbers 1-10 to trace and copy, and there is also a blank lined page for extra practice in the set of printables.
A Parent's Guide to Phonics
- A Parent's Guide to Preparing Your Child for School
- A Parent's Guide to Helping Your Child Do Well in School
- A Parent's Guide: Hey Mom, I Want To Be An Engineer!
- The Parent's Guide to Every Grade
- A Parent's Guide to the Common Core Standards
- A Parent's Guide to Twitter
What exactly is phonics? Many parents hear the term when their child is learning to read, but a lot of them have no clue what teachers are talking about--let alone how they might be able to help.
Plain and simple, phonics is the relationship between letters and sounds in language. Phonic instruction usually starts in kindergarten, with kids learning CVC (consonant-vowel-consonant) words by the end of the year. Words such as hat, cat, and pot are all CVC words.
But CVC is just the beginning. The bulk of phonics instruction is done in first grade. Students usually learn consonant blends (gl-, tr-, cr-), consonant digraphs (sh-, ch-, qu-), short vowels, final e, long vowels, r-controlled vowels, and diphthongs. From second grade on up, phonics continues to build fluency and teach multisyllabic words.
Interest piqued, but don't know where to begin? Here are some basic phonics rules to keep in mind as your child learns to read:
- Short vowels: When there is a single vowel in a short word or syllable, the vowel usually makes a short sound. Short vowels usually appear at the beginning of the word or between two consonants. Examples of short vowels are found in the words: cat, pig, bus.
- Long vowel: When a short word or syllable ends with a vowel/consonant/e combination, the vowel is usually long and the "e" at the end of the word is silent (this rule doesn't apply in all cases). Examples of vowel/consonant/e combinations are: bake, side, role. Here’s another rule with long vowels: when a word or syllable has a single vowel and it appears at the end of the word or syllable, the vowel usually makes the long sound. Examples are: no, she.
- Consonant blends: When two or three consonants are blended together, each consonant sound should be heard in the blend. Some examples of consonant blends are: black, grab, stop.
- Consonant digraphs: A combination of two consonants that together represent a new sound. Examples of consonant digraphs are: shop, chin, photo.
- R-controlled vowels: When a vowel is followed by the letter "r," the vowel does not make the long or short sound but is considered "r-controlled." Examples are: bird, corn, nurse.
- Vowel diphthongs: The term "vowel diphthong" refers to the blending of two vowel sounds – both vowel sounds are usually heard and they make a gliding sound. Examples include: moon, saw, mouth.
Phonics are the building blocks to reading. And while they’re not always intuitive, once you know the rules, they can help quite a bit. So learn the basics. Not only will you be helping your child, but you’ll finally understand what the teacher is talking about!
The Estonian language is the official language of Estonia, spoken by approximately 1.1 million people there and by thousands more in émigré communities, and it preserves numerous ancient expressions. Estonian is a Finno-Ugric language that strongly resembles Finnish and is more distantly related to Hungarian. Though Estonian is not directly related to German, Russian, Latvian, or Swedish, the influence of these languages is still evident: many Estonian words are of German origin, while others derive from Russian, Latin, Greek, or English.
The majority of Estonian speakers live in the Northern European country of Estonia. Among the language's distinctive features are its three degrees of phoneme length: short, long, and overlong. The difference between long and overlong is a matter of syllable stress involving pitch as well as duration; in written Estonian, however, the distinction between long and overlong is not marked.
The earliest examples of written Estonian are names, phrases, and fragments of sentences that can be traced back to 13th-century chronicles. The first Estonian book, a Lutheran manuscript, was printed in 1525; it is believed that the book never reached readers because it was destroyed immediately after publication. The first Estonian textbook appeared in 1637, and in 1869 Ferdinand Johann Wiedemann published a comprehensive Estonian-German dictionary containing grammar rules and other features that describe the Estonian language.
The dialects of Estonia are divided into two groups: the northern dialects (associated with the capital, Tallinn) and the southern dialects (associated with Tartu).
A group of researchers at the Indian Institute of Technology (IIT) have discovered a pigment in a species of berries (Syzygium cumini) from the indigenous jamun tree that absorbs large amounts of sunlight. The scientists have been experimenting with the pigment (anthocyanin) and believe using it for mass production could make solar panels far less expensive, which might help provide a lasting solution to India’s chronic power shortages.
The anthocyanin pigment is also found in common fruits like blueberries, cranberries, cherries, and raspberries.
Most of today's solar cells are made of either single crystal silicon or polycrystalline silicon, with the former being more efficient but also more expensive. The anthocyanin pigment is being used for dye-sensitised solar cells (DSSCs), which reduces costs and increases light absorption. The more efficiently a solar cell can absorb the photons striking it, the more electricity it can produce.
India is constantly grappling with power shortages, however the country is looking to increase its solar-power generation capacity from 10 GW to 100 GW by 2022, with a target of attracting $100 billion into the sector during that timeframe.
Learn more @ QUARTZ
Imagine that you haven’t seen a good friend in a month. In a telephone call, your friend tells you she would like to get together for dinner but can’t think of a restaurant to go to. So, you offer an idea.
Listen to a short conversation:
I’d love to have dinner on Friday but I’m not sure where.
How about we go to Chez Philip?
Great idea! I haven’t been there in over a year.
The phrase How about is one common way to make a friendly suggestion in English. To make a suggestion means to offer an idea or plan for someone to think about.
You probably already know a few ways to make suggestions in English, using words such as could or should.
But, on this Everyday Grammar program, we’ll talk about common phrases you can use for making friendly suggestions. We use many of these phrases in question form.
Let’s start by talking a little more about the phrase How about.
When you ask a question using How about, you are asking someone if they agree with what you are suggesting.
There are two structures for using this phrase. The first is:
How about + subject + simple verb form
Let’s listen to the first example again:
How about we go to Chez Philip?
In this example, the subject is we, and the verb is go.
The second structure for using How about is:
How about + gerund
How about going to Chez Philip?
In this example, the subject is still we, although it is not directly stated. Instead, the subject is implied. And, going is the gerund form of the verb go.
You can also use How about + gerund to make a suggestion for an action that does not involve you. For example:
How about starting a group for English learners?
The phrase What about is very similar to How about.
You can replace the phrasing How about + gerund with What about + gerund to express the same meaning. For example:
What about going to Chez Philip?
However, What about + gerund is less common in American English than in other types of English.
Something that English learners will notice is that native English speakers often leave out both the subject and verb when we use What about and How about to make suggestions. Listen:
How about Chez Philip?
What about Chez Philip?
Why don’t is very similar to How about and What about. The difference here is that we ask the question using the negative don’t.
The structure is: Why don’t + subject + simple verb form
Let’s hear our example again, but this time with Why don’t:
Why don’t we go to Chez Philip?
Why not also uses the negative not. But this phrase is a little different from the other phrases. It is usually used to make more general suggestions. Advertisers often use Why not for selling products or services.
The structure is Why not + simple verb form
Why not treat yourself to a Caribbean holiday?
In this example, the subject is you, but it is not directly stated. And, the verb is treat.
Using Shall is another way to make a suggestion. However, it sounds a lot more formal and is more common in British English than American English.
The structure is: shall + subject + simple verb form
Shall we go to Chez Philip?
One thing to note when using Shall to make suggestions: it is only used with the subjects I and we. We would not say "Shall you…?" to offer an idea.
Sometimes, suggestions are expressed in statements instead of questions, such as with the phrase Let’s.
Let’s is a contraction for the words let us. It is used to tell someone what you want to do with them.
The structure is Let’s + simple verb form
Let’s go to Chez Philip!
In this sentence, the subject is us.
So, how do you respond to friendly suggestions? You can either accept or decline.
A few phrases for accepting a suggestion are:
That’s a good/great idea!
That sounds good/great.
Thanks! I’d love to.
A few phrases for declining a suggestion include:
That’s a good idea but…
I’m not sure.
When you decline a suggestion, you may want to then politely suggest something else. For example:
I’m not sure. Chez Philip is not my favorite. How about Fearless Farmers?
Making and responding to suggestions in English takes practice. But it’s one of the more fun things you can do with a classmate, friend or family member.
You can also practice in our comments section. Try using a few of the phrases you learned today to make a friendly suggestion.
I’m Alice Bryant.
Alice Bryant wrote this story for VOA Learning English. George Grow was the editor.
Words in This Story
conversation – n. an informal talk involving two people or a small group of people
gerund – n. an English noun formed from a verb by adding -ing
imply – v. to express something without saying or showing it plainly
negative – n. a word or statement that means “no” or that expresses a denial or refusal
formal – adj. suitable for serious or official speech and writing
prefer – v. to like something better than something else
contraction – n. the act or process of making something smaller or of becoming smaller
decline – v. to say no to something in a polite way
polite – adj. having or showing good manners or respect for other people
practice – v. to do something again and again in order to become better at it
If our eyes were just a little different, we might see green when looking up at our sun. Because they aren't, we'll never see a green star no matter where we look. However, alien eyes, when they come to our solar system — or if they're already here! — might be able to see green twinkling, and here's why.
Ever heard of a blackbody? It's a body that is able to take in all kinds of radiated energy, and then spit it all back out. It radiates energy at all frequencies, but it radiates those frequencies in a very specific way, depending on its temperature. An extremely hot blackbody would radiate most of its energy in the ultraviolet spectrum, less of it in the visible light spectrum, and even less in the infrared spectrum. A coolish blackbody would radiate most of its energy in the infrared spectrum, with slightly less radiating in the visible light spectrum and even less in the ultraviolet. One between the two would radiate most of its energy in the visible light section of the spectrum, while radiating less at either end.
All blackbodies spread their energy along a curve with the same characteristic shape: it peaks at a specific frequency, drops off sharply at frequencies above the peak, and tapers off more gently at frequencies below it. A blackbody can be at any temperature, and each temperature produces its own version of this curve, with a relatively gently-sloped bell at the peak.
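That curve is Planck's law, and it is easy to compute directly. A short sketch in base R, with the physical constants in SI units:

```r
# Planck's law for spectral radiance, in SI units.
h  <- 6.626e-34   # Planck constant
cc <- 2.998e8     # speed of light
kB <- 1.381e-23   # Boltzmann constant
planck <- function(lambda, T) {
  (2 * h * cc^2 / lambda^5) / expm1(h * cc / (lambda * kB * T))
}

lambda <- seq(100e-9, 2000e-9, length.out = 500)   # 100-2000 nm
plot(lambda * 1e9, planck(lambda, 5800), type = "l",
     xlab = "wavelength (nm)", ylab = "spectral radiance")  # Sun-like star
lines(lambda * 1e9, planck(lambda, 4000), lty = 2)          # cooler star
2.898e-3 / 5800   # Wien's law: the 5800 K peak is ~500 nm, i.e. green
```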
That shouldn't be a problem for stargazers. There should be plenty of stars that peak right in the green area of the visible spectrum. And there are. The problem is, our eyes can't see them. Our eyes have cones that can see three different colors — blue, red, and green. Mixing those three basic colors together in different proportions gives us the many different colors we see. A combination of light from the three colors together lets us see white. Taking out most of the blue frequencies of light will let us see the orange, yellow, and red light. Taking out most of the red will let us see bluish shades of light. And taking out the green will let us see purple.
If you follow the blackbody emission curve for any of these, you can see that shifting it up so that it peaks at blue will let the curve taper off a great deal before it gets into the red spectrum, and so the blackbody will not emit much red light. As a result, we'll see bluish stars. Shifting the curve down so that it peaks at the red end of the spectrum will let the curve drop off steeply above it, so that the star will not emit a lot of blue light. The star looks orange or red. But the green peak is right in the middle of the spectrum. If you shift the curve so that it peaks right in the green section, you'll still get a lot of red light and a lot of blue light being emitted. Those lights combine with each other and we see white light, and not green.
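You can check this directly by integrating the same Planck curve over rough blue, green, and red wavelength bands for a star whose spectrum peaks in the green:

```r
# Integrate Planck's law over rough colour bands at T = 5800 K.
h  <- 6.626e-34; cc <- 2.998e8; kB <- 1.381e-23
planck <- function(lambda, T) (2 * h * cc^2 / lambda^5) / expm1(h * cc / (lambda * kB * T))
band <- function(T, lo, hi) integrate(planck, lo, hi, T = T)$value

blue  <- band(5800, 450e-9, 495e-9)
green <- band(5800, 495e-9, 570e-9)
red   <- band(5800, 620e-9, 750e-9)
round(c(blue, green, red) / green, 2)  # all of the same order of magnitude
```

The green band wins, but only barely — the red and blue contributions are large enough that the three cone signals blend to white.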
Now, it's possible that we'll see a star through a gas cloud that filters out red and blue light and lets us see it as green, but that will be much the same as seeing it through green lenses. The light will be, from our perspective, dyed green. But aliens might not be as limited, when it comes to light, as we are. If their eyes aren't as sensitive to red and blue as they are to green, or if they'll be able to shut off certain "cones" in their eyes at will, they might be able to see the greenness in stars.
An electroscope is an instrument used by scientists to measure the relative strength of an electric charge. A simplified version of an electroscope can be made easily and can be used to study and explore static electric charges.
Clear plastic cup
Modeling clay or plastic tape
What To Do
1. Make a small hole in the bottom of the plastic cup through which the paperclip will later be inserted. (Note: Try using a hot glue gun without any glue to melt a small hole, or heat the paperclip and push the end of it through the cup.)
2. Cut two strips of foil that measure roughly ¼ inch by 1 ½ inch.
3. Use the end of a paper clip to punch small holes in one end of each foil strip.
4. Unfold a paperclip so that it looks like a long J, and hang the foil strips, called leaves, on the curved end of the J.
5. Holding the cup upside-down, insert the straight part of the J paper clip through the hole in the cup, so the leaves hang inside the upside-down cup without touching the table or desk top. Secure the paperclip using modeling clay or plastic tape.
6. Roll some aluminum foil into a ball and place the ball on the top of the paperclip that is sticking out from the cup. The electroscope is complete and ready for use.
7. Charge a balloon by rubbing it with a piece of wool or fur, or by rubbing it in your hair.
8. Slowly bring the charged balloon near the foil ball on the electroscope and watch how the leaves react.
9. Move the balloon away and observe the leaves of the electroscope.
1. What made the leaves move? How? Why?
2. Does anything else coming near the foil ball on the electroscope have the same effect? What?
3. Do the leaves move more or less if the balloon is more charged? Why?
The leaves of the electroscope moved away from each other because they both acquired a negative charge and repelled each other. The negatively charged balloon coming near the foil repelled some of the electrons in the foil. Those electrons travel down the paper clip to the leaves, giving each of them extra electrons and thus a negative charge. Like charges repel, so the leaves moved away from each other.
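As background (not part of the original activity), the strength of that repulsion between two charges q1 and q2 separated by a distance r is given by Coulomb's law:

$$F = \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{r^2}$$

The force falls off with the square of the distance, which is why the leaves settle at a modest angle rather than flying apart.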
"Awesome Experiments in Electricity and Magnetism." Michael DiSpezio, Sterling Publishing Co.: New York, 1998, p. 62-63.
© S. Olesik, WOW Project, Ohio State University, 2001.
Author: Judith Rodby
Summary: Those looking for materials related to content area and cross-disciplinary reading may find this annotated bibliography useful. It is organized around three general categories of research and practice: 1) generalized reading strategies; 2) adapting/applying generalized reading strategies to specific content areas (math, science, history); and 3) content area-specific approaches that focus on genres, discourses, and identities implicit in the ways of knowing in subject areas and disciplines.
Original Date of Publication: June 29, 2010
In 1927 William Gray, then President of the International Reading Association, called for “every teacher to be a teacher of reading” based on the idea that reading was a single activity that all teachers should help students to master.
In the 1980s, Gray’s call was reinforced and echoed by the “psycholinguistic” work and influence of Frank Smith and Yetta and Kenneth Goodman. Their claim was clear—reading is one and only one thing: independent of context or subject matter, cognitive reading processes are consistent within themselves and across all readers.
This interest in cognitive processes led to important research on what “good readers do” and the strategies they employ to read. Ultimately this led to increased emphasis on teachers, all teachers, helping students to acquire these cognitive processes.
Reading in Different Contexts
Beginning in the 1990s the field of reading began to look at the particularities of context and situation. Do we employ the same processes or strategies across different situations, content areas, tasks? Or do we, in some senses, read differently for different contents?
Research in writing had been looking closely at context for many years, noting that writing is profoundly affected by context factors like audience, purpose, genre expectations, and social situation. Reading theorists began asking similar questions, questions such as How is reading in a biology class different from reading in a literature class?, or How do historians approach a text in contrast to the general reader?
Readers looking for materials to examine content area literacy will soon see that reading theorists are working on at least three fronts at once, and published materials and research will fall into three general categories:
- Research and practice texts that focus on generalized reading strategies (implying that all reading is the same)
- Research and practice texts that look at how generalized reading strategies are adapted or applied to specific content area texts
- Research and practice texts that explore content area-specific approaches that focus on the genres, discourses, and identities implicit in the ways of knowing in subject areas and disciplines.
Luckily, these latter two categories help establish a rich base for considering content area literacy at writing project sites. Sources drawn from the third category are particularly useful for speaking directly to the interests of content area teachers.
The sources listed below, primarily drawn from these latter categories, were recommended by NWP leaders as having been useful in their professional development and classroom work.
- Disciplinary, Content-Area Literacy: An Annotated Bibliography
- Disciplinary Literacy: Why It Matters and What We Should Do About It
Original Source: National Writing Project, https://www.nwp.org/cs/public/print/resource/3190
On Wednesday, astronomers across the globe will hold “six major press conferences” simultaneously to announce the first results of the Event Horizon Telescope (EHT), which was designed precisely for that purpose.
Of all the forces or objects in the Universe that we cannot see – including dark energy and dark matter – none has frustrated human curiosity so much as the invisible digestive systems that swallow stars like so many specks of dust.
“More than 50 years ago, scientists saw that there was something very bright at the center of our galaxy,” says Paul McNamara, an astrophysicist at the European Space Agency and an expert on black holes.
“It has a gravitational pull strong enough to make stars orbit around it very quickly – as fast as 20 years.”
To put that in perspective, our Solar System takes about 230 million years to circle the center of the Milky Way.
Eventually, astronomers speculated that these bright spots were in fact “black holes” – a term coined by American physicist John Archibald Wheeler in the mid-1960s – surrounded by a swirling band of white-hot gas and plasma.
NASA is sending samples of bacteria into low-Earth orbit in order to research methods of keeping astronauts healthy while they’re far away from home. The E. coli Anti-Microbial Satellite (EcAMSat) is set to investigate how well antibiotics can combat the bacteria while in space.
It’s thought that E. coli and other similar bacteria might be subject to stress when put under the conditions of microgravity. This would trigger their defense systems, making it more difficult for antibiotics to fight them off, not unlike the way that bacteria on Earth develop a resistance to such treatments – as such, this research might help improve our ability to produce such medicine for terrestrial use, also.
“If we find resistance is higher in microgravity, we can do something, because we’ll know the gene responsible for it, and be able to design countermeasures,” said A. C. Matin, the principal investigator on the EcAMSat project at Stanford University in California, according to a report from Space Daily. “If we are serious about the exploration of space, we need to know how human vital systems are influenced by microgravity.”
As astronauts go on longer missions to more distant destinations – for instance, the oft-touted hypothetical mission to Mars – it’s going to become increasingly important that they can have access to medicine off-world. EcAMSat will contribute to that capability.
The strains of E. coli that are being used as part of the project are responsible for urinary tract infections, which are among the various different ailments that can potentially affect astronauts in space. The results of the study will give mission planners more information about the proper dosage of medicine required to combat an infection.
EcAMSat is autonomous, and will operate independently once ground controllers and crew on the International Space Station collaborate to send it into orbit. Students at Santa Clara University will be tasked with monitoring its activity, handling mission operations, and receiving data.
The dormant E. coli will be awakened via a fluid packed with nutrients, as the temperature of their containers is adjusted to that of the human body. The samples will then be administered with different dosages of antibiotics. Two types of E. coli are set to be compared, one that bears a naturally occurring gene that helps it resist antibiotics, and another which does not.
The bacteria will be mixed with a blue dye that turns a deeper shade of pink depending on how many cells remain active and viable. If it remains blue, it means that the antibiotic dosage has killed off most of the cells. The experiment is set to last for 150 hours, at which point the data will be transmitted down to Earth via radio.
While the main priority is this research into the effectiveness of antibiotics in space, a successful voyage will also help demonstrate the capabilities of the tiny satellite, which is approximately the size of a shoebox.
“Though EcAMSat will only fly this once, many of its components may embark on a different mission: life detection in the solar system,” said Tony Ricco, Ames’ chief technologist for the mission, in a report published by NASA. “Using sensors and the microfluidics technology from EcAMSat, NASA is developing the technology needed to look for life on moons such as Enceladus and Europa – ocean worlds covered by icy crusts.”
R is an open-source programming language created to help with data visualization and statistical computing. This versatile language has a range of applications for cleaning, analyzing, and graphing data. Watching video tutorials online is a popular method of learning R programming. Videos allow learners to learn the basics of R without having to commit to an intensive course. This flexible learning format is accessible to busy individuals who must balance their R study with other life commitments. This article will cover a range of videos, including what they teach and where to find them.
What is R Programming?
R is a programming language that statisticians created for statistical data analytics. This popular language has a range of applications for performing statistical computing and creating data visualizations. It is often used by Data Scientists, Business Analysts, Data Analysts, and those working in academia or science for tasks specifically involving statistical analysis. R is currently available for free and can run on Windows and Mac OS, as well as a variety of UNIX platforms and related systems.
R provides users with a range of graphical and statistical techniques, such as time-series analysis, clustering, classification, and linear and nonlinear modeling. One of the benefits of working with R is that it simplifies the process of creating publication-quality plots, especially those that incorporate formulas or mathematical symbols. This versatile language includes a fully integrated suite of software tools, such as a data storage and handling facility, operations for performing calculations on arrays, an extensive, integrated set of data analytics tools, and graphical tools designed to analyze and visualize data.
Read more about what R programming is and why you should learn it.
What Can You Do with R Programming?
R was created primarily to help with graphics and statistical computations. This language can accomplish various tasks, from data storage to data analysis to generating statistical models. Of all the available programming languages, R is considered to be the one with the most tools devoted exclusively to statistics. This language can aid with descriptive statistics tasks, such as calculating standard deviations, and with fitting models such as linear regressions.
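Both of those tasks are one-liners in base R. A quick sketch using the built-in mtcars data set:

```r
# Descriptive statistics and a simple linear model in base R,
# using the built-in mtcars data set.
sd(mtcars$mpg)                        # standard deviation of fuel economy
model <- lm(mpg ~ wt, data = mtcars)  # regress mpg on car weight
summary(model)                        # coefficients, R-squared, p-values
```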
One of R’s most useful features is its ability to help users create customized data visualizations and dashboards. Some consider R’s ggplot2 package the best data visualization tool available. This package allows users to draw nearly any plot they can conceive. In addition, those who wish to take their data visualizations in R to the next level can combine ggplot2’s syntax with Plotly’s interactive features to make dashboards that are as engaging as they are interactive.
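As a concrete taste of ggplot2's syntax, here is a minimal sketch (assuming the ggplot2 package is installed); wrapping the result in plotly::ggplotly() is one way to make it interactive for a dashboard:

```r
library(ggplot2)

# Scatterplot of car weight vs. fuel economy, with a fitted line per cylinder count.
p <- ggplot(mtcars, aes(x = wt, y = mpg, colour = factor(cyl))) +
  geom_point(size = 3) +
  geom_smooth(method = "lm", se = FALSE) +
  labs(x = "Weight (1000 lbs)", y = "Miles per gallon", colour = "Cylinders")

p                      # draw the static plot
# plotly::ggplotly(p)  # optional interactive version
```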
R draws from various machine learning tools so that users can make accurate, data-driven predictions. Users can select from an array of machine learning models, which have applications for creating predictive models, such as movie recommendation systems and churn models. In addition, R users can automate reporting by using R Markdown documents. R Markdown offers a straightforward, accessible syntax to generate various reports, such as presentations, books, or other written documents. This helps R users easily communicate data analysis results with others.
Why Are Video Tutorials Helpful When Learning R Programming?
If you're interested in learning more about R programming but can't commit to a course that meets at regularly scheduled intervals, video tutorials are a great alternative. The following are just a few reasons why you might consider getting started learning R with online video content:
- Because videos are pre-recorded, they can be watched at any time of the day, from any location. They can also be paused for note-taking, rewound, and watched as often as necessary if you’re trying to master complex programming concepts. This flexible learning format is ideal for busy individuals who have to balance their R studies with full-time work or family commitments.
- Most R video content is relatively short, which makes it a manageable study option. Instead of investing dozens of hours into R study, learners can spend a few minutes on a video to learn a specific feature or concept before moving on to a different tutorial.
- For visual learners, R videos can provide an excellent way to learn this language. Those watching can follow along with video content in real-time, which can help with retention.
- Online video content is an affordable study option. Some are even available for free from top educational providers. This makes it a good alternative to certificate study or more rigorous, structured learning options, which can cost hundreds or even thousands of dollars.
- Although it may be challenging to master complex programming skills solely from online R tutorials, they provide an excellent resource for those interested in receiving a general overview of R or learning beginner-level programming practices.
Types of R Programming Videos
For those interested in learning R in the online environment, many videos on R and data science are available from top educational providers. Videos are available for free on programming, data science, and data analytics. Those new to working with R can explore short video content on specific programming functions and more general information on the fields of programming or data science. Because so many online video options are available, it’s easier than ever to find content that’s accessible, engaging, and will help you learn how to perform statistical computing tasks with R.
Noble Desktop’s free Intro to Data Science seminar provides plenty of content if you're interested in learning more about fundamental data science concepts. This webinar contains beginner-friendly information on how Python is used in data science, as well as a general overview of the field of data science. This video is a great first step on your data science learning path.
Additional videos are also available from other educational providers:
- Simplilearn has an entire course called R Programming for Beginners 2022 on YouTube. This helpful resource covers various R topics, such as working with variables, logical operators, vectors, functions, data manipulation, and visualization.
- YouTube video content is also offered by freeCodeCamp. Interested learners can watch the R Programming Tutorial, which provides more than two hours of video content on common R topics like installing R, working with RStudio, and performing data visualizations such as scatterplots, histograms, bar charts, and overlay plots.
- Intellipaat’s R Tutorial for Beginners takes learners from beginner to pro using hands-on demos, video content, and interview questions. By the end of the video, students will be familiar with R’s data types, objects, and operators and how to work with data mining, flow control, loops, and variables.
Why Learn R Programming?
Learning to program with R has many benefits and applications across data-related industries and professions. The language is free and open source, which makes it widely accessible; because it is released under the GNU General Public License, users are free to inspect, modify, and redistribute its code. R runs on a variety of operating systems, so it works equally well on Windows, Mac, or Linux-based systems. In addition, R offers an array of built-in functions and more than 10,000 packages, which help with data manipulation, statistical modeling, machine learning, and data visualization, among other tasks.
Another perk of working with R is its large community, which can assist with questions and other R-related topics. R users can seek advice from those who have completed projects like the one they are working on, or collaborate with others; there are even data science contests available to test users’ R skills. For data visualization, R offers packages like plotly, ggvis, and ggplot2, which are great resources for designing print-quality graphs. R’s Shiny package allows users to create their own dashboards and interactive web pages right from the R console; Shiny web apps can then be hosted on any cloud service, such as AWS.
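As a small, hedged illustration of the graphics workflow described above, the following sketch uses ggplot2 with R's built-in mtcars data set (the styling choices are purely illustrative):

```r
library(ggplot2)

# Scatterplot of fuel economy against weight for the built-in mtcars data,
# with axis labels and a title suitable for a polished report.
ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point(color = "steelblue", size = 2) +
  labs(x = "Weight (1,000 lbs)",
       y = "Miles per gallon",
       title = "Fuel economy vs. weight")
```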
Read more about why you should learn R programming.
How Difficult is It to Learn R Programming?
Because R is offered as a free software environment for graphical and statistical computing tasks, downloading and using the language costs nothing. R runs on various UNIX platforms, as well as macOS and Windows. If you want to download R, you can do so directly from The R Project for Statistical Computing’s website; you will need to select your preferred CRAN mirror before downloading.
If you’re interested in learning R, there are a few prerequisites to consider studying first that can help you pick up this programming language more easily and quickly. Because R is often used for statistical analysis, it’s essential to have a strong background in mathematics and statistics. In addition, since R also has applications for data visualization, it’s helpful to be familiar with basic visualization options, such as working with plots and graphs. Some people who learn R also find it helpful to understand fundamental analytics skills and practices so that it will be easier to spot and use the patterns that emerge in data. You may also consider learning basic programming concepts before studying R.
R has a reputation for being challenging to learn. Because its syntax differs from that of most other popular programming languages, such as Python, R code can be hard to read at first. In addition, core operations, such as naming, selecting, and renaming variables, tend to be more awkward in R than in other languages. Those with a background in other programming languages, or who have worked in data science before, will likely find R easier to learn than complete novices. Even some experienced Data Scientists struggle with R’s numerous GUIs, extensive commands, and inconsistent function names. Like any skill, though, the more time you spend becoming familiar with R’s rules, the easier the language is to work with.
Read about how difficult it is to learn R programming.
Learn R Programming with Hands-on Training at Noble Desktop
Noble Desktop has several excellent learning options for those new to R, as well as courses for more advanced programmers interested in mastering complex R skills. Noble’s Data Analytics with R Bootcamp is an immersive class designed to take participants from the basics of coding to a portfolio showcasing their experience working with R. Those enrolled receive expert instruction and can retake the class for up to one year to brush up on course materials.
In addition to the variety of programming courses Noble teaches, this top educational provider also offers several in-person and live online data analytics classes. Noble’s Data Analytics Technologies Bootcamp is a beginner-friendly course that prepares students to work with core data analytics tools like SQL, Excel, and Tableau. A certificate in data analytics is also available for those interested in becoming a Business Analyst or Data Analyst; this rigorous learning opportunity teaches students to perform data analysis, statistical analysis, and data visualization, and to work with relational databases. All students receive one-on-one mentoring to support their learning.
“By the grammar of a language is meant either the relations born by the words of a sentence and by sentences themselves one to another, or the systematized exposition of these.”
— Topic sentence of the Grammar article, Encyclopædia Britannica, 1911 edition
A topic sentence, also known as a focus sentence, encapsulates or organizes an entire paragraph. Although topic sentences may appear anywhere in a paragraph, in academic essays they often appear at the beginning. The topic sentence acts as a kind of summary, offering the reader an insightful preview of the writer’s main ideas for the following paragraph. More than a mere summary, however, a topic sentence often makes a claim or offers an insight directly or indirectly related to the thesis. It adds cohesion to a paper and helps organize ideas, both within the paragraph and across the body of work at large. Because the topic sentence encapsulates the idea of the paragraph, serving as a sub-thesis, it remains general enough to cover the support given in the body paragraph while being more specific than the thesis of the paper.
By definition a complex sentence is one that has a main clause which could stand alone and a dependent clause which cannot by itself be a sentence. Using a complex sentence is a great way to refer to the content of the paragraph above (dependent clause) and then bring in the content of the new paragraph (the independent clause). Here is a typical example:
While Representative Paul Ryan is staunch in his conservative ideology, he is also a pragmatist.
The opening dependent clause refers to the previous paragraph, which evidently presented Paul Ryan’s conservative ideology. As suggested by the main clause, which forms the second half of the sentence, the new paragraph will address how he may compromise that ideology to reach practical solutions in the real world of politics.
Questions at the beginning of new paragraphs can make great topic sentences which both remind the reader of what was in the previous paragraph and announce that something new is about to be introduced. Consider this example of a question for a topic sentence:
But will the current budget cuts be enough to balance the school district’s budget?
This question refers to the previous paragraph, yes, but it introduces the content for the new paragraph – how the budget cuts may not in fact be enough to balance the budget.
Bridge sentences work a bit like questions: they remind the reader of what went before without specifically stating the content that is to come. They only hint that something new is about to be introduced. Example:
But there may be more to this issue than first thought.
Pivot topic sentences will come somewhere in the middle of a paragraph, and usually announce that the content will be changing in a different direction. These are often used when there are two differing opinions about something or when two "experts" are being quoted or referred to that may have a different opinion or approach to something. A paragraph may begin something like this:
Kübler-Ross and Kessler have identified five stages of grief – denial, anger, bargaining, depression, and acceptance. And they have provided a detailed explanation of the symptoms and behaviors of each of these stages, so that those experiencing grief may identify which stage they are in at any given time and develop strategies, with the help of their therapists, to move through those stages more effectively. Since their original work, however, a number of other psychologists have developed different models of the grieving process that call into question some of Kübler-Ross and Kessler’s contentions….
(rest of paragraph to follow).
The first part of this paragraph addresses Kübler-Ross and Kessler; the second part will obviously address another opinion. The topic sentence – the one beginning "Since their original work, however" – marks the pivot point in the paragraph. Pivot topic sentences will always contain some clue word, such as "yet," "sometimes," or "however."
Instruments and observing methods were restricted to positional measurements of celestial bodies, and this did not change through the Middle Ages. The view of the universe in those days was the geocentric system established by the Greek astronomer Ptolemy around 120 AD: a sphere carrying the fixed stars rotates daily around the spherically shaped Earth, with the Sun, Moon, and planets being guided around Earth by a complicated machinery of epicycles; many had even forgotten about the Earth's spherical shape.
The events that brought astronomy to the state of modern science were (a) the introduction of the heliocentric system, and (b) the invention of the telescope around 1600.
After Copernicus, the Danish astronomer Tycho Brahe (1546-1601) proposed a hybrid model in which the Moon and Sun orbit the Earth while the other planets move around the Sun, still requiring epicycles for an accurate description of their orbits. Strangely, he kept the idea that the sky and all planets circle a static Earth daily, and came into conflict with Nikolaus Baer, who held that the Earth rotates. Tycho also established the nature of comets as objects of translunar space rather than atmospheric phenomena, as Aristotle had postulated, by measuring for one comet a lower distance limit of several times the lunar distance; he also observed a supernova in 1572, thus proving that the stellar skies are not as unchangeable as people had previously believed.
The German astronomer Johannes Kepler (1571-1630) used Brahe's Mars observations to establish that planets move on elliptical orbits around the Sun, and derived his three laws of planetary motion:
- The planets move on elliptical orbits with the Sun at one focus.
- The line joining a planet and the Sun sweeps out equal areas in equal times.
- The squares of the planets' orbital periods are proportional to the cubes of their mean distances (semimajor axes) from the Sun.
It was finally left to Galileo to give evidence for the heliocentric model with his telescopic discoveries of the moons of Jupiter and the phases of Venus. However, his advocacy of the Copernican system brought him into serious trouble with the Roman Inquisition, and the Church authorities kept Ptolemy's old geocentric system as their doctrine for a long time.
The first rigorous proof of the Earth's motion around the Sun came finally over a century later in 1729, when James Bradley discovered the aberration of light from the stars, a small apparent displacement caused by the combination of Earth's motion with the finite velocity of light (which had to be discovered previously, see below). The other predicted effect, stellar parallaxes, had to wait for their discovery until 1838, when Friedrich Wilhelm Bessel discovered the parallax of star 61 Cygni.
In the same year, 1610, Nicholas-Claude Peiresc (1580-1637) discovered the Orion Nebula M42 around the star Theta Orionis. Simon Marius, who had independently discovered the four bright Jovian moons at about the same time as Galileo, and who gave them their names, found and described the Andromeda "Nebula" M31 in 1612 (this was actually an independent rediscovery, as it had long before been found visually by Al Sufi in 964 AD).
As mentioned, Johannes Kepler had proposed another telescope type, consisting of two convex lenses, published in 1611; such an instrument was first constructed by Christopher Scheiner between 1613 and 1617. The Keplerian telescope became the dominant design of all major post-17th-century refractors.
The first reflecting telescope was constructed by Isaac Newton in 1668, based partially on a design created in 1663 by James Gregory (1638-75); one motivation was the intention to overcome chromatic aberration. In 1672, Jacques Cassegrain (1652-1712, also known as Guillaume or N. Cassegrain) proposed the telescope type named after him, but probably never constructed any; the first known Cassegrain telescope was built by James Short (1710-68). Other designs, or telescope types, were proposed about that time, such as a first idea for a schiefspiegler telescope by a Pater Zahn in 1685, but gained no importance then.
In 1733, Chester Moore Hall invented the achromatic lens system by joining a crown glass lens and a flint glass lens, which allowed for minimizing chromatic aberration. John Dollond (1706-61) and others began to produce fine quality refractors with these achromatic objective lenses in 1757, while his eldest son, Peter Dollond (1730-1820), developed the achromatic triplet lens in 1765, placing convex lenses of crown glass on either side of a biconcave flint glass lens.
William Herschel (1738-1822) invented his own kind of telescope, using only one tilted mirror, around 1780; he built a number of large telescopes on this principle, including a 48-inch instrument constructed in 1789.
F = G * m1 * m2 / r^2
This is Newton's law of gravitation, which formed the foundation for treating not only the two-body problem of a planet moving around the Sun, but also the many-body problem. On this theoretical ground, a number of famous mathematicians developed celestial mechanics into a sophisticated science during the 18th and 19th centuries, among them Leonhard Euler (1707-1783) from Switzerland, and Joseph Lagrange (1736-1813) and Pierre Simon Laplace (1749-1827) from France.
Studying the motion of Jupiter's moons, Ole (or Olaus) Roemer found in 1675 that they are observed in slightly deviating positions from what theory predicts, as the distance of Earth and Jupiter varies. He concluded that light was propagating with finite velocity.
The study of motion of solar system bodies was further stimulated by proving Edmond Halley's (1656-1742) prediction of the return of comet Halley in 1758 through Johann Georg Palitsch's (1723-1788) rediscovery, and other comet observations, the discovery of planet Uranus by William Herschel (1738-1822) in 1781 [a prediscovery observation of Uranus had been made by Flamsteed in 1690], and the discoveries of the first minor planets, the first being Ceres discovered in 1801 by Giuseppe Piazzi (1746-1826). Methods for determining orbits from few observations were developed by Carl Friedrich Gauss (1777-1855) and Wilhelm Olbers (1758-1840).
Celestial mechanics achieved its ultimate fame with the discovery of the planet Neptune in 1846 by Johann Gottfried Galle (1812-1910) and Heinrich d'Arrest (1822-1875), after mathematical predictions by Urbain Leverrier (1811-1877) in France and John Couch Adams (1819-1892) in England; Neptune had been seen but not recognized in prediscovery observations by Galileo in 1612, and by Challis in 1845 when checking Adams' predictions.
Evidence that the stellar sky is not fixed and unchanging came from the discovery of "new" and variable stars, and from Edmond Halley's discovery of the proper motions of stars in 1718.
The search for stellar parallaxes (and thus distance determinations) was long unsuccessful, because the parallaxes are so small (and the distances so large). While looking for them, James Bradley (1693-1762) discovered the aberration of light in 1725-26 (published 1729) from observations of the star Gamma Draconis (Eltanin), and in 1748 the Earth's nutation, a small wobble of Earth's axis caused by the Moon, with a period of 18.6 years. Bradley correctly gave an upper limit of 1 arc second for the stellar parallax, and thus a lower limit of 1 parsec (3.26 light years) for the distance of this star. The great observer William Herschel was likewise unsuccessful in this quest all his life, and it was left to Friedrich Wilhelm Bessel to finally find the parallax of 0.3 arc seconds, and thus the distance of 11.1 light years, for 61 Cygni in 1838 (the nearest star, Alpha Centauri, is at 4.3 light years); Bessel had selected this star for its large proper motion of 5.21 arc seconds per year (still the fifth-largest known). Almost simultaneously, Wilhelm Struve in Pulkovo found the parallax of 0.12 arc seconds for Alpha Lyrae (Vega, at 27 light years), and Thomas Henderson at the Cape Observatory that of Alpha Centauri (0.745 arc seconds).
Special types of stars had also been detected: binary and multiple stars as well as variables. Giovanni Battista Riccioli of Bologna discovered the nature of Mizar (Zeta Ursae Majoris) as a double star in 1650. In 1656, Christiaan Huygens found that the star Theta Orionis, in the Orion Nebula M42, was actually a group of stars; he discovered three, while the fourth Trapezium star was found in 1673 by Abbe Jean Picard (according to de Mairan), and independently by Huygens in 1684. Robert Hooke discovered Gamma Arietis in 1664 or 1665. Next, in the southern hemisphere, Alpha Crucis (1685, by Father Fontenay at the Cape of Good Hope) and Alpha Centauri (1689, by Father Richaud from Pondicherry, India) were identified as double. In 1718, Gamma Virginis was found to be double, and in 1719, James Bradley found the companion of Castor (Alpha Geminorum). A first catalog of 80 entries was compiled by Christian Mayer in 1779 and published in 1781 in Bode's "Jahrbuch für 1784," compiled with an 8-foot mural quadrant at powers of 60 to 80. Truly systematic research was begun in 1779 by William Herschel, who already listed 269 double stars in his early 1782 catalog, and about 700 in his 1785 catalog; he extended this number in later publications.
While suddenly appearing "new stars" (novae and supernovae) had been occasionally recorded through the centuries by various cultures, it was only Tycho's supernova of 1572 and Kepler's of 1604, as well as the nova-like outburst of P Cygni in 1600, discovered by W.J. Blaeu, that became generally known to western astronomers. Variable stars of other types were then discovered, namely Mira (Omicron Ceti) in 1596 by David Fabricius (1564-1617) and Algol (Beta Persei) around 1669 by Geminiano Montanari (1632-87), though ancient naming suggests that the ancients had already noted, and were alarmed by, Algol's variability: it was called Ras Al Ghul or "Demon's Head" by the Arabs and Rosh ha Satan or "Satan's Head" by the Hebrews. Another nova occurred in Vulpecula in 1670; Edmond Halley discovered the variability of the peculiar star Eta Carinae in 1677, Gottfried Kirch that of Chi Cygni in 1687, and J.-D. Maraldi that of R Hydrae in 1704, making a total of 9 variable stars known in 1781 (in addition, John Flamsteed had perhaps seen, but not noticed, the supernova that created Cassiopeia A in 1667).
Roth lists the number of known variables as follows: 12 by 1786, 18 by 1844, 175 by 1890, 393 by 1896, 4,000 by 1912, 22,650 by 1970, and 28,450 by 1983.
Besides stars, star clusters and "nebulae" (all appearing as nebulous patches in the small telescopes of 17th and early 18th century observers) can be found in the sky; these are nowadays summarized under the term Deep Sky Objects. As described in more detail in the history of the discovery of the Deepsky objects, some few of these objects had been known since ancient times, but most of them have been discovered only with the aid of telescopes. Notable firsts:
William Herschel was the first to attempt a physical model of the stellar universe on observational foundations, and to that end invented the method of stellar statistics, deriving a first model of the Milky Way as an island universe (or galaxy). Previously, Johann Lambert (1728-77), Thomas Wright (1711-86), and Immanuel Kant (1724-1804) had hypothesized, on religious and philosophical grounds, that the Milky Way might be a thin, flat system of stars, presumably a disk, and that some "nebulae" might be other systems of the same kind (however, all their objects are really part of our Galaxy, mostly globular clusters). Herschel also determined the motion of the solar system with respect to the neighboring stars with remarkably good accuracy, and supposed that other "milky ways" should exist in the universe, among them the nearest, the "Andromeda Nebula" M31. However, he significantly underestimated both the size of our Galaxy and the distance to M31, which he assumed to be at 2,000 times the distance of Sirius, and most of his other "milky way" candidates were nebulae within our Galaxy.
In the 18th and 19th centuries, better instruments allowed the compilation of more accurate and larger catalogs. A milestone was the Bonner Durchmusterung, created 1852-59 under Friedrich Wilhelm Argelander (1799-1875), which contains positions and magnitudes for 320,000 stars. This catalog was extended southward by the Cordoba Durchmusterung, compiled 1885-1892. Other important catalogs compiled visually include the Harvard Revised Photometry and the Potsdamer Durchmusterung, both published in 1907.
The pioneering work of photographic photometry was Karl Schwarzschild's (1873-1916) Göttinger Aktinometrie, compiled 1904-1908.
When spectroscopy came up, a first classification of 316 stars was published by the Italian Father Angelo Secchi (1818-1878) in 1867. A more comprehensive compilation of spectral classifications was the Henry Draper Catalogue, published 1918-24 at Harvard Observatory and containing data for 225,300 stars.
In 1818, Joseph Fraunhofer (1787-1826) was the first to take a good spectrum of the Sun, discovering 576 dark lines in it; he labelled the more prominent lines with the letters A to K. He later found that the light from the Moon and planets shows the same spectral features as the solar spectrum, and that the spectra of stars differ from it. He also developed the diffraction grating; one of his gratings had 3,625 lines per centimeter.
In 1832, David Brewster showed that cold gases produce dark absorption lines in continuous spectra. In 1847, John W. Draper found that hot solids emit light in continuous spectra, while hot gases produce line spectra. In 1859, Gustav Robert Kirchhoff (1824-87) and Robert Bunsen (1811-99) discovered that each chemical element (and compound) shows a characteristic spectrum of lines, which lie at the same wavelengths in emission and absorption spectra. Thus, the chemical composition of a light source (including celestial bodies) can be determined by spectral analysis; Kirchhoff published a study of the chemical constitution of the Sun in 1859.
Anders Jonas Ångström (1818-74) published his map of the solar spectrum, with identification of the lines corresponding to chemical elements, in 1863.
In 1864, the British amateur William Huggins (1824-1910) published his investigations of the spectra of stars and nebulae (thereby finding the gaseous nature of diffuse and planetary nebulae). The same year, Giovanni Battista Donati showed that comet spectra contain emission lines. The first spectrogram (photograph of a spectrum) of a star, Vega (Alpha Lyrae), was obtained in 1872 by the American amateur Henry Draper (1837-82).
Christian Doppler (1803-53) had discovered that moving bodies show shifted spectral lines, so that radial velocities can be determined spectroscopically with high accuracy. William Huggins pointed out in 1868 that, because of this effect, the spectral lines of moving celestial objects should appear shifted. The first measurements of this effect were obtained in 1888 by Hermann Carl Vogel (1841-1907).
Of the early spectral classifications schemes, that of Edward Charles Pickering (1846-1919) and Annie Cannon (1863-1941), used in their Henry Draper Catalogue, was finally adopted by the IAU.
The power of photography for every branch of astronomy was quickly demonstrated; early pioneering work was done by Isaac Roberts, Edward Emerson Barnard (1857-1923) and Max Wolf (1863-1932) especially for the Milky Way, star clusters, and nebulae.
Telescope optics was notably improved by Fraunhofer, who perfected the achromatic objective in 1824, leading to the construction of ever larger refractors, up to the 102-cm Yerkes instrument.
Reflector technology was significantly improved by the invention of glass mirrors by Steinheil, who built a 10 cm reflector in 1857, followed by Foucault's 33-cm and Lassell's 60-cm glass mirrors. Almost all big telescopes of the 20th century are reflectors with glass mirrors. The first telescope to exceed Lord Rosse's Leviathan of 1845 in aperture was the 100-inch Mount Wilson telescope, constructed in 1917, followed by the Palomar 200-inch in 1948, and the only modestly successful 6.1-meter Zelenchukskaya telescope in 1976.
In 1845, William Parsons, third Earl of Rosse (1800-67), discovered the spiral pattern of M51, and later of M99 and 13 other "nebulae," which have since been known as "spiral nebulae."
The essential event marking the discovery of gaseous nebula came when William Huggins observed their spectra in 1864 and found them to be emission line spectra. Now there was a simple and unique criterion distinguishing them from star clusters, which like the stars composing them, show a continuous spectrum (with overlaid absorption and sometimes emission lines). Spiral "nebulae", however, show continuous spectra like stars.
It had been known since Herschel that the Milky Way forms a system of stars, of which our Sun is one. Since Kant and Herschel, it had been speculated that there might be other similar stellar systems; some believed Rosse's spiral nebulae could be candidates, and by 1900 Easton had proposed a model of the Milky Way itself as a spiral nebula. Another faction of astronomers, including the astrophotographer Isaac Roberts with his interpretation of his photo of the Andromeda "nebula" M31, thought these nebulae were solar systems in formation (with the companions M32 and NGC 205 [M110] taken for forming Jovian planets).
Stellar statistical methods, invented by Herschel and improved by H. von Seeliger and J. Kapteyn, indicated that the Solar System was, presumably by chance, situated close to the center of the Milky Way Galaxy. In 1904, interstellar reddening and absorption were found; nevertheless, it was long believed to be only a minor effect.
In 1912, Vesto M. Slipher of Lowell Observatory discovered the nature of the nebulae in the Pleiades star cluster M45 as reflection nebulae. In 1914, he found that the spiral and some elliptical "nebulae" are moving at very high radial velocities, so that their membership in the Milky Way became questionable, and in 1915 he determined the rotational velocity of the edge-on "nebula" M104 to be about 300 km/s. The view that spiral "nebulae" might be galaxies like our Milky Way was stressed by Heber D. Curtis of Lick Observatory, on the basis of nova observations and because absorption could explain why spirals appear to avoid the galactic plane; it was opposed in particular by Adriaan van Maanen (1884-1946), who erroneously believed he had found internal proper motions in spirals, which would have indicated observable rotation.
In 1912, Henrietta Leavitt found the period-luminosity relation of Cepheid variables in the Magellanic Clouds. Using this relation, Harlow Shapley in 1918 determined distances within the Milky Way, and in particular to the globular clusters, which he found centered around a location in Sagittarius: he concluded that the center of the Galaxy should be located there, with the solar system lying in an outer region of the Milky Way. However, as he significantly underestimated the influence of interstellar absorption, he overestimated the size of the Milky Way by a factor of about 3.
In 1924, Edwin Hubble resolved the outer part of the Andromeda "Nebula" M31 into stars and found novae and Cepheid variables, thus establishing its nature as an external star system or galaxy.
In 1926, Bertil Lindblad and Jan Oort developed the theory of kinematics and dynamics of the Milky Way Galaxy.
In 1929, Hubble derived his distance - redshift relation for galaxies, indicating the expansion of the universe.
In 1930, Robert Julius Trumpler (1886-1956) of Lick Observatory found from investigations of open clusters that interstellar absorption had been significantly underestimated, and that the Milky Way Galaxy was correspondingly smaller. In 1937, the first interstellar molecules (CH) were found as absorption lines.
In 1943, Carl Seyfert discovered that certain galaxies (now called Seyfert Galaxies) have "active" nuclei with peculiar nonthermal spectra. In 1944, Walter Baade discovered that the stellar population in different regions of galaxies varies and there are two different stellar populations: Young Population I in spiral arms and irregular galaxies, and old Population II stars in elliptical (and lenticular) galaxies, globular clusters, and the bulges and nuclei of spiral galaxies.
In 1951, the 21-cm radio radiation of neutral hydrogen was discovered. Observations of the Milky Way in this wavelength provided first direct evidence of the spiral structure of our Galaxy.
In 1952, Baade found that two classes of Cepheids exist: Type I Cepheids ("classical" Delta Cephei stars), which are members of Population I, and Type II Cepheids (W Virginis stars), which are 4 to 5 times fainter. This discovery implied that the intergalactic distance scale had to be revised, moving the galaxies to more than double their previous distances and thus removing discrepancies between the size of the Milky Way and that of external galaxies. Since then, the distance scale has been subject to minor modifications on various occasions, most recently due to the revision of Cepheid distances by the astrometric satellite Hipparcos in early 1997.
In 1963, the first quasar was discovered by Maarten Schmidt.
In 1931, K.G. Jansky discovered radio radiation from the Milky Way. In 1939, G. Reber found this radiation concentrated within the galactic plane and toward the galactic center. In 1942, J.S. Hey and J. Southward found the first extragalactic radio radiation.
Individual radio sources were identified in the early 1950s, and the first radio galaxies in 1954.
With the advent of space missions, astronomy became possible in those parts of the electromagnetic spectrum for which Earth's atmosphere is not transparent. In 1960, X-rays from the solar corona were observed for the first time by an Aerobee rocket. In 1965, the first cosmic X-ray sources beyond the solar system were identified (E.T. Byram, H. Friedman, T.A. Chubb); the U.S. satellite Uhuru, launched in 1970, went on to discover some 160 X-ray sources.
In 1963, radio astronomers discovered the first quasar (M. Schmidt), and in 1967, the first pulsar (J. Bell and A. Hewish).
Since then, astronomical satellites have become a powerful tool for investigating astronomical objects in every spectral range; for more detail, see the list of orbiting astronomical observatories (astronomy satellites).
Archaeologists have been hunting for signs of the first inhabitants of the Americas at an area known as the Gault Site outside Killeen, Texas, ever since anthropologists discovered signs of early human occupation there in 1929. However, due to poor management of the land, looting, and even a commercial pay-to-dig operation, over the years, many of the upper layers have become irreparably damaged.
Then, in 1999, the University of Texas at Austin leased the land and began academic excavations. Digging deeper, archaeologists found 2.6 million artifacts at the site, including many from the Clovis culture, once believed to be the first people to settle North America. But the latest discoveries to be unearthed at Gault are arguably the most exciting to date: unknown projectile points, which push back human occupation of the area at least 2,500 years before the Clovis civilization, reports Kevin Wheeler at the Texas Standard.
The Clovis civilization derives its name from Clovis points, fluted projectile spear tips about four inches long that archaeologists digging near Clovis, New Mexico, first came across in the early 20th century. Since that time, the distinctive points have been located at some 1,500 sites around North America, with the oldest dating back 13,500 years. For decades, archaeologists believed this unique technology was created by the Clovis, the earliest inhabitants of the Americas. But recent studies have brought that chronology into question. Now, the discovery of these even older, previously unknown types of projectile points in Texas further muddies that timeline.
Researchers began a dedicated effort to search for any pre-Clovis artifacts at Gault in 2007, as more and more evidence mounted from other parts of the Americas that the Clovis people may not have been the first to settle the New World. By the time the project wrapped in 2013, researchers had located 150,000 tools, including hide scrapers, flint cores, and, most importantly, 11 small projectile points in the layers below the Clovis artifacts, which they are referring to as the Gault Assemblage. These were dated to between 16,000 and 20,000 years old using a technique called optically stimulated luminescence.
“These projectile points are particularly interesting because they don’t look like Clovis,” Thomas Williams of Texas State University and lead author of the study in Science Advances tells Wheeler. “And at the moment they appear to be unique in the archaeological record in the earliest part of prehistory in North America…It really is changing the paradigm that we currently consider for the earliest human occupation in the Americas.”
Williams tells Wheeler in a radio interview that it’s not possible to say where the early humans at Gault came from since no similar projectile points have been found elsewhere. This being said, because it would have taken that culture a while to migrate into present-day Texas, their ancestors likely peopled the Americas centuries or even thousands of years before the artifacts of the Gault Assemblage were created. That lends more support to the emerging ideas that instead of crossing a gap in Canadian ice sheets about 13,000 years ago, the earliest Americans peopled the hemisphere by following a coastal route down Alaska and the Pacific coast.
This Gault Assemblage isn't the only evidence that the Western Hemisphere has hosted human inhabitants for much longer than previously believed. In 2012, archaeologists discovered pre-Clovis projectile points in Oregon in a site known as Paisley Caves and in 2016 divers found stone tools and butchered mastodon bones in a Florida sinkhole dating back over 14,000 years.
But the most convincing—and controversial—site to date is Monte Verde in Chile, near the tip of South America. That site indicates that human hunter-gatherers lived in the area more than 15,000 years ago, meaning humans made it all the way down North and South America thousands of years before the emergence of the Clovis culture. That suggests there are probably lots of new projectile points still out there to be discovered, if we just dig deep enough. |
Russ Dahl, Opto Diode Corporation
From the Web Exclusive, "Seeing the True Colors of LEDs."
Light emitting diodes (LEDs) are semiconductors that convert electrical energy into light energy. The color of the emitted light depends on the semiconductor material and composition. LEDs are generally classified into three wavelength ranges: ultraviolet, visible, and infrared.
The wavelength range of commercially available LEDs with single-pixel-output power of at least 5mW is 360nm to 950nm. Each wavelength range is made from a specific semiconductor material family, regardless of the manufacturer.
Ultraviolet LEDs (UV LEDs): 320nm - 360nm
UV LEDs are rapidly becoming commercialized, specifically used for industrial curing applications and medical/biomedical uses. Until recently, the lower wavelength limitation for high-efficiency die was at 390nm. It has been moved to 360nm and further developments over the next few years will likely see the commercialization of high efficiency die in the 320nm region.
The material primarily used for UV LEDs is gallium nitride/aluminum gallium nitride (GaN/AlGaN). At this time the technology does not yield high-power LEDs, and the market is unsettled as several companies move to improve their processes.
Near UV to Green LEDs: 395nm - 530nm
The material for this wavelength range of products is indium gallium nitride (InGaN). It is technically possible to make a wavelength anywhere between 395nm and 530nm. However, most large suppliers concentrate on creating blue (450nm- 475nm) LEDs for making white light with phosphors and green LEDs that fall into the 520nm - 530nm range for traffic signal green lighting.
Rapid advancements and improvements in efficiency are being made especially in the blue wavelength range, as the race to create ever brighter white illumination sources continues.
Yellow-Green to Red LEDs: 565nm - 645nm
Aluminum indium gallium phosphide (AlInGaP) is the semiconductor material used for this wavelength range. It is predominantly used for traffic signal yellow (590nm) and red (625nm) lighting. Lime green (or yellowish-green, 565nm) and orange (605nm) are also available from this technology, but they are somewhat limited. The technology is advancing rapidly on the red wavelength in particular, because of the growing commercial interest in making red-green-blue white lights.
It is interesting to note that neither the InGaN nor the AlInGaP technology is available as a pure green (555nm) emitter. Older, less efficient technologies do exist in this pure green region, but they are not considered efficient or bright. This is due largely to a lack of interest and/or demand from the marketplace, and therefore a lack of funding to develop alternative material technologies for this wavelength region.
Deep Red to Near Infrared (IRLEDs): 660nm - 900nm
There are many variations on device structure in this region, but all use a form of Aluminum Gallium Arsenide (AlGaAs) or Gallium Arsenide (GaAs) materials. There is still a push to increase the efficiency of these devices, but the increases are only incremental improvements. Applications include infrared (IR) remote controls, night vision illumination, industrial photocontrols and various medical applications (at 660nm - 680nm).
Theory of LED Operation
LEDs are semiconductor diodes that emit light when an electrical current is applied in the forward direction of the device. A voltage large enough for electrons to move across the depletion region must be applied; on the other side, each electron combines with a hole, and as this recombination occurs the electron releases its energy in the form of light. The result is an emitted photon.
The bandgap of the semiconductor determines the wavelength of the emitted light. Shorter wavelengths correspond to greater energy, and therefore higher-bandgap materials emit shorter wavelengths. Higher-bandgap materials also require higher voltages for conduction: short-wavelength UV-blue LEDs have a forward voltage of 3.5 volts, while near-IR LEDs have a forward voltage of 1.5 - 2.0 volts.
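These numbers follow from the photon-energy relation E = hc/λ, which in convenient units is roughly λ(nm) ≈ 1240 / E(eV). A minimal sketch, assuming approximate, illustrative bandgap values:

```r
# Approximate peak emission wavelength (nm) from bandgap energy (eV),
# using lambda = h*c / E, where h*c is about 1240 eV·nm.
bandgap_to_wavelength_nm <- function(e_gap_ev) 1240 / e_gap_ev

bandgap_to_wavelength_nm(2.75)  # ~451 nm, roughly an InGaN blue LED
bandgap_to_wavelength_nm(1.90)  # ~653 nm, roughly an AlInGaP red LED
```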
Wavelength Availability and Efficiency Considerations
High efficiency LEDs can be produced in any wavelength range, with one exception - the 535nm to 560nm range. The overriding factor as to whether or not a specific wavelength is commercially available has to do with market potential, demand, and industry-standard wavelengths. This is particularly pronounced in the 420nm - 460nm, 480nm - 520nm, and the 680nm - 800nm regions. Because there are no high-volume applications for these wavelength ranges, there are no high-volume manufacturers providing LED products for these ranges. It is possible, though, to find medium and/or small suppliers offering products to fill these particular wavelengths on a custom basis.
Each material technology has a spot within the wavelength range where it is most efficient. This point is very close to the middle of each range. As the doping level of the semiconductor increases or decreases from the optimal amount, efficiency suffers. That is why a blue LED has much greater output than green or near UV, amber has more than yellow-green, and near IR is better than 660nm. When given a choice, it is much better to design for the center of the range than at the edges. It is also easier to procure products if you are not operating at the edges of the material technology.
Supplying current and voltage to LEDs
While LEDs are semiconductors and need a minimum voltage to operate, they are still diodes and need to be operated in a current mode. There are two main ways to operate LEDs in DC mode. The easiest and most common is using a current limiting resistor (See Fig. 1). The disadvantage to this method is the high heat and power dissipation in the resistor. In order for the current to be stable over temperature changes and from device-to-device, the supply voltage should be much greater than the forward voltage of the LED.
In applications where the operating temperature range is narrow (less than 30°C) or the output of the LED is not critical, a simple circuit utilizing a current limiting resistor may be used as shown here:
Fig. 1 - The current value is found by applying the equation I = (Vcc - VF) / RL. To be absolutely certain of the current flow in the circuit, each LED's VF would have to be measured and the appropriate load resistor specified. In practical commercial applications, Vcc is designed to be much larger than VF, so that small changes in VF do not affect the overall current by a large amount. The negative aspect of this circuit is a large power loss through RL.
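To make the Fig. 1 arithmetic concrete, here is a minimal sketch; the supply voltage, forward voltage, and target current are illustrative values only:

```r
# Current-limiting resistor for the Fig. 1 circuit: I = (Vcc - VF) / RL.
vcc <- 12.0   # supply voltage in volts (illustrative)
vf  <- 2.0    # LED forward voltage in volts (illustrative)
i   <- 0.020  # target LED current in amps (20 mA, illustrative)

rl   <- (vcc - vf) / i  # required load resistor: 500 ohms
p_rl <- (vcc - vf) * i  # power dissipated in RL: 0.2 W
c(resistor_ohms = rl, resistor_watts = p_rl)
```

Note how much of the supply power is dissipated in RL rather than in the LED itself, which is exactly the drawback the text describes.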
A better way to drive the LED is with a constant current source (See Fig. 2). This circuit will provide the same current from device-to-device and over temperature shifts. It also has lower power dissipation than using a simple current limiting resistor.
Fig. 2 - An example of an accurate and stable circuit. This circuit is commonly referred to as a constant current source. Note that the supply current is determined by the supply voltage (Vcc) minus Vin divided by R1, (Vcc-Vin)/R1.
Commercial, off-the-shelf LED drivers are available from a number of different sources. Typically these operate using pulse width modulation (PWM) principles for brightness control.
Fig. 3. - Gardasoft Vision’s PP500 Series LED Lighting Controller
Pulsing LEDs in high current and/or high voltage mode for arrays in series-parallel configuration creates a unique set of problems. For the novice designer it is not practical to design a current-controlled pulse drive with the capability to deliver 5 amps and 20 volts. There are a few manufacturers of specialty equipment for pulsing LEDs (See Fig. 3), such as Gardasoft Vision (www.gardasoft.co.uk).
For more information, visit: Seeing the True Colors of LEDs |
"Ophiodea" from Ernst Haeckel's Kunstformen der Natur, 1904
Brittle star is the common name for any of the marine organisms in the echinoderm class Ophiuroidea, characterized by long, flexible, typically slender arms joined to a central body disk. They resemble the related starfish (sea stars), but with the central body disk sharply marked off from the arms and with the arms generally slender, among other differences. As members of Ophiuroidea, they also are known as ophiuroids. The name brittle stars reflects their ability to break off arms as a defense against predators, with the arms later regenerating.
Brittle stars may be more specifically identified with the members of the clade Ophiurida within Ophiuroidea, and known as ophiurids, while the clade Euryalida are commonly known as basket stars. This article will focus on the larger sense of brittle stars as members of Ophiuroidea.
There are some 1,500 species of brittle stars living today, and they are largely found in deep waters more than 500 meters (1,650 feet) down. They are an important part of benthic food chains, consuming detritus, plankton, worms, and small mollusks and crustaceans, while themselves being prey for bottom-feeding fish and crabs. For humans they hold little commercial value, and they are rarely seen, given that they inhabit deeper waters; still, they hold a fascination for human beings because of their beauty and unique behavior.
Brittle stars are echinoderms, which are marine invertebrates comprising the phylum Echinodermata and generally characterized by a hard, internal calcite skeleton, a water-vascular system, adhesive "tube feet," and five-rayed radial symmetry at some point in their lives. In addition to the brittle stars, this phylum includes the starfish, sand dollars, crinoids, sea urchins, and sea cucumbers.
Brittle stars comprise one of the classes within Echinodermata, the class Ophiuroidea. The ophiuroids have a central disc from which arms extend. Ophiuroids generally have five long, slender, whip-like arms, which extend in pentaradial symmetry and may reach up to 60 centimeters (two feet) in length on the largest specimens. They use these flexible arms to crawl across the sea floor. The tube feet that are common to echinoderms, and often used for movement in other echinoderms, primarily serve in brittle stars as tactile organs.
Many of the ophiuroids are rarely encountered in the relatively shallow depths normally visited by humans, but they are a diverse group.
Ophiuroidea contains two large clades: Ophiurida and Euryalida. Although both together can be considered brittle stars, the "true brittle stars" are members of Ophiurida, while Euryalida are known as basket stars. Many of the basket stars have characteristic many-branched arms.
Like all echinoderms, the Ophiuroidea possess a skeleton of calcium carbonate in the form of calcite. In ophiuroids, the calcite ossicles are fused to form armor plates, which are known collectively as the test.
Of all echinoderms, the Ophiuroidea may have the strongest tendency toward five-segment radial (pentaradial) symmetry. The body outline is similar to the Asteroidea (starfish or sea stars), in that ophiuroids have five arms joined to central body disk. However, in ophiuroids the central body disk is sharply marked off from the arms. The disk contains all of the viscera. That is, the internal organs of digestion and reproduction never enter the arms, as they do in the Asteroidea.
The brittle star's mouth is rimmed with five jaws, and serves for egestion as well as ingestion. Behind the jaws is a short esophagus and a large, blind stomach cavity which occupies much of the dorsal half of the disk. Ophiuroids have neither a head nor an anus. Digestion occurs within ten pouches or infolds of the stomach, which are essentially ceca but, unlike those of sea stars, do not extend into the arms. The stomach wall contains glandular hepatic cells.
The nervous system consists of a main nerve ring which runs around the central disk. At the base of each arm, the ring attaches to a radial nerve, which runs to the end of the limb. Ophiuroids have no eyes, as such. However, they have some ability to sense light through receptors in the epidermis. These are especially found at the ends of their arms, detecting light and retreating into crevices.
Gas exchange and excretion occur through cilia-lined sacs called bursae; each opens onto the interambulacral area (between the arm bases) of the oral (ventral) surface of the disc. Typically there are ten bursae, and each fits between two stomach digestive pouches.
Both the Ophiurida and Euryalida (the basket stars) have five long, slender, flexible, whip-like arms, up to 60 centimeters in length. They are supported by an internal skeleton of calcium carbonate plates referred to as vertebral ossicles. These "vertebrae" articulate by means of ball-and-socket joints and are controlled by muscles. They are essentially fused plates, corresponding to the parallel ambulacral plates found in sea stars and in five Paleozoic families of ophiuroids. In modern forms, the vertebrae lie along the median of the arm. The body and arms also bear calcite plates (ventral and dorsal) and delicate spines (lateral), which protect the vertebral column. The spines in ophiurids form a rigid border along the arm edges, whereas in euryalids they are transformed into downward-facing clubs or hooklets.
Euryalids are similar to ophiurids, albeit commonly larger, but their arms are forked and branched.
Ophiuroid podia generally function as sensory organs. They are not usually used for feeding, as in Asteroidea. In the Paleozoic era, brittle stars had open ambulacral grooves, but in modern forms these are turned inward.
In living ophiuroids, the vertebrae are linked by well-structured longitudinal muscles. Ophiurida move their arms horizontally, while Euryalida move them vertically; the latter have bigger vertebrae and smaller muscles. They are less spasmodic, but can coil their arms around objects, holding on even after death. These movement patterns are distinct to the taxa and help separate them. Ophiurida move quickly when disturbed: one arm presses ahead, while the other four act as two pairs of opposing levers, thrusting the body forward in a series of rapid jerks. Although adults do not use their tube feet for locomotion, very young stages use them as stilts, and they even serve as adhesive structures.
Brittle stars use their arms for locomotion. Unlike sea stars, they do not depend on tube feet, which in brittle stars are mere sensory tentacles without suction. Brittle stars move fairly rapidly by wriggling their arms, which are highly flexible and enable the animals to make either snake-like or rowing movements. Their movement has some similarities to that of animals with bilateral symmetry.
The vessels of the water vascular system end in tube feet. The water vascular system generally has one madreporite. Others, such as certain Euryalina, have one per arm on the aboral surface. Still other forms have no madreporite at all. Suckers and ampullae are absent from the tube feet.
The ophiuroids diverged in the Early Ordovician, about 500 million years ago. Today, ophiuroids can be found in all of the major marine provinces, from the poles to the tropics. In fact, crinoids, holothurians, and ophiuroids live at depths from 16 meters to 35 meters all over the world. Basket stars are usually confined to the deeper parts of this range. Ophiuroids are known even from abyssal (>6000 meter) depths. However, brittle stars are also common, if cryptic, members of reef communities, where they hide under rocks and even within other living organisms. A few ophiuroid species can even tolerate brackish water, an ability otherwise almost unknown among echinoderms.
Brittle stars are generally scavengers or detritivores, which are selective due to their inability to digest mass mud intake like sea stars. Small organic particles are moved into the mouth by the tube feet. Ophiuroids may also prey on plankton and small crustaceans, mollusks, and worms. Basket stars, in particular, may be capable of suspension feeding, using the mucus coating on their arms to trap plankton and bacteria. (They move around like starfish, and have tube feet.) Certain tube feet, derived from the ectoderm, can act as chemoreceptors.
Nonetheless, brittle stars consume small organisms if available. In large, crowded areas, brittle stars feed on suspended matter delivered by sea-floor currents. In basket stars, the arms are used to rhythmically sweep food to the mouth. Pectinura will consume beech pollen in the New Zealand fiords (since the trees there hang over the water), and Euryalida will cling to a coral branch and browse on the polyps of the reef.
The sexes of brittle stars are separate in most species, though a few, such as Amphipholis squamata, are hermaphroditic. The gonads, found only in the disc, open into pouches in the integument between the radii, called genital bursae; gametes are shed by way of the bursal sacs. Many species actually brood developing larvae in the bursae. The ophiuroid coelom is strongly reduced, particularly in comparison with other echinoderms. In a few species, the female carries a dwarf male that clings to her.
Brittle stars generally reach sexual maturity in two years, become full grown in three to four years, and live up to five years. Euryalida, such as Gorgonocephalus, may well live much longer.
Ophiuroids can readily regenerate lost arms or arm segments unless all arms are lost. Ophiuroids use this ability to escape predators, similar to how lizards deliberately shed (autotomize) the distal part of their tails to confuse pursuers.
Brittle stars live from the low-tide level downwards. Shallow-water species live among sponges, stones, or coral, or under the sand or mud with only their arms protruding. Deep-water species tend to live in or on the sea floor, or to adhere to corals or urchins.
The main parasites to enter the digestive tract or genitals are protozoans, while crustaceans, nematodes, trematodes, and polychaete annelids also parasitize brittle stars. Algal parasites like Coccomyxa ophiurae cause spinal malformation. Unlike in sea stars and sea urchins, annelids are not typical parasites.
Brittle stars are not food for humans. However, they are part of the food chain of commercially important species.
Birds of a Feather: Memory and the Clark's Nutcracker
How does intelligence arise? A graduate researcher seeks evidence from birds.
Do birds of a feather that flock together develop more complex cognitive behaviors than solitary birds? Or do other factors such as foraging pressure drive the development of cognitive behaviors as well? Psychology graduate student Jan Tornick’s research seeks to answer those questions.
“I’ve always been interested in animal behavior,” Tornick says, noting that she was one of those kids who would bring home bugs and snakes and mice, much to her mother’s dismay. “I am curious about how animals think and how their world compares to ours.”
For the past five years, Tornick has studied Clark’s nutcrackers under the supervision of her adviser Brett Gibson, associate professor of psychology. Tornick’s research compares Clark’s nutcrackers to other species, most notably, western scrub jays. Both species are in the crow family, but the Clark’s nutcracker is characterized as asocial while the western scrub jay, like most birds in the crow family, is very social.
Evolutionary psychologists think that the demands of living in a large, dynamic social group might drive a species’ need for complex cognitive behavior. “This ‘social intelligence hypothesis’ provides one possible explanation of why humans have become so intelligent,” says Tornick. “My research on Clark’s nutcrackers explores whether sociality, or other demands like foraging, drive the need for complex cognition.”
A landmark study showed that scrub jays take a lot of precautions with their food. When another scrub jay is watching, a scrub jay will store food in difficult-to-see places (far from an observer, behind a visual barrier, etc.). Often, if observed while hiding food, a scrub jay will later, when unobserved, move the food to another location. This “social caching” experiment supports the social intelligence hypothesis.
To compare the two species, Tornick replicated the “social caching” experiment with the Clark’s nutcrackers. Known for their spatial memories, nutcrackers hide up to 30,000 seeds, and they find 90 percent of them in order to survive long winters high up in the Rockies.
But why is it so important to study a bird in the crow family? In recent years, crows and their relatives have demonstrated astonishing cognitive skills; in many cases these are on par with those of apes and young human children. For example, it has been well documented that a related bird, the New Caledonian crow, manufactures, transports, and utilizes specialized tools. As Tornick explains in her dissertation proposal: “Darwin noted that there is evolutionary continuity among different species; and cognitive variations among different species are ‘differences in degree rather than differences in kind.’ To shed light on the differences and similarities between the human and the non-human mind, it is important to study a wide variety of species.”
Her initial findings on social caching suggest that Clark’s nutcrackers (like scrub jays) do tend to alter their behavior when observed by another bird while hiding their food, but not when they are alone. Tornick notes this suggests that group size may not be the sole force driving the evolution of social intelligence. She thinks that other factors, such as foraging strategy or genetic relatedness, might also be important.
Jan Tornick’s Clark’s nutcracker experiment explored the subject’s caching behaviors under different conditions. These conditions included the subject being observed or not observed by another bird, partitions, distance of the caching trays, and varied light.
In addition to social intelligence, Tornick’s dissertation research on Clark’s nutcrackers examines two other areas of cognition: reasoning ability and numerical intelligence.
Tornick is currently testing Clark’s nutcrackers for numerical cognition to see if they have developed enhanced competence as compared to more social birds in the crow family (and other animals). Since numerical competence is thought to reside in the same part of the brain as spatial memory, Tornick thought the nutcrackers might excel at number discrimination. This would suggest that foraging pressures were a factor in developing cognitive intelligence.
Her initial findings indicate that Clark’s nutcrackers are very good at distinguishing between different quantities of seeds. Given two piles of seeds to choose from, they almost always choose the larger, even when the quantities are close.
Tornick is in the process of concluding her research and writing her dissertation. A first-generation college student, Tornick has succeeded at academics from her first associate in applied science degree in architectural design to her graduate studies at UNH. Initially interested in zoology, she earned her master’s degree in 2007. Then, to pursue her interest in cognitive behavior, she earned a second master’s degree, this time in psychology, and now she has almost completed her doctoral studies.
Tornick’s work with Clark’s nutcrackers inspires both wonder and enthusiasm. “Their spatial memory is amazing,” she says. “I want to discover what else they might know.”
This article describes how two middle school teachers, one in social studies and one in language arts, incorporated the study of state history into their courses on African American history and language arts. Their goal was to teach about the historical struggle of African Americans for freedom and equality in Florida. They based the state history component of their course on the children's book African Americans in Florida,5 by Maxine D. Jones and Kevin McCarthy, an African American history professor and an Anglo-American English professor, respectively, at the University of Florida. Their book addresses the issues raised above by broadening the scope of traditional history and extending the study of African Americans beyond "Black History Month."
African Americans in Florida is an effective model for an exploration of state history that focuses on African-Americans, and can be used as a guide by teachers searching for similar books elsewhere. The book is helpful because of its breadth; it explores 400 years of African American history, beginning with the arrival of Ponce de Leon in 1513, and examines how major events in which African Americans were key figures-such as the Civil War and the Civil Rights Movement-played out in Florida. The book discusses major historical events unique to the state, and how these affected the lives of African Americans living there. And, it features numerous profiles of African Americans prominent as political, military, educational, business, and civic leaders.
The reading level of the book is well-suited to middle school students because of its accessible vocabulary level, short and concise chapters, and frequent illustrations and discussion questions within each chapter. It is rich in historical sources that invite student analysis: drawings, photographs, documents, letters, charts, graphs, maps, diagrams, advertisements, and excerpts from speeches, song lyrics, poems, and essays. Finally, and perhaps most importantly, the teachers selected the book because it depicts African Americans as people taking charge of their lives rather than acting as victims.
The substance of the book (themes, topics, sources, and stories) facilitates the articulation of goals for middle school social studies and language arts curricula. Certainly, the curriculum calls for students to understand the uniqueness of the African American experience in the United States and Florida, and the contributions African Americans have made to state and U.S. history. Moreover, both the social studies and language arts curricula call for all students at the school (themselves culturally and ethnically diverse and representative of a cross-section of North Central Florida's population) to develop a personal interpretation of what it means to be African American in this state and country. The curriculum also encompasses the efforts of African Americans to fully realize the rights and benefits of American citizenship. More broadly, it encourages students to become more committed to building racial harmony in their personal lives as well as in the larger society.
The Curriculum in the Classroom
The two teachers in our study each worked with the same group of about 25 middle school students in their respective classrooms. The language arts teacher began with a pretest survey to assess students' knowledge of why African American history had been a neglected topic in schools for many years, understanding of the meaning of "civil rights," and beliefs about racial equality in their state. Students were also asked to name three major African Americans in the history of their state, and one in the history of their community. After the pretest survey, the social studies teacher began the state history unit, using the book by Jones and McCarthy as the basis for the curriculum. After completing the unit on African Americans in their state's history, the students retook the survey.
The pretest revealed several interesting findings. First, and not surprisingly, students had great difficulty naming key figures in the history of their state. Second, responses to the question of why African American history had been neglected for so long focused on the ideas that African Americans had been slaves, that they were regarded as inferior to whites, that they had been segregated in many ways, and that "whites wrote the history books."
Students' understandings of the meaning of "civil rights" were somewhat vague; most defined the concept as "the rights that everyone should have" with little elaboration on what those rights were, or on the racial and historical context for their definitions. Students' beliefs about the current status of racial equality in their state were also vague and uncertain; many said they did not know how to determine whether racial equality had been achieved, and were unsure about the meaning of this concept. Clearly, the students needed to achieve an understanding of civil rights and equality, based on appropriate historical context, in order to come to meaningful conclusions about these concepts today.
In the social studies class, students read and discussed chapters from the book each day, then answered questions (both from the book and from their teacher) that focused on the development of historical understanding. The nature of the questions correlated with recent research on the teaching and learning of history and "historical thinking."6 This research emphasizes the importance of teaching students to use a variety of primary and secondary sources, to become familiar with the types of questions and methodologies that historians use, to empathize with historical figures, and to analyze and evaluate historical issues and problems. The questions, therefore, were of four general types:
Informational. For example, What did the Supreme Court decide in Plessy v. Ferguson? What colleges and universities were established in Florida for African-American students? Why did runaway slaves go to Florida?
Empathetic. For example, Why do you think Virgil Hawkins wanted to enter law school so badly? If you had lived at the time of the Civil War and had a lot of money, how could you have spent that money to help the ex-slaves? What are some ways to handle a racial slur when you hear one?
Methodological. This involved the description and analysis of, for example, slave advertisements, plantation records, a newspaper illustration of a captured cargo of slaves being unloaded in Key West, and a photograph of an African-American civil rights leader during a bus boycott.
Analytical/evaluative. For example, What are the arguments for and against the death penalty? Why are boycotts often successful? What parts of your town's history do you think should be in a play or novel someday? If you were going to establish a Black Archives Research Center and Museum, what would you include in the collection? What are the advantages and disadvantages of busing children across town to try to integrate schools? How should the federal government fight the Ku Klux Klan? Why was Brown v. Board of Education so significant in this state?
Holding a Mock Trial
In the language arts class, the teacher emphasized reading and dramatic role playing from the book. The focal point of the unit was the organization of a mock trial in which students were to determine whether civil rights for African Americans had been achieved in the state. An integral part of preparation for the mock trial was co-author Kevin McCarthy's visit to the classroom, during which he talked with students about his reasons for writing African Americans in Florida.7 After this visit, and further reading and analysis of the historical characters in the book, each student selected a role in the trial. They played judges, lawyers, educators, business people, farmers, advocates, and slaves, testifying for either the "prosecution" or the "defense." The roles of the judges and attorneys were also selected from among the famous figures described in the book. Because the book described these individuals vividly yet concisely, the students had ample material for use in adopting their characters' perspectives without being overwhelmed by extraneous details.
During the trial, "witnesses" were examined and cross-examined by both the "defense lawyers" and the "prosecution," while "judges" monitored their courtroom conduct. The students played their parts eagerly, moving beyond both the book and classroom discussions to their perceptions of how the characters portrayed might feel and think about the issues of the trial. In the trial, students heard "testimony" about the Rosewood Massacre, the 1923 burning of an African American community in which a number of residents were killed (and which is the subject of an upcoming film by John Singleton). They also heard "testimony" about Ku Klux Klan violence; about various cases of discrimination against African Americans in the legal, educational, and political systems; and about the views of African Americans who achieved prominence in different fields of endeavor.
The "jury" eventually ruled that the state had indeed made progress on the issue of civil rights and equality for African Americans. Although students demonstrated throughout the trial that they understood the injustices and brutalities of past treatment of African Americans in the state, they decided to give more weight to the state's record on civil rights in the more recent past. They also cited what they considered to be an improvement in people's attitudes about racial equality, and expressed feelings of optimism about the future of race relations.
Following the trial, the language arts teacher directed the students to write essays in which they reviewed the trial's highlights and explained why they agreed or disagreed with the verdict. Students also took the post-test survey described earlier, answering questions about their knowledge of African American leaders, their understanding of the term "civil rights," and their view of the status of racial equality in their state.
The post-test revealed, not surprisingly, that students could name key figures in the history of their state. In general, responses to the question of why African American history was so long neglected showed little change from before studying the unit, except for mention that "society is changing" and more opportunities exist to learn about African American history.
More importantly, students' understandings of the concept of "civil rights" were more refined after the unit, and more students attended to the historical and racial context of this term. Some of the responses were as follows:
Civil rights are when everybody is treated equally or when there are no rules that apply to any one race in particular.
Since all people are "created equal," there are rights to make sure this is true.
The rights a person has in their community and the power to use those rights.
The right to be treated equally and to have equal opportunity.
Civil rights are when every U. S. citizen, no matter if you are black or white, is free in the United States of America.
The freedom for any person to vote, to decide to live where they want to and not to be harmed.
Rights that ensure equal treatment among the races.
Finally, students' beliefs about the current status of racial equality in their state became more focused, with some disagreement over the extent to which such equality exists. Students were better able to explain why they believed as they did, even if they were ambivalent about the issue. Some of the responses included the following:
I believe that there is racial equality in Florida and there isn't. The government has tried to treat everyone equally, but the government isn't always the people. It's the people who make up Florida and some people haven't given other people racial equality.
No. Many people of every race have been discriminated against. Even though the KKK is not part of the government, the people in the KKK want to take away other people's civil rights and they still try to do it.
Yes, I believe there are civil rights in Florida, in schools, courts, restaurants, etc. In our schools today most everyone has a certain amount of friends who are black.
There are no restaurants or places that say "whites only" anymore. There are individual people who are racist, but there is equality in Florida.
Yes, because blacks are no longer second-class citizens here. They have all the rights that whites do.
I think there still isn't racial equality here. On the news or in the paper there is always an African American being blamed for something, just like before... Whites still lock their car doors when they see an African American person walk by, or the white clerks pay special attention to blacks that walk in the store. These things still happen to a certain extent.
No. Blacks are a minority, so since there are fewer of them it would not be very probable that there could be racial equality. We have come a great long way, though, and people are starting to be treated like people now. I still think there is a gap to be bridged. On the surface it may seem equal but deep beneath it isn't.
Clearly, the focus of this curriculum helped students in these middle-school classrooms gain understandings of civil rights and equality seen from an African American perspective. Such understandings are appropriate and necessary in American society, given the powerful historical context of the African American experience throughout the country. Additional study of the struggle for civil rights and equality by other groups in American society-groups based on other ethnicities, gender, or sexual orientation, for example-constitutes an important "next step" in order to provide students with an even fuller picture of what equality and civil rights mean for diverse groups in the United States today.
1. Shirley Koeller, "Multicultural Understanding Through Literature" in Social Education 60, No. 2 (1996), 99-103; Carol Booth Olson, ed., Reading, Thinking, and Writing About Multicultural Literature (Glenview, Ill.: Scott Foresman, 1996); James A. Banks, Multi-Ethnic Education: Theory and Practice, 3rd ed. (Boston: Allyn and Bacon, 1994); Maria A. Perez-Stable and Mary Hurlbut Cordier, Understanding American History Through Children's Literature: Instructional Units and Activities for Grades K-8 (Phoenix: Oryx Press, 1994); Violet J. Harris, Teaching Multicultural Literature in Grades K-8 (Norwood, MA: Christopher-Gordon Publishers, 1992); Linda Levstik, "Research Directions: Mediating Content Through Literary Texts" in Language Arts 67 (1990), 848-853; "The Relationship Between Historical Response and Narrative in a Sixth Grade Classroom" in Theory and Research in Social Education 14 (1986), 1-15; L. D. Labbo and S. L. Field, "Bookalogues: Celebrating Culturally Diverse Families" in Language Arts 72 (1995), 52-60.
2. Perez-Stable and Cordier, Understanding American History Through Children's Literature.
4. Harris, Teaching Multicultural Literature in Grades K-8.
5. Maxine D. Jones and Kevin McCarthy, African Americans in Florida (Sarasota, FL: Pineapple Press, 1993).
6. Linda Levstik and Christine Pappas, "Exploring the Development of Historical Understanding" in Journal of Research and Development in Education 21 (1987), 1-15; Matthew T. Downey and Linda Levstik, "Teaching and Learning in History: The Research Base" in Social Education 52 (1988), 336-342; Samuel S. Wineburg and Suzanne M. Wilson, "Models of Wisdom in the Teaching of History" in Phi Delta Kappan (September 1988), 50-58; David A. Welton, "Teaching History: A Perspective on Factual Details" in Social Education (October 1990), 348-350; Samuel S. Wineburg, "On the Reading of Historical Texts: Notes on the Breach Between School and Academy" in American Educational Research Journal 28 (1991), 495-519; David Kobrin, Ed Abbott, John Ellinwood, and David Horton, "Learning History by Doing History" in Social Education 57, 4 (1993), 39-41; Margaret G. McKeown and Isabel L. Beck, "Making Sense of Accounts of History: Why Young Students Don't and How They Might" in Gaea Leinhardt, ed., Teaching and Learning in History (Hillsdale, NJ: Lawrence Erlbaum Associates, 1994), 1-26.
About the Authors
Elizabeth Anne Yeager is Assistant Professor of Social Studies Education at the University of Florida.
Frans H. Doppen is a secondary school social studies teacher.
Elizabeth B. Otani is a secondary school language arts teacher at P. K. Yonge Developmental Research School, University of Florida.
Using recipes from her cookbook Let The Kids Cook!, as well as other recipes from around the world, Jo Anne offers an innovative interactive approach to cooking and healthy eating choices.
- Students are provided with a safe educational cooking environment.
- Students cook with fresh, whole foods.
- Organic foods, whole grains, a multitude of vegetables and fruits are emphasized.
- Students have fun in the kitchen while understanding the nutritional value of the foods they eat.
- Students learn how math (measuring their ingredients), science (evaporation when cooking with liquids) and art (food presentation) are incorporated into cooking.
- Students learn about portion control.
- Kitchen safety is emphasized at all times.
- Students learn to clean up after themselves.
- Students enjoy eating the healthy foods they prepare.
- Students are encouraged to practice their cooking experiences at home.
If everyone works together and supports each other’s efforts, healthy habits will eventually become routine and you’re more likely to succeed. You’ll be well on your way to improving your family’s health.
The Centers for Disease Control and Prevention states:
- Childhood obesity has more than tripled in the past 30 years.
- One out of every three children in the U.S. is obese!
- Overweight is defined as having excess body weight for a particular height from fat, muscle, bone, water, or a combination of these factors. Obesity is defined as having excess body fat.
- Overweight and obesity are the result of “caloric imbalance”—too few calories expended for the amount of calories consumed—and are affected by various genetic, behavioral, and environmental factors (a rough arithmetic sketch of this balance follows the list below).
- Obese youth are more likely to have risk factors for cardiovascular disease, such as high cholesterol or high blood pressure.
- Obese adolescents are more likely to have pre-diabetes, a condition in which blood glucose levels indicate a high risk for development of diabetes.
- Children and adolescents who are obese are at greater risk for bone and joint problems, sleep apnea, and social and psychological problems such as stigmatization and poor self-esteem.
- Healthy lifestyle habits, including healthy eating and physical activity, can lower the risk of becoming obese and developing related diseases.
- Schools provide opportunities for students to learn about and practice healthy eating and physical activity behaviors.
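The “caloric imbalance” item above is, at bottom, simple arithmetic. The sketch below illustrates it using the common rule of thumb of roughly 3,500 kcal per pound of body fat; that figure is a simplification assumed for this example, not a number taken from the CDC list.

```python
# Rough energy-balance arithmetic behind "caloric imbalance".
# Assumes the classic ~3,500 kcal-per-pound rule of thumb, which is a
# simplification used here for illustration only.

KCAL_PER_POUND_FAT = 3500.0

def weight_change_lbs(daily_surplus_kcal: float, days: int) -> float:
    """Estimated weight change from a steady daily calorie surplus
    (positive) or deficit (negative)."""
    return daily_surplus_kcal * days / KCAL_PER_POUND_FAT

# A modest 150 kcal/day surplus (about one sugary drink) over a year:
print(f"{weight_change_lbs(150, 365):.1f} lbs")  # ~15.6 lbs
```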
Marking 'Endangered Species Day' In the National Park System
Bald eagles and peregrine falcons, quite visible in such national parks as Grand Teton and Acadia, are readily recognized as species that have been able to reverse their slide towards extinction thanks in large part to the Endangered Species Act.
And across the National Park System there are a good number of success stories where species, be they plant, animal, bird, fish, or invertebrate, have been helped by the ESA. Some stories are substantial:
* Grizzly bears, at least in the greater Yellowstone ecosystem, seem to be rebounding in number;
* Gray wolves have successfully been returned to Yellowstone National Park and removed, albeit controversially, from the Endangered Species List in some parts of their range in the Northern Rockies; and
* Channel Island foxes, listed as endangered in 2004, have approached biological recovery on Santa Cruz, Santa Rosa, and San Miguel Islands within Channel Islands National Park, according to the Park Service. Four of the six subspecies of island fox declined by more than 90 percent by the late 1990s due to predation by golden eagles. Since 1999, 44 golden eagles have been live-captured and relocated to the mainland and the foxes have rebounded. Today, there are about 2,800 island foxes in the wild.
Other stories, however, are more subtle. Red-cockaded woodpeckers, found in Big Cypress National Preserve and Everglades National Park, to name just two parks, did not go extinct as was predicted in the 1960s, but the species also isn't about to be removed from the Endangered Species List.
Indeed, across the National Park System today, the fifth annual Endangered Species Day, there seem to be more stories whose happy endings have yet to be written than stories with happy endings.
The Florida panther in Big Cypress and Everglades continues to want for more habitat, and the Kemp's ridley sea turtles that nest on the sandy beaches of Padre Island National Seashore still wear the tag of the "most-endangered" sea turtle. (Although, this year's nesting season on Padre Island so far seems to be particularly robust.)
In Glacier National Park, if not in Yellowstone and Grand Teton as well, there are concerns that climate change could melt the snowfields that wolverines, a candidate species for listing under the Endangered Species Act, rely on for denning. Cape Hatteras National Seashore has turned into a legal battleground over the future of piping plovers, a threatened species along the Atlantic Coast. In Yellowstone, crews bombard critical habitat for the Canada lynx, a threatened species, with 105mm howitzer rounds as part of their avalanche control work.
And, of course, the search for the Ivory-billed woodpecker continues in the Southeast -- in 2010 the Park Service spent $10,000 on this effort.
According to the National Park Service, there currently are 1,969 threatened and endangered species listed worldwide. A recent report in Science said, "On average, 50 species of mammal, bird, and amphibian move closer to extinction each year..."
NatureServe, the non-profit organization that maintains what is said to be the most complete list of imperiled species in the United States, has said that the "actual number of known species threatened with extinction is at least ten times greater than the number protected under the Endangered Species Act."
Many units of the National Park System, their landscapes left largely in their natural conditions, are vital to the recovery of threatened and endangered species. Sometimes, though, even those protective boundaries can't produce miracles, as the futile efforts to bring the red wolf back to Great Smoky Mountains National Park demonstrate.
As of 2010, there were 421 species of federally listed plants and animals that occur or have historically occurred in the park system. The Park Service maintains a helpful web portal that allows you to search by park or species to pinpoint threatened and endangered species in the park system.
Some species that historically have been native to a park unit but are no longer found there still show up on lists of endangered and threatened species in the park as long as they haven't entirely vanished from the Earth. For instance, while brown (aka grizzly) bears once roamed Arches National Park, according to the Park Service, they no longer can be found in this unit of the park system but nevertheless show up listed as a threatened species in the park.
This website also lets you find out how much the Park Service spent on a particular threatened or endangered species during Fiscal 2010. Here's a look at some of the ESA cases in the National Park System:
* Amistad National Recreation Area. This NRA shows six species that either are threatened or endangered, and one candidate species.
* Arches National Park. Four species -- two of which are not currently found in the park -- are listed as either threatened or endangered, one species is a candidate for listing, and two have been delisted.
* Apostle Islands National Lakeshore. Piping plovers, the same species as those causing problems at Cape Hatteras, are considered an endangered species at the lakeshore, which also is home to bald eagles and has even recorded presence of a gray wolf.
* Big Cypress National Preserve. The preserve is the heart of the population of Florida panthers, a population thought to number 100-130 or so adults. Listed as an endangered species since the arrival of the ESA, the big cats are struggling to cope with dwindling habitat and genetic issues. Nine other species -- American alligator, American crocodile, Cape Sable seaside sparrow, eastern indigo snake, Everglade snail kite, red-cockaded woodpecker, West Indian manatee, and wood stork -- are listed either as threatened or endangered.
* Biscayne National Park. Seventeen species -- including five sea turtle species and the West Indian manatee -- are listed as either threatened or endangered, and there is one candidate species in the park.
* Buffalo National River. Five species -- three varieties of bat and two clam species -- are listed as endangered in this unit.
* Cape Lookout National Seashore. Seven species are listed as either threatened or endangered, including the American alligator, four sea turtle species, the piping plover, and seabeach amaranth.
* Chesapeake and Ohio Canal National Historic Park. Four species are listed as either endangered or threatened, including the shortnose sturgeon and Indiana bat.
* Delaware Water Gap National Recreation Area. There are three species considered either threatened or endangered here: the Indiana bat, bog turtle, and dwarf wedgemussel, a clam.
* Dinosaur National Monument. There are nine species considered either threatened or endangered at Dinosaur, including the brown bear, gray wolf, Mexican spotted owl, and black-footed ferret, all of which are no longer found in the monument but which once were.
* Everglades National Park. There are 21 species -- among them the Florida panther, American crocodile and American alligator, and five sea turtle species -- listed as either threatened or endangered, and six candidate species.
* Gauley River National Recreation Area. Two species -- the Indiana bat and the Virginia spiraea, a flowering plant -- are either endangered or threatened.
* Glacier Bay National Park. The humpback whale and Steller sea lion are both considered endangered in this park.
* Grand Canyon National Park. There are 14 species listed as either endangered or threatened here, including the humpback chub, desert tortoise, and southwestern willow flycatcher.
* Grand Teton National Park. Two species -- the brown bear and Canada lynx -- are listed as threatened in the park, while the wolverine is a candidate for listing and as such treated as threatened by the Park Service.
* Great Sand Dunes National Park. Five species are listed as either endangered or threatened, including the Canada lynx, gray wolf, and brown bear, and there's one candidate species.
* Guadalupe Mountains National Park. There are three species listed as either endangered or threatened: Mexican spotted owl, gray wolf, and brown bear.
* Haleakala National Park. Thirty-six species -- including the Maui parrotbill, Pacific Hawaiian damselfly, Hawaiian monk seal, and Haleakala silversword -- are listed as either threatened or endangered, and there are 14 candidate species.
* Hot Springs National Park. Two species no longer found here -- the red wolf and piping plover -- nevertheless are listed as endangered at this park.
* Isle Royale National Park. There are three species listed as either endangered or threatened: the gray wolf, the woodland caribou, and the Canada lynx.
* Katmai National Park and Preserve. There are two species listed as either endangered or threatened for this park, the Steller sea lion and the Steller's eider.
* Lake Mead National Recreation Area. This unit lists seven species as either endangered or threatened, including the desert tortoise and the razorback sucker.
* Lassen Volcanic National Park. The gray wolf is listed as endangered, although it hasn't been seen in the park in quite some time.
* Little River Canyon National Preserve. There are five species either threatened or endangered, including the blue shiner (a fish), gray bat, and Kral's water-plantain.
* Mesa Verde National Park. Six species are listed as either threatened or endangered, including the gray wolf, brown bear, and Mexican spotted owl.
* Mount Rainier National Park. There are nine species listed as threatened or endangered, including the gray wolf, bull trout, northern spotted owl, and Canada lynx.
* North Cascades National Park. There are eight threatened species in the park, none of which are endangered.
* Ozark National Scenic Riverways. There are five species listed as either threatened or endangered, including the gray wolf, red wolf, and red-cockaded woodpecker. The Ozark hellbender, a large salamander, is listed as a candidate species.
* Point Reyes National Seashore. Twenty-seven species are listed as either threatened or endangered, including the southern sea otter, tidewater goby, and short-tailed albatross.
* Redwood National Park. There are 21 species listed as either threatened or endangered, including the blue whale, two salmon species, the finback whale, and western snowy plover.
* Saguaro National Park. Six species are listed as either endangered or threatened, including the jaguar, Mexican spotted owl, and lesser long-nosed bat.
* Sequoia and Kings Canyon national parks. Five species are listed as threatened or endangered, including the brown bear and California condor, and there are two candidate species.
* Tallgrass Prairie National Preserve. The Topeka shiner, a fish, and the Eskimo curlew, a bird, are both listed as endangered.
* Virgin Islands National Park. Ten species are listed as either threatened or endangered, including three sea turtle species, the humpback whale, and West Indian manatee.
* Zion National Park. There are six threatened or endangered species listed, including the gray wolf, brown bear, southwestern willow flycatcher, and desert tortoise.
You can explore what threatened and/or endangered species can be found in your favorite park unit at this site.
Genetic or genomic imprinting is an important process in the mechanism of gene expression. Here genes are expressed based on the parent of origin. The phenomenon is important for development, but it can also cause serious genetic abnormalities.
Genetic or genomic imprinting is a mechanism by which gene expression depends on parental origin. The process does not follow the classical Mendelian inheritance rules.
A gene comes in two versions (alleles), one from your mum and one from your dad. Usually both of these are expressed. In genomic imprinting, however, only one of them is expressed, and it may be the one inherited from your father or from your mother. Of the 25,000-30,000 genes in the human body, it's been estimated that fewer than 1% are imprinted. In mammals, imprinted genes have been found to play a role in embryonic development, but also in the development of genetic abnormalities.
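To make the parent-of-origin idea concrete, here is a minimal toy model in Python. The gene names and the imprint map are invented for illustration; real imprinting involves epigenetic marks such as DNA methylation rather than a simple lookup table.

```python
# Toy model of parent-of-origin gene expression (illustrative only).
from dataclasses import dataclass

@dataclass
class Allele:
    gene: str
    parent: str  # "maternal" or "paternal"

# Hypothetical imprint map: which parental copy is silenced per gene.
# Genes absent from the map are not imprinted (both copies expressed).
SILENCED = {"GENE_A": "paternal", "GENE_B": "maternal"}

def expressed(alleles: list[Allele]) -> list[Allele]:
    """Return the alleles that escape silencing."""
    return [a for a in alleles if SILENCED.get(a.gene) != a.parent]

pair = [Allele("GENE_A", "maternal"), Allele("GENE_A", "paternal")]
print([a.parent for a in expressed(pair)])  # ['maternal']
```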
Examples of Genomic Imprinting
Perhaps the best way to understand genetic imprinting, its causes, and its consequences is to look at two excellent examples: Prader-Willi Syndrome (PWS) and Angelman Syndrome (AS).
According to the PWS web site, the expression of the gene or genes (known as q11-13) located on chromosome 15 is implicated in PWS. Although not fully understood, PWS is the result of one copy of such gene (or genes) being silenced due to genomic imprinting. Normally, a person will receive copies of the same gene (alleles) from both father and mother. In PWS, the copy inherited from the father is silenced due to a genomic imprinting process. In Angelman Syndrome (AS), by contrast, the copy inherited from the mother is lost. The “silencing” of either the father’s copy of the gene or the mother’s allele (due to genetic imprinting) results in two very different and distinct genetic disorders. PWS leads to obesity, short stature and extremities, and moderate mental retardation, while AS leads to hyperactivity, jerky movements, laughter outbursts, and severe mental retardation.
How Does Genetic Imprinting Work?
Genetic imprinting was discovered relatively recently, in 1984. Although much research has been devoted to genetic imprinting, the whole process is still not completely understood: why and how is one gene marked for expression over the other?
Ariel et al. (1995) used advanced biotech techniques to study the extent of methylation on the noncoding RNA gene called Xist in early mouse embryos. Their results showed that methylation of the Xist gene was an important part of the X-imprinting process. More recently, Ng et al. (2007) found that noncoding RNA plays a role in random imprinting of the X chromosome.
Supporting Appropriate Behavior in Students With Asperger Syndrome
Challenging behaviors are frequently the primary obstacle in supporting students with Asperger’s (AS).
While there are few published studies to direct educators toward the most effective behavioral approaches for these students, it seems evident (given the heterogeneity among these individuals) that effective behavioral support requires highly individualized practices. These practices must address the primary areas of difficulty: social understanding and interactions, pragmatic communication, managing anxiety, preferences for sameness and rules, and ritualistic behaviors. While the specific elements of a positive behavioral support program will vary from student to student, the following 10 steps go a long way toward assuring that schools are working to achieve the best outcomes on behalf of their students.
Use functional behavioral assessment as a process for determining the root of the problematic behavior and as the first step in designing a behavior support program.
The key outcomes of a comprehensive functional behavioral assessment should include a clear and unambiguous description of the problematic behavior(s); a description of situations most and least commonly associated with the occurrence of problematic behavior; and identification of the consequences that maintain behavior. By examining all aspects of the behavior, one can design a program leading to long-term behavioral change.
Too often, a behavior management program centers on discipline procedures aimed exclusively at eliminating problematic behavior. Such programs do not produce long-term behavioral change. An effective program should expand beyond consequence strategies (e.g., time out, loss of privileges) and focus on preventing the occurrence of problem behavior by teaching socially acceptable alternatives and creating positive learning environments.
Use antecedent and setting event strategies.
Antecedents are events that happen immediately before the problematic behavior. Setting events are situations or conditions that can enhance the possibility that a student may engage in a problematic behavior. For example, if a student is ill, tired or hungry, he may be less tolerant of schedule changes. By understanding settings events that can set the stage for problematic behaviors, changes can be made on those days when a student may not be performing at his best to prevent or reduce the likelihood of difficult situations and set the stage for learning more adaptive skills over time.
In schools, many antecedents may spark behavioral incidents. For example, many students with Asperger’s have difficulty with noisy, crowded environments. Therefore, the newly arrived high school freshman who becomes physically aggressive in the hallway during passing periods may need to leave class a minute or two early to avoid the congestion which provokes this behavior. Over time, the student may learn to negotiate the hallways simply by being more accustomed to the situation, or by being given specific instruction or support.
Key issues to address when discussing these types of strategies are:
- What can be done to eliminate the problem situation (e.g., the offending condition)?
- What can be done to modify the situation if the situation cannot be eliminated entirely?
- Will the strategy need to be permanent, or is it a temporary “fix” which allows the student (with support) to increase skills needed to manage the situation in the future?
Reprinted with the permission of the Autism Society.
Steam reforming is a method for producing hydrogen, carbon monoxide, or other useful products from hydrocarbon fuels such as natural gas. This is achieved in a processing device called a reformer which reacts steam at high temperature with the fossil fuel. The steam methane reformer is widely used in industry to make hydrogen. There is also interest in the development of much smaller units based on similar technology to produce hydrogen as a feedstock for fuel cells. Small-scale steam reforming units to supply fuel cells are currently the subject of research and development, typically involving the reforming of methanol, but other fuels are also being considered such as propane, gasoline, autogas, diesel fuel, and ethanol.
Steam reforming of natural gas - sometimes referred to as steam methane reforming (SMR) - is the most common method of producing commercial bulk hydrogen. Hydrogen is used in the industrial synthesis of ammonia and other chemicals. At high temperatures (700–1100 °C) and in the presence of a metal-based catalyst (nickel), steam reacts with methane to yield carbon monoxide and hydrogen:
- CH4 + H2O ⇌ CO + 3 H2
Additional hydrogen can then be recovered from the carbon monoxide by reacting it with steam in the water-gas shift reaction:
- CO + H2O ⇌ CO2 + H2
The United States produces nine million tons of hydrogen per year, mostly with steam reforming of natural gas. The worldwide ammonia production, using hydrogen derived from steam reforming, was 109 million metric tonnes in 2004.
This SMR process is quite different from and not to be confused with catalytic reforming of naphtha, an oil refinery process that also produces significant amounts of hydrogen along with high octane gasoline.
SMR is approximately 65–75% efficient.
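A quick back-of-the-envelope check shows where that figure comes from. The sketch below combines the two reactions above into the overall stoichiometry CH4 + 2 H2O → CO2 + 4 H2 and uses rounded literature values for molar masses and heating values; these property values are assumptions of the example, not figures from this article.

```python
# Back-of-the-envelope SMR yield and energy check (illustrative).

M_CH4, M_H2 = 16.04, 2.016      # molar masses, g/mol
LHV_CH4, LHV_H2 = 50.0, 120.0   # lower heating values, MJ/kg (rounded)

# Overall stoichiometry of reforming plus water-gas shift:
#   CH4 + 2 H2O -> CO2 + 4 H2
kg_h2_per_kg_ch4 = 4 * M_H2 / M_CH4
print(f"H2 yield: {kg_h2_per_kg_ch4:.3f} kg H2 per kg CH4")  # ~0.503

# The H2 carries more heating value than the CH4 feed because steam
# donates half the hydrogen atoms; the 65-75% plant efficiency quoted
# above reflects the extra methane burned to supply reaction heat.
energy_ratio = kg_h2_per_kg_ch4 * LHV_H2 / LHV_CH4
print(f"LHV energy ratio (H2 out / CH4 in): {energy_ratio:.2f}")  # ~1.21
```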
Reforming for combustion engines
Flared gas and vented VOCs are known problems in the offshore industry and in the on-shore oil and gas industry, since both emit unnecessary greenhouse gases into the atmosphere. Reforming for combustion engines utilizes steam reforming technology for converting waste gases into a source of energy.
Reforming for combustion engines is based on steam reforming, where non-methane hydrocarbons (NMHCs) of low-quality gases are converted to synthesis gas (H2 + CO) and finally to methane (CH4), carbon dioxide (CO2), and hydrogen (H2), thereby improving the fuel gas quality (methane number).
In contrast to conventional steam reforming, the process is operated at lower temperatures and with lower steam supply, allowing a high content of methane (CH4) in the produced fuel gas. The main reactions are:
- CnHm + n H2O ↔ (n + m/2) H2 + n CO
- CO + 3 H2 ↔ CH4 + H2O
- CO + H2O ↔ H2 + CO2
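As a sanity check on the generic reaction above, the short sketch below verifies that CnHm + n H2O → (n + m/2) H2 + n CO conserves carbon, hydrogen, and oxygen atoms; propane and butane are assumed here purely as example NMHCs.

```python
# Atom-balance check for the generic reforming reaction:
#   CnHm + n H2O -> (n + m/2) H2 + n CO

def balances(n: int, m: int) -> bool:
    # Atoms on the left: one CnHm molecule plus n water molecules.
    left = {"C": n, "H": m + 2 * n, "O": n}
    # Atoms on the right: (n + m/2) H2 molecules plus n CO molecules.
    right = {"C": n, "H": 2 * (n + m / 2), "O": n}
    return left == right

for name, (n, m) in {"propane": (3, 8), "butane": (4, 10)}.items():
    print(name, "balances:", balances(n, m))  # True for both
```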
Reforming for fuel cells
Advantages of reforming for supplying fuel cells
Steam reforming of gaseous hydrocarbons is seen as a potential way to provide fuel for fuel cells. The basic idea for vehicle on-board reforming is that, for example, a methanol tank and a steam reforming unit would replace the bulky pressurized hydrogen tanks that would otherwise be necessary. This might mitigate the distribution problems associated with hydrogen vehicles; however, the major market players have discarded on-board reforming as impractical, in part because of the high temperatures required (see above).
Disadvantages of reforming for supplying fuel cells
The reformer–fuel-cell system is still being researched, but in the near term, systems would continue to run on existing fuels such as natural gas, gasoline, or diesel. However, there is an active debate about whether using these fuels to make hydrogen is beneficial while global warming is an issue. Fossil fuel reforming does not eliminate carbon dioxide release into the atmosphere, but it reduces carbon dioxide emissions and nearly eliminates carbon monoxide emissions compared with the burning of conventional fuels, owing to increased efficiency and fuel cell characteristics. However, by turning the release of carbon dioxide into a point source rather than a distributed release, carbon capture and storage becomes a possibility, which would prevent the carbon dioxide's release to the atmosphere while adding to the cost of the process.
The cost of hydrogen production by reforming fossil fuels depends on the scale at which it is done, the capital cost of the reformer and the efficiency of the unit, so that whilst it may cost only a few dollars per kilogram of hydrogen at industrial scale, it could be more expensive at the smaller scale needed for fuel cells.
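One way to see the scale effect is with the classic "six-tenths rule" for process-plant capital cost, under which capex grows roughly as capacity to the 0.6 power. The sketch below is only illustrative: the scaling exponent and all reference numbers are assumptions chosen for the example, not data from this article.

```python
# Illustrative economy-of-scale sketch for reformer capital cost,
# assuming the six-tenths rule: capex ~ capacity**0.6.

def specific_capex(capacity_kg_day: float,
                   ref_capacity: float = 100_000.0,  # placeholder
                   ref_capex: float = 200e6,         # placeholder, $
                   exponent: float = 0.6) -> float:
    """Capital cost per (kg/day) of H2 capacity, scaled from a
    made-up reference plant."""
    capex = ref_capex * (capacity_kg_day / ref_capacity) ** exponent
    return capex / capacity_kg_day

for cap in (100_000, 1_000, 10):   # industrial, neighborhood, on-board
    print(f"{cap:>7} kg/day -> ${specific_capex(cap):,.0f} per kg/day")
# Specific cost rises steeply as the plant shrinks, which is why
# hydrogen that costs a few dollars per kg at industrial scale is far
# more expensive from small fuel-cell-scale reformers.
```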
Current challenges with reformers supplying fuel cells
There are several challenges associated with this technology:
- The reforming reaction takes place at high temperatures, making it slow to start up and requiring costly high temperature materials.
- Sulfur compounds in the fuel will poison certain catalysts, making it difficult to run this type of system from ordinary gasoline. Some new technologies have overcome this challenge with sulfur-tolerant catalysts.
- Low temperature polymer fuel cell membranes can be poisoned by the carbon monoxide (CO) produced by the reactor, making it necessary to include complex CO-removal systems. Solid oxide fuel cells (SOFC) and molten carbonate fuel cells (MCFC) do not have this problem, but operate at higher temperatures, slowing start-up time, and requiring costly materials and bulky insulation.
- The thermodynamic efficiency of the process is between 70% and 85% (LHV basis) depending on the purity of the hydrogen product.
- "Fossil fuel processor".
- Wyszynski, Miroslaw L.; Megaritis, Thanos; Lehrle, Roy S. (2001). Hydrogen from Exhaust Gas Fuel Reforming: Greener, Leaner and Smoother Engines (PDF) (Technical report). Future Power Systems Group, The University of Birmingham.
- "Commonly used fuel reforming today".
- Dresselhaus, Mildred S.; Buchanan, Michelle V. (2004). The Hydrogen Economy (PDF) (Technical report).
|last1=in Authors list (help)
- Nitrogen (Fixed)—Ammonia (PDF) (Report). United States Geological Survey. January 2005.
- "Hydrogen Production – Steam Methane Reforming (SMR)" (PDF), Hydrogen Fact Sheet, archived from the original (PDF) on 4 February 2006, retrieved 28 August 2014
- "Atmospheric Emissions".
- "Wärtsilä Launches GasReformer Product For Turning Oil Production Gas Into Energy". Marine Insight. 18 March 2013.
- "Method of operating a gas engine plant and fuel feeding system of a gas engine".
- Advantage of fossil fuel reforming
- Fossil fuel reforming not eliminating any carbon dioxides
- A realistic look at hydrogen price projections
- Cracking (chemistry)
- Hydrogen pinch
- Hydrogen technologies
- Industrial gas
- Lane hydrogen producer
- Reformer sponge iron cycle
- Timeline of hydrogen technologies
Blown across the land by wind or carried along by water and ice as the land continued to remake itself, loose sediments eventually compressed and cemented into rock and left messages in stone for us to decipher. Sediments include the mud at the bottom of streams, the sand dunes at the foot of the mountains, the chemical precipitates of salt in shallow seas, the beaches at the edge of inland seas, and the graveyards of tiny fossils at the bottom of tropical oceans. In these sedimentary layers, such as the Book Cliffs, the imprints of changing life forms in an ancient world are faithfully recorded.
In a geologic “ugly duckling becomes a swan” saga, sediments that were originally just gunk were later patiently sculpted by wind and water, pressed, and finally lifted to prominence as some of the state’s most imposing landmarks. The sandstones of Colorado National Monument, the reddish-brown siltstones and mudstones of Owl Canyon, and the Flatirons that flank Boulder are all sedimentary rocks. Other sedimentary deposits include massive limestone formations around Leadville, the evaporites of the Eagle Valley, chalks of the eastern plains, coals near Trinidad, oil shale in western Colorado, and the thick shale of eastern Colorado.
How do geologists know that a particular sedimentary rock formed in a particular environment? In 1795 a geologist developed a concept that is the next best thing to being there: “The present is the key to the past.” The idea is that by studying the characteristics found in modern depositional environments and comparing them to similar features found in ancient rocks, one can solve the mystery. For example, in Colorado, we can study the features of modern dunes in the Great Sand Dunes National Park and compare them to the ancient deposits of the 250 million-year-old Lyons sandstone found along the eastern flank of the Front Range. Using the same theory allows us to decipher the rock in the images below to be mudcracks, ripple marks, coarse-grained conglomerate, and raindrops.
Distribution of sedimentary rocks in Colorado:
The Upper Amazonian Rubber Boom and Indigenous Rights 1900-1925
Florida Gulf Coast University
During the first two decades of the twentieth century, Western powers invested in the newly created rubber industry in the upper Amazon. American, English, and Dutch companies needed rubber for their automobile products and invested in South American countries such as Brazil, Bolivia, Colombia, Ecuador, and Peru. This was presented as a civilizing endeavor that would bring economic development while improving and transforming the indigenous inhabitants of the region, and these ventures were therefore fervently supported by national states and local elites. However, the idea of civilization proved to be an ironic tragedy. The “civilizing companies” put the Indians in a system of debt peonage, treated them as slaves, tortured them, and massacred them.
By looking at the records and the testimonies of people who directly experienced the business environment of the Amazonian jungle, this paper will explore the impact of the development of the rubber industry in Peru and Ecuador in the early twentieth century. It will focus on the impact on indigenous communities that lived in the Amazon, and will analyze the complicity – and powerlessness – of the state in the genocide which occurred.
Setting the Scene
In the 1880s, South American countries such as Ecuador and Peru sent samples of their best rubber to the U.S. and England to attract foreign companies to invest in South American rubber. Rubber was a very popular raw material at the time, since it was in demand for use in multiple products.
Foreign companies therefore became interested in the South American wild rubber that grew in the Amazonian region, and they sent financial representatives to establish a trade system between the countries. Elite mestizos (people of Spanish-Indian descent) from the Amazonian region became very involved with the business and became traders and caucheros (rubber barons). Ecuadorians, Peruvians, Colombians, Frenchmen, and Italians became traders and established the rubber stations in the Upper Amazonian region. They used popular financial transaction methods to make money rapidly on their investments: they received foreign money on credit for the rubber, which was shipped overseas once it was harvested.
Location played one of the most important roles in the rubber industry boom. The Amazonian region where the rubber trees grew was a pristine region with very few outside influences. There were many diverse ethnicities among the indigenous groups who lived in the area as nomads, each with its own customs and dialects. The indigenous communities therefore were not united, did not communicate with each other, and as a result were not aware of the menace posed by the caucheros until they were confronted with their enslavement plans. Nevertheless, some indigenous groups were familiar with the missionaries who had entered the Amazon previously, and these groups were the first to be in contact with rubber traders.
The governments from Ecuador, Peru, and Colombia received the foreign companies and expected an improvement in their economy. For example, in 1896, Eloy Alfaro expelled the missionaries and replaced them with foreign companies, which he believed were going to make the Oriente (the Ecuadorian term for the Amazon) progress. The president also granted citizenship to the Indians of the Ecuadorian Amazonian region to be a bountiful labor supply for the rubber companies. In the area of the Mainas, two rubber houses were extended into an area of dispute between Peru and Ecuador, due to an old boundary controversy. The people in this area, therefore, were not Ecuadorian citizens. Ecuadorian politicians in Guayaquil and Quito claimed, with pride, the Amazonian region in dispute; this did little to resolve the problems of the indigenous people in this Oriente province. Peru did not claim the people as citizens either. Roger Casement was the British consul who investigated the rubber trade slavery issue in the early 1910s. In his investigation, he found that, “Peru has many inhabitants but very few citizens.” Ecuador, Peru, and Colombia had neither effective governments nor state institutions that would protect the large population of Indigenous groups.
The Rubber Industry and the Indians
The rubber industry in the Upper Amazon became notorious for the violation of human rights experienced by its workers. In order for the rubber industry to be successful, there was a need for a large, cheap labor force to get the rubber from the trees. The members of the elite, who owned the businesses, and the traders were not going to work the caucho directly. The mestizos from the area were not willing to do the job either. The elite and the mestizos held the racist belief that manual labor was not honorable and, consequently, that only lower-class people like Indians or blacks were meant to do that kind of job. The owners of the rubber houses realized that the Indians were the cheapest and most convenient source of labor they could find to work the caucho. They were the cheapest because they did not ask for large remuneration for their work, since they had no idea how much the caucheros and the foreign companies made from the caucho. They were the most convenient because they knew the area well. They lived and worked in the jungle, were adapted to the tropical and rainy weather of the Amazon, and knew how to get food and shelter even better than the caucheros. They knew traditional methods of extracting the caucho because the Indians used it for medicinal purposes. They were perfect for the job – the only problem was motivating them to work.
The question of how to make them work the caucho has been analyzed by many. For example, Michael Taussig, who studied the writings of Casement, reached the conclusion that there was not a shortage of either rubber or labor. Instead, it was the capitalistic system that made labor a problem. The best way for the cauchero to find low wage workers and produce more income from the caucho was to make them work through terror, since few of the indigenous communities were familiar with a system of working for something in return. The missionaries, therefore, helped the rubber barons by showing them a system of free wageworkers. Some indigenous groups from Ecuador and Brazil were a little more familiar than the Indians from Peru, since they had worked with Jesuit missionaries. The missionaries also tried to civilize the Indians by making them work for the church and, in return, the Indians received knives, cloths, cups, hammocks, etc. The missionaries disciplined the Indians by using them to build houses, convents, churches, and schools and transformed them from their wild and useless status. Ironically, the same rubber barons became one of the causes of missionary displacement from the Amazonian region.
The Enslavement of the Indians
Ecuadorian and Peruvian indigenous people were kidnapped by the caucheros to make them work for their companies. The Indians were seen as grown-up children by the caucheros; therefore, the rubber barons found it easy to take advantage of the docile and obedient temperament of the Indians and force them into rubber slavery.
Most of the time, the rubber companies did not get their workers through traders and merchants; instead, the Indians were captured from their regions by the muchachos (the foremen who worked for the caucheros). The muchachos were men of African descent who were brought by the caucheros from the British Caribbean islands of Barbados or Trinidad. These men were not intended to work the caucho, but instead to supervise and discipline the Indians. The muchachos were expensive for the caucheros, but they were believed to be necessary. There existed the racist mentality that only black men were savage and strong enough to do the cruel jobs. They punished the Indians and made them work through the use of terror and force. The West Indian men traveled to South America with the idea that they would earn enough money to go back home and improve their lives. Nevertheless, the West Indian men were also caught in the debt-peonage system. Their trip to South America, clothes, and food were given to them in advance. Therefore, they had to work for a long time for the rubber houses in order to pay the debt, and they received very little money to save for their trip home.
The Barbadians captured the Indians from the wilderness, supervised their work, and punished them if they did not meet the demands. There were also a few Indians, born in the rubber houses and familiar with the system, who served as muchachos as well. These were usually pure Indians or mestizos and did the same jobs as the West Indian men. There were more muchachos than white caucheros, and more Indians than muchachos. The muchachos were always armed with guns and behaved as if they were in a constant war. The white caucheros regarded the muchachos as semi-civilized because they were used to wearing clothes and working for money, and they spoke English and Spanish. However, the Indians and the consuls who witnessed their crimes realized how evil they really were.
The muchachos were the connection between the caucheros and the Indians. The caucheros never did the dirty jobs, such as gathering the caucho or forcing the workers to gather it. They regularly sent the muchachos out to capture more Indians. The muchachos went to the areas where the Indians lived, normally close to the rivers, and with their guns captured several of them, chained them up, and brought them back to the rubber houses. These capturing expeditions were very violent; sometimes the muchachos killed more people than they captured. They captured everyone they found: men, women, and children.
The Indians were kept in the houses in order to civilize them. While they were being civilized, they were put in chains or in cepos (stocks). The Indians were given clothes, machetes, and guns to make them ready to work. The Indians did not have a monetary system; therefore, the caucheros paid them for their work through a system of debt-peonage. The Indians were forced to sign employment contracts that they did not fully understand, which bound them to work for long periods of time until they had earned enough to pay for the few things they had received. Sometimes the contracts ran for two years, during which the Indian was expected to work for the rubber company and give all that he collected to the employer. After the Indian had been told of the requirements, he was told how much he owed the employer. The Indians usually received a small item as their pay; the rest of the money was kept to pay the original debt. As a result, they were held in a system of debt-peonage that forced them to work for the caucheros for the rest of their lives. In other words, they became slaves of the caucheros.
The Indians were expected to work through the year in three or four expeditions. Every four months they were required to bring in a fabrico of fifty to sixty kilograms of caucho; in twelve months they had to bring at least three fabricos in order to be paid. Their pay was minimal and, in most cases, they were given very few things, so their debt would grow larger and the prospect of freedom would recede further. For a fabrico, the Indians received a machete, a cup, pants, or a hammock. For two fabricos they might receive a gun, but with so little ammunition that it became useless once the ammunition was fired.
The Indians worked in a territory divided into sections, each with one chief, several muchachos, and the Indians. Armed with machetes, the Indians penetrated deep into the Amazonian forest and gashed every rubber tree they saw. They cut the trees deep to get the last drop of milk, killing the tree forever. The rubber milk ran down the tree and, after a few days, hardened and was ready to be taken to the rubber houses. The Indians then cleaned the surface of the rubber and wrapped it with ropes into balls. In this form of crude “caucho balls,” the rubber was sold and shipped to the markets. It was weighed at the rubber houses and later sent by steamer to New York or London.
The muchachos received their orders from the white caucheros, and the Indians received theirs from the muchachos. The caucheros ordered the muchachos to keep a list, every ten days, of the caucho each Indian collected and to make sure it matched the required amount. If the work fell short of what the caucheros expected, the muchachos punished the Indians. The caucheros, however, were never fair to the Indians. The Indians, for example, did not get any food from the caucheros; they had to procure their own. They brought women and children to help them carry the food, as well as the caucho, back to the caucheros. When they arrived with the caucho, they were confronted with punishment or terror instead of payment. Hardenburg experienced the cruelties of the caucheros when he was kidnapped in the Amazon and wrote The Putumayo (1912), a reminiscence of his experiences. In his observation, “the civilizing company” apparently did not believe in paying for what could be obtained otherwise; the rule of terror had been adopted. The caucheros asked the Indians for an amount of caucho that was impossible to gather. If the rubber barons were angry because caucho prices had fallen in the foreign market, they inflicted their anger on the muchachos and the Indians.
One of the ways the caucheros vented their anger was by punishing the Indians and the muchachos. When the Indians brought the caucho to the houses, their reaction depended on the cauchero's mood. If the cauchero was happy with the amount of caucho brought, the Indians leapt about and laughed with pleasure. When he was not satisfied, the Indians threw themselves face downwards on the ground and awaited their punishment.
The Indians were punished by the muchachos in many cruel ways, such as flogging, hanging, or the cepo, stocks in which the Indians were held in painful positions without food or water. Some Indians survived the flogging and the cepo, but many died from these punishments. The muchachos and the caucheros also shot Indians who became sick while carrying the caucho or who tried to escape. For greater offenses, such as not bringing enough caucho, trying to escape, or killing a muchacho, the muchachos cut off the arms and legs of an Indian and burned him while he was still alive. The muchachos punished not only the male Indians who worked for the “civilizing company” but also their families, in order to hurt a worker who had not brought enough caucho. The children and women were flogged and put in chains. Most of the time, the muchachos were ordered by the caucheros to commit these horrible crimes, but sometimes they abused their own power. They sexually abused the Indians, male and female, raping them or beating them on their genitals. After raping them, they killed them, or flogged them and sent them back to their villages. The young girls were also given to the West Indian men as concubines. The muchachos claimed that they did not want to inflict cruelty on the Indians but were forced by the caucheros; if they disobeyed the boss's orders, they suffered the same cruel punishments. The irony was that these men were mostly black and had experienced slavery in their own history.
Indifference and Unconcern
The missionaries witnessed the cruel treatment of the Indians by the caucheros. They wrote about how cruel life was for the indigenous people at the hands of the brutal rubber barons. Yet they did not publicly denounce the abuses, because they had complicated relationships with both the anti-clerical liberal politicians and the rubber barons, who each saw the missionaries as competition. The missionaries kept a casual relationship with the caucheros, visiting the rubber houses to perform communions, baptisms, confirmations, and confessions. The missionaries also felt that the caucheros, European descendants like themselves, were superior to the poor “indiecitos” (little Indians). There were also missionaries, like Bartolomé Guevara, who acted like the caucheros and kept Indians in slavery; Guevara was one of the most noted of the Putumayo “missionaries,” yet he made money as the chief of one of the rubber houses. In general, most missionaries felt that the treatment of the Indians was wrong, but there was not much they could do because of their political and economic situation. Their writings are an important source of evidence. The missionaries also played a part in the abuses of the Indians' human rights, since they set the model the caucheros followed: getting free labor from the Indians. The only difference was that the missionaries did not make large profits from the Indians' work, as the caucheros did.
The Government Officials
The South American governments did not stop the violation of indigenous rights, since the rubber companies monopolized the Amazonian region. Peru could not stop the abuses of the company because it operated in an area that Colombia and Ecuador claimed as their own. In addition, the indigenous groups of the Upper Amazon lacked citizenship because they were nomads and consequently were seen as not using the land effectively. For that reason, the governments excluded them from citizenship and thus felt no obligation to look after the welfare of the Indians. The land, however, was always present in the minds of the politicians; as a result, the borders were carefully protected by the militaries of the different countries.
The local officials were also responsible for the abuses committed against the human rights of the indigenous people. According to Hardenburg, the rubber company made the lives of the indigenous a “living hell,” and the countries' governments were unable to stop it. The provincial governments were aware of what happened and did little to stop the abuses. Officials from Peru, Colombia, and Ecuador, for example, claimed they were unable to stop the caucheros because they lacked information such as names, nationalities, or the dates of the crimes. They also protested that they lacked a strong police force, manpower, and money to investigate; they needed money to buy canoes to penetrate the remote areas of the Amazon where the crimes were committed. The lack of money for local officials was a critical problem, because the wealthy caucheros bribed the officials to turn a blind eye. When the local officials were not paid, they forced payment by robbing the Indians of their food or by intercepting the caucheros and demanding bribes. Casement interviewed a prominent Peruvian functionary, who told him that there was nothing he could do to stop the crimes against the Indians. Casement later concluded that the official, like many other public functionaries, cared only for business and ignored the rest. The lack of accurate maps was another problem that caused chaos for the local governments and officials and benefited the caucheros.
The local officials also claimed that, being a weak group, they could easily be attacked by free, angry Indians who might mistake them for caucheros. The Indians were not completely wrong, because some officials did work for the rubber houses. For example, César Lurquin, the Peruvian Comisario of the Putumayo, visited the area four or five times a year and, instead of punishing the rubber barons, captured children to sell later as servants to the caucheros.
The local officials were also justified in their fears, because there were rebellious Indians who took justice into their own hands and killed every white person they encountered. An article in the Ecuadorian newspaper El Imparcial described how a white man, probably a cauchero, was killed by Indian rubber workers on his way back from Iquitos with merchandise. He was the first victim of a plan to kill every white person in the Ecuadorian Amazonian province of Napo. The plan was not fully carried out because indigenous women threatened to denounce it.
After analyzing the rubber boom in the Amazon, it is clear that the rubber industry tremendously affected the lives of the indigenous population of the Upper Amazonian region. The rubber houses and the caucheros were responsible for the infringement of the human rights of the indigenous population. The native populations were victims of the debt-peonage system, forced labor, and the genocide of a large part of their community because of several factors, such as location, corrupt leaders, and missionary indifference.
These factors, however, had a much deeper cause: the combination of racism and economic interest. Together they became a mortal weapon, as the indigenous tragedy of the rubber boom starkly demonstrated.
End of the Rubber Boom and the Abuses in the Upper Amazon
The abuses committed against the Indians in the Amazon came to an end simultaneously with the end of the rubber boom. In the early 1910s, several reports about the abuses were published and brought international attention to the Upper Amazon. The first was the journal of W. E. Hardenburg, who narrated the difficulties he experienced while kidnapped by the caucheros of the Arana Company and the tortures and crimes against the Indians he witnessed in the Amazon. Later, the journal of the British consul Roger Casement confirmed the abuses Hardenburg had described, after Casement investigated the situation in the rubber houses and interviewed the British colonial subjects who worked for the rubber companies as well as the local officials. The Lords of the Devil's Paradise, written by Sidney Paternoster, also confirmed the atrocities committed by the caucheros. The British and American governments then became concerned with the allegations of slavery in the Upper Amazon region at the beginning of the twentieth century. They published reports on the issue, using the interviews in Casement's journal as their main source of evidence. The American report, as well as The Lords of the Devil's Paradise, agreed that England should share responsibility for the crimes committed in Peru, since the men who carried out the crimes were British subjects employed by British companies that profited from the labor of the Indians; the rubber was shipped to British markets and carried on British vessels, and the future of the region depended upon British capital. These arguments and investigations produced international pressure that forced the governments of Ecuador, Peru, and Colombia to increase control over the rubber industry. This control improved conditions in the region considerably; however, it was not enough, since abuses remained part of the industry. For example, some caucheros were arrested and the corrupt authorities in the region were replaced, but the new authorities were ignorant and poorly paid.
Therefore, neither international pressure nor local control stopped the abuses against the Indians in the Upper Amazon. Instead, the end of the abuses was tied to the crisis in rubber prices. The price of wild Amazonian rubber began to fall in 1911, and by 1920 it had collapsed entirely. The East Indian rubber plantations ended the Amazonian rubber boom and, at the same time, put a stop to the crimes committed by the caucheros. “Muerto el perro se acaba la rabia.” (When the dog dies, the rabies dies with it.)
Source: National Geographic Society (U. S.), Atlas of the World, 7th ed. (Washington, D.C. : National Geographic Society, 1999), 63.
Oriente – Colombia, Ecuador, and Peru.
Notations of the rubber stations and the Putumayo river. Notations added by author.
Source: Sidney Paternoster, The Lords of the Devil’s Paradise (London: Stanley Paul and Co., 1913), 288.
Source: W. E. Hardenburg, The Putumayo: The Devil’s Paradise. Travel in the Peruvian Amazon Region and An Account of the Atrocities Committed Upon the Indians Therein. (London: T. Fisher Unwin, 1912), 52.
All translations from Spanish to English are the author’s own.
The boundary conflict between the two countries started in 1830, after Ecuador’s independence. Ecuador claimed the territory of the Amazonian province Mainas, which was previously part of the Spanish colony. The conflict did not end until 1998.
Blanca Muratorio, The Life and Times of Grandfather Alonso, Culture and History in the Upper Amazon (New Brunswick, N.J.: Rutgers University Press, 1991), 87, 99, 102; Kenneth George Grubb, Amazon and Andes (New York: L. MacVeagh, The Dial Press, 1930), 31; Michael Edward Stanfield, Red Rubber, Bleeding Trees: Violence, Slavery, and Empire in Northwest Amazonia, 1850-1933 (Albuquerque, N.M.: University of New Mexico Press, 1998), 81; Great Britain Foreign Office, Correspondence Respecting the Treatment of British Colonial Subjects and Native Indians Employed in the Collection of Rubber in the Putumayo District, Presented to Both Houses of Parliament by Command of H.M., July 1912 (London: Harrison and Sons, 1912), 2-3; and Sir Roger Casement, The Amazon Journal of Roger Casement, edited and with an introduction by Angus Mitchell (Dublin: Lilliput Press, 1997), 295.
Muratorio, The Life and Times of Grandfather Alonso, 100, 107.
Michael T. Taussig, Shamanism, Colonialism, and the Wild Man: A Study in Terror and Healing (Chicago: University of Chicago Press, 1986), 53.
“Statement of Everlyn Ratson Made to his Majesty’s Consul General At La Chorrera on October 31, 1910,” in Slavery in Peru, 354-363.
Ibid.; Roberto Pineda Camacho, Holocausto en el Amazonas: Una Historia Social de la Casa Arana (Santafé de Bogotá, Colombia: Planeta Colombiana Editorial, 2000), 82; Slavery in Peru, 382; and Taussig, Shamanism, Colonialism, and the Wild Man, 48.
“Précis of the Statement of Westernman Leavine Made to His Majesty’s Consul General at Matanzas on October 18, 1910, and Subsequently,” in Correspondence Respecting the Treatment, 96; “Statement of John Brown, a Native of Montserrat, Made to His Majesty’s Consul General at Iquitos on December 3, 1910,” in Slavery in Peru, 407; and Gianotti, Viajes por el Napo, 56.
Pineda, Holocausto en el Amazonas, 55; “Employment Contract of a Rubber Laborer, 1909 (AGN),” in The Life and Times of Grandfather Alonso, by Muratorio, 237-238; Fritz W. Up de Graff, Head Hunters of the Amazon: Seven Years of Exploration and Adventure (New York: Duffield and Co., 1923), 55; “Précis of the Statement of Westernman Leavine,” in Correspondence Respecting the Treatment, 96; and “Statement of August Walcott Made to His Majesty’s Consul-General at La Chorrera on November 1, 1910,” in Correspondence Respecting the Treatment, 112.
“Statement of Everlyn Ratson,” in Slavery in Peru, 356; Up de Graff, Head Hunters of the Amazon, 55; “Précis of the Statement of Stanley Sealey, a Native of Barbados, Made to his Majesty’s Consul General on September 23, 1910, at La Chorrera, and on Subsequent Occasions,” in Slavery in Peru, 328; and Pineda, Holocausto en el Amazonas, 69.
“Précis of the Statement of Westernman Leavine,” in Correspondence Respecting the Treatment, 96; W. E. Hardenburg, The Putumayo: The Devil’s Paradise. Travel in the Peruvian Amazon Region and An Account of the Atrocities Committed Upon the Indians Therein (London: T. Fisher Unwin, 1912), 202-203, 182; and John C. Yungjohann, White Gold: The Diary of a Rubber Cutter in the Amazon 1906-1916, edited by Ghillean T. Prance, epilogue by Yungjohann Hillman (Oracle, Arizona: Synergetic Press, 1989), 47.
Hardenburg, The Putumayo: The Devil’s Paradise, 184; Up de Graff, Head Hunters of the Amazon, 91; Slavery in Peru, 332-337; and Stanfield, Red Rubber, Bleeding Trees, 48.
Hardenburg, The Putumayo: The Devil’s Paradise, 183.
“Statement of John Brown,” in Slavery in Peru, 409; “Précis of the Statement of Westernman Leavine,” in Correspondence Respecting the Treatment, 96; “Statement of August Walcott,” in Correspondence Respecting the Treatment, 113, 115; Hardenburg, The Putumayo: The Devil’s Paradise, 181; and “Précis of the Statement of Joshua Dyall Made to His Majesty’s Consul General in the Presence of Mr. Louis H. Barnes, the Chief of the Company’s Commission, and Then Repeated Before Señor Tizon and all the Remaining Members of the Commission the Same Day, September 24, 1910, at La Chorrera; also Subsequently Examined at La Chorrera by Mr. Casement in November,” in Slavery in Peru, 332-333.
Gianotti, Viajes por el Napo, 38, 46-47, 51, 59; “How the Missionary Subsists - Ominous Servitude – Freedom there from the Only Way to Progress,” in Slavery in Peru, 204; and Hardenburg, The Putumayo: The Devil’s Paradise, 190.
Ibid., 186; and Jorge Trujillo, “La Amazonia: Región Imaginaria,” Ecuador Debate 3 (1983): 154-160.
Stanfield, Red Rubber, Bleeding Trees, 58, 81; Casement, The Amazon journal, 471; and Muratorio, The Life and Times of Grandfather Alonso, 104.
Hardenburg, The Putumayo: The Devil’s Paradise, 191.
Stanfield, Red Rubber, Bleeding Trees, 58; and “Correspondencia de el Oriente,” El Imparcial (Ecuador), 3 October 1908, no. 425. In possession of Professor Nicola Foote, Florida Gulf Coast University.
“Britain Guilty as Peru in Rubber Atrocities - Government Insists that England Share Responsibility for Crimes Committed - Easy to Escape Arrest - Sir Roger Casement’s Witnesses Fled the Country - Many Were Criminals,” in Slavery in Peru, 182; Sidney Paternoster, The Lords of the Devil’s Paradise (London: Stanley Paul and Co., 1913), 304; “Consul General Sir R. Casement to Sir Edward Grey, London, February 5, 1912,” in Slavery in Peru, 429-431; “La Esclavitud en el Putumayo: Informe del Cónsul de los Estados Unidos - La Pro-Indigena de Lima,” El Guante (Guayaquil, Ecuador), 25 June 1913. In possession of Professor Nicola Foote, Florida Gulf Coast University.
May 17, 2013
Ancient Geodynamics Indicate Earth’s Ice Sheets More Stable Than Thought
April Flowers for redOrbit.com - Your Universe Online
For decades, researchers have used ancient shorelines to predict the stability of today's largest ice sheets in Greenland and Antarctica. High shoreline markings from three million years ago, when Earth was going through a warm period, were thought to be evidence of a high sea level caused by ice sheet collapse at the time, an assumption that has led many scientists to believe that if the world's largest ice sheets collapsed in the past, they will do so again. Global warming is adding to this fear. However, a groundbreaking new study spearheaded by researchers at the University of Chicago challenges this thinking.
Led by David Rowley, CIFAR Senior Fellow and professor at the University of Chicago, the research team used the east coast of the US as their laboratory. They found that the Earth's hot mantle pushed up segments of ancient shorelines over millions of years. This made the shorelines appear higher now than they originally were millions of years ago.
"Our findings suggest that the previous connections scientists made between ancient shoreline height and ice volumes are erroneous and that perhaps our ice sheets were more stable in the past than we originally thought," says Rowley. "Our study is telling scientists that they can no longer ignore the effect of Earth's interior dynamics when predicting historic sea levels and ice volumes."
Rowley's team of international scientists included Alessandro Forte from the Université du Québec à Montréal, Jerry Mitrovica from Harvard, and Rob Moucha, a former CIFAR-supported postdoctoral fellow, from Syracuse. Their findings were published online in the journal Science.
"This study was the culmination of years of work and deep collaboration by researchers in CIFAR's program in Earth System Evolution," explains Rowley. "For this study, each of us brought our individual expertise to the table: Rob and Alex worked on simulations of Earth's mantle dynamics, Jerry provided calculations on how glaciers warp Earth's surface, and I shaped our understanding of the geology of the landscape we were looking at. This study would not have been possible without CIFAR."
CIFAR is the Canadian Institute for Advanced Research, which was established in 1982 as an independent research institute. CIFAR is comprised of nearly 400 researchers representing more than 100 academic institutions in 16 countries.
The researchers focused on the coastline from Virginia to Florida, an area with an ancient scarp tens of meters above present-day sea level. Previous research groups concluded from this shoreline that the Greenland, West Antarctic, and a fraction of the East Antarctic ice sheets collapsed three million years ago during a warm period, raising the sea level by at least 115 feet. The findings from Rowley's team, however, suggest that these ice sheets, particularly the world's largest, the East Antarctic, were probably more stable than previously thought.
Computer simulations were used to follow the movement of the mantle and tectonic plates over time. The simulation's prediction of how the shoreline would have moved matched observations made by geologists mapping this region. The team wants to continue this research by making similar simulations and predictions for other locations around the world.
"The paper is important because it shows that no prediction of ancient ice volumes can ever again ignore the Earth's interior dynamics," explains Rowley. "It also provides a novel bridge between two disciplines in Earth science that rarely intersect: mantle dynamics and long-term climate. It is the kind of study that changes how people think about our past climate and what our future holds." |
Scientists at Flinders University in Australia have developed a DNA analysis technique that could provide a valuable weapon in the fight against crime.
“We know that some people pass on more of their DNA because when they touch something more of their cells are left behind,” said Professor Adrian Linacre, chair of Forensic DNA technology at Flinders University, in a statement. “They are called shedders but it’s very difficult at the moment to see who is a shedder.”
Scientists have developed a special dye that lets researchers see otherwise invisible traces of DNA, Paul Kirkbride, a forensic science professor at Flinders University and co-author of the research paper, told Fox News.
“When, for example, a person touches an object, a fingermark will be left behind,” he explained, via email. “Fingermarks such as these can be very valuable because it is now well-known that they could contain the DNA of the person who left the mark.”
“At a crime scene, or on objects submitted to the lab, such a smudge of DNA is often invisible,” Kirkbride added. “Therefore the DNA analyst or crime scene examiner is forced to carry out their work blind.”
The dye, however, quickly reveals areas where DNA has been left.
“The crucial part is to be able to take a sample from that to find out who touched an item, and that is where this test can make a difference,” said Linacre.
Some 11 DNA donors participated in the Flinders research study, which analyzed 264 fingerprints. The results are published in the journal Forensic Science International: Genetics.
The research also revealed that men shed more DNA than women and thumbs leave the most accurate traces, according to Kirkbride.
Follow James Rogers on Twitter @jamesjrogers
Generalist and specialist species
A generalist species is able to thrive in a wide variety of environmental conditions and can make use of a variety of different resources (for example, a heterotroph with a varied diet). A specialist species can thrive only in a narrow range of environmental conditions or has a limited diet. Most organisms do not fit neatly into either group, however. Some species are highly specialized (the most extreme case being monophagy), others less so, while some can tolerate many different environments. In other words, there is a continuum from highly specialized to broadly generalist species.
Omnivores are usually generalists. Herbivores are often specialists, but those that eat a variety of plants may be considered generalists. A well-known example of a specialist animal is the koala which subsists almost entirely on eucalyptus leaves. The raccoon is a generalist because it has a natural range that includes most of North and Central America, and it is omnivorous, eating berries, insects, eggs and small animals. Monophagous organisms feed exclusively or nearly so on a single other species.
The distinction between generalists and specialists is not limited to animals. For example, some plants require a narrow range of temperatures, soil conditions and precipitation to survive while others can tolerate a broader range of conditions. A cactus could be considered a specialist species. It will die during winters at high latitudes or if it receives too much water.
When body weight is controlled for, specialist feeders such as insectivores and frugivores have larger home ranges than generalists like some folivores (leaf eaters), because their less abundant food sources require a bigger foraging area. An example comes from the research of Tim Clutton-Brock, who found that the black-and-white colobus, a generalist folivore, needs a home range of only 15 ha. The more specialized red colobus monkey, on the other hand, has a home range of 70 ha, which it requires to find patchy shoots, flowers and fruit.
When environmental conditions change, generalists are able to adapt, while specialists tend to fall victim to extinction much more easily. For example, if a species of fish were to go extinct, any specialist parasites would also face extinction. On the other hand, a species with a highly specialized ecological niche is more effective at competing with other organisms. For example, a fish and its parasites are in an evolutionary arms race, a form of co-evolution, in which the fish constantly develops defenses against the parasite, while the parasite in turn evolves adaptations to cope with the specific defenses of its host. This tends to drive the speciation of more specialized species provided conditions remain relatively stable. This involves niche partitioning as new species are formed, and biodiversity is increased.
- Krebs, J. R.; Davies, N. B. (1993). An Introduction to Behavioural Ecology. Wiley-Blackwell. ISBN 0-632-03546-3.
- Clutton-Brock, T.H. (1975). "Feeding behaviour of red colobus and black and white colobus in East Africa". Folia Primatologica 23 (3): 165–207. doi:10.1159/000155671. PMID 805763.
- Townsend, C.; Begon, M.; Harper, J. (2003). Essentials of Ecology (2nd edition), pp. 54-55. Blackwell. ISBN 1-4051-0328-0
This test assesses all the pre-reading skills that are early indicators of reading success. Use it to identify children who lack explicit phonological knowledge and have difficulty acquiring sound/symbol correspondences in words.
The Phonological Awareness Test 2 is a standardized assessment of children's phonological awareness, phoneme-grapheme correspondences, and phonetic decoding skills. Test results help educators focus on those aspects of a child's oral language that may not be systematically targeted in classroom reading instruction.
This test assesses a student's awareness of the oral language segments that comprise words (i.e., syllables and phonemes). The test is comprehensive and includes a wide variety of tasks; performance on each of these tasks has been correlated with success in early reading and spelling. The straightforward, developmental format lets you easily tease out specific skills and plan effective interventions.
- Rhyming: Discrimination and Production—identify rhyming pairs and provide a rhyming word
- Segmentation: Sentences, Syllables, and Phonemes—dividing by words, syllables and phonemes
- Isolation: Initial, Final, Medial—identify sound position in words
- Deletion: Compound Words, Syllables, and Phonemes—manipulate root words, syllables, and phonemes in words
- Substitution With Manipulatives—isolate a phoneme in a word, then change it to another phoneme to form a new word
- Blending: Syllables and Phonemes—blend units of sound together to form words
- Graphemes—assess knowledge of sound/symbol correspondence for consonants, vowels, consonant blends, consonant digraphs, r-controlled vowels, vowel digraphs, and diphthongs
- Decoding—assess general knowledge of sound/symbol correspondence to blend sounds into nonsense words
- Invented Spelling (optional)—write words to dictation to show encoding ability
The test should be administered by a professional trained in analyzing the phonological structure of speech (e.g., speech-language pathologist, learning disabilities teacher, reading teacher, special education consultant).
- All subtests are administered (Invented Spelling is optional). There are no basals or ceilings.
- A demonstration item is given for each subtest.
- If it is apparent that a student is unable to perform a task, administration of that task is discontinued, and a score of 0 is given for items not administered in that task.
- Directions are read aloud to the student and are printed on the test form.
- Spiral bound stimuli booklets (included in the test) are used with the Graphemes Subtest and Decoding Subtest.
- Eight color cubes (included with the test) are used for the Substitution Subtest.
- 40 minutes
Scoring/Types of Scores
Each response receives a 1 for a correct response or a 0 for an incorrect response. Correct responses are listed on the test form. A pronunciation guide for nonsense words is on the test form.
The raw scores for each subtest, each section (phonological awareness and phoneme-grapheme correspondence), and the total test can be converted to the score types below (a brief worked sketch follows the list):
- Age Equivalents
- Percentile Ranks
- Standard Scores
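To make the conversion concrete, here is a minimal Python sketch of how a raw score might be mapped to derived scores. Every value in the lookup table is hypothetical; actual conversions must use the normative tables in the Examiner's Manual.

```python
# Toy illustration of converting a raw subtest score to derived scores.
# Every value in this lookup table is hypothetical; real conversions
# use the normative tables in the Examiner's Manual.

NORMS_AGE_6 = {  # raw score -> (standard score, percentile rank)
    10: (85, 16),
    15: (100, 50),
    20: (115, 84),
}

def convert(raw: int) -> tuple[int, int]:
    """Look up the nearest tabled raw score at or below the obtained score."""
    eligible = [r for r in NORMS_AGE_6 if r <= raw]
    if not eligible:
        raise ValueError("raw score below the tabled range")
    return NORMS_AGE_6[max(eligible)]

standard_score, percentile = convert(16)
print(f"Standard score: {standard_score}, percentile rank: {percentile}")
```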
Discussion of Performance
The Discussion of Performance section in the Examiner's Manual helps you bridge from assessment to treatment. There are descriptions of how weaknesses are manifested in the classroom, guidelines for intervention, and frequently-asked questions about the test.
Statistical Test Results
The Phonological Awareness Test 2 was standardized on 1,582 subjects. These subjects represented the latest national census for race, gender, age, and educational placement.
- Reliability—established by the use of the following for all subtests and the total test at all age levels:
- Inter-Rater Reliability
- Reliability Based on Item Homogeneity (KR20; a sketch of this formula follows below)
Reliability tests were highly satisfactory for the total test at all age levels.
- Validity—established by the use of content validity which reflects the necessary phonological awareness skills of elementary age students:
- Contrasted groups (t-values)
- Point Biserial Correlations
- Subtest Intercorrelations
- Correlations Between Subtests and Total Test
Contrasted Groups (t-values) comparisons show the test has a highly satisfactory ability to differentiate subjects requiring special reading services and those subjects developing reading skills normally. Combined subtest intercorrelations revealed acceptable levels across all age levels.
- Race/Socioeconomic Group Difference Analyses—conducted at the item and subtest levels. The analyses show no significant differences when comparing race or SES on the PAT 2. Tests included:
- Chi Square Analysis
- Analysis of Variance (ANOVA) F-tests
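For readers unfamiliar with the KR20 statistic named under Reliability above, here is a minimal sketch of the Kuder-Richardson formula 20 computed on a tiny invented dataset of right/wrong item responses. The data are made up for illustration; the published reliabilities were of course computed on the norming sample.

```python
# Minimal sketch of the Kuder-Richardson formula 20 (KR20) on invented data.
# KR20 = (k / (k - 1)) * (1 - sum(p*q) / variance of total scores)

scores = [  # rows = students, columns = items (1 = correct, 0 = incorrect)
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
]

k = len(scores[0])                     # number of items
n = len(scores)                        # number of students
totals = [sum(row) for row in scores]  # each student's total score
mean_total = sum(totals) / n
var_total = sum((t - mean_total) ** 2 for t in totals) / n

pq_sum = 0.0
for j in range(k):
    p = sum(row[j] for row in scores) / n  # proportion correct on item j
    pq_sum += p * (1 - p)

kr20 = (k / (k - 1)) * (1 - pq_sum / var_total)
print(f"KR20 = {kr20:.2f}")
```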
Count on the Phonological Awareness Test 2 to identify students who have weaknesses in phonological awareness skills, and to give you the reliable results you need to plan an individual treatment plan for each student. It is a test that quantifies the link between oral language development and phonological awareness. You'll discern how your students manipulate sounds, and identify their strengths and weaknesses in sound awareness skills. It's the most complete test of phonological awareness you'll find!
Copyright © 2007
Warning: CHOKING HAZARD - Small parts, not for children under 3 yrs.
In 2000, the National Reading Panel, in its report from the subgroups, summarized the research on reading instruction, including alphabetics, phonemic awareness, and phonics. Based on this landmark synthesis of research, the panel arrived at the following conclusions regarding phonemic awareness and reading:
- Teaching children to manipulate phonemes in words is highly effective across all the literacy domains.
- Phonemic awareness measured at the beginning of kindergarten is one of the two best predictors of how well children will learn to read.
- Assessing a student's phonemic awareness before beginning instruction is the best approach. This indicates which children need instruction, which children need to be taught beginning levels of phonemic awareness (e.g., isolating initial sounds in words), and which children are ready for more advanced levels involving segmenting or blending with letters.
- Tasks that require students to manipulate spoken units larger than phonemes are simpler for beginners than tasks requiring phoneme manipulation. Instruction of children in the study groups often began by teaching children to manipulate larger units and included such activities as rhyming, breaking sentences into words, and breaking words into syllables.
- Phonemic awareness instruction helps all children improve their reading, including normally developing readers, children at risk for reading problems, preschoolers, kindergartners, first graders, older disabled readers through sixth grade, children across various socioeconomic levels, and children learning to read in English as well as in other languages.
The Phonological Awareness Test 2 incorporates these findings and is also based on expert professional practice.
National Reading Panel (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction: Reports of the subgroups. Retrieved October 21, 2009, from www.nichd.nih.gov/publications/nrp/upload/smallbook_pdf.pdf
He obtained a doctorate in astronomy at Princeton University under Charles A. Young. In 1890, he began teaching astronomy at the Case School of Applied Science in Cleveland, and in 1893 he became head of the physics department.
After Wilhelm Röntgen's discovery of X-rays in 1895, Miller used Crookes cathode-ray tubes he had purchased in 1893 to become one of the first Americans to take X-ray photographs of concealed objects and the human body.
In 1900, he began work with Edward Morley on ether-drift experiments, work of interest to the physicists, astronomers, and mathematicians concerned with Albert Einstein's theory of relativity. The experimental apparatus Miller used was very delicate. He performed over 200,000 observations and experiments dealing with the aether and aether drift. From 1902 to 1933, Miller continued the experiments, producing ever more accurate measurements, and this work on the ether yielded positive results.
Albert Einstein took an interest in this ether-drift work and commented that altitude and temperature effects might be sources of error in the findings, criticisms to which Miller replied.
Computer analysis performed after Miller's death on the little available data indicated that the shifts were statistically significant. More of Miller's papers, from the possession of R. S. Shankland, have lately surfaced and await further analysis.
Dr. Miller published manuals designed to be student handbooks for the performance of experimental problems in physics.
Dayton Miller also worked in acoustics. In 1908 he invented the phonodeik, a name suggested by Edward W. Morley. The phonodeik was an instrument that recorded sound waves by projecting a beam of light onto a mirror attached to a vibrating object. Miller analyzed and charted instrumental waveforms and undertook an intensive study of the history of acoustical instruments. The Dayton Miller flute collection contains over 1,650 flutes and other related materials.
During World War I, Miller studied the physical characteristics of the pressure waves of large guns at the request of the government.
Dr. Miller died at Cleveland on February 22, 1941.
Laboratory Physics (1903)
Sound Waves, Shape and Speed (1937)
The Science of Musical Sounds (New York, 1916, rev. 1922)
Sparks, Lightning and Cosmic Rays (1939)
Anecdotal History of the Science of Sound (New York, 1935)
Animal manures and animal manure-based composts are rich in plant nutrients such as Nitrogen (N), Phosphorous (P) and Potassium (K) and provide organic matter that conditions the soil. While they can make excellent soil amendments for the home gardener, it is important to use them effectively and safely.
Manure or Compost?
There are no regulations or standards in New Hampshire that govern the many labels that can be used to describe soil amendments (“manure”, “aged manure”, “rotted manure”, “composted”, etc.). For the purposes of this fact sheet, “compost” refers to any mix of organic materials that has been partially decomposed to the point where its nutrient content is stable. This typically implies an active process, where the organic materials are managed carefully to speed decomposition. “Manure” refers to waste from livestock (including poultry, cattle, or horses), usually mixed with bedding such as sawdust or wood shavings and/or feed waste. Manures may be fresh – that is, they have not decomposed at all, or they may have decomposed (or “aged”) to varying degrees.
Generally, plant nutrients in manures and composts are measured in terms of pounds per wet ton; it takes a lot of these materials to provide enough nutrients for plant growth. Nutrient content varies widely depending on the type of manure and the amount and type of bedding in it, and the ingredients in compost. While gardeners can use some general guidelines for nutrient content (see table below), the most accurate way to determine fertilizer equivalency is to have the material tested.
While most of the nutrients in manures and composts behave similarly in the soil to nutrients from commercial fertilizers, nitrogen is an exception. First, much of the nitrogen is not immediately available to plants, but instead becomes available slowly, as microbes digest it. Also, the availability of nitrogen depends on the ratio of carbon to nitrogen (C:N ratio). If the ratio exceeds 30:1, then most of the nitrogen is immobilized, or unavailable to plants, for an extended period. Manures or unfinished composts that contain a high proportion of bedding like wood shavings or sawdust have a high C:N ratio. This type of material “borrows” nitrogen from the soil as it decomposes, and the result is that garden plants may not have the nitrogen they need to grow.
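As a rough illustration of that rule, the short Python sketch below classifies a material by its C:N ratio using the 30:1 threshold given above. The example carbon and nitrogen percentages are invented for illustration, not laboratory values.

```python
# Rough sketch of the C:N rule described above.
# The 30:1 threshold comes from this fact sheet; the example
# material values below are illustrative assumptions, not lab data.

def nitrogen_outlook(carbon_pct: float, nitrogen_pct: float) -> str:
    """Classify a manure or compost by its carbon-to-nitrogen ratio."""
    cn_ratio = carbon_pct / nitrogen_pct
    if cn_ratio > 30:
        return (f"C:N = {cn_ratio:.0f}:1 -> nitrogen likely immobilized; "
                "material may 'borrow' N from the soil as it decomposes")
    return f"C:N = {cn_ratio:.0f}:1 -> nitrogen should become available to plants"

# Hypothetical examples: fresh poultry manure vs. manure heavy with sawdust bedding
print(nitrogen_outlook(carbon_pct=30.0, nitrogen_pct=3.0))   # ~10:1
print(nitrogen_outlook(carbon_pct=45.0, nitrogen_pct=0.9))   # ~50:1
```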
The proportions of plant nutrients in composts and manures are usually different from what plants require for growth. In particular, these materials often contain more phosphorus than nitrogen. Thus, gardeners that apply enough of these materials to meet nitrogen needs for their gardens will likely apply far more phosphorus than is needed. Over time, this can lead to very high levels of soil phosphorus.
What’s the problem with high phosphorus? Very high soil phosphorus is not toxic, and will not harm plants or people, but when phosphorus moves into surface waters, it can lead to algae blooms, which harm water quality and aquatic organisms.
In order to use manures and composts effectively and responsibly, start with testing your soil for pH and soil nutrient status. If soil phosphorus is in the low to optimum range (up to 50 ppm on the University of New Hampshire (UNH) soil test), feel free to use manure or compost to provide nutrients. If soil phosphorus test levels are very high (over 100 ppm on UNH’s soil test), consider using nutrient sources other than manure.
In the soil test examples below, Garden A has very high levels of phosphorus, because of a history of compost applications. While nitrogen and potassium fertilizers are still needed for crops in this garden, this gardener should not add more compost or manure, to avoid becoming a source of phosphorus contamination. In contrast, Garden B has probably had very little or no compost or manure added to it, since the soil phosphorus levels are quite low. In this case, the gardener could add composts or manures to provide nutrients and organic matter.
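Those thresholds can be read as a simple decision rule, sketched below in Python. The 50 ppm and 100 ppm cutoffs come from the UNH guidance above; how to treat the band between them is not specified in this fact sheet, so the middle branch is our assumption.

```python
# Illustrative helper based on the UNH soil-test guidance quoted above.
# The 50 ppm and 100 ppm cutoffs come from the text; the treatment of the
# 50-100 ppm middle band is our own assumption, not official guidance.

def manure_advice(p_ppm: float) -> str:
    if p_ppm <= 50:
        return "Low to optimum P: manure or compost is a fine nutrient source."
    if p_ppm <= 100:
        return "Elevated P (assumed middle band): use manure sparingly and retest."
    return "Very high P (>100 ppm): consider nutrient sources other than manure."

print(manure_advice(32))   # e.g., a garden like Garden B above
print(manure_advice(140))  # e.g., a garden like Garden A above
```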
For all the benefits of using manure and manure-based composts in the garden, there are also some risks. Animal manures harbor pathogens harmful to humans, including E. coli, Salmonella, and Campylobacter bacteria, and Giardia or Cryptosporidium protozoa. These organisms can affect people when they consume crops contaminated with soil, and under certain conditions, they can be taken up into plant tissue.
The risk from pathogens is greatly reduced when manure is composted correctly. To ensure that pathogens have been killed, the compost pile must reach a high temperature (between 131°F and 140°F) for a sustained period of time (several weeks). The compost must also be turned regularly and carefully monitored so that all of the manure has been exposed to sufficient temperatures. In home compost piles and in unmanaged manure piles, this rarely happens. Aged manure is not the same as composted manure, and it is not safe to assume that pathogens in an aged manure pile have been destroyed.
Another strategy for destroying pathogens is pasteurization; some commercial poultry manure products are processed in this way. Pathogens begin to die once incorporated into garden soil, and research has shown that incorporating manure at least 120 days before harvest greatly reduces the risk of foodborne illness.
There have been many cases where vegetable gardeners have unknowingly used manure and composts that are contaminated with herbicides and have seen herbicide injury in their vegetable gardens. The herbicides of concern are broadleaf herbicides used on lawns, turfgrass, pastures, and hay crops. Some of these materials can retain herbicidal activity for a long time, even after passing through an animal’s digestive system, and even after the resulting manure is composted. Treated grass clippings, and compost made from treated grasses, can also retain residues. The herbicides do eventually breakdown and lose activity over time, particularly as they are exposed to microbes, heat and moisture. This can take place relatively quickly, or can take up to several years, depending on the situation.
On sensitive crops, these herbicides can cause poor germination and kill seedlings, and they cause new leaves to become twisted and malformed. Sensitive crops include a wide array of crops including tomato and other solanaceous crops, lettuce, beans and other legumes, strawberries, grapes, and most other vegetable crops.
If you purchase manures and composts, make sure to be aware of this possibility and get assurance that herbicides are not present.
To Minimize the Health Risks Associated with Using Manures in Home Gardens
- Wait at least 120 days after applying raw or aged manure to harvest crops that grow in or near the soil (root crops, leafy greens, strawberries). Wait at least 90 days for other crops.
- Once the garden is planted, avoid using animal manures unless they have been pasteurized or actively composted.
- Never use cat, dog or pig manure in your compost pile or your vegetable garden. These manures are more likely to contain parasites that infect humans than other manures.
- Wash vegetables before eating.
- People who are especially susceptible to foodborne illnesses should avoid eating uncooked vegetables from manured gardens. Those who face special risks from foodborne illness include pregnant women, very young children, and persons with chronic diseases.
While manures and composts are excellent soil amendments for the home gardener, gardeners should be aware of the potential environmental and health risks associated with using manures and manure-based composts. Regular soil testing can help gardeners avoid soil phosphorus buildup from continuously applying manures and composts to soils, and gardeners can follow some simple tips to reduce the health risks associated with applying fresh manures to vegetable gardens.
Download the Resource for the complete fact sheet and a printable version.
Hypoplastic left heart syndrome (HLHS) is a problem that happens when the left side of a baby’s heart doesn't form as it should. It’s smaller than normal and can’t pump enough blood to the body. After the baby is born, doctors can treat the problem with medicines and several surgeries. Some babies will need a heart transplant.
How Does the Heart Work?
The heart is a pump with four chambers. At the top are two atria (the right atrium and left atrium). Below them are two ventricles (the right ventricle and left ventricle).
The blood flows from the body into the right atrium.
Then it flows into the right ventricle and gets pumped through the pulmonary artery to the lungs.
The blood picks up oxygen in the lungs, then travels through the pulmonary veins to the left atrium.
The left ventricle pumps blood out through the aorta to the body to deliver the oxygen.
What Happens in Hypoplastic Left Heart Syndrome?
In hypoplastic left heart syndrome, the left ventricle is too small. The aorta, which takes the blood to the body, is small too. The heart can’t pump enough blood to the body.
The right ventricle, which is only supposed to pump blood to the lungs, pumps blood to both the lungs and the body through a connection called a patent ductus arteriosus (PDA). Usually, babies don't need this connection after they're born, so it closes. But a baby with hypoplastic left heart syndrome needs this connection to get blood to the body. Because the right ventricle is pumping blood to both the lungs and the body, it is doing extra work.
Babies with HLHS are almost always born with an atrial septal defect (ASD). This is a hole between the atria that lets blood with oxygen mix with blood low on oxygen. This way, the blood that the right ventricle pumps out to the body has some oxygen in it.
What Are the Signs & Symptoms of Hypoplastic Left Heart Syndrome?
A baby born with hypoplastic left heart syndrome may have:
blue or grayish coloring of the skin and nails
low energy and activity
fewer than normal wet diapers
What Causes Hypoplastic Left Heart Syndrome?
HLHS is a birth defect that happens when a baby is growing in the womb. No one knows exactly what causes it, but it could have a mix of causes, including a baby's genes (DNA).
How Is Hypoplastic Left Heart Syndrome Diagnosed?
Doctors sometimes can diagnose hypoplastic left heart syndrome before a baby’s birth if the problem is seen on the mother’s prenatal ultrasound scan.
If a baby is born with signs of the condition, doctors do tests to confirm the diagnosis. Treatment may include:
medicine to help balance how much blood goes to the lungs and how much goes to the body
a feeding tube that goes in the nose down to the stomach
cardiac catheterization or surgery to make the ASD bigger. This lets more blood with oxygen flow from the left side of the heart to the right side of the heart.
How Can Parents Help?
Learn as much as you can about hypoplastic left heart syndrome and the treatments your child needs. This will help you work with the care team and better help your child cope. Be sure to ask when you have questions.
You play a big role in your child's treatment. Keep a record in a notebook or in your phone of:
your child’s appointments, medicines, and any symptoms
any special instructions for taking care of your child at home
any questions you have for the care team
What Else Should I Know?
Children with hypoplastic left heart syndrome need intensive medical care from birth. They’ll have many follow-up doctor visits and tests, as well as surgery. At times this might feel overwhelming. You don't have to go it alone. The doctors, nurses, social workers, and other members of the care team are there to help you and your child. Talk to any of them about resources that can help your family.
Take time to take care of yourself too. Parents who get the support they need are better able to support their children.
It can help to find a support group for parents of children with serious heart conditions. Ask the care team for recommendations.
Fossil leaves may reveal climate in last era of dinosaurs
Richard Barclay opens a metal drawer in archives of the Smithsonian Natural History Museum containing fossils that are nearly 100 million years old. Despite their age, these rocks aren’t fragile. The geologist and botanist handles them with casual ease, placing one in his palm for closer examination.
Embedded in the ancient rock is a triangular leaf with rounded upper lobes. This leaf fell off a tree around the time that T-rex and triceratops roamed prehistoric forests, but the plant is instantly recognisable. “You can tell this is ginkgo. It’s a unique shape,” said Barclay. “It hasn’t changed much in many millions of years.”
What is also special about ginkgo trees is that their fossils often preserve actual plant material, not simply a leaf’s impression. And that thin sheet of organic matter may be key to understanding the ancient climate system – and the possible future of our warming planet.
But Barclay and his team first need to crack the plant’s code to read information contained in the ancient leaf.
“Ginkgo is a pretty unique time capsule,” said Peter Crane, a Yale University palaeobotanist. As he wrote in “Ginkgo,” his book on the plant, “It is hard to imagine that these trees, now towering above cars and commuters, grew up with the dinosaurs and have come down to us almost unchanged for 200 million years.”
If a tree fell in an ancient forest, what can it tell scientists today?
“The reason scientists look back in the past is to understand what is coming in the future,” said Kevin Anchukaitis, a climate researcher at the University of Arizona. “We want to understand how the planet has responded in the past to large-scale changes in climate – how ecosystems changed, how ocean chemistry and sea levels changed, how forests worked.”
Of particular interest to scientists are “hothouse” periods, when they believe carbon levels and temperatures were significantly higher than today. One such time occurred during the late Cretaceous period (66 million to 100 million years ago), the last era of the dinosaurs before a meteor slammed into Earth and most species went extinct.
Learning more about hothouse climates also gives scientists valuable data to test the accuracy of climate models for projecting the future, says Kim Cobb, a climate scientist at Georgia Institute of Technology.
But climate information about the distant past is limited. Air bubbles trapped in ancient ice cores allow scientists to study ancient carbon dioxide levels, but those only go back about 800,000 years.
That is where the Smithsonian’s collection of ginkgo leaves comes in. Down a warren of corridors, Barclay hops across millennia – as is only possible in a museum – to the 19th century, when the Industrial Revolution had started changing the climate.
From a cabinet, he withdraws sheets of paper where Victorian-era scientists taped and tied ginkgo leaves plucked from botanical gardens of their time. Many specimens have labels written in beautiful cursive, including one dated August 22, 1896.
The leaf shape is virtually identical to the fossil from around 100 million years ago and to a modern leaf Barclay holds in his hand. But one key difference can be viewed with a microscope: how the leaf has responded to changing carbon in the air.
Tiny pores on a leaf’s underside open to take in carbon dioxide and release water vapor, allowing the plant to transform sunlight into energy. When there is a lot of carbon in the air, the plant needs fewer pores to absorb the carbon it needs. When carbon levels drop, the leaves produce more pores to compensate.
Today, scientists know that the global average level of carbon dioxide in the atmosphere is about 410 parts per million – and Barclay knows what that makes the leaf look like. Thanks to the Victorian botanical sheets, he knows what ginkgo leaves looked like before humans had significantly transformed the planet’s atmosphere.
Now he wants to know what pores in the fossilised ginkgo leaves can tell him about the atmosphere 100 million years ago.
RUNNING AN EXPERIMENT
But first he needs a codebreaker, a translation sheet – sort of a Rosetta stone to decipher the handwriting of the ancient atmosphere.
That is why he is running an experiment in a forest clearing in Maryland.
One morning earlier this year, Barclay and project assistant Ben Lloyd tended rows of ginkgo trees within open-topped enclosures of plastic sheeting that expose them to rain, sunlight, and changing seasons. “We are growing them this way so the plants experience natural cycles,” Barclay said.
The researchers adjust the carbon dioxide pumped into each chamber, and an electronic monitor outside flashes the levels every five seconds.
Some trees are growing at current carbon dioxide levels. Others are growing at significantly elevated levels, approximating levels in the distant past or, perhaps, the future.
“We’re looking for analogues – we need something to compare with,” said Barclay. If there is a match between what the leaves in the experiment look like and what the fossil leaves look like, that will give researchers a rough guide to the ancient atmosphere.
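That matching step is, in effect, a calibration exercise. As a minimal sketch of the idea (all numbers and the linear interpolation are hypothetical stand-ins, not the team's actual calibration data or method), one could map a fossil leaf's pore density onto an ancient carbon dioxide estimate using pairs measured in the growth chambers:

```python
import numpy as np

# Hypothetical calibration from the growth chambers:
# CO2 level (ppm) -> average pore (stomata) count per mm^2 of leaf.
# Pore density falls as CO2 rises, as described above.
co2_ppm = np.array([400.0, 600.0, 800.0, 1000.0])
pores_per_mm2 = np.array([120.0, 95.0, 78.0, 65.0])

def estimate_co2(fossil_pore_density: float) -> float:
    """Estimate ancient CO2 from a fossil leaf's pore density by
    linear interpolation on the modern calibration curve."""
    # np.interp requires increasing x-values, so reverse both arrays
    return float(np.interp(fossil_pore_density,
                           pores_per_mm2[::-1], co2_ppm[::-1]))

# A fossil leaf with ~80 pores per mm^2 maps to roughly 780 ppm here
print(f"{estimate_co2(80):.0f} ppm")
```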
They also are studying what happens when trees grow in supercharged environments, and they found that more carbon dioxide makes them grow faster.
But Barclay adds: “If plants grow very quickly, they are more likely to make mistakes and be more susceptible to damage. ... It’s like a race car driver that’s more likely to go off the rails at high speeds.”
Christina Larson, Associated Press. Follow her on Twitter: @larsonchristina
Rays are part of the wood and run through the xylem from the centre of the wood outward to the cambium. They provide the radial transport of water and nutrients through the body of the wood. Wood rays that extend past the cambium into the phloem are called bast rays there.
Types of rays
If a wood ray begins directly in the pith of the wood, it is called a primary wood ray or medullary ray. If it begins in the xylem and not in the pith, it is called a secondary wood ray.
A single-row wood ray in secondary conducting tissue is exactly one cell wide. An aggregated (composite) wood ray consists of several rows of cells.
Wood rays in hardwoods
In hardwoods, the wood rays are made up only of storage cells. Depending on the species, they are one or several cell rows wide.
Only in some tropical wood species does the arrangement of the rays follow a regular pattern. Likewise, resin canals surrounded by epithelial cells occur within the wood rays of only some tropical woods, and are then often filled with white or dark deposits.
Wood rays account for roughly 8 to 33 percent of the volume of the hardwood body. There are also shrubs with smaller rays.
Rays in conifers
The structure of wood rays in softwoods is much more differentiated than in hardwoods. In species such as pine, spruce or larch, resin canals surrounded by epithelial cells are always present. The ray structure is either homocellular, consisting only of parenchyma cells, or heterocellular, consisting of ray parenchyma cells and transverse tracheids.
At the contact surfaces between the radial ray cells and the axially running longitudinal tracheids there are pits that allow water and nutrients to be transported. These cross-field pits (in contrast to the pits in hardwoods) provide information on the type of wood.
The volume fraction of the wood rays in the softwood substance is up to one percent.
If a wood ray does not contain a resin canal, it serves to transport water and nutrients and to store reserve substances. At the same time, wood rays increase both the strength and the rigidity of the wood. Trees with many thick rays are less likely to develop longitudinal cracks.
Softwoods such as firs or yews that do not have resin canals can form traumatic resin canals in the event of a wound, as the parenchyma cells (in this special function also called epithelial or excretory cells) excrete resin under high pressure into the wood ray.
Depending on the cut (radial, tangential or axial), wood rays appear differently on the wood surface. They are then referred to as “mirrors” and are often typical features of the respective type of wood. They are also used as design elements, especially in types of wood with pronounced rays.
NASA is experimenting with a new deep-space communication technology that is likely to be demonstrated on the International Space Station during spring. As of now, NASA relies on regular radio waves to communicate between Earth and spacecraft. Newer laser communication technology already offers higher data rates, so spacecraft can send larger data packets at one time. The technology now being tested is called the X-ray Communication system, or XCOM, and it offers advantages over radio waves. X-rays have shorter wavelengths than infrared and radio waves, so XCOM can send more data using the same transmission power.
The X-rays used in this communication system can be broadcast in tight beams using less energy while communicating across large distances. If the demonstration is successful, it could generate increased interest in this communications technology and permit efficient gigabits-per-second data rates for deep-space missions. Data transfer at such extremely high rates is not common in telecom, but recent research projects to speed up data processing have pushed the capability of computers to this range in a few fields. X-rays can also pierce the sheath of hot plasma that builds up when a spacecraft hurtles through Earth's atmosphere at hypersonic speeds.
This plasma sheath sometimes acts as a barrier to communication and cuts off the radio-frequency link between Earth and the spacecraft during reentry. The phenomenon was memorably portrayed in the movie Apollo 13, when the spacecraft was unable to communicate with anyone outside for several nail-biting minutes. To date, no one has used X-rays in a space communication system, and no application has yet been developed to ensure that such communication does not get cut off. NASA will use its Modulated X-ray Source (MXS) to generate rapid-fire X-ray pulses, switching the source on and off several times per second to encode digital bits for transmission.
In Québec, improvements in living conditions, advances in medicine and health care and access to education mean that current generations are living longer and are healthier than previous generations.
Increased life expectancy is a major success. Longevity has many advantages, both for individuals and communities.
However, perceptions of aging have changed little and many people are still afraid of growing old. Preconceptions about older adults persist, and older adults say that they are still ignored, treated like children, and subjected to prejudice.
Getting old is often associated with an overall decline in health and the development of various forms of limitations. Yet, most people age 65 and older describe their health as good, very good or excellent.
Participation in society
Many older adults are active in all spheres of society. They put their experience to use for the good of all and include:
- elected officials;
- family caregivers;
Older adults must be given an opportunity to keep or take their place in society. Their citizen and social participation, be it volunteer or paid, contributes to the vitality of communities and needs to be encouraged and supported.
Effects of the social inclusion of older adults
Social inclusion gives older adults a sense of purpose, of accomplishment and of belonging to the community. It allows older adults to stay active and to continue to contribute to the development of society based on their needs, preferences and abilities. Social participation fosters ties that prevent isolation.
Impact of stereotypes and prejudice
Stereotypes and prejudice such as ageism can lead to discriminatory practices that devalue older adults’ experience and expertise. Due to a lack of social recognition, older adults may turn inward, feel fragile and useless and have low self-esteem. These phenomena lead to isolation and withdrawal from all forms of involvement in society.
It is important to take action, in particular by recognizing the value of older adults and their contribution to society.
Last update: March 9, 2020
What we do
CIAT generates knowledge about climate change impacts and identifies adaptation options for the rural poor, as well as options that can help mitigate climate change.
How we do it: Solutions from agricultural and climate science
CIAT research on crops has made significant strides in developing new generations of more resilient varieties, such as drought-tolerant rice and beans, insect- and disease-resistant cassava, and superior tropical forages adapted to drought, flooding, and other harsh conditions. Some of the latter also show great potential for sequestering carbon and reducing nitrous oxide emissions.
Center specialists in soils are exploring the close links between improved soil health and climate change adaptation and mitigation.
Center experts on decision and policy analysis are developing and applying novel methods to project the likely impacts of climate change on agricultural production. One such tool, called Climate Analogues, permits comparisons between projections of future climates at specific locations and similar conditions already existing at other sites on the same or other continents.
In addition, CIAT scientists are assessing best-bet policies and actions to enhance farming systems and ecosystem services, despite a hostile future climate. The quick and easy-to-read Climate-Smart Agriculture (CSA) country profiles, developed by CIAT and CCAFS, in partnership with the World Bank, Costa Rica’s CATIE, and USAID, give an overview of the agricultural challenges in 12 countries, and how CSA can help them adapt to and mitigate climate change.
Why Do Astronomers Care About Super-Old Galaxies?
A long time ago, our universe was dark.
It was just 380,000 years after the big bang. Up until that age, our entire observable cosmos was less than a millionth of its present size. All the material in the universe was compressed into that tiny volume, forcing it to heat up and become a plasma. But as the universe expanded and cooled, eventually the plasma changed into a neutral gas as the first atoms formed.
And at that moment, over 13 billion years ago, the lights went out. Radiation released during the phase transition process quickly cooled and dimmed below the visible range of wavelengths. But although the universe was dark, it was not without movement. Through the ensuing hundreds of millions of years, clumps of matter began to collect, forming ever more prominent and ever denser structures.
And in one hidden corner of the universe, a clump of gas reached the critical temperatures and pressures to ignite nuclear fusion. In that instant, the first star ever began to shine.
It was soon joined by billions more, and as time went on those stars began to collect into the first galaxies. Those galaxies, like our own Milky Way, persist to the present day, where astronomers can study them, and we can gaze upon them in awe and wonder.
And yet, even though we know there was a time before stars and galaxies, and we know that stars and galaxies inhabit the present-day universe, we have no direct observational evidence of the ignition of the first generation. The light from that distant epoch is simply too dim to see.
But that’s where James Webb comes in. That observatory is tuned to detect infrared radiation. The light released from the first galaxies was once as bright and intense as from our own, but over the billions of years, it has dimmed and stretched into the infrared. James Webb is a hunter of those early firstborn galaxies.
Hence all the excitement as already, barely a month into its first science observation campaign, the James Webb is smashing record after record, finding the youngest known galaxies ever observed.
Those galaxies are like precious baby pictures from a forgotten childhood. We do not currently know how the first stars and galaxies formed; we do not know how quickly or slowly the process unfolded, and what violence or exotic forces accompanied those births. We remain ignorant as to how our own Milky Way came to be so long ago.
The birth of the first stars and galaxies – known poetically to astronomers as the Cosmic Dawn – remains the last major frontier in observational cosmology. And James Webb is leading us into the light.
Central Line-Associated Bloodstream Infections and MRSA Blood Infections
A central line-associated bloodstream infection (CLABSI) is one type of HAI in which bacteria enter a person’s bloodstream via a central line. A central line is a type of IV catheter that is placed into a large vein of the body and can be used for many purposes, including administering medications and IV fluids, measuring blood pressure, and removing blood for laboratory testing. It is important to prevent sources of infection from central lines, because once an infection gains entry into the bloodstream, it can quickly spread throughout the body.
Methicillin-resistant Staphylococcus aureus (MRSA) is a type of Staph bacteria that has developed resistance to several types of antibiotics. MRSA infections are much more difficult to treat because fewer medications are effective against them.
Though we don’t often think about it, our bodies are covered with bacteria, most of which do not cause us any harm. In fact, many of the bacteria that live on our skin are actually helpful. However, when the skin barrier is broken or the immune system is weak, there is an increased risk that one or more of those normally harmless bacteria could enter into the body and cause an infection. According to the CDC, studies indicate that about 2 of every 100 people have MRSA bacteria that live on their skin surface. MRSA could gain access to the bloodstream by spreading from another part of the body or by entry around or through a central line.
How does UAB Medicine perform?
Standardized infection ratio (SIR) is a number used to measure, track, and compare health care-associated infections (HAIs) among different health care settings and providers. This number compares the actual number of HAIs at each health care setting to the predicted number of infections based upon the type of patients treated in that particular setting. The national standard for the SIR is 1. Numbers greater than 1 indicate that the medical center is associated with more HAIs than predicted, while numbers smaller than 1 indicate that the medical center is associated with fewer HAIs than predicted.
| Safe Care Measures | National Benchmark for SIR | UAB SIR | Comparison Analysis |
| --- | --- | --- | --- |
| CLABSI (central line-associated bloodstream infection) | 1.0 | 0.838 | Fewer HAIs than predicted |
| MRSA (methicillin-resistant Staphylococcus aureus) blood infection | 1.0 | 1.674 | More HAIs than predicted |
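The arithmetic behind the SIR is simple division. Here is a minimal sketch of how the ratio is computed and read; the observed and predicted counts below are hypothetical illustrations, not UAB's actual surveillance numbers:

```python
def standardized_infection_ratio(observed: int, predicted: float) -> float:
    """SIR = actual number of HAIs / predicted number of HAIs."""
    if predicted <= 0:
        raise ValueError("predicted HAI count must be positive")
    return observed / predicted

# Hypothetical example: 31 observed CLABSIs against 37 predicted
sir = standardized_infection_ratio(31, 37.0)
print(f"SIR = {sir:.3f}")

# Reading the ratio against the national standard of 1
if sir < 1:
    print("Fewer HAIs than predicted for this patient mix")
elif sir > 1:
    print("More HAIs than predicted for this patient mix")
else:
    print("Exactly as many HAIs as predicted")
```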
What is UAB Medicine doing to improve?
UAB Medicine’s CLABSI prevention bundle uses evidence-based strategies. A bundle is a set of 3-5 health care practices that have been proven to reduce infection rates when used consistently together as a group. The CLABSI prevention bundle includes the following strategies:
- Promptly remove central lines that are no longer essential. Similar to our practice with urinary catheters, staff evaluate for the continued need of the central line daily with a goal of removing lines when they are no longer truly needed. Removing central lines at the earliest opportunity reduces the risk of infection.
- Central lines are only to be placed under sterile conditions and only accessed using sterile technique.
- Another intervention taken involves using a powerful antibacterial agent called chlorhexidine. Chlorhexidine is a chemical that is often used to clean the skin of patients before surgery; it kills germs by creating holes in their outer surface. Because the skin is covered with bacteria and germs we cannot see, chlorhexidine can be used as a treatment to help prevent infection from occurring when the skin barrier is interrupted by a medical device like a central line. UAB Medicine encourages every patient with a central line to agree to daily bathing with chlorhexidine. To further decrease our patients’ risk of CLABSI, UAB Medicine recently began using central line dressings that contain chlorhexidine.
(cr) What role does the sun play for the climate on earth? This can best be answered by looking at the past, when humanity did not yet influence the climate. The large climatic swings between ice ages and warm periods such as the last 10,000 years are largely due to the position of the sun with respect to the earth. In addition, the sun by itself is not a constant source of energy, and many climatic variations were caused by small fluctuations in solar activity.
Despite the natural variability of the sun, it is highly likely that the global warming of the last 50 years is substantially influenced by human activities. This is the conclusion that Dr. Jürg Beer from the EAWAG drew during the parliamentary meeting in December 2002. At the same meeting, Dr. Stefan Nowak, the program director of the photovoltaic project of the Federal Department of Energy BFE, discussed the potential and limitations of producing electricity by photovoltaic technology. Studies show that a diversity of renewable energy sources may replace about 20% of conventional energies at comparable cost. Photovoltaics is most suitable for already built surfaces such as roofs with favorable orientation. More than 15% of the current Swiss electricity consumption may be produced in this way.
Not everyone is aware of this vital protein, COLLAGEN, that exists in the body. Everything ticks along nicely until something happens to cause the supply of collagen to weaken or break down.
TISSUES AND ORGANS HELD TOGETHER
In Dr. Irwin Stone’s book, Vitamin C—the Healing Factor, he explains that in the heart for instance, ‘the pump and flexible pipes in the system must be rugged to start with and must be in a constant state of self-repair and maintenance to withstand the continual wear and tear of the alternating mechanical stresses of fluid flow. Should any structural weakness in the walls occur or leaks develop anywhere in the closed system, we are in serious trouble with heart disease, strokes and haemorrhaging. The main structural element from which this system is built and which provides the strength, elasticity and ruggedness is the protein COLLAGEN.’ He goes on to explain that collagen acts like a cement substance which holds the tissues and organs together. Others have likened it to the embedded fibres in fibreglass composites. Bone and its connecting ligaments and tendons receive their strength and flexibility from this long string-like protein molecule, collagen.
The essential substance needed for the synthesis of collagen is ascorbic acid or vitamin C. Without ascorbic acid, collagen cannot be produced. If too little ascorbic acid is present during the synthesis of collagen, it will be defective and structurally weak. Collagen also affects the ability of the skin and blood vessels to withstand the impacts that lead to bruising. Dr Stone remarked ‘without collagen the body would just disintegrate or dissolve away.’
BONES AND JOINTS AFFECTED
Arthritis and rheumatism are often referred to as ‘the collagen diseases’ because of the definite involvement of this protein in their genesis and cause. It is the deprivation of ascorbic acid with the consequent synthesis of poor quality collagen or no synthesis at all, which brings on the most distressing bone and joint effects of clinical scurvy. The weakening of bones and the risk of bone fracture in the elderly could also be reduced.
LINK BETWEEN MALIGNANT CANCER AND SCURVY?
In their book, "Vitamin C. The Real Story", Steve Hickey and Andrew Saul refer to the work of WILLIAM J. McCORMICK (1880-1968) a Toronto physician, who noted that supplementation of ascorbic acid rapidly enhanced collagen synthesis. McCormick pioneered the idea that ascorbic acid deficiency was the cause of many diverse conditions from cardiovascular disease to cancer. McCormick noted that if collagen is abundant and strong, body cells hold together well. He observed similarities with malignant cancer and scurvy. If the collagen matrix surrounding a tumour breaks down, it has the effect of disturbing the tight arrangement of cells, thus making it easier for malignant cancer cells to spread. McCormick observed that cancer sufferers typically had very low levels of vitamin C and that the symptoms of scurvy closely resemble some types of leukaemia and other forms of cancer.
As explained in a previous article, the problem exists as a result of the genetic mutation some 60 million years ago, which left humans without the essential enzyme to make ascorbic acid in their bodies, as nearly all other mammals do. It means that ‘we humans are living in a state of sub-clinical scurvy.’ So while an individual might ingest just enough ascorbic acid to prevent him from getting scurvy, he might not be getting enough to ensure an adequate supply for the proper synthesis of collagen.
POSSIBLE PREVENTION OF ATHEROSCLEROSIS
Dr Stone observed that improved prevention and treatment for atherosclerosis is possible when it is appreciated that this is an inflammatory disease. Inflammation of the artery wall stimulates plaque formation. However, a sufficient supply of ascorbic acid can prevent this and subsequent heart disease from happening.
INFLAMMATION TO BLAME
For decades the medical profession has been convinced that cholesterol and ‘bad’ fats cause heart disease, although there was never any hard evidence. Because of the presence of cholesterol, it was assumed that cholesterol was the cause. It was just not understood that the cholesterol was there because of the inflammation, as heart surgeon Dr. Dwight Lundell explains. People with low levels of vitamin C are more likely to suffer from inflammation. While plaque formation can be active or remain dormant, Dr Stone observed that the inflammation in plaques is often caused by infection.
The body constantly requires vit.C to repair damage to tissues and with atherosclerosis the repair mechanisms fail. Dr Stone suggested that atherosclerosis and heart disease could be a result of chronic sub-clinical scurvy. As mentioned earlier, most animals synthesise vit.C and so do not suffer from atherosclerosis.
The current recommended dietary allowance (RDA) for vitamin C for adult non-smoking men and women is 60 mg/d, which is enough to avoid scurvy but not nearly enough for anyone who is stressed, elderly or ill. No allowance is made for the long term effects of vit.C deprivation. Dr Stone recommends a daily supplement of twice the RDA to stay healthy. See article on vitamin C.
Stone believed that the high incidence of cardiovascular disease is because so many people depend on food as their source of ascorbic acid which may be only at sub-marginal levels. These intakes are usually inadequate for the production and maintenance of optimal high strength collagen over long periods of time. Stone observed, ‘The body is subjected to many other ascorbic acid depleting stresses, so abundant ascorbic acid must be available. An inadequate intake of ascorbic acid is a factor in coronary thrombosis due to impaired collagen production causing capillary rupture and haemorrhage in arterial walls.’ In one hospital it was noted that 81% of coronary patients had subnormal levels of blood plasma ascorbic acid.
FASTER RECOVERY FREEING HOSPITAL BEDS
Stone proposed that the routine daily administration of a few grams of ascorbic acid to hospital admissions would hasten their recovery and shorten hospital stays, thus freeing beds more quickly. Many patients arriving in hospital are already in a pre-scorbutic state. Stone suggested: “When blood samples are taken, why not test for levels of ascorbic acid at the same time?” It could save lives.
We will use the next few days to see how well you understand each of the things that we have learnt over the past few weeks. In today’s Maths practice test there are opportunities to write number names, add (plus), double, work out bonds, and count backwards. In the Phonics practice test there are beginning sounds and jumbled words. There is a new story to read, and then for writing, you will match sentences to the correct pictures. We will learn about the different bones in our body in Life Skills and end with Afrikaans, where you can show off how well you know the words that you learnt about the kitchen. Let’s get started!
The thing about tests… Please read the note below.
This week includes a few practice tests. The work in the tests is based on the work that we have been doing from Day 1 of the lessons on this site. I do not recommend the tests for any child who has not been thoroughly prepared to do the work. Rather go back to previous lessons if your child has only recently started with Term 2’s work. The search function can also assist if you would like to go back to specific topics.
The tests should be done as informally as possible and with as little pressure as possible. The only reason why work should ever be assessed is so that we can have an indication of where more help is needed, or where a concept is not yet fully developed. So, don’t give the answers or help beyond the instructions, but do give lots of encouragement to your child. If they get stuck, encourage them to think back to how we did this previously. The tests are available with memos here if you would like to print them. If you do not have access to a printer, just recreate them by hand. The memos give a clear indication of how marks have been assigned and what a teacher would be looking for.
Maths – Practice test 1:
Phonics: Learn the -ot words. Sound them out, build them with your sound cards and when you’re ready, ask Mom or Dad to test how well you know them.
Phonics – Practice test:
Reading – words: Revise the words. Focus on the newer ones. Put the ones that you don’t yet know in a separate pile and learn them some more. Remember that you need to be able to say the words immediately when you see them.
Reading: We have a brand new story today! Sound out the words that you don’t know. If you get stuck on some of the words in the lists above, practice them some more. Try to read more fluently every day. Also try to read with expression. Remember that your voice needs to sound as though you are asking a question when there is a question mark, and more excited when there is an exclamation mark.
Writing – Practice test:
Life Skills: Read about all the bones in your body and say what you think. This page was taken from the Department of Basic Education’s Life Skills book for Terms 1 and 2.
Afrikaans First Additional Language – Vocabulary: Today is a chance to see how well you remember the words that we learnt about the kitchen. Point to the things that you hear in the video.
You did an amazing job! Just look at how clever you are!
Now go and relax and have fun!
Water is often ignored as an important factor in achieving optimal health. Even many health conscious people give little thought to the actual quality of the water they are drinking, assuming that if the water is free of harmful microbes, then it is beneficial to the body. Water is commonly referred to as the ‘universal solvent’, which indicates its ability to dissolve an incomparable number of substances. This fact, along with the essential use of water in every metabolic process in the body, highlights the importance of consuming pure, clean, quality, health building water.
The chemical formula for water is H2O, which indicates that it contains two atoms of hydrogen and one atom of oxygen. In nature, water is never found as one single molecule, but exists as a group of H2O molecules clustered together. The minimum number of H2O molecules clustered together is normally five or six, which is found with some spring water and deep artesian well water. Most tap water is composed of water that contains ‘20-H2O’ to ‘30-H2O’ clustering, which is related to the presence of contaminants, with their ability to give the water a ‘stickier’ quality.
The human body is composed of approximately 55 to 60% water, with the brain being composed of roughly 70 to 80%. Our blood is made up of 93% water. The brain and spinal cord of every mammal is surrounded by water.
A small percentage of the body’s water can be produced through metabolic processes, while the remaining water requirements must be satisfied through the diet.
Research has shown that micro-structured water, which reduces clustering of water molecules into smaller, ‘5-H2O’ or ‘6-H2O’ clusters, more easily penetrates cell membranes, increasing hydration and thereby improving cell metabolic functions. Dissolved substances in water increase the size of the clusters of water molecules, making the water less hydrating.
It’s important to be aware that even drinking juices or teas is far less hydrating than drinking pure clean water, while the drinking of sweetened liquids of any kind may increase the retention of water, further decreasing the re-hydration of the cells. For this reason it’s always best to drink pure clean water in order to optimally satisfy the body’s metabolic needs.
An increasing amount of recent research on the role of water in cell biology, has shown the vital importance of water in cellular communication, with water being more like a liquid crystal semiconductor, with information-storage capacity, and also functioning like a battery, rather than water just functioning as a simple solvent. This discovery heightens the importance of adequate cellular hydration for optimal communication between cells and optimal brain function.
Additional recent research has also served to strengthen the case that water has the ability to store impressions it receives from substances and energies that it is subjected to. Water retains the memory of all the impressions it receives, both the good and the bad. Even filtered water can retain the memory of pesticides, pharmaceuticals as well as other poisons that are released into the air or spread on lawns or agricultural fields. Studies have shown that some influences on water last only a few minutes while others seem to persist for several days or months.
The atomic and subatomic memory of water is clearly evidenced by the action of highly diluted homeopathic medications. While high-powered microscopes are unable to detect the slightest trace of the initial substance in the dilutions, the resulting diluted homeopathic water has demonstrable effects when ingested into the body. Paradoxically, the more diluted the substance, the stronger its homeopathic effect. Although no device exists capable of detecting this micro-information on the bio-physical plane, its existence is clearly demonstrated. Further research in this area would undoubtedly shed a great deal more light on the storage of influencing substances and energies in water and their effects on human health.
A more easily, well researched aspect of water, relates to the many physical and emotional issues that can result from chronic dehydration. The body exhibits the sensation of thirst when it is suffering from a reduction in water reserves and by that point it may be deficient by as much as 2%. Reducing the body’s water content by as little as 2%, through diuresis or dehydration, can result in noticeable fatigue, along with confused thinking and disorientation. As a result, many doctors and naturopaths recommend drinking water before you become thirsty. A reduction of 10% can cause problems ranging from musculoskeletal issues (joint pain, back pain, cramps) to digestive issues (heartburn, constipation), immune problems or allergies, and cardiovascular symptoms or anginal pain.
Symptoms that can be associated with chronic dehydration include:
- dyspeptic pain
- colitis pain
- false appendicitis pain
- hiatal hernia
- rheumatoid arthritis pain
- low-back pain
- neck pain
- anginal pain
- high blood pressure
- high blood cholesterol
- excess body weight
- excess hunger
- asthma and allergies
- chronic fatigue
- cognitive impairment
- attention deficit disorder
Remaining adequately hydrated is dependent on ingesting enough water to replace water that is eliminated through the various metabolic activities occurring in the body. A commonly held rule of thumb for daily water intake is to take your body weight, divide it by two and drink that amount in ounces of water per day. For example: Body weight 130 lbs divided by 2, equals 65 ounces per day. For any dehydrating medications or beverages, such as coffee, tea, alcohol, some herbal teas, all juices and sodas, add another 12 to 16 ounces of pure water to your daily intake for every 8 ounces of diuretic beverage consumed. This is one reason why coffee is served with a glass of water in some parts of the world.
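As a minimal sketch of that rule of thumb (the function and numbers are illustrative only; this is a folk guideline from the text above, not medical advice), the calculation looks like this:

```python
def daily_water_ounces(body_weight_lbs: float, diuretic_oz: float = 0.0) -> float:
    """Rule of thumb: half your body weight in ounces of water per day,
    plus roughly 12-16 oz extra per 8 oz of diuretic beverage consumed."""
    base = body_weight_lbs / 2.0
    extra = (diuretic_oz / 8.0) * 14.0  # midpoint of the 12-16 oz range
    return base + extra

# Example from the text: a 130 lb person needs about 65 oz per day;
# two 8 oz cups of coffee would add roughly 28 oz more.
print(daily_water_ounces(130))                  # 65.0
print(daily_water_ounces(130, diuretic_oz=16))  # 93.0
```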
Strenuous physical activity may also require the ingestion of additional water to keep up with metabolic needs. Drinking more than a gallon a day should be avoided, however, since too much water can also adversely affect the body’s metabolism.
Along with the essential need to stay hydrated is the need to be sure that the water ingested is free of harmful substances. Testing of water sources is discovering an increasing number of hazardous substances, including prescription and illegal drugs, birth control pills, pesticides, herbicides, fracking chemicals, heavy metals and many other of the one hundred thousand toxic chemicals utilized around the world. The ingestion of these harmful substances can cause severe health problems and will block any chance at achieving optimal health.
An overview of the different categories of water gives various options, which also differ in their ability to aid in the achievement and maintenance of optimal health. Please note that the water from all of these sources should ideally be tested to determine if it is free of harmful substances and filtered, if necessary, to insure purity:
Spring Water– The best choice, if available and affordable, is natural mountain spring or artesian spring water, since it is often micro-structured and highly energized, and frequently uncontaminated with harmful organisms and substances. The water at Lourdes in France, as well as other highly popular artesian waters, is of this health-building quality.
Well Water– The next best choice would be well water that is free of harmful chemicals and harmful biological organisms. The problem with this water, however, is that its source is a water vein or aquifer, which, particularly on the plains, can be polluted with a long list of poisons due to factory farming and high-yield agriculture.
Rainwater– Ground water evaporates and rises in a purified form into the atmosphere, where it accumulates and eventually returns to earth as rain or other forms of precipitation. Since rain results from the evaporation of water, the resulting collected rainwater is lacking in mineral salts.
There has been an ongoing discussion about the effect on the body of mineral ‘deficient’ water, with claims that minerals already in the body are leached out of the bones, teeth and other tissues of the body, and excreted, leaving the body deficient in critical minerals necessary for health. There are also contrary claims that since the body obtains the overwhelming majority of minerals from ingested food, that any lack of minerals in the water is insignificant in its affect on health. Along with this claim is the idea that organically bound minerals in food are much better absorbed and utilized by the body anyway, minimizing the problem of a lack of inorganic minerals in water. And related to this, it is also claimed that some inorganic minerals in the water are deposited in joints and other areas in the body as a way of preventing them from adversely interfering with metabolic activities. Additional studies of these claims would go a long way in establishing how vital mineralized water actually is to the overall health of the body. Rainwater also may contain various types of industrial wastes and evaporated pollutants that were suspended in the atmosphere. Acid rain is a common result of these pollutants, with its deposition being a health threat to all living things.
Surface Water– Water collected on the earth’s surface in streams, lakes, pools or ponds is also subject to the dissolution of the various pollutants that have been deposited on the land. Water on the surface of these various sources is affected to some extent by daylight and sunlight. Heating will result in the water losing oxygen and too much exposure to sun can result in the growth of bacteria and algae.
Tap Water– Along with the need to filter out any number of the thousands of toxic chemicals and heavy metals in tap water, is the additional fact that in the United States, tap water typically contains chlorine and/or chloramines as well as added fluoride. Since all of these chemicals are highly toxic, it is essential that they are filtered out if your intention is to drink purified tap water. Since most tap water is pressurized, it is also important to consider the effect of pressurization on the water. The molecule clusters of water that have been compressed have a lessened ability to dissolve and eliminate impurities, which may result in the deposition of these harmful substances in joints and on the walls of the blood vessels. Water that has been pressurized is also less capable of supplying the body’s cells with the essential nutrients needed for optimal health.
Boiled Water– The boiling of water has been used for thousands of years as a way to make it safer to drink. It is a single-step process which eliminates most microbes responsible for causing intestinal diseases. In places having a proper water purification system, it is only advocated as an emergency treatment method or for obtaining potable water in the wilderness or in rural areas, but it cannot remove all chemical toxins or impurities. The traditional advice of boiling water for ten minutes is mainly for additional safety, since microbes start getting eliminated at temperatures greater than 60 °C (140 °F). Boiling water is reputed to be an effective method to eliminate the micro-information contained in water molecules. Brief boiling for just a few seconds will allow it to lose its memory only briefly, with the water reconstructing itself after a few minutes. According to Ayurvedic and Chinese medicine it is necessary to boil the water for seven minutes in order to eliminate the micro-information completely.
Distilled Water– Distillation consists of bringing water to a boil until it begins to vaporize, leaving mineral sediment behind. The vapor is collected in a container and once cooled, it turns back into liquid. Back when our atmosphere wasn’t contaminated, rain was pure and clean, much like distilled water. With the current widespread contamination of water with hundreds of thousands of potential chemicals, an important consideration is that any volatile organic compounds (VOCs) will evaporate before the water does.
In a properly made distillation unit, the tap water is preheated to just below the boiling point to drive off the compounds that are lighter than water. Once those compounds have evaporated, the water is heated just to the boiling point and is sent to the condensation chamber to return to its liquid state as pure water. A unit that doesn’t include the preheating phase will evaporate the VOCs along with the water, so they’ll condense with the steam and remain in the finished product. While this won’t concentrate the harmful compounds, it will leave the purifying job half done. With the proper unit you will end up with pure distilled water. There are those that claim that distillation is the only way to insure that the water is free of harmful substances. As was the case with rainwater, there are various claims about the effect on the body of water lacking minerals.
Filtered Water– Carbon Filters– Most carbon filters utilize a special form of carbon called activated carbon. Water easily passes through an activated carbon filter, which provides an almost unbelievably large surface area (125 acres per pound). Activated carbon is used in both the solid block and granular forms. It takes water longer to pass through block carbon, which makes this form more effective at absorbing contaminants. Activated carbon filters are best suited for removing organic pollutants like insecticides, herbicides and PCB’s. They can also remove many industrial chemicals and chlorine. Activated carbon will not remove most inorganic chemicals, dissolved heavy metals (like lead) or biological contaminants. Carbon filters provide a fertile breeding ground for bacteria. If water hasn’t been treated with chlorine, ozone or some other bactericidal method before undergoing carbon filtration, any bacteria in the water will become trapped inside the filter and further contaminate the water that runs through it.
Carbon filters also begin to lose their effectiveness over time. As the filter reaches its absorption capacity, more and more impurities will be left in the water. And since the water will continue to flow easily through the filter, there is no true way of knowing if the filter is still functioning properly without running a water quality test. This makes it essential that the filter be changed at regular intervals or when its water filtration capacity has been reached, whichever comes first.
Filtered Water-Ceramic Filters– Rust, dirt, parasites like Cryptosporidium and Giardia lamblia, and other impurities can be easily removed from drinking water by forcing the water through the very fine pores in ceramic material. They’re also ideal for travel or backpacking, since they can be repeatedly cleaned by simply scrubbing the outside of the ceramic material. Ceramic filters, however, are not effective at removing organic pollutants or pesticides.
Reverse Osmosis– Reverse osmosis (RO) is a process whereby water is forced through a semipermeable synthetic membrane. RO systems can remove anywhere from 90% to 98% of heavy metals, viruses, bacteria and other organisms, organic and inorganic chemicals. Reverse osmosis filtration will remove chlorine and chloramine but may not filter out all of the fluoride from the water. RO systems require a minimum water pressure of 40 psi to work properly, and steps must be taken to insure the integrity of the membrane, which has to be replaced every few years, since the membrane will degrade in the presence of chlorine and turbid water. This makes a carbon prefilter essential.
The best overall solution for clean pure water is multi-stage filtered reverse osmosis water from a local vendor, either delivered or by filling your own bottles. If you are drinking RO water and want to insure that you are getting water with extremely low fluoride levels, then filter RO water through fluoride filters. Otherwise the best solutions are distilled water from your own quality distiller or spring/artesian water, with the best sources being Acqua Panna, Gerolsteiner and Pellegrino.
The Healing Power of Energized Water
Ulrich Holst 2010
The lab environment aims to mimic the conditions your embryos would experience if they were growing in your reproductive tract. This includes the right oxygen pressure, temperature and nutrients, as required for all the different stages of development. In addition, we continuously monitor the growth of your embryos using time-lapse imaging. However, despite these efforts, some embryos may not progress to the blastocyst stage.
Approximately 60% of fertilised eggs become blastocysts. This means that around 40% of embryos stop growing before becoming a day 5–6 embryo. This is known as embryo arrest and occurs when an embryo stops dividing for 24 hours.
Not all embryos that reach the blastocyst stage are suitable for embryo transfer or freezing, as they may not have all the components necessary to result in a healthy pregnancy. Generally, around 40–50% of fertilised eggs become blastocysts that we can transfer or freeze. However, this varies greatly depending on your age and medical history. There is also a small group of individuals who have poor embryo development, which may be due to developmental-specific events or a pattern of embryo progression. Most IVF patients experience embryo arrest in some form, and it is usually a protective mechanism for stopping the development of abnormal or poor-quality embryos.
There are many reasons why an embryo might stop developing. The embryo could have reduced metabolic activity or slow development and as a result, degenerate. In addition, embryos can stop growing during different stages of development. They may fail to reach the blastocyst stage for several reasons discussed below.
Around 70% of arrested embryos display chromosomal errors.1 Chromosomes are rope-like structures inside your cells that contain DNA – i.e. the instruction manual that makes you unique. When sperm and egg come together, the mother and father pass on 23 chromosomes each, so that the resulting embryo has a total of 46 chromosomes.
Sometimes, chromosomes can fail to combine correctly, leading to chromosomal errors. These may include an embryo having too many or too few chromosomes.
In addition, chromosomal errors can develop during the replication and division of the cells in the embryo. If an embryo divides abnormally during the early stages of its development (also known as the cleavage stage), this can lead to an abnormal distribution of chromosomes between cells and result in embryo arrest. Cells within the embryo can also have abnormal DNA replication and/or damaged DNA leading to embryo arrest.
Some chromosomal errors do not stop the embryo from growing, which is why preimplantation genetic testing (PGT) may be recommended.
Usually, a cell within an embryo divides from one cell into two and distributes its chromosomes evenly. However, in some instances, a cell within an embryo divides from one cell to three. This is called Direct Uneven Cleavage (DUC). When DUC occurs in the first cell division, there is a higher chance of embryo arrest occurring.
The chances of embryo arrest occurring also depend on how much the cells are affected. Sometimes, an embryo may divide very quickly from one cell to two and three cells, and this rapid division can be difficult to differentiate from DUC. Under these circumstances, the rapidly dividing embryo has a greater chance of becoming a blastocyst.
Another cell division error can occur if the cell fails to divide but the nucleus (the information centre of all cells which contains your chromosomes and DNA) continues to replicate. This can lead to there being more than one complete set of chromosomes inside a single cell. If this occurs in several cells, the embryo will arrest; however, if this phenomenon is present only in a few cells, the embryo still has the potential to reach the blastocyst stage.
Early cleavage within the embryo relies on special products inside the egg to drive development. Sometimes, defects in the development of an embryo reflect the quality of the egg and can cause the embryo to stop dividing.
Embryos can also undergo instructed cell death (known as apoptosis). Apoptosis is a biological mechanism that aims to remove any unwanted or damaged cells from the embryo in its early stages of development. If enough apoptosis occurs, the embryo can fail to develop further.
Mitochondria are like little organs inside a cell that act as a power supply. Specifically, they produce an energy-carrying molecule called ATP (short for adenosine triphosphate). Inherited only from the mother’s egg, mitochondria produce the energy that eggs and embryos need to function properly. During the early growth stages of an embryo, mitochondria undergo structural and positional changes that allow them to provide energy to the embryo and regulate their environment. These events are a key part of the development of an embryo before implantation takes place inside the womb.
As a woman ages, the quality of her eggs declines. Increasing maternal age can result in mitochondrial dysfunction due to changes or damage to the mitochondrial DNA – yes, mitochondria have DNA just like the nucleus of a cell. If the mitochondrial DNA is damaged, this can result in inadequate amounts of ATP or energy, as well as the loss of other important mitochondrial functions required following fertilisation. In addition, low mitochondrial DNA content is also associated with fertilisation failure and abnormal embryo development. Basically, if an egg or embryo does not have enough of a power supply, developmental processes will stop.
Between days two and three of embryo development, i.e. from the four-cell to the eight-cell stage, an embryo’s genome is activated. A genome refers to the genetic material (chromosomes containing DNA) inside a cell. When an embryo’s genome is activated, the embryo no longer relies on the egg to continue growing; rather, it uses its own cellular machinery. This change in embryonic genome activity is regulated by special products that mitochondria produce. Around 10% of embryos do not make the switch from maternal egg control to embryonic genome control. This means that an embryo on day two may be at the four-cell stage but fail to progress further if the genome switch does not occur.
There are many reasons why an embryo may not progress beyond a certain developmental stage. Throughout your cycle, our embryologists will phone you to keep you updated on the progress of your embryos. We know this can be an anxious time as you wait to hear how many of your eggs have been fertilised and then how many of these have developed into quality embryos suitable for transfer or freezing. If you have any concerns throughout this time, we encourage you to call us on (03) 8080 8933 for the extra support and information you need.
The information on this page is general in nature. All medical and surgical procedures have potential benefits and risks. Consult your healthcare professional for medical advice specific to you.
Radiocarbon dating and ancient DNA suggest that a matrilineal dynasty likely ruled Pueblo Bonito in New Mexico for more than 300 years.
“We are not saying that this was a state-level society,” says Douglas J. Kennett, head and professor of anthropology at Penn State, “but we don’t think it was egalitarian either.”
Archaeologists have described the Chaco Phenomenon as anything from an egalitarian society without any rulers at all, to a full-fledged state-level society or kingdom. The researchers now think that Chaco Canyon was not a leaderless conglomeration of people, but a hierarchically organized society with leadership inherited through the maternal line.
Very unusual burials
Typically, the only things found in prehistoric archaeological ruins to indicate elevated status are grave goods—the artifacts found with burials. Throughout the Southwest it is unusual to find formal burials within structures, because most people were buried with limited grave goods outside housing compounds, but in excavations sponsored by the American Museum of Natural History and carried out in the 1890s at Chaco Canyon, archaeologists found room 33 in Pueblo Bonito—a burial crypt within a 650-room pueblo dating between 800 and 1130—that contained 14 burials.
“It has been clear for some time that these were venerated individuals, based on the exceptional treatment they received in the afterlife—most Chacoans were buried outside of the settlement and never with such high quantities of exotic goods,” says Adam Watson, postdoctoral fellow in the American Museum of Natural History Division of Anthropology. “But previously one could only speculate about the exact nature of their relationship to one another.”
The researchers note in Nature Communications that this 6.5 by 6.5 foot room “was purposely constructed as a crypt for a high-status member of this nascent community and ultimately his lineal descendants.”
The initial burial was of a male in his 40s who died from a lethal blow to the head. He was buried with more than 11,000 turquoise beads, 3,300 shell beads, and other artifacts including abalone shells and a conch shell trumpet originating from the Pacific Ocean and Gulf of California far from central New Mexico. This burial is the richest ever found in the American Southwest.
Another individual was buried above this initial interment and a split plank floor placed above them. In the space above, another 12 burials took place over the span of 300 years.
How were these people related?
“We originally worked with Steve Plog (professor of archaeology at the University of Virginia) to radiocarbon date these burials,” says Kennett. “The results of this work had all the individuals dating to a 300 year period. Then the question came up, are they related?”
Kennett and Plog teamed up with George Perry, assistant professor of anthropology and biology at Penn State and Richard George, a graduate student in anthropology, to first examine the mitochondrial genomes of these individuals.
When the results came back, the researchers found that all the individuals shared the same mitochondrial genome sequence. Mitochondrial DNA (mtDNA) is inherited only from an individual’s mother, so matching mtDNA indicates that not only were all the individuals from the same family, but the inheritance was matrilineal—through the mother.
“First we thought this could be some kind of contamination problem,” says Kennett. “We checked for contamination, but found no evidence for it and David Reich’s laboratory at Harvard Medical School corroborated our results.”
Working with Reich, professor of genetics, the researchers then wondered if they could determine specific relationships among these individuals.
“Using DNA sequences from the nuclear genome combined with the radiocarbon dates, we identified a mother-daughter pair and a grandmother-grandson relationship,” says Kennett.
“For the first time, we’re saying that one kinship group controlled Pueblo Bonito for more than 300 years,” says Plog. “This is the best evidence of a social hierarchy in the ancient Southwest.”
Additional collaborators are from Penn State; Harvard University Medical School; Peabody Museum of Archaeology and Ethnology; and the American Museum of Natural History.
The National Science Foundation, the University of Virginia, and Penn State supported this work.
Source: Penn State
Explosive growth of life on Earth fueled by early greening of planet
Earth's 4.5-billion-year history is filled with several turning points when temperatures changed dramatically, asteroids bombarded the planet and life forms appeared and disappeared. But one of the biggest moments in Earth's lifetime is the Cambrian explosion of life, roughly 540 million years ago, when complex, multi-cellular life burst out all over the planet.
While scientists can pinpoint this pivotal period as leading to life as we know it today, it is not completely understood what caused the Cambrian explosion of life. Now, researchers led by Arizona State University geologist L. Paul Knauth believe they have found the trigger for the Cambrian explosion.
It was a massive greening of the planet by non-vascular plants, or primitive ground huggers, as Knauth calls them. This period, roughly 700 million years ago, virtually set the table for the later explosion of life through the development of early soil that sequestered carbon, led to the build-up of oxygen and allowed higher life forms to evolve.
Knauth and co-author Martin Kennedy, of the University of California, Riverside, report their findings in the July 8 advance online version of Nature. Their paper, "The Precambrian greening of Earth," presents an alternative view of published data on thousands of analyses of carbon isotopes found in limestone that formed in the Neoproterozoic period, the time interval just prior to the Cambrian explosion.
"An explosive and previously unrecognized greening of the Earth occurred toward the end of the Precambrian and was an important trigger for the Cambrian explosion of life," said Knauth, a professor in Arizona State's School of Earth and Space Exploration.
"During this period, Earth became extensively occupied by photosynthesizing organisms," he added. "The greening was a key element in transforming the Precambrian world - which featured low oxygen levels and simple, bacteria dominant life forms - into the kind of world we have today with abundant oxygen and higher forms of plant and animal life."
Knauth calls the work "isotope geology of carbonates 101."
In order to understand what happened on Earth such a long time ago, researchers have studied the isotopic composition of limestone that formed during that period. Researchers have long studied these rocks, but Knauth said many focused only on the carbon isotopes of Neoproterozoic limestones.
Knauth and Kennedy's study looked at a bigger picture.
"There are three atoms of oxygen for every atom of carbon in limestone," Knauth says. "We looked at the oxygen isotopes as well, which allowed us to see that the peculiar carbon isotope signature previously interpreted in terms of catastrophes was always associated with intrusions of coastal ground waters during the burial transformation of initial limestone muds into rock. It's the same as we see in limestones forming today."
Brave new world
By gathering all of these published measurements and carefully plotting carbon isotopic data against oxygen isotopic data, a process Knauth said took three years, the researchers began to formulate a very different type of scenario for what led to complex life on Earth. Rather than a world subject to periods of life-altering catastrophes, they began to see a world that first greened up with primitive plants.
"The greening of Earth made soils which sequestered carbon and allowed oxygen to rise and get dissolved into sea water," Knauth explained. "Early animals would have loved breathing it as they expanded throughout the ocean of this new world."
A key element to this scenario is not so much what the researchers saw in the data, but what was missing. When they plotted the data for various areas from which it was derived they kept noticing an area on the plots that contained little or no data. They dubbed it the "forbidden zone."
"If previous interpretations of carbon isotope data were correct, there would be no forbidden zone on these cross plots," Knauth said. "The forbidden zone would be full of Neoproterozoic data."
"These zones show that the isotopic fingerprints in limestone we see today started in the late Precambrian and must have involved the simultaneous influx of rain water that fell on vegetated areas, infiltrated into coastal ground waters and mixed with marine pore fluids. During sea level drops, these coastal mixing zones are dragged over vast geographic regions of the flooded continents of the Neoproterozoic," Knauth said. "Vast areas of limestone can form in these mixed pore fluids."
All of which points to an environmental trigger of the Cambrian explosion of life.
"Our work presents a simple, alternative view of the thousands of carbon isotope measurements that had been taken as evidence of geochemical catastrophes in the ocean," Knauth explained. "It requires that there was an explosive greening of Earth's land surfaces with pioneer vegetation several hundred million years prior to the evolution of vascular plants, but it explains how a massive increase in Earth's oxygen could happen, which has been long postulated as necessary for animals to evolve big time."
"The isotopes are screaming that this happened in the Neoproterozoic," he added. |
Diabetes is a serious disease that can be life-threatening if not managed properly. The number of diabetic cases being recorded is growing, and that's scary. Diabetes can lead to heart disease, stroke, blindness, kidney disease, and amputations. It also affects people of all ages and backgrounds—including you!
Diabetic patients have to be aware of their sugar levels, and they have to be extra careful to make sure they don't get too low or too high. The best way to do that is by keeping a close eye on the numbers and always making sure you've got enough supplies on hand.
If you're diabetic and you want to make sure you're doing everything right, it's important that you understand the basics of what your body needs.
There are two types of diabetes: Type 1 and Type 2. Both types can be managed with diet, exercise, and medication. However, if you have Type 1, your body does not produce insulin naturally (so you'll need supplemental insulin shots). It's important to know the difference between type 1 and type 2 diabetes so you can take steps to prevent or manage your symptoms.
Type 1 diabetes is an autoimmune disease that affects the pancreas, which makes insulin. Insulin is a hormone that helps your body use glucose (sugar) for energy. In type 1 diabetes, the pancreas stops producing insulin because the immune system attacks and destroys the cells that make it. This type is usually diagnosed in children or young adults and has no known cure at this time. People with type 1 diabetes need to take insulin to live and must monitor their blood glucose levels very closely, because without insulin from outside sources the glucose in their blood will continue to rise until they develop dangerous hyperglycemia or ketoacidosis (a serious metabolic disturbance).
Type 2 diabetes occurs when the body becomes resistant to insulin, so it cannot use glucose effectively to produce energy for your cells. This is more common as you get older, but it can occur at any age if you're overweight or inactive. Type 2 diabetic patients may only need medication or insulin injections if they are experiencing symptoms such as frequent urination or blurred vision due to high blood sugar.
It's said that 1 in 10 Americans has diabetes or prediabetes. If you have diabetes, you need to learn about it and take care of yourself. Here are some tips for living a healthier life:
1. Eat well. Eat plenty of fruits, vegetables, and whole grains. Limit added sugar and salt.
2. Exercise regularly and get enough sleep every night. Find activities that are fun and easy for you to do every day. Get up from your desk during the day whenever you can; even small changes like this can help improve your health by lowering stress levels, which can lead to better blood sugar control over time!
3. Take your medicine as directed by your doctor so that you stay healthy longer! |
For centuries, the Mediterranean served as a civilising link between Europe, North Africa and the Middle East. For many years now, the regions that border it have been facing a problem of growing concern: desertification. The soils are becoming poorer, water is deficient and the forests are disappearing. The MedCoastLand project, involving 13 countries bordering the Mediterranean Sea aims to share the knowledge acquired in the sustainable development of coastal regions and encourage links between decision-makers, farmers and scientists.
Atriplex – These bushes, which provide food for small ruminants, are used for replanting arid zones. Here they are shown growing near Marrakesh (Morocco).
The Mediterranean coastline is vast, stretching some 46 000 km, 19 000 of which is island coastline. Europe, Asia and Africa share a Mediterranean area of about 1.5 million km2 that is home to 430 million inhabitants, more than half of them (286 million) in North Africa and the Middle East. Over the past 30 years, the population of the Mediterranean Basin as a whole has increased by around 50%, especially in its southern zones. Although it is here that the problem of desertification is most serious – with just 5% arable land – the Mediterranean region as a whole is facing identical problems with the same growing sense of urgency: desertification and shrinking water resources, land degradation and soil impoverishment, pollution, salinisation, deforestation, forest fires, and unbridled urban development. There is also the added impact of tourism that, while bringing undeniable economic benefits, also causes worrying ecological imbalances. In terms of agriculture, in many regions this degradation of land is resulting in a very worrying loss of productivity and the development of genuinely ‘devastated’ zones.
Knowledge mix Throughout the Mediterranean Basin, attempts are being made to tackle various aspects of the problem through research projects, information gathering, experience in the field, exercises in rational management, and analyses of the chemical and physical processes at work. However, these often very fruitful activities are usually pursued rather randomly and without any connection with the local communities or decision-makers. This is why the MedCoastLand project was set up in 2002 for a four-year period. It is being coordinated by the Agronomic Mediterranean Institute of Bari (AMIB), Italy.(1)
But is this just another project? “Not at all,” explains project manager Pandi Zdruli. “The aim is not to contribute to additional research but to disseminate the knowledge that has accumulated over many years, the results of which are often still not generally known.” To do so, the MedCoastLand project has set up a network of 36 bodies from 13 countries (2). "These countries are facing comparable problems – mainly the lack of water and drying up of soils – but they are posed in different terms depending on the socio-economic conditions. The context is not the same in Europe as it is in North Africa or the Middle East.”
MedCoastLand is distinctive for the number of different players involved, including policy-makers, researchers and farmers. “To permit dialogue between these different persons, they must find a common language and listen to one another. For that the policy-makers and scientists must leave their desks and get out and about in the field. That is what is starting to happen. Our network is based on notions of the exchange of ideas and mutual respect for opinions. Theory and action are complementary and can be mutually enriching.”
The real and the virtual
Reforestation in the coastal region of Adana (Turkey).
Meetings are held regularly, in the form of seminars, as well as visits to the field and, to date, five workshops. The first, held in Adana (Turkey) in June 2003, looked at how to carry out an ecosystem-based assessment of soil degradation to facilitate effective action by land users. In 2004, a meeting was held in Marrakech (Morocco) on the subject of the profitable management of soil conservation and another in Alexandria (Egypt) on the impact of participatory management. The contributions of participants are collected for publication and can be found on the Internet.
As these are coastal areas, the information concerns the sea as well as the land. “Some people do not always understand that these two elements are fundamentally linked and warrant equal attention. It is impossible to separate them.”
Finally, by collecting data in this way it is possible to avoid duplication of information as well as to fill any gaps. The duplication of research is in fact a very real problem. “One of the ways of combating it is by speeding up the dissemination of this knowledge base and the results of previous projects. But these results are not always accessible, as the scientific institutions are not always interested in seeing them disseminated. Our view is that, while scrupulously respecting intellectual ownership, the free distribution – through publications or more particularly the Internet – of this research and its results can only be beneficial, for the authors as well as society.”
As regards the gaps, in certain fields there is a distinct lack of indicators and statistics. “We try to identify any areas where information and knowledge bases are lacking, but at the same time are committed to arriving at concrete results. That means developing an approach that combines productivity, remuneration for the players and soil conservation. To achieve this, you have to be able to propose long-term sustainable development projects to the decision-makers in these countries and regions. That too is one of the MedCoastLand aims."
(1) The Bari AIM is a member of the Centre international des hautes études agronomiques méditerranéennes (CIHEAM), which proposes training, research and co-operation actions. (2) Algeria, Morocco, Tunisia, Lebanon, Egypt, Syria, Malta, Turkey, Jordan, Palestinian Authority, Spain, France, Italy.
The project consists of seven working groups, or ‘work packages’, which are closely linked and continually active.
WP1 is the heart of the project. It gathers and disseminates information on the internet and operates an internet forum and a genuine ‘virtual campus’ for the retrieval and exchange of knowledge.
WP2 concentrates on the environmental factors that lead to soil degradation and the indicators with which to evaluate and monitor it. The ultimate objective is to develop an ecosystem-based approach as an aid to land management and conservation.
WP3 studies management and conservation experiences that enable economic and sustainable solutions.
WP4 promotes the participative management of soil resources.
WP5 analyses the policies of the partner countries with a view to identifying proposals favourable to sustainable development applicable at regional or national level.
WP6 is working on a draft agreement between the Southern Mediterranean countries on the sharing of information and long-term co-operation. The aim is to promote the management and conservation of land at the ‘regional’ level in the broadest sense.
WP7 operates a document search service for the benefit of all the project players. |
Scuba diving is a mode of underwater diving in which the scuba diver uses a self-contained underwater breathing apparatus (scuba), which is completely independent of surface supply, to breathe underwater.
Unlike other modes of diving, which rely either on breath-hold or on breathing supplied under pressure from the surface, scuba divers carry their own source of breathing gas, usually compressed air, allowing them greater freedom of movement than with an air line or diver’s umbilical and longer underwater endurance than breath-hold. Open circuit scuba systems discharge the breathing gas into the environment as it is exhaled, and consist of one or more diving cylinders containing breathing gas at high pressure which is supplied to the diver through a diving regulator.
The role of the adult
The teacher must be familiar with child development and learning, be responsive to the needs and interests of the individual student, and be aware of the cultural and social contexts in which the student lives and learns. The role of the teacher is to facilitate connections between the student’s prior knowledge and the knowledge available through new experiences. This is best done with the support of the parents, because it is the student’s environment—the home, the school and the community—that will shape the student’s cognitive experience.
The teacher needs to provide a secure learning environment in which the individual student is valued and respected, so that the relationships students establish with each other and with adults, which are of central importance to development and learning, will flourish. The student is best served when the relationships between the teacher and the parent, and between the school and the home, are reciprocal and supportive. In a PYP classroom, parents are welcomed as partners, with a clear role to play in supporting the school and their own children. They are informed and involved.
The range of development and learning demonstrated by each member of a group of students will inform which practices the teacher will need to implement to meet the needs of both the group and the individual. The PYP suggests that the teacher’s role in this process is to create an educational environment that encourages students to take responsibility, to the greatest possible extent, for their own learning. This means that resources must be provided for each student to become involved in self-initiated inquiry, in a manner appropriate to each student’s development and modalities of learning.
The PYP classroom is a dynamic learning environment, with the students moving from group work to individual work in response to their needs and the needs of the inquiries to which they have committed. The students will change roles, working as a leader, a partner, or a member of a larger group.
In the PYP classroom, the teacher facilitates the process of students becoming initiators rather than followers by creating opportunities for and supporting student-initiated inquiries; by asking carefully thought-out, open-ended questions; and by encouraging students to ask questions of each other as well as of the teacher. It goes without saying that the teacher must also value and model inquiry.
Source: Making the PYP happen: A curriculum framework for international primary education (2009) |
Justice within a society, in terms of wealth distribution, privileges, and equal opportunities, is known as social justice. Social injustice, on the other hand, is the way in which unjust actions occur in society: equals are treated unequally and unequals are treated equally. It can occur nationally, among classes, locally, as well as internationally.
Types of Social Injustice
Some of the types of social injustice are as follows:
Economic injustice is the unequal distribution of opportunities and earnings between groups of people in a society. There are three main types of economic injustice.
- Income: The unjust distribution of complete monetary amount received from employment including salaries, wages, and stipends.
- Pay: The unjust distribution of pay among one or multiple organizations.
- Wealth: The unjust distribution of total amounts of possessions of an individual or a household.
Making distinctions against a person based on the group, category or class they belong to is discrimination. Learning about social injustice through discrimination is important. If one receives unequal opportunities because of discrimination based on factors including but not limited to their age, gender, color, disability, race, or sexual orientation, then it is one form of social injustice. Following is an explanation of some of these factors:
If a person or a group of people is treated differently because of the religion they follow or the particular beliefs they have, then it is called religious discrimination. When they are treated unequally because of it then it comes under social injustice. Typically, minorities suffer the most because of it. Concerns have been raised by minorities about religious discrimination against them, even in societies that practice freedom of religion.
Age discrimination, especially in the workplace, is when a person’s age becomes a factor in their being treated unfairly: no new jobs, promotions or benefits are given to them. Sometimes older workers get terminated or offered buyouts while younger ones get hired; that is a clear form of age discrimination. Signs like someone being assigned unpleasant duties or being denied raises because of their age are also included.
Gender inequality occurs when men and women are treated differently based on their gender. Biological, psychological, and cultural norms bring up these differences. It doesn’t mean that women and men can’t differ from each other at all. It simply means that in order to establish gender equality, the rights, opportunities, and responsibilities given to people should not depend on their gender.
Discrimination on the basis of someone’s race is known as racial discrimination. The people who discriminate usually believe that one race is superior to another. People face injustice because laws against such discrimination are often not enacted or not enforced. There’s also a term known as racial profiling, which is the act of targeting a person based on their race and subjecting them to suspicion instead of considering individual actions and scenarios.
Homophobic people harbor negative behavior, feelings, and attitudes towards homosexuality and the LGBT (lesbian, gay, bisexual, transgender) community. Their feelings are usually the result of religious beliefs or irrational fear. Homophobia is one of the factors that result in unethical behavior such as discrimination, which eventually leads to social injustice. Justice would be providing homosexuals with the same rights as any other person.
Social injustice can occur through violence, politics, health and educational matters and many more factors.
Ways to Fight Social Injustice
As they say, before you start changing the world, change yourself. Following are a few things that can be done to fight social injustice.
Promote and fight for equality. Your voice matters, so raise it. Diminish all kinds of discrimination. Start with the people around you; treat them equally. Don’t let their gender, race, disabilities, age, color, sexual orientation or any other factor that comes under discrimination affect your decisions. Take sexual assault statistics as an example: one in five women and one in 71 men become victims of sexual assault at some point in their lives, and the disparity doesn’t make it any easier on either group. Remember, every person deserves to be treated equally and justly.
Donate Your Time and Resources
The people or groups who fight for social justice are always in need of donations and volunteers. For some, it’s not always easy to contribute monetarily. You can always donate the stuff you already have and you think that someone can benefit from it. If doing charity is not an option for you, volunteer your time. You can volunteer to work at old age homes or orphanages. By donating a small amount of time of your life, you can bring a big change in someone else’s.
Fight for Rights
Play your part by taking part in fighting for rights. They can be educational, employment, health, sexuality or any other human rights. Connect with local activist groups so that they can help you stay up to date on events, charities, and fundraisers. You can also start your own local group to promote social justice. Try to meet regularly and perform activities such as raising funds or teaching less fortunate kids. This fight will be worth fighting. Remember, everyone deserves to receive equal opportunities and to be treated justly.
Other things you can do to fight injustice:
- Take a stand for what is right
- Educate yourself on a particular movement
- Set an example for others to follow
- Contact people who have the power to implement changes
- Work on your own habits and beliefs
- Contact the media
- Take action in your community
- Go to a protest or demonstration
- Use the power of social media
Now that you have learned a great deal about social injustice, don’t hesitate. Take a step forward today towards making this world a better place to live.
“We need love, and to ensure love, we need to have full employment, and we need social justice. We need gender equity. We need freedom from hunger. These are our most fundamental needs as social creatures.” – David Suzuki |
Mangal-kavya (Bengali: “auspicious poems”) is a type of eulogistic verse in honour of a popular god or goddess in Bengal (India). The poems are sometimes associated with a pan-Indian deity, such as Shiva, but more often with a local Bengali deity—e.g., Manasa, the goddess of snakes, or Shitala, the goddess of smallpox, or the folk god Dharma-Thakur. These poems vary greatly in length, from 200 lines to several thousand, as in the case of the Chandi-mangal of Mukundarama Chakravarti, a masterpiece of 16th-century Bengali literature.
Mangal-kavya are most often heard at the festivals of the deities they celebrate. There is some disagreement among scholars as to whether or not the poems actually constitute an essential part of the ritual, without which it would be incomplete and not efficacious. Some of them, however, such as the Manasa-mangal, have become so popular that village singers, or gayaks, often sing them for the amusement and edification of a village audience.
Mangal poetry, unlike the texts of the Vedic tradition, is noncanonical literature and so has changed not only over the centuries but also from singer to singer, each performer being free to incorporate his own favourite legends and observations on the society around him. The texts are thus valuable not only as religious documents but also historically. The large number of variants, even among those texts that have been committed to writing, does, however, make dating extremely difficult.
Mangals cannot be characterized by content, except by saying that they all tell the story of how a particular god or goddess succeeded in establishing his or her worship on Earth. The popular Manasa-Mangal, for example, tells how the Bengali snake goddess Manasa conquered the worshippers of other deities by releasing her powers of destruction in the form of snakes. The Dharma-mangal, which celebrates the merits of the folk god Dharma-Thakur, also contains an account of the creation of the world.
Mangals are similar in form despite the wide variance in length. They are written for the most part in the simple payar metre, a couplet form with rhyme scheme aa bb, etc., an appropriate form for oral literature. Another characteristic of mangal poetry is its earthy imagery, drawn from village, field, and river, quite different from the elaborate and sophisticated imagery more typical of Sanskritic and court poetry. An exception is the 18th-century poem Annada-mangal by Bharat-chandra, a court poet who used the mangal form not as an expression of faith but as a frame for a witty, elaborate, sophisticated tale of love. |
2018-2019 Executive Summary
The science community has invested heavily in understanding how climate change will manifest in the coming decades. Researchers have developed sophisticated monitoring programs to document carbon dioxide emissions in the atmosphere, track changes in air and water temperature, and measure acidity of the ocean. Similarly, researchers are using state-of-the-art computer models to predict how weather and rainfall patterns will be altered, how sea levels will rise over the next century, and the uncertainty and nuances that necessarily accompany multi-decade predictions. These detailed analyses are beginning to answer pressing societal questions about the ways that global climate change will play out in local communities, and starting to drive long-term planning and priority setting by state, federal and local governments. Most climate change research focuses on physical changes in the ocean and terrestrial ecosystems, such as sea level rise and temperature. To effectively protect aquatic environments in the face of global climate change, water-quality managers also must know how animals, plants and entire ecosystems will respond to this changing physical environment. Just as importantly, managers need to know which strategies, tools and approaches are viable, cost-effective and optimized to help mitigate ecosystem impacts and how responses to climate change (e.g., seawalls, channel armoring, water diversion) may translate to secondary impacts to aquatic resources.
Toward that end, SCCWRP’s climate change research is focused on connecting rapidly growing knowledge about the physical aspects of climate change with assessments and prediction of how aquatic ecosystems will respond. SCCWRP is working to understand biotic response to four climate change stressors: (1) how changing rainfall and runoff patterns will influence California’s efforts to protect the environmental flows that sustain aquatic ecosystems, and how the state’s water resources management community can improve and better coordinate its approaches to protecting these flows, (2) how biological communities that live in low-lying coastal wetland environments will be impacted by rising sea levels in the coming decades, and how coastal resources managers can use this information to chart courses of action that maximize opportunities for these ecological resources to adapt, (3) how warming waters affect distribution of biota, including nuisance species such as cyanobacterial blooms, and (4) how rising ocean acidity affects the health of marine food webs. SCCWRP invests in creating and strengthening monitoring programs that evaluate the biological impacts of these changing environmental conditions, as well as building sophisticated computer simulations of how climate change will affect the health, distribution and resiliency of sentinel aquatic species.
This year, SCCWRP will continue to focus on understanding biotic responses to the stressors of climate change. SCCWRP’s focus for 2018-19 will be on:
- Assessment of acidification and its impacts: Among SCCWRP’s top priorities is developing a scientific understanding of ocean acidification, a phenomenon caused by oceanic assimilation of atmospheric carbon dioxide. This ocean acidification research encompasses several topical areas, including: (1) developing and applying a coupled physical-biogeochemical model to estimate the current and future extent of acidification and hypoxia under climate change (see Eutrophication research theme) and to investigate the contribution of local pollution inputs to acidification and hypoxia, (2) defining biological endpoints affected by acidification and the chemical thresholds at which those effects manifest; this includes laboratory and field experiments, as well as workshops with leading experts to synthesize the effects of acidification on selected marine taxa, and (3) mining historical data to assess the extent to which acidification may have already manifested; SCCWRP is working with its member agencies to digitize and analyze historical data sets dating back 50 years or more, with the intent to examine possible local trends in acidification.
- Evaluation of coastal adaptation strategies to sea level rise: While prior SCCWRP research has focused on evaluating the susceptibility of coastal wetlands to the effects of sea level rise, SCCWRP will continue its work to evaluate adaptation strategies aimed at helping wetlands persist in the face of expected dramatic increases in mean sea level and storm surge. Computer modeling suggests that coastal California may experience several meters of sea level rise by the turn of the century. To understand how coastal wetlands might accommodate these changes, SCCWRP and its partners are developing linked physical and biological models that can be used to evaluate adaptation planning. These models are being used to evaluate how strategies such as augmenting accretion, management of mouth dynamics, and facilitating transgression can help reduce anticipated wetland losses associated with sea level rise.
- Evaluation of climate change and water resources management effects on southern California streams: Because climate change complicates decisions regarding how to balance competing ecological and human demands for in-stream flows, and because changing precipitation patterns and warmer temperatures are likely to reduce summer baseflows and increase the variability of winter storm flows, SCCWRP is working to help managers manage environmental flows to maintain desired biological endpoints in the context of changing runoff patterns and water management practices. SCCWRP is incorporating local downscaled predictions of changing temperature and rainfall patterns into flow ecology models to evaluate how climate change may affect decisions regarding in-stream flow management. This, in turn, will be used to inform deliberations regarding setting flow targets aimed at ensuring healthy biological communities, within the context of other demands on water supply. |
Straw-bale construction is a building method that uses bales of straw (commonly wheat, rice, rye and oats straw) as structural elements, building insulation, or both. This construction method is commonly used in natural building or “green” construction projects.
Advantages of straw-bale construction over conventional building systems include the renewable nature of straw, low cost, easy availability, natural fire resistance and high insulation value. Disadvantages include susceptibility to rot and high space requirements for the straw itself.
Straw, grass, flowers, and reeds have been used as building materials for centuries. Straw houses have been built on the African plains since the Paleolithic Era. Straw bales were used in construction centuries ago in Germany, and straw-thatched roofs have long been used in northern Europe and Asia. In the New World, teepees were insulated in winter with loose straw between the inner lining and outer cover.
Straw-bale construction was greatly facilitated by the mechanical hay baler, which was invented in the 1850s and was widespread by the 1890s. It proved particularly useful in the Nebraska Sandhills. Pioneers seeking land under the 1862 Homestead Act and the 1904 Kinkaid Act found a dearth of trees over much of Nebraska. In many parts of the state, the soil was suitable for dugouts and sod houses. However, in the Sandhills, the soil generally made poor construction sod; in the few places where suitable sod could be found, it was more valuable for agriculture than as a building material.
The first documented use of hay bales in construction in Nebraska was a schoolhouse built in 1896 or 1897. Unfenced and unprotected by stucco or plaster, it was reported in 1902 as having been eaten by cows. To combat this, builders began plastering their bale structures; if cement or lime stucco was unavailable, locally obtained “gumbo mud” was employed. Between 1896 and 1945, an estimated 70 straw-bale buildings, including houses, farm buildings, churches, schools, offices, and grocery stores had been built in the Sandhills. In 1993, 13 surviving bale buildings were reported in Arthur and Logan Counties, including the 1928 Pilgrim Holiness Church in the village of Arthur, which is listed in the National Register of Historic Places.
Since the 2000s straw-bale construction has been substantially revived, particularly in North America, Europe, Africa, and Australia.
Straw bale building typically consists of stacking rows of bales (often in running-bond) on a raised footing or foundation, with a moisture barrier or capillary break between the bales and their supporting platform. Bale walls can be tied together with pins of bamboo, rebar, or wood (internal to the bales or on their faces), or with surface wire meshes, and then stuccoed or plastered, either with a cement-based mix, lime-based formulation, or earth/clay render. The bales may actually provide the structural support for the building (“load-bearing” or “Nebraska-style” technique), as was the case in the original examples from the late 19th century.
Straw bales can also be used as part of a Spar and Membrane Structure (SMS) wall system in which lightly reinforced 2″ – 3″ [5 cm – 8 cm] gunite or shotcrete skins are interconnected with extended “X” shaped light rebar in the head joints of the bales. In this wall system the concrete skins provide structure, seismic reinforcing, and fireproofing, while the bales are used as leave-in formwork and insulation.
Typically “field bales”, bales created on farms with baling machines, have been used, but recently higher-density “precompressed” bales (or “straw-blocks”) are increasing the loads that may be supported. Field bales might support around 600 pounds per linear foot of wall, but the high density bales bear up to 4,000 lb./lin.ft., and more. The basic bale-building method is now increasingly being extended to bound modules of other oft-recycled materials, including tire-bales, cardboard, paper, plastic, and used carpeting. The technique has also been extended to bags containing “bales” of wood chips or rice hulls.
Straw bales have also been used in very energy-efficient high-performance buildings such as the S-House in Austria, which meets the Passivhaus energy standard. In South Africa, a five-star lodge made from 10,000 straw bales has housed luminaries such as Nelson Mandela and Tony Blair. In the Swiss Alps, in the little village of Nax Mont-Noble, construction work began in October 2011 on the first hotel in Europe built entirely with straw bales.
From Wikipedia, the free encyclopedia |
Photovoltaic systems, also known as solar power plants or solar arrays, are composed of multiple ‘solar panel’ modules that use light to generate electricity through semiconducting materials within the surface of the panels.
The PV modules convert the sun’s energy into direct current (DC) electricity. The DC electricity is then directed to an inverter, which converts it into alternating current (AC) electricity – as delivered by utility companies to commercial and residential consumers. PV systems operate silently, with no moving parts or carbon emissions.
If the property’s demand for electricity is greater than the amount supplied by the PV system, the balance of the demand will be drawn from the local electricity distribution network (‘the grid’). If a PV system generates more electricity than required at the property, the excess is exported to the grid for consumption elsewhere.
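The import/export balance described above is simple metering arithmetic. A minimal Python sketch of that balance (the function name and all values are illustrative, not from any real metering API):

```python
# Hourly energy balance between a PV system and the grid (illustrative).
def grid_exchange(pv_generation_kwh: float, demand_kwh: float) -> str:
    """Describe the grid exchange for one metering interval."""
    net = demand_kwh - pv_generation_kwh
    if net > 0:
        return f"import {net:.2f} kWh from the grid"   # demand exceeds PV output
    if net < 0:
        return f"export {-net:.2f} kWh to the grid"    # surplus PV flows out
    return "PV output exactly meets demand"

print(grid_exchange(pv_generation_kwh=3.5, demand_kwh=5.0))  # import 1.50 kWh
print(grid_exchange(pv_generation_kwh=4.2, demand_kwh=2.0))  # export 2.20 kWh
```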
Solar PV technology is often referred to as ‘distributed generation’, as demand for electricity is met by generation where it is needed, rather than generation by a power station and transported over large distances. Solar PV systems reduce the demand from power stations, thereby reducing carbon emissions resulting from the combustion of coal and gas. |
Causes of the Civil War
The first and most general cause of the civil war in the United States was the different construction put upon the national Constitution by the people of North and South. A difference of opinion had always existed as to how that instrument was to be understood. The question at issue was as to the relation between the States and the general government. One party held that under the Constitution the Union of the States is indissoluble; that the sovereignty of the nation is lodged in the central government; that the States are subordinate; that the acts of Congress, until they are repealed or pronounced unconstitutional by the supreme court, are binding on the States; that the highest allegiance of the citizen is due to the general government, and not to his own State; and that all attempts at nullification and disunion are in their nature disloyal and treasonable. The other party held that the national Constitution is a compact between sovereign States; that for certain reasons the Union may be dissolved; that the sovereignty of the nation is lodged in the individual States, and not in the central government; that Congress can exercise no other than delegated powers; that a State, feeling aggrieved, may annul an act of Congress; that the highest allegiance of the citizen is due to his own State, and afterward to the general government, and that acts of nullification and disunion are justifiable, revolutionary, and honorable.
Here was an issue in its consequences the most fearful that ever disturbed a nation. It struck right into the vitals of the government. It threatened with each renewal of the agitation to undo the whole civil structure of the United States. For a long time the parties who disputed about the meaning of the Constitution were scattered in various sections. In the early history of the country the doctrine of State sovereignty was most advocated in New England. Other States in the North had promulgated the same dangerous doctrine--Pennsylvania in 1808 and Ohio in 1820. With the rise of the tariff question the position of parties changed. Since the tariff--a congressional measure--favored the Eastern States at the expense of the South, it came to pass naturally that the people of New England passed over to the advocacy of national sovereignty, while the people of the South took up the doctrine of State rights. Thus it happened that as early as 1831 the right of nullifying an act of Congress was openly advocated in South Carolina, and thus also it happened that the belief in State sovereignty became more prevalent in the South than in the North. These facts tended powerfully to produce sectional parties and to bring them into conflict.
A second general cause of the civil war was the different system of labor in the North and in the South. In the former section the laborers were freemen, citizens, voters; in the latter, bondmen, property, slaves. In the South the theory was that the capital of a country should own the labor; in the North that both labor and capital are free. In the beginning all the colonies had been slaveholding. In the Eastern and Middle States the system of slave labor was gradually abolished, being unprofitable. In the five great States formed out of the Northwestern Territory slavery was excluded by the original compact under which that Territory was organized. Thus there came to be a dividing line drawn through the Union east and west. It was evident, therefore, that whenever the question of slavery was agitated a sectional division would arise between the parties, and that disunion and war would be threatened. The danger arising from this source was increased and the discord between the sections aggravated by several subordinate causes.
The first of these was the invention of the Cotton Gin. In 1793, Eli Whitney, a young collegian of Massachusetts, went to Georgia, and resided with the family of Mrs. Greene, widow of General Greene, of the Revolution. While there his attention was directed to the tedious and difficult process of picking cotton by hand--that is, separating the seed from the fiber. So slow was the process that the production of upland cotton was nearly profitless. The industry of the cotton growing States was paralyzed by the tediousness of preparing the product for the market. Mr. Whitney undertook to remove the difficulty, and succeeded in inventing a gin which astonished the beholder by the rapidity and excellence of its work. From being profitless, cotton became the most profitable of all the staples. The industry of the South was revolutionized. Before the civil war it was estimated that Whitney's gin had added a thousand millions of dollars to the revenues of the Southern States. The American crop had grown to be seven-eighths of all the cotton produced in the world. Just in proportion to the increased profitableness of cotton, slave labor became important, slaves valuable, and the system of slavery a fixed and deep-rooted institution.
From this time onward there was constant danger that the slavery question would so embitter the politics and legislation of the country as to bring about disunion. The danger of such a result was fully manifested in the Missouri Agitation of 1820-21. Threats of dissolving the Union were freely made in both the North and the South--in the North, because of the proposed enlargement of the domain of slavery; in the South, because of the proposed rejection of Missouri as a slave-holding State. When the Missouri Compromise was enacted, it was the hope of Mr. Clay and his fellow statesmen to save the Union by removing forever the slavery question from the politics of the country. In that they succeeded for a while.
Next came the Nullification Acts of South Carolina. And these, too, turned upon the institution of slavery and the profitableness of cotton. The Southern States had become cotton producing; the Eastern States had given themselves to manufacturing. The tariff measures favored manufacturers at the expense of producers. Mr. Calhoun and his friends proposed to remedy the evil complained of by annulling the laws of Congress. His measures failed; but another compromise was found necessary in order to allay the animosities which had been awakened.
The annexation of Texas, with the consequent enlargement of the domain of slavery, led to a renewal of the agitation. Those who opposed the Mexican War did so, not so much because of the injustice of the conflict as because of the fact that thereby slavery would be extended. Then, at the close of the war, came another enormous acquisition of territory. Whether the same should be made into free or slaveholding States was the question next agitated. This controversy led to the passage of the Omnibus Bill, by which again for a brief period the excitement was allayed.
In 1854 the Kansas-Nebraska bill was passed. Thereby the Missouri Compromise was repealed and the whole question opened anew. Meanwhile, the character and the civilization of the Northern and the Southern people had become quite different. In population and wealth the North had far outgrown the South. In the struggle for territorial dominion the North had gained a considerable advantage. In 1860 the division of the Democratic party made certain the election of Mr. Lincoln by the votes of the Northern States. The people of the South were exasperated at the choice of a chief magistrate whom they regarded as indifferent to their welfare and hostile to their interests.
The third general cause of the civil war was the want of intercourse between the people of the North and the South. The great railroads and thoroughfares ran east and west. Emigration flowed from the East to the West. Between the North and the South there was little travel or interchange of opinion. From want of acquaintance the people, without intending it, became estranged, jealous, suspicious. They misjudged each other's motives. They misrepresented each other's beliefs and purposes. They suspected each other of dishonesty and ill-will. Before the outbreak of the war the people of the two sections looked upon each other almost in the light of different nationalities.
A fourth cause was found in the publication of sectional books. During the twenty years preceding the war many works were published, both in the North and the South, whose popularity depended wholly on the animosity existing between the two sections. Such books were generally filled with ridicule and falsehood. The manners and customs, language and beliefs, of one section were held up to the contempt and scorn of the people of the other section. The minds of all classes, especially of the young, were thus prejudiced and poisoned. In the North the belief was fostered that the South was given up to inhumanity, ignorance, and barbarism, while in the South the opinion prevailed that the Northern people were a selfish race of mean, cold-blooded Yankees. A book published in the North was especially influential in exposing the evils of slavery. It was Uncle Tom's Cabin, by Harriet Beecher Stowe. Another book, though written by a North Carolinian, was Helper's Impending Crisis, in which an attempt was made to show slavery to be an economic evil. This aroused a great deal of feeling among his Southern countrymen.
A fifth cause may be cited in the influence of the professional politician. There are always men who help to incite partisanship and sectionalism in order to reap political reward. That the people, North and South, were never allowed to forget their differences was often seen in the incendiary speeches made on both sides of the Mason and Dixon Line. While these are in brief the several causes, remote and immediate, of one of the most terrible conflicts of modern times, yet when all these are reduced to their last analysis, we find that slavery was the controlling factor in all the differences that led to the estrangement of the two sections of our land.
© 2000-2002 by Jacque Rogers |
During the Roman era, lanterns were made of clay with an opening at the front; this enabled people to take lights (oil lamps or candles) outside. These types of lanterns were superseded by metal lanterns with horn windows. Many examples of these lanterns have been found at Pompeii and Herculaneum in the southern part of Italy.
Pompeii and Herculaneum
- Oil lamp: Early Roman – Wikipedia
Production of oil lamps shifted to Italy as the main source of supply. Molds were used. All lamps are of the closed type.
- Ancient Roman Pottery Lamps – Wikipedia
Artificial lighting was commonplace in the Roman world. Candles, made from beeswax or tallow, were undoubtedly the cheapest means of lighting, but candles seldom survive archaeologically. Lamps fueled with olive oil and other vegetable oils survive in great numbers, however, and have been studied in minute detail.
- History of Lanterns PDF
Mankind’s earliest sources of light depended on what was available as it evolved over the years. When ancient men were living in caves, one form of light source was to burn handfuls of moss, soaked in animal fat, in hollowed-out rocks; ancient African societies burned oily nuts in clay saucers. |
Daily routines aboard the slave ships
In periods with good weather, the slaves on most slave ships would be brought up on deck in the mornings. Normally the women and children would be allowed to move freely around the deck. The men would be chained together, because it was commonly believed that they would be the ones that would cause violence and resistance.
In the afternoons the slaves would be given their second meal of the day. This meal normally differed from and was often worse than the first one. The meal usually consisted of horse beans, which are large beans used to feed horses. The beans were boiled and served with a mixture of flour, water and palm oil, and Cayenne pepper or other spices were added to conceal the taste of the horse beans.
To ensure a good price for the slaves upon arrival in the Caribbean the captain had to keep the slaves in relatively good physical condition, so to achieve this the slaves were “danced” every morning on deck. The slaves were forced to jump up and down and dance, something which was extremely painful for the men who were still chained together. The “dancing” was normally accompanied by poundings on an African drum or iron kettle, and sometimes by a fiddle or an African banjo.
On ships carrying a large number of slaves, however, it was unlikely that all the slaves would be taken up on deck at the same time, and the crew would probably select the ones who were in most need of exercise.
Slaves who refused to “dance” would be punished in different ways. The most common method of punishment aboard the slave ships was whipping. Though most whips were made only of simple rope, the crew sometimes used the cat-o'-nine-tails, which could slash the skin on a slave's back to ribbons in only a few lashes. It consisted of nine ropes, each coated with tar and with a knot at the end. Whipping could in some cases, when used in the most brutal manner, be fatal. However, despite the risk of being punished, the slaves generally enjoyed the time they spent on deck, because this was the only time during the day they were allowed to move “freely” and breathe some fresh air. It was a more than welcome break from the dark and filthy gloom below deck.
When the NASA astronauts first landed on the Moon, they left a few items on the surface to commemorate their visit. These items included a plaque, mission badges and an American flag. If you’ve ever seen images or video of the flag on the Moon, you might have a few questions.
Why does the flag stand straight out and not just slump down? Here on Earth, flags are pushed out by the wind. Obviously, there’s no wind on the Moon, so what’s holding the flag up? The answer is pretty easy. There’s a rod, sort of like a curtain rod, running across the top. So the flag on the Moon is being held out by the rod and isn’t blowing in the wind.
What makes the flag flap if there’s no wind? You might have also seen a few videos of the flag on the Moon waving back and forth. This happened when the astronauts first planted the flag. There’s no wind to make the flag flap, but there’s also no wind to stop it from moving back and forth. When the astronauts planted the flag on the Moon, they couldn’t help but give it a sideways push. Without the wind resistance the flag would experience on the Earth, the flag can flap back and forth a few times before finally settling down. That’s why it looks like it’s flapping, even though there’s no wind.
There’s another scene where the flag flaps, as the lunar ascent module is taking off. In this case, the exhaust from the rocket is blasting the flag and causing it to flap back and forth. In the case of Apollo 11, the exhaust blast was so strong that the flag actually fell over. Later missions kept the flag much further away from the ascent rocket.
Can we see the flag on the Moon from Earth with a big telescope, or even Hubble? Even though we have some powerful telescopes, they’re just not powerful enough to spot objects the size of a flag on the surface of the Moon. The flag is only a meter across. In fact, you would need a telescope 200 meters across to spot objects that size from here on Earth. Future space missions will return to the Moon, and they should be able to resolve objects as small as the flags on the Moon.
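That 200-meter figure follows from the diffraction limit of a telescope. Here is a rough check, a sketch using the Rayleigh criterion; the wavelength and Earth-Moon distance are assumed typical values, not taken from the article:

```python
# Rayleigh criterion: smallest resolvable angle ~ 1.22 * wavelength / aperture
wavelength = 550e-9   # metres, green light (assumed)
distance = 3.844e8    # metres, average Earth-Moon distance (assumed)
flag_size = 1.0       # metres, per the article

angle = flag_size / distance            # angle the flag subtends, in radians
aperture = 1.22 * wavelength / angle    # aperture needed to resolve the flag
print(f"{aperture:.0f} m")              # ~258 m, the same order as the quoted 200 m
```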
Does the flag mean that the US claims the Moon? Nope, the Moon can’t be owned by anyone. NASA had the astronauts plant the flag to commemorate the journey made by American astronauts, but to not actually claim the Moon for any single nation.
We’ve done a few articles about this topic. Here’s a review of the Mythbusters episode where they debunk the Moon flag myth.
NASA has answered some more questions about the flag on the Moon. Here’s a link to their article. And here’s another article debunking the conspiracy theory that NASA didn’t even go to the Moon.
You can listen to a very interesting podcast about the formation of the Moon from Astronomy Cast, Episode 17: Where Did the Moon Come From? |
How Do You Make a Black Hole?
In this collaborative activity, students explore the lifecycle of stars and their ultimate fates. They learn that the mass of the star dictates whether it ends up as a white dwarf, a neutron star, or a black hole.
Intended Audience: Junior High (Gr 9-10)
Lesson Topics: Astronomy, Black Holes
This individual lesson plan is part of the Black Holes lesson compilation. |
How is acute disseminated encephalomyelitis diagnosed? What tests might be used?
The diagnosis of ADEM needs to be considered whenever there is a close relationship between an infection and the development of more than one neurological symptom, which are often accompanied by headache, fever, and an altered mental state. The symptoms tend to worsen over a few days, making it clear that the problem is a serious one.
Magnetic resonance imaging (MRI) scanning is an important part of the diagnosis. In ADEM, there are usually widespread, multiple changes deep in the brain in areas known as the white matter. The white matter is the part of the brain and spinal cord that contains the nerve fibers.
These nerve fibers are often covered by the protective coating called myelin, which looks white compared with the grey matter, which contains the nerve cells. There are also sometimes lesions in the grey matter deep in the brain as well. Often the areas affected can be more than half of the total volume of the white matter.
While these changes are characteristic, they are not specific for ADEM. The healthcare professionals in these cases must consider other diagnoses, such as multiple sclerosis (MS), direct brain infections, and sometimes tumors.
Over months these changes on MRI should gradually improve and even completely disappear.
Spinal fluid testing:
A lumbar puncture is typically needed in patients with ADEM. This is partially to rule out direct infections or other processes that can look like ADEM. The lumbar puncture allows the neurological team to test the cerebrospinal fluid for many different things that assist in the diagnostic process.
The cerebrospinal fluid (CSF) or spinal fluid is a clear, colorless fluid that circulates in and around the brain and spinal cord. It cushions the brain from hitting the inside of the skull, and may be important in removing chemicals from the brain.
In ADEM, the spinal fluid often shows an increase in white cells, usually lymphocytes. These cells are an active part of the immune system. Occasionally doctors can culture or measure a reaction to a specific virus or bacteria in the spinal fluid that may have triggered ADEM. In ADEM, there are often no oligoclonal bands. Oligoclonal bands are abnormal bands of proteins seen in certain spinal fluid tests that indicate activity of the immune system in and around the spinal fluid pathways. These bands are commonly found in multiple sclerosis. This difference may help to distinguish ADEM from MS. |
* Developing and Applying the Quadratic Formula
- The solution of a quadratic equation, a𝑥² + b𝑥 + c = 0, where a, b, and c are constants and a ≠ 0, is given by the quadratic formula: 𝑥 = (-b ± √(b² – 4ac)) / (2a)
- 3𝑥² + 6𝑥 – 4 = 0
a = 3, b = 6, c = -4
𝑥 = (-6 ± √(6² – 4(3)(-4))) / (2(3))
𝑥 = (-6 ± √(36 + 48)) / 6
𝑥 = (-6 ± √84) / 6
𝑥 = (-6 ± 2√21) / 6 = (-3 ± √21) / 3
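A quick numerical check of this worked example (a minimal Python sketch, not part of the original lesson):

```python
import math

def quadratic_roots(a, b, c):
    """x = (-b ± sqrt(b² - 4ac)) / (2a); assumes a non-negative discriminant."""
    disc = b**2 - 4*a*c
    return ((-b + math.sqrt(disc)) / (2*a),
            (-b - math.sqrt(disc)) / (2*a))

# The example 3x² + 6x - 4 = 0:
print(quadratic_roots(3, 6, -4))  # approx (0.5275, -2.5275)
# Agrees with the simplified form (-3 ± √21) / 3:
print((-3 + math.sqrt(21)) / 3, (-3 - math.sqrt(21)) / 3)
```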
Solving Radical Equations
How to solve a radical equation:
- Isolate the radical expression
- Square both sides of the equation: If x = y then x² = y²
- Once the radical is removed, solve for the unknown
- Check all answers.
- 𝒙² – 3 = 13
𝒙² = 16
√𝒙² = √16
𝒙 = ±4 (both values check in the original equation)
2. √(𝒙+8) = 3
(√(𝒙+8))² = 3²
𝒙 + 8 = 9
𝒙 = 1 (check: √(1+8) = √9 = 3 ✓)
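Step 4 (check all answers) matters because squaring can introduce extraneous roots. A small sketch verifying both worked examples in their original equations (illustrative only):

```python
import math

# Example 1: x² - 3 = 13 gives x = ±4; both candidates satisfy the original.
for x in (4, -4):
    assert x**2 - 3 == 13

# Example 2: √(x + 8) = 3; squaring gives x + 8 = 9, so x = 1.
x = 1
assert math.sqrt(x + 8) == 3  # checked in the ORIGINAL equation
print("all solutions check")
```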
2.3 Adding and Subtracting Radical Expressions
- When adding and subtracting radicals, the strategies for simplifying polynomials can be used to simplify sums and differences of radicals. Like terms, or like radicals, in a sum or difference of radicals have the same radicand and the same index
- 7√9 – 4√9 = 3√9
- 7√9 and 4√9 are like terms because they have the same radicand and the same index, so combine like terms.
2. ∛384 – ∛162 + ∛750
= 4∛6 – 3∛6 + 5∛6
= ∛6 + 5∛6
= 6∛6
- The radicands look different, so simplify each radical first, then combine like terms.
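A numerical spot-check that the cube roots were simplified correctly (a sketch, not part of the lesson):

```python
# ∛384 - ∛162 + ∛750 should equal 6∛6, since 384 = 64·6, 162 = 27·6, 750 = 125·6
lhs = 384**(1/3) - 162**(1/3) + 750**(1/3)
rhs = 6 * 6**(1/3)
print(lhs, rhs)                 # both approx 10.9027
assert abs(lhs - rhs) < 1e-9    # equal up to floating-point error
```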
This week I learned about absolute value of a real number. Every real number can be represented as a point on a number line. The sign of the number indicates its position relative to 0. The magnitude of the number indicates its distance from 0.
The absolute value of -6 is |-6|=6
|6-4| (7+9) – 6 (4-6)
= |2| (16) – 6 (-2)
= 2(16) – (-12)
= 32 + 12
= 44
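The same arithmetic can be confirmed with Python's built-in abs() (a one-line sketch):

```python
# |6-4|(7+9) - 6(4-6), evaluated directly by the interpreter
print(abs(6 - 4) * (7 + 9) - 6 * (4 - 6))  # 44
```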
Infinite geometric series
S∞ = a/(1 – r); for the series 3, 3/5, 3/25, 3/125, …, a = 3 and r = 1/5 (i.e. 0.2), so S∞ = 3/(1 – 1/5) = 15/4
This week, I learned about infinite geometric series. An infinite geometric series has an infinite number of terms. To determine the sum of an infinite geometric series, we need to know a, the first term t1, and the common ratio r, which must satisfy -1 < r < 1. The sum of the series is S∞ = a/(1 – r).
Arithmetic sequence – 3, 7, 11, 15, 19……
Formula – tn= t1+(n-1)d
t50 = 3 + (49 × 4) = 3 + 196 = 199
Sum of the first 50 terms
Formula – Sn = n/2 (t1 + tn)
S50 = 50/2 (t1 + t50)
S50 = 25 (3 + 199)
S50 = 25 x 202
S50 = 5050
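Both series results can be verified directly (a minimal sketch, not part of the original notes):

```python
# Arithmetic series 3, 7, 11, ...: tn = t1 + (n-1)d and Sn = n/2 (t1 + tn)
t1, d, n = 3, 4, 50
tn = t1 + (n - 1) * d            # t50 = 199
Sn = n * (t1 + tn) // 2          # S50 = 5050
print(tn, Sn)

# Infinite geometric series 3, 3/5, 3/25, ...: S = a/(1 - r) for |r| < 1
a, r = 3, 1/5
print(a / (1 - r))                       # 3.75, i.e. 15/4
print(sum(a * r**k for k in range(60)))  # partial sums approach 3.75
```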
I tried to make a successful life, so I got a successful career, and I still dream. I am relatively successful in my job, but I don’t know why he always has unrealistic dreams. His expectations have always hurt me, so I often feel extremely lonely, and the terrifying thing is that I have tried this hard and it still does not work. That is scary. But when I heard about my father’s suicide, I was shocked. What should I do?
(A brief description of the Island with quotes)
- The platform and meeting place
“Here the beach was interrupted abruptly by the square motif of the landscape; a great platform of pink granite thrust up uncompromisingly through forest and terrace and sand and lagoon to make a raised jetty four feet high. The top of this was covered with a thin layer of soil and coarse grass and shaded with young palm trees” (Golding 13).
“Ralph grasped the idea and hit the shell with air from his diaphragm. Immediately the thing sounded. A deep, harsh note boomed under the palms, spread through the intricacies of the forest and echoed back from the pink granite of the mountain” (Golding 21).
“The beach between the palm terrace and the water was a thin stick, endless apparently, for to Ralph’s left the perspectives of palm and beach and water drew to a point at infinity; and always, almost visible, was the heat” (Golding 10).
3. Site where Ralph and Piggy find the conch
“Ralph had stopped smiling and was pointing into the lagoon. Something creamy lay among the ferny weeds” (Golding 18).
4. The lagoon |