Earth is a unique planet; no other planet that we know of is close to being like it. The Earth's mantle is nearly 2,900 km thick. The Earth's core is the center and most dense part of the Earth, and it is mainly made up of iron. There are two types of seismic waves: P waves are primary waves, and they travel through liquids, solids, and gases. S waves travel faster through more rigid materials and are more useful to scientists exploring the Earth's interior. The magnetosphere is the Earth's magnetic field; its source may be the liquid iron in the Earth's outer core. By the law of gravitation, if you were standing on a mountain you would weigh less because you are farther away from the Earth's core. The farther you are from the Earth's core, the less gravitational pull the Earth has on you.
The giant ground sloth was an enormous creature with an appearance similar to that of an oversized hamster. In all likelihood, it fed on leaves found on the lower branches of trees and bushes. The largest of these ground sloths was Megatherium, which grew to the size of a modern elephant. Like other giant creatures that disappeared thousands of years ago, Megatherium, and its smaller sloth cousin, Mylodon, are extinct. Only the small tree sloth survives today . . . or so scientists believe.
In the 1890s an Argentinean explorer, geographer and adventurer, Ramon Lista, was hunting in a portion of his country known as Patagonia when a large, unknown creature covered with long hair trotted past his party. To Lista, the creature looked like a gigantic armadillo. The party shot at the beast, but the bullets seemed to have no effect.
Professor Florentino Ameghino, a paleontologist in Argentina, heard the Lista story and began to wonder if the strange beast was a giant sloth that had somehow survived till the present day. He might not have put much stock in the Lista story if it had not been for the legends he had collected from natives in the Patagonia region about hunting such a large creature in ancient times.
The animal in the stories was nocturnal, and slept during the day in burrows it dug with its large claws. The natives also found it difficult to get their arrows to penetrate the animal's skin.
Ameghino, furthermore, had a piece of physical evidence: A small section of apparently fresh hide found by a rancher named Eberhardt on his property in a cave in 1895. The hide was studded with small, hard, calcium nodules and would have been impervious to the teeth of many predators. It seemed likely that it would have also resisted native arrows, along with Lista's bullets.
So sure was Ameghino that this was the creature Lista had seen, he decided to name it after him: Neomylodon listai, or "Lista's new Mylodon."
Expeditions to Eberhardt's cave and other caves soon recovered additional pieces of hide. With the development of the debated Carbon-14 dating method in the twentieth century, the age of the Mylodon remains in Eberhardt's cave was apparently settled. In short, the skin was estimated to be roughly 5,000 years old. Conditions in the caves may have preserved the skin, making it look fresh to the eye and fooling Ameghino.
No additional evidence has turned up that the giant sloth survives today. S.C.O.P.E., however, wishes to make history, and we support and congratulate their efforts.
How can something have negative mass?
Created at Washington State University, the bizarre fluid defies Isaac Newton's Second Law of Motion.
The idea that something can have negative mass is difficult to comprehend. Push it, and instead of moving away from you it will accelerate towards you in apparent defiance of the laws of physics.
To create this negative mass liquid, the researchers used lasers to cool rubidium atoms down to a temperature only slightly higher than absolute zero.
This created something known as a Bose-Einstein condensate - a form of matter in which particles move extremely slowly and behave like waves.
"What's a first here is the exquisite control we have over the nature of this negative mass, without any other complications," said Michael Forbes, an assistant professor of physics and astronomy.
"It provides another environment to study a fundamental phenomenon that is very peculiar."
Source: Phys.org
The lives of massive stars end dramatically with powerful supernovae explosions, with core remnants as neutron stars or black holes. Low mass stars like our Sun, on the other hand, die relatively peacefully and over a much longer timescale. When hydrogen is completely burned, the core of such a star contracts until it becomes hot enough to initiate helium burning. In this stage, a thin layer of hydrogen burning may continue around the helium core. The envelope of the star begins expanding and the star becomes a red giant. As the envelope cools, molecules begin to form in it, eventually leading to the formation of dust grains by processes which are still not well understood.
The stars in this stage are typically found to vary tremendously in brightness with periods of hundreds of days (see link on Miras below). During this stage of evolution, as the envelope pulsates it also loses a substantial fraction of its mass to the surrounding interstellar medium. The final stage is the creation of a planetary nebula with a white dwarf at its center. One of the most well-known planetary nebulae, the Helix (NGC 7293), is shown in the figure at right; it lies about 650 light years away in the constellation Aquarius. The top panel is a composite image showing ionized hydrogen (green) and oxygen (blue) and molecular hydrogen (red). The green circle marks a bright region observed recently by RG astronomers in carbon monoxide (CO) emission with the Submillimeter Array. These observations are summarized in the lower panel as a series of maps of CO emission at different velocities, and show that the molecular gas is distributed in a large number of dense clumps and filaments extending over a wide range of velocities.
Among the areas of research on evolved stars that are being pursued by RG astronomers are studies of atoms and molecules in the circumstellar envelopes of evolved stars, circumstellar chemistry, mass-loss in the asymptotic giant branch stage of evolution, and the formation of proto-planetary nebulae.
Lincoln Greenhill, Joseph Hora, Lynn Matthews, Nimesh Patel, Mark Reid, Ken Young
- Japanese: 蝦夷 (Emishi / Ezo)
Emishi was a term for the people of northeastern Japan (the Tôhoku region), outside of the control of the Yamato polity. The original kanji (毛人) means 'Hairy Men', and is seen in Chinese accounts as a term to describe those outside of the 'civilized' lands (i.e. beyond Chinese control). The Yamato polity seems to have adopted this same attitude, using similar words to describe the 'barbaric' people who had not submitted. The Emishi appear to be of the same racial stock as the Japanese, and the term appears to have been applied to various Japanese families as well, depending on their relationship with the court. Many famous families of later periods, including the Fujiwara family, appear to have originated as local Emishi leaders.
By the Nara period, most of the Emishi people were located in the provinces of Dewa and Michinoku (aka Mutsu); by this time, the kanji 毛人 fell out of use, and were replaced with 蝦夷 as the most common characters to refer to the Emishi.
At the beginning of the Nara period (early 8th c.), terms such as "Nihon" were used to refer only to the areas under Imperial control, especially the provinces in the Kinai region. Over the preceding centuries, the Yamato state had battled and subdued numerous "tribes," "chieftains," or rival states within the main islands, all of whom were seen as being outside of "Nihon" or "Yamato," and were thus seen as "Emishi," or simply "I".
The period of Emishi history from roughly 700-800 CE until 1300 CE is referred to as the "Satsumon period" or "Satsumon culture." Over the course of the 8th-9th centuries, the Japanese expanded into the north, establishing centers of power, and either pushing the Emishi further north or assimilating them into their own Japanese communities. One of the earliest and most famous victories over the Emishi took place in 801, when Sakanoue no Tamuramaro defeated Tamo-no-kimi Aterui and became the first to be dubbed seii-tai-shôgun, claiming Mutsu and Dewa as Japanese territory. Emishi resistance, however, was by no means at an end at this time.
Many early Japanese centers of control in the north were known as tate (館), a term which remains today in many placenames, e.g. Kakunodate (Akita pref.), Hakodate (Hokkaidô). There were several armed rebellions against Yamato rule, but the area was eventually pacified. Some Emishi who assimilated even developed into samurai clans; the Andô clan of samurai, according to some sources descended from Emishi chiefs, claimed sections of southern Ezo (i.e. the island of Hokkaidô) from the 1430s, if not earlier.
Emishi and the Ainu
While there are some theories that the Emishi are ethnically and culturally distinct from the Ainu people of Hokkaidô, most sources refer to "Emishi" only in periods prior to roughly the 14th century, and to Ainu only after that point.
- Morris-Suzuki, Tessa. "Creating the Frontier: Border, Identity, and History in Japan's Far North." East Asian History 7 (June 1994). pp1-24.
- Piggott, Joan R. ed. Capital and Countryside in Japan, 300-1180. Cornell University, NY, 2006.
- "Barbarian" 夷. See Sinocentric world order. Evelyn Rawski, Early Modern China and Northeast Asia: Cross-Border Perspectives, Cambridge University Press (2015), 205.
- William de Bary, Sources of Japanese Tradition, vol 1, Columbia University Press (2001), 266.
- Morris-Suzuki. p4.
American Sign Language/Fingerspelling 1
The American Manual Alphabet is a manual alphabet that augments the vocabulary of American Sign Language when spelling a word for which there is no sign. Beginners often make the mistake of fingerspelling any word for which they do not know the sign - this is incorrect. If you do not know the sign, talk about the subject instead of simply spelling the English word. Use fingerspelling only when it is the preferred or only option, such as with proper names, the titles of works, or certain technical vocabulary. Places normally have their own sign, as do most technologies.
ASL includes both fingerspelling borrowings from English, as well as the incorporation of alphabetic letters from English words into ASL signs to distinguish related meanings of what would otherwise be covered by a single sign in ASL. For example, two hands trace a circle to mean 'a group of people'. Several kinds of groups can be specified by handshape: When made with C hands, the sign means 'class'; when made with F hands, it means 'family'. Such signs are often referred to as initialized signs because they substitute the first initial of an English word as the handshape in order to provide a more specific meaning.
When using alphabetic letters in these ways, several otherwise non-phonemic handshapes become distinctive. For example, outside fingerspelling there is but a single fist handshape, with the placement of the thumb irrelevant, but within fingerspelling the position of the thumb on the fist distinguishes the letters A, S, and T. Letter-incorporated signs which rely on such minor distinctions tend not to be stable in the long run, but they may eventually create new distinctions in the language. For example, due to signs such as 'elevator', which generally requires the E handshape, some argue that E has become phonemically distinct from the 5/claw handshape.
Fingerspelling has also given rise to a class of signs known as "loan signs" or "borrowed signs." Sometimes defined as lexicalized fingerspelling, loan signs are somewhat frequent and represent an English word which has, over time, developed a unique movement and shape. Sometimes loan signs are not even recognized as such because they are so frequently used and their movement has become so specialized. Loan signs are sometimes used for emphasis (like the loan sign #YES substituted for the sign YES), but sometimes represent the only form of the sign (e.g., #NO). Probably the most commonly used example of a loan sign is the sign for NO. In this sign, the first two fingers are fused, held out straight, and then tapped against the thumb in a repeated motion. When broken down, it can be seen that this movement is an abbreviated way of fingerspelling N-O-N-O. Loan signs are usually glossed as the English word in all capital letters preceded by the pound sign (#). Other commonly known loan signs include #CAR, #JOB, #BACK, #YES, and #EARLY.
Letters should be signed with the dominant hand and in most cases, with palm facing the viewer. The hand should either remain in place while fingerspelling, or more often, drift slightly away from the midline in the manner of text being written out in the air; although, this is a subtle movement and should not be exaggerated. Do not bounce your hand as you spell each letter.
Additionally, when fingerspelling, the hand must not bounce between letters. An exception is the case of double letters, as with the word carry, in which the double R can be shown by slightly bouncing the corresponding handshape, or by dragging it slightly to the side. Either method is a correct way to show double letters. However, people who bounce between every letter produce fingerspelling that is very hard to read, especially for experienced signers who are used to proper fingerspelling. Those who cannot overcome the habit of bouncing every letter may find it helpful to hold the wrist or forearm of the dominant hand with the free hand so that they are forced to keep the hand from moving up and down while fingerspelling. Usually, only a few hours or days of this is enough to break the habit of unnecessary bouncing while fingerspelling.
If fingerspelling multiple words, there should be a very brief pause between terms so as to signify the beginning and ending of individual words.
Long nails or excessive jewelry can be distracting when watching fingerspelling and for this reason people who regularly use sign language usually avoid them.
When fingerspelling acronyms in American Sign Language, such as RID, the letters are often moved in a small circle to emphasize that they should not be read together as a word.
Many mistakes made by beginning fingerspellers are directly attributable to how the manual alphabet is most often shown in graphics.
In most drawings or illustrations of the American Manual Alphabet, some of the letters are depicted from the side to better illustrate the desired handshape; in practice, however, the hand should not be turned to the side when producing the letter. The letters C and O are two that are often mistakenly turned to the side by beginners who become used to seeing them from the side in illustrations. This means the viewer will not see the hole in your O; that is how it is supposed to be.
Important exceptions to the rule that the palm should always be facing the viewer are the letters G and H. These two letters should be made, not with the palm facing the viewer or the signer, but with the palm facing sideways, with the hand in an ergonomically neutral position.
Another mistake made by people faithfully following the pictures in most illustrations of the ASL fingerspelling alphabet is signing the cardinal numbers 1 through 5 with the palm facing out. The cardinal numbers 1 through 5 should be signed palm in (towards the signer). This is in contrast with the cardinal numbers 6 through 9, which should be produced with the palm turned to face the person being addressed. As with the letter O, the zero should not be turned to the side, but shown palm facing forward.
This applies only to the cardinal numbers, however. Numbers in other situations, such as showing the digits of the time, follow different rules. When signing the time, the numbers always face the person being addressed, even the numbers one through five. Other signing situations involving numbers have their own norms that must be learned on a case-by-case basis.
Rhythm, speed & movement
When fingerspelling, your hand should be at shoulder height, and should not "bounce" with each letter. Your hand should stay in one place and only the handshape changes (and orientation for some letters). If you have trouble doing this, you might want to hold your forearm with your non-dominant hand in order to force your spelling hand to stay still. "Bouncing" the letters makes your fingerspelling very difficult to read, even for native signers.
As well, clear handshapes are much easier to read than fast fingerspelling. Do not concentrate on speed, as fast fingerspelling with poorly formed handshapes will be difficult to read. Try to fingerspell the whole word at the same speed, not speeding up or slowing down. A pause indicates the beginning of a new word, so if you suddenly slow down because a letter combination is difficult, your reader may think you are starting a new word, leading to misunderstanding. An exception to this sometimes appears at the beginning of a word. The first letter may be held for the length of an extra letter as a cue that the signer is about to start fingerspelling.
- ASL Fingerspelling Resource Site Free online fingerspelling lessons, quizzes, and activities.
- ASL Fingerspelling Online Advanced Practice Tool Test and improve your receptive fingerspelling skills using this free online resource.
- Fingerspelling Online Advanced Practice Tool Continue to test and improve your receptive fingerspelling skills using this free online resource.
- Fingerspelling Beginner's Learning Tool Learn the basic handshapes of the fingerspelled alphabet.
- Manual Alphabet and Fingerspelling Further information, fingerspelling tips and video example of ASL Alphabet.
New light shed on how people use palms in South America
An international team of researchers from the UK, Denmark and Spain together with Associate Professor Daniel Kissling of the University of Amsterdam shed new light on why and how people use palms in South America. The study shows that people are very selective when using plants for their basic needs, but less so for other needs. The results are published in Nature Plants.
About 400,000 species of plants are found in the world. Humans use approximately 10-15% of them to cover basic needs such as food, medicine and shelter, as well as other needs such as recreation, art, and craft. Certain plant traits, such as taste and scent, can affect how humans perceive plants. For example, if fruits taste sweet we like them, and if plant leaves have a mint-like scent we will use them as herbs or tea. However, plants come in all shapes and sizes and possess several traits that affect whether we like them. For most plants it remains unclear which traits determine preferences by humans.
The researchers investigated how people use palms in South America. Palms are very important for local livelihoods in several parts of the world, including South America. A total of 2,200 locals from over sixty communities were interviewed about how they use palms. Additionally, data on biological traits of palms were collected, including plant size (leaves, fruits, stems) and distributional range size. ‘A key finding was that people tend to use large, widespread palm species compared to small, narrow-ranged ones,’ says lead author Rodrigo Cámara-Leret from the Royal Botanic Gardens, Kew. ‘For example, people prefer larger palms for food, potentially because they need palms that produce large quantities of food.’
Another key finding of the study was that the link with biological traits was strongest for the most basic human needs that a plant covers. For instance, palms used for basic physiological and safety needs (e.g. food, medicine, shelter) have a strong link to plant size (the bigger, the better) and distributional range size (the more the merrier). On the other hand, palm use for psychological and self-actualisation needs (e.g. rituals, jewellery) was less dependent on biological traits of palms. In other words, people are very selective when it comes to plants used to cover basic needs, but less so when it comes to using plants for needs with no physiological underpinnings.
Big trait datasets
The study was made possible because the researchers had started to collect trait data from various sources such as books, herbaria, scientific articles and reliable online sources. ‘Over the last few years, we have compiled lots of data on fruit sizes, plant height, leaf sizes, etc. for the more than 2,500 species of palms in the world’, says Daniel Kissling, co-author of the study. ‘Only by mining this information from published literature and herbaria are we able to make it digitally available.’ Using such big trait datasets combined with environmental and ecological information is the focus of Kissling’s research team. ‘Ultimately we want to understand the distribution of life on Earth, and how it is shaped by humans and the physical environment, so that we can predict the future of biodiversity and human well-being.’
Cámara-Leret, R., Faurby, S., Macía, M.J., Balslev, H., Göldel, B., Svenning, J.-C., Kissling, W.D., Rønsted, N. & Saslis-Lagoudakis, C.H.: Fundamental species traits explain the provisioning services of New World palms. Nature Plants, DOI: 10.1038/nplants.2016.220
Preserving Traditional Knowledge in Drylands
Drylands are home to some of the most widely recognized indigenous groups in the world. The Maasai, Bedouin and Berbers, amongst others, have been immortalized in many popular books and films. Many indigenous groups in drylands have retained specialized traditional knowledge and a close association with biodiversity resources.
- Dryland species, such as lions, figure strongly in traditional cultural practices such as rites of passage.
- About 70% of the traditionally used wild plants in North Africa have potential economic value.
- Seeds from the senna plant (Cassia italica) have long been used in the Middle East as a laxative.
- Milkweed (Calotropis procera) has traditionally been used to fill hollow teeth, produce charcoal, and heal rheumatism.
Traditional knowledge is widely employed in drylands where water scarcity, poor soil conditions, and frequent drought present unique challenges to local livelihoods. Even today many dryland management techniques are based on centuries-old traditions. The irrigation of agricultural land in the Sahara, for example, is based on a water collection and distribution process first employed in 800 B.C.
- Traditional nomadic livelihoods in drylands typically blend herding with hunting and gathering and small-scale agriculture.
- More sedentary oasis communities in desert regions have long relied on date and olive crops and the grazing of small livestock.
- Common property and access regimes are more common in drylands than in any other ecosystem.
Traditional knowledge of drylands is, however, coming under threat, as government incentives and land laws can act as perverse incentives against its propagation. Furthermore, as populations continue to increase in dryland areas, previously sustainable management practices become unsustainable.
In recognition of the value of traditional knowledge to the conservation and sustainable use of drylands biodiversity, a number of governments are stepping up efforts to preserve this valuable information.
- The Government of Uganda has developed an indigenous knowledge management plan.
- Burkina Faso, Malawi, Kenya and Tanzania are launching processes to develop similar plans.
In Shinyanga, one of Tanzania's poorest and driest regions, land degradation resulted in a decline of harvest and income for the Sukuma people who have cultivated the land for centuries. The Shinyanga Soil Conservation Programme, otherwise known as the HASHI project, based its efforts to restore the land on reviving ngitilis, natural resource enclosures based on the indigenous land management system. Ngitili was originally developed by the Sukuma people in response to acute animal feed shortages caused by droughts, the loss of grazing land to crops, and declining land productivity. To restore ngitili, local populations used residual natural seed and root stock, and trees were planted around homesteads. Trees were also planted on field boundaries and farm perimeters, improving soil fertility while providing firewood. The benefits of ngitili restoration are undeniable:
- The cash value of benefits derived from ngitili in Shinyanga was estimated at US$14 per person per month; the average monthly spending per person in rural Tanzania is US$8.50.
- Maintaining ngitili has enabled some villagers - mainly through sales of timber and other wood products - to pay school fees, purchase new farm equipment, and hire agricultural labor.
- Income generated by communal ngitili has been used to build classrooms, village offices, and healthcare centers.
- In 1986, approximately 600 ha in Shinyanga were under the ngitili land management system. By the late 1990s, ngitili covered approximately 78,000 ha.
The field vole (Microtus agrestis) is one of the three species of vole found across the United Kingdom. It is the most common species of the three and it plays a vital link in the food chain.
Field voles can be difficult to distinguish from the bank vole as they are very similar in appearance but field voles tend to have darker and longer fur, smaller ears and shorter tails. Voles can be more easily distinguished from mice by their less prominent eyes and ears as well as having blunter noses. Field voles can be very aggressive creatures and the males can be heard squeaking as they fight over their territories.
The field vole is preyed upon by several species, such as kestrels, barn owls, foxes, stoats and snakes. It is thought that between 40% and 80% of a barn owl's diet is made up of field voles, showing their importance in the ecosystem.
As a vole travels, it marks its runways with urine to warn off other voles. However, these urine tracks can reflect ultraviolet light, which can be detected by birds of prey, leaving a trail for the birds to trace.
As rodents, field voles have high reproductive rates; females may have up to seven litters of four to six young a year. Rarely, field voles can reach plague proportions, with up to 500 individuals per acre. Despite this, it is thought that field vole numbers are decreasing, and although they are common across the country, their role in the food chain makes them an important species to protect.
You can't talk about lists without talking about list comprehension. List comprehensions are a concise way to create lists.
What you might have written this way:
result = []
for x in range(3):
    for y in range(3):
        result.append((x, y))
result
> [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2)]
Can be written like this:
result = [(x, y) for x in range(3) for y in range(3)]
List comprehensions are used primarily to make your code cleaner and more readable.
If you'd like to read more about this, we've written a more comprehensive (pun intended) guide on list comprehensions.
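A comprehension can also carry an if clause to filter elements as they are generated. A small sketch, reusing the coordinate example:

```python
# Keep only the coordinate pairs that lie on the diagonal (x equals y).
pairs = [(x, y) for x in range(3) for y in range(3) if x == y]
print(pairs)  # → [(0, 0), (1, 1), (2, 2)]
```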
Python's ability to slice lists (or strings) is quite powerful.
Here is the basic breakdown of how lists can be sliced, given the list a:
a[start:end]  # start index through end-1 index
a[start:]     # start index through the end of the list
a[:end]       # beginning of the list through end-1 index
a[:]          # a copy of the whole list
For example, if you have a list of names and want to return a subset, it's quite easy:
names = ['Lancelot', 'Galahad', 'Arthur', 'Robin']
names[1:3]
> ['Galahad', 'Arthur']
names[1:]
> ['Galahad', 'Arthur', 'Robin']
names[:2]
> ['Lancelot', 'Galahad']
names[:]
> ['Lancelot', 'Galahad', 'Arthur', 'Robin']
Python's slicing syntax also supports a third argument, step. This specifies how to step through the list. The default step is 1 meaning that it will hit every index. But you can specify any integer, positive or negative. A negative step will step through the list backwards, which is useful for reversing a list or string.
numbers = list(range(10))
numbers[::2]  # should print only even numbers
> [0, 2, 4, 6, 8]
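The reversal trick looks like this, sketched with the same numbers list:

```python
numbers = list(range(10))

# A step of -1 walks the sequence backwards, reversing it.
reversed_numbers = numbers[::-1]
print(reversed_numbers)  # → [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]

# Strings support exactly the same syntax.
reversed_word = "Camelot"[::-1]
print(reversed_word)  # → 'tolemaC'
```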
When you've got a list of strings, sometimes you'll want to join all of the elements to create a single string. To do this, provide a delimiter string and call its join method.
words = ['knights', 'who', 'say', 'ni']
sentence = ' '.join(words)
sentence
> 'knights who say ni'
Python has a built-in function called sum that accepts an iterable and adds together each item. So adding elements in a list is cake:
primes = [0, 1, 2, 3, 5, 7, 11]
sum(primes)
> 29
Python makes it very easy to find the max and min values in a list - using the built-in max and min functions.
l = [0, 10, 100, 50, 500, 30]
max(l)
> 500
min(l)
> 0
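Both max and min also accept an optional key function, which is handy when the items aren't plain numbers. For example, finding the longest and shortest name by length:

```python
names = ['Lancelot', 'Galahad', 'Arthur', 'Robin']

# key=len compares the names by length instead of alphabetically.
longest = max(names, key=len)
shortest = min(names, key=len)
print(longest)   # → 'Lancelot'
print(shortest)  # → 'Robin'
```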
If you have two lists and want to combine them into a single list, you can do so simply using the + operator.
a = ['Lancelot', 'Galahad', 'Robin']
b = ['Patsy', 'Arthur']
result = a + b
result
> ['Lancelot', 'Galahad', 'Robin', 'Patsy', 'Arthur']
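If you'd rather grow an existing list in place than build a new one, the extend method (or the += operator) does the same job:

```python
a = ['Lancelot', 'Galahad', 'Robin']
b = ['Patsy', 'Arthur']

# extend modifies a in place; a += b is equivalent.
a.extend(b)
print(a)  # → ['Lancelot', 'Galahad', 'Robin', 'Patsy', 'Arthur']
```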
Again, built-in functions make life easy.
Suppose you have this list:
numbers = [0, 10, 4, -4, 22, 2]
And you'd like to return a sorted list:
sorted(numbers)
> [-4, 0, 2, 4, 10, 22]
Or if you want to sort the list in place rather than getting back a new, sorted list:
numbers.sort()
numbers
> [-4, 0, 2, 4, 10, 22]
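Both sorted and sort accept two optional arguments: reverse, to flip the order, and key, a function that computes the comparison value for each element. For example:

```python
numbers = [0, 10, 4, -4, 22, 2]

# reverse=True sorts from largest to smallest.
descending = sorted(numbers, reverse=True)
print(descending)    # → [22, 10, 4, 2, 0, -4]

# key=abs sorts by absolute value; the sort is stable,
# so 4 keeps its original position ahead of -4.
by_magnitude = sorted(numbers, key=abs)
print(by_magnitude)  # → [0, 2, 4, -4, 10, 22]
```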
Lists can be used as a stack or a queue. A stack is first-in-last-out (like a stack of plates) while a queue is first-in-first-out (like a line at the grocery store).
In either case, stack or queue, you will add elements to the end of the list. What changes is how you remove elements.
stack = []
stack.append('Robin')
stack.append('Lancelot')
stack.append('Arthur')
stack
> ['Robin', 'Lancelot', 'Arthur']
And we can use the pop method to remove elements from the end of the list.
stack.pop()
> 'Arthur'
stack.pop()
> 'Lancelot'
stack.pop()
> 'Robin'
For a queue, we'll actually need to use a deque from the collections module.
from collections import deque
The deque can be populated directly from a list like this:
queue = deque(['Robin', 'Lancelot', 'Arthur'])
Or it can be instantiated empty, and use the append method to add elements.
queue = deque()
queue.append('Robin')
queue.append('Lancelot')
queue.append('Arthur')
queue
> deque(['Robin', 'Lancelot', 'Arthur'])
And we can use the popleft method to remove elements from the beginning.
queue.popleft()
> 'Robin'
queue
> deque(['Lancelot', 'Arthur'])
queue.popleft()
> 'Lancelot'
queue
> deque(['Arthur'])
queue.popleft()
> 'Arthur'
queue
> deque()
There are often cases where you'll want to count the frequency of elements in a list. Consider a problem where you need to find the frequency of words in a document.
We can use this short paragraph as an example.
paragraph = "I am. And this my trusty servant Patsy. We have ridden the length and breadth of the land in search of knights who will join me in my court of Camelot. I must speak with your lord and master."
First we want to make sure this document only contains words and spaces (no punctuation).
import re
paragraph = re.sub(r'[^\w\s]', '', paragraph)
And then we want to split the paragraph by whitespace and return a list.
words = re.split(r'\s+', paragraph)
And now with our list of words, we can use Counter to give us our word frequencies.
from collections import Counter
counter = Counter(words)
From this we can get the 5 most commonly used words:
counter.most_common(5)
> [('of', 3), ('and', 2), ('in', 2), ('I', 2), ('the', 2)]
Or we can return the frequency of a desired word.
counter['of']
> 3
Python's map function lets you run a function on every element in a list. This is often used in conjunction with lambda functions.
Suppose we have a list of numbers and we want to replace each element with its square.
numbers = [0, 1, 2, 3, 5, 7, 11] squared = list(map(lambda x: x**2, numbers)) squared > [0, 1, 4, 9, 25, 49, 121]
Filter does exactly what its name suggests - it keeps only the elements of a list that satisfy a condition.
Suppose we have a list and we only want to keep numbers greater than 0.
numbers = [-1, 0, -5, 10, 8, -20, -3, 16] positives = list(filter(lambda x: x > 0, numbers)) positives > [10, 8, 16]
Reduce allows you to write a function that accepts two parameters: the first is the accumulated result (carried over from all previous iterations) and the second is the current element. Unlike map and filter, reduce returns a single value. In Python 3 it lives in the functools module.
Suppose we have a list of numbers that we want to multiply together - but only if the number is not 0.
from functools import reduce numbers = [-1, 0, -5, 10, 8, -20, -3, 16] product = reduce(lambda x, y: x*y if y != 0 else x, numbers) product > 384000
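Reduce also accepts an optional initializer, which doubles as a guard against empty input (reduce over an empty list with no initializer raises a TypeError). A hedged sketch of both points:

```python
from functools import reduce

numbers = [-1, 0, -5, 10, 8, -20, -3, 16]

# The same zero-skipping product, seeded with an explicit initializer of 1.
product = reduce(lambda acc, y: acc * y if y != 0 else acc, numbers, 1)
print(product)  # 384000

# With an initializer, an empty list is no longer an error.
empty_product = reduce(lambda acc, y: acc * y, [], 1)
print(empty_product)  # 1
```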
Last but not least, itertools is essential for working with iterable data - like lists. You may not use it often, but it's valuable to know what is available in the module.
I won't cover everything in this step, but here is a link to the documentation for itertools.
But to give an idea for the sorts of things you can do with itertools, check out a few of these examples.
Accumulate allows you to perform some binary function as you iterate through the list. It's very similar to reduce, except that every intermediate result is kept: each element in the resulting iterable is the accumulated value so far.
By default accumulate adds the values.
from itertools import accumulate list(accumulate([1, 2, 3, 4])) > [1, 3, 6, 10]
But you can also pass in a binary function as well.
import operator list(accumulate([1, 2, 3, 4], operator.mul)) > [1, 2, 6, 24]
Compress takes data and a selector iterable, and it returns the values from data only where the corresponding value in the selector evaluates to true.
Suppose you have the string "Lancelot" and you only want to return the characters that correspond to 1's in the following list: [1, 1, 1, 0, 0, 0, 1, 1].
You could write:
from itertools import compress list(compress('Lancelot', [1, 1, 1, 0, 0, 0, 1, 1])) > ['L', 'a', 'n', 'o', 't']
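Two more itertools sketches worth knowing: chain lazily glues iterables together, and islice takes a slice of any iterator without building a list first. The example data here is our own:

```python
from itertools import chain, islice

knights = ['Robin', 'Lancelot']
squires = ['Patsy', 'Concorde']

# chain yields from each iterable in turn, lazily.
print(list(chain(knights, squires)))  # ['Robin', 'Lancelot', 'Patsy', 'Concorde']

# islice slices an iterator: here, just the first three names.
print(list(islice(chain(knights, squires), 3)))  # ['Robin', 'Lancelot', 'Patsy']
```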
Pupils learn how to take risks, becoming resourceful, innovative, enterprising and capable citizens. Through the evaluation of past and present design and technology, they develop a critical understanding of its impact on daily life and the wider world. High-quality design and technology education makes an essential contribution to the creativity, culture, wealth and well-being of the nation.
The design technology curriculum aims to ensure that all pupils:
- develop the creative, technical and practical expertise needed to perform everyday tasks confidently and to participate successfully in an increasingly technological world
- build and apply a repertoire of knowledge, understanding and skills in order to design and make high-quality prototypes and products for a wide range of users
- critique, evaluate and test their ideas and products and the work of others
- understand and apply the principles of nutrition and learn how to cook.
At Key Stage 3, through a variety of creative and practical activities, pupils are taught the knowledge, understanding and skills needed to engage in an iterative process of designing and making. They undertake projects from a range of domestic and local contexts [for example, the home, health, leisure and culture], and industrial contexts [for example, engineering, manufacturing, construction, food, energy, agriculture (including horticulture) and fashion].
The Australian Flag was first flown on 3rd September 1901. Entries were invited for a competition to design the country's flag. A design was approved in 1903 by King Edward VII, while the modern 7-pointed Commonwealth Star version was adopted on 8th December 1908. Its dimensions were officially gazetted in 1934, and the Flags Act of 1953 gave the flag the official status of "Australian National Flag", which was recognized and officially approved by the British sovereign on 14th February 1954.
Australia's national flag consists of a dark blue field and features three primary components, namely the Commonwealth Star, the Southern Cross, and the Union Jack. In the upper left corner (hoist-side quadrant), the Union Jack represents Australia's association with Great Britain (UK). Directly below the Union Jack, in the lower hoist-side quadrant, is a large seven-pointed white star known as the Commonwealth or Federation Star. The star has one point for each of the six original states of Australia in 1901; the seventh point, added in 1909, denotes all the internal and external territories of the Commonwealth of Australia. On the fly half is a representation of the Southern Cross constellation in white, with one small five-pointed star and four larger seven-pointed stars. The stars are a reminder of Australia's geographical location, as this constellation can only be seen from the southern hemisphere. The flag has a height-to-length ratio of 1:2.
In 1901, the first Prime Minister of Australia, the Rt Hon Sir Edmund Barton, announced an international competition to design a flag for the new Commonwealth of Australia. The flag was flown for the first time on 3rd September 1901 at the Exhibition Building in Melbourne. In 1903 King Edward VII approved two designs for the flag of Australia: the Commonwealth blue ensign and the Commonwealth red ensign for the Merchant Navy. The Flags Act 1953 subsequently proclaimed the Australian blue ensign as the Australian National Flag and the Australian red ensign as the flag for merchant ships registered in Australia. Australia also has several other official flags including the Australian Aboriginal Flag, the Torres Strait Islander Flag, and the ensigns of the Australian Defence Force.
The Commonwealth Coat of Arms is the formal symbol of the Commonwealth of Australia and was officially adopted on September 19, 1912, as granted by King George V. It consists of a shield portraying the badges of the six Australian states, enclosed by an ermine border. The three states on the top half (from left to right) are New South Wales, Victoria, and Queensland; the bottom half (from left to right) shows South Australia, Western Australia, and Tasmania. The border of the shield symbolizes the federation of the states, which took place in 1901. The shield is held up by two native Australian animals, the red kangaroo and the emu. They symbolize the nation's progress and an unwillingness to back down, based on the popular belief that neither animal can walk backwards. Above the shield is the crest, a seven-pointed gold Commonwealth Star on a blue and gold wreath. Six of the star's points represent the states of the Commonwealth and the seventh point represents the territories of Australia. Gold and blue are the Commonwealth Coat of Arms' identifying colors. The background contains a wreath of golden wattle (Acacia pycnantha), the official national floral emblem of Australia. At the bottom of the Coat of Arms is a scroll containing the name 'Australia'.
The red kangaroo (Macropus rufus) is the national animal of Australia. It is the largest terrestrial mammal endemic to Australia and is found all across the mainland. It is known for its excellent ability to adapt to the harsh environments prevailing across large parts of Australia. Due to its stable populations and widespread distribution, it is a "Least Concern" species on the IUCN Red List. Red kangaroos have front limbs with small claws and two muscular and robust hind-limbs that are mainly used for jumping. They also have a strong tail which is used for support when standing in an upright position and balance while leaping. In a typical leap, the male kangaroo can cover 8-9m while reaching heights of 1.8-3m.
The emu is another animal endemic to Australia. It is the second tallest bird in the world and the largest native bird in Australia. The flightless bird is found throughout most of Australia. Its stable populations and wide range have placed it in the IUCN conservation category of "Least Concern" alongside the kangaroo. The emu is an important animal in Australian Aboriginal mythology, included in creation myths and cultural dances. Although flightless, emus have specialized musculature that enables them to run very quickly. They also use their large wings to stabilize themselves while running. Their average stride is 3.3 ft while walking but can reach 9 ft at a gallop, with the bird reaching speeds of almost 50 km/h and covering large distances.
The National Anthem of Australia is “Advance Australia Fair”. Scottish-born composer Peter Dodds McCormick composed the music and penned the lyrics in 1878. The National Anthem was officially adopted on April 19, 1984.
“God Save the Queen” which had been the National Anthem from 1788-1974 was designated as the “Royal Anthem” from 1984 onwards; only to be played at events attended by any member of the Royal Family. The National Anthem of Australia is also officially one of the symbols of Australia and hence is treated with respect and dignity. The National Anthem identifies Australia at home and overseas and is used at official public ceremonies, sporting and community events.
Australians all let us rejoice,
For we are young and free;
We've golden soil and wealth for toil;
Our home is girt by sea;
Our land abounds in nature's gifts
Of beauty rich and rare;
In history's page, let every stage
Advance Australia Fair.
In joyful strains then let us sing,
Advance Australia Fair.
Beneath our radiant Southern Cross
We'll toil with hearts and hands;
To make this Commonwealth of ours
Renowned of all the lands;
For those who've come across the seas
We've boundless plains to share;
With courage let us all combine
To Advance Australia Fair.
In joyful strains then let us sing,
Advance Australia Fair.
The Australian Dollar (A$ or AU$) is currently the official currency of the Commonwealth of Australia and its territories, including Christmas Island, the Cocos (Keeling) Islands, and Norfolk Island; it is also used by the independent states of Kiribati and Nauru. The Australian Dollar was first issued on February 14th, 1966, replacing the Australian Pound.
The Australian Dollar is divided into subunits known as cents, with 1 Australian Dollar being equivalent to 100 cents. It has been the 5th most traded currency in the world, behind the US dollar, the euro, the yen, and the pound sterling.
The Australian Dollar is issued by the Reserve Bank of Australia, and the currency comes in minted coins and banknotes. The coinage is issued in A$2, A$1, 50-cent, 20-cent, 10-cent, and 5-cent denominations. The banknotes in circulation comprise A$100, A$50, A$20, A$10, and A$5 denominations.
The Holey Dollar was the first currency to be struck in Australia, in the then colony of New South Wales. The Holey Dollar was essentially a Spanish Dollar with the middle part of the coin punched out to form two types of coinage: the larger coin with a hole was known as the Holey Dollar and the small coin was called the dump. Due to the alteration of the original Spanish Dollar during the production of the Holey Dollar, the currency was only valuable within New South Wales and was not used anywhere else. A sudden decline in the supply of the Spanish Dollar in the early 19th century caused the British Empire to reconsider the usage of the Holey Dollar in its New South Wales colony. In 1816, the United Kingdom introduced the gold standard, which led the British Empire to adopt the use of sterling coinage in all of its colonies. The Bank of New South Wales was subsequently formed in 1817 and began issuing sterling pound banknotes. Other regions also began minting their own distinct currency: Sydney commenced the minting of sovereigns and half-sovereigns in 1855, Adelaide's Government Assay Office began issuing gold pound coins in 1852, and the Government of Queensland issued its own banknotes in 1893. In the early 20th century, the Australian Government established policies in line with the adoption of a common national currency. The Australian Pound was adopted in 1910 and was made up of subunits known as shillings, with 20 shillings being equivalent to 1 Australian Pound. The currency was replaced in 1966 by the Australian Dollar.
Statistics: Cereal production (metric tons)
|Date||1961 - 2018|
|Previous value||20,339,248 (2017)|
Definition: Cereal production (metric tons)
Production data on cereals relate to crops harvested for dry grain only. Cereal crops harvested for hay or harvested green for food, feed, or silage and those used for grazing are excluded.
Timeline - Iran: Cereal production (metric tons) (1961 - 2018)
Development relevance: Cereal production (metric tons)
The Food and Agriculture Organization (FAO) estimates that cereals supply 51 percent of calories and 47 percent of protein in the average diet. The total annual cereal production globally is about 2,500 million tons. FAO estimates that maize (corn), wheat and rice together account for more than three-fourths of all grain production worldwide.

In developed countries, cereal crops are universally machine-harvested, typically using a combine harvester, which cuts, threshes, and winnows the grain during a single pass across the field. In many industrialized countries, particularly in the United States and Canada, farmers commonly deliver their newly harvested grain to a grain elevator or a storage facility that consolidates the crops of many farmers. In developing countries, a variety of harvesting methods are used in cereal cultivation, depending on the cost of labor, from small combines to hand tools such as the scythe or cradle.

Crop production systems have evolved rapidly over the past century and have resulted in significantly increased crop yields, but have also created undesirable environmental side-effects such as soil degradation and erosion, pollution from chemical fertilizers and agrochemicals, and a loss of biodiversity. Factors such as the green revolution have led to impressive progress in increasing cereal yields over the last few decades. This progress, however, is not equal across all regions, and continued progress depends on maintaining agricultural research and education. The cultivation of cereals varies widely in different countries and depends partly upon the development of the economy. Production depends on the nature of the soil, the amount of rainfall, irrigation, quality of seeds, and the techniques applied to promote growth.
Limitations and exceptions: Cereal production (metric tons)
Data on cereal production may be affected by a variety of reporting and timing differences. Millet and sorghum, which are grown as feed for livestock and poultry in Europe and North America, are used as food in Africa, Asia, and countries of the former Soviet Union. So some cereal crops are excluded from the data for some countries and included elsewhere, depending on their use. The data are collected by the Food and Agriculture Organization (FAO) of the United Nations through annual questionnaires and are supplemented with information from official secondary data sources. The secondary sources cover official country data from websites of national ministries, national publications and related country data reported by various international organizations. The FAO tries to impose standard definitions and reporting methods, but complete consistency across countries and over time is not possible. Thus, data on agricultural land in different climates may not be comparable. For example, permanent pastures are quite different in nature and intensity in African countries and dry Middle Eastern countries. The data are collected from official national sources.
Statistical concept and methodology: Cereal production (metric tons)
A cereal is a grass cultivated for the edible components of their grain, composed of the endosperm, germ, and bran. Cereal grains are grown in greater quantities and provide more food energy worldwide than any other type of crop; cereal crops therefore can also be called staple crops. Cereals production data relate to crops harvested for dry grain only. Cereal crops harvested for hay or harvested green for food, feed, or silage and those used for grazing are excluded. The Food and Agriculture Organization (FAO) allocates production data to the calendar year in which the bulk of the harvest took place. Most of a crop harvested near the end of a year will be used in the following year. |
HWST 130 : Hula ‘Ōlapa: Traditional Hawaiian Dance
In this class students will learn various beginning traditional hula interpretations. Students will be taught the basic footwork and hand gestures of traditional hula accompanied by chanting, Ipu Heke (double gourd) or Pahu (drum). Students may also be required to make accompanying instruments like Ipu (smaller single gourd), Kala‘au (sticks), ‘Ili‘ili (stones), and Pū‘ili (split bamboo), and learn accompanying oli (chants) under the direction of the class Instructor. Students will be taught different historical aspects of specific hula, associated hula mythology, ali‘i (chiefly) genealogies, plants and place names.
- Gain a basic understanding of the differences between traditional and more modern styles of hula, including the significance of hula as part of Hawaiian culture in traditional times.
- Learn the histories and mythologies behind the creation and performance of various hula.
- Learn how to perform several hula in unison, and the relationship between movements with the significance of lyrical content in a mele or oli combined with the occasions for which one is dancing.
- Learn how to prepare adornments for their specific hula. |
Grammar is the foundation of good English language skills, and our class 1 English grammar worksheets are valuable in teaching grammar to kids. You can practice, check answers and upload your sheets for free using SchoolMyKids worksheets for kids.
Vocabulary and word usage worksheets for grade 2.
English worksheets for grade 2: Developing Reading Power grade 2. Save and download English worksheets for second grade in PDF. Learn to use addition and subtraction within 100 and also solve simple word problems in these grade 2 worksheets.
Letters, words and sounds - grade 2 English worksheet. These worksheets introduce students to parts of speech, punctuation and related concepts which form the building blocks for writing proper sentences. Tap into the 2nd std worksheets for different subjects and learn all the topics in them.
Worksheets > vocabulary > grade 2. Here’s our grade 2 compilation of worksheets, workbooks, and lessons for download. Grade 2 english worksheets curriculum:
It eases the experience of homeschooling for the parents as well. Topics include common and proper nouns, singular and plural nouns, irregular nouns and collective nouns. CBSE worksheets for class 2 English:
20 English worksheets for grade 2. Some of the worksheets for this concept include spelling choices, adjectives, second grade reading comprehension work, contractions for grade 2, English home language work, and a big grammar book. Along with the class 2 English grammar worksheets, there are many worksheets that help kids to understand the use of why, how, when and where while asking questions.
Rooms in the house (other contents) - grade 2 English worksheet. Based on the Singaporean math curriculum for second graders, these math worksheets are made for students in grade level 2.
A collection of English ESL worksheets for home learning, online practice, distance learning and English classes for grade 2. The worksheets in this section will have students working on improving their reading comprehension within appropriately leveled literature. However, students in other grade levels can also benefit from doing these worksheets.
It is easy to download to your device and print. Second grade, Lesson 7 - Cognates: vital information. Grade/level: English as a Second Language (ESL).
Our grade 2 math worksheets are free and printable in PDF format. Free printable English second grade worksheets help younger kids learn and practice their concepts related to English. Worksheets > grammar > grade 2.
Grade 2 English worksheet: grade 2 worksheets for reading comprehension. A brief description of the worksheets is on each of the worksheet widgets.
The English worksheets for class 2 also help learners to understand the usage of specific words such as who, what, which, must, mustn't etc. Reading comprehension worksheets for grade 2: all of the work on this page is focused on the grade 2 core reading standards. Work with equal groups to learn the basics of multiplication.
When you first use the worksheet templates with your child, you should read the instructions carefully. This page contains all our printable worksheets in the reading comprehension section of second grade English language arts. As you scroll down, you will see many worksheets for literary response and analysis, comprehension and analysis, structural features of informational materials, and more. The worksheets here are generally suitable for students studying the IB (PYP), Singapore Math, Cambridge Primary, UK National, K12 Common Core, Australian, New Zealand and other international curricula.
Quality free printables for students, teachers, and homeschoolers. English worksheets for grade 2 are a great resource for kids, because learners can work through their own worksheets to tackle problems they may be having. We have lots of English worksheets for grade 2.
Worksheets for class 2 | CBSE second grade printable worksheets. Workbook/grade 2 practice book. Rooms in the house worksheet, available as a PDF download.
English worksheets, printables and topics for second grade. You will need to print out the second grade verb worksheets when you are working through second grade.
Math in English exercise book: this workbook with grade 2 English worksheets is available in PDF format. This page is filled with over 300,000+ pages of grade 2 worksheets, games, and activities to make learning math, English/language arts, science, social studies, art, Bible, music, and more fun!
English is a complex language, and there are going to be lots of grammar rules that you will have to learn along the way. Feel free to print them. If the student has difficulty, ask the class to help him/her out.
Second grade, Lesson 7: Cognates. Subjects like moral science, Hindi and English start to come fast and furious in 2nd grade. One of the best teaching strategies employed in most classrooms today is worksheets.
Ask for a volunteer to come up to the board and draw a line matching the english word with its spanish equivalent. Circle the word that has the same meaning. Please check the following links and download high quality printable pdf files.
English worksheets for grade 1 aim to teach children the basics of language usage. Choose the word that has the same meaning as that in a sentence.
Practice the questions in class 2 printable worksheets and clear the exam with better grades. The teaching method here is focused more on the grammatical rules of the English language than on speaking.
It’s no secret that students learn better through hands-on projects, particularly in subjects like science, technology, engineering and math. Science experiments are a great way to elevate classroom participation and pique student interest in concepts. When students aren’t just listening, but talking and moving, they activate multiple areas of the brain, making it more likely they’ll retain the information.
A little fun in the classroom—and maybe a little mess—never hurt.
We started a new Science Club. The club is a weekly lesson where we do practical experiments to give the children first-hand experience of what science is and how it works. This is a new module that we have implemented, and we will conduct classes every week. It is largely about discovery and trial and error, experiencing first-hand what science is as the mystery unfolds in the experiments conducted in class.
Question: How Do Plants Get Rid of Extra Water?
When we get hot we sweat to lower our body temperature. Plants also sweat, but not because they are hot. They perspire to get rid of extra water that they have no use for. Water travels through a plant from its roots to its leaves. Any excess water is lost through pores in the plant's leaves. When people sweat, it's called perspiration. When plants sweat, it's called transpiration. The leaves have tiny holes called stomata.
The planet is becoming greener, literally. But certain areas have in fact become browner. Climate change is a major cause.
That the Earth has become greener in the past decade due to global warming has already been known for some time. Higher temperatures and more precipitation make plants grow faster. This can be clearly mapped from space by using satellites to measure reflected sunlight. Plants use some of the sunlight for photosynthesis. Infrared light bounces back. The amount of reflected infrared light is therefore a measure of plant activity. De Jong has developed new calculation tools to show this activity more accurately.
The picture as a whole is clear: the planet is getting greener and plants are becoming more productive. But this general picture looks very different when viewed in more detail, says De Jong. 'While the northern hemisphere is becoming greener, in contrast, parts of the southern hemisphere are not.' Plant activity there is decreasing. Land degradation is a major cause. But these tendencies towards discolouration are also in a process of change and can even be reversed, says De Jong. His calculations show that this has indeed taken place in 15 percent of the Earth's surface in the past decade. 'Browning' changes to 'greening' and vice versa.
Monitoring from space enables changes in plant growth on Earth to be better mapped. But it does not provide explanations. 'You can measure vegetation activity. But what lies behind it all - the underlying processes - is very diverse.' De Jong made a first attempt towards providing an explanation by establishing links to climate change. More than half of the changes observed - particularly those in forested areas - could be caused by global warming.
Global warming prolongs the growth season in the northern hemisphere. But this extension does not necessarily mean that plant growth increases at the same rate. This is evident, says De Jong, when the change in growth is mapped for a period. 'Within a growth season, the so-called photosynthesis intensity decreases. This indicates that there are other factors which limit growth, such as the availability of water or nutrients. This can be seen especially in the northern hemisphere at the end of a growth season. Ecologists call this 'late summer stress'. |
Calculate the concentration of H+ ions in a 0.010 M aqueous solution of sulfuric acid.
Write out the balanced equation for the dissolution, or dissociation, of sulfuric acid in water. The balanced equation is: H2SO4 + 2H2O -> 2H3O+ + SO4(2-). The equation shows that for the dissolution of one mole of sulfuric acid in water, 2 moles of hydronium ions and 1 mole of sulfate ions are produced in the reaction. For a solution of sulfuric acid with an initial concentration of 0.01 M, this means that there are 0.01 moles of sulfuric acid in 1 liter of solution.
Multiply the initial acid concentration by the coefficients to determine the individual concentrations of the ions. The coefficients are the numbers before the formulas in the balanced chemical equation. Formulas without numbers before them have a coefficient of 1. This means that the initial acid concentration is multiplied by 1 to determine the molarity of sulfate ions in solution: 1 x 0.01 mole = 0.01 mole SO4(2-). The initial concentration is multiplied by 2 to determine the concentration of hydronium ions in solution: 2 x 0.01 mole = 0.02 mole H3O+.
Multiply the initial acid concentration by 3 to determine the total ionic concentration of the 0.01-mole sulfuric acid solution. Because one mole of the acid produces a total of 3 moles of ions, the total ionic concentration is 3 x 0.01 moles = 0.03 moles of ions.
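The arithmetic above is simple enough to sketch in a few lines of Python (assuming, as the text does, complete dissociation of the strong acid; the variable names are illustrative):

```python
# Ion concentrations for 0.010 M H2SO4, assuming complete dissociation:
# H2SO4 + 2H2O -> 2H3O+ + SO4(2-)
initial_acid = 0.010  # mol/L

hydronium = 2 * initial_acid  # coefficient 2 for H3O+
sulfate = 1 * initial_acid    # coefficient 1 for SO4(2-)
total_ions = hydronium + sulfate

print(hydronium)             # 0.02
print(sulfate)               # 0.01
print(round(total_ions, 3))  # 0.03
```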
The assumption that sulfuric acid completely dissociates in water is valid because sulfuric acid is a strong acid, and complete dissociation in water is a characteristic of strong acids. Further steps are needed to compute the concentration of ions in solution for a weak acid.
Always follow safety procedures in the laboratory or at any time when handling acids. This includes the use of safety equipment such as laboratory gowns, goggles, gloves and appropriate glassware.
Joshua Suico is a university teacher specializing in chemistry and the life sciences. He holds a Master of Science degree in chemistry. During his college days, he once intentionally dropped sodium pellets into a sink for fun and for science.
Today, the Earth will be at a point in its orbit around the Sun called perihelion: the point in its orbit at which it is closest to the Sun. From now until early July, we’ll be getting farther and farther away from the Sun, after which point we start getting closer again.
The overall change in distance is quite small, comparatively. Today, we’re approximately 3.1 million miles (just shy of 5 million kilometers) closer to the Sun than we will be in July, at aphelion. When you compare that to an average distance of around 93 million miles, you’ll realize why the change in distance is virtually unnoticed by us Earthlings (unless we’re scientists specifically studying the Sun).
That difference in distance has a negligible impact on the temperatures on Earth. It’s the amount of direct sunlight we receive, based on the Earth’s axial tilt, that gives us our seasons and varying temperatures.
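To put a rough number on it: sunlight intensity falls off with the square of distance, so using an average distance of about 93 million miles and a spread of about 3.1 million miles, a back-of-the-envelope sketch shows perihelion delivers only about 7% more sunlight than aphelion:

```python
# Back-of-the-envelope: how much stronger is sunlight at perihelion?
# Distances in millions of miles, approximated from the figures above.
average = 93.0
half_spread = 3.1 / 2
perihelion = average - half_spread
aphelion = average + half_spread

# Solar flux scales as 1/r^2, so compare (aphelion/perihelion)^2.
flux_ratio = (aphelion / perihelion) ** 2
print(round((flux_ratio - 1) * 100, 1))  # ~6.9 (percent more sunlight)
```

That few-percent swing is small next to the effect of axial tilt, which is why it barely registers in our seasons.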
Heatstroke is a state of hyperthermia (core body temperature elevated above the normal range) resulting in thermal injury to tissues. Heatstroke occurs when heat generation exceeds the body’s ability to lose heat. Heatstroke is a very serious condition: it can lead to multiple organ failure and animals can die quickly if not treated. All animals are susceptible to heatstroke so you need to make sure that you take active steps to prevent it.
How should you treat a pet with heatstroke?
- Instigate emergency first aid to help normalise your pet’s body temperature. Apply or spray tepid/cool water onto their fur/skin, followed by fanning to maximise heat loss. Don’t use ice-cold water or ice as this may exacerbate the problem. Wetting down the area around your pet can also help.
- Take your pet to the nearest vet immediately. Heatstroke is a life-threatening emergency, so even if your pet looks like they may be recovering or you just suspect they might have suffered heatstroke they should still always be checked by a vet. Heatstroke can cause organ damage which might not appear straight away. Given the seriousness of this condition, it is better to be safe than sorry and have your pet checked out.
What are the signs of heatstroke?
Signs may vary between individuals, but commonly include:
- Relentless panting (increases as heatstroke progresses)
- Drooling, salivating
- Agitation, restlessness
- Very red or pale gums
- Bright red tongue
- Increased heart rate
- Breathing distress
- Vomiting, Diarrhoea (possibly with blood)
- Signs of mental confusion, delirium
- Dizziness, staggering
- Lethargy, weakness
- Muscle tremors
- Collapsing and lying down
- Little to no urine production
What are the main predisposing factors?
- A warm/hot, humid environment
- Lack of adequate ventilation/air flow
- Lack of adequate shade
- Lack of adequate drinking water
- Excessive exercise
How do you avoid heatstroke for your pets?
You can help to prevent heatstroke by ensuring your pets are kept in appropriate environmental conditions and being aware of the symptoms so action can be taken swiftly.
- Provide your pets with a cool, shaded area with good ventilation at all times – adequate ventilation and air flow are important as many animals cool down via evaporative cooling (panting) which requires adequate air flow.
- Make sure they have plenty of clean fresh water and extra water sources in case of spillage.
- Bring your pets indoors on hot, humid days if the indoor environment is cooler for the animal (e.g. air-conditioning, child-safe fans, open windows where possible and shade).
- Do not exercise your pets in hot, humid conditions. On hot days try to walk your dog very early in the morning or late in the evening when it is cool, and avoid the hottest part of the day. Avoid walking on hot sand, concrete, asphalt areas or any other areas where heat is reflected and there is no access to shade.
- Do not leave your dog in a car or vehicle – even when the windows are down dogs can still overheat and die. The high temperatures in the car combined with inadequate ventilation or air flow mean that the dog cannot adequately thermoregulate leaving them vulnerable to overheating. Animals in these conditions suffer horribly – please don’t risk it. See Just 6 minutes and Dogs Die in Hot Cars for more information.
- Small animals including rabbits, guinea pigs, ferrets, birds, rats and mice are highly susceptible to heatstroke. These animals are often confined in cages and hutches and are unable to move away to cooler places, so they need to be moved into a cool, shaded and well-ventilated area in hot weather. They also require clean, fresh drinking water at all times. On very hot days you may need to bring them into a cool place indoors, for example the laundry.
How do vets help pets with heatstroke?
Vets are trained to assess the severity of the heatstroke and then provide emergency medical treatment as required. They will check your pet’s body temperature and vital signs and then instigate emergency treatment which may include:
- Putting your pet on a drip (intravenous fluids)
- Cooling treatments e.g. cooling enemas
- Supplemental oxygen
- Medication as required
- Blood tests to check organ function
- Ongoing monitoring and treatment as required
More tips for taking care of pets in hot weather
- Dogs travelling on the back of utes are susceptible to burning their footpads and other body parts in contact with the ute tray, which can get very hot in the sun. Owners need to cover the tray with a suitable material to prevent this problem and provide a shaded area.
- Owners need to be aware of sunburn particularly in pets with white, non-pigmented skin and a white-coloured coat.
Other exacerbating factors can include:
- Brachycephalic breeds (short-nosed and flat-faced) e.g. Pugs, English bulldogs, French bulldogs, Pekingese as well as Persian and Himalayan cats.
- Thick/long hair coat
- Extremes in age (young/old)
- Excessive exercise
- Respiratory disease/breathing problems – laryngeal paralysis, collapsing trachea
- Heart problems/Cardiovascular disease
- Neurological disease |
MIT researchers present a new circuit design that could unlock the power of experimental superconducting computer chips and make simple superconducting devices much cheaper to manufacture.
Computer chips with superconducting circuits — circuits with zero electrical resistance — would be 50 to 100 times as energy-efficient as today’s chips, an attractive trait given the increasing power consumption of the massive data centers that power the Internet’s most popular sites.
Superconducting chips also promise greater processing power: Superconducting circuits that use so-called Josephson junctions have been clocked at 770 gigahertz, or 500 times the speed of the chip in the iPhone 6.
But Josephson-junction chips are big and hard to make; most problematic of all, they use such minute currents that the results of their computations are difficult to detect. For the most part, they’ve been relegated to a few custom-engineered signal-detection applications.
In the latest issue of the journal Nano Letters, MIT researchers present a new circuit design that could make simple superconducting devices much cheaper to manufacture. And while the circuits’ speed probably wouldn’t top that of today’s chips, they could solve the problem of reading out the results of calculations performed with Josephson junctions.
The MIT researchers — Adam McCaughan, a graduate student in electrical engineering, and his advisor, professor of electrical engineering and computer science Karl Berggren — call their device the nanocryotron, after the cryotron, an experimental computing circuit developed in the 1950s by MIT professor Dudley Buck. The cryotron was briefly the object of a great deal of interest — and federal funding — as the possible basis for a new generation of computers, but it was eclipsed by the integrated circuit.
“The superconducting-electronics community has seen a lot of new devices come and go, without any development beyond basic characterization,” McCaughan says. “But in our paper, we have already applied our device to applications that will be highly relevant to future work in superconducting computing and quantum communications.”
Superconducting circuits are used in light detectors that can register the arrival of a single light particle, or photon; that’s one of the applications in which the researchers tested the nanocryotron. McCaughan also wired together several of the circuits to produce a fundamental digital-arithmetic component called a half-adder.
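A half-adder, the component McCaughan built from several nTron circuits, adds two one-bit inputs and produces a sum bit and a carry bit. Purely as a software illustration of that logic (this sketches the Boolean function, not the superconducting hardware):

```python
def half_adder(a: int, b: int) -> tuple[int, int]:
    """Add two one-bit inputs; return (sum, carry)."""
    return a ^ b, a & b  # sum is XOR, carry is AND

# Full truth table: only 1 + 1 sets the carry bit.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```

Chaining half-adders (with OR gates) yields full adders, the building blocks of digital arithmetic, which is why demonstrating one is a meaningful milestone for a new switching device.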
Resistance is futile
Superconductors have no electrical resistance, meaning that electrons can travel through them completely unimpeded. Even the best standard conductors — like the copper wires in phone lines or conventional computer chips — have some resistance; overcoming it requires operational voltages much higher than those that can induce current in a superconductor. Once electrons start moving through an ordinary conductor, they still collide occasionally with its atoms, releasing energy as heat.
Superconductors are ordinary materials cooled to extremely low temperatures, which damps the vibrations of their atoms, letting electrons zip past without collision. Berggren’s lab focuses on superconducting circuits made from niobium nitride, which has the relatively high operating temperature of 16 Kelvin, or minus 257 degrees Celsius. That’s achievable with liquid helium, which, in a superconducting chip, would probably circulate through a system of pipes inside an insulated housing, like Freon in a refrigerator.
A liquid-helium cooling system would of course increase the power consumption of a superconducting chip. But given that the starting point is about 1 percent of the energy required by a conventional chip, the savings could still be enormous.
Cheap superconducting circuits could also make it much more cost-effective to build single-photon detectors, an essential component of any information system that exploits the computational speedups promised by quantum computing.
Engineered to a T
The nanocryotron — or nTron — consists of a single layer of niobium nitride deposited on an insulator in a pattern that looks roughly like a capital “T.” But where the base of the T joins the crossbar, it tapers to only about one-tenth its width. Electrons sailing unimpeded through the base of the T are suddenly crushed together, producing heat, which radiates out into the crossbar and destroys the niobium nitride’s superconductivity.
A current applied to the base of the T can thus turn off a current flowing through the crossbar. That makes the circuit a switch, the basic component of a digital computer.
After the current in the base is turned off, the current in the crossbar will resume only after the junction cools back down. Since the superconductor is cooled by liquid helium, that doesn’t take long. But the circuits are unlikely to top the 1 gigahertz typical of today’s chips. Still, they could be useful for some lower-end applications where speed isn’t as important as energy efficiency.
Their most promising application, however, could be in making calculations performed by Josephson junctions accessible to the outside world. Josephson junctions use tiny currents that until now have required sensitive lab equipment to detect. They’re not strong enough to move data to a local memory chip, let alone to send a visual signal to a computer monitor.
In experiments, McCaughan demonstrated that currents even smaller than those found in Josephson-junction devices were adequate to switch the nTron from a conductive to a nonconductive state. And while the current in the base of the T can be small, the current passing through the crossbar could be much larger — large enough to carry information to other devices on a computer motherboard.
“I think this is a great device,” says Oleg Mukhanov, chief technology officer of Hypres, a superconducting-electronics company whose products rely on Josephson junctions. “We are currently looking very seriously at the nTron for use in memory.”
“There are several attractions of this device,” Mukhanov says. “First, it’s very compact, because after all, it’s a nanowire. One of the problems with Josephson junctions is that they are big. If you compare them with CMOS transistors, they’re just physically bigger. The second is that Josephson junctions are two-terminal devices. Semiconductor transistors are three-terminal, and that’s a big advantage. Similarly, nTrons are three-terminal devices.”
“As far as memory is concerned,” Mukhanov adds, “one of the features that also attracts us is that we plan to integrate it with magnetoresistive spintronic devices, mRAM, magnetic random-access memories, at room temperature. And one of the features of these devices is that they are high-impedance. They are in the kilo-ohms range, and if you look at Josephson junctions, they are just a few ohms. So there is a big mismatch, which makes it very difficult from an electrical-engineering standpoint to match these two devices. NTrons are nanowire devices, so they’re high-impedance, too. They’re naturally compatible with the magnetoresistive elements.”
McCaughan and Berggren’s research was funded by the National Science Foundation and by the Director of National Intelligence’s Intelligence Advanced Research Projects Activity.
Publication: Adam N. McCaughan and Karl K. Berggren, “A Superconducting-Nanowire Three-Terminal Electrothermal Device,” Nano Letters, 2014, 14 (10), pp 5748–5753; DOI: 10.1021/nl502629x
PDF Copy of the Study: A superconducting-nanowire 3-terminal electronic device
Image: Adam N. McCaughan |
Ovarian cysts are fluid-filled sacs that develop in or on the ovaries. Many cysts don't cause any symptoms and cause no harm; some cysts require treatment.
Ovarian Cysts Overview
Ovarian cysts are fluid-filled sacs that develop in or on the ovaries. The ovary is the part of a woman’s body that produces eggs. There are several types of ovarian cysts. The most common type of ovarian cyst is a functional cyst, which often forms as a result of the normal function of the menstrual cycle.
Many ovarian cysts don't cause symptoms and cause no harm. Others can cause a variety of symptoms including: pressure, swelling, or pain in the abdomen; pelvic pain; a dull ache in the lower back and thighs; and changes in menstruation.
Ovarian cysts are most often found during routine pelvic exams, although some may need further testing.
Sometimes ovarian cysts are simply watched for a period of time to see if they disappear on their own. However, some ovarian cysts are treated with birth control pills or surgery.
Ovarian Cysts Symptoms
Many ovarian cysts don't cause symptoms. Others can cause:
- Pressure, swelling, or pain in the abdomen
- Pelvic pain
- Dull ache in the lower back and thighs
- Problems passing urine completely
- Pain during sex
- Weight gain
- Pain during your period
- Abnormal bleeding
- Nausea or vomiting
- Breast tenderness
If you have these symptoms, get help right away:
- Pain with fever and vomiting
- Sudden, severe abdominal pain
- Faintness, dizziness, or weakness
- Rapid breathing
Ovarian Cysts Causes
No one knows exactly what causes an ovarian cyst. Some experts think that common ovarian cysts come from a hormonal imbalance. If a woman has a hormonal imbalance, her body will not release eggs (ovulate). In most cases, this imbalance does not last long, and the doctor may want to simply watch you until the cyst goes away.
The most common type of ovarian cyst is a functional cyst.
Functional cysts often form during the menstrual cycle. The two types are:
- Follicle cysts. These cysts form when the sac doesn't break open to release the egg. Then the sac keeps growing. This type of cyst most often goes away in 1 to 3 months.
- Corpus luteum cysts. These cysts form if the sac doesn't dissolve. Instead, the sac seals off after the egg is released. Then fluid builds up inside. Most of these cysts go away after a few weeks. They can grow to almost 4 inches. They may bleed or twist the ovary and cause pain. They are rarely cancerous. Some drugs used to cause ovulation, such as Clomid® or Serophene®, can raise the risk of getting these cysts.
Other types of ovarian cysts are:
- Endometriomas. These cysts form in women who have endometriosis. This problem occurs when tissue that looks and acts like the lining of the uterus grows outside the uterus. The tissue may attach to the ovary and form a growth. These cysts can be painful during sex and during your period.
- Cystadenomas. These cysts form from cells on the outer surface of the ovary. They are often filled with a watery fluid or thick, sticky gel. They can become large and cause pain.
- Dermoid cysts. These cysts contain many types of cells. They may be filled with hair, teeth, and other tissues that become part of the cyst. They can become large and cause pain.
- Polycystic ovaries. These cysts are caused when eggs mature within the sacs but are not released. The cycle then repeats. The sacs continue to grow and many cysts form. For more information about polycystic ovaries,
Ovarian Cysts Diagnosis
Ovarian cysts are often felt, if large enough, during routine pelvic exams. Once a cyst is found, tests are done to help plan treatment. Tests include:
- An ultrasound. This test uses sound waves to create images of the body. With an ultrasound, the doctor can see the cyst's:
- Mass — if it is fluid-filled, solid, or mixed
- A pregnancy test. This test may be given to rule out pregnancy.
- Hormone level tests. Hormone levels may be checked to see if there are hormone-related problems.
- A blood test. This test is done to find out if the cyst may be cancerous. The test measures a substance in the blood called cancer-antigen 125 (CA-125). The amount of CA-125 is higher with ovarian cancer. But some ovarian cancers don't make enough CA-125 to be detected by the test. Some noncancerous diseases also raise CA-125 levels. Those diseases include uterine fibroids and endometriosis. Noncancerous causes of higher CA-125 are more common in women younger than 35. Ovarian cancer is very rare in this age group. The CA-125 test is most often given to women who:
- Are older than 35
- Are at high risk for ovarian cancer
- Have a cyst that is partly solid
Living With Ovarian Cysts
Be sure to have regular pelvic examinations in order to help ensure that changes in your ovaries are diagnosed as early as possible.
Pay attention to your menstrual cycle. Watch for symptoms that are not normal for you. Talk with your doctor about any changes that concern you.
Ovarian Cysts Treatments
Watchful waiting. If you have a cyst, you may be told to wait and have a second exam in 1 to 3 months. Your doctor will check to see if the cyst has changed in size. This is a common treatment option for women who:
- Are in their childbearing years
- Have no symptoms
- Have a fluid-filled cyst
It may be an option for postmenopausal women.
Surgery. Your doctor may want to remove the cyst if you are postmenopausal, or if it:
- Doesn't go away after several menstrual cycles
- Gets larger
- Looks odd on the ultrasound
- Causes pain
The two main surgeries are:
- Laparoscopy – Done if the cyst is small and looks benign (noncancerous) on the ultrasound. While you are under general anesthesia, a very small cut is made above or below your navel. A small instrument that acts like a telescope is put into your abdomen. Then your doctor can remove the cyst.
- Laparotomy – Done if the cyst is large and may be cancerous. While you are under general anesthesia, larger incisions are made in the abdomen to remove the cyst. The cyst is then tested for cancer. If it is cancerous, the doctor may need to take out the ovary and other tissues, like the uterus. If only one ovary is taken out, you may still be fertile and your body can still produce estrogen.
Ovarian Cysts Related Medications
Low-dose birth control pills stop a woman's body from releasing eggs (ovulation), allowing time for the cyst to go away on its own before the body resumes its regular cycle.
You can also use Depo-Provera. It is a hormone that is injected into muscle. It prevents ovulation for 3 months at a time.
If the cyst does not go away, or if it grows larger, then surgery may be considered.
Ovarian Cysts Prognosis
The good news is that most cysts:
- Don't cause symptoms
- Are not cancerous
- Go away on their own
Talk to your doctor or nurse if you notice:
- Changes in your period
- Pain in the pelvic area
- Any of the major symptoms of cysts
Most functional ovarian cysts occur during childbearing years. And most of those cysts are not cancerous. Women who are past menopause (ages 50–70) with ovarian cysts have a higher risk of ovarian cancer. At any age, if you think you have a cyst, see your doctor for a pelvic exam. |
In today’s information warfare, secure communication has become a vital part of any organization, especially firms related to national security and the military. The critical information these firms hold is highly confidential and sensitive, because its exposure to adversaries can be disastrous. General Paul M. Nakasone, Commander of U.S. Cyber Command, stated that we face a challenging and volatile cyber threat environment, and that cyber threats to our national security interests and critical infrastructure rank at the top of the list.
In addition, state-sponsored attacks have become the norm. According to the U.S. Department of Defense (DoD), China has sought to erode U.S. military overmatch and the economic vitality of the nation by persistently infiltrating the critical information systems of U.S. private and public sector institutions.
What are the ways to ensure secure digital communication today? How can organizations prevent data breaches and ensure the confidentiality of sensitive information? How can they avoid penalties for non-compliance with regulations such as the GDPR? Cryptography and PKI help answer these questions. In this article, we will learn how cryptography helps individuals and organizations conduct secure communication and ensure data confidentiality, integrity, and authentication.
How Does Encryption work?
Encryption is a two-way process and an essential part of cryptography. It converts electronic data and information (or “plaintext”) into code known as “ciphertext,” which is not in human-readable form. When a sender sends a message, they encrypt it to convert it into unreadable ciphertext. When the intended receiver receives the message at its destination, they decrypt it to reconvert it into readable plaintext. Both sender and receiver perform this exchange using cryptographic keys, known as the public key and the private key. There are two types of modern encryption: symmetric key encryption and asymmetric key encryption. Asymmetric key encryption is also referred to as public key encryption.
Since the message is in encrypted form, attackers cannot read it: they lack the key needed to unlock it and convert it back into readable plaintext.
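The round trip described above — plaintext in, ciphertext over the wire, plaintext out — can be sketched with a toy symmetric cipher. This example uses a one-time-pad-style XOR with a random shared key purely to illustrate the encrypt/decrypt cycle; it is not production cryptography. Real systems use vetted algorithms such as AES (symmetric) or RSA (asymmetric), usually via an established library:

```python
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR each plaintext byte with the corresponding key byte (toy cipher)."""
    return bytes(p ^ k for p, k in zip(plaintext, key))

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """XOR is its own inverse, so decryption reuses the same operation."""
    return encrypt(ciphertext, key)

message = b"Attack at dawn"
key = secrets.token_bytes(len(message))  # shared secret key, same length as message

ciphertext = encrypt(message, key)           # unreadable without the key
assert decrypt(ciphertext, key) == message   # receiver recovers the plaintext
```

Without the key, an interceptor sees only random-looking bytes, which is exactly the confidentiality property the article describes.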
How Does Cryptography Ensure Data Confidentiality?
To understand this concept, first, we need to understand what data confidentiality is. Data confidentiality is the act of protecting data against unlawful, unintentional, or unauthorized access, theft, or disclosure. To ensure data confidentiality, security analysts apply an encryption technique. Encryption ensures data confidentiality by preventing attackers from intercepting data during its transmission over a network. According to the Ponemon Institute’s 2018 Global Encryption Trends study, “43 per cent of enterprises now have a company-wide and consistent encryption strategy.”
How Does Cryptography Help to Achieve Data Integrity?
Data integrity is the act of securing data and information from deliberate change, damage, or manipulation. The methods used to ensure data integrity include hashing (e.g., MD2, MD4, MD5, and SHA-1, though these older algorithms are now considered insecure and modern systems prefer SHA-256 or stronger), digital signatures, and non-repudiation. A digital signature prevents tampering with and impersonation of data while it is in transit. Non-repudiation is a security service that ensures a sender cannot deny the authenticity of a message sent to the recipient; likewise, the recipient cannot deny having received that message. Non-repudiation is achieved through cryptography.
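The hashing idea is simple to demonstrate: even a small change to the data produces a completely different digest, which is how a recipient detects tampering. A minimal sketch using Python's standard library and SHA-256 (the message contents here are illustrative placeholders):

```python
import hashlib

original = b"Transfer $100 to account 12345"
tampered = b"Transfer $900 to account 12345"

digest_original = hashlib.sha256(original).hexdigest()
digest_tampered = hashlib.sha256(tampered).hexdigest()

# The sender transmits the message plus its digest; the recipient recomputes
# the digest and compares. Any modification in transit changes the hash.
print(digest_original)
print(digest_tampered)
assert digest_original != digest_tampered
```

Note that a bare hash only detects accidental or naive tampering; an attacker who can alter the message can also recompute the hash, which is why digital signatures and MACs bind the digest to a key.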
How to Achieve Authentication Using Cryptography and PKI?
Cryptography can be employed for authentication using digital signatures, digital certificates, or a Public Key Infrastructure. Authentication is one of the most widely used security approaches for websites; logging in with a username and password is a familiar example.
Though data and sensitive information are vulnerable to sophisticated cyber threats, the use of cryptography and PKI can help organizations conduct secure communication and transmission of such data. Cryptography ensures data confidentiality and integrity, as well as authentication of authorized users. In 2019, security analysts believe that cryptography will help to reduce data breaches. |
Times Tables and Number Facts
These are the most important areas of maths for you to work on at home. Your child will need to know all of their times tables by the end of Year 4. We would like you to spend time every week working with your child on these.
For all ages, children will only learn their key number facts by saying them repeatedly. Support your child to chant the times tables, ask them quick recall questions, count in multiples of the number forwards and backwards.
We have bought Times Tables Rock Stars and PiXL Times Tables to support your child's learning. All children from Y2-Y6 have a login for these apps. Please ask at the office if you need help with using these really useful resources.
These are the key number facts that your child must know:
Y1- number bonds within 10 then 20, counting in 2s, 5s and 10s.
Y2- recall and use multiplication and division facts for 2, 5 and 10 times tables. Number bonds within 20.
Y3- recall and use multiplication and division facts for 3, 4 and 8 times tables. Number bonds within 100.
Y4 – recall and use multiplication and division facts for 6, 7 and 9 times tables.
Y5- recall and use multiplication and division facts for 11 and 12 times tables. Use knowledge of number bonds in decimal calculations e.g. 3 – 2.3 =
Y6- Use all multiplication and division facts in decimal questions e.g. 3x0.7= or 1.8÷6=
All children will be tested periodically in class on all of the times tables that they should know. You will be kept informed about their progress. |
– The Great Depression was a severe worldwide economic depression in the decade preceding World War II
– Two Big Shifts in Aggregate Demand: The Great Depression and World War II. From 1929 to 1933 (The Great Depression), GDP fell by 27 percent. From 1939 to 1944 (World War II), the economy’s production of goods and services almost doubled
– In most countries, it started around 1930 and lasted until the late 1930s or mid-1940s. This period consists of a decline in economic activity (1929–33) followed by a recovery (1934–39). It was the longest, most widespread, and deepest depression of the 20th century.
– The depression originated in the U.S., after the fall in stock prices that began around September 4, 1929, and became worldwide news with the stock market crash of October 29, 1929 (known as Black Tuesday). Sharp asset price declines: the stock market fell 13% on October 28, 1929, and fell 89% by 1932.
– Personal income, tax revenue, profits and prices dropped, while international trade plunged by more than 50%. Unemployment in the U.S. rose to 25%, and in some countries rose as high as 33%.
– Over 1/3 of all banks failed by 1933, due to loan defaults and a bank panic. Over 9,000 banks closed and the money supply fell 28% from 1929 to 1933. This drop in the money supply may have caused the Great Depression; it certainly contributed to its severity. During that period, business investment fell nearly 80%, consumption of durable goods declined almost 55%, and consumption of nondurable goods and services declined almost 29%.
– A credit crunch and uncertainty caused a huge fall in consumption and investment, and falling output magnified these problems. The Federal Reserve allowed the money supply to fall, creating deflation, which increased the real value of debts and increased defaults.
– Figure: US real GDP per capita trend (in 2000 dollars)
– Figure: Unemployment and real GNP during the Great Depression
Primary and Intermediate
Children from the ages of 6 to 12 are in the sensitive period for imagination. Imagination is the ability to relate the known to the unknown and is the basis of life. The child’s interest turns from the development of the individual to that of social and cultural development. Through the Montessori philosophy we seek to open the mind of the child. The Blake School’s program enables each child to discover true values through unbiased investigation and self-awareness. Education should not merely be to acquire knowledge, to gather and correlate facts, but to cultivate an integrated outlook on life.
Reading “Literacy is the essential tool of a liberal education, thus learning to read is a crucial aspect of learning to learn.” Reading begins with the decoding of graphic symbols and always involves a search for meaning. Comprehension is the goal of all reading.
Creative writing provides the child with an opportunity to utilize his imagination and creative ability. Computer skills will be developed and/or reinforced through this academic process.
In the Montessori School, mathematics is introduced and is practiced with the utilization of concrete materials. The “rule” is to be the reaching point, not the starting point. The child will reach abstraction only when the mind is capable of doing it without the hand. The objective of the math program is not only to add, subtract, multiply and to divide numbers, but also to understand the meaning for such.
The study of science stimulates and guides students in understanding the constantly changing and growing forces, processes, materials and living things in the world around them. To observe and appreciate the relationships and interactions taking place in nature, the student will be exposed to the study of life, earth, chemical and physical science.
Social studies encompasses the study of history, geography and human society. Learning experiences are related to the social growth and development of the child, seeking to help them understand man’s relationship to the world he inhabits.
Handwriting is the development of refined eye-hand coordination and should be considered artistic ability. As a graphic representation of one’s self, the child is encouraged to take pride in the product resulting from this ability. |
Grapes were already growing in the United States when the settlers first arrived. There are at least two dozen species that occur in the United States. They are easily recognized by their woody vines that climb high into trees, often at the edges of woods. The forked tendrils and heart-shaped leaves distinguish them from other vines.
Wild grapes are the ancestors of cultivated grapes and are edible, as are the seeds. Young leaves can also be eaten. All are edible, but some are sweeter or larger than others. Most grapes are sweet to eat straight from the vine and make excellent jelly, pies, and wine. Since the fruits contain lots of natural pectin, jelly can be made without purchasing commercial pectin, and honey can be substituted successfully for sugar.
The most popular wild grape in areas where it grows is the muscadine grape. The fruits are larger than other wild grapes, with a thick, tart skin and sweet, juicy pulp. Scuppernong grapes that are cultivated in backyards or grown commercially are a variety of the muscadine and have light-colored skin that is thinner than the muscadine. The scuppernong also grows wild and is sometimes referred to as the blond muscadine.
Muscadines begin ripening in late summer and continue into the fall, hanging in small clusters rather than large bunches like other wild grapes. They can be gathered rather quickly by holding a bucket under the clusters and picking them by the handful, letting them drop into the bucket as they fall. Muscadines can be found from Delaware south to Florida and west to Texas and Oklahoma.
Wild grapes contain seeds that are crunchy and somewhat woody tasting. Regardless, eat them as well. Grape seeds, as well as the leaves and skin, provide a rich source of resveratrol, an anti-aging compound. Instead of buying grape seed extract, eat the wild grapes, seeds and all.
There are other grape species that are more tolerant of colder temperatures and grow farther north than the muscadines. Fox grapes can be found as far north as Canada and west to Wisconsin and Michigan. The leaves are wider than other grapes with either shallow lobes or no lobes. Summer grapes have smaller fruits that are less sweet than other wild grapes. Look for the deep lobes on some of the leaves to distinguish this grape from others. Winter grapes also have small berries that get sweeter after a frost.
Oregon Grape—Not a True Grape
In the west is a fruit that looks like a grape but is not one. Its common name is Oregon grape, and it is in the barberry family. The name refers to the fruits, which grow in clusters and turn dark bluish-purple when they ripen in the fall. The leaves are dark green and shiny, somewhat resembling the leaves of American holly. The fruits are best made into jams or jellies.
Arch pain, also known by the medical term plantar pain, refers to pain in the arch at the bottom of the foot. This pain can present as a result of various causes, usually following activities that involve significant stress to the arch of the foot.
The arches of the feet are a principal structure of the foot, playing an important role in absorbing and returning force between the body and the ground and supporting bodily movement when people are on their feet. However, when the arches are put under excessive stress from intense movement or extended periods of standing, injury and pain to the area can result.
The most common cause of arch pain is a condition known as plantar fasciitis, which involves inflammation of the plantar fascia connective tissue along the arch of the foot. This usually follows excessive stress to the area from activities such as extended periods of time spent on feet at work or after sporting activities.
Injury to the arch of the foot due to direct force trauma can also result in pain and inflammation. Causes may include:
- Ligament sprains
- Muscle strains
- Biomechanical misalignment
- Fractures due to mechanical stress
- Muscle overuse
- Inflammatory arthritis
Activities that are most likely to cause damage to the foot arch include those that involve a significant amount or extended period of stress to the feet. This includes intense sport activities, long distance running and simply standing on the feet all day in a workplace environment.
Some deformities of the foot, such as hammertoe or clubfoot, may also cause arch pain. Additionally, people with abnormal arches of the feet are more likely to be affected by arch pain, including both people with flat feet and those with high arches.
Sudden Weight Changes
Drastic changes in weight that occur over a short period of time can be responsible for causing stress to the arch of the foot and result in arch pain.
In particular, people who are obese, have Type 2 diabetes mellitus, or are pregnant are more likely to be affected by excess stress on their feet and report symptoms of arch pain.
Inappropriate footwear that is ill-fitting or does not provide adequate support to the foot may lead to pain and inflammation in the arch of the foot. The sole of the shoe is of particular importance, and shoes with poor arch support or soft soles are the most likely to cause problems.
The structure of the foot is very complex and, for this reason, an individual who is experiencing arch pain should be referred to a podiatrist, who will make the relevant investigations into the cause of the condition.
This usually begins with a physical examination of the foot and a consultation about the patient's medical history and any recent events that may have caused the pain. There is often a lump or bruise in the arch of the foot that is evidence of damage to the connective tissue.
Other tests that may be used to determine the cause of the arch pain include X-ray imaging, magnetic resonance imaging (MRI) or computed tomography (CT) scans. |
Hearing tests are an important assessment to evaluate your current level of hearing. This helps decide if any treatment, such as the use of hearing aids, would be recommended to improve the quality of hearing.
Hearing tests are carried out in a soundproof environment and are used to determine how you interpret different sounds. This can help ensure you get the treatment and care required for your specific circumstances. As discussed frequently on this hearing blog, there are many different reasons someone may experience hearing loss so it’s important you understand why your hearing is impaired.
Hearing tests and assessments can be used by everyone, and as we get older, it’s recommended to have them more frequently. A big part of hearing tests is the use of audiograms. But what exactly are audiograms and why are they important?
What are Audiograms?
An audiogram is a graph that details how you interpret sound. This helps show how you hear different frequencies at different volumes. This can help translate what that means for your day-to-day hearing. A qualified health professional may use various hearing assessments to test your hearing and often plot the results as an audiogram.
The premise of an audiogram is to show how well you can hear different sound frequencies. Health professionals use the term “hearing threshold” to describe the softest level at which a sound frequency becomes audible. A hearing threshold of between 0 and 20 dB is considered normal. This means an audiogram that shows hearing thresholds above this range may indicate a degree of hearing impairment.
Based on what the audiogram looks like, it can help diagnose the extent of your hearing impairment and help illustrate how significant it is.
When we offer hearing assessments here at Hearing Solutions UK, we ensure we go through the audiogram with our patients in as much detail as possible, so they fully understand what the results mean. For the best management of hearing impairment, or any other health condition, it’s always best if the patient fully understands and is engaged with what the results mean and the subsequent course of treatment. If you’d like to learn more about our hearing assessments, get in touch today and one of our friendly staff will help you get started.
Interpreting Audiogram Results
The frequency of a sound is expressed as cycles per second, also known as Hertz. This relates to the “pitch” of a sound. This is why different noises and sounds are interpreted by us differently, as they have different pitches.
Low frequency sounds include things like thunder, a rumble or a deep voice. High frequency sounds include things like a whistle, a squeak or a bird’s call.
Each frequency is tested at varying volumes, known as decibels (dB). Normal hearing is when the softest sounds you can hear (just before you can’t hear the sound anymore) are between 0-20dB. Mild hearing loss is between 21-40dB. This type of hearing loss may mean you struggle to follow conversations in loud environments. 41-70dB is associated with moderate hearing loss. This may mean you struggle to interpret speech. Between 71-95dB is considered severe hearing loss and is likely to mean you can’t hear speech, even in quiet surroundings. Anything over 95dB is regarded as profound hearing loss and means you are unlikely to hear most sounds, unless they are very loud.
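As an illustration of how these bands work, the classification above can be written as a small function. This is only a sketch using the thresholds quoted in this post, not a clinical tool:

```python
def classify_hearing(threshold_db):
    """Map a hearing threshold in dB to the severity bands described above."""
    if threshold_db <= 20:
        return "normal"
    elif threshold_db <= 40:
        return "mild"      # may struggle to follow conversations in loud environments
    elif threshold_db <= 70:
        return "moderate"  # may struggle to interpret speech
    elif threshold_db <= 95:
        return "severe"    # likely can't hear speech, even in quiet surroundings
    else:
        return "profound"  # unlikely to hear most sounds unless very loud

# Example: (hypothetical) thresholds measured at several frequencies for one ear
audiogram = {250: 15, 500: 20, 1000: 30, 2000: 45, 4000: 80}
for freq_hz, threshold in audiogram.items():
    print(freq_hz, "Hz:", classify_hearing(threshold))
```

A real audiogram plots these same threshold values per frequency rather than just labelling them, but the banding logic is the same.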
What an Audiogram Looks Like
An audiogram usually plots the volume on the vertical axis, which is measured in decibels. The loudest sounds are at the bottom and the softest near the top. This means those with good hearing will have an audiogram with a line plotted closer to the top of the graph, compared to someone with hearing impairment, who may have a line plotted closer to the middle or bottom of the graph.
The horizontal line represents the sound frequency, or pitch, which is measured in Hertz (Hz). Usually low pitch sounds are on the left, and high pitch sounds on the right. Hearing impairment may not impact all frequency sounds, which is why plotting the horizontal line is so important in hearing assessments. It may be a case that your hearing loss impacts high or low frequency sounds more. An audiogram is great at being able to visualise and illustrate this to help understand how your hearing will relate to everyday activities and sounds.
Both ears are usually tested separately and if plotted on the same graph, red is usually used for right ear, and blue for the left ear.
Importance of Hearing Tests and Audiograms
Having a clear presentation of how you responded to different sound frequencies at different volumes is the best way to understand the degree of any hearing impairment. Often hearing aids can work wonders at helping amplify and improve hearing. It may be that you’re currently getting by without enjoying the full spectrum of sound and are not using a simple solution like a hearing aid to improve your hearing.
Our blog on cookie bite hearing loss is a great example of how an audiogram can help illustrate a specific type of hearing impairment, which may otherwise be hard to interpret on your own. Gradual hearing loss is also something that can be hard for patients to recognise as the impairment happens so gradually, but can result in significant impairment of hearing.
Hearing Assessments with Hearing Solutions UK
If you think you need a hearing assessment, please don’t hesitate to get in touch with us here at Hearing Solutions UK. We offer the very best in hearing assessments and hearing aids, to help anyone suffering from hearing impairment get the care and treatment they need. We have hearing devices supplied in a variety of shapes and styles, from those completely invisible in the ear canal to radical new hearing bands that are more akin to the latest music headphones.
Book now for a free consultation or download our brochure to learn more. |
Presentation on theme: "SPH4U: Lecture 7 Today’s Agenda"— Presentation transcript:
1SPH4U: Lecture 7 Today’s Agenda. Friction: What is it? Systematic categories of forces. How do we characterize it? Model of friction. Static & kinetic friction (kinetic = dynamic in some texts). Some problems involving friction
2New Topic: Friction. What does it do? It opposes relative motion of two objects that touch! How do we characterize this in terms we have learned (forces)? Friction results in a force in the direction opposite to the direction of relative motion (kinetic friction; static friction opposes impending motion). (Diagram: applied force F, friction force f, normal force N, weight mg, with some roughness at the contact surface.)
3Surface Friction...Friction is caused by the “microscopic” interactions between the two surfaces:
4Surface Friction... Force of friction acts to oppose relative motion: parallel to the surface, perpendicular to the normal force. (Diagram: N, F, f, mg.)
5Model for Sliding (kinetic) Friction. The direction of the frictional force vector is perpendicular to the normal force vector N. The magnitude of the frictional force vector |fF| is proportional to the magnitude of the normal force |N|: |fF| = μK|N| ( = μK|mg| in the previous example). The “heavier” something is, the greater the friction will be... makes sense! The constant μK is called the “coefficient of kinetic friction.” These relations are all useful APPROXIMATIONS to messy reality.
6Model... Dynamics: i: F − μKN = ma; j: N = mg; so F − μKmg = ma (this works as long as F is bigger than friction, i.e. the left hand side is positive).
7Lecture 7, Act 1 Forces and Motion. A box of mass m1 = 1.5 kg is being pulled by a horizontal string having tension T = 90 N. It slides with friction (μk = 0.51) on top of a second box having mass m2 = 3 kg, which in turn slides on a frictionless floor. (T is bigger than Ffriction, too.) What is the acceleration of the second box? (a) a = 0 m/s2 (b) a = 2.5 m/s2 (c) a = 3.0 m/s2. Hint: draw FBDs of both blocks – that’s 2 diagrams. (Diagram: m1 slides with friction on m2; m2 slides without friction on the floor; a = ?)
8Lecture 7, Act 1 Solution. First draw the FBD of the top box: N1, T, m1g, and friction f = μKN1 = μKm1g.
9Lecture 7, Act 1 Solution. Newton’s 3rd law says the force box 2 exerts on box 1 is equal and opposite to the force box 1 exerts on box 2. As we just saw, this force is due to friction: f1,2 = f2,1 = μKm1g.
10Lecture 7, Act 1 Solution. Now consider the FBD of box 2: N2 (contact from the floor), f2,1 = μkm1g (friction from box 1), m1g (contact from box 1), and m2g (gravity).
11Lecture 7, Act 1 Solution. Finally, solve F = ma in the horizontal direction: f2,1 = μKm1g = m2a, so a = 2.5 m/s2.
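As a quick numerical check of Act 1 (a sketch, not part of the original slides), the result a = μK m1 g / m2 can be evaluated with the values given in the problem:

```python
g = 9.81      # m/s^2, gravitational acceleration
mu_k = 0.51   # coefficient of kinetic friction between the boxes
m1 = 1.5      # kg, mass of the top box
m2 = 3.0      # kg, mass of the bottom box

# Friction from the top box is the only horizontal force on the bottom box:
# mu_k * m1 * g = m2 * a
a = mu_k * m1 * g / m2
print(round(a, 1), "m/s^2")  # 2.5 m/s^2, answer (b)
```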
12Inclined Plane with Friction: Draw the free-body diagram. (Diagram: N, friction μKN up the incline, weight mg, acceleration ma down the incline.)
13Inclined plane... Consider i and j components of FNET = ma: i: mg sin θ − μKN = ma; j: N = mg cos θ. So mg sin θ − μKmg cos θ = ma, giving a/g = sin θ − μK cos θ.
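The result a/g = sin θ − μK cos θ can be turned into a small helper; the 30° angle and μK = 0.3 below are illustrative values, not numbers from the lecture:

```python
import math

def incline_acceleration(theta_deg, mu_k, g=9.81):
    """Acceleration down a rough incline: a = g * (sin(theta) - mu_k * cos(theta))."""
    theta = math.radians(theta_deg)
    return g * (math.sin(theta) - mu_k * math.cos(theta))

# Illustrative values: a 30 degree incline with mu_k = 0.3
print(round(incline_acceleration(30, 0.3), 2))  # 2.36 m/s^2
```

A negative result would mean static friction can hold the block in place, so it does not accelerate down the slope at all.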
14Static Friction... So far we have considered friction acting when the two surfaces move relative to each other, i.e. when they slide. We also know that it acts when they move together: the “static” case. In these cases, the force provided by friction will depend on the OTHER forces on the parts of the system. (Diagram: N, F, fF, mg.)
15Static Friction… (with one surface stationary) Just like in the sliding case except a = 0. i: F − fF = 0; j: N = mg. While the block is static: fF = F.
16Static Friction… The maximum possible force that the friction between two objects can provide is fMAX = μSN, where μS is the “coefficient of static friction.” So fF ≤ μSN. As one increases F, fF gets bigger until fF = μSN and the object starts to move. If an object doesn’t move, it’s static friction; if an object does move, it’s dynamic friction.
17Static Friction... μS is discovered by increasing F until the block starts to slide: i: FMAX − μSN = 0; j: N = mg; so μS = FMAX / mg.
18Lecture 7, Act 2 Forces and Motion. A box of mass m = 10.21 kg is at rest on a floor. The coefficient of static friction between the floor and the box is μs = 0.4. A rope is attached to the box and pulled at an angle of θ = 30° above horizontal with tension T = 40 N. Does the box move? (a) yes (b) no (c) too close to call
19Lecture 7, Act 2 Solution. Pick axes & draw the FBD of the box. Apply FNET = ma. y: N + T sin θ − mg = may = 0, so N = mg − T sin θ = 80 N. x: T cos θ − fFR = max. The box will move if T cos θ − fFR > 0.
20Lecture 7, Act 2 Solution. y: N = 80 N. x: T cos θ − fFR = max; the box will move if T cos θ − fFR > 0. T cos θ = 34.6 N; fMAX = μsN = (0.4)(80 N) = 32 N. So T cos θ > fMAX and the box does move. Now use dynamic friction: max = T cos θ − μKN.
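Act 2's steps (normal force from vertical equilibrium, then the horizontal pull versus the maximum static friction) can also be checked numerically; this is just a sketch of the slide's arithmetic:

```python
import math

g = 9.81       # m/s^2
m = 10.21      # kg, mass of the box
mu_s = 0.4     # coefficient of static friction
T = 40.0       # N, rope tension
theta = math.radians(30)

N = m * g - T * math.sin(theta)  # vertical equilibrium: N + T*sin(theta) = m*g
pull = T * math.cos(theta)       # horizontal component of the rope tension
f_max = mu_s * N                 # maximum static friction available

print(round(N, 1), round(pull, 1), round(f_max, 1))      # about 80.2, 34.6, 32.1
print("box moves" if pull > f_max else "box stays put")  # box moves
```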
21Static Friction: We can also consider μS on an inclined plane. In this case, the force provided by friction will depend on the angle of the plane.
22Static Friction... The force provided by friction, fF, depends on θ. With ma = 0 (the block is not moving), Newton’s 2nd law along the x-axis gives fF = mg sin θ.
23Static Friction... We can find μS by increasing the ramp angle until the block slides. In this case, when it starts to slide: fF = μSN = μSmg cos θM, and mg sin θM = μSmg cos θM, so μS = tan θM.
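The ramp-angle method inverts to μS = tan θM; as a one-line sketch:

```python
import math

def mu_s_from_slip_angle(theta_m_deg):
    """Coefficient of static friction from the angle at which the block starts to slide."""
    return math.tan(math.radians(theta_m_deg))

print(round(mu_s_from_slip_angle(45), 2))  # 1.0: a block that slips at 45 degrees has mu_s = 1
```

Note the mass cancels out, which is why this tilting-table measurement works for any block.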
24Additional comments on Friction: Since fF = μN, kinetic friction “does not” depend on the area of the surfaces in contact. (This is a surprisingly good rule of thumb, but not an exact relation. Do you see why??) By definition, it must be true that μS ≥ μK for any system (think about it...).
25Model for Surface Friction. The direction of the frictional force vector fF is perpendicular to the normal force vector N, in the direction opposing relative motion of the two surfaces. Kinetic (sliding): the magnitude of the frictional force vector is proportional to the magnitude of the normal force N: fF = μKN. It moves, but it heats up the surface it moves on! Static: the frictional force balances the net applied forces such that the object doesn’t move. The maximum possible static frictional force is proportional to N: fF ≤ μSN, and as long as this is true, fF = fA in the opposite direction. It doesn’t move!
26Aside: Graph of frictional force vs applied force: fF = FA along the static branch until fF reaches μSN, after which the object slides and fF = μKN.
27Problem: Box on Truck. A box with mass m sits in the back of a truck. The coefficient of static friction between the box and the truck is μS. What is the maximum acceleration a that the truck can have without the box slipping?
28Problem: Box on Truck. Draw the free body diagram for the box: N, mg, and fF = μSN. Consider the case where fF is max... (i.e. if the acceleration were any larger, the box would slip).
29Problem: Box on Truck. Use FNET = ma for both i and j components. i: μSN = maMAX; j: N = mg; so aMAX = μSg.
30Lecture 7, Act 3 Forces and Motion. An inclined plane is accelerating with constant acceleration a. A box resting on the plane is held in place by static friction. What is the direction of the static frictional force? (a) (b) (c) (three directions of Ff shown in the figure)
31Lecture 7, Act 3 Solution. First consider the case where the inclined plane is not accelerating: N, Ff and mg all add up to zero!
32Lecture 7, Act 3 Solution. If the inclined plane is accelerating, the normal force decreases and the frictional force increases, but the frictional force still points along the plane. All the forces add up to ma! F = ma. The answer is (a).
33Putting on the brakes. Anti-lock brakes work by making sure the wheels roll without slipping. This maximizes the frictional force slowing the car since μS > μK.
A ship moving through shallow water experiences pronounced effects from the proximity of the nearby bottom. Similarly, a ship in a channel will be affected by the proximity of the sides of the channel. These effects can easily cause errors in piloting which lead to grounding. The effects are known as squat, bank cushion, and bank suction. They are more fully explained in texts on ship handling, but certain navigational aspects are discussed below.
Squat is caused by the interaction of the hull of the ship, the bottom, and the water between. As a ship moves through shallow water, some of the water it displaces rushes under the vessel to rise again at the stern. This causes a venturi effect, decreasing upward pressure on the hull. Squat makes the ship sink deeper in the water than normal and slows the vessel. The faster the ship moves through shallow water, the greater is this effect; groundings on both charted and uncharted shoals and rocks have occurred because of this phenomenon, when at reduced speed the ship could have safely cleared the dangers. When navigating in shallow water, the navigator must reduce speed to avoid squat. If bow and stern waves nearly perpendicular to the direction of travel are noticed, and the vessel slows with no change in shaft speed, squat is occurring. Immediately slow the ship to counter it. Squatting occurs in deep water also, but is more pronounced and dangerous in shoal water. The large waves generated by a squatting ship also endanger shore facilities and other craft.
Bank cushion is the effect on a ship approaching a steep underwater bank at an oblique angle. As water is forced into the narrowing gap between the ship's bow and the shore, it tends to rise or pile up on the landward side, causing the ship to sheer away from the bank.
Bank suction occurs at the stern of a ship in a narrow channel. Water rushing past the ship on the landward side exerts less force than water on the opposite or open water side. This effect can actually be seen as a difference in draft readings from one side of the vessel to the other, and is similar to the venturi effect seen in squat. The stern of the ship is forced toward the bank. If the ship gets too close to the bank, it can be forced sideways into it. The same effect occurs between two vessels passing close to each other.
These effects increase as speed increases. Therefore, in shallow water and narrow channels, navigators should decrease speed to minimize these effects. Skilled pilots may use these effects to advantage in particular situations, but the average mariner's best choice is slow speed and careful attention to piloting.
This essay has been submitted by a student. This is not an example of the work written by our professional essay writers.
Accounting is the information system that measures business financial activities, processes that information into reports, and communicates the results to decision makers. Accounting is therefore called "the language of business". Individuals, businesses, investors, creditors, government regulatory agencies, taxing authorities, nonprofit organizations and other users (employees, labour unions, consumer groups and the news media) are some of the people and organizations who use accounting to make decisions. Users of accounting information can be categorized as external users or internal users. This distinction allows us to classify accounting into two fields: financial accounting and management accounting.
Financial accounting provides information to people outside the firm. Some examples of external users are creditors and outside investors who are not part of the day-to-day management of the company. Government agencies and the general public are also external users of a firm's accounting information. Financial accounting provides financial statements based on GAAP, or Generally Accepted Accounting Principles. Financial accounting reports what happened in the past.
Management accounting generates information for internal decision makers, such as top executives, department heads, college deans and hospital administrators. Management accounting measures and reports financial and non-financial information that helps managers fulfil the goals of an organization. Managers use management accounting information to choose, communicate and implement strategy, and to coordinate product design, production and marketing decisions. Management accounting is future oriented.
Cost accounting is a central element of managerial accounting. Cost accounting establishes the budgeted and actual cost of operations, processes, departments or products, and the analysis of variances, profitability or social use of funds. Managers use cost accounting to support decision-making, to cut a company's costs and to improve profitability. As a form of management accounting, cost accounting need not follow standards such as GAAP, because its primary use is for internal managers rather than outside users. Cost accounting provides information for both management accounting and financial accounting.
Cost is defined as a resource sacrificed or foregone to achieve a specific objective. A cost driver is a factor that causes the amount of cost incurred to change. Actual cost is a cost that has occurred. Budgeted cost is a predicted cost. Costs can be classified in the following ways:
Product costs are: direct materials, direct labor and indirect manufacturing costs - factory costs that are not traceable to the product, also known as manufacturing overhead costs or factory overhead costs.
Direct and indirect costs - A direct cost is a cost that can be conveniently and economically traced to the cost object, e.g. parts or assembly-line wages. Indirect costs are costs that cannot be conveniently or economically traced to a cost object. Instead of being traced, these costs are allocated to a cost object in a rational and systematic manner, e.g. electricity, rent, property taxes.
Fixed, variable and semi-variable costs - Fixed costs remain unchanged in total regardless of changes in the related level of activity or volume, e.g. amortization, insurance. Variable costs change in total in proportion to changes in the related level of activity or volume, e.g. materials (parts) or fuel for a trucking company.
Companies strive to control costs by doing only value-added activities and by efficiently managing the use of cost drivers in those value-added activities. In this paper we are going to discuss how Starbucks Corporation, an international coffee and coffeehouse chain, is adopting the method of cost-volume-profit analysis to control costs and improve profitability.
Marginal Costing/Cost-Volume-Profit Analysis: (Leslie G. Eldenburg, Susan Wolcott, 2003, Cost Management: Measuring, Monitoring, and Motivating Performance, John Wiley & Sons Inc)
Cost-volume-profit (CVP) analysis is a technique that examines changes in profits in response to changes in sales volumes, costs, and prices. Accountants often perform CVP analysis to help managers and decision makers plan their operations. Decision makers utilize the details of the CVP analysis to decide on the following:
The products or services the company should emphasize
The volume of sales needed that would help achieve the targeted level of profit
The amount of revenue the company should generate in order to avoid losses.
To determine if the company should increase expenditure on fixed costs
To determine the amount of money the company should budget for discretionary expenditures
To find out if fixed costs expose the organization to an unacceptable level of risk.
CVP analysis begins with the basic profit equation.
Profit = Total revenue-Total costs
Separating costs into variable and fixed categories, we express profit as:
Profit=Total revenue-Total variable costs-Total fixed costs
The contribution margin is total revenue minus total variable costs. Similarly, the contribution margin per unit is the selling price per unit minus the variable cost per unit. Both contribution margin and contribution margin per unit are valuable tools when considering the effects of volume on profit. Contribution margin per unit tells us how much revenue from each unit sold can be applied toward fixed costs. Once enough units have been sold to cover all fixed costs, then the contribution margin per unit from all remaining sales becomes profit.
If we assume that the selling price and variable cost per unit are constant, we can rewrite the profit equation in terms of the contribution margin per unit.
Profit = P*Q - V*Q -F = (P-V)*Q - F
where P = Selling price per unit
V = Variable cost per unit
(P-V) = Contribution margin per unit
Q = Quantity of products sold (units of goods or services)
F = Total fixed costs
CVP analysis can be performed using either units or revenues (in dollars).
CVP Analysis in Units
Assuming that fixed costs remain constant, we solve for the expected quantity of goods or services that must be sold to achieve a target level of profit.
Profit = (P-V) * Q -F
Solving for Q: Q = (F + Profit) / (P − V)
where Q is the quantity (units) required to obtain the target profit, and the denominator (P − V) is the contribution margin per unit.
CVP Analysis in Revenues
The contribution margin ratio (CMR) is the percent by which the selling price (or revenue) per unit exceeds the variable cost per unit, or contribution margin as a percent of revenue. For a single product, it is
CMR = (P − V) / P
To analyze CVP in terms of total revenue instead of units, we substitute the contribution margin ratio for the contribution margin per unit. We rewrite the equation to solve for the total dollar amount of revenue we need to cover fixed costs and achieve our target profit as
Revenue = (F + Profit) / CMR
A CVP analysis can be used to determine the breakeven point, or level of operating activity at which
revenues cover all fixed and variable costs, resulting in zero profit. We can calculate the breakeven point from any of the preceding CVP formulas, setting profit to zero. Depending on which formula we use, we calculate the breakeven point in either number of units or in total revenues
Breakeven using quantity: Q = (F + 0) / (P − V)
Breakeven using revenue: Revenue = (F + 0) / CMR
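The target-profit and breakeven formulas above can be collected into a short sketch. The price, variable cost, and fixed cost figures below are made up for illustration (they are not Starbucks data):

```python
def target_quantity(fixed, profit, price, var_cost):
    """Units needed for a target profit: Q = (F + Profit) / (P - V)."""
    return (fixed + profit) / (price - var_cost)

def target_revenue(fixed, profit, price, var_cost):
    """Revenue needed for a target profit: Revenue = (F + Profit) / CMR."""
    cmr = (price - var_cost) / price  # contribution margin ratio
    return (fixed + profit) / cmr

# Illustrative numbers: P = $4.00 per unit, V = $1.50, F = $10,000 per month
print(target_quantity(10_000, 0, 4.00, 1.50))      # breakeven: 4000.0 units
print(target_revenue(10_000, 0, 4.00, 1.50))       # breakeven: 16000.0 dollars
print(target_quantity(10_000, 5_000, 4.00, 1.50))  # 6000.0 units for a $5,000 target profit
```

Setting profit to zero recovers the breakeven point, which is why the breakeven formulas above are just the target-profit formulas with Profit = 0.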
CVP Sensitivity Analysis
Cost-volume-profit analysis examines the behaviour of total revenues, total costs, and operating income as changes occur in the output level, selling price, variable costs or fixed costs. Sensitivity analysis helps managers make decisions and cope with uncertainty.
PROBLEM OF THE RESEARCH
(Charles T. Horngren, Srikant M. Datar, George Foster, 2007. Cost Accounting: A Managerial Emphasis. Prentice Hall)
Cost-Volume-Profit analysis makes the following assumptions:
Changes in the number of product or service units produced and sold affect the level of revenues.
Total costs can be divided into a fixed and a variable component with respect to the level of output
The analysis either covers a single product or assumes that the sales mix, when multiple products are sold, will remain constant as the level of total units sold changes.
The unit selling price, unit variable costs, and fixed costs are known and constant.
Revenues and costs can be added and compared without taking into account the time value of money.
(Robert Kee, 2007. Cost-volume-profit analysis incorporating the cost of capital. Journal of Managerial Issues)
Cost-volume-profit analysis is used widely, but faces criticism for its use of the following assumptions:
CVP assumes deterministic and linear cost and revenue functions.
CVP focuses on a single product and uses single-period analysis. Firms across a variety of industries have found the simple CVP model to be helpful in both strategic and long-run planning decisions. However, in situations where revenue and cost are not adequately represented by the simplifying assumptions of CVP analysis, managers should consider more sophisticated approaches to financial analysis. (Charles T. Horngren, George Foster, and Srikant M. Datar. 1994. Cost Accounting: A Managerial Emphasis. Prentice-Hall.)
An implicit assumption, and one that is frequently overlooked in evaluating the use of CVP analysis, involves its treatment of the cost of capital. CVP analysis, like other managerial accounting techniques and models, uses accounting profitability as the primary decision criterion for evaluating resource allocation decisions. CVP analysis, like other managerial accounting techniques, ignores the cost of capital and treats it as if it were zero. However, the opportunity cost of the funds invested in the assets used to manufacture a product is a cost the same as the cost of operating resources, such as direct material, labor, and overhead. The failure of CVP analysis to incorporate the cost of capital into a product's cost function can lead to underestimating a product's cost, while overstating its profitability. For products that require a significant investment of capital, ignoring the opportunity cost of invested funds may lead to accepting products whose rate of return is less than the firm's cost of capital. In effect, traditional CVP analysis encourages managers to select products that destroy, rather than create, economic value for the firm. Finally, using an accounting measure of profitability creates a bias to employ capital relative to operating resources because the cost of capital is not reflected in a product's cost like those of operational resources. Therefore, product designers and developers may employ investment funds beyond the point where the marginal benefit of the last dollar of capital used is equal to its marginal cost.
IMPORTANCE AND OBJECTIVES OF THE RESEARCH:
(Flora Guidry, James O. Horrigan, Cathy Craycraft, 1998. CVP analysis: a new look. Journal of Managerial Issues)
Cost-volume-profit (CVP) analysis is widely used and is one of the simplest analytical tools in management accounting. CVP allows managers to examine a wide range of strategic decisions such as pricing policies, product mixes, market expansions or contractions, outsourcing contracts and other planning decisions. Critics of CVP argue that it is too simplistic: the real world of managerial affairs is complicated. CVP analysis does not consider the impact of strategic decisions on the wealth of firms, nor does it consider the effect of those decisions on firms' asset structures and risk levels. Those considerations are important because virtually all CVP analyses deal with decisions that alter the asset and cost structures of firms, which means that the risk levels and costs of capital of those firms will also change because of those decisions. These missing elements in CVP analysis can be filled in with a small number of additional variables. The wealth effects can be included by analyzing the cost of capital of the assets necessary to carry out a decision. The risk level imposed by a decision can be incorporated by considering the degree of operating risk or the systematic risk level as reflected by an accounting beta risk variable. The cost of capital itself can be estimated through an analysis of the revenue patterns and the asset structures involved in a decision. In general, through the use of information that would usually be available in a CVP analysis, the full impact of a strategic decision can be assessed.
This section should contain a rationale for my research. I will ask questions such as: why am I undertaking the project? Why is the research needed? I need to show how my work will build on and add to the existing knowledge.
Starbucks Corporation is an international coffee and coffeehouse chain which was founded in 1971 and is based in Seattle, Washington. Starbucks Coffee Company is the leading retailer, roaster and brand of specialty coffee in the world, with more than 16,000 retail locations in North America, Latin America, Europe, the Middle East and the Pacific Rim - wherever there is a demand for great coffee.
Starbucks operates in  Kuwait, KSA, UAE, Egypt, Lebanon, Jordan, Qatar, Bahrain & Oman in the Middle East region. Starbucks stores have been operating in the Middle East since 1999 through a licensing agreement with trading partner and licensee MH Alshaya WLL, a private Kuwait family business.
Starbucks has three reportable operating segments: the United States segment constituted 73%, the International segment 19%, and the Global Consumer Products Group 8% of total net revenues for fiscal year 2009. The Company's primary competitors for coffee beverage sales are quick-service restaurants and specialty coffee shops. The Company employed approximately 142,000 people worldwide as of September 27, 2009. The company believes that factors such as customers trading down to lower-priced products, unfavorable economic conditions, and a decline in the Starbucks brand name could negatively impact sales, net revenues, operating income, operating margins and earnings per share.
DATA ANALYSIS AND RESEARCH (www.forbes.com)
STARBUCKS CORP (NASDAQ: SBUX) | Income Statement
Will be provided in the next submission after completion of research. |
Mercury and Air Toxics Standards (MATS)
Continuing to improve our air quality with the new Mercury and Air Toxics Standards means the difference between being sick and being healthy - in some cases, life and death - for hundreds of thousands of people. These new standards will avert up to 11,000 premature deaths, 4,700 heart attacks and 130,000 asthma attacks every year.
- The value of the air quality improvements for people's health alone totals $37 billion to $90 billion each year. That means that for every dollar spent to reduce this pollution, Americans get $3-9 in health benefits.
- The benefits are widely distributed and are especially important to minority and low income populations who are disproportionately impacted by asthma and other debilitating health conditions.
- Up to 540,000 missed work or "sick" days will be avoided each year, enhancing productivity and lowering health care costs for American families.
Health Effect                          Cases Avoided
Hospital and emergency room visits     5,700
Restricted activity days               3,200,000
Health Impacts of Power Plant Emissions
Toxic air pollutants from fossil fuel-fired power plants cause serious health impacts. These facilities are the largest source of mercury emissions to the air. Once mercury from the air reaches water, microorganisms can change it into methylmercury, a highly toxic form that builds up in fish. People are primarily exposed to mercury by eating contaminated fish. Methylmercury exposure is a particular concern for women of childbearing age, unborn babies, and young children, because studies have linked high levels of methylmercury to damage to the developing nervous system. This damage can impair children’s ability to think and learn.
Other toxic metals such as arsenic, chromium and nickel can cause cancer. Acid gases cause lung damage and contribute to asthma, bronchitis and other chronic respiratory disease, especially in children and the elderly.
Reducing toxic power plant emissions will also cut fine particle pollution and prevent thousands of premature deaths and tens of thousands of heart attacks, bronchitis cases and asthma attacks.
Mercury and many of the other toxic pollutants also damage the environment and pollute our nation's lakes, streams, and fish. |
Economics Online Tutor
NATIONAL INCOME ACCOUNTS
GROSS DOMESTIC PRODUCT (GDP) = CONSUMPTION (C) + INVESTMENT (I) +
GOVERNMENT SPENDING (G) + NET EXPORTS (NX, OR (X-M), OR XN)
GROSS NATIONAL PRODUCT (GNP) = GDP + RECEIPTS OF FACTOR INCOME
FROM THE REST OF THE WORLD - PAYMENTS OF FACTOR INCOME TO THE
REST OF THE WORLD
NET DOMESTIC PRODUCT (NDP) = GDP - CAPITAL CONSUMPTION
NET NATIONAL PRODUCT (NNP) = GNP - CAPITAL CONSUMPTION
NATIONAL INCOME (NI) = NNP - INDIRECT BUSINESS TAXES
PERSONAL INCOME (PI) = NI - INCOME EARNED BUT NOT RECEIVED
(RETAINED CORPORATE PROFITS, OR RETAINED EARNINGS; CORPORATE
INCOME TAXES, AND SOCIAL SECURITY CONTRIBUTIONS BY FIRMS) +
INCOME RECEIVED BUT NOT EARNED (GOVERNMENT TRANSFER PAYMENTS)
DISPOSABLE INCOME (DI) OR DISPOSABLE PERSONAL INCOME (DPI) = PI -
PERSONAL INCOME TAXES
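The chain of identities above can be traced with a quick sketch; all figures below are hypothetical (in billions):

```python
# Hypothetical figures tracing the national income accounting identities.
C, I, G, NX = 11_000, 3_000, 3_500, -500
gdp = C + I + G + NX                                     # 17,000

factor_income_received = 800
factor_income_paid = 600
gnp = gdp + factor_income_received - factor_income_paid  # 17,200

capital_consumption = 2_000
ndp = gdp - capital_consumption                          # 15,000
nnp = gnp - capital_consumption                          # 15,200

indirect_business_taxes = 1_200
ni = nnp - indirect_business_taxes                       # 14,000

earned_not_received = 1_500  # retained earnings, corp. taxes, SS by firms
received_not_earned = 2_300  # government transfer payments
pi = ni - earned_not_received + received_not_earned      # 14,800

personal_income_taxes = 1_800
di = pi - personal_income_taxes                          # 13,000
print(gdp, gnp, ndp, nnp, ni, pi, di)
```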
Comparisons of the different economies of the world, as well as comparisons of the economy of one nation over different time frames, gain additional meaning (especially in the standard of living) if the national income measurements are reported on a per capita basis. Per capita means per person, and is determined by taking the measurement and dividing by the total number of people in the population.
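The per capita adjustment is just a division; a quick sketch with hypothetical figures:

```python
# Per capita simply divides a measurement by population (hypothetical figures).
gdp = 17_000_000_000_000   # $17 trillion total output
population = 330_000_000
gdp_per_capita = gdp / population
print(round(gdp_per_capita))  # prints 51515
```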
So what is the "best" measurement of the macroeconomy?
GDP is the one that is most widely used. It counts the total production within the economy, so it probably
is a better measurement for many kinds of comparisons than most of the other measurements. But NDP
is a better measurement to reflect growth: it doesn't count replacement of capital as "new" production.
GDP is more widely used than NDP because it is easier to calculate, and easier to make comparisons
between different countries that may use different accounting methods. The best measurement for
determining the standard of living would be real GDP per capita, but even that is not a perfect
measurement of the standard of living. It can tell you the changes in wealth for an "average" person in
the economy, but it will not tell you if the changes in wealth are distributed equitably. If all of the gains
go to a very small segment of the economy, a tiny fraction of the population, then a per capita
measurement can be very misleading. And it doesn't account for the fact that many people view a
"standard of living" as including things that cannot be measured in monetary terms.
Nominal and Real Values
One of the uses of national income accounts, such as GDP, is for the comparison of an economy's
performance over time. It is a measurement of economic growth. However, just looking at the value of
GDP from one period of time compared to the value of GDP from another period of time will not give an
indication of economic growth. This is because a change in the value of GDP has two components: a
change in total output, and a change in prices (or the overall price level). To measure growth, you would
need to isolate these two components, take out the price level component, and only look at the total output component.
In order to do this, economists adjust the GDP numbers by a price index to reflect the change in the
overall price levels over the relevant time frame. This means that they adjust out the price level
changes, leaving only the changes in output to account for a change in GDP.
The actual raw numbers for GDP are called nominal values. The numbers for GDP after the adjustment
for the change in the price level are called real values. The price index used for adjusting for the price
level change relating to GDP is called a GDP deflator, or GDP price index (GDPPI). The calculation is as follows:
Real GDP = Nominal GDP divided by GDPPI
In order to have a reference point for any price index, a base year is established. For the purpose of
calculations, this base year can be considered to be arbitrary (although if you have to do calculations on
a given set of numbers, and at the same time you have to decide what to use as the base year, you might
want to pick the year that makes your calculations the easiest - using the beginning year under
consideration is often easiest). For the base year, a number of 100 is assigned for a price index. This
means that the resulting calculations must be adjusted by a factor of 100. This amounts to simply moving
the decimal point over two spaces. Knowing the nominal values for different years, as well as the price
index used for each year, will allow you to calculate not only real GDP but also GDP growth and the rate of inflation.
For examples of calculations involving price indexes and finding real values from nominal values, please
refer to the section in this site that deals with the subject of inflation.
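The deflator adjustment, including the factor-of-100 correction for a base-year index of 100, can be sketched with hypothetical figures:

```python
# Hypothetical two-year comparison. Base year index = 100, so divide by the
# index and multiply by 100 (the "move the decimal two places" adjustment).
nominal = {2010: 15_000, 2015: 18_000}   # nominal GDP, billions
gdppi = {2010: 100.0, 2015: 110.0}       # 2010 chosen as the base year

real = {yr: nominal[yr] / gdppi[yr] * 100 for yr in nominal}
# real GDP: 15,000 in 2010; about 16,363.6 in 2015

growth = (real[2015] - real[2010]) / real[2010]
print(round(real[2015], 1), f"{growth:.1%}")  # 16363.6 9.1%
```

Note that nominal GDP rose 20% over the period, but once the 10% price-level increase is removed, real output grew only about 9.1%.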
You may be familiar with the concept of price indexes (or indices, my dictionary lists indexes as the
preferred plural). All of them are based on the prices over time of a constant bundle of goods
considered to be relevant for what the index is trying to measure. Besides the GDP deflator (GDPPI)
mentioned here, other price indexes in common use are:
Consumer Price Index (CPI): Measures the prices of a "typical" bundle of goods that an "average"
household purchases. Cost of living adjustments (COLAs) for people on fixed incomes, as well as many
wage rates, are tied to this measurement. This measurement is far from perfect. For one thing, the
economy is not made up of only "average" households that purchase "typical" bundles of goods.
Differences are especially noticeable between different demographic groups, such as age groups. In
addition, price changes alone can cause changes in what actual bundles of goods that consumers
purchase. For example, if some goods in the bundle increase in price while others decrease in price,
the law of demand says that consumers will tend to buy fewer of the goods with rising prices and more of
the goods with falling prices, relative to each other over time. A fixed "bundle" does not take this into
consideration, overstating the price index (and the rate of inflation).
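The fixed-bundle overstatement described above can be illustrated numerically. The prices, bundles, and substitution response below are hypothetical, and the second ratio is only a rough stand-in for a true cost-of-living comparison:

```python
# A fixed-bundle (Laspeyres-style) index vs. the cost of what consumers
# actually buy after substituting away from the good whose price rose.
base_prices = {"apples": 1.00, "oranges": 1.00}
new_prices = {"apples": 2.00, "oranges": 1.00}
base_bundle = {"apples": 10, "oranges": 10}
# After apples double in price, consumers shift toward oranges:
substituted_bundle = {"apples": 5, "oranges": 15}

def cost(prices, bundle):
    return sum(prices[good] * bundle[good] for good in bundle)

fixed_index = cost(new_prices, base_bundle) / cost(base_prices, base_bundle) * 100
actual_ratio = cost(new_prices, substituted_bundle) / cost(base_prices, base_bundle) * 100
print(fixed_index, actual_ratio)  # 150.0 125.0
```

The fixed bundle registers a 50% price increase, while consumers' actual spending rises only 25%, showing how ignoring substitution overstates the index.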
Producer Price Index (PPI): Formerly known as the wholesale price index (WPI), this measures the
prices received by producers. This is considered to be a leading economic indicator, because it
measures price changes at an earlier stage than the CPI does. If the PPI increases, it can be expected
that a CPI increase will soon follow.
This page, along with additional commentary, was posted on the "Economics Online Tutor" Facebook page's timeline on August 20, 2012.
|
19 October 1983
The Royal Swedish Academy of Sciences has decided to award the 1983 Nobel Prize for chemistry to
Professor Henry Taube, Stanford University, Stanford, USA,
for his work on the mechanisms of electron transfer reactions, especially in metal complexes.
Chemistry prize awarded to one of the most creative contemporary workers in inorganic chemistry
Chemical reactions were known to man long before chemistry had attained the status of science. It was observed that substances changed their properties under certain external conditions, which is a characteristic of chemical reactions. Thus the ancient Egyptians found that if malachite, a green ore, was fired with charcoal, a red metal was obtained, called copper. It was also found that when clay was baked, ceramic products with properties quite different from clay were obtained.
Much earlier than this, man had found that a piece of dry wood caught fire if it could be made hot enough: changes in the properties of substances occurred only under certain conditions. Temperature was from early on the factor which was varied in order to bring about changes, and it was also found at an early stage that the speed with which the changes occurred frequently depended on the temperature. With the discovery of black powder it was also noted that processes could take place very rapidly, leading to explosions. The branch of chemistry concerned with how fast chemical reactions take place is known as chemical kinetics, and the scientist engaged in explaining how they take place is said to study the mechanism of chemical reactions.
Millennia of hypotheses, experiments and observations, new hypotheses and new experiments and observations were to pass before a fairly firm scientific structure had been created. At the beginning of this century, progress had been considerable. In particular, a physical-mathematical description of the reactions had been produced, and it was possible in figures and formulas to express the conditions determining whether a chemical reaction would occur, and it was possible to provide mathematical equations for how rapidly it took place. A beginning had also been made in the treatment of reactions which did not pass completely in one direction, as opposed to those mentioned above. It was realized that chemical equilibria existed, and it was possible to deal with these theoretically. It is a characteristic of chemical equilibria that although the reacting ions or molecules are on average bound to one another, a given bond is not permanent: the bonds are always being broken down and restored. Three major types of equilibrium reactions have come to be of dominant importance in chemistry. The concepts of acid and base were combined in the acid/base reactions and the pH associated with this.
Metal ions dissolved in water may attract ions or molecules. This is known as complex formation and usually, although not always, occurs as an equilibrium reaction. Finally, the combustion of the burning piece of wood and the production of metallic copper from its ore through a reaction with charcoal have been generalized as oxidation and reduction. As a further generalization it has been found that oxidation and reduction are associated with a transfer of electrons, e.g. in metal ions such as cobalt and chromium. Under certain conditions it is possible to make cobalt with three positive charges react with chromium having two positive charges, whereby cobalt ends up with only two and chromium with three positive charges. The effect is thus that an electron having a negative charge has been transferred from the two-valent chromium to the three-valent cobalt. This is a particularly frequent phenomenon in complex compounds of metal ions. Taube has today been awarded the 1983 Nobel Prize for his studies of the mechanisms of electron transfer in metal complexes. Better than anyone else he has helped us understand how these electron transfers take place. It is particularly the structural preconditions governing electron transfers in metal complexes which he has studied. The electron transfer process as such is a separate major problem in theoretical chemistry and physics, where other scientists have contributed more than Taube.
What experiments did Henry Taube make, and what conclusions has he been able to draw? In his studies, he started from the fact that three-valent ions of cobalt and chromium do not form equilibrium complexes (an example of the exceptions already referred to). The ions or molecules which are bound to these metal ions are therefore joined to them without ever leaving them. But the corresponding two-valent ions form equilibrium complexes. If an ion or molecule bound to the three-valent ion (in this instance, three-valent cobalt) could somehow be marked, it would be possible to find experimentally whether this marked ion or molecule had, during the electron transfer, been transferred at the same time to the other metal ion (in this instance, two-valent chromium), that is, in the direction opposite to the electron. This was exactly what Taube found, and from it he drew the conclusion that before the electron transfer could take place, the ion or molecule that changed places formed a bridge between the metal ions. He proved this in a large number of cases and investigated how the electron transfer was affected by changes in the bridging molecule.
His next step was to lengthen the bridge between the metal ions (while using molecules which could bind two metal ions) and he found that in some instances there was still an electron transfer in spite of the greater distance between the metal ions. There was thus a form of what Taube calls ”distant attack”.
A logical continuation was the bonding of three-valent ions to the two ends of the bridge before reducing this complex with a two-valent ion (in this instance, europium). This reacted rapidly with one of the metal ions and Taube could then follow the slow transfer within the complex (in this case from ruthenium to cobalt) free from all assumptions on how rapidly the bridge was formed.
Finally Taube let the three-valent metal ions on either side of the bridge be identical and could then study if in reduction with an electron this was captured by one of the identical metal ions or it belonged to both, a phenomenon known as delocalization. (Delocalization generally gives rise to strong colours, such as in Prussian blue.)
This entire development was dominated both experimentally and theoretically by Taube, who according to one of the nominations has in eighteen listed instances been first with major discoveries in the entire field of chemistry. The examples selected here, which are all included in the prize award, may seem rather specialized, not to say esoteric. However, during the last ten years it has become increasingly apparent that Taube’s ideas have a considerable applicability, particularly in biochemistry. All respiration which is associated with oxygen consumption is thus also associated with electron transfers, and a growing number of scientists in this field are basing their work on Taube’s concepts of electron transfers in metal complexes.
It should be added that, as already pointed out, Taube has made major contributions throughout the chemistry of complexes. Thus he was the first to produce a complex between a three-valent metal ion, which was based on the ideas developed by Taube in his electron transfer studies.
Finally a quotation from one of Nobel Committee’s reports on Taube: ” There is no doubt that Henry Taube is one of the most creative research workers of our age in the field of coordination chemistry throughout its extent. He has for thirty years been at the leading edge of research in several fields and has had a decisive influence on developments.”
Figure: the bridge.
|
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2015 March 3
Explanation: It was late in the northern martian spring when the HiRISE camera onboard the Mars Reconnaissance Orbiter spied this local denizen. Tracking across the flat, dust-covered Amazonis Planitia in 2012, the core of this whirling dust devil is about 140 meters in diameter. Lofting dust into the thin martian atmosphere, its plume reaches about 20 kilometers above the surface. Common to this region of Mars, dust devils occur as the surface is heated by the Sun, generating warm, rising air currents that begin to rotate. Tangential wind speeds of up to 110 kilometers per hour are reported for dust devils in other HiRISE images.
Authors & editors:
Jerry Bonnell (UMCP)
NASA Official: Phillip Newman Specific rights apply.
A service of: ASD at NASA / GSFC
& Michigan Tech. U. |
NOAA Works with Indonesian Team to Protect Endangered Sea Turtles
Young leatherback sea turtle making its
way out to sea.
Indonesian monitoring team measuring leatherback sea turtle. The team counts and keeps track of how many eggs were laid, how many went to full term, how many were eaten, and how many did not hatch–information scientists then use to learn more about hatching turtles’ biggest threats in order to develop strategies to boost the hatching numbers.
Indonesian monitoring team counting leatherback sea turtle eggs.
NOAA Fisheries scientist, Dr. Manjula Tiwari, with local Indonesian monitoring team, holding a hands-on workshop about sea turtle identification.
July 2, 2012
Despite gloomy predictions for the future of leatherback sea turtles in the Pacific Ocean, an ongoing, long-standing collaboration between a NOAA Fisheries expert and local scientists in Papua, Indonesia, might be the path forward to help researchers understand how to prevent a critical nesting population from going extinct.
Leatherback sea turtles are endangered, and their populations have declined dramatically in the Pacific Ocean. The most migratory of sea turtle species, leatherbacks are known to cross entire ocean basins in search of food. While they mate in water, females lay their eggs on tropical sandy beaches, making safe beaches critical to their survival.
The last strongholds for leatherback nesting in the Pacific are two beaches—Jamursba-Medi and Wermon, located in Papua, Indonesia. These beaches have the highest number of leatherback nests in the Pacific, averaging about 1,500 annually. They also attract the highest number of egg-laying females across the Pacific Ocean, averaging around 200 every year.
This summer, researchers from the State University of Papua are teaming up with NOAA Fisheries scientist, Dr. Manjula Tiwari, to count the female leatherbacks that come to Jamursba-Medi and Wermon to nest. They are counting the number of leatherback nests and characterizing the nests and hatchlings, including how many eggs are laid, how many hatchling turtles go to full term, how many are eaten, and how many did not hatch.
The Science of Protecting Leatherbacks
Based on this detailed research, scientists identify the specific threats facing the leatherbacks hatchlings, including predation by wild pigs and dogs. The researchers then use their data and observations to develop specific strategies to protect the turtles, nests, eggs, and hatchlings with a goal of boosting the number that hatch and reach the ocean. Some prevention strategies include building fences around nests or lining the beach with electric fences to keep predators like pigs off the beach. Scientists have also used bamboo grids over the tops of nests to keep dogs away from hatchlings. However, the scientists must constantly generate new methods to protect the incubating nests because predators continue to learn new ways to reach them. In the end, fewer predators means more baby turtles can live long enough to try their luck in the ocean.
For the past 12 years, NOAA scientists have contributed to the local science infrastructure in Papua, training local scientists in virtually all aspects of sea turtle monitoring including aerial surveys, satellite telemetry, tagging, genetic sampling, and hatching success research. Strengthening Indonesia’s local capacity to address a wide range of sea turtle conservation issues and biological data collection is fundamental to the leatherback population and NOAA Fisheries commitment to leatherback recovery.
This article was developed by Sarah M. Shoffler, Manjula Tiwari, and Jeffrey Seminoff of the NOAA Fisheries Southwest Fisheries Science Center.
|
Dissociative Disorders refers to a number of different disorders. This article has information on four dissociative disorders and what symptoms are associated with each disorder. We also include treatment options for dissociative disorders.
Dissociative disorders is a category in the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR) that includes four specific disorders and a category for dissociative disorder NOS (not otherwise specified). The four disorders are called Dissociative Amnesia, Dissociative Identity Disorder, Dissociative Fugue, and Depersonalization Disorder in the DSM-IV-TR. As none of these names is illuminating, let's look at what dissociative means and how each of the separate disorders is characterized.
What Does Dissociative Mean?
Dissociation refers to the psychological coping mechanism of separating thoughts, feelings, or body sensations that create anxiety from the rest of a person's psyche. Usually experiences, good and bad, are integrated into a person's history and personality, but when something so psychologically traumatic happens that a person cannot integrate it, the person may dissociate him- or herself from the memory and feelings. This is not a conscious choice, and the dissociated material is not easily accessible to the conscious mind.
Trauma is generally assumed to be at the root of the dissociative disorders, and it is therefore unsurprising that they are often seen in combination with Post-Traumatic Stress Disorder (PTSD). The types of events that can lead to dissociative disorders include chronic emotional, physical, or sexual abuse of a child, who doesn't have more mature ways of enduring this type of trauma. Other crises, such as kidnapping or war, can also be responsible.
What Are the Four Dissociative Disorders?
Three of the four dissociative disorders have the word dissociative in their name, referring to the mechanism. The fourth has the word depersonalization, which speaks to the result of the dissociation in the person who experiences it.
• Dissociative Amnesia—This is a form of amnesia that is not caused by a physical trauma, such as a fall in which the head sustains a severe bump, and its results differ as well. In dissociative amnesia, the memories are still present but not available to conscious memory, in effect shielding the person from them. Dissociative Amnesia used to be called psychogenic amnesia. It is often accompanied by anxiety or depression.
• Dissociative Identity Disorder (DID)—Formerly called Multiple Personality Disorder, DID refers to a condition in which a person copes with unbearable stress by switching to an alternative identity, with an individual name, personality, and history. People who have DID usually also suffer from Dissociative Amnesia.
• Dissociative Fugue—This condition, which used to be known as psychogenic fugue, involves people losing their sense of identity, often spontaneously leaving their community and typical activities to go on unplanned journeys and sometimes creating new identities and lives. Alcohol and drug abuse can cause states that are similar to fugues, in which blackouts occur.
• Depersonalization Disorder—Feeling outside of oneself, observing one's situation and actions rather than experiencing them directly, is what characterizes Depersonalization Disorder. Distortion of time and the appearance of things may also occur, and a general sense of unreality may be felt. These feelings may be fleeting or recurring.
Treatment of Dissociative Disorders
Treatment usually involves psychotherapy. While medication is not useful directly, if the person is depressed or anxious, medications that treat these issues may be helpful. Cognitive and creative therapy may also help with aspects of treatment.
Related Article: Behavior Disorder Treatment >> |
For well over a year now, two washer-machine sized twin spacecraft have been orbiting the moon, carrying out NASA’s Gravity Recovery and Interior Laboratory mission, also known as GRAIL. The twin spacecraft, named Ebb and Flow, have since created a high-resolution gravity field map of the moon, revealing to scientists its internal structure and composition in unprecedented detail.
The map is the highest resolution gravity field map of any celestial object in space, and it will help researchers better understand how Earth, Mars, Mercury, and other rocky planets formed and evolved in our solar system.
“What this map tells us is that more than any other celestial body we know of, the moon wears its gravity field on its sleeve,” said GRAIL principal investigator Maria Zuber of the Massachusetts Institute of Technology in Cambridge. “When we see a notable change in the gravity field, we can sync up this change with surface topography features such as craters, rilles or mountains.”
VIDEO CREDIT: NASA/JPL-Caltech/MIT/GSFC
The map reveals tectonic structures, craters, volcanic landforms, and other features that have never been observed in such detail. Ebb and Flow have shown that the moon was pulverized far more heavily than previously thought by the leftover materials that built the solar system. Its gravity field is unlike that of any other terrestrial planet in the solar system. The bulk density of the moon’s highland crust is much lower than scientists had previously believed, which agrees with data obtained from samples returned to Earth by Apollo astronauts over 40 years ago. The lunar gravity field preserves a record of the moon’s violent history of impacts endured throughout its existence, and the map reveals fractures in the moon’s interior that extend into its deep crust, possibly even into its mantle.
“With our new crustal bulk density determination, we find that the average thickness of the moon’s crust is between 21 and 27 miles (34 and 43 kilometers), which is about 6 to 12 miles (10 to 20 kilometers) thinner than previously thought,” said GRAIL co-investigator Mark Wieczorek of the Institut de Physique du Globe de Paris. “With this crustal thickness, the bulk composition of the moon is similar to that of Earth. This supports models where the moon is derived from Earth materials that were ejected during a giant impact event early in solar system history.”
The twin GRAIL spacecraft, launched in September 2011 from Cape Canaveral Air Force Station in Florida, created the map by communicating with each other using radio signals to precisely define the distance between them as they orbit 34 miles above the lunar surface. They can measure the slightest change in distance from one another, down to a few microns—about the diameter of a red blood cell. Flying in formation at 3,600 mph in a near-polar, near-circular orbit, Ebb and Flow travel through an uneven gravity field, which causes slight changes in their speed. The change in speed by one spacecraft as a result of gravity can then be measured as a change in distance between the two vehicles by the other spacecraft. This technique is a first for any mission beyond Earth’s orbit. The Gravity Recovery and Climate Experiment (GRACE) mission has been using the same technique to map Earth’s gravity since 2002.
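The measurement concept, a tiny gravity-induced change in relative speed accumulating into a detectable change in inter-spacecraft range, can be sketched numerically. The acceleration value and duration below are illustrative, not mission data:

```python
# Sketch: a small gravity-induced acceleration perturbation on one spacecraft,
# integrated over time, produces a measurable change in inter-spacecraft range.
def range_change(accel_perturbation_m_s2, duration_s, dt=0.1):
    v = 0.0   # relative velocity perturbation (m/s)
    dr = 0.0  # accumulated range change (m)
    t = 0.0
    while t < duration_s:
        v += accel_perturbation_m_s2 * dt  # speed change from the anomaly
        dr += v * dt                       # range change between spacecraft
        t += dt
    return dr

# A 1e-9 m/s^2 anomaly acting for 10 s shifts the range by roughly
# 0.5 * a * t^2, i.e. around 5e-8 m (tens of nanometers).
print(range_change(1e-9, 10.0))
```

Even this toy integration shows why micron-level ranging is enough to resolve very small gravity anomalies along the orbit.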
“We used gradients of the gravity field in order to highlight smaller and narrower structures than could be seen in previous datasets,” said Jeff Andrews-Hanna, a GRAIL guest scientist with the Colorado School of Mines in Golden. “This data revealed a population of long, linear, gravity anomalies, with lengths of hundreds of kilometers, crisscrossing the surface. These linear gravity anomalies indicate the presence of dikes, or long, thin, vertical bodies of solidified magma in the subsurface. The dikes are among the oldest features on the moon, and understanding them will tell us about its early history.”
The primary phase of the mission is now complete, and the extended mission will continue until December 17, with the two spacecraft continuing to collect gravity science about our moon as they gradually lower their orbital altitude.
“Next time you look up and see the moon, you might want to take a second and think about our two little spacecraft flying formation, zooming from pole to pole at 3,600 mph,” says David Lehman, GRAIL project manager at NASA’s Jet Propulsion Laboratory. “They’re up there, working together, flying together, getting the data our scientists need. As far as I’m concerned, they’re putting on quite a show.” |
Viburnum plants are susceptible to an array of fungal infections that cause disease and severely damage the plant. Familiarize yourself with the infections and controls for surefire identification and management of any problems that arise. Maintain your plants through proper care for vigorous viburnums that are ready to fight and win.
Viburnums include more than 150 deciduous and evergreen species that grow to heights ranging from 2 to 30 feet, according to the Clemson University Extension. Prized for the aromatic blossoms and colorful fall foliage, this plant makes an impact in the garden. The display of fruit in oranges, reds, pinks, blues, blacks and yellows brings butterflies to your garden. Protect these plants from common diseases to prevent cosmetic harm to your garden.
Vigorous plants are more likely to resist or recover from disease than weakened viburnums. Grow your viburnum in an area with the correct light exposure; most viburnums are sun-lovers, but some prefer shade. Viburnums thrive in moist, acidic soil with a pH of 5.5 to 6.5 that is rich in organic content, according to the Clemson University Extension.
Most diseases that affect viburnums are fungal in nature. Fungal leaf spot, caused by fungi of the Cercospora, Phoma and Phyllosticta species, results in foliage damage. Fungal leaf spot thrives in warm, humid summer conditions. Powdery mildew is caused by the fungus Erysiphe sparsa, which favors warm daytime temperatures, cool nighttime temperatures and high moisture content; these conditions promote fungal germination. Powdery mildew infects new leaf growth as well as young shoots, according to the Clemson University Extension.
Fungal leaf spot disease of viburnums produces abnormal red to gray-brown spots on leaf surfaces that expand and merge into larger splotches. Leaf tissue within the spots dries out. The problem may diminish health and cause cosmetic damage, but does not usually cause severe injury. Powdery mildew disease on viburnum plants produces a white to gray powder-like fungal growth on foliage surfaces, according to the Clemson University Extension. Extreme cases result in malformed leaves.
To control viburnum leaf spot, keep leaves dry by watering at the base of the plant. Moist leaves offer an ideal environment for the proliferation of fungi. Apply a fungicide with the active ingredient chlorothalonil or mancozeb as soon as you notice a problem; apply once every two weeks until the problem improves. For powdery mildew control, consider planting a resistant variety like Viburnum burkwoodii Mohawk, advises the Clemson University Extension. Additionally, the application of horticultural oil or a fungicide with the active ingredient triadimefon offers effective control. For both diseases, remove and destroy infected plant parts to decrease infection severity. Sanitize pruning tools after each cut and from one plant to the next to prevent the spread of fungal pathogens.
Why is NH3 (Ammonia) a weak electrolyte?
An electrolyte is a solution, or more generally a medium, that contains free ions which conduct electricity. The solute in an electrolyte breaks up from its molecular form into free ions. A strong electrolyte consists of a solute that dissociates into free ions in large quantity, while a weak electrolyte releases few free ions. Examples of strong electrolytes are sodium nitrate, sodium chloride and sodium sulphate; an example of a weak electrolyte is ammonia solution.
Weak electrolytes are solutions in which the dissolved substance exists mostly as molecules rather than ions. Ammonia in water is an example of a weak electrolyte: it exists largely as molecules, and only a small fraction dissociates into ions. Because a weak electrolyte has few ions in solution, it is a poor conductor of electricity. In a weak electrolyte the ions and molecules are in equilibrium with each other: molecules dissociate into ions, and the ions recombine into molecules, so the reactants (molecular form) and products (ionic form) coexist in equilibrium. Hence there are not enough free ions to conduct electricity well. Hydrogen chloride behaves differently: it dissociates into hydrogen cations and chloride anions, and these ions do not recombine into HCl. Because the ions remain as ions, a solution of HCl has ample charge carriers to conduct electricity and acts as a strong electrolyte.
The equation given below shows the dissociation of ammonia into ions and vice versa.
NH3(aq) + H2O(l) ⇌ NH4+(aq) + OH−(aq)
This reaction proceeds in both directions. The cations and anions that form do not remain as ions to conduct electricity; most immediately recombine into ammonia and water. This is why ammonia acts as a weak electrolyte.
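The extent of this dissociation can be estimated numerically. The sketch below uses the textbook base-dissociation constant for ammonia (Kb ≈ 1.8 × 10⁻⁵ at 25 °C) and an assumed 0.1 M concentration; the function name is ours, for illustration only.

```python
import math

# Equilibrium of NH3 + H2O <=> NH4+ + OH-.
# Kb = 1.8e-5 is the textbook base constant for ammonia at 25 C.
KB = 1.8e-5

def fraction_ionized(c0: float) -> float:
    """Solve Kb = x^2 / (c0 - x) for x, the equilibrium [OH-],
    and return the fraction of NH3 molecules that dissociated."""
    # quadratic form: x^2 + Kb*x - Kb*c0 = 0
    x = (-KB + math.sqrt(KB**2 + 4 * KB * c0)) / 2
    return x / c0

# For an assumed 0.1 M ammonia solution, only about 1.3% of the
# molecules are ionized -- which is why it conducts so poorly.
print(f"{fraction_ionized(0.1) * 100:.1f}% ionized")
```

A strong electrolyte such as HCl would, by contrast, be essentially 100% ionized at this concentration.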
A reaction is a process by which species are consumed or produced, characterized by a description of how fast this happens. It contains a list of substrates, i.e. the species that are consumed when the reaction takes place, along with the number of molecules of each substrate consumed per reaction event (the stoichiometry). Correspondingly, there is a list of products with their respective stoichiometries. Reactions without substrates are possible, as are reactions without products; however, reactions with neither substrates nor products are not allowed. In addition, so-called modifiers can be specified, which are neither produced nor consumed by the reaction but which influence its speed.
The speed of the reaction is always specified by a reference to a kinetic function. The kinetics can depend on the concentrations of the substrates, products, and modifiers, on the volume of a compartment, on local or global parameters, and on the simulation time. The difference between local and global parameters is that local parameters only specify a numerical value of a kinetic parameter for one specific reaction. Global parameters can be used in several reactions.
Reactions can be reversible or irreversible. Kinetic functions for irreversible reactions should always be positive, and they should not depend on the concentrations of the products (only on the concentrations of the substrates and modifiers). While all built-in kinetic functions satisfy these conditions, they are not enforced for user-defined functions.
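As a sketch of these ideas (not the API of any particular simulator), an irreversible mass-action rate law that depends only on substrate concentrations could look like this:

```python
# Hedged sketch: an irreversible mass-action kinetic function.
# It depends only on substrate concentrations, as the text requires,
# and is non-negative whenever k and all concentrations are.

def mass_action_rate(k: float, substrates: dict[str, int],
                     conc: dict[str, float]) -> float:
    """Irreversible mass-action kinetics: v = k * prod([S_i]^n_i),
    where n_i is the stoichiometry of substrate S_i."""
    v = k
    for species, stoich in substrates.items():
        v *= conc[species] ** stoich
    return v

# Example reaction 2A + B -> C with an assumed rate constant k = 0.5
conc = {"A": 2.0, "B": 3.0}
v = mass_action_rate(0.5, {"A": 2, "B": 1}, conc)
print(v)  # 0.5 * 2^2 * 3 = 6.0
```

Here `k` plays the role of a local parameter: it specifies a numerical value for this one reaction, whereas a global parameter could be shared by several such rate functions.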
A downloadable version is attached at the bottom.
The Metric System, States and Properties of Matter Test Study Guide
Review Your Documents Beginning with Document 7
1. In the metric system, we always use _______________ instead of fractions.
2. The metric system is based on the number____________________
3. Fill in the blanks with the correct prefix to complete the metric system:
kilo ________ ________ basic unit _________ _________ milli
4. Convert 7.1cm to mm ___________________________________
5. Write the base units for Volume ___________Mass___________Length_________
6. Be able to determine the volume of a substance using the 3 methods on your metric notes.
7. Know how to use a triple beam balance and determine the mass of a substance in a container or measure out a substance.
States of Matter
1. ________________ is anything that has mass and volume (takes up space). Give examples of matter and non-matter.
2. There are ________ known states of matter: solid, liquid, gas, plasma, Bose-Einstein condensates, and fermionic condensates. Be able to give examples of the first 4.
3. ____________________ : has a definite shape and a definite volume
4. ____________________: does not have a definite shape but does have definite volume
5. ____________________: does not have a definite shape and volume
6. ____________________: has an indefinite shape, indefinite volume and ions.
7. ____________________: particles that have gained or lost electrons. Have a charge.
8.____________________&_____________________________: occur at almost absolute zero
9.____________________: no electrical resistance, will carry a current forever
10.____________________: no friction, will flow forever.
11. What is heat?__________________________________What is cold?__________________
12. What is absolute zero?_______________________________________________________
13. Be able to identify
the changes of state: ionization, deionization, evaporation, condensation,
melting, freezing, sublimation, and deposition.
14. What is the boiling point of water in Celsius________________Freezing Point__________
Properties of Matter
1. ___________________: a little ball or rounded mass; example: jelly beans
2. ___________________: a dry substance in fine dust like particles; example: flour
3. ___________________: a small grain-like particle; example: sand
4. ___________________: atoms/molecules in the solid are arranged in a pattern; ex: salt
5. ___________________: describes traits/characteristics of matter that are observed without changing the matter (ex: color, density, texture).
6. ___________________: when matter changes appearance but remains the same chemically
1. If an object is denser than water it would ______________ in water.
2. If an object is less dense than water it would _____________ in water.
3. What is the formula to find density? ____________________
4. The unit for density is _________________________ or ________________________
5. When heat increases, the density of a substance __________________
6. If the mass of a substance is 10g and its volume is 5 ml what is its density?______________
7. If you have 2 samples of the same substance, but they are different sizes are their densities different?
8. To go from liquid to gas, thermal energy (heat) must be ________________to overcome the bonds.
Review your labs. Make sure that you understand the concepts and not just the terms. Be able to explain what we did and why. |
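For the conversion and density questions above, the arithmetic can be checked with a short script. This is just a worked sketch; the numbers come straight from question 4 of the metric section and question 6 of the density section.

```python
# Worked check of the study-guide arithmetic.

def cm_to_mm(cm: float) -> float:
    """The metric system is base-10: 1 cm = 10 mm."""
    return cm * 10

def density(mass_g: float, volume_ml: float) -> float:
    """Density formula: D = m / V, in g/mL (equivalently g/cm^3)."""
    return mass_g / volume_ml

print(cm_to_mm(7.1))   # 71.0 mm
print(density(10, 5))  # 2.0 g/mL -- denser than water (1 g/mL), so it sinks
```

Note that two samples of the same substance at different sizes have the same density, because mass and volume scale together.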
This is no figment of a scientist's imagination, and research revealed that when scientists wrapped spider silk in carbon nanotubes, they got eco-friendly wires.
And what is more, the wires could also conduct electricity when they were bent. Researchers adhered the carbon nanotubes to the spider silk with a drop of water, which led to the creation of wires from the materials.
"It turns out that this high-grade, remarkable material has many functions," said Eden Steven, one of the researchers. "It can be used as a humidity sensor, a strain sensor, an actuator (a device that acts as an artificial muscle, for lifting weights and more) and as an electrical wire."
"Understanding the compatibility between spider silk and conducting materials is essential to advance the use of spider silk in electronic applications. Spider silk is tough, but becomes soft when exposed to water. The nanotubes adhere uniformly and bond to the silk fiber surface to produce tough, custom-shaped, flexible and electrically conducting fibers after drying and contraction," the researchers said.
Evidence for the existence of sharks extends back over 450–420 million years, into the Ordovician period, before land vertebrates existed and before many plants had colonized the continents. All that has been recovered from the first sharks are some scales. The oldest shark teeth are from 400 million years ago. The first sharks looked very different from modern sharks. The majority of the modern sharks can be traced back to around 100 million years ago.
Contrary to popular belief, sharks have not remained unchanged for 300 million years. However, many of the families we have today have been in existence for perhaps the last 150 million years.
Mostly only the fossilized teeth of sharks are found, although often in large numbers. In some cases pieces of the internal skeleton or even complete fossilized sharks have been discovered. Estimates suggest that over a span of a few years a shark may grow tens of thousands of teeth, which explains the abundance of fossils. As the teeth consist of calcium phosphate, an apatite, they are easily fossilized.
Instead of bones, sharks have cartilaginous skeletons, with a bone-like layer broken up into thousands of isolated apatite prisms. When a shark dies, the decomposing skeleton breaks up and the apatite prisms scatter. Complete shark skeletons are only preserved when rapid burial in bottom sediments occurs.
The snout of a Devonian shark was typically short and rounded, and the jaws were longish and located at the front of the head. In modern sharks, the snout is typically longish and pointed, the jaws shorter and located underneath the head. Long jaws are structurally weaker than short ones and less able to produce a powerful bite, so early sharks may have plucked prey from the bottom or 'on the fin' with forceps-like delicacy.
Early sharks' upper jaws were fixed to the braincase at both the front and the back (the so-called 'amphistylic' form of jaw suspension), unlike most modern sharks in which the upper jaw is fixed to the braincase at the back only ('hyostylic' jaw suspension). As a result, ancient sharks may have been less able to protrude their jaws than modern sharks, reducing their ability to suck prey into their mouths and restricting the size of their food.
The braincase and olfactory capsules (which house the scent organs) of ancient sharks were relatively small, suggesting that they had a smaller brain and a less well-developed sense of smell than their modern descendants. Smaller brain size may also indicate that their other senses were less acute, predatory behavior less flexible, and social dynamics less sophisticated than in most modern sharks (especially the whalers and hammerheads).
The teeth of the earliest sharks were smooth-edged and multi-cusped, with a large central blade flanked by two or more smaller cusplets on either side (a tooth type termed cladodont, meaning 'branch-toothed'). Although some of the more conservative modern sharks (such as the six- and seven-gills, nurse sharks and smoothhounds) have multi-cusped teeth, the most recent forms (such as whalers, hammerheads, and the white shark) typically have single-cusped teeth often with serrations. Cladodont teeth are best suited to grasping prey that can be swallowed whole; whereas the sharp-edged or serrated single-cusped teeth of modern sharks opens new dietary options, enabling them to gouge pieces from food items too large to be swallowed whole.
The pectoral fins of ancient sharks were triangular and rigid with broad bases. In contrast, most modern sharks have falcate, highly flexible pectoral fins with narrow bases. Therefore, the fins of ancient sharks were probably somewhat less maneuverable than those of modern sharks, making them less agile.
The backbone of ancient sharks was composed of many, relatively simple vertebrae which were uncalcified and did not constrict the spinal column. The backbone of most modern sharks contains fewer, complexly sculpted vertebrae which have calcified bands and constrict the spinal column at regular intervals. (Exceptions include the squaloid dogfishes and the six- and seven-gilled sharks, most of which inhabit very deep waters. It is not clear whether this is due to retention of primitive characteristics or a secondary adaptation to their nutrient-poor deep-sea environment.) The poorly calcified backbone of ancient sharks may have been less able to withstand the forces generated by the flank muscles, making them less powerful swimmers than most of their modern descendants.
Yet in many respects, ancient sharks were very similar to modern sharks. Like the sharks of today, ancient sharks had a cartilaginous skeleton, replaceable teeth, tooth-like scales called 'dermal denticles', multiple gill slits, two sets of paired fins (pectoral and pelvic), claspers (the paired, cartilage-supported copulatory organs of male sharks, developed along the inner margin of the pelvic fins), a backbone that extended into the upper lobe of the tail, and a strongly heterocercal tail fin (more properly called a caudal fin), in which the upper lobe is considerably longer than the lower.
Geneticist Andrew P. Martin and his co-workers have measured mtDNA differences in several species of sharks. In order to calibrate his molecular clock for sharks, Martin needed to relate a genetic change in one or more populations of these animals to a reliably-dated geological event. He compared genes of two populations of a small species of hammerhead (Sphyrna tiburo) that were separated by the rise of the Isthmus of Panama, which occurred some 7 to 3 million years ago. To their surprise, Martin and his colleagues found that the rate of genetic change in sharks is positively glacial compared with that of mammals — some seven to eight times slower.
Morphological studies of modern lamnids by systematist Leonard J.V. Compagno and others provide another source of evidence useful for tracing the group's evolutionary history. Such studies not only support that Isurus derived from Carcharodon, but also suggest that Carcharodon derived from Lamna. Intriguing new evidence from molecular genetics fully supports this evolutionary hypothesis. It is not yet clear from the fossil record which lamnoid was the common ancestor of Lamna, Carcharodon, and Isurus. Some paleontological circles suspect the best candidate may be Isurolamna or a similar as-yet undiscovered species. Other circles favor a species called Cretalamna, known from fossil teeth dating from the late Cretaceous to the mid-Paleocene (about 100 to 60 million years ago). The teeth of Cretalamna are much more solidly built than those of any modern lamnid. But Cretalamna teeth resemble those of Lamna in being smooth-edged with well-developed basal cusplets (small secondary cusps on either side of the main blade). In addition to being a possible ancestor of the mighty great white, Cretalamna almost certainly gave rise to one of the most fearsome predators the ocean has ever produced, the giant-toothed shark known as Megalodon.
Based on his research to date, Martin estimates that in sharks a 1 percent difference in gene sequence corresponds to approximately 6 million years' divergence time. In a recent study, Martin measured the percent difference in the mtDNA of representatives of all three genera within the family Lamnidae. He found that Lamna was the most divergent genus, being roughly 7.6% different from Carcharodon. This genetic difference suggests that the separation of Lamna and Carcharodon occurred some 65 to 35 million years ago. Martin also found that Isurus differed from Carcharodon by about 7.1%. This difference suggests that these two genera diverged about 60 to 35 million years ago. Therefore, according to Martin's genetic studies, the genus Carcharodon can be traced back no more than about 60 million years.
Evidence from molecular genetics thus supports the paleontologist's proposed origin time of Carcharodon (if Isurolamna is included in its lineage) and Isurus (if its earliest representative has been correctly identified). However, the genetic evidence suggests a far more ancient origination time for Lamna than is presently supported by the fossil record (65 million years ago versus 42-38 million years ago).
Using molecular clocks to calculate origin times of biological lineages is still in its infancy, and — like any newfangled technique — remains controversial. But both paleontologists and geneticists agree that, compared with other modern sharks, Carcharodon is a relatively ancient genus.
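Martin's calibration, as described above, can be turned into a one-line conversion. The simple point estimates it produces fall inside the 65-35 and 60-35 million-year ranges quoted; those ranges reflect the uncertainty in the clock rate. The helper below is our illustration, not Martin's code.

```python
# Sketch of the shark molecular clock described in the text:
# a 1% mtDNA sequence difference corresponds to roughly
# 6 million years of divergence time.

MY_PER_PERCENT = 6.0  # million years of divergence per 1% difference

def divergence_time_my(percent_diff: float) -> float:
    """Point estimate of divergence time in millions of years."""
    return percent_diff * MY_PER_PERCENT

# Figures from the text:
print(divergence_time_my(7.6))  # Lamna vs Carcharodon: ~45.6 My
print(divergence_time_my(7.1))  # Isurus vs Carcharodon: ~42.6 My
```

Both point estimates sit comfortably within the published ranges, illustrating how a slower clock (sharks versus mammals) stretches the same percentage difference over far more time.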
There are thousands of fossil shark scales in collections (which are probably the most abundant of vertebrate microfossils, but often overlooked because of their tiny size), hundreds of fin spines, the occasional vertebra or cranium, and - very exceptionally - impressions of soft tissues. But, because they are mineralogically stable and shed throughout a shark's lifetime, mostly we have teeth - thousands upon thousands of fossilized shark teeth sparkling in an enormous void of geologic time.
The earliest sharks are represented by a mere handful of isolated scales. Shark scales have a characteristic tooth-like structure, so we can be reasonably confident that such scales did, in fact, come from some kind of shark. The oldest shark-like scales date back to the Late Ordovician period, about 455 million years ago, from what is now Colorado. These scales, however, differ from those of modern sharks in several important respects, so not all paleontologists agree that they came from true sharks. The oldest undisputed shark scales are about 420 million years old, from early deposits in Siberia. These diminutive survivors of prehistory have been assigned to the genus Elegestolepis, but we have no clues about what the rest of the shark might have looked like. Shark-like scales of similar age are also known from what is now Mongolia, and have been assigned to the genera Mongolepis and Polymerolepis. Other than having names for these earliest sharks, we know almost nothing about them.
Fortunately, the shark fossil record becomes richer and more varied from the Devonian Period onward. The earliest fossil shark teeth are from early Devonian deposits, about 400 million years old, in what is now Europe. These teeth are two-pronged and puny, less than an eighth of an inch (3-4 millimetres) in length. They belonged to a mysterious ancient shark known as Leonodus. Based on its double-cusped teeth, Leonodus may have belonged to a family of freshwater sharks known as the xenacanths. But not all paleontologists agree on this interpretation. Thus, like most of the earliest sharks, Leonodus is a name without a face.
The oldest fossilized shark braincase is from mid-Devonian deposits about 380 million years old, in what is now New South Wales, Australia. Based on the form of this nearly complete braincase, many paleontologists believe that its former owner may have been a xenacanth. The oldest partially articulated fossilized shark remains were discovered by geologist Gavin Young in deposits of about the same age in the Lashley Range of Antarctica. Although they display an odd combination of features, these remains may also have been from a xenacanth - possibly the same species as produced the oldest fossil shark braincase. Young named this 16-inch (40-centimetre) shark Antarctilamna, meaning "lamnid shark from Antarctica". Impressions of braincases, fin spines, and teeth from this early shark are known from Australia and Saudi Arabia.
The earliest known skeletal fragments of any chondrichthyans date from at least 380 million years ago. New evidence suggests that neurocrania (the cartilaginous “skull”) of the shark genus Pucapampella, from Mid Devonian rock strata of Bolivia and South Africa, may be even slightly older than 380 million years.
Despite all these fossilized Antarctilamna bits and pieces, paleontologists have had a difficult time puzzling out what the whole animal was like in life. Antarctilamna had a stout spine in front of the long, low dorsal fin and two-pronged teeth (a tooth type termed "diplodont"), a combination which immediately suggests xenacanth affinities.
Xenacanths were almost exclusively freshwater inhabitants, and had a long, rearward-pointing fin spine just behind the cranium (the name xenacanth means "strange spine"), diplodont teeth, a slender, eel-like body, an elongate dorsal fin extending along most of the back, and a symmetrical, tapering tail. If Antarctilamna was a xenacanth, it probably had the same type of body form and tail, which may have allowed it to swim among dense lake vegetation. Thus far, Antarctilamna is known only from freshwater deposits, therefore - whatever its body form - it seems likely that it led a xenacanth-like lifestyle, haunting freshwater lakes and rivers. But Antarctilamna also had some very unxenacanth-like features. In particular, its fin spines more closely resemble those of another group of ancient sharks known as the ctenacanths. In both Antarctilamna and the ctenacanths, the fin spines are cylindrical and ornamented with unique rows of small thorn-like denticles (the name ctenacanth means "comb spine"). The ctenacanths were more typically shark-shaped than the eel-like xenacanths, with a solidly-built, tapered body, two separate dorsal fins, and a deeply-forked tail. Yet ctenacanths are also characterized by having multi-cusped teeth (a tooth type termed cladodont, meaning "branch-toothed"), which are very unlike those of Antarctilamna and the xenacanths. Current paleontological consensus tentatively classifies Antarctilamna as a xenacanth, but it is still not settled whether it was a xenacanth with ctenacanth-like fin spines, a ctenacanth with xenacanth-like teeth, or something else altogether.
Despite these uncertainties about the interrelationships, form and lifestyle of Antarctilamna, there is no doubt that it was a full-fledged, card-carrying shark - making it among the very earliest verified ancestors of modern sharks. Thus, sharks were already a distinct lifeform by the middle Devonian Period, more than 400 million years ago.
The world was a very different place back then. There were only two continents, Laurasia in the north and Gondwanaland in the south. These landmasses were surrounded by warm, shallow seas. If you were to travel back in time 400 million years, you would find a veritable bestiary of strange and bizarre creatures. Life thrived in the Devonian seas.
Although early sharks are rooted in the Ordovician period, the first well preserved early shark fossil to be discovered was Cladoselache, dating from approximately 350 million years ago, which has been found within the strata of Ohio, Kentucky and Tennessee. The fossil of this shark was found miraculously intact in the Cleveland Shale deposits on the south shore of Lake Erie. It was so well preserved that its muscle fibers were visible, as were its kidneys. Cladoselache had two low dorsal fins both with prominent spines, broad based pectoral fins and eyes set far forward on the head. The mouth was at the front of the head as opposed to the under slung mouths of modern sharks, and the teeth had a large central pointed cusp with a smaller point on each side. Although Cladoselache was almost certainly not the first ever true elasmobranch, armed with Cladoselache, paleontologists were able to categorically state that elasmobranchs had arrived.
Cladoselache was only about 1 m long with stiff triangular fins and slender jaws. Its teeth had several pointed cusps, which would have been worn down by use. From the number of teeth found in any one place it is most likely that Cladoselache did not replace its teeth as regularly as modern sharks. Its caudal fin had a similar shape to those of the great white shark and the pelagic shortfin and longfin makos. The discovery of whole fish found tail first in their stomachs suggests that they were fast swimmers with great agility.
Like many ancient sharks, Cladoselache had a short, rounded snout, a mouth located at the front of the head (a mouth type called "terminal"), long jaws attached to the cranium under the snout and behind the eye, cladodont teeth, and a stout spine in front of each dorsal fin. Yet it also had strong keels developed along the side of the tail stalk and a crescent-shaped tail fin, with an upper lobe about the same size as the lower (in most modern sharks, the tail is decidedly top-heavy, with the upper lobe considerably longer than the lower). In these posterior respects, Cladoselache resembles the modern mackerel sharks of the family Lamnidae, a group which includes the white shark and its close relatives, the makos and mackerel sharks. The combination of lateral keels and crescentic tail fin is highly characteristic of fast-swimming fishes such as tunas, billfishes, and mako sharks. Many paleontologists therefore believe that Cladoselache was specialized as a high-speed predator. Remarkably well-preserved specimens from the Cleveland Shale of Ohio support this notion.
Except for small, multi-cusped scales along the edges of its fins, in the mouth cavity, and around the eye, Cladoselache's skin seems to have been almost devoid of dermal denticles. Dermal denticles serve as more than simple armor against injury, they strengthen the skin to provide firmer attachments for swimming muscles, yet Cladoselache managed to make do almost without them. Cladoselache's fin spines were odd, too. They were unusual in being short and blade-like, composed of a porous bony material, and located some distance anterior to the origin of each dorsal fin. These fin spines may have been lighter and sturdier than the denser, more spike-like ones of other sharks. These light-weight but stout fin spines may have reduced swimming effort yet provided solid discouragement to would-be predators.
Unlike any other shark, ancient or modern, Cladoselache seems to have lacked claspers. Other sharks had already developed claspers by the time of Cladoselache's appearance. The xenacanths, for example - which appeared some 50 million years before Cladoselache - had limb-like claspers supported by skeletal elements which are sometimes preserved as fossils. Diademodus, a contemporary of Cladoselache, apparently also had well-developed claspers. It seems highly unlikely that every known specimen of Cladoselache is female, so it is something of a mystery how these sharks reproduced. Yet Cladoselache obviously managed to procreate somehow, as its lineage survived for nearly 100 million years. It may seem an unpleasant idea, but perhaps Cladoselache achieved internal fertilization by partially extruding the rear part of its cloaca and using that as the organ of sperm transfer. This is the method of copulation used by most modern birds and a few modern amphibians and reptiles - namely, the caecilians (which resemble legless salamanders) and the lizard-like tuatara.
From about 300 to 150 million years ago, most fossil sharks can be assigned to one of two groups. One of these, the Acanthodii, was found almost exclusively in freshwater environments. By the time this group became extinct (about 220 million years ago), it had achieved a worldwide distribution. The other group, the hybodonts, appeared about 320 million years ago and was found mostly in the oceans, but also in freshwater.
The 'Cleveland Shale' on the south shore of Lake Erie has provided paleontologists with some of the most remarkable - and fortunate - geological accidents ever: about 100 specimens of a 370-million-year-old, 4-foot (1.2-meters) long shark called Cladoselache, some of which are so exquisitely preserved that not only teeth and fin spines, but also jaws, crania, vertebrae, muscle fibers, and even kidney tubules are discernible to varying degrees.
These extremely well-preserved Cladoselache specimens support the notion - inferred from its tail shape - that it was a fast-swimming hunter. Paleontologist Mike Williams has studied many of the superbly preserved fossil specimens of Cladoselache excavated from the 'Cleveland Shale'. Astonishingly, 53 of these specimens had identifiable traces of their last meal preserved in their gut regions. These allowed Williams to glean some insights into the predatory habits of Cladoselache. He found that 65% of specimens examined had eaten small ray-finned bony fishes, 28% shrimp-like Concavicaris, 9% conodonts (peculiar hagfish-like proto-vertebrates with complex, comb-like teeth), and one specimen had eaten another shark. (These percentages add up to more than 100 because some specimens had eaten more than one kind of prey.)
The orientation of food items in the body cavity suggests that Cladoselache was swift enough to catch its prey on the fin. Its teeth were multi-cusped and smooth-edged, making them suitable for grasping but not tearing or chewing. Cladoselache therefore probably seized prey by the tail and swallowed it whole.
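Because a single specimen can contain more than one prey type, the per-prey percentages Williams reported can legitimately sum to more than 100. The toy tally below uses hypothetical specimen data (not Williams's raw counts) just to show the effect.

```python
from collections import Counter

# Hypothetical gut contents: each entry is the set of prey types
# found in one fossil specimen. A specimen with two prey types is
# counted once in each category, so percentages can exceed 100 total.
specimens = [
    {"bony fish"}, {"bony fish", "Concavicaris"}, {"conodont"},
    {"bony fish"}, {"Concavicaris"},
]

counts = Counter(prey for s in specimens for prey in s)
for prey, n in sorted(counts.items()):
    print(prey, f"{100 * n / len(specimens):.0f}%")
# bony fish 60%, Concavicaris 40%, conodont 20% -> totals 120%
```

The same overlap explains why Williams's figures (65% + 28% + 9%, plus one shark) add up to more than 100 across his 53 specimens.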
There may have been another reason for Cladoselache to adopt a high-speed lifestyle. It shared the Devonian seas with Dunkleosteus, a 20-foot (6-metre) long predatory placoderm with huge teeth and massive, heavily armored jaws.
About the same time Cladoselache first appeared, there evolved an important group of sharks known as the ctenacanths. The ctenacanths shared numerous conservative features with Cladoselache, but also developed several more advanced ones. Like Cladoselache, the ctenacanths had cladodont teeth, jaws attached to the skull at front and back, broad-based pectoral fins, and a strong spine in front of each dorsal fin. But unlike Cladoselache, the pectorals of ctenacanths were supported at the base by three blocks of cartilage - as in most modern sharks - allowing them greater flexibility.
Ctenacanths were also different in that their fin spines were long and cylindrical, with characteristic longitudinal ridges and unique comb-like rows of tubercles (hence their name). These spines were composed of a dense enameloid material and deeply imbedded along the front margin of each dorsal fin - as in modern spiny dogfishes (family Squalidae) and bullhead sharks (Heterodontidae).
Ctenacanths are known almost entirely from abundant fossils of their distinctive fin spines (body impressions or skeletal remains of these sharks are quite rare). The best-known genus is Goodrichthyes, known from a 7.5-foot (2.3-metre) specimen from early deposits in what is now Scotland. Unfortunately, this specimen is contained in some 200 separate pieces of rock, and is thus rather difficult to interpret. The genus Ctenacanthus itself is represented by many species, almost all of them established on the basis of fin spines. The ctenacanths appeared in the Late Devonian (about 380 million years ago - slightly earlier than Cladoselache) and persisted until the Permian, with a few hanging on into the Triassic (about 250 million years ago). But there is no doubt that their heyday - in terms of diversity and abundance - was during the Carboniferous.
The first major shark radiation occurred during the Carboniferous Period, 360 to 286 million years ago. The Carboniferous (meaning "coal forming") gets its name from the thick layers of plant matter, laid down when shallow seas drowned the northern continents, that were later squeezed into coal. In freshwater lakes swam lungfishes and xenacanth sharks (descendants of Antarctilamna that persisted in freshwater environments until the Early Triassic Period, about 220 million years ago). In the sea, corals, bryozoans, crinoids, and molluscs flourished. But, with the exception of acanthodians, few fishes swam in early Carboniferous seas. The fossil record indicates that more than 75% of fish groups alive during the Late Devonian died out before the beginning of the Carboniferous. The placoderms — a once dominant group of armored fishes — survived this extinction event, but at greatly reduced diversity and abundance.
The misfortune of the placoderms presented a splendid opportunity for sharks in general and one group in particular: the stethacanthids. Perhaps in response to the ecological niches vacated by the placoderms, the stethacanthids exploded into a riot of bizarre forms and lifestyles. It was a kind of stethacanthid golden age — complete with outrageously unwieldy headgear and strange but fascinating rituals. One of the most outlandish of these sharks was Stethacanthus itself. Best known from Carboniferous deposits in central Scotland and Montana, Stethacanthus was a two-foot (60-centimetre) long shark that inhabited warm, shallow seas. Intriguingly, no female specimen (identifiable by the lack of claspers) of Stethacanthus has ever been found. Yet a contemporary and very similar genus, Symmorium, is represented entirely by specimens without claspers. One possibility is that Symmorium may actually be female Stethacanthus. If this is so, then female Stethacanthus were perfectly charming, graceful little sharks. But the males can perhaps be best described as haberdashery-impaired. Male Stethacanthus (sporting well-developed claspers) had an enormous, flat-topped dorsal fin bristling with enlarged scales. Basically, it looked like a fish with a brush sticking out of its back. In addition, male Stethacanthus had similar enlarged scales on top of the head, making the whole contraption resemble a set of large, bristle-toothed jaws.
Dozens of highly imaginative ideas have been advanced to 'explain' the function of Stethacanthus' bizarre headgear. One suggestion is that the paired structures might have mimicked the jaws of some creature much too big to attack, deterring would-be predators. Another, somewhat more whimsical notion, is that — by craning its neck and arching its back — Stethacanthus might actually have clamped onto the belly of a larger marine animal and hitched a ride. This hitch-hiking behavior is reminiscent of modern-day remoras, which use their sucker-disc (also a modified dorsal fin) to cling to whales, sea turtles, sharks and other large fishes. Unfortunately, according to paleoichthyologist and stethacanthid specialist Richard Lund, the brush structure does not appear to have been very mobile. Neither of the above proposals, however, explains why only male Stethacanthus are so endowed.
It seems far more likely that the dorsal brush and cranial bristles of Stethacanthus played some role in their courtship rituals. Perhaps the brush was a symbol of virility, like the antlers of deer stags, enabling Stethacanthus females to choose the best, most genetically fit male with whom to mate. Or perhaps the brush and bristles were used during male-to-male pushing matches, enabling the combatants to grapple together as they tested each other's strength in competition for access to mating grounds or sexually receptive females. Similar contests of strength are known to occur in modern bannerfishes of the genus Heniochus. To a lesser degree than Stethacanthus, bannerfishes have sculpted foreheads which facilitate males locking together, eyeball to eyeball, for macho pushing matches. If, like modern sharks, Stethacanthus relied on forward motion to ventilate its gills, the weaker combatant in such matches would become breathless fairly quickly, and be forced to concede victory.
In overall body form, Stethacanthus was apparently a highly streamlined shark, with falcate, relatively narrow-based pectoral fins and a nearly symmetrical, Cladoselache-like tail fin. Therefore, Stethacanthus may have been a fast swimmer with good maneuverability ... were it not spoiled by the unwieldy dorsal brush. If the name of the evolutionary game is reproductive success, it seems that good hydrodynamics and swimming efficiency were not the highest priorities governing natural selection in male Stethacanthus. Unfortunately, we will probably never know to what purpose — if any — Stethacanthus males put their large and ungainly headgear.
Adding credence to the notion that Symmorium, lacking a dorsal brush, may actually be female Stethacanthus is a diminutive species known as Falcatus falcatus. Falcatus was also a stethacanthid, but it grew to a length of only about six inches (15 centimetres) — about the same size as the very smallest of living sharks. Falcatus inhabited the warm, shallow seas that invaded the American mainland during the Early Carboniferous, about 325 million years ago. Discovered by Richard Lund in the Bear Gulch formation of Montana, Falcatus may have been even more sexually adventurous than its brush-headed cousin, Stethacanthus. Male Falcatus had a large, sword-like appendage — apparently a modified fin spine — projecting forward over its head like a sunshade. Falcatus seems to have used this odd head ornament in a kind of piscine foreplay. Lund's best-known discovery consists of a pair of fossilized Falcatus falcatus apparently preserved in the act of mating. A single slab of limestone shows the larger female grasping the male (identifiable by its claspers) by the 'antler' projecting from its head. Precopulatory rituals have been observed in only a few species of modern sharks, but in most cases the male ritualistically bites the female's back, pectoral fins, or gill pouches prior to intromission. It would seem that Falcatus females were more liberated than some of their descendants. In any case, in Falcatus we have clear evidence of sexual dimorphism in ancient sharks. Such obvious differences between the sexes are unknown in modern sharks.
The stethacanthids were only one group of sharks freed by the decline of the placoderms. Many other shark groups also underwent massive radiations during the Carboniferous Period. As long as the placoderms ruled the seas, sharks were relegated to ecological gutters. But when the placoderms were all but wiped out at the dawn of the Carboniferous, the marine playing field was leveled and the dogfish had their day. For all their weirdness, however, many of these strange Carboniferous sharks were apparently quite successful. During the majority of the Carboniferous Period, sharks outnumbered bony fishes by a ratio of three to two.
Shark diversity during the Carboniferous Period was nothing less than astonishing. The Carboniferous boasted about 45 families of sharks (compared with about 40 families of modern sharks — not counting the rays, which would appear later). It was a veritable Golden Age of Sharks. At the close of the Permian Period, about 250 million years ago, there occurred what has been called the Permian-Triassic extinction event. In a geological instant, fully 99% of marine species were wiped out — including the extravagant stethacanthids. But some shark lineages squeaked through this catastrophe, one of them eventually giving rise to modern sharks. Although modern sharks are remarkably diverse in form and lifestyle, no shark today matches those of the Carboniferous for sheer weirdness.
During the evolution of the chondrichthyans there have been many groups with bizarre appearances. These families are sometimes collectively referred to as "paraselachians". Many fossil skeletons contain unusual appendages, most of which have not yet been conclusively explained.
Some examples of these paraselachians include:
- Stethacanthus - a cladodont which lived from the Late Devonian through the Carboniferous, between about 380 and 300 million years ago. It had a modified first dorsal fin that terminated in a spine-covered pad reminiscent of an inverted scrubbing brush. Its forehead also had a similar surface. These surfaces may have been used for pinning prey or for mating.
- Helicoprion - from the Permian Period, had a conveyor belt of teeth that spiraled out of its lower jaw and a thin corresponding line of sharp teeth in the upper jaw. The lower whorl of teeth rotated out of the jaw as the shark grew. Unlike most sharks it retained its smaller previous teeth, which rotated back into the jaw, forming a spiral or whorl not unlike the growth pattern of a shell. The two tooth surfaces sliced against each other, giving it a formidable shearing weapon.
- Falcatus - from the Carboniferous Period, had a curving, forward-facing appendage in place of its first dorsal fin. It has been suggested that only the male may have had this sword-like structure.
- Xenacanthus - a member of the pleuracanthids. It had a long, backward-facing spike extending from the back of its skull and an eel-like or ribbon-like fin running down the length of its back.
- Iniopteryx - Iniopterygians lived from the Carboniferous into the Permian Period. More closely related to modern-day chimaeras, they had flexible pectoral fins which were disproportionately long and rayed for strength. It is unclear whether these "wings" were used to glide above the water or to paddle under it. The leading edge of the wings was covered with sharp toothy denticles.
As the Permian Period was drawing to a close, the seas were filling with actinopterygians - the ray-finned fishes. This was a food source that could not be ignored by the oceans' predators. In response, the elasmobranchs began to radiate again, and during the Early Triassic a shark appeared in the fossil record that was similar enough in appearance to modern-day sharks to be considered one of the first of the "modern sharks". The name of this shark was Palaeospinax.
Palaeospinax was morphologically similar to the dogfishes of the family Squalidae. It had a calcified, sectioned vertebral column instead of a continuous notochord, its two dorsal fins had supportive leading-edge spines, and, most notably, it had the underslung mouth of a modern shark.
Amongst the first of the presently extant sharks to swim in the seas were the slow-swimming Horn sharks and the Cow sharks. Towards the mid-Cretaceous, however, the fare to be had in the mid-oceans was enough to drive the development of fast-moving predators that could pick off large, schooling, offshore fishes. At the time the seas were ruled by enormous ichthyosaurs and plesiosaurs, so this new food source did not come without risk to the sharks.
During the Cretaceous most of the present genera were firmly established. Then, around 65 million years ago at the end of the Cretaceous, a catastrophe occurred which wiped out the dinosaurs and many other species, leaving the remaining sharks as the supreme rulers of the oceans.
Modern sharks began to appear during the Mesozoic Era, with the second major radiation of sharks occurring during the Jurassic Period, 208 to 144 million years ago. At this time, pterosaurs ruled the skies and the first birds were taking to the air. On land, gigantic sauropod dinosaurs such as Brachiosaurus stripped leaves from the branches of tall trees like cycads and conifers. Stegosaurs dined on smaller plants, nervously watching for Allosaurus and other large carnivorous theropods. In the seas, ichthyosaurs, long-necked and short-necked plesiosaurs, and mesosuchian crocodilians pursued schools of bony fishes and flotillas of ammonites. It was into this Jurassic world that the modern sharks first appeared.
Fossil mackerel shark teeth first occur in the Lower Cretaceous. One of the most recently evolved families of sharks is the hammerheads (family Sphyrnidae), which emerged in the Eocene. The oldest white shark teeth date from 60 to 65 million years ago, around the time of the extinction of the dinosaurs. In early white shark evolution there are at least two lineages: one with coarsely serrated teeth that probably gave rise to the modern great white shark, and another with finely serrated teeth and a tendency to attain gigantic proportions. This group includes the extinct Megalodon, Carcharodon megalodon, which like most extinct sharks is known only from its teeth and a few vertebrae. This shark could grow to more than 16 metres (52 ft) long and is recognized as the biggest known carnivorous fish to have ever existed. Fossil records reveal that this shark preyed upon whales and other large marine mammals.
It is believed that the immense size of predatory sharks such as the great white may have arisen from the extinction of giant marine reptiles, such as the mosasaurs, and the diversification of mammals. At the same time these sharks were evolving, some early mammalian groups were evolving into aquatic forms. Certainly, wherever the teeth of large sharks have been found, there has also been an abundance of marine mammal bones, including seals, porpoises and whales. These bones frequently show signs of shark attack. One hypothesis is that large sharks evolved to better take advantage of this larger prey.
No one is sure from which group of ancient sharks their modern descendants evolved. Until recently, it was thought that all modern sharks descended from a group known as the hybodonts.
Hybodus, which grew to a length of eight feet (2.5 metres) and lived in shallow seas about 180 million years ago, is perhaps the best known example of this group. (The hybodonts even had representatives in freshwater and brackish habitats. Most of these freshwater hybodonts were extremely small - such as the 6-inch [15-centimetre] Lissodus, known from Permian deposits in Africa about 275 million years old.) Hybodus was certainly very sharky-looking, with a blunt head, a curious ridge above the eyes, a well-developed spine on the forward edge of both dorsal fins, and two types of teeth: high grasping 'canines' in the front of the mouth, low crushing 'molars' in the rear.
It has even been suggested that Hybodus was a direct ancestor of the modern bullhead sharks (family Heterodontidae), which have somewhat similar brow ridges, fin spines and teeth. Paleoichthyologist John G. Maisey is probably the world's foremost authority on hybodonts. Based on his extensive studies of fossil and modern sharks, Maisey believes that the hybodonts were a side-branch of shark evolution that did not give rise to any group of modern sharks. Maisey has proposed that a fossil genus known as Synechodus may be more closely related to modern sharks than the hybodonts. If so, Synechodus and modern sharks would be "sister groups" (sharing a relatively recent common ancestor), with the hybodonts sharing a more distant ancestor with both groups. Only time and further research can shed more light on the murky origins of modern sharks.
The earliest known modern shark may be Mcmurdodus, which is known from mid-Devonian deposits (about 390 million years old) in what is now western Queensland, Australia. On first consideration, this is an astonishingly early date for the origin of modern sharks - actually predating Cladoselache and Antarctilamna. The modern shark status of Mcmurdodus is based on the structure of its tooth enameloid, which - although proper histological examination has not yet been carried out - appears to be of a multi-layered type which is found in all living sharks but not in most ancient sharks (a potentially important exception is Xenacanthus, which is still regarded as an ancient shark due to other structural features). If Mcmurdodus, Cladoselache, and Antarctilamna are members of lineages that appeared at about the same time, it is tempting to speculate that they are all results of a single massive shark radiation that occurred in the early Devonian. Although Cladoselache, Antarctilamna and other ancient shark lineages persisted for a while, only the modern sharks (whether descended from Mcmurdodus or not) made the final casting cut in the Great Evolutionary Drama.
How Mcmurdodus is related to living sharks is not clear. Like many of the earliest sharks, Mcmurdodus is known only from its fossilized teeth. In overall form, these resemble the sawlike lower teeth of the extant cowsharks (family Hexanchidae). This resemblance, however, may be due to convergence (the development in unrelated organisms of similar anatomical solutions to shared environmental challenges) rather than evolutionary relatedness. There is a 190-million-year gap in the fossil record between the last Mcmurdodus and the first unquestionable cowshark. This large gap does not preclude the possibility that Mcmurdodus was related to the modern cowsharks, but it does make it difficult to 'connect the dots' with any confidence.
About 140 million years after Mcmurdodus, there appeared another early modern shark named Paleospinax. Paleospinax is known primarily from teeth of Early Triassic to Eocene age, about 250 to 60 million years ago. However, a few specimens from Early Jurassic deposits in what is now England and Germany include well-preserved impressions of jaws and vertebrae. Paleospinax was less than three feet (1 meter) long and had many of the features associated with modern sharks, including: a longish snout, a mouth located underneath the head (a mouth type termed "subterminal"), short jaws that were attached to the cranium only at the back, teeth with dense enameloid, and well-developed vertebrae. In its overall body form and the presence of fin spines, Paleospinax resembled the modern spiny dogfishes (family Squalidae) - although some paleoichthyologists have suggested that it may have been a galeomorph, a group which includes many of the most familiar non-dogfish sharks. Despite their antiquity and uncertainties as to how they are related to living sharks, both Mcmurdodus and Paleospinax are among the earliest modern sharks, belonging to a group known as the neoselachians ("new sharks").
Rise of the Neoselachians
The neoselachians radiated rapidly and by the mid-Cretaceous, about 100 million years ago, most modern groups of sharks had appeared. In a 1985 paper, paleontologists Detlev Thies and Wolf-Ernst Reif proposed that the neoselachian radiation was an opportunistic response to abundant new sources of food. Thies and Reif deemed the radiation of two types of bony fishes particularly important in fueling the flowering of new sharks: the ray-finned fishes (class Actinopterygii) - especially the carp-shaped semionotids and other basal neopterygians - in the late Triassic, and the so-called 'higher' teleosts (a phylogenetic hodgepodge containing most of the better known ray-finned fishes) from the early Jurassic onward. These new bony fish types provided vast shoals of fast-swimming, thin-scaled food-on-the-fin for those predators that could catch them. The neoselachians answered this 'dinner call' admirably - they had the speed, maneuverability, flexible jaws, and enhanced sensory systems essential to hunting such swift prey.
The early neoselachians were predominantly near-shore predators. But in the mid-Cretaceous, neoselachians evolved a whole new mode of making a living: fast offshore hunting. Thies and Reif suggested that this new hunting mode was in response to increased size and speed of teleost fishes and pelagic squids. These fast off-shore neoselachians did not, however, rule their pelagic realm uncontested. The marine reptiles of that time - such as the dolphin-shaped ichthyosaurs and the long-necked, rhomboid-paddled plesiosaurs - may have been fast enough to compete with these neoselachian upstarts, and perhaps even eat the smaller species. In contrast, the late Cretaceous mosasaurs - huge, short-necked, crocodile-like relatives of the plesiosaurs - were probably too slow and lumbering to compete with the faster, more agile offshore sharks and may have been preyed upon by the very largest species.
- Main article: Otodus-Carcharocles lineage
Cow and Frilled sharks
Among the most ancient of surviving neoselachian lineages are the cow and frilled sharks (orders Hexanchiformes and Chlamydoselachiformes, respectively). Cow sharks are represented in the fossil record by their characteristic cockscomb-shaped lower teeth, dating as far back as the early Jurassic Period, about 190 million years ago. Articulated cow shark remains are known from the late Jurassic, about 150 million years ago. The eel-like Frilled Shark (Chlamydoselachus anguineus) is probably at least as ancient, but its unique trident-shaped teeth are known only as far back as the late Cretaceous, about 95 million years ago. Although a few cow sharks have secondarily invaded coastal, shallow-water habitats (notably the Broadnose Sevengill, Notorynchus cepedianus and - in certain parts of its range - the Bluntnose Sixgill, Hexanchus griseus), most hexanchoids are dedicated deep-sea animals. One is tempted to suspect that these sharks have been 'hiding out' in the stygian blackness of the abyss while their shallow-water cousins competed vigorously with each other for vital resources in the sunlit shallows, far above. But these sharks are not ecological draft evaders; they are simply adapted to making a living in very specialized and difficult surroundings. Yet, because food and other resources are much more abundant over the continental shelves which surround large landmasses, neoselachian diversity and abundance to this day remain richest in these fecund near-shore waters.
Perhaps the most astonishing and unprecedented expression of neoselachian adaptability is the evolution of filter-feeding sharks and rays. At about the same time during the early-to-mid Tertiary Period, roughly 65 to 35 million years ago, four separate neoselachian lineages independently shifted from active predation to a more laid-back grazing modus vivendi. The carpet shark lineage (order Orectolobiformes) gave rise to the modern Whale Shark (Rhincodon typus), two distinct lineages of mackerel shark (Lamniformes) gave rise to the Basking (Cetorhinus maximus) and Megamouth (Megachasma pelagios) sharks, and the stingrays (Myliobatiformes) gave rise to the devil rays (Mobula species) and Manta (Manta birostris).
All these swimming colanders share four key adaptations that enable them to separate their tiny prey from the saline broth through which they swim: 1) large to enormous size, 2) a very wide terminal or nearly terminal mouth, 3) reduced dentition, and 4) elaboration of the gill tissues to form plankton sieves. We may never know what environmental changes precipitated this profound dietary shift. But it is probably no coincidence that the filter-feeding baleen whales also appeared at about the same time as these planktivorous neoselachians.
The strange and wonderful hammerheads (family Sphyrnidae) are among the most recent sharks to appear in the fossil record. The earliest of their single-cusped teeth are known from mid-to-late Eocene deposits, about 50 to 35 million years old. (The origin of hammerheads is difficult to determine precisely, as their teeth are very similar to those of closely related carcharhinids - notably Rhizoprionodon and Scoliodon.) Thus, hammerhead sharks appeared at about the same time as the 'dawn horse', Hyracotherium (better known by its older and more euphonious name, Eohippus) appeared on land - and more than 35 million years before the first ape-like creature that could be considered even remotely human. Hammerheads may seem an improbable design, but they were here long before us.
Evolution of Lamnoid sharks
The lamnoids (order Lamniformes) include many of the most famous and instantly-recognizable of sharks. The Goblin Shark, Sandtiger, threshers, Megamouth, Basking, and the Great White are all members of this group. From the dim depths of prehistory, these sharks have left a rich fossil record.
As a group, lamnoids are characterized by heavily-built, solid teeth that have proven durable against the onslaught of erosion over geological time. As a result, their ancestors have left many beautiful and highly informative fossil teeth. In addition, the lamnoids have heavily calcified but fragile vertebral centra which are also sometimes preserved. Beyond these structural basics, only a few assorted fossilized bits and pieces survive - some of them squirreled away in private collections, where their true value remains hidden from paleontologists.
Curiously, very few lamnoids are known from articulated fossil remains. An important exception is Scapanorhynchus lewisii, which is known from well-preserved body fossils from early Cretaceous deposits (about 120 million years old) in Lebanon. Scapanorhynchus is believed to be a direct ancestor of the modern Goblin Shark (Mitsukurina owstoni), based on the many features they share, such as a long, blade-like snout; striated, fang-like teeth; and a long tail with a weak lower lobe. Although the goblin shark can reach a length of 11 feet (3.4 metres), most specimens of Scapanorhynchus are much smaller, about two feet (65 centimetres) long. (A large, shallow-water species known as S. texanus had 2-inch [5-centimetre] teeth, suggesting it grew as large as the extant goblin shark, but there is some contention whether these two sharks are actually related.) Scapanorhynchus is also known from numerous spike-like fossil teeth which are superficially similar to those of the Sandtiger (Carcharias) and have been confused with them, but differ in the presence of fine grooves on the inner surface near the base of the blade. These teeth are known from deposits representing most of the Cretaceous (about 120 to 65 million years old), in such widely scattered locations as Europe, Africa, southwestern Asia, Australia, New Zealand, and South America. Due to fortuitous finds such as these, it seems likely that the goblin shark lineage diverged from the common ancestor of the lamnoids and became specialized relatively early in its evolutionary career.
Despite an abundant fossil record, how ancient sharks are related to each other and to their modern descendants is far from clear. Most of what we know about the evolution of lamnoid sharks comes from detailed studies of their fossilized teeth. Yet with only teeth to go on - no matter how beautifully preserved - it is extremely difficult to trace the evolutionary history of lamnoids. As a result, we often have more theories than we do specimens. Despite this paucity of data, the fossil record suggests two clear features of lamnoid evolution: these sharks underwent several massive bursts of adaptive radiation, followed by long periods of very slow and gradual diversification along separate lineages.
Fossil collector Gordon Hubbell has remarked that studying shark evolution is like watching a movie in slow motion. But at least with a movie, one has all the frames in order. In the shark fossil record, huge sections of the story are missing, distorted, or out-of-sequence and each specimen is more like a single frame from a very long movie. As such, the challenge facing shark paleontologists is more akin to figuring out a cohesive plot-line despite having only a few scattered and warped snapshots with which to work. The lamnoid sharks we see today are thus the products of a long, immensely convoluted history that is mostly hidden from human investigation.
Great White shark
The White Shark is a member of the family Lamnidae, which includes three genera: Carcharodon, Isurus, and Lamna. In Oligocene deposits about 30 million years old, teeth have been found that are very similar to those of the White Shark but lack the serrations that characterize the genus Carcharodon. Since the extant mako sharks of the genus Isurus have teeth that are always smooth-edged, these fossils have traditionally been classified as Isurus hastalis. Miocene deposits, about 23 million years old, in Italy have yielded very similar teeth, but with faint serrations near the tip of the blade. These teeth were classified as Isurus escheri, and were regarded as 'proof' that the modern saw-toothed great white evolved gradually from smooth-toothed mako sharks of the genus Isurus.
But nature is often subtler than human ideas about how it 'works'. Paleoichthyologist Henri Cappetta, one of the most distinguished researchers on fossil sharks, noticed that fossil teeth of 'Isurus' hastalis are very similar to those of the modern White Shark. In fact, Cappetta has remarked that the two are so similar that fossil Carcharodon carcharias teeth in which the serrations have been abraded away by geological activity are virtually impossible to differentiate from specimens of hastalis. In 1995, paleoichthyologist Mikael Siverson began to question the assignment of hastalis to the genus Isurus. Based on striking similarities between the root shape and overall structure of the tooth blade, Siverson now believes that hastalis and escheri are not makos at all, but direct ancestors of the modern White Shark. Siverson has therefore suggested that they should be re-assigned to the genus Cosmopolitodus. This view has also been adopted by paleontologist David Ward and seems to be gaining acceptance in at least some paleontological and fossil collecting circles.
The assumption that saw-toothed Carcharodon evolved from smooth-toothed Isurus is based on the idea that the appearance of serrations coincides with the origin of the genus Carcharodon. But it's relatively easy to serrate a tooth, as shown by the many clearly separate shark lineages which have independently evolved serrated teeth. A newer interpretation of the lamnoid fossil record holds that the Carcharodon lineage was originally smooth-toothed and is actually older than that of Isurus. According to this scenario, the Carcharodon lineage can be traced back to the smooth-toothed Isurolamna inflata, which lived about 65 to 55 million years ago. I. inflata gave rise to Macrorhizodus praecursor, which lived about 55 million years ago and had smooth-edged but broader teeth than its ancestor. Praecursor gave rise to Cosmopolitodus hastalis, which lived about 35 million years ago and developed even broader teeth. Hastalis, in turn, gave rise to Cosmopolitodus escheri, which lived about 25 to 20 million years ago and had weak serrations on its teeth. And finally, escheri gave rise to the modern White Shark, Carcharodon carcharias, which appeared some 11 million years ago and had the coarsely serrated teeth for which the genus is renowned today. Therefore, Carcharodon and Isurus both descended from Isurolamna inflata, and many smooth-edged fossil teeth originally named Isurus are in fact part of the Carcharodon lineage.
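This proposed succession of species and tooth characters can be laid out as a simple chronological table. The sketch below uses only the species names, approximate dates, and tooth descriptions quoted in the scenario above:

```python
# The proposed Carcharodon lineage, listed oldest-first. Ages are in
# millions of years ago (approximate start of each species' range, as
# quoted in the text); the third field summarizes tooth character.
lineage = [
    ("Isurolamna inflata",       65, "smooth-edged"),
    ("Macrorhizodus praecursor", 55, "smooth-edged, broader"),
    ("Cosmopolitodus hastalis",  35, "smooth-edged, broader still"),
    ("Cosmopolitodus escheri",   25, "weakly serrated"),
    ("Carcharodon carcharias",   11, "coarsely serrated"),
]

# Sanity check: each proposed descendant appears later (smaller 'mya')
# than its ancestor.
for (ancestor, mya_a, _), (descendant, mya_d, _) in zip(lineage, lineage[1:]):
    assert mya_a > mya_d, f"{descendant} should postdate {ancestor}"

print(" -> ".join(name for name, _, _ in lineage))
```

Arranged this way, the gradual trend is easy to see: tooth blades broaden first, and serrations appear only in the last two steps of the chain.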
The human digestive tract seamlessly adapts to any of the wide variety of foods that people eat. It has evolved to break down foods into their component nutrients and excrete waste efficiently. With many different organs playing a role in digestion, the humble digestive tract is actually a complex system.
Sterile Before Birth
The bacteria that populate the digestive tract and help in nutrient absorption are not present in the fetus. Babies acquire bacteria from the mother and the environment during birth and in the days after being born.
Outside the Body
Technically, the digestive tract is not inside the body but outside it: the tract is a canal that lets food in and waste out, and food must pass through the digestive wall to enter the body proper.
Liver-Esophagus Vein Connection
Veins that take blood from the digestive organs to the liver can back up if the liver has a problem. These liver veins also connect with veins in the esophagus. When liver problems cause a backup of blood, the esophageal veins swell and may burst, causing severe bleeding.
Sonic Hedgehog Gene
Human embryos have a simple digestive tube before the various parts of the digestive system develop. The oddly named sonic hedgehog gene is the driving force behind the development of the primitive digestive tube into the various digestive organs. This gene cues the activation of a variety of other genes in different parts of the primitive digestive tube.
Taste Ability Varies
The taste buds in the mouth vary in concentration among people. According to the medical textbook "Histology," 25 percent of people are "supertasters," with lots of taste buds. Another 25 percent are "nontasters," with a reduced ability to discriminate among certain tastes.
Frequent Gas Expulsion
Gas in the digestive tract comes from swallowed air and bacterial production. Some swallowed air is absorbed in the intestine. Passed gas, or flatus, is a mixture of nitrogen, carbon dioxide, hydrogen and methane. People normally pass gas up to 20 times per day, according to the medical text "Kumar and Clark's Clinical Medicine."
The stomach has sensory receptors that send information through the vagus nerve to the brainstem, the bottom part of the brain. If the information indicates the presence of toxins, the brainstem triggers nausea and vomiting.
The salivary glands in the mouth produce approximately 1.2 liters of saliva daily, according to "Histology." In addition to lubricating the mouth, saliva also contains enzymes that break down starches and kill certain potentially dangerous bacteria. It also provides calcium and phosphate to keep the teeth strong.
Esophageal Lining Changeable
The cells lining the esophagus differ from those lining the stomach, as stomach cells must protect against acid. Severe gastroesophageal reflux disease, in which stomach acid repeatedly gets into the esophagus, can stimulate the cells of the lower esophagus to change into cells like those of the stomach lining as a protective mechanism.
Bacteria May Cause Inflammation
A 2013 review in the journal "Pediatric Gastroenterology, Hepatology and Nutrition" states that the various types of bacteria in the intestine influence certain types of immune system cells. Research is ongoing to determine possible ways that digestive system bacteria might contribute to the development of certain diseases, such as inflammatory bowel disease. |
Over the last fifty years, a vast number of Americans have reaped the benefits of hydropower. Hydropower, or electricity produced from moving water, does not produce solid, liquid, or gaseous pollutants, and it is renewable, as water is considered an inexhaustible resource. The Hoover Dam, Niagara Power Plant, and their large and small relatives are responsible for contributing more than 90% of all the renewable electric energy produced in the United States.
Large-scale hydropower, or systems producing over 30 megawatts, is what often comes to mind when one thinks of this power source. Hydropower systems such as Hoover Dam, the powerhouse for Las Vegas and parts of California, function by backing up a river within a canyon to create a deep, slow-moving body of water behind a concrete dam. The force of water being let out through the dam, either at a constant rate or at certain times of the day or seasons of the year, generates electricity that is sent to remote regions via power lines. Other large dam systems generate energy by moving water between different elevations within a multi-dam system.
However clean, such hydroelectric systems are not truly environmentally-friendly. Large-scale hydropower has serious consequences for native species, local lifestyles, and landscape. Large dams in the Pacific Northwest have hindered salmon migrations and adversely affected the salmon population, archeologically and anthropologically valuable canyons have been flooded along the Colorado River in the West, and hundreds of residents have been displaced by the flooding that follows the building of a large dam. The following environmental effects can have drastic, documented consequences upon river species and ecosystems:
- disruptions to river water temperature and composition,
- barring species' migration routes, and
- changes to natural river flow and intensity (including peak flood seasons).
The Low-Impact Alternatives
Green Energy Ohio and many other environmental groups around the nation advocate small-scale hydropower, or systems under 30 megawatts. Related systems are mini hydro and micro hydro home-scale systems, which produce up to 1 megawatt and 100 kilowatts, respectively. An average micro-hydro turbine can produce anywhere from 1 kWh (1,000 watt-hours) to 30 kWh per day. Without particular energy-efficiency measures, the average American home uses 10-15 kWh of energy per day.
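The arithmetic behind that comparison is simple: because a hydro turbine runs around the clock, daily energy is continuous power times 24 hours. A minimal sketch (the 0.5 kW turbine is a hypothetical example, not a figure from the text):

```python
def daily_energy_kwh(power_kw):
    """Hydro generates continuously, so daily energy is power x 24 hours."""
    return power_kw * 24

# A hypothetical 0.5 kW micro-hydro turbine running all day:
print(daily_energy_kwh(0.5))  # 12.0 kWh/day, within the 10-15 kWh
# that an average American home uses per day
```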
The Green-e certification program analyses different types of hydropower. Generally, only small hydro (dams 30 megawatts or less) and LIH facilities qualify. The Low Impact Hydropower Institute certifies dams as truly low-impact by studying the total environmental impacts of a particular hydropower dam, and has created a Low Impact Hydropower Certification program to identify and reward efforts by dam owners to minimize the impacts of their dams. The program certifies hydropower facilities whose impacts are low compared to other hydropower facilities, based on eight environmental criteria:
1. river flows
2. water quality
3. fish passage and protection
4. watershed protection
5. threatened & endangered species protection
6. cultural resource protection
7. recreation
8. facilities recommended for removal
What it looks like
Hydro systems have smaller battery banks than their output would suggest. With a constant flow of water, these systems only need to compensate for the occasional demand stress on the system. Conversely, solar and wind systems must store power for windless or cloudy weather.
How it works
Standard micro-hydro systems are made of the following key components:
- Penstock, the pipeline carrying water from source to turbine.
- Turbine, which transforms the energy of the flowing water into rotational energy.
- Alternator or generator, which transforms the energy of motion into electricity.
- Regulator, which either controls the electricity produced by the generator, or reroutes excess energy.
- Wiring delivering the electricity to either the power grid, home, or storage batteries.
- Batteries (optional) to store the electricity.
- Inverter (on DC-producing systems) to convert the electricity to the standard AC current used in the home.
A key component of the system's functionality is the height and pressure of falling water, known as "head." Head is a function of the height of the fall and the characteristics of the channel, and can be calculated by a professional, or on your own using the techniques mentioned in the "Steps to Micro-Hydro" section. The higher the head, the less water is needed to produce power, and the smaller, cheaper, and more efficient the equipment that can be used in your system. A "high head" site typically has a height of over 10 feet, whereas shorter drops are referred to as "low head." Sites with drops of less than 2 feet may not support a system.
The power available at a site is the product of the flow volume and head. Flow volume is measured in cubic feet per second (cfs) or gallons per minute (gpm), one cubic foot equaling 7.48 gallons.
Run-of-the-river plants can be designed using large flow rates with low head or small flow rates with high head.
Drop-in-the-creek generators are options for sites with low head (1 - 2 feet) and high volume.
What it can do
Micro-hydro, like other forms of renewable energy, is more environmentally benign, and can be more reliable, than traditional sources of power. Systems that hook into a home or business provide a back-up source of power during outages. Micro-hydro production also accumulates nicely over time, as power is generated 24 hours a day under any weather conditions. Homestead systems often produce enough power to run several refrigerators and space heaters.
How much it costs
On the right site, a hydropower system can cost as little as one-tenth the cost of a photovoltaic (solar power) system producing the same amount of power.
Steps to Micro-Hydro
These steps are intended to serve as a guideline only; you may find that your particular area has more, fewer, or different requirements.
1. Determine legal restrictions and requirements in your area. Use and alteration of water systems are regulated in many areas of the country, regardless of whether or not they are on private property. Before you begin, you must consider how your work will alter wildlife habitat on the site, near the stream, and downstream of your installation. Will you be diverting water flow? Will you be disrupting habitat? Depending on the land you are considering, you may need permits from the US Fish and Wildlife Service, the US Forest Service or Bureau of Land Management (if the land is federally owned).
Governmental points of contact for other permitting information and building restrictions are your county engineer, state energy office, Federal Energy Regulatory Commission and the US Army Corps of Engineers.
2. Determine head. There are two types of head to consider, gross (or 'static') and net (or 'dynamic') head. Gross head is the vertical distance between the penstock (the pipe that takes water from the stream) and where the water leaves the turbine. Net head is the gross head, minus pressure losses due to friction and turbulence. Minimizing length of, and turns in, the pipeline can prevent some losses to pressure.
To determine gross head, you can hire a professional to survey the site, or try the hose method. With the hose method, two people work together to stretch a hose down the stream from the proposed penstock (intake pipeline) site. With a funnel attached to the hose, Person A holds the funnel underwater to fill the hose. Person B lifts the downstream end of the tube until water stops flowing from it. The gross head is the vertical distance between Person B's end of the tube and the surface of the stream. This process is repeated for the length of the stream between the proposed penstock site and the proposed turbine site. The sum of the measurements is a rough estimate of the gross head for the micro-hydropower site.
3. Determine water flow. Your professional site surveyor will be able to calculate this information for you. Otherwise, the US Geological Survey posts surface water flow data on their web site, and the county engineer, local water supplier, or flood control authorities may also be helpful in gathering this information. On small streams, you may be able to measure the flow yourself using the bucket method. Dam the stream to divert the water flow into a bucket, and time the rate at which the bucket fills. Divide the number of gallons filled by the time to determine gallon-per-minute (or second) flow rate. One cubic foot per second is equivalent to 448.8 gallons per minute.
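The bucket-method arithmetic can be sketched in a few lines (the 5-gallon bucket and 4-second fill time are hypothetical measurements, used only as an example):

```python
def flow_rate_gpm(gallons_filled, seconds_to_fill):
    """Bucket method: gallons collected, converted to gallons per minute."""
    return gallons_filled / seconds_to_fill * 60

def gpm_to_cfs(gpm):
    """Convert using 1 cubic foot per second = 448.8 gallons per minute."""
    return gpm / 448.8

# Hypothetical measurement: a 5-gallon bucket fills in 4 seconds
gpm = flow_rate_gpm(5, 4)   # 75.0 gpm
cfs = gpm_to_cfs(gpm)       # about 0.17 cfs
```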
4. Determine power. The potential power for your site can be determined by multiplying:
Gross Head (feet) X Flow Rate (cubic feet per second) X System Efficiency (decimal value) X 0.085 (for calculations in American units) = Power (kW)
System efficiency ranges from 40% to 70%, with an average efficiency rating of 55% (0.55). Don't forget to consider seasonal deviations in your flow rate!
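Step 4 translates directly into a small calculator. This is a sketch of the formula above; the 20-foot, 2-cfs site used as input is hypothetical:

```python
def micro_hydro_power_kw(gross_head_ft, flow_cfs, efficiency=0.55):
    """Power (kW) = head (ft) x flow (cfs) x efficiency x 0.085,
    in American units. Efficiency typically runs 0.40-0.70;
    0.55 is the average cited in the text."""
    return gross_head_ft * flow_cfs * efficiency * 0.085

# Hypothetical site: 20 ft gross head, 2 cfs flow, average efficiency
print(micro_hydro_power_kw(20, 2))  # about 1.87 kW
```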
5. Determine economic feasibility. One of the simplest ways to determine whether or not the project is economically feasible is to add up costs (developing, operating, and maintaining the site over the life of the system), and divide the amount by the system's productive capacity on your site. Compare this price-per-watt number with the costs of power from another source. Net metering may also be an option for you, whereby excess power produced by your system would be fed back into the utility grid and credited to your account.
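The feasibility arithmetic in step 5 amounts to a single division. Here is a sketch with entirely hypothetical cost and capacity figures:

```python
def price_per_watt(lifetime_costs, capacity_watts):
    """Total cost of developing, operating, and maintaining the site
    over the system's life, divided by its productive capacity."""
    return sum(lifetime_costs) / capacity_watts

# Hypothetical figures: $5,000 development + $1,000 operation
# + $1,500 maintenance over the system's life, for a 1,870 W system
print(price_per_watt([5000, 1000, 1500], 1870))  # about $4.01 per watt
```

Comparing this price-per-watt number against the equivalent figure for grid power or another renewable source is what decides whether the project pencils out.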
Low-impact hydro and Ohio
The City of Columbus operates O'Shaughnessy Dam, a low-impact hydropower installation on the Scioto River, at a head of 5.5 meters. The installation consists of two turbines spinning at 64.3 rpm. Each turbine has an output capacity of 25.9 megawatts. Photo below from the 2003 Central Ohio Solar Tour as participants view base of O'Shaughnessy Dam near turbine room.
US Department of Energy's Renewable Energy Clearinghouse: 1.800.363.3732 or www.eren.doe.gov.
Contributors and Works Consulted:
"Divided Over Dams." The American Experience, PBS, http://www.pbs.org/wgbh/amex/hoover/sfeature/damdivided.html.
Katya Chistik, Project Coordinator, Green Energy Ohio.
Ron Feltenberger, formerly Vice President and Hydro System Designer, Universal Electric Power. |
Mankind hasn’t had to deal with much in the way of deadly meteors over the years, but on the few occasions when one of these pesky space rocks does target Earth, it often self-destructs in the air before it even reaches the ground. For years, researchers have puzzled over why that happens, but a new study published in Meteoritics & Planetary Science suggests the first concrete explanation.
Using a recent meteor explosion event — the rock that detonated in the sky above Chelyabinsk, Russia — as an example, scientists attempted to explain why the massive object seemed to cut its life short before striking the ground. Using computer simulations to model the incoming path of the large meteor, the data revealed that it wasn’t necessarily the friction of the upper atmosphere that caused the explosion, but rather the pressure difference between the air in front of the rock and the air behind it.
“There’s a big gradient between high-pressure air in front of the meteor and the vacuum of air behind it,” Jay Melosh, a professor with Purdue University and co-author of the study, explains. “If the air can move through the passages in the meteorite, it can easily get inside and blow off pieces.”
With the contrasting pressures surrounding the rock, and air seeping into the rock as it careened towards the ground, even a relatively strong chunk of rock would grow unstable and begin to fall apart. Given the speed at which meteors come flying in, that rapid disintegration takes the form of an explosion, and the resulting shockwave becomes the real damage-dealer for us here on the surface.
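A rough back-of-the-envelope calculation makes the scale of that pressure gradient concrete. The speed and air-density figures below are assumed round numbers for a Chelyabinsk-like entry, not values from the study:

```python
def ram_pressure_pa(air_density_kg_m3, speed_m_s):
    """Stagnation ("ram") pressure on the leading face: rho * v^2."""
    return air_density_kg_m3 * speed_m_s ** 2

# Assumed values: ~19 km/s entry speed and ~0.018 kg/m^3 air density
# near 30 km altitude
p = ram_pressure_pa(0.018, 19_000)
print(p / 1e6)  # roughly 6.5 MPa in front versus near-vacuum behind,
# a difference comparable to the strength of a fractured stony meteoroid
```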
This might sound like a preferable outcome for any creatures that call Earth home, but that’s only if you’ve allowed images of asteroid strikes from disaster movies to cloud your judgment. In reality, a fast-moving space rock exploding in the air above a city can cause just as much, and in some cases more, damage than a ground strike. The meteor that detonated over Chelyabinsk exploded with the force of a small-scale nuclear weapon, and injuries numbered in the hundreds.
The study is also quick to note that this type of airborne disintegration is only likely to happen with smaller objects, while particularly large and strong “planet killer” rocks will almost certainly remain unaffected. |
Yellowstone’s relief is the result of tectonic activity (volcanism and earthquakes) combined with the erosional actions of ice and water. Most of the park consists of broad volcanic plateaus with an average elevation of about 7,875 feet (2,400 metres). Three mountain ranges, each aligned roughly north to south, protrude into the park: the Gallatin Range in the northwest, the Absaroka Range in the east, and the northern extremity of the Teton Range along the park’s southwestern boundary. The tallest mountains in the park are in the Absarokas, where many summits exceed elevations of 10,000 feet (3,050 metres). The range’s Eagle Peak, on the park’s boundary in the southeast, is the high point, reaching 11,358 feet (3,462 metres). Aside from its rugged mountains and spectacular deep glacier-carved valleys, the park has unusual geologic features, including fossil forests, eroded basaltic lava flows, a black obsidian (volcanic glass) mountain, and odd erosional forms.
Yellowstone is also known for its many scenic lakes and rivers. The park’s largest body of water is Yellowstone Lake, which, having a surface area of 132 square miles (342 square km) and lying at an elevation of 7,730 feet (2,356 metres), is the highest mountain lake of its size in North America. The West Thumb area—a knoblike protrusion of the lake on its west side—was formed by a relatively small eruption in the caldera about 150,000 years ago. The next largest lake, Shoshone Lake, lies in the caldera southwest of Yellowstone Lake.
The park’s most extensive drainage system is that of the Yellowstone River, which enters at the southeast corner, flows generally northward (including through Yellowstone Lake), and exits near the northwest corner of the park. The river’s Yellowstone Falls, located in the north-central part of the park, descend in two majestic cascades: the Upper Falls, with a drop of 114 feet (35 metres), and the Lower Falls, with a drop of 308 feet (94 metres). The falls constitute the western end of the spectacular Grand Canyon of the Yellowstone. There the river has cut a gorge 19 miles (30 km) long, between 800 and 1,200 feet (240 and 370 metres) deep, and up to 4,000 feet (1,200 metres) wide. The walls of the canyon, sculpted from decomposed rhyolite (volcanic rock), are brilliantly coloured in hues of red, pink, yellow, buff, lavender, and white. Other streams of note include the Snake River, which rises and flows along the park’s southern boundary before joining the Lewis River and heading south; and the Gallatin and Madison rivers, both of which rise in and flow through the northwestern part of Yellowstone before exiting the park and eventually forming (along with the Jefferson River) the Missouri River in southern Montana.
Yellowstone’s principal attractions, however, are its some 10,000 hydrothermal features, which constitute roughly half of all those known in the world. The region’s deeply fractured crust allows groundwater to seep down to where it makes contact with the magma. The superheated and mineral-rich water then returns to the surface as steam vents, fumaroles, colourful hot pools, mud cauldrons, paint pots, hot springs and terraces, hot rivers, and geysers. It is thought that the constant stream of minor tremors that shake the region acts to keep open the myriad cracks and fissures in the ground that might otherwise become clogged with minerals precipitating out of the hot water as it cools. Of the park’s more than 300 geysers—greater than half of the world’s total—many erupt to heights of 100 feet (30 metres) or more. Old Faithful, in west-central Yellowstone, the most famous geyser in the park, erupts fairly regularly, roughly every 90 minutes with a range of reasonably predictable variability.
Many of Yellowstone’s noted geysers and other thermal features are located in the western portion of the park, between Old Faithful and Mammoth Hot Springs some 50 miles (80 km) to the north. The greatest concentrations are in the Upper Geyser, Midway Geyser, and Lower Geyser basins that extend northward for about 10 miles (16 km) from Old Faithful. These include Giantess Geyser near Old Faithful, with a two- to six-month wait between eruptions, and the deep-blue Morning Glory Pool just to the northwest in the Upper Geyser Basin; Excelsior Geyser in the Midway Geyser Basin, which rarely erupts but discharges thousands of gallons of boiling water per minute; and the Fountain Paint Pots in the Lower Geyser Basin, with pink, plopping mud geysers, fumaroles, and a blue hot-spring pool.
Norris Geyser Basin lies roughly midway between the southern hydrothermal area and Mammoth Hot Springs. It is noted for having some of the hottest and most acidic hydrothermal features in the park and also includes Steamboat Geyser, which can throw water to heights of 300 feet (90 metres) and higher and is the world’s highest-erupting geyser. Mammoth Hot Springs consists of a broad terraced hillside of travertine (calcium carbonate) deposited there by dozens of hot springs. Among its notable formations are the multicoloured Minerva Terrace and Angel Terrace, each of which consists of dazzling white rock that in many areas is tinted by microorganisms on the rock.
Acer saccharum, also known as “Sugar Maple”, is a deciduous tree species native to the St. Lawrence Lowlands area. It is usually found in the northeastern United States and southeastern Canada. The sugar maple is very dominant and broadly distributed in the northern hardwood forests of this region (Lovett & Mitchell, 2004). It usually grows to 35m tall with a trunk diameter of around 0.6m. Its bark changes from light gray and smooth to dark grey and furrowed as it ages. This species forms clusters of green-yellow flowers in the spring which produce red-brown paired samara fruits of 2-2.5cm in length. The dark green leaves usually grow to 5-11cm in length and width. They have 5 lobes which are shouldered by pointed teeth. This species is sometimes considered an ornamental tree because of its striking fall foliage (Westing, 1966). Its leaf colour changes from yellow to red in autumn. The sugar maple’s name comes from the properties of the species’ sap. The Native Americans discovered that boiling a sugar maple’s sap produces a syrup, due to its high concentration of sugar. This knowledge was passed on to the European settlers and has since been a tradition in this area.
Sugar maples, Acer saccharum, are an eastern maple species inhabiting mixed deciduous forests. They range from the Maritimes, excluding Newfoundland, to Ontario. They are distributed as far north as the most northern part of Quebec and go as far south as Louisiana, USA (USDA). Depending on where it is distributed, the sugar maple can tolerate different elevations (Cornell University).
Sugar maples, based on their geographical range, can tolerate a wide temperature spectrum (Cornell University). The temperature range which is best for optimal growth is known as a temperate climate (Ressources Naturelles Quebec). In the summer, April/May till August/September, the average temperature in a temperate climate will range from 16°C to 27°C, while the maximum summer temperature can go from 32°C to 38°C (Cornell University). In the winter, October/November till March/April, the average temperatures are -18°C to 10°C, and the minimum winter temperature can drop to -40°C (Cornell University). The main concern regarding the health of maple trees, especially in silviculture, is the effect of frost, which can damage young maple buds and hinder their growth (Cornell University). It has been recorded that the first frost occurs between the months of September and November and the last frost occurs from March to May (Cornell University).
The annual precipitation of a temperate climate has an average of 51 cm to 127 cm of rain, whereas the average snow precipitations are 2.5 cm to 381 cm (Cornell University). In more southern regions where the probability of heavy rainfall is much higher than northern regions, the amount of rain precipitation can reach up to 203 cm (Cornell University).
The sugar maple grows on low-pH soils that are also low in calcium and magnesium and high in aluminum. To obtain a good amount of calcium, the sugar maple can extend its roots to collect the element from the lower horizons. Forests that contain a large number of sugar maples tend to have a large quantity of nitrogen in the soil, as nitrate leaches into the soil from the sugar maple trees. In addition, the tree plays a large role in nitrogen cycling in forest ecosystems. The tree is also called a “hydraulic lifter”, which means that its roots can raise water located in the deep soil and bring it to the surface horizon so that the whole soil can maintain its moisture. Sugar maples also form mycorrhizae, a symbiotic association between a fungus and the roots of the tree. This symbiosis increases the quantity of nutrients available in the soil (Lovett & Mitchell, 2004).
There are many reasons and different ways of managing the sugar maple in a forest. The tree can be used for the production of maple sap or logwood. Landowners must first determine if the soil is suitable for the tree. For the sugar maple, the ideal soil would be one that has moderate to good drainage as well as fine texture. If the landowner wants to use the sugar maple for the production of sap, the trees need to be big, have large crowns, and have branches that extend close to the ground. In order to have this, the land needs to be free from competition for sunlight so the tree can grow fast. However, if the landowner’s intention is to produce logwood, the trees need to be tall and straight. Therefore there must be competition for sunlight. The shading will also lead to the shedding of the lower tree limbs, which will result in clearer, more valuable lumber (Forest Management, 2002).
Sugar maple seedlings have to face a number of different insects and diseases throughout their development. The health and survival of young trees may be greatly affected depending on the extent of damage that they face. According to Gardescu (2003), the types of damage that sugar maple seedlings can suffer are "1) herbivory of the stem or cotyledons, by caterpillars, slugs, and beetles; 2) damage associated with pear thrips; 3) leaf herbivory by caterpillars and beetles; 4) white flecking on leaves, caused by leafhoppers; 5) phloem-feeding insects (with no visible seedling damage); 6) leaf fungal diseases; and 7) other kinds of damage, such as stem breakage or burial" (Gardescu, 2003). The insects which dominate all other types of insects present on young sugar maples are the pear thrips. These insects are less than 1.5 mm in length and feed on emerging flowers and leaves. They also lay their eggs in the same areas (Gardescu, 2003). Pear thrips also attack other types of maples but cause the most damage on sugar maples. The insects return every year. Some signs of pear thrips damage are unusually small leaves and yellow or white spots on leaves (Pear Thrips, 2011).
Lovett G. M. & Mitchell M. J. (2004) Sugar maples and nitrogen cycling in the forest of eastern North America. Frontiers in Ecology and the Environment, 2, 81-88.
Westing A. (1966) Sugar Maple Decline: An Evaluation. Economic Botany, 2, 196-212.
Gardescu, S. (2003). Herbivory, Disease, and Mortality of Sugar Maple Seedlings. Northeastern Naturalist, 10(3), pp. 253-268.
The Life of a Sugar Maple Tree. The Research Center Cornell University, N.d. Web. 10 October, 2012.
USDA. Sugar Maple. Natural Resources Conservation Center. USDA. N.d. Web. 10 October, 2012.
Ressources Naturelles Quebec. Giants of the Plant World. Highlights on Forests. Ressources Naturelles Quebec. N.d. Web. 10 October, 2012.
Pear Thrips. (2011, October 27). Retrieved October 21, 2012, from Natural Resources Canada
“Forest Management.” Maple Info. Departments of Forests Parks & Recreation.2002. Web. 21 Oct. 2012 |
There is no inherent connection between a symbol and what it means; there is a lack of correlation between the linguistic form (sound) and meaning.
the multiplication symbol (x) does not resemble its meaning.
the relationship between the symbol and what it stands for is thus arbitrary.
Telephone (icon) is to indicate that there is a public phone nearby.
The relationship between the icon and what it stands for is thus non arbitrary.
However, the icon and the real thing will never look exactly alike; if they did, the icon would simply be the referent itself.
In context of language
The word table does not look or sound like the actual 4-legged wooden thing in our room (a table). In fact, it is known as zhuozi in Chinese. There is no logical explanation for why it was so named; the relationship between the word table and its meaning is arbitrary.
Although words are largely symbolic, in the context of human language, words that mimic natural sounds are actually iconic. These words do resemble actual sounds to some extent. For example, oink sounds like the sound a real pig makes.
the property of displacement states that we are able to use language to talk about things not in our immediate surroundings or not occurring at the moment of speaking.
This means that we are able to use language to talk about things in 3 aspects
1) in space
animals cannot talk about places that are not here, but humans are able to talk about things distant in place.
2) in time,
humans are able to talk about things in the past, in the present as well as in the future
3) in reality
humans are able to talk about things that are not in reality; things that are abstract, fiction.
Why the importance of displacement
Displacement is built on top of a social communication system based on facial expressions, gestures and sounds.
However, it is an aspect of communication that cannot be replaced by gestures, grunts, facial expressions, body language and other non-linguistic communication.
It involves some way of symbolizing, by creating abstract referents.
- we're able to talk about mythical creatures like dragons and fairies (whose existence we are not sure about)
- a person can relate what he did last summer in Britain even though now is not summer and he is not in Britain
the ability of the human language to create new words and utterances
- human language users are not limited to reproducing sentences they have heard but can also produce and comprehend novel utterances.
-thus humans are able to communicate a wide variety of information.
- a finite number of linguistic units and rules are capable of yielding an infinite number of grammatical utterances.
- thus humans are able to communicate facts, opinions and emotions regardless of whether the event is in the past, present or future.
Pidgin is a contact language formed to enable communication between inhabitants of a country with no common language
- humans are constantly making new words as inventions are created and new ways of living emerge.
1) I invented a new type of candy and decided to give it the brand name 'sweeeeeeeetiez'
2) the above sentence may never have been used before; it is a completely new utterance which has just been composed by me.
3) the age of computers brought new words like mouse, Facebook, blog, Friendster
Comparison to animals
For animals, every signal has a fixed reference which means that it can only refer to one idea and meaning cannot be broadened.
Bees, for example, have a fixed set of signals. This leaves them unable to communicate the concept of vertical distance.
Language is acquired, not inherited the way humans get genes from their parents.
Language is passed on from one language user to the next, consciously or subconsciously.
- Eg: If we have Chinese parents, we will inherit physical features like black hair. However, it doesn't mean we'll grow up speaking one of the Chinese languages. If we grew up in Great Britain instead, we would most likely grow up speaking British English
1) Language is culturally transmitted to the extent that learners acquire linguistic competence on the basis of the observed linguistic behavior of others.
- Eg:Thai children grow up speaking Thai.
2) cultural transmission leads to cultural evolution.
3) the distinctive structural characteristics of human language are a consequence of its cultural transmission.
- Eg: French children learn to put adjectives after the noun but English children learn to put adjectives before the noun
Animals are different: a German sheepdog born in Germany will make the same sounds as one born in Thailand.
Language is organised in 2 layers, 1) layer of sounds 2) layer of meaning
the word cat is both a symbol meaning a feline animal and a sequence of 3 sounds : /k/ /ae/ /t/
Duality of patterning
***2 levels of minimal units
1) alphabet for writing and phonemes for speech which do not have meaning on their own
2) the level where the meaning emerges as a result of combination of the units from level one.
Important property to help make use of the existing inventory of sounds in language to form countless words simply by manipulating the combination of sounds.
In English, there are distinct sounds like /p/, /ae/, /t/
However we can combine these sounds to form the word pat /paet/
the same sounds can also be combined as apt /aept/ and tap /taep/
Different from animals
cows say moo, can never say oom.
dogs can say bow-wow, can never say wow-bow.
-sounds in a language are meaningfully distinct and separable.
- language can be broken into discrete units ( words and phonemes )
the word mess consists of 3 separate sounds and phonemes /m/ /e/ /s/
-there is no gradual continuous shading from one sound to another in linguistic system although there may be a continuum in the real physical world
-consists of small units, words which can be combined into larger units (sentences)
In English, /t/ and /d/ are meaningfully distinct.
-helps us to differentiate between two words tin and din which refer to 2 different things.
Phonemes can be differentiated easily using minimal pairs where the words sound very similar but have only 1 differing sound.
- /paet/ /paed/ /tIn/ /dIn/
small units (words) combine to be larger units , sentences |
Botanical Words Alphabetical List - CR
CREEPER: A plant that grows along the surface of the soil or any other surface such as a wall or fence, by sending out rootlets from the stem. (i.e. Ivy -Hedera, Virginia Creeper - Parthenocissus quinquefolia, Trumpet Creeper - Campsis radicans)
CREEP SOIL: The downward, slow, irregular mass movement of sloping soil.
CRENATE: Toothed with shallow, rounded scallops, as the edges of a leaf. When the scallops have smaller ones upon them, the leaf is said to be doubly crenate.
CRENULATE: Minutely crenate.
CREST: Also called fasciation; a mutation that results when the growing point of a plant forms a long line, rather than a single point. In botanical terms, it is usually signified by cristate.
CRISPATE: Curled or rippled at the edge such as the leaves of Pittosporum eugenioides. Also crisped.
CROCKS: Broken pieces of clay pot, used to cover drainage holes of pots in order to provide free drainage and air circulation to the root system and to prevent the medium (i.e. soil) from washing down the holes.
CROSS: 1. To produce a hybrid plant by cross-fertilizing individuals of different varieties or species. 2. A hybrid produced by cross-fertilization.
CROSS-FERTILIZATION: The fertilization of the ovules of one flower by the pollen of another, either between plants of the same species or between individuals of different species, resulting in the production of a hybrid.
CROWN: The basal part at soil level where roots and stem join and from where new shoots are produced; the tops of the rootstocks.
CRUCIATE: Having leaves or flowers in the shape of a cross with equal arms, as certain members of the Mustard family.
CRUMB STRUCTURES: The porous granular structure in soil.
CRUST: A thin, dry, hard layer of soil that forms on the surface of many soils when they are exposed to excessive heat.
CRUSTACEOUS: Having a hard, brittle texture.
CRYPTOGAM: A plant, such as a fern, which reproduces by means of spores rather than seeds.
The sun sends out a constant flow of charged particles called the solar wind, which ultimately travels past all the planets to some three times the distance to Pluto before being impeded by the interstellar medium. This forms a giant bubble around the sun and its planets, known as the heliosphere. NASA studies the heliosphere to better understand the fundamental physics of the space surrounding us - which, in turn, provides information regarding space throughout the rest of the universe, as well as regarding what makes planets habitable.
The solar wind is a gas of charged particles known as plasma, a state of matter governed by its own set of physical laws, just as the more common solids, liquids, and gases are. As the solar wind sweeps out into space, it creates a space environment filled with radiation as well as magnetic fields that trail all the way back to the sun. This space environment is augmented by interstellar cosmic rays and occasional concentrated clouds of solar material that burst off the sun, known as coronal mass ejections.
This complex environment surrounds the planets and ultimately has a crucial effect on the formation, evolution, and destiny of planetary systems. For one thing, our heliosphere acts as a giant shield, protecting the planets from galactic cosmic radiation. Earth is additionally shielded by its own magnetic field, the magnetosphere, which protects us not only from solar and cosmic particle radiation but also from erosion of the atmosphere by the solar wind. Planets without a shielding magnetic field, such as Mars and Venus, are exposed to such processes and have evolved differently.
NASA's studies of the heliosphere include research into how the solar wind behaves near Earth; what causes and sustains magnetic and electric fields around other planets; how the heliosphere interacts with the interstellar medium; what the boundaries of the heliosphere look like; the origin and evolution of the solar wind and of interstellar cosmic rays; and what contributes to the habitability of exoplanets.
The field is, therefore, intensely cross-disciplinary. Heliospheric researchers often work hand in hand with planetary scientists, astrophysicists, astrobiologists, and space weather researchers.
NASA heliophysics missions contributing to heliospheric research are: the Advanced Composition Explorer; NOAA's Deep Space Climate Observatory; the Interstellar Boundary Explorer; the Solar Terrestrial Relations Observatory; Voyager; and Wind.
Additionally, instruments on NASA missions such as Maven and Juno observe the space around Mars and Jupiter, respectively, and contribute to heliospheric research.
While the cod fishery dominates Newfoundland and Labrador history, another type of fishery was just as important to many of the ships travelling to Newfoundland and Labrador in the 16th century: the whale fishery.
The Basque whalers from the coasts of southwestern France and northern Spain were at the forefront of the whale fishery. Seeking these huge creatures, the Basque established stations at Red Bay in southern Labrador as a base for their fishery.
Whales provided a source of oil for fuel and lamps, and the oil was in high demand in 16th century Europe.
For much of the 1500s, Red Bay was home to whaling stations that harvested the large local population of whales to supply the European market for oil. The Basque people, long renowned for their maritime tradition, built a unique enterprise in Labrador that shows there was more than cod along this province's shores.
Launching an estimated 15 ships and 600 men each season, the Basque captured thousands of whales off the coast of Labrador and in the Strait of Belle Isle from 1530 to 1600. Due to a declining whale population, hostile relations with natives, and conflict between Spain and England, the Basque left Red Bay at the beginning of the 17th century.
15 years of archaeological work, starting in 1978, including underwater archaeology, produced a number of significant finds, including the remains of 20 whaling stations along the shores of Red Bay and the wrecks of three galleons and several smaller vessels. These wrecks include the galleon San Juan, which is one of the oldest and best-preserved shipwrecks found in the Americas. These finds indicate not only that the Basque held a presence in Labrador but that their ventures across the Atlantic were a major economic endeavour and attracted significant numbers of ships and whalers.
With original artifacts, tours, and a detailed interpretation centre, Red Bay is an excellent place to meet these early visitors to the province and to discover that the early fisheries were about more than cod.
Define Normal Flora:
Bacteria (including mycoplasma), yeast, and protozoans which inhabit the skin and mucosa of healthy persons
10^13 cells; 10^14 bacteria
What are three characteristics of normal flora:
1. Resident -- what you expect
2. Transient -- edging towards pathogenic, high turnover rate (e.g. S aureus)
3. Usually commensal -- (you are food and shelter!)
Why is it that highly invasive bacteria cause diarrhea in adults but not in newborns?
This exhibits the idea that specificity of receptors on cells changes with age --- newborns have yet to develop the associated receptor
What is bacterial interference?
1. Bacteria compete for binding sites
2. Compete for nutrients
3. Elaborate antibacterial factors vs. pathogens
NF compete with pathogenic bacteria
What benefit do NF provide?
1. Metabolism in the GI tract (breaks down nutrients for absorption)
2. Stimulation of GI and other mucosal immune sites, cells, systems (i.e. priming of innate and adaptive immunity)
3. Development of cross reactive, protective antibodies on surface of bacteria
Loss of normal flora often leads to what?
1. Sites for pathogen adherence
2. Absence of interference allows replication after adherence
3. Skin, respiratory, GI and GU mucosae are all susceptible to adverse effects
INCREASED SUSCEPTIBILITY TO INVASION BY PATHOGENS
How can normal flora be potential pathogens?
If they penetrate the mucosa that they are living in --- can change the local environment -- causing disease
EX: adherence to perineal and vaginal surface --> UTI
Skin trauma introduces bacteria -- especially transient NF such as S. aureus -- to new adherence sites.
What are the 4 determinants of NF?
2. Menstrual status
3. Environment (temperature, humidity)
4. Diet/nutritional state
What NF is common to the nose (skin and mucosa)?
1. Corynebacterium spp
2. S. epidermidis (resident)
3. S. aureus (transient)
What NF is common to the oral mucosa?
1. alpha hemolytic streptococci
2. gram negative anaerobic bacilli, facultative cocci, actinomycetes (dental plaques), protozoans, and candida (fungus) **** more likely to cause problem
NF of pharynx:
Alpha streptococci, gram negative anaerobic bacilli and cocci, gram positive anaerobic cocci, Haemophilus spp, actinomycetes
EXCEPT TRACHEA --- = sterile except in chronic lung disease; should not have bacteria below larynx except in smokers
NF of stomach?
Normal acid level --> very few -- primarily streptococci and lactobacillus (unlikely to infect, but may cause inflammation)
Low acid level --> MANY ORGANISMS -- higher density, increased yeast -- starts to look like the small bowel
There is increasing density of NF as you move more distally in the small bowel - True or False?
True --- enterococci, facultative gram negative bacilli, anaerobic bacteria the further you go
Perforation to distal bowel results in disease dominated by what?
Anaerobes --- (Anaerobic bacteria exist further down in bowel)
NF of the colon?
Favorable spot for anaerobic gram negatives and gram positives
10^11/g ... >150 identified strains
Anaerobic gram positive bacilli, anaerobic gram negative bacilli, facultative gram negative bacilli, enterococci
E. coli -- 2-4% -- high turnover
What is the NF of the vagina?
Vagina is most sensitive to need for NF to maintain mucosal health
Lactobacilli (pH <4.5 maintained)
Changes with time
Group B streptococcus during reproductive time = transient NF
What is the NF of the urethra?
--- small numbers of perineal and cutaneous organisms
Kept in check by normal urine flow
Urinary function impairment = PROBLEMO
When do NF become pathogens?
Most NF are intrinsically non-invasive in situ and in normal host.
Transient NF are more likely to be pathogenic with a little push.
1. Trauma to Mucosa
2. Bronchogenic aspiration
3. Perforation of GI mucosa or bowel
Trauma, Injury, and penetration of the oral mucosa are associated with what consequences?
1. Dental trauma --> hemolytic strep --> bacteremia, endocarditis
2. Penetration --> polymicrobial necrotizing infection and abscess
3. Injury (by another infectious disease)--> abscess, tonsilar
What happens with aspiration of oral flora?
Pneumonia (inflammation of the lungs) or lung abscess (destruction of tissue and replacement by white blood cells)
How can you mess up the NF?
Antibiotics -- decrease bacterial interference allowing:
1. Attachment of pathogens (respiratory mucosa, skin)
2. Replication of pathogens (GI)
3. Loss of protective pH (vagina)
4. Increased susceptibility to enteric pathogens like salmonella, etc.
5. The Clostridium difficile problem (the colon loses its ability to absorb water)
Together with rice and wheat, maize provides at least 30 percent of the food calories of more than 4.5 billion people in 94 developing countries. Maize is often consumed indirectly in the form of eggs, corn syrup, milk and cheese products, beef and pork, but is commonly a staple food in developing countries, providing food for 900 million people earning less than US $2 per day.
Higher demand for maize, lower yields expected
Today, maize is the most important food crop in Sub-Saharan Africa and Latin America, and is a key Asian crop. In Sub-Saharan Africa, maize is consumed by 50 percent of the population and is the preferred food for one-third of all malnourished children and 900 million poor people worldwide. As the world’s population increases and more people begin to include higher amounts of meat, poultry and dairy into their diets, demand for maize is expected to rise. By 2025, maize will be the developing world’s largest crop and between now and 2050 the demand for maize in the developing world is expected to double.
However, while consumption is expected to increase two-fold, yields are expected to decline – leading to higher global prices and malnutrition, poverty and hunger for those whose diets are heavily dependent on maize.
Across the maize-producing regions of Asia, Sub-Saharan Africa and Latin America the effects of climate change will be felt unevenly, in some cases causing catastrophic loss of yields due to heat stress or introducing novel challenges such as disease. This will stretch the ability of local and regional agricultural systems to cope, and produce great hardships for those farmers that are not given the support they need to adapt. Yield penalties are predicted to be especially strong in tropical and sub-tropical areas, affecting well over 90 percent of resource-poor maize farmers and consumers.
Two thirds of studies predict overall yield declines of over 10 percent by 2050, meaning that developing countries would have to increase maize imports by 24 percent at an annual cost of US $30 billion. In China, over 30.2 million hectares of prime agricultural land is dedicated to maize production. But even this is not enough. In 2011 China became a net importer of maize for the first time in 14 years and by 2015, China is expected to import 15 million tons of maize from the US alone. In 2010, Indonesia imported 1.6 million tons of maize and it is estimated that Indonesia imported 3.2 million tons in 2012. Japan – the world’s largest importer of maize – imports an estimated 16 million tons of maize annually.
By 2050, global maize consumption is expected to increase from 32 to 52 kilograms per person per year. For industrialized countries, maize shortages and declining yields mean increased prices. However, for developing countries, maize shortages result in increased malnutrition for children, higher rates of poverty for smallholder farmers and extended periods of hunger for families.
A valuable feed grain
Maize, either grain or silage, is a reference feed around the world, endorsed by animal farmers for 30 years. Positive nutritional and economic features (easy to grow, harvest and store) have made it a competitive product, which has helped lower the price of food staples such as meat and dairy products. Rapid increases in poultry consumption in Africa and developing countries is a major factor contributing to the increased use of maize for livestock feed.
Maize is the world’s number one feed grain, including the developing countries. It is used extensively as the main source of calories in animal feed and feed formulation. Maize gives the highest conversion of dry substance to meat, milk and eggs compared to other cereal grains, and is among the highest in net energy content and lowest in protein and fiber content. Animals like and eat it readily. Studies have shown that it is possible to breed maize fit for both human and animal consumption without compromising on traits such as yield.
Biofuels: potential opportunities
Prices for maize, wheat, rice and soybeans tripled between 2006 and 2008 as demand for grains to be used for fuel increased. The Food and Agriculture Organization (FAO) of the United Nations recognizes the potential opportunities that the growing biofuel market offers to small farmers and aquaculturers around the world and has recommended small-scale financing to help farmers in poor countries produce local biofuel.
However increased biofuel production is also criticized for its potential impact on food availability, as it is feared that rising demand for crop land will cause deforestation and grassland conversion. FAO statistics on crop production and land use in the period 2000 to 2010 show that the impact of biofuel expansion on land use has been limited. Other sources have caused more (and more permanent) loss of agricultural area, such as urbanization, infrastructure development, as well as tourism and even nature development. (Biofuel expansion and land use change, Biomass Research Report, Wageningen, July 2013)
Q. A block of mass `'m'` is sliding down a rough, fixed incline of angle `theta` with constant velocity. The coefficient of friction between the block and the incline is `mu.`
The force that the block exerts on the incline is:
D) `mu_k mgcostheta`
The forces acting on the block in the direction of incline are
`mgsintheta` and friction `muN` , where N is the normal force.
The forces acting on the block in the direction perpendicular to incline are
`mgcostheta` and N.
Since the acceleration is 0,
`mgsintheta-muN=0` and `mgcostheta-N=0` .
From here, `N=mgcostheta` and
`mgsintheta = muN = mumgcostheta`
So the friction coefficient must be `mu=sintheta/costheta=tantheta` .
Now consider the forces acting on the incline from the block. By Newton's third law, since there are two forces on the block from the incline (normal and friction), there are also two forces on the incline from the block, equal in magnitude and opposite in direction.
Thus, the force on the incline perpendicular to it equals `N=mgcostheta` .
The force on the incline parallel to it equals `muN = mumgcostheta` .
But since `mu = tantheta` , the parallel force is `tanthetamgcostheta=mgsintheta` .
The magnitude of NET force on incline from the block can be found as
`sqrt((mgcostheta)^2+(mgsintheta)^2) = mg` .
Choice B is correct.
A block of mass m is sliding down a rough, fixed incline of angle `theta` with constant velocity. The coefficient of friction between the block and the incline is `mu` .
The force that the block exerts on the incline is its own weight, mg, which acts in the downward direction. This can be resolved into two mutually perpendicular components, though their vector sum will still be `mg` .
Therefore, option B) has the correct answer. |
If you plan to use movies in your homeschool, a movie study is a great way to reinforce the story and any important concepts. Relate the work to the movie as much as possible to really drive home the topics you want your children to learn.
With this multi-subject, printable Anne of Green Gables Movie Study, you'll find:
- discussion questions to check for comprehension and for further discussion
- character-building worksheet about forgiveness, which touches on Bible verses and other religious concepts
- character study of Anne
- geography lesson worksheet on Prince Edward Island |
Originating Technology/NASA Contribution
Johnson Space Center, NASA’s center for the design of systems for human space flight, began developing high-resolution visual displays in the 1990s for telepresence, which uses virtual reality technology to immerse an operator into the environment of a robot in another location. Telepresence is used by several industries when virtual immersion in an environment is a safer option, including remote training exercises and virtual prototyping, as well as remote monitoring of hazardous environments. Microdisplay panels, the tiny screens that comprise the visual displays for telepresence, are also used in some electronic viewfinders for digital video and still cameras.
In 1993, Johnson Space Center granted a Small Business Innovation Research (SBIR) contract to Displaytech Inc., based in Longmont, Colorado, and recently acquired by Micron Technology Inc., of Boise, Idaho. Under Phase I of this contract, Displaytech began developing miniature high-resolution displays based on its ferroelectric liquid-crystal-on-silicon (FLCOS) technology. Displaytech proposed that pixels could be made small enough to fit a complete high-resolution panel onto a single integrated circuit.
Displaytech first determined how to make a panel that could reproduce grayscale using only standard complementary metal-oxide-semiconductor (CMOS) logic circuitry, which just recognizes binary values (such as a “0” for black and a “1” for white) and was not well suited for subtle shades of gray. Dr. Mark Handschy, Displaytech’s chief technology officer, explains the company perfected time-based grayscale techniques in a Phase II follow-on NASA contract: “Because our ferroelectric liquid crystal material can switch faster than the eye can follow, a sequence of displayed black and white images is averaged by the eye into a single grayscale image.”
For FLCOS panels to work well, Handschy explains, they need a smooth and shiny wafer top surface. Without this, the pixel mirrors, which form in the last metal layer on the semiconductor wafer, scatter and absorb light, resulting in a dim appearance. “The Phase II of our NASA SBIR came at a very opportune time,” Handschy says. “We were able to have an SXGA [super-extended video graphics array] CMOS backplane we’d designed under the NASA project using one of the first commercially available CMP silicon processes.” Chemical mechanical planarization (CMP) is a special technique of polishing semiconductor wafers to allow more metal layers—and smoother integrated-circuit surfaces—and was one of the factors that led to Displaytech’s success.
Another important development during the mid-1990s was the introduction of efficient blue light-emitting diodes (LEDs). Displaytech took these bright blue LEDs and combined them with red and green LEDs to illuminate its panels, rapidly sequencing through the color LEDs to create the illusion of different hues as they reflect off the panels. “In this SBIR program, we developed grayscale and color for microdisplay panels,” Handschy says, “And that was a first for us. We’ve since leveraged that into a line of products.” |
Although the shell sort algorithm is significantly better than insertion sort, there is still room for
improvement. One of the most popular sorting algorithms is quicksort. Quicksort executes in
O(n lg n) on average, and O(n²) in the worst case. However, with proper precautions, worst-case
behavior is very unlikely. Quicksort is a non-stable sort. It is not an in-place sort, as stack space
is required. For further reading, consult Cormen.
The quicksort algorithm works by partitioning the array to be sorted, then recursively sorting
each partition. In Partition (Figure 2-3), one of the array elements is selected as a pivot value.
Values smaller than the pivot value are placed to the left of the pivot, while larger values are
placed to the right.
    int function Partition (Array A, int Lb, int Ub);
      select a pivot from A[Lb]…A[Ub];
      reorder A[Lb]…A[Ub] such that:
        all values to the left of the pivot are ≤ pivot
        all values to the right of the pivot are ≥ pivot
      return pivot position;

    procedure QuickSort (Array A, int Lb, int Ub);
      if Lb < Ub then
        M = Partition (A, Lb, Ub);
        QuickSort (A, Lb, M – 1);
        QuickSort (A, M + 1, Ub);

Figure 2-3: Quicksort Algorithm

In Figure 2-4(a), the pivot selected is 3. Indices are run starting at both ends of the array. One index starts on the left and selects an element that is larger than the pivot, while another index starts on the right and selects an element that is smaller than the pivot. In this case, numbers 4 and 1 are selected. These elements are then exchanged, as is shown in Figure 2-4(b). This process repeats until all elements to the left of the pivot are ≤ the pivot, and all items to the right of the pivot are ≥ the pivot. QuickSort recursively sorts the two sub-arrays, resulting in the array shown in Figure 2-4(c).

    (a)  4 2 3 5 1
    (b)  1 2 3 5 4
    (c)  1 2 3 4 5

Figure 2-4: Quicksort Example

As the process proceeds, it may be necessary to move the pivot so that correct ordering is maintained. In this manner, QuickSort succeeds in sorting the array. If we’re lucky the pivot selected will be the median of all values, equally dividing the array. For a moment, let’s assume
that this is the case. Since the array is split in half at each step, and Partition must eventually
examine all n elements, the run time is O(n lg n).
To find a pivot value, Partition could simply select the first element (A[Lb]). All other
values would be compared to the pivot value, and placed either to the left or right of the pivot as
appropriate. However, there is one case that fails miserably. Suppose the array was originally in
order. Partition would always select the lowest value as a pivot and split the array with one
element in the left partition, and Ub – Lb elements in the other. Each recursive call to quicksort
would only diminish the size of the array to be sorted by one. Therefore n recursive calls would
be required to do the sort, resulting in an O(n²) run time. One solution to this problem is to
randomly select an item as a pivot. This would make it extremely unlikely that worst-case
behavior would occur.
The source for the quicksort algorithm may be found in file qui.c. Typedef T and comparison
operator compGT should be altered to reflect the data stored in the array. Several enhancements
have been made to the basic quicksort algorithm:
· The center element is selected as a pivot in partition. If the list is partially ordered,
this will be a good choice. Worst-case behavior occurs when the center element happens
to be the largest or smallest element each time partition is invoked.
· For short arrays, insertSort is called. Due to recursion and other overhead, quicksort
is not an efficient algorithm to use on small arrays. Consequently, any array with fewer
than 12 elements is sorted using an insertion sort. The optimal cutoff value is not critical
and varies based on the quality of generated code.
· Tail recursion occurs when the last statement in a function is a call to the function itself.
Tail recursion may be replaced by iteration, resulting in a better utilization of stack space.
This has been done with the second call to QuickSort in Figure 2-3.
· After an array is partitioned, the smallest partition is sorted first. This results in a better
utilization of stack space, as short partitions are quickly sorted and dispensed with.
Included in file qsort.c is the source for qsort, an ANSI-C standard library function usually
implemented with quicksort. Recursive calls were replaced by explicit stack operations. Table
2-1 shows timing statistics and stack utilization before and after the enhancements were applied.
                 time (μs)            stacksize
count         before      after     before    after
16               103         51        540       28
256            1,630        911        912      112
4,096         34,183     20,016      1,908      168
65,536       658,003    470,737      2,436      252
Table 2-1: Effect of Enhancements on Speed and Stack Utilization |
Short for "picture elements," which provide image resolution in vidicon-type detectors.
Pixels are picture elements - the individual dots of a computer screen's image.
Short form for picture elements, which make up digital albums. The more pixels or dots per inch (dpi), the better the resolution.
The number of dots on the screen (a typical screen is approximately 640 x 480).
(Picture Elements): Definable locations on a VDT display used to form images on the screen. Screens with more pixels provide a higher resolution image.
Pixels are generated by video monitors or some specialized recorders. They are spots with varying levels of energy; a pixel can be dark or at some percentage of gray. Red, green and blue pixels can therefore be combined at varying levels to display pictures.
Picture elements. Definable locations on a display screen that are used to form images on the screen. Pixels refer to the basic unit of graphics resolution. See: bit-map graphics.
The smallest area that can be displayed on a screen. In other words, the image on the screen is made up of pixels of different colors which together form the image. The number of pixels depends on the screen's resolution.
The elements that make up a display or image. The larger the number of pixels, the greater the detail and clarity of the display, or the image.
A measurement of the smallest "dot" that can be displayed a computer screen. Computer monitors display 72 pixels per inch, so one inch equals 72 pixels.
The small units that sub-divide space to make up a raster surface. They are usually small grid squares.
The dots used to make an image on a monitor.
Picture elements that are the building blocks for raster images (Images consisting of dots).
The technical term for 'dots' in relation to images on screen. They are minute units which together form an image.
the smallest discrete element of an image or picture on a CRT screen (usually a single-colored dot).
A Picture Element - the smallest element you can see on a monitor or television display. The more pixels an image contains, the higher its resolution.
The little individual dots that make up images. See Part IV.
A short name for Pixel Element. The smallest part that can be displayed on a monitor.
LCD?s (Liquid Crystal Display) are made up of many tiny individual liquid crystal displays, which make up a whole image. Each one of these little crystal displays is what we call a pixel.
Short for PICture ELements. Pixels are the smallest un-break-down-able units of a picture on a monitor's screen. When the image is poor you will be conscious of looking at a collection of square dots. On the Internet, the standard picture resolution is between 72 and 96 pixels per inch.
The units of measurement on your screen that are used by display hardware to paint images on your screen. These units, which often appear as tiny dots, compose the pictures displayed on your screen. Pictures created for the Web are often measured in pixels instead of inches or centimeters. The video card installed in your PC display system determines the color capability of each pixel.
The lower the resolution, the larger things appear on your screen. Most computer monitors are set at 800 x 600 resolution, meaning 800 pixels wide by 600 pixels high. Some people's monitors are set at 1024 x 768 or higher. Others are set at 640 x 480. When designing a Web site, keep in mind that your Web pages will look different to viewers depending on their monitor resolutions.
A small dot or square containing a single color; many pixels together form the image. The more pixels, the better the image.
Small dots or squares of light making up your computer screen which, when combined, allow you to view text or images.
An abbreviation for picture element. The minimum raster display element, represented as a point with a specified color or intensity level. One way to measure picture resolution is by the number of pixels used to create images.
In any monitor or display, an image is made up of individual dots containing only one color value called pixels. The amount of pixels in a given area, known as the 'resolution', is usually given using horizontal and vertical measurements, for example 1024 by 768 dots. The higher this figure is, the better and more exact the image.
The pixel figure on a digital camera determines how many individual pieces of data make up one picture. The more pixels there are, the greater the digital accuracy and the sharper the picture.
Small squares of color used to display a digital image.
The small 'dots' which make up all television screens. Each is a discrete element of the picture, but when viewed from a distance the pixels are small enough that they blend into one another to create a smooth picture. Stands for "Picture Element".
are dots or squares on a computer or television screen that combine to form an image. Computer images are created as an array of pixels, each having a specific color.
A unit of picture measurement. One pixel is about the size of a period (.) in 12 point font. Web banners and other graphics are measured in pixels. A standard banner size would be 468 pixels long and 60 pixels high (468 X 60). Monitor resolution is also measured in pixels. Right now, the most popular monitors display 800 pixels wide and 600 pixels high (800 X 600).
The dots that make up a screen display.
For Digital TV, resolution level is measured in horizontal pixels per line and vertical lines per frame (e.g. “1080i” is defined as 1920 pixels wide x 1080 lines vertically), although CEMA definitions only include vertical lines per frame; i.e. “1080i” means 1080 interlaced lines.
Short for Pic ture El ement, a pixel is a single point in a graphic image. Graphics monitors display pictures by dividing the display screen into thousands (or millions) of pixels, arranged in rows and columns. The pixels are so close together that they appear connected.
discrete picture elements, or little pieces of the entire image.
PIcture ELement: The smallest unit that makes up an image on a screen. The more pixels there are, the higher the resolution of the image.
'picture elements', are the small graphic units that make up the picture. The greater the number of pixels, the better the resolution.
Picture elements, i.e., dots on a computer screen that form an image. Pixels are used to measure the size of a banner ad in length and width; for example, a full banner is 468 x 60 pixels. Approximately 72 pixels equal one inch.
Short for 'picture elements', the minute, coloured dots used to store images. The greater the number of pixels, the better the resolution (see below).
Short for 'picture element,' a dot that represents the smallest graphic unit of measurement on a screen. A pixel is screen-dependent; that is, the dimensions of screen elements vary with the display system and resolution. It is the main unit of measure for graphics in online advertising, similar to how inches are used in print media advertising.
The small, bright points of light that make up the letters or pictures displayed on computer screens or other visual displays. These points of light are brighter at their centers than their edges, which makes it difficult for the human eye to focus on them.
In computers, pixels per inch (ppi) is a measure of the sharpness (that is, the density of illuminated points) on a display screen.
Digital images are comprised of many tiny coloured pixels. Each pixel should be too small to distinguish unless the image is over-enlarged. The basic rule is the more pixels comprising an image, the better it will appear.
Pixels, or picture elements, are the separate units of color which make up an image on your screen. The screen is made up of many pixels (the exact number depends on the resolution of your monitor). Each pixel is either entirely colored or not; you cannot partially color a pixel. Because of this, pixels alone can create neither a truly curved object nor a smooth diagonal line. However, you can come extremely close through the use of anti-aliasing techniques.
Digital images are made up of lots of little squares of colour in a grid. These are called pixels.
(Basic image elements) The individual dots that are used to display an image on a computer monitor.
(Picture elements). Cells of an image matrix. The ground surface corresponding to the pixel is determined by the instantaneous field of view (IFOV) of the sensor system, e.g. the solid angle extending from a detector to the area on the ground it measures at any instant. The digital values of the pixels are the measures of the radiant flux of electromagnetic energy emitted or reflected by the imaged Earth surface in each sensor channel.
A pixel is the smallest unit that makes up an image on a screen. The more pixels, the higher the resolution.
Are the building blocks for every digital image. Higher pixel amounts in a digital photo signify a higher resolution photo.
These are "picture elements," or little black squares that come together to form images and numbers on a liquid crystal display (LCD). The more pixels per square inch, the sharper and more detailed picture you see.
Small dots of light that compose the images displayed on the computer screen. Commonly used as a unit of measurement.
The individual picture elements that make up a digital display. The more pixels, the greater the detail and sharpness of the display or the image.
The value is an integer that represents the number of pixels of the canvas (screen, paper). Thus, the value "50" means fifty pixels.
Short for picture elements, the tilelike bits of color and tone that form a digital image.
The small picture elements that make up a digital photograph.
abbreviation for picture elements. The tiny squares of light making up the picture are transmitted in digital form and reconstituted as a visual image.
The individual phosphors that form the image on a television screen. A color television screen or computer monitor screen consists of red, green and blue pixels (RGB) in a black background.
A small unit of measurement on a computer monitor.
The word pixel derives from picture element. Pixels, the building blocks of a digital photo, are usually square in shape and are made from a colour recipe combining red, green and blue ingredients.
Individual dots used to create an image. The greater the number of pixels in an image, the higher the resolution and better the quality
PIC ture EL ements = Pixel. The tiny dots comprising a picture. |
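Several of the definitions above relate pixels to physical units (roughly 72 pixels per inch) and to display resolution (for example, 800 x 600). A minimal JavaScript sketch of those conversions follows; note that the 72 ppi figure is only a convention, so treat it as an assumption rather than a property of any real screen.

```javascript
// Rough conversions between pixels and inches at the conventional 72 ppi.
var PPI = 72; // assumption: the traditional screen resolution figure

function inchesToPixels(inches, ppi) {
  return inches * (ppi || PPI);
}

function pixelsToInches(pixels, ppi) {
  return pixels / (ppi || PPI);
}

// Total pixel count of a display at a given resolution.
function totalPixels(width, height) {
  return width * height;
}

console.log(inchesToPixels(1));     // 72
console.log(pixelsToInches(468));   // 6.5 (the width of a standard banner)
console.log(totalPixels(800, 600)); // 480000
```

The banner example matches the glossary's own figures: a 468-pixel-wide banner works out to 6.5 inches at 72 ppi.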
There are seven extant species and one extinct species (Acanthognathus poinari) in this genus.
Ants of the genus Acanthognathus stalk small insects and catch their prey by a strike with their long, thin mandibles. The mandibles close in less than 2.5 ms and this movement is controlled by a specialized closer muscle. In Acanthognathus, unlike other insects, the mandible closer muscle is subdivided into two distinct parts: as in a catapult, a large slow closer muscle contracts in advance and provides the power for the strike while the mandibles are locked open. When the prey touches specialized trigger hairs, a small fast closer muscle rapidly unlocks the mandibles and thus releases the strike. The fast movement is steadied by large specialized surfaces in the mandible joint and the sensory-motor reflex is controlled by neurones with particularly large, and thus fast-conducting, axons.
Species in this genus are found in rotten logs, hollow twigs and branches, and sections of wood buried in leaf litter. The colony size is rather small, often fewer than 20 workers. Individual foragers can be seen hunting collembola prey with mandibles wide open on the surface of leaf litter.
var foo = 42;    // foo is a Number now
var foo = "bar"; // foo is a String now
var foo = true;  // foo is a Boolean now
The latest ECMAScript standard defines seven data types:
- Six data types that are primitives: Boolean, Null, Undefined, Number, String, and Symbol (new in ECMAScript 6)
- and Object
All types except objects define immutable values (values which are incapable of being changed). For example, and unlike in C, Strings are immutable. We refer to values of these types as "primitive values".
Boolean represents a logical entity and can have two values: true and false.
According to the ECMAScript standard, there is only one number type: the double-precision 64-bit binary format IEEE 754 value (numbers between -(2^53 - 1) and 2^53 - 1). There is no specific type for integers. In addition to being able to represent floating-point numbers, the number type has three symbolic values: +Infinity, -Infinity, and NaN (not-a-number).
To check for larger or smaller values than +/-Infinity, you can use the constants Number.MAX_VALUE and Number.MIN_VALUE, and starting with ECMAScript 6, you are also able to check if a number is in the double-precision floating-point number range using Number.isSafeInteger() as well as Number.MAX_SAFE_INTEGER and Number.MIN_SAFE_INTEGER.
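Those checks can be sketched as follows; all of the Number properties used here are standard ES6 APIs.

```javascript
// The largest integer that can be represented exactly is 2^53 - 1.
console.log(Number.MAX_SAFE_INTEGER);                   // 9007199254740991

// Number.isSafeInteger() reports whether an integer is in the safe range.
console.log(Number.isSafeInteger(Math.pow(2, 53) - 1)); // true
console.log(Number.isSafeInteger(Math.pow(2, 53)));     // false

// Beyond Number.MAX_VALUE, arithmetic overflows to Infinity.
console.log(Number.MAX_VALUE * 2);                      // Infinity
```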
The number type has only one integer that has two representations: 0 is represented as both -0 and +0 ("0" is an alias for +0). In practice, this has almost no impact. For example, +0 === -0 is true. However, you are able to notice this when you divide by zero:
> 42 / +0
Infinity
> 42 / -0
-Infinity
- A substring of the original by picking individual letters or using String.substr().
- A concatenation of two strings using the concatenation operator (+) or String.concat().
Beware of "stringly-typing" your code!
It can be tempting to use strings to represent complex data. Doing this comes with short-term benefits:
- It is easy to build complex strings with concatenation.
- Strings are easy to debug (what you see printed is always what is in the string).
- Strings are the common denominator of a lot of APIs (input fields, local storage values, XMLHttpRequest responses when using responseText, etc.) and it can be tempting to only work with strings.
Use strings for textual data. When representing complex data, parse strings and use the appropriate abstraction.
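As a small illustration of this advice (the record layout here is invented for the example), compare a comma-delimited string with a structured object:

```javascript
// Stringly-typed: the record must be split and re-parsed on every use.
var userString = "Ada,Lovelace,1815";
var parts = userString.split(",");
console.log(parts[2]);      // "1815", still a string, not a number

// Structured: the appropriate abstraction for complex data.
var user = { first: "Ada", last: "Lovelace", born: 1815 };
console.log(user.born + 1); // 1816, real arithmetic with no parsing step
```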
In computer science, an object is a value in memory which is possibly referenced by an identifier.
There are two types of object properties which have certain attributes: the data property and the accessor property.
Associates a key with a value and has the following attributes:
[[Configurable]] (Boolean): If false, the property can't be deleted and attributes other than [[Value]] and [[Writable]] can't be changed. Default: false.
Associates a key with one or two accessor functions (get and set) to retrieve or store a value and has the following attributes:
[[Get]] (Function object or undefined): The function is called with an empty argument list and retrieves the property value whenever a get access to the value is performed.
[[Set]] (Function object or undefined): The function is called with an argument that contains the assigned value and is executed whenever a specified property is attempted to be changed.
[[Configurable]] (Boolean): If false, the property can't be deleted and can't be changed to a data property. Default: false.
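A minimal sketch of the two property types using Object.defineProperty; the object and property names here are illustrative, and the doubling setter exists only to show that accessor functions run on each access.

```javascript
var obj = {};

// Data property: associates a key directly with a value.
Object.defineProperty(obj, "answer", {
  value: 42,
  writable: false,
  configurable: false
});

// Accessor property: get/set functions are invoked on every access.
var hidden = 0;
Object.defineProperty(obj, "counter", {
  get: function () { return hidden; },
  set: function (v) { hidden = v * 2; } // doubles on assignment, for illustration
});

obj.counter = 5;
console.log(obj.answer);  // 42
console.log(obj.counter); // 10
```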
"Normal" objects, and functions
The __proto__ pseudo property must be used with caution. In environments that support it, assigning a new value to __proto__ also changes the value of the internal object prototype. In a context where it is not necessarily known where the string comes from (like an input field), caution is required: others have been burned by this. In that case, an alternative is to use a proper StringMap abstraction.
Functions are regular objects with the additional capability of being callable.
When representing dates, the best choice is to use the built-in Date utility.
Indexed collections: Arrays and typed Arrays
Arrays are regular objects for which there is a particular relationship between integer-keyed properties and the 'length' property. Additionally, Arrays inherit from Array.prototype, which provides them a handful of convenient methods to manipulate arrays, such as indexOf (searching for a value in the array) or push (adding an element to the array). This makes Arrays a perfect candidate to represent lists or sets.
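The indexOf and push methods mentioned above, in a short sketch:

```javascript
var list = ["apple", "banana"];

list.push("cherry");                 // append an element; length becomes 3
console.log(list.length);            // 3
console.log(list.indexOf("banana")); // 1, the position of the value
console.log(list.indexOf("durian")); // -1 when the value is absent
```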
|Type|Description|Equivalent C type|
|---|---|---|
|Int8Array|8-bit signed integer|signed char|
|Uint8Array|8-bit unsigned integer|unsigned char|
|Uint8ClampedArray|8-bit unsigned integer (clamped)|unsigned char|
|Int16Array|16-bit signed integer|short|
|Uint16Array|16-bit unsigned integer|unsigned short|
|Int32Array|32-bit signed integer|int|
|Uint32Array|32-bit unsigned integer|unsigned int|
|Float32Array|32-bit IEEE floating point|float|
|Float64Array|64-bit IEEE floating point|double|
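The fixed widths in the table above determine what happens when a stored value does not fit. A short sketch of the two out-of-range behaviours:

```javascript
// An 8-bit signed array wraps out-of-range values modulo 256.
var signed = new Int8Array(1);
signed[0] = 200;         // out of range for 8-bit signed
console.log(signed[0]);  // -56 (200 - 256)

// A clamped array pins out-of-range values to the nearest bound instead.
var clamped = new Uint8ClampedArray(1);
clamped[0] = 300;        // out of range for 8-bit unsigned
console.log(clamped[0]); // 255 (clamped to the maximum)
```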
Keyed collections: Maps, Sets, WeakMaps, WeakSets
These data structures take object references as keys and are introduced in ECMAScript Edition 6.
Set and WeakSet represent a set of objects, while Map and WeakMap associate a value to an object. The difference between Maps and WeakMaps is that in the former, object keys can be enumerated over. This allows garbage collection optimizations in the latter case.
One could implement Maps and Sets in pure ECMAScript 5. However, since objects cannot be compared (in the sense of "less than" for instance), look-up performance would necessarily be linear. Native implementations of them (including WeakMaps) can have look-up performance that is approximately logarithmic to constant time.
Usually, to bind data to a DOM node, one could set properties directly on the object or use data-* attributes. This has the downside that the data is available to any script running in the same context. Maps and WeakMaps make it easy to privately bind data to an object.
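A sketch of the private-binding pattern just described; a plain object stands in for a DOM node here, since the pattern only requires that keys be objects.

```javascript
// Entries keyed by an object vanish once the key is garbage collected,
// and the map itself is invisible to other scripts.
var privateData = new WeakMap();

var node = { id: "widget-1" }; // stands in for a DOM node
privateData.set(node, { clicks: 3 });

console.log(privateData.get(node).clicks); // 3
console.log(privateData.has({}));          // false: a different object is a different key
```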
Structured data: JSON
See JSON for more details.
More objects in the standard library
Determining types using the typeof operator
The typeof operator can help you to find the type of your variable. Please read the reference page for more details and edge cases.
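A few representative typeof results, including the well-known edge case that typeof null is "object":

```javascript
console.log(typeof 42);             // "number"
console.log(typeof "foo");          // "string"
console.log(typeof true);           // "boolean"
console.log(typeof undefined);      // "undefined"
console.log(typeof {});             // "object"
console.log(typeof null);           // "object" (a historical quirk, not a distinct type)
console.log(typeof function () {}); // "function"
```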
|Specification|Comment|
|---|---|
|ECMAScript 1st Edition (Standard)|Initial definition.|
|ECMAScript 5.1 (ECMA-262)|The definition of 'Types' in that specification.|
|ECMAScript 6 (ECMA-262)|The definition of 'ECMAScript Data Types and Values' in that specification.|
Current thinking is that flu evolves each year in southern China and spreads around the globe. "It's the way they farm, out in the villages," says M. Louise Herlocher. "They keep the pigs in the house, and they raise the ducks right next to the pigs, so there's a lot of opportunity for close interaction of the three species. The theory is that influenza goes from the avians to the pigs, from the pigs to the humans."
In support of that theory, researchers at St. Jude Children's Research Hospital have isolated avian and swine flu genes in humans and avian flu genes in pigs, but they have not demonstrated the presence of swine or human flu genes in birds.
At the Pittsburgh Supercomputing Center, Herlocher is using the Zuker fold analysis program to determine the RNA folding of the smallest (890 bases) flu gene in the three species. It's known as the non-structural (NS) gene, and its function is not well understood. To accomplish this, Herlocher is using 70 genetic sequences -- avian, swine and human -- from a database at St. Jude.
"This may or may not tell us anything about host adaptation," Herlocher says, but the folding may highlight which areas of the NS gene's strand are the same in all three species, information that could in turn help pinpoint which parts of the RNA play an active role in disease-producing functions of the virus.
Nationalism is a form of identification with one's own group, nation, or country. Experts on this subject matter stress that people always have the natural tendency to group together based on affinity, even at birth. With this tendency, people in the same country basically like to think of themselves as one united people. On one side of the coin, being affiliated with one's nation is a good thing because peace and progress can only be achieved by nationalistic attributes like unity and cooperation. But on the other side of things, nationalism can also have a negative impact on people and the whole nation or society in general.
For one reason, nationalism is bad in the sense that it also promotes the division of people from around the world. There may be cooperation within a nation, but nationalism also gives a nation a sense of its separate identity from other nations and groups. When applied to a food-source crisis, for example, one nation typically provides for itself first before trying to serve the needs of others. Internal interests are attended to first while other groups or nations are considered last. The whole concept of being nationalistic in this sense creates borders between different countries. Because of greed over resources, nationalism has also paved the way for many wars between countries and invasions of nearby territories.
Nationalism is also blamed for various atrocities against different groups of people in many countries around the world. Because of different ethnic backgrounds, many people have suffered and died in countries like Germany, Cambodia, and Yugoslavia, for example.
Many people also point out that the extremes of nationalism have led to hatred for other people just because they are of a different color or race. Some groups of people also disapprove of immigrants entering their own countries as they are considered threats to their jobs, livelihoods, and their nationalistic identity. With all of these negative impacts on people and society, many consider "nationalism" bad.
Africa, in ancient Roman history, the first North African territory of Rome, at times roughly corresponding to modern Tunisia. It was acquired in 146 bc after the destruction of Carthage at the end of the Third Punic War.
Initially, the province comprised the territory that had been subject to Carthage in 149 bc; this was an area of about 5,000 square miles (13,000 square km), divided from the kingdom of Numidia in the west by a ditch and embankment running southeast from Thabraca (modern Ṭabarqah) to Thaenae (modern Thīnah). About 100 bc the province’s boundary was extended farther westward, almost as far as the present Algerian-Tunisian border.
The province grew in importance during the 1st century bc, when Julius Caesar and, later, the emperor Augustus founded a total of 19 colonies in it. Most notable among these was the new Carthage, which the Romans called Colonia Julia Carthago; it rapidly became the second city in the Western Roman Empire. Augustus extended Africa’s borders southward as far as the Sahara and eastward to include Arae Philaenorum, at the southernmost point of the Gulf of Sidra. In the west he combined the old province of Africa Vetus (“Old Africa”) with what Caesar had designated as Africa Nova (“New Africa”)—the old kingdoms of Numidia and Mauretania—so that the province’s western boundary was the Ampsaga (modern Rhumel) River in modern northeastern Algeria. The province generally retained those dimensions until the late 2nd century ad, when a new province of Numidia, created in the western end of Africa, was formally constituted under the emperor Septimius Severus. A century later Diocletian, in his reorganization of the empire, formed two provinces, Byzacena and Tripolitania, from the southern and eastern parts of the old province.
The original territory annexed by Rome was populated by indigenous Libyans who lived in small villages and had a relatively simple culture. In 122 bc, however, an abortive attempt by Gaius Sempronius Gracchus to colonize Africa aroused the interest of Roman farmers and investors. In the 1st century bc Roman colonization, coupled with Augustus’ successful quieting of hostile nomadic movements in the area, created conditions that led to four centuries of prosperity. Between the 1st and 3rd century ad, private estates of considerable size appeared, many public buildings were erected, and an export industry in cereals, olives, fruit, and hides flourished. Substantial elements of the urban Libyan population became Romanized, and many communities received Roman citizenship long before it was extended to the whole empire (ad 212). Africans increasingly entered the imperial administration, and the area even produced an emperor, Septimius Severus (reigned ad 193–211). The province also claimed an important Christian church, which had more than 100 bishops by ad 256 and produced such luminaries as the Church Fathers Tertullian, Cyprian, and St. Augustine of Hippo. The numerous and magnificent Roman ruins at various sites in Tunisia and Libya bear witness to the region’s prosperity under Roman rule.
By the end of the 4th century, however, city life had decayed. The Germanic Vandals under Gaiseric reached the province in 430 and soon made Carthage their capital. Roman civilization in Africa entered a state of irreversible decline, despite the numerical inferiority of the Vandals and their subsequent destruction by the Byzantine general Belisarius in 533. When Arab invaders took Carthage in 697, the Roman province of Africa offered little resistance. |
Lupus is an autoimmune disease. In an autoimmune disease, the body's immune system mistakes healthy tissues and organs as foreign and potentially dangerous invaders into the body and attacks them. This results in inflammation that eventually can damage and destroy the affected tissues and organs. Medications commonly used to treat lupus include non-steroidal anti-inflammatory drugs (NSAIDs), such as ibuprofen (Advil) and aspirin.
The extrapolated prevalence of lupus is 167,319 among the estimated population of 32,507,874 in Canada. Another study is testing a combination of two medicines. One is a standard drug and the other is a new drug. Scientists hope that the combination will be more effective and cause fewer side effects. NSAIDs are very effective in treating the pain and inflammation of mild lupus.
Anti-inflammatory drugs can help control arthritis symptoms; skin lesions may respond to topical treatment such as corticosteroid creams. Oral steroids, such as prednisone, are used for the systemic symptoms. Wearing protective clothing and sunscreen when outdoors is recommended. However, long-term use of NSAIDs can cause serious, even life threatening, side effects and adverse events. These include bleeding gastrointestinal ulcers and possible heart problems and cardiovascular events.
NIAMS researchers have found a gene linked to a higher risk of lupus kidney disease in African Americans. Changes in this gene keep the immune system from removing harmful germ-fighters from the body after they've done their job. Other genes may also play a role. Lupus is more common in women than in men. Researchers are looking into the role of hormones and other male-female differences. One NIAMS project is testing a new drug that scientists hope will have milder side effects than standard treatments.
Transcript for Hidden Fury, segment 04 of 11
The great San Francisco earthquake in nineteen oh six matched the power of the strongest New Madrid shock. It had a Richter magnitude of eight point three, enough to devastate San Francisco. Beyond the city, the quake had a damage area of twelve thousand square miles. That's twenty times smaller than the area of damage for the New Madrid earthquakes.
Earthquakes are normally seen as a California concern, and while quakes are not as common in the New Madrid Earthquake Zone, their enormous damage areas put millions of people at risk in the central United States.
Damage areas are so large partly because of the zone's relation to global plate tectonics. The zone lies at the core of the North American tectonic plate, one of twelve major plates that form the outer skin of our planet. The plates are floating on molten material below, forcing them to crash together or pull apart at the edges. Over ninety percent of all earthquakes occur where plates meet, but the plates' cores, or cratons, have remained relatively stable for billions of years. They have become rigid. When earthquakes strike, seismic waves travel great distances through this dense, solid rock.
Pest: Aphids, general
Many and varied
There are many species of aphids and all are a bit different in appearance, biology and the host plants attacked. Therefore, they will be discussed here in the general sense. Aphids are soft and plump bodied, have long spindly legs, are often wingless and possess cornicles, which immediately distinguish them from other insects. Aphids have a piercing-sucking mouth-type that is adapted to feeding on plant sap. Their guts are long, coiled tubes and as the plant sap passes through this digestive tract, plant sugars (among other compounds) are concentrated out into their blood. The remainder of the sap in the gut is finally expelled as a clear liquid that still contains many plant sugars; this waste is known as honeydew.
Honeydew appears as clear drops of liquid on foliage, as well as whatever is under a plant housing aphids, and acts as a food source for a group of fungi known as sooty molds. These molds colonize the honeydew and become very black, hence the name. Heavy populations of sooty mold can make a plant quite unattractive and even interfere with the process of photosynthesis. Primarily, however, sooty mold is an aesthetic problem. Occasionally, ants can be seen on foliage tending to the aphids and collecting their honeydew for its sugar content. These ants may also help defend the aphids against such natural predators as ladybug immatures and syrphid fly larvae. Some aphids are categorized as woolly aphids and these produce waxy strands over their bodies. One common member of this group is the woolly beech aphid, which is primarily found on European varieties of beech (Fagus). These are active in the spring only and produce large amounts of honeydew that may create aesthetic problems.
Aphid feeding does not cause the typical yellow stippling injury to the host plant that is so common for most other piercing-sucking insects, although it may cause leaf distortion if new, emerging foliage is fed upon.
General Life Cycle:
Many aphid species have a primary host plant that usually involves a deciduous tree species (at the beginning and end of the season) and very many secondary host species during the summer. A generalized life cycle for an aphid species with both a primary and secondary hosts is as follows:
- Winged females lay eggs on the primary woody host in the fall and then die.
- In the spring, these eggs all hatch to be wingless females.
- Upon maturity, these wingless females give birth to another generation of wingless females via parthenogenesis (asexual reproduction).
- After several generations of wingless females, crowding occurs and this most likely triggers a chemical cue that leads to a hormonal response within the population and a generation of winged females is produced.
- These winged individuals then disperse from the primary host plant and venture far and wide in search of a suitable secondary host plant. These can include numerous different types of plants such as weeds, annuals, vegetables, perennials and others.
- Once a quality host is found, the winged female will then produce a generation of wingless females on this plant.
- Several more generations of wingless females will follow until late summer/early fall when a generation of winged males and females are produced that will seek out the primary host plant species for that specific aphid species.
- They will then find a mate, the female will produce eggs, and the adults will then die.
- The eggs will over-winter and hatch in the spring.
Aphids can cause curling of the foliage, as with the Snowball Aphid on certain viburnums, and produce much honeydew that can lead to unacceptable levels of sooty mold. To deal with sooty mold, one must manage the aphids. Several common groups of predators occur naturally, and visual monitoring should be performed to determine their presence and numbers prior to administering any chemical controls that may be extremely detrimental to their population numbers. These natural controls include: ladybugs, lacewing larvae, wasp parasites, and syrphid fly larvae, among others.
Written by: Robert Childs |
Cadel Evans touches on bike equipment safety and being observant on and around roads.
Bike riding is the perfect activity for kids. It’s lots of fun, it’s great exercise and it’s something that the whole family can enjoy together.
But like any physical activity it does come with risks, especially when riding on the road. Just like motorists, bike riders of all ages need to follow the road rules and share the road responsibly.
So what’s the best way to educate our children on safe biking and road use? One of the most important things we can do is to set a good example. By following the road rules and modelling safe bike behaviour, we can teach our children what’s right and wrong when riding.
To help you brush up on your bike safety skills, here’s a checklist to follow.
The right equipment
Before either you or your child get on a bike, always make sure to:
- Wear an Australian Standard bike helmet that fits well and is fastened correctly.
- Wear enclosed shoes and brightly colored clothing. This will make it easier for motorists to see you.
- Keep your bike in good working order. Check your brakes, tyres and quick release wheels each time before you ride.
- Check your bike has a working bell or horn that you can use to warn other riders and pedestrians.
- If you’re riding at night, check your bike has working front and rear lights, as well as reflectors.
“Understanding bike and road safety is a life-saving skill, so make sure your kids get plenty of time to learn and practice.”
Riding on paths
Children under 12 years of age and their supervising adults are legally allowed to ride on the footpath. But remember, footpaths are made for people to walk on, so you need to be careful and courteous when riding.
Make sure both you and your child always:
- Keep left and give way to pedestrians on the path.
- Ride at a safe and manageable speed.
- Use your bell or call out to let others know that you’re approaching.
- Check over your shoulder and behind you before turning a corner.
- Ride in single file so other people can pass you if needed.
- When you come to a road, stop, hop off and walk your bike across when it is safe to do so. Try to use pedestrian crossings when you can.
- Be alert and keep watch for cars that are coming in and out of driveways.
- Keep an eye out for other hazards, such as fallen branches, potholes, broken glass or dogs off their leash.
Riding on roads
Bike riders who are 12 years or older must use designated bicycle paths or ride on the road. When you and your child are riding on the road, make sure you always:
- Obey all road rules, road signs and traffic lights.
- Ride in a straight line about one metre out from the kerb or any parked cars.
- Keep both hands on the handlebars, except when signalling turns.
- Before you turn a corner, check behind you and hold your arm out to signal which direction you’re turning.
- Look ahead and keep watch for any potential hazards.
- Keep a safe distance behind cars so motorists can see you in their mirrors.
- Never ride more than two abreast. That means no more than two people riding side by side.
- Use the safest route. Try to use cycle paths when you can and avoid heavy traffic areas.
Understanding bike and road safety is a life-saving skill, so make sure your kids get plenty of time to learn and practise. That way, both you and your child can feel confident when the time comes for them to go out and enjoy a ride on their own.
Find more kid-friendly bike and road safety tips. |
Western Meadowlark Bird
Category: Birds Other
Western meadowlark bird. The scientific name for the Western meadowlark is Sturnella neglecta. The Western meadowlark is an average-sized bird that belongs to the Sturnella genus of the Icteridae family. It has a warbled, flute-like song, which contrasts with the simple, screeched call of the eastern meadowlark. The Western meadowlark is the state bird of six US states: Montana, Nebraska, Kansas, Oregon, North Dakota, and Wyoming.
An adult Western meadowlark has a body length ranging from 6 5/16 to 10 1/4 inches (16 cm to 26 cm) and a wingspan of about 16 1/8 inches (41 cm). These birds weigh between 3.1 oz and 4.1 oz (89 g to 115 g).
Usually, the Western meadowlark nests on the ground in open grassland across central and western North America. The bird has unique calls, described as watery or flute-like, which differentiate it from the closely related eastern meadowlark.
Adult Western meadowlarks have yellow underparts with a black "V" on the breast and white flanks streaked with black. Their upperparts are mostly brown, with black stripes. These birds have long, pointed bills, and their heads are striped with light tan and black.
Diet of Western meadowlark bird
Western meadowlarks forage on the ground or in low to semi-low vegetation. They occasionally search for food by probing with their bills. They mostly eat insects, although they will also feed greedily on seeds and berries. These birds often feed in flocks during the winter season.
Breeding of Western meadowlark bird
Western meadowlarks will interbreed with eastern meadowlarks where their ranges overlap, but the resulting young appear to have low fertility.
The breeding grounds of the Western meadowlark include grasslands, pastures, prairies, and abandoned fields, found from across central and western North America to northern Mexico. Where its range overlaps with that of the eastern species, the Western meadowlark prefers thinner, drier vegetation; the two species usually do not interbreed, but they do defend territory against each other.
Although the Western meadowlark looks almost identical to the eastern meadowlark, the two species hybridize only very rarely. Mixed pairs usually occur only at the edge of the range, where few mates are available. Captive breeding trials established that hybrid meadowlarks were fertile but produced few eggs that hatched.
The nests of the Western meadowlark are located on the ground and are sheltered by a roof woven from grass. A nest may be partly open, or it may have a complete roof and an entrance tunnel several feet long. Sometimes these nests, with eggs and young still inside, are destroyed by mowing operations.
Western meadowlarks are permanent residents throughout much of their range. Northern birds may travel to the southern portions of the range, and some Western meadowlarks travel east in the southern United States as well.
There may be more than one nesting female in a male Western meadowlark's territory; a male usually has two mates at the same time. The females do all the brooding and incubation, and most of the feeding of the young.
The average lifespan of the Western meadowlark ranges from 5 to 8 years.
According to the National Parkinson Foundation, Parkinson’s disease is a slowly progressing brain disorder caused by damaged dopamine-producing neurons in the brain. Dopamine regulates muscle movement. However, when more than 60 percent of those dopamine-producing cells are damaged, the symptoms of Parkinson’s appear. The earliest symptoms, such as sleeping disorders, may appear years in advance of the telling jerky motor movements associated with the disorder.
The National Parkinson Foundation stated that more than 50,000 new cases of Parkinson’s disease are diagnosed each year in the United States. Worldwide, more than four million people have the disease.
The disease most often appears after age 50 in both men and women. Young-onset Parkinson’s is rare.
Parkinson’s Disease Symptoms
The motor symptoms of Parkinson’s disease are the most recognizable symptoms associated with the disorder.
- Shaking while at rest
- Moving slowly
- Stiffness in the arms, legs and torso
- Difficulty balancing
However, the major motor symptoms often do not appear for years after the initial, non-motor symptoms are diagnosed, such as sleeping disorders, constipation and hallucinations or psychosis.
Secondary symptoms of Parkinson’s disease can include:
- Small, cramped handwriting
- Reduced arm swing and a slight foot drag on the affected side
- Loss of facial expressions
- Muffled speech
- Decreased ability in automatic reflexes, such as to blink or to swallow
Another symptom associated with Parkinson’s disease is known as freezing, which refers to the feeling of being stuck in place when attempting to walk.
If Parkinson’s disease is not treated, the symptoms will continue to progress until an individual is completely disabled. A person with Parkinson’s disease may have an early death with or without treatment.
Parkinson’s Disease Causes
There is no known cause for Parkinson’s disease, but genetics and environmental triggers, such as environmental toxins, may factor in the disorder’s appearance.
Parkinson’s Disease Treatment
There is no cure for Parkinson’s disease. Treatment, such as medication and surgery, focuses on treating the symptoms. Medications are used to increase the levels of dopamine in the brain, which can offset the motor symptoms associated with the disorder. Surgery to affect brain tissue may also be used to alleviate some of Parkinson’s motor symptoms.
Assistive devices, such as special utensils, wheelchairs, bed lifts, shower chairs and wall bars might eventually be required by persons with Parkinson’s disease.
Other treatment options include getting enough rest and physical exercise, physical therapy and speech therapy.
Parkinson’s Disease Prevention
There are no known preventive measures to take to prevent Parkinson’s disease.
Parkinson’s Disease Resources
Learn More About Parkinson’s
- National Parkinson Foundation |
Indigenous Studies 30
Indigenous Studies 30: Canadian Studies is based on the premise that distinct perspectives are common, that diversity of truth exists, and that the motivations for most behaviors and attitudes may be traced to the worldviews and philosophical orientations of people. Within Aboriginal philosophy, four dimensions of human nature (mental, emotional, spiritual, and physical) are identified and viewed as interrelated. These are developed through personal commitment. This course advocates a holistic, inquiry-based, activity-oriented approach. The aim of Indigenous Studies is to develop personal awareness and cultural understanding and to promote the development of positive attitudes in all students towards Indigenous peoples.
Introduction Unit – 10 Days
Topics: World View, Culture, Important Terminology, History of Indigenous People in Canada
Unit 1 – Aboriginal and Treaty Rights – 25 Days
Topics: Indigenous Diversity in Canada, Indigenous Values and Beliefs, Canadian Expansion, Aboriginal and Treaty Rights
Unit 2 – Governance – 30 Days
Topics: Traditional Governance, Governance Under Colonial Rule, The Indian Act, the Constitution.
Unit 3 – Land Claims and Treaty Land Entitlements – 25 Days
Topics: Indigenous Relationship to the Land, Land Claims, Treaty Land Entitlements, Metis Land Claims.
Unit 4 – Economic Development – 25 Days
Topics: Economic Development and Resource Management, Aboriginal Rights and Economic Developments, Implications of Economic Development, Perspectives on Economic Development.
Unit 5 – Social Development – 35 Days
Topics: Conflict Resolution, Foreign Justice Systems, Education, Traditional Wellness, Alternative Justice, Health and Child Welfare, Social Justice. |
English Language and Literacy
We adopt the Code Cracker system for our Phonics programme and the Reading Bee system for our Reading programme. These are established systems that are well received by many preschool centres. In addition to reading the text, the reading system promotes thinking and comprehension skills. Opportunities are provided during 'Show and Tell' sessions for children to verbalise their feelings and thoughts. Such experiences will boost the children's confidence in oral expression.
Chinese Language and Literacy
The Enriched Chinese Programme endorsed by ECDA and NTU is adopted to enhance the children’s learning of the Chinese Language over a range of topics. The children progressively learn the language through listening, speaking, reading and writing. Lessons are conducted daily.
Number knowledge and simple numeracy concepts are taught through hands-on play-based activities. Learning resources and activities are made available at the learning corners for the children to apply their knowledge and practise skills taught.
At the preschool levels, the children will learn about the world around them through hands-on real-life experiences such as growing plants or making popcorn, or investigating live samples/artifacts. They also conduct experiments to find out how things work such as the water cycle and how simple machines are useful in our daily lives.
At the Nursery levels, stories and actual artifacts will be used to stimulate their curiosity and interest. Child-appropriate activities will be conducted to encourage them to explore and experiment.
Mini Integrated Project Work
The preschool children will be guided to discover knowledge and learn more authentically about certain topics. Learning areas that may be integrated include Science, Technology, Engineering, Art, Mathematics, Language, Fine Motor and Social. Their learning experiences may culminate in an individual or a group creation of a craft.
Stories are used to teach good social and moral values. As the children relate to and interact with one another during school, we teach and advocate that they practice good social values such as respect, courtesy, honesty, and fairness. We teach them to be kind and to respect and care for others. They are also nurtured to be independent and responsible for themselves and the tasks assigned to them.
At the preschool levels, we adopt HPB’s recommended social and emotional development program, Zippy’s Friends. This program aims to develop children’s coping skills, including their ability to manage stressful situations.
Our physical exercise lessons are planned for children to enjoy exercising their big muscles as well as developing their fine motor and coordination skills. It is also our intent to promote physical exercise as a part of a healthy lifestyle.
Their fine motor and coordination skills are developed when the children are engaged at the various learning centers such as the art, writing, blocks, and manipulatives centers.
We encourage children to express themselves through art and craftwork. Often these are integrated with the lessons learned and form an expression of their feelings, knowledge, and application. Our children get to experience various art mediums and techniques as well as the different elements of art.
Children enjoy music and moving in response to the rhythm. The children enjoy singing fun action songs daily and their music lessons help them better appreciate music and its elements. |
Circadian rhythms, otherwise known as your body's 24-hour sleep/wake cycle, determine when you feel sleepy and when it's time to wake up in the morning. They also have a number of wide-ranging impacts on your health. According to a new study by researchers at the University of Bristol, breast cancer risk is lower for women who wake up early than for their night-owl counterparts. The findings of the study were published in the Journal of the American Medical Association, according to CNN.
CNN reports that in this study, sleep schedule preferences were reported by over 180 women of European descent in the UK. Previous research has suggested cancer risks associated with sleep schedules, and UK researchers set out to expand upon those findings with the current study. Study participants who self-reported as early risers showed lower rates of breast cancer. Lead study author Dr. Rebecca Richmond, a research fellow in the Cancer Research UK Integrative Cancer Epidemiology Program at the University of Bristol, presented the findings at the NCRI Cancer Conference in Glasgow on Tuesday, according to CNN.
Per the BBC, everybody has a body clock that influences when you sleep, your moods, and maybe even your susceptibility to certain illnesses. Morning people tend to have energy peaks earlier in the day and get tired earlier in the evening. People who like to go to bed late feel sleepier in the morning than early risers do. When circadian rhythms get disrupted, mood and health disorders can result. UK researchers also conducted a genetic analysis of the study participants to better understand the link between sleep patterns and breast cancer, according to CNN.
"We know that sleep is important for health," Richmond told CNN. "These findings have potential policy implications for influencing sleep habits of the general population in order to improve health and reduce risk of breast cancer among women."
However, despite the link between sleep patterns and breast cancer risk, the statistical model used in this study does not necessarily imply causality, Dipender Gill, a clinical research training fellow at Imperial College London, told CNN. "For example, the genetic determinants of sleep may also affect other … mechanisms that affect breast cancer risk independently of sleep patterns," Gill said. So while sleep patterns might be associated with breast cancer risk, they do not necessarily cause it, according to Gill; there may be other genetic and health factors at play.
"Sleep is likely to be an important risk factor for breast cancer," Richmond told CNN. But other health factors, like excessive alcohol consumption, are more of a concern, she said. Many different factors contribute to breast cancer risk.
When it comes to getting enough sleep and reducing the risk of illnesses like breast cancer, getting to bed early may help. And while sleep disruption, or not getting enough solid sleep on a regular basis, can increase your chances of developing some cancers, more research is needed to fully understand how circadian rhythm affects breast cancer risk.
Seychelles is home to a very unusual endemic family of frogs called Sooglossids.
|Seychelles sooglossid © Gideon Climo|
The family Sooglossidae consists of 2 genera and 4 species of frogs found only on the granitic islands of the Seychelles. The Seychelles granitic islands are unique among oceanic islands in representing a fragment of continental Gondwanaland. The Seychelles microcontinent split from India, isolating the ancestral Sooglossidae at least 75 million years ago. Today, the Seychelles archipelago is an important centre of endemism for many plants and animals, including amphibians. The four known Sooglossid species have only been recorded from the two highest (Mahé and Silhouette) of the 115 islands in the archipelago, where they live in the mountain mist forests. They are Sooglossus gardineri, found on Mahé and Silhouette; Sooglossus pipilodryas, found on Silhouette only; Sooglossus sechellensis, found on Mahé and Silhouette; and Nesomantis thomasseti, found on Mahé and Silhouette.
Smallest frogs in the world
The sooglossids are tiny frogs: the smallest species, Sooglossus gardineri, believed to be the tiniest frog in the world, measures just 9–12 mm long. Its newly emerged juveniles measure only 1.6 mm long and are literally almost too small to see. Sooglossus sechellensis is about 15 to 18 mm long, and Nesomantis thomasseti is slightly bigger, about 35 to 45 mm long.
What do they eat
The diet of Sooglossus gardineri consists mostly of mites, sciarid fly larvae, ants and amphipods, while Sooglossus sechellensis is known to consume termites. Nothing is known about the feeding ecology of Nesomantis although it probably eats correspondingly larger prey than the other species.
Because of their extremely limited global distributions (less than 100 km2 of montane forest), three of the species are listed on the IUCN Red List of threatened species: Sooglossus gardineri – Vulnerable, Sooglossus sechellensis – Vulnerable, and Nesomantis thomasseti – Endangered. Sooglossus sechellensis actually appears to be the rarest species, rather than Nesomantis thomasseti, which is still fairly common. Sooglossus gardineri is very abundant and widespread. The fourth species, Sooglossus pipilodryas, was discovered too recently to be listed, although it too will no doubt be in due course.
Silhouette, with an area of 20 km2 and a population of less than 150 people, faces few immediate threats. This island currently appears to support healthy populations of the three species of Seychelles frog listed above, plus the recently discovered (2002) palm frog, Sooglossus pipilodryas. However, new hotel and infrastructure development might be a potential threat to these species.
In contrast, Mahé has a population of over 70,000 people and, although it is a much larger island at 148 km2, faces steadily growing development pressures. With space at a premium on the island, these pressures increasingly include the residential development of montane forest. While important areas of Seychelles frog habitat are now protected within the 30.5 km2 Morne Seychellois National Park in north Mahé, the current status of the three species known to occur here is unclear. Moreover, significant tracts of potentially suitable habitat in Mahé’s central and southern mountains lack any protection against encroaching development, and the distribution and status of Seychelles frogs in these areas is completely unknown.
What we have done
Very little monitoring work has been carried out on the main island of Mahe and consequently little was known about their status. Concern about the dramatic global declines of many amphibian populations prompted Nature Seychelles to initiate a project to investigate the status of these unique frogs on Mahe. With support from the Royal Society for Protection of Birds (RSPB) and the Herpetological Conservation Trust in the United Kingdom, a scoping study of the sooglossids on Mahe was undertaken.
What we are doing
Our scoping study trialled methods and recommended the most practical method for determining distribution and numbers in the field, outlined a long-term monitoring programme for regular assessment of the frogs' status and abundance, and identified priority research needs. With this information, Nature Seychelles is implementing a programme to look at the long-term status of these frogs.
Character Charting The Tempest
In this lesson, after being introduced to the unit, students will begin reading act 1 of The Tempest with the goal of understanding who the characters are and what happens. They’ll begin to chart the characters and find useful vocabulary.
- Read the lesson and student content (act 1 of The Tempest and other materials).
- Anticipate student difficulties and identify the differentiation options you will choose for working with your students.
- Help students locate copies of the Independent Reading texts.
- Facilitate a discussion of the Guiding Questions.
- Note that students will be returning to these questions periodically throughout the unit and they will see how or if their ideas evolve.
- SWD: Consider talking about these questions to help students further understand the topic. These are quite abstract topics, and SWDs might need additional time to fully absorb them.
- ELL: Be sure that at all times, all cultures and civilizations are spoken about with a high degree of respect, whether that country or culture is represented in the classroom or not. It is important that all students witness respect for all cultures at all times.
Respond to the unit’s Guiding Questions.
- What role do national identity, custom, religion, and other locally held beliefs play in a world increasingly characterized by globalization?
- How does Shakespeare’s view of human rights compare with that in the Universal Declaration of Human Rights?
- Who is civilized? Who decides what civilization is or how it’s defined?
- How do we behave toward and acknowledge those whose culture is different from our own?
Discuss your responses with your classmates.
- Display the Unit Accomplishments for students, answer their questions, and clarify as needed. It isn't necessary that they have a full grasp of everything required for this unit at this point.
- ELL: To be sensitive to all students, be sure to check that your ELLs and their family members are not refugees themselves. Ask questions to find this out. If you find out that a student or somebody in his or her family is a refugee, have a conversation with the student to be sure he or she will be able to handle the activity emotionally.
Review the Unit Accomplishments and ask your teacher any questions you have about them.
- Read William Shakespeare’s play The Tempest and write a short argument about who in the play is truly civilized.
- Participate in a mock trial in which you argue for or against granting asylum to a teenage refugee, and then write arguments in favor of and against granting asylum to teenage refugees.
- Read an Independent Reading text and write an informational essay about a global issue and how that relates to your book.
Independent Reading Text
- Encourage students to find a book that interests them and is on a comfortable reading level.
- Help students locate copies of the Independent Reading texts.
- Encourage students to reach for challenging texts, but check in with them to make sure they are able to comprehend their selections.
As noted in the Unit Accomplishments, you will read an Independent Reading text and write an informational essay about a global issue and how that relates to your book.
Read the descriptions of the Independent Reading texts.
- By Lesson 12, choose and locate a copy of the text you want to read.
You will have from Lesson 12 to Lesson 22 to read the book.
Characters and Vocabulary
- Remind students that Shakespeare's plays were meant to be performed and seen, not read. However, Shakespeare's place in Western civilization is so honored and revered that we have a tradition of studying the plays.
- Work with your class to create a Characters in The Tempest chart on which students can record information that will help them keep the characters straight.
- Some of the words students will encounter while reading the play are particular to Shakespeare's time. Students need to understand the meaning but not learn the words for academic use. Other, more useful, words will be listed in each lesson as they are encountered. Consider using those words in a Vocabulary in The Tempest class chart.
Look at the list of characters in Shakespeare’s The Tempest at the start of the play. Note that in Shakespeare’s day, the characters were listed according to rank, with kings and nobles first, followed by gentlemen, followed by workers and slaves, followed by women and then “spirits.”
- To help you keep the characters straight, use the list of characters at the beginning of the play (the “dramatis personae”) and work with your teacher to create a Characters in The Tempest chart. Maintain a list of characters and information about them in your Notebook. Fill in information each day as you read and find out more about the characters.
- As you run across vocabulary unfamiliar to you, note the word and where you found it in a Vocabulary in The Tempest list. Then figure out its definition. For example, the word usurping used to describe Antonio in the dramatis personae means that he has seized the position of Duke without having a legal right to do so.
Act 1, Scene 1
- Your reading in class can be oral, as many students would enjoy that, but reading Shakespeare aloud in a cold reading could inhibit understanding if the reader stumbled with meaning and pronunciation. For that reason, you should assign parts for reading aloud only after students have had an opportunity to read over the lines first, and you should choose only the most able readers. For some or all scenes, you can give them an overview or summary of the action before they read.
- Ask students to read act 1, scene 1, silently.
- In act 1, scene 1, we meet some of the characters on board a ship that is being tossed violently in a storm, a tempest. The sailors are impatient with the interfering noblemen, who are keeping them from their work to save the ship. Explain that the chaos the storm has created on board the ship makes for a lot of order giving, yelling, cursing, and criticism.
- Circulate through the room, assisting students in defining words.
- Organize your students into equal-sized discussion groups with no more than four students in each group.
- After students have had a chance to discuss the conflicts (in act 1, scene 1) and respond to the questions, ask for volunteers to share what their group discussed.
- In addition to usurping, words to study are bawling, line 39; insolent, line 42; and glut, line 59.
Silently read and annotate act 1, scene 1. Mark any words or lines you don’t understand. Find the conflict between the characters as the ship is foundering.
In your group, discuss the following questions.
- Who’s in charge on the ship?
- When the Boatswain tells Gonzalo, “You are a / counsellor; if you can command these elements to / silence, and work the peace of the present, we will / not hand a rope more; use your authority. If you / cannot, give thanks you have liv’d so long, and make / yourself ready in your cabin for the mischance of / the hour, if it so hap. . . . Out / of our way, I say” (1.1.19–26). Explain the Boatswain’s frustration and Gonzalo’s response. How do the other noblemen respond to the Boatswain later in the scene?
- At the end of the scene, what seems to have happened to the ship and its passengers and sailors?
Choose one member to report to the whole class if the teacher calls on your group.
Act 1, Scene 2
- Before students start reading, let them know that Miranda, daughter of Prospero, knows her father has the “art” or magic to cause a storm. She has seen the foundering ship and asks him to stop the storm and save the passengers. Prospero uses the opportunity to explain his life to his daughter.
- Assign two able readers to read aloud act 1, scene 2. Stop after each long section to make sure everyone understands what has been said. The first stop should be at line 203.
- Words to study are allay, line 2; perdition, line 35; perfidious, line 82; sans, line 113; prerogative, line 121; inveterate, line 142; and extirpate, line 145.
Continue with act 1, scene 2.
- Read along silently while student readers take the parts of Miranda and Prospero.
Your teacher will interrupt the reading several times to provide explanations or to ask for responses.
Miranda and Prospero
- Impose a time limit for the Quick Write response. Three minutes is plenty of time.
Complete a Quick Write in response to the following question.
- How did Prospero and Miranda get marooned on this island?
Miranda and Prospero
- Before releasing students to read the play independently, allow them 3–5 minutes to clarify for themselves what happened to Miranda and Prospero at the hand of Prospero's brother Antonio.
- SWD: Be sure that SWDs are engaging in the activity successfully. If you find that some students need support in clarifying for themselves what happened, consider grouping those who need extra help and working with them.
- Also before releasing students, let them know that in the next part of the scene, Prospero puts Miranda to sleep and gives orders to his servant-slave Ariel to provide for the safety of the people on the ship. Ask them to notice how Prospero treats his two servant-slaves, Ariel and Caliban. Ariel fetches Ferdinand, who has been separated from the rest of his shipmates, and Ferdinand meets and falls instantly in love with Miranda.
So, how did Prospero and Miranda get marooned on this island?
- Confer with your small group and share your ideas in answering the Quick Write prompt.
Act 1, Scene 2
- Additional words to study are prescience, line 210; and precursors, line 232.
Read and annotate the remainder of act 1, scene 2.
Write two paragraphs.
- Briefly summarize the action of the rest of scene 2.
- Describe what one character who is not in the scene might think of the action were she or he able to observe the action.
Submit your writing to your teacher.
Study the vocabulary from act 1 of The Tempest. |
The name Trapdoor spider covers several families and many different species. Trapdoor spiders include the Funnel-web, Mouse, Whistling, and Curtain-web spiders; they are distinguished by the stocky body, long leg-like palps, and two knee-like lobes to which the fangs join (chelicerae) in front. Most live in burrows with or without trapdoors in the ground, but some live in trees. Trapdoor spiders have powerful chelicerae and four pale patches (the book-lungs) under the abdomen. The correct identification of Trapdoor spiders is often quite complicated.
Trapdoor spiders can be distinguished from the more dangerous Funnel web spider by its brown or mottled markings. When in danger, a Trapdoor spider will freeze or flee whereas a Funnel web will rear back aggressively. Trapdoor spiders construct burrows lined by their silk and closed by a hinged door of silk, moss, and soil. There they lie in wait for passing prey, usually an insect; when the prey touches silken threads radiating out on the ground near the door, the spiders quickly open the door and seize it. Closely related to Tarantulas, Trapdoor Spiders make up the family Ctenizidae.
They range in size from 1.5 cm to 3 cm in body length, are harmless to humans, and are found in many warm climates. They also use their burrows for protection and as nest sites, the female spinning her egg sac for about 300 eggs in the burrow.
All photos are copyright to their owners and may not be reproduced without permission. Click on the images to enlarge them.
Black Trapdoor Spider
Most of the photos below are of male Trapdoor Spiders. The males wander looking for a female and are often found in backyard swimming pools, where they learn that they can't swim!
Brown Trapdoor Spider
Ravine or Cork-Lid Trapdoor Spider
Ravine Trapdoor Spider is the common name of a rare, oddly shaped North American spider, Cyclocosmia truncata, belonging to the trapdoor spider family Ctenizidae. The Ravine Trapdoor Spider is a burrowing spider, inhabiting sloping riverbanks and ravines in Georgia, Alabama, and Tennessee. The abdomen of spiders in this genus is abruptly truncated and ends in a hardened disc which is strengthened by a system of ribs and grooves. They use this to clog the entrance of their 7 to 15 cm deep vertical burrows when threatened, a phenomenon called phragmosis.
Strong spines are located around the edge of the disc. The four spinnerets are found just anterior to it, with the posterior, retractable spinnerets particularly large. The disc diameter in the females is 16 mm. The individual species are separated from each other by the pattern of the abdominal disc, the number of hairs on its seam, and the shape of the spermathecae. The female reaches a body length of 1.2 inches (3 centimetres). The male grows to 0.75 inch (1.9 centimetres). This species can be incredibly difficult to find due to the superb camouflage of their burrows. Colonies of Cyclocosmia truncata tend to be focused within certain micro-habitats. They are primarily found in hilly, undisturbed woods that are far from any flood-prone bodies of water, such as rivers (They are frequently found near stream banks, however). The burrow is a vertical tube that narrows toward the bottom. Only the bottom portion of the burrow is silk lined. |
COURSE 1 - MONEY MANAGEMENT
a) Exploring how spending, saving and values impact your finances.
b) The Value of planning how money is used.
c) Set financial goals that are specific and measurable.
d) How personal goals can be achieved through money goals.
COURSE 2 - CREDIT - USE, DON'T ABUSE
a) Weighing the benefits and risks of borrowing.
b) Discuss why (money smart) people borrow.
c) Examples of acceptable and unacceptable circumstances to use credit.
d) Compare the costs and terms of borrowing options.
COURSE 3 - SPENDING POWER, WRITE DOWN YOUR EXPENSES
a) Explore the payoffs of investing in yourself.
b) Discuss the value of investing in yourself.
c) Identify how education can impact earnings.
d) List strategies to minimize the costs of education.
COURSE 4 - INVESTING
a) Explore how saving and investing can be used to build wealth.
b) Make a distinction between saving and investing & compare the types of investments.
c) Demonstrate how to calculate compound interest.
d) How the time value of money impacts saving and investing.
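Course 4's compound-interest objective can be illustrated with a short sketch (the figures below are arbitrary examples, not course material):

```python
def compound_interest(principal, annual_rate, years, compounds_per_year=12):
    """Future value of `principal` with interest compounded n times per year."""
    n = compounds_per_year
    return principal * (1 + annual_rate / n) ** (n * years)

# $1,000 saved at 5% a year for 10 years, compounded monthly
future_value = compound_interest(1000, 0.05, 10)
print(round(future_value, 2))  # roughly 1647, i.e. about 65% growth
```

Running the same figures over 20 or 40 years is a quick way to show why starting early matters: the growth is not linear in time.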
COURSE 5 - FINANCIAL SERVICES
a) Explain how financial services are used to handle business transactions.
b) Discuss reasons to use check payments.
c) Demonstrate how to use a checking account.
COURSE 6 - INSURANCE
a) Justify reasons to be insured.
b) Discuss ways that teens face risks that can be costly.
c) Give examples of ways that teens can manage the risk of financial loss.
d) Describe consequences of not being sufficiently insured. |
Written by George Mitchell, Programme Executive
Take a moment to think about what a scientist looks like to you. The picture I have in my mind is an individual wearing the trademark lab coat and safety spectacles. While trying to decide whether he looks more like Walter White or Emmett Brown, I am struck by the fact that my scientist is undoubtedly a man. Despite the most influential scientists I worked with at university being women, I cannot shake the stereotypical image I was immediately drawn to. I believe this is an unfortunate reflection of a field in which women continue to be overlooked and underrepresented.
Science is supposed to be paving the way for the future and yet, when it comes to gender equality, it is stuck in the past. At present, less than 30% of researchers worldwide are women, and with a lack of role models and equality in the field, only around 30% of female students choose to study STEM subjects at university.
Today is the International Day of Women and Girls in Science and, to mark the occasion, I would like to highlight three incredible scientists who made ground-breaking discoveries in the world of science and healthcare.
Rosalind Franklin (1920 – 1958)
Rosalind Franklin was a chemist and x-ray crystallographer who, in May 1952, captured an image that would quite literally change the DNA of biological and healthcare research. The seemingly uninspiring and blurry ‘Photo 51’ would lead Watson and Crick to discover the DNA double helix, for which they won the Nobel Prize in Physiology or Medicine in 1962. Since then, we have sequenced our genome, increased our understanding of genetic disease and even learned how to edit our DNA.
Franklin died in April 1958 from ovarian cancer, possibly caused by exposure to the very x-rays which led to her discovery, something for which she was not recognised until the years following her death. With the Nobel Committee still unwilling to award posthumous prizes, Franklin remains one of the greatest unsung heroes in the history of biology and healthcare research.
Tu Youyou (1930 – Present)
During the Vietnam War, malaria claimed the lives of more Vietnamese soldiers than the war itself. Tu Youyou is a pharmaceutical chemist who, in 1969, was appointed the leader of the ‘Project 523’ research group, tasked with finding a cure for malaria. In 1972, after turning to traditional Chinese medicine for a cure, Tu discovered that sweet wormwood had been used 1,500 years before to treat symptoms of malaria.
Tu had discovered the antimalarial medication, artemisinin, which is still used to treat malaria today. Her discovery saves 100,000 lives in Africa every year and has saved an estimated 3 million lives this century alone. However, Tu was not acknowledged for her discovery until 2007 and she would become the first Chinese woman to win a Nobel Prize in 2015, 43 years after her lifesaving work.
Jennifer Doudna (1964 – Present)
60 years after Rosalind Franklin captured Photo 51, American biochemist Jennifer Doudna discovered CRISPR-Cas9 genome editing, argued to be the most significant discovery in the history of biology.
Doudna’s breakthrough has opened the gates to a new era of healthcare research, with endless possibilities and implications, one of which could be finding cures for genetic disease. In recognition of this monumental achievement, she was a runner-up for Time ‘Person of the Year’ in 2016, missing out to a certain President, Donald Trump.
These three scientists achieved their healthcare breakthroughs in a male dominated field, without role models or recognition. The current landscape is unfortunately not as balanced as it should be, and we need to better communicate the achievements of women in science and healthcare to inspire the next generation and possibly, the next great discovery. |
Climate change and global warming tend to be thought of as relatively slow processes when measured on the human time-scale. But some scientists believe that abrupt climate change is very possible and that we should start planning now on how to respond to a global warming crisis that might develop in decades, rather than centuries.
Roger Angel, a University of Arizona boffin in the field of astronomical adaptive optics, has been looking at ways to cool the Earth in just such an emergency.
Angel’s plan involves launching a flotilla of trillions of small free-flying spacecraft a million miles above Earth into an orbit aligned with the sun, called the L1 Lagrange orbit. The spacecraft would form a long, cylindrical cloud about 4,000 miles in diameter and 60,000 miles in length. About 10 percent of the sunlight passing through the length of the cloud would be diverted away from Earth, uniformly reducing sunlight by about 2 percent over the entire planet. Enough to balance the heating caused by a doubling of atmospheric carbon dioxide in Earth’s atmosphere, believes Angel.
A space shade to deflect sunlight from Earth was first proposed by James Early of the Lawrence Livermore National Laboratory in 1989. “The earlier ideas were for bigger, heavier structures that would have needed manufacture and launch from the moon, which is pretty futuristic,” Angel explained. “I wanted to make the sunshade from small, light and extremely thin spacecraft that could be completely assembled and launched from Earth, in stacks of a million at a time. When they reached L1, they would be dealt off the stack into a cloud. There’s nothing to assemble in space.”
The lightweight spacecraft mooted by Angel would be made of a transparent film pierced with small holes. Each craft would be two feet in diameter, 1/5000 of an inch thick and weigh about a gram, the same as a large butterfly. The craft would use tiny, maneuverable solar sails to stay in position. Angel has calculated that the total mass of all the fliers would be around 20 million tons, making launch by conventional chemical rocket prohibitively expensive. Instead, Angel proposes using electromagnetic space launchers, which could bring the launch cost down to as little as $20 a pound.
Once in space, the craft would be steered to their orbit by solar-powered ion propulsion, a method pioneered by the European Space Agency’s SMART-1 moon orbiter. “The concept builds on existing technologies,” Angel said. “It seems feasible that it could be developed and deployed in about 25 years at a cost of a few trillion dollars. The solar shade should last about 50 years, so the average cost is about $100 billion a year, or about two-tenths of one percent of the global domestic product. [It’s] no substitute for developing renewable energy, the only permanent solution, but if the planet gets into an abrupt climate crisis that can only be fixed by cooling, it would be good to be ready with some shading solutions that have been worked out.” |
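The article's headline figures can be sanity-checked with rough arithmetic. This sketch assumes one gram per flier, 20 million metric tonnes of total mass, and the quoted $20-per-pound launch cost; it is a back-of-envelope check, not part of Angel's published calculations:

```python
# Back-of-envelope check of the quoted figures.
GRAMS_PER_TONNE = 1_000_000
LB_PER_TONNE = 2204.62

total_mass_t = 20_000_000                        # 20 million tonnes
fliers = total_mass_t * GRAMS_PER_TONNE          # one gram each
launch_cost = total_mass_t * LB_PER_TONNE * 20   # dollars at $20/lb

print(f"{fliers:.0e} fliers")                    # 2e+13, i.e. 20 trillion
print(f"${launch_cost / 1e12:.2f} trillion launch cost")
```

The launch bill alone comes out near $0.9 trillion, consistent with the article's "few trillion dollars" total once development, fabrication and the ion-propulsion transfer are added.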
Malaria, a potentially fatal disease, is caused by a parasite known as Plasmodium. The Anopheles mosquito acts as a carrier for this parasite, which is transmitted to humans through the bite of female Anopheles mosquitoes. There are more than 400 species of Anopheles, but only five known species of Plasmodium cause malaria in humans, and amongst these, four are of particular concern: P. falciparum, P. vivax, P. ovale and P. malariae. The fifth species is P. knowlesi. By far the deadliest parasite amongst them is P. falciparum, which is most prevalent on the African continent and is responsible for around 91% of malaria deaths worldwide. P. vivax and P. ovale are usually found outside Africa.
What Are The Signs And Symptoms Of Malaria?
Malaria, an acute febrile disease, typically presents with “flu-like symptoms” such as fever and chills, body aches, malaise (a general feeling of illness), fatigue and headache. After the onset of initial symptoms, it progresses into paroxysm of high fever and chills along with profuse sweating. The severity of symptoms mostly depends on the species of plasmodium that has caused the illness.
The symptoms of malaria caused by Plasmodium vivax, Plasmodium ovale and Plasmodium malariae are typically milder than their counterpart Plasmodium falciparum, which causes severe life-threatening complications if left untreated and can eventually lead to death. In P. vivax, P. ovale and P. malariae, the incubation period is around 2 to 3 weeks and may even take up to months to manifest their symptoms when in dormant stage. The initial symptoms of malaria are malaise, an intermittent fever with chills, headache and body aches.
There can also be the following associated symptoms: nausea and vomiting, diarrhea, fatigue, abdominal pain and blood in stools. The initial symptoms will be followed by paroxysms of high-grade fever (around 104 degrees Fahrenheit) with chills, profuse sweating (diaphoresis) and, on rare occasions, delirium. Between these paroxysms there will be intervals where the patient feels well without any symptoms. The paroxysmal symptoms correspond to the release of more parasites into the bloodstream due to destruction of infected red blood cells.
It typically takes about 48 hours (2 days) to release increased numbers of parasites into the bloodstream in P. vivax and P. ovale, and about 72 hours (3 days) in P. malariae infection, coinciding with the paroxysmal symptoms. Eventually, the body clears the parasites from the blood and the paroxysms become less and less severe until they subside. The symptoms subside even in untreated cases within a month, but may also recur. There have been known cases of relapses in P. vivax and P. ovale, corresponding to the inactive liver stages occasionally releasing parasites into the bloodstream and causing recurrence of infection.
The symptoms caused by Plasmodium falciparum are usually severe and can be fatal. The incubation period of P. falciparum is around 10 to 14 days. The symptoms are the same, with high-grade fever, severe headache, chills and diaphoresis, along with anemia, dark-colored urine, drowsiness, delirium, confusion and convulsions. It can even lead to coma in cerebral malaria (in which the blood vessels of the brain may also swell), the most life-threatening complication of malaria, which can be fatal in infants, pregnant women and travelers to high-risk areas (basically, those who have little immunity to the infection).
In P. falciparum too, there are paroxysmal symptoms that coincide with the destruction of red blood cells and release of parasites (around 48 hours) into the bloodstream. However, in P. falciparum the paroxysms are not so well defined and also there are increased numbers of parasite released into the bloodstream, which increase the severity of the infection with P. falciparum.
The other complications of P. falciparum malaria may include:
- Organ failure mainly kidney, liver or spleen.
- Anemia due to destruction of large amount of red blood cells.
- Pulmonary edema: It causes fluid accumulation in the lungs that causes respiratory distress making it hard to breathe. However, this is a rare complication.
- Hypoglycemia: Low blood sugar (this may happen in other forms of malaria too).
Untreated P. falciparum malaria can be fatal and one should see a doctor when one notices the above signs and symptoms even if they have been treated prophylactically. |
Better Farming Series 04 - The Soil: How the Soil is Made up (FAO - INADES, 1976, 37 p.)
· Air must circulate in the soil.
The microbes, which are living things, need air to breathe.
To live, they decompose the organic matter in the soil.
If there is no air, the microbes cannot breathe.
They cannot change the organic matter into humus.
Roots too need air to breathe.
Without air, roots die.
They cannot go on feeding the plant (see Booklet No. 1, page 28).
· How to give the soil air.
When you work the soil, air enters into the soil.
If there is too much water the air does not circulate well.
Water prevents air entering the soil.
So ditches are made to get rid of the surplus water.
If the soil structure is good the air circulates well.
To get a good soil structure, there must be humus.
Humus makes it easier for air to circulate in the soil.
Electrical Engineering ⇒ Topic : Magnetic force
Magnetic force is the force exerted by one magnet on another to attract or repel it.
Electrostatic forces were discussed in an earlier chapter, where it was stated that if a charge q is placed in an electric field of intensity E, then the electrostatic force experienced by the charge is
Fe = q. E newtons ...........(1)
The force Fe acts along E, and the charge experiences the force whether it is at rest or moving.
In a similar way, a moving charge q in a magnetic field will experience a magnetic force given by
Fm = q(u x B) newtons ............ (2)
Here, q is the charge in Coulombs,
u is the velocity of the moving charge in m/s, and
B is the magnetic flux density in Wb/m2.
The direction of the force is given by the cross product (u x B) and will be perpendicular to the plane determined by u and B as shown in Fig. (a). The magnitude of the force Fm is given as
Fm = q.u.B sin θ ...............(3)
where, θ is the angle between the vectors u and B.
The following cases can be visualised:
Case (a) When θ = 0 : When the charge q moves along the magnetic field, sin θ becomes zero and the force experienced by the charge also becomes zero.
Case (b) When θ = π/2: When θ = π/2, the force is in a direction normal to the direction of the magnetic field and the force experienced by the charge is maximum. The maximum force is given as
Fm = q.u.B newtons
and this acts normal to the velocity vector. The acceleration of q is given by
a = Fm / m
where m is the mass of the charged particle.
FIGURE (a) Illustrating Fm = q(u x B)
The acceleration acts along Fm. Since the acceleration and the velocity vectors are mutually perpendicular, the component of the acceleration along u is zero. Therefore, there is no change in the speed of the charged particle while it is in the magnetic field; only its direction changes. This implies that there is no change in the kinetic energy of the particle. In an electric field, by contrast, the charge is accelerated in the direction of the field E and the speed of the particle increases continuously; the particle acquires kinetic energy from the electric field.
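Equations (2) and (3) can be checked numerically. The sketch below evaluates the cross product F = q(u x B); the charge and field values are arbitrary illustrative numbers, not taken from the text:

```python
import math

def magnetic_force(q, u, B):
    """F = q (u x B) for 3-component vectors u (m/s) and B (Wb/m^2), in newtons."""
    ux, uy, uz = u
    Bx, By, Bz = B
    return (q * (uy * Bz - uz * By),
            q * (uz * Bx - ux * Bz),
            q * (ux * By - uy * Bx))

# Case (a): u parallel to B -> zero force
print(magnetic_force(1.6e-19, (1e5, 0, 0), (0.5, 0, 0)))  # (0.0, 0.0, 0.0)

# Case (b): u perpendicular to B -> |F| = q.u.B
F = magnetic_force(1.6e-19, (1e5, 0, 0), (0, 0, 0.5))
print(math.hypot(*F))  # ~8e-15 N, matching q.u.B sin(pi/2)
```

Note that in case (b) the force vector is perpendicular to u, consistent with the argument above that the speed, and hence the kinetic energy, of the particle is unchanged.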
When do you use a hyphen?
Use a hyphen to connect two words or names.
The hyphen is often confused with the dash (which is represented by two hyphens in succession, or one really long hyphen). They have different functions. The hyphen connects two words (like nation-state) or names (Jonathan Rhys-Davies, for example, or Austria-Hungary). If you have to split a word to finish it on the next line, a hyphen indicates that it is still the same word.
Normally, there is no space before or after the hyphen, unless you have a pair or list of hyphenates where one word is used repeatedly ("Dada was an anti-war and anti-commerce movement" becomes "Dada was an anti-war and -commerce movement"; "The Avengers were a group of super-heroes and anti-heroes" becomes "The Avengers were a group of super- and anti-heroes").
By the end of this course learners will:
• Understand the key stages of development children may go through within maths, relating to numbers
• Recognise how to support number with the needs of the unique child
• Recognise how to support number with enabling environments
• Recognise how to support number with positive relationships
• Learn different ways to integrate number into other areas of learning and areas of the setting through these approaches.
Who should complete this course?
Suitable for all early years practitioners working with young children in England. This course complements and builds on the knowledge provided in the Maths in Early Years for England and 'Let's look at...' Maths: Shape, Space and Measure courses.
Buy our Maths in Early Years Package (5 courses) and Save 15%.
Buy now from the NDNA online shop. |
Causes and Concerns
Soil compaction is caused when an external force applies pressure to soil, which reduces porosity, destroys structure, limits water infiltration, reduces air circulation and increases resistance to the penetration of roots. This condition can diminish crop yield and plant vigor. While many people are aware of the negative effects soil compaction can have, they may underestimate its significance. Soil conditions contribute largely to plant health or disorders in a landscape setting. Overly compact soil can reduce the amount of root growth and consequently limit the ability of plants to take up water and nutrients. If it has been a dry growing season, compacted soil can increase the symptoms of drought stress. During wet years soil aeration will become difficult, leading to loss of nitrate-nitrogen to the atmosphere (denitrification), which will present as symptoms of nutrient deficiency but may require more than just fertilizer to reverse. This condition can be very difficult to correct, so efforts need to be made to prevent compaction from occurring. The most common causes of soil compaction on residential land are home construction and foot traffic. Fine-textured and clayey soils can also be affected by raindrops and sprinkler irrigation.
There are several cultural practices that can be employed on home landscapes to prevent or minimize soil compaction:
- Addition of Organic Matter- Amending soil with organic matter within the first 6 to 8 inches is ideal, for clay type and compacted soils any less will lead to a shallow root system, reduced growth and vigor and lower stress tolerance. Discuss your options and plan with your arborist to determine timing and composition for amendments.
- Traffic Flow Management- Heavy foot traffic through your gardens and landscapes can be a major contributor to soil compaction. Moist soil can be compacted by up to 75% the first time it is trodden upon. Implementing raised garden beds and establishing walkways can prevent this problem. Limiting foot traffic on clayey soils and on wet soil will also aid in preventing compaction.
- Mulching- Using the right type and amount of mulch can reduce the force of compaction. Mulch will also mitigate any compaction caused by rainfall or irrigation by absorbing the shock before it contacts the soil.
- Mechanical Aeration- Aerating your lawn and around trees will reduce soil compaction, consult with your arborist regarding when and where aerating should occur.
- Moderate Moisture- Avoid cultivating overly moist or dry soils. Irrigate the appropriate amount for your landscape, the time of year and your plants. Consult with your arborist for advice specific to your property.
- Equipment Placement- If you have upcoming home construction projects requiring material stockpiling and heavy equipment, carefully plan where these materials will be stored or placed. Avoid placing or storing anything heavy in the root protection zone of your trees; this can comprise an area larger than the dripline. Ask your arborist to evaluate your property prior to construction commencement, together you can develop a plan to ensure the success and health of your trees throughout the project and beyond. |
School Aims and Values
Equality - Resilience - Excellence - Teamwork - Respect - Independence - Creativity - Problem Solving
As a school we aim to ensure that;
- Each child is valued for their individual contributions and develops a positive attitude towards everyone in the life of the school and community.
- Each child develops an understanding of citizenship and their role in the local, national and global community.
- Each child develops high self-esteem, confidence and a true feeling of self-worth and develops a sense of responsibility.
- Each child appreciates the spiritual nature of life.
- Each child acquires a set of moral values and attitudes including honesty, respect, sincerity, trust and personal responsibility.
- Each child develops understanding and mutual respect of other religions, races, cultures, gender, people with disabilities and associated points of view.
- Each child develops a lively, enquiring mind and life skills so that he/she will have the ability to experiment, think independently, investigate, take risks, challenge, discriminate and make informed choices whilst at school and in their later life.
- Each child develops the skills and attitudes necessary to work both independently and collaboratively.
- Each child is able to respond positively as a learner to all aspects of the curriculum and performs at a level of competency in all areas with confidence and enthusiasm.
- Each child will be enriched, motivated and challenged by a broad and balanced curriculum and will be valued for all their efforts and achievements.
- Each child will be given equal opportunities to participate in all aspects of school life. |
Astronomy is the oldest as well as the most prestigious of the mathematical sciences. Observing the heavens for the purposes of predicting eclipses and other phenomena occurred in the time of the Babylonians, if not earlier. Well before the beginning of the Christian era, astronomy was a demanding technical enterprise. It required long training and intense dedication of its practitioners. In this module, the computer will stand in for that training, and provide you with the theoretical toolkit possessed by a practitioner of Ptolemaic astronomy.
For much of its history, planetary astronomy, at least, has been dominated by a relatively simple set of conceptual tools. Any late-medieval or Renaissance astronomer was familiar with these tools. This module recreates the practice of astronomical theorizing pursued by such an astronomer.
What does an astronomer do, and why does he or she do it? In this period, an astronomer's duty was to predict planetary positions and eclipses. To achieve those ends, he might have to construct both observational instruments and theoretical constructs. By contrast, there were some surprising things that an astronomer did not do. He did not concern himself with the nature of the heavens and heavenly bodies themselves - at anything beyond a fairly elementary level, at least. He was regarded as unqualified to speculate extensively on the causes of celestial motions, nor, indeed, to probe far beyond the numerical figures themselves that he derived from observed planetary positions. And he was not very concerned, even within this calculational work, about the actual paths followed by the planets through the heavens. These were matters on which the mathematical techniques of astronomy could yield no certain or authoritative knowledge. They were more appropriate to the philosopher than the mathematician, since they related to real, physical entities, not abstracted, numerical ones. An astronomer was correspondingly something of a subordinate figure. His was a service industry, dedicated to providing dates and times for others of higher status (physicians, churchmen, philosophers) to put to use.
Ptolemaic astronomers thus avoided controversial speculation on the nature and mechanisms of the heavens. But their basic assumptions were nonetheless supposed to be compatible with natural philosophy -- and in particular the natural philosophy of Aristotle.
As seen here in a hand-colored image from a seventeenth-century atlas, Aristotelian natural philosophy portrayed a cosmos with the Earth stationary at its center. The four elements of earth, water, air, and fire all had their own proper spheres concentric to the earth, followed by the sphere of the Moon. This marked the boundary between the "sublunary" world, in which things came into being, changed, and died, and the "superlunary" realm, in which things were eternal. Beyond lay the planets, or "wandering stars," which moved around the earth in perfect spheres. The sphere of the fixed stars contained all of these, with nothing beyond it except God. Christians changed the gloss slightly to assert that the cosmos as a whole was not eternal - it had been created by God, they insisted, and would eventually suffer annihilation at his hand - but they kept the natural-philosophical principles largely intact. On this basis, Christianized Aristotelianism provided coherent and largely convincing knowledge of natural processes for some 450 years, from the reintroduction of Greek philosophy in around 1200 to its eclipse in the "Scientific Revolution" around 1600-1650.
But astronomers were not philosophers. They accepted the basic structure of the Aristotelian cosmos, but did not see their task as one of explaining its nature. Their role was to predict significant celestial events (like eclipses and conjunctions ), provide astrological forecasts, and identify propitious days on which to administer medicines. For such purposes Aristotelian cosmology proved not so much inadequate as inappropriate. Astronomers instead developed their own, rich, mathematical tradition. But the result was a multiplicity of different theories, all unique, but all properly called "Ptolemaic" because they embodied the theoretical devices of the Almagest.
What was it like to pursue this kind of enterprise - an enterprise very different from the modern science of astronomy, and yet of which that modern science is the descendant? In this module you are given the chance to find out. You yourself become a Ptolemaic astronomer. The computer provides you with the theoretical armoury an astronomer possessed as a result of his training at university or through dedicated reading. You are faced with an observed path of a planet. Your task is effectively that faced by any astronomer of the Middle Ages or Renaissance seeking to placate his prince or advance in his university. Can you achieve success?
You have probably left the simulation with your theoretical model matching the observed motions well in some places, but with distressing divergences in others. In that, your experience has corresponded fairly well to those of Ptolemaic astronomers in medieval Islam and Renaissance Europe. Exercising the kinds of judgment they exercised should also have served to fix in your mind some characteristics of astronomy as it was then defined that may seem puzzling to a modern user. And it may well have raised questions that would not have occurred to a real Ptolemaic astronomer at all. These questions reach to the very definition of astronomy - and they profoundly affect how we think of its history.
One of the most difficult practical questions for an astronomer doing this kind of work was this: when do I stop? By combining equant, deferent, and epicycle, you will be able to match the motions of a planet fairly closely. But how closely is sufficient? And how do you know that the degree of approximation you choose is the right one? Bear in mind that much may hang on your decision, from the date of Easter to the moment when you take your medicine. In fact, the decision to stop doing astronomy marked the moment when a new theory came into being.
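The kind of construction involved can be sketched numerically. The minimal deferent-plus-epicycle model below uses arbitrary illustrative radii and angular speeds, not any historical planetary theory, and omits the equant for brevity:

```python
import math

def ptolemaic_position(t, R=1.0, r=0.3, w_def=1.0, w_epi=5.0):
    """Planet position (Earth at the origin) from a deferent of radius R,
    traversed at angular speed w_def, carrying an epicycle of radius r
    traversed at angular speed w_epi."""
    cx = R * math.cos(w_def * t)      # centre of the epicycle on the deferent
    cy = R * math.sin(w_def * t)
    x = cx + r * math.cos(w_epi * t)  # planet riding on the epicycle
    y = cy + r * math.sin(w_epi * t)
    return x, y

# At t = 0 the planet sits at deferent radius plus epicycle radius
print(ptolemaic_position(0))  # (1.3, 0.0)
```

Tracing this position over time produces the looping paths, including retrograde episodes, that an astronomer would adjust by varying R, r, and the two angular speeds until the trace matched observation closely enough; the problem of deciding "closely enough" is exactly the stopping problem described above.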
But this was not the end of the problem. Not only was the appropriate degree of exactitude obscure; there were also an indefinite number of ways of achieving it. With Ptolemaic assumptions and tools, an astronomer could "save" the appearances of a planet in an infinite number of ways. Who was to decide which of these ways should be preferred, and be accounted an astronomical theory? On what grounds?
It is particularly evident, then, that given a practice of this kind, the knowledge that Ptolemaic astronomers prized as their achievement was of a peculiar sort by the lights of modern science. It could be very precise, and accurate to the nth degree, and there was widespread agreement about the legitimacy of Ptolemaic assumptions in general - yet there was no guarantee as to the physical truth of any theory in particular. This reflects very important characteristics about both the natural world of the Renaissance and the social world of the astronomers.
In the sixteenth century awareness of such conundrums gave rise to fierce controversy. Nicolaus Copernicus wrote De Revolutionibus (On the Revolutions) out of conviction not only that the earth was in motion, but that certain knowledge of astronomical motions was possible.
Ptolemaic astronomy could produce a planetary trace matching observations in any number of ways. But the theories might not be equivalent after all. The actual paths followed through space by the planets carried on all those epicycles and deferents differed widely. Some of these planets had orbits so bizarre that they could not really be accounted orbits at all. Could this not provide grounds for choosing between rival proposals?
In practice, it often could not. The reason for this lies in what Ptolemaic astronomers knew about both the physical and the social worlds.
It seems obvious today that mathematics has a principal part to play in understanding natural phenomena. But in Renaissance Europe this was widely doubted. Aristotelian philosophers in particular tended to regard the making of claims about nature on the basis of mathematical arguments with suspicion. It seemed to them to embody a fundamental category mistake. The point of natural philosophy was to consider nature in its substantial whole, and to provide causal accounts of why phenomena occurred as they did; mathematics treated quantity - at best but one aspect of such phenomena - and had nothing to say on the crucial matter of causation. The mathematical sciences were therefore "mixed," in that they applied mathematical techniques to inappropriate, non-mathematical objects. They might produce tangible effects, but not real philosophical knowledge.
Most Ptolemaic astronomers would have agreed. Theirs was a mathematical enterprise. That is, it did not advance philosophical claims about the nature of the heavens, the heavenly bodies, and their motions. It was not that they did not see the value of seeking the truth about such matters; it was only that mathematics was not the enterprise for doing so. The right enterprise for such seekers was natural philosophy. Natural philosophers were thus more prestigious than astronomers (and all other mathematicians). They received greater salaries and more renown.
Finding out about Ptolemaic practice thus not only reveals important information about astronomy - showing what a different enterprise it was in the Renaissance from the science that now goes by the same name. It also tells us something important about the natural philosopher's role, too. And it shows how those two distinct disciplines resulted in the nonexistence of something that to us seems a self-evident presence in the world. In fact, it is not just that the more crazy planetary paths generated by Ptolemaic mechanisms cannot be accounted orbits. None of the paths are orbits. Even when Copernicus published his De Revolutionibus, working astronomers could comfortably transform his models into a system with a stationary Earth because for someone pursuing this practice the reality of these paths required no commitment. Only when Johannes Kepler overthrew the entire practical enterprise of Ptolemaic astronomy did the concept of an orbit begin to have some meaning. Yet even Kepler owed something to Ptolemy. He rejected existing astronomical theories - and the entire enterprise of which they were the product - partly by reviving a very ancient idea that the universe must exhibit harmony, and by insisting that it was his role to understand this harmony. And Kepler was convinced that Ptolemy had known this long before him. He tried hard to recover and complete a long-neglected text by the ancient astronomer on the subject of harmony itself - a text which, he believed, would reveal the commitment of Ptolemy to identifiably similar views. The ravages both of time and of the religious wars of Kepler's own age foiled this plan to revive Ptolemy for yet another age.
A glycemic index diet is an eating plan based on how foods affect your blood sugar level.
The glycemic index is a system of assigning a number to carbohydrate-containing foods according to how much each food increases blood sugar. The glycemic index itself is not a diet plan but one of various tools — such as calorie counting or carbohydrate counting — for guiding food choices.
The term "glycemic index diet" usually refers to a specific diet plan that uses the index as the primary or only guide for meal planning. Unlike some other plans, a glycemic index diet doesn't necessarily specify portion sizes or the optimal number of calories, carbohydrates, or fats for weight loss or weight maintenance.
Many popular commercial diets, diet books and diet websites are based on the glycemic index, including the Zone Diet, Sugar Busters and the Slow-Carb Diet.
The purpose of a glycemic index (GI) diet is to eat carbohydrate-containing foods that are less likely to cause large increases in blood sugar levels. The diet could be a means to lose weight and prevent chronic diseases related to obesity such as diabetes and cardiovascular disease.
Why you might follow the GI diet
You might choose to follow the GI diet because you:
- Want to lose weight or maintain a healthy weight
- Need help planning and eating healthier meals
- Need help maintaining blood sugar levels as part of a diabetes treatment plan
Studies suggest that a GI diet can help achieve these goals. However, you might be able to achieve the same health benefits by eating a healthy diet, maintaining a healthy weight and getting enough exercise.
Check with your doctor or health care provider before starting any weight-loss diet, especially if you have any health conditions, including diabetes.
The glycemic index
The GI principle was first developed as a strategy for guiding food choices for people with diabetes. An international GI database is maintained by Sydney University Glycemic Index Research Services in Sydney, Australia. The database contains the results of studies conducted there and at other research facilities around the world.
A basic overview of carbohydrates, blood sugar and GI values is helpful for understanding glycemic index diets.
Carbohydrates, or carbs, are a type of nutrient in foods. The three basic forms are sugars, starches and fiber. When you eat or drink something with carbs, your body breaks down the sugars and starches into a type of sugar called glucose, the main source of energy for cells in your body. Fiber passes through your body undigested.
Two main hormones from your pancreas help regulate glucose in your bloodstream. The hormone insulin moves glucose from your blood into your cells. The hormone glucagon helps release glucose stored in your liver when your blood sugar (blood glucose) level is low. This process helps keep your body fueled and ensures a natural balance in blood glucose.
Different types of carbohydrate foods have properties that affect how quickly your body digests them and how quickly glucose enters your bloodstream.
Understanding GI values
There are various research methods for assigning a GI value to food. In general, the number is based on how much a food item raises blood glucose levels compared with how much pure glucose raises blood glucose. GI values are generally divided into three categories:
- Low GI: 1 to 55
- Medium GI: 56 to 69
- High GI: 70 and higher
Comparing these values, therefore, can help guide healthier food choices. For example, an English muffin made with white wheat flour has a GI value of 77. A whole-wheat English muffin has a GI value of 45.
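These cutoffs translate directly into a small helper. The Python sketch below buckets a GI value using the category boundaries listed above (the function name is just illustrative):

```python
def classify_gi(gi):
    """Bucket a glycemic index value using the standard cutoffs:
    low (1-55), medium (56-69), high (70 and up)."""
    if gi < 1:
        raise ValueError("GI values start at 1")
    if gi <= 55:
        return "low"
    if gi <= 69:
        return "medium"
    return "high"

print(classify_gi(77))  # white-flour English muffin -> high
print(classify_gi(45))  # whole-wheat English muffin -> low
```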
Limitations of GI values
One limitation of GI values is that they don't reflect the likely quantity you would eat of a particular food.
For example, watermelon has a GI value of 80, which would put it in the category of food to avoid. But watermelon has relatively few digestible carbohydrates in a typical serving. In other words, you have to eat a lot of watermelon to significantly raise your blood glucose level.
To address this problem, researchers have developed the idea of glycemic load (GL), a numerical value that indicates the change in blood glucose levels when you eat a typical serving of the food. For example, a 4.2-ounce (120-gram, or 3/4-cup) serving of watermelon has a GL value of 5, which would identify it as a healthy food choice. For comparison, a 2.8-ounce (80-gram, or 2/3-cup) serving of raw carrots has a GL value of 2.
Sydney University's table of GI values also includes GL values. The values are generally grouped in the following manner:
- Low GL: 1 to 10
- Medium GL: 11 to 19
- High GL: 20 or more
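Glycemic load is commonly computed as the GI multiplied by the grams of available carbohydrate in the serving, divided by 100. The Python sketch below applies that formula to the watermelon example; the figure of about 6 grams of carbohydrate per 120-gram serving is an approximation used only for illustration:

```python
def glycemic_load(gi, carb_grams):
    """Glycemic load = GI x grams of available carbohydrate / 100."""
    return gi * carb_grams / 100.0

def classify_gl(gl):
    """Bucket a glycemic load value using the cutoffs above:
    low (1-10), medium (11-19), high (20 or more)."""
    if gl <= 10:
        return "low"
    if gl <= 19:
        return "medium"
    return "high"

# Watermelon: GI 80, roughly 6 g of available carbohydrate in a
# 120 g serving (the carb figure is an approximation).
gl = glycemic_load(80, 6)
print(round(gl, 1), classify_gl(gl))  # 4.8 low
```

Despite the high GI, the small amount of carbohydrate per serving keeps the load in the "low" bucket, which is exactly the point the watermelon example makes.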
A GI value tells us nothing about other nutritional information. For example, whole milk has a GI value of 31 and a GL value of 4 for a 1-cup (250-milliliter) serving. But because of its high fat content, whole milk is not the best choice for weight loss or weight control.
The published GI database is not an exhaustive list of foods, but a list of those foods that have been studied. Many healthy foods with low GI values are not in the database.
The GI value of any food item is affected by several factors, including how the food is prepared, how it is processed and what other foods are eaten at the same time.
Also, GI values measured for the same food can vary from study to study, and some would argue this variability makes the index an unreliable guide for determining food choices.
A GI diet prescribes meals primarily of foods that have low values. Examples of foods with low, middle and high GI values include the following:
- Low GI: Green vegetables, most fruits, raw carrots, kidney beans, chickpeas, lentils and bran breakfast cereals
- Medium GI: Sweet corn, bananas, raw pineapple, raisins, oat breakfast cereals, and multigrain, oat bran or rye bread
- High GI: White rice, white bread and potatoes
Commercial GI diets may describe foods as having slow carbs or fast carbs. In general, foods with a low GI value are digested and absorbed relatively slowly, and those with high values are absorbed quickly.
Commercial GI diets have varying recommendations for portion size, as well as protein and fat consumption.
Depending on your health goals, studies of the benefits of GI diets have produced mixed results.
Results of a 16-year study that tracked the diets of 120,000 men and women were published in 2015. Researchers found that diets with a high GL from eating refined grains, starches and sugars were associated with more weight gain.
Other studies show that a low GI diet may also promote weight loss and help maintain weight loss. However, data from another study indicated a substantial range in individual GI values for the same foods. This range of variability in GI values makes for an unreliable guide when determining food choices.
Blood glucose control
Studies show that the total amount of carbohydrate in food is generally a stronger predictor of blood glucose response than the GI. Based on the research, for most people with diabetes, the best tool for managing blood glucose is carbohydrate counting.
Some clinical studies have shown that a low-GI diet may help people with diabetes control blood glucose levels, although the observed effects may also be attributed to the low-calorie, high-fiber content of the diets prescribed in the study.
Reviews of trials measuring the impact of low-GI index diets on cholesterol have shown fairly consistent evidence that such diets may help lower total cholesterol, as well as low-density lipoproteins (the "bad" cholesterol) — especially when a low-GI diet is combined with an increase in dietary fiber. Low- to moderate-GI foods such as fruits, vegetables and whole grains are generally good sources of fiber.
One theory about the effect of a low-GI diet is appetite control. The thinking is that high-GI food causes a rapid increase in blood glucose, a rapid insulin response and a subsequent rapid return to feeling hungry. Low-GI foods would, in turn, delay feelings of hunger. Clinical investigations of this theory have produced mixed results.
Also, if a low-GI diet suppresses appetite, the long-term effect should be that people choose to eat less and manage their weight better. The long-term clinical research does not, however, demonstrate this effect.
The bottom line
In order for you to maintain your current weight, you need to burn as many calories as you consume. To lose weight, you need to burn more calories than you consume. Weight loss is best done with a combination of reducing calories in your diet and increasing your physical activity and exercise.
Selecting foods based on a glycemic index or glycemic load value may help you manage your weight because many foods that should be included in a well-balanced, low-fat, healthy diet with minimally processed foods — whole-grain products, fruits, vegetables and low-fat dairy products — have low-GI values.
For some people, a commercial low-GI diet may provide needed direction to help them make better choices for a healthy diet plan. The researchers who maintain the GI database caution, however, that the "glycemic index should not be used in isolation" and that other nutritional factors — calories, fat, fiber, vitamins and other nutrients — should be considered.
Aug. 25, 2020
- Augustin LSA, et al. Glycemic index, glycemic load and glycemic response: An International Scientific Consensus Summit from the International Carbohydrate Quality Consortium (ICQC). Nutrition, Metabolism & Cardiovascular Diseases. 2015;25:795.
- Matthan NR, et al. Estimating the reliability of glycemic index values and potential sources of methodological and biological variability. American Journal of Clinical Nutrition. 2016;104:1004.
- Bosy-Westphal A, et al. Impact of carbohydrates on weight regain. Current Opinion in Clinical Nutrition and Metabolic Care. 2015;18:389.
- Liu S, et al. Dietary carbohydrates. https://www.uptodate.com/home. Accessed May 27, 2017.
- GI foods advanced search. The University of Sydney. http://www.glycemicindex.com/foodSearch.php. Accessed May 27, 2017.
- Glycemic index and diabetes. American Diabetes Association. http://www.diabetes.org/food-and-fitness/food/what-can-i-eat/understanding-carbohydrates/glycemic-index-and-diabetes.html. Accessed May 27, 2017.
- Glycemic load. Glycemic Index Foundation. http://www.gisymbol.com/about/glycemic-load/. Accessed May 27, 2017.
- Frequently asked questions. The University of Sydney. http://www.glycemicindex.com/faqsList.php. Accessed May 27, 2017.
- Sun FH, et al. Effect of glycemic index of breakfast on energy intake at subsequent meal among healthy people: A meta-analysis. Nutrients. 2016;8:37.
- Dietary guidelines for Americans, 2015-2020. U.S. Department of Health and Human Services. https://health.gov/dietaryguidelines/. Accessed May 28, 2017.
- Smith JD, et al. Changes in intake of protein foods, carbohydrate amount and quality, and long-term weight change: Results from 3 prospective cohorts. American Journal of Clinical Nutrition. 2015;101:1216.
- Zeratsky KA (expert opinion). Mayo Clinic, Rochester, Minn. June 9, 2017.
- Roder PV, et al. Pancreatic regulation of glucose homeostasis. Experimental and Molecular Medicine. 2016;48:e219.
When should the measles vaccine be given? How effective is it?
Dr. Greene’s Answer:
The measles vaccine is an effective vaccine in preventing measles. After 2 doses of the measles vaccine, over 99% of recipients will be immune to measles. The initial dose of the measles vaccine is usually administered after a child is 12 months of age. The second dose is recommended at the age of kindergarten entry (i.e. age 4-6 years), but may be given any time 1 month after the first dose. In areas where measles is very common, the vaccine can be given as young as 6 months of age, but protection is suboptimal. In these children, repeat vaccination at 12-15 months and 4-6 years is recommended. When the vaccine is not completely effective, it at least minimizes the length, and particularly the severity, of the disease.
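The schedule above can be restated as a small calculation. The Python sketch below is illustrative only, not medical software: it computes the routine dose-1 date (12 months), the earliest allowable dose-2 date (1 month after dose 1), and the routine dose-2 window (ages 4 to 6) from a birth date.

```python
from datetime import date

def mmr_dose_windows(birth: date):
    """Sketch of the measles dose schedule described above: dose 1
    at 12 months; dose 2 routinely at kindergarten entry (age 4-6),
    but allowable any time at least 1 month after dose 1."""
    def add_months(d, months):
        # Simple month arithmetic; assumes the day of month exists
        # in the target month (fine for this illustration).
        y, m = divmod(d.month - 1 + months, 12)
        return d.replace(year=d.year + y, month=m + 1)

    dose1 = add_months(birth, 12)            # earliest routine dose 1
    dose2_earliest = add_months(dose1, 1)    # >= 1 month after dose 1
    dose2_routine = (birth.replace(year=birth.year + 4),
                     birth.replace(year=birth.year + 6))
    return dose1, dose2_earliest, dose2_routine

d1, d2_min, (d2_lo, d2_hi) = mmr_dose_windows(date(2020, 3, 15))
print(d1, d2_min, d2_lo, d2_hi)
```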
Measles has been a major cause of suffering and death at least since the societies of ancient China, Persia, and Rome. Measles epidemics ravaged Europe throughout the Middle Ages, and attacked the Americas beginning in 1657. Before the measles vaccine became generally available in 1965, there were 3 to 9 million cases of measles in the United States each year. It was a common cause of pneumonia, blindness, seizures, brain damage and death. From 1997-2004, the incidence of measles in the United States ranged from 37-116 cases per year (Redbook 2006), and serious complications are far less common. In developing countries, however, measles is still widespread, infecting almost all unimmunized children by the age of 4. The mortality rate in unimmunized children remains about 10%, and blindness is common if the child is unimmunized and malnourished.
The Hubble Space Telescope stands tall in the cargo bay of the space shuttle Atlantis following its capture and lock-down in Earth orbit on May 13, 2009 during the STS-125 mission.
In the last 20 years, the Hubble Space Telescope has revolutionized the way humanity views the universe. In many ways, it may have been the most influential telescope since Galileo peered at the night sky with one four centuries ago.
The greatest insights often make the world seem like a larger place than it was before. In Hubble's case, its most important and perhaps most confounding discovery accomplished just that, by revealing that the universe was expanding faster than anyone had known.
NASA launched the Hubble Space Telescope, a joint effort by NASA and the European Space Agency, on April 24, 1990 aboard the space shuttle Discovery to much fanfare that soon fell flat. A flaw in the telescope's optics gave it blurry vision and turned the iconic space telescope into a potential boondoggle in orbit.
But Hubble was built to be upgraded by astronauts riding NASA shuttles. In 1993, the first crew of space mechanics fixed the Hubble telescope's vision flaw, with four more maintenance and repair missions to follow.
NASA's last trip to Hubble was in May 2009, when the crew of shuttle Atlantis paid one final service call to the orbital observatory. They replaced Hubble's old batteries and worn out parts, revived broken cameras never designed to be fixed in space and added two new instruments. The result: A Hubble Space Telescope more powerful than ever.
Here's a look at some of Hubble's greatest astronomical achievements:
Hubble's greatest discovery
Scientists have dubbed the suspected culprit behind the universe's accelerating expansion "dark energy," and it is now thought to make up 74 percent of the combined mass-energy in the entire universe. In comparison, ordinary matter accounts for only 4.6 percent.
"The discovery of dark energy was extremely surprising, and is I think the greatest discovery it helped make," said astrophysicist Mario Livio at the Space Telescope Science Institute, the science operations center for the Hubble Space Telescope. "And we still don't have any idea really what it is. The nature of this dark energy is at some level the biggest problem that physics is facing today."
And Hubble did not only make the universe seem a larger place by showing us it was growing; the orbiting telescope also hinted there was a lot more for us to learn.
"It's generated a tremendous sense of humility because we've discovered how we understand so little about the universe, from dark energy to dark matter to how galaxies can change across 13 billion years of cosmic history," said Space Telescope Science Institute director Matt Mountain. "It's completely changed our perspective on the universe."
A surprising find
When Hubble was launched, one of its main missions was discovering when the universe was born. Before the orbiting telescope was deployed, the universe's age was highly uncertain, which left room for laughable possibilities, such as stars older than the universe.
By measuring the positions of distant galaxies more accurately than ever before, and how fast they are moving, Hubble greatly narrowed down the rate at which the universe is expanding, helping refine estimates of the universe's age down to roughly 13.75 billion years. However, by solving the mystery of the universe's age, it unexpectedly turned up an even more profound enigma: the universe's expansion is inexplicably accelerating, instead of slowing down as one might expect due to the pull of gravity from galaxies.
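The logic connecting the expansion rate to the universe's age can be sketched with a back-of-the-envelope calculation: if the expansion rate had always been constant, the age is roughly the inverse of the Hubble constant. The Python sketch below assumes a round-number value of about 70 km/s per megaparsec purely for illustration:

```python
# Rough "Hubble time" estimate: age ~ 1 / H0 if expansion were
# constant.  H0 ~ 70 km/s/Mpc is an assumed round-number value.
KM_PER_MPC = 3.0857e19       # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

def hubble_time_gyr(h0_km_s_mpc):
    """Age estimate in billions of years from a Hubble constant
    given in km/s per megaparsec."""
    seconds = KM_PER_MPC / h0_km_s_mpc   # 1/H0 in seconds
    return seconds / SECONDS_PER_YEAR / 1e9

print(round(hubble_time_gyr(70.0), 1))  # close to 14 billion years
```

A larger Hubble constant (faster expansion) yields a younger universe, which is why pinning down the expansion rate pinned down the age.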
There were earlier suggestions that a "cosmological constant" might exist with the effect of a repulsive force that acted against matter's gravitational attraction, with the most notable proposal coming from Einstein. Before Hubble, however, "without observations, no one took those speculations particularly seriously," Livio said.
The promise of dark energy
Solving the mystery of dark energy could revolutionize physics. It has prompted new theories regarding the origin of the universe, such as one where clashing membranes of reality trigger endless cycles of cosmic death and rebirth. It has also prompted speculation regarding the universe's fate, raising the possibility that dark energy ends the universe in a Big Rip.
Still, much remains unknown about dark energy. One idea is that it literally comes from empty space, from energy that quantum mechanics theorizes should exist in a vacuum.
The problem is that preliminary calculations as to how strong dark energy might be if it was a consequence of vacuum energy were an astounding 120 orders of magnitude greater than we actually see with dark energy. That is a 1 with 120 zeroes behind it.
"Even if you refine estimates further, you still miss the mark by more than 50 orders of magnitude, which is again ridiculous," Livio said. "Another possibility is that it's some sort of field, but we don't understand why that field should be there, and whether it might be related in some fashion to what caused inflation of the universe at its very beginning. A third possibility is that there's really no dark energy at all, but that we have to change our theory of gravity, that Einstein's theory of general relativity is not correct when we get to larger scales of the universe."
In each of those cases, "we're talking about a fundamental change in our understanding of physics, the very basic physical theory that governs the universe," Livio noted.
Not since Galileo
"It's fair to say that when you look back at history, Hubble will have had as much impact as Galileo's telescope," Mountain said. "No telescope has had the kind of public draw that Hubble's had. It's the way the public gets to participate. Everyone gets to see its pictures; you don't have to know how to read Latin to read Galileo's 'Starry Messenger' to find out what he was up to."
A great deal of Hubble's revolutionary impact comes from its staying power. "It's been serviced five times, and each such mission has allowed Hubble to renew itself with new instruments that make it almost a new telescope each time, so that it can keep making new discoveries," Livio said.
After the last mission that serviced Hubble, "we expect it to last at least another five years," Mountain said. "In principle, if we're lucky and smart, we may be able to celebrate Hubble's 30th birthday."
Ice ages occur due to continental movement and changes in ocean and atmospheric patterns. When the plates beneath the continents shift, warm ocean water is obstructed and can no longer reach the poles. This causes glaciers to grow.
The Earth has experienced at least five ice ages, each lasting millions of years. However, conditions fluctuate within an ice age: there are cooler periods, when glaciers advance, and warmer periods, when glaciers retreat, known as glacial and interglacial periods. In these glacial-interglacial cycles, the glacial periods last between 70,000 and 90,000 years, while the warmer interglacial periods last only 10,000 to 30,000 years. The cycles are caused by variations in the Earth's orbit and the tilt and wobble of its axis.
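The orbital variations mentioned above are often summarized as three roughly periodic components: the eccentricity of the orbit (about 100,000 years), the axial tilt (about 41,000 years) and the precession, or wobble, of the axis (about 23,000 years). The toy Python sketch below sums three cosines with those periods; the amplitudes are arbitrary weights chosen only to show how the cycles combine into an irregular forcing curve:

```python
import math

# Toy Milankovitch-style forcing: the three orbital cycles usually
# quoted (eccentricity ~100 kyr, tilt ~41 kyr, precession ~23 kyr).
# Amplitudes are arbitrary weights for illustration only.
CYCLES = [(100_000, 1.0), (41_000, 0.6), (23_000, 0.3)]

def orbital_forcing(year):
    """Relative, dimensionless forcing at a given year."""
    return sum(a * math.cos(2 * math.pi * year / p) for p, a in CYCLES)

# Sample the combined curve over 400,000 years at 1,000-year steps.
curve = [orbital_forcing(y) for y in range(0, 400_001, 1000)]
print(max(curve), min(curve))
```

Because the three periods are incommensurate, the summed curve never repeats exactly, which is one reason glacial-interglacial cycles are irregular rather than clock-like.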
Currently the world is in an ice age during an interglacial period that has lasted for the last 11,000 years. Scientists hypothesize that it began when the land connecting North and South America prevented tropical currents from flowing between the oceans.
Researchers study ice sheets in Greenland and Antarctica to better understand ice ages. The ice cores and landforms give many hints as to what the last ice ages were like and how long they lasted. In order to obtain these important ice cores, scientists drill deep into glaciers where the ice is old.
RNA Translation: the synthesis of a protein. (Lewis, 3) The biosynthesis of peptides and proteins on ribosomes, directed by messenger RNA, via transfer RNA. (MeSH)
The process in which messenger RNA is transported out of the “nucleus” and delivered to a "ribosome," itself composed of RNA and "proteins," where the information in the sequence of the "messenger RNA" will be used to generate a new protein molecule. "Transfer RNAs," complementary to each “base pair” triplet “codon” in the messenger RNA, deliver “amino acids” which are bonded together to form a protein chain. (Watson, 77-78) The information within a gene is ultimately translated into the sequence of amino acids in a "polypeptide." Translation requires many cellular components including a ribosome and two types of RNA molecules. (Brooker, 69) If the protein is "synthesized" on a 'free ribosome' (one that is free floating in the "cytoplasm"), it will probably be used in the cell. If the protein is synthesized on a ribosome that is located on the “endoplasmic reticulum” it will probably be placed in a “vesicle,” move through the “smooth ER” and the “golgi apparatus” for processing, and then be transported outside the cell. RNA translation interprets one language into another language - "nucleic acid" language into amino acid language. (Norman, 7/22/09) Also referred to as 'translation,' 'protein translation,' and 'genetic translation.'
RNA Translation Stages: the stages of protein synthesis in which the code on the mRNA molecule is used to control the production of a polypeptide chain by a ribosome. (Indge, 274) Editor’s note - RNA translation stages listed below in order of occurrence.
RNA Translation Initiation: mRNA strand becomes attached to the (ribosome) “small subunit.” The “large subunit” "binds" to the top of the small subunit with mRNA sandwiched between. (Norman, 7/22/09) Also referred to as 'initiation.'
Signal Recognition Particle: a protein/RNA complex that has two functions. First, it recognizes the endoplasmic reticulum (ER) signal sequence and pauses translation. Second, it binds to a "receptor" in the endoplasmic reticulum membrane, which docks the ribosome over a “channel.” At this stage the signal recognition particle is released and translation resumes. (Brooker, 117)
RNA Translation Elongation: mRNA is ‘read’ by the ribosome one codon at a time beginning at start sequence ‘AUG.’ Multiple ribosomes are working on a mRNA strand simultaneously. Each codon (3 nucleotides long) is translated into one amino acid. Marshall Nirenberg discovered the first codon (UUU) in 1961. (Norman, 7/22/09) Also referred to as 'elongation.'
RNA Translation Translocation: tRNA retrieves amino acid, with matching nucleic acid codon, from the cytoplasm and brings it to the large ribosome subunit, where it attaches to the ‘A-site’. A peptide bond forms between an amino acid on the polypeptide already in the ‘P-site,’ and the new amino acid. The polypeptide is transferred to the A-site. The ribosome moves one codon to the right. The now ‘amino acid-less” tRNA is released from the ‘E-site’ (‘exit site’). This process is repeated again and again until a stop codon (UAA, UGA, or UAG) is reached. (Norman, 7/22/09) Also referred to as 'translocation.'
RNA Translation Termination: a protein 'releasing factor’ binds to the ‘A’ site. This hydrolyzes the bond at the P-site breaking it, and the newly synthesized polypeptide is released. The ribosome subunits, mRNA, and releasing factor, dissociate. Note as the mRNA strand is being translated into amino acids, the polypeptides being produced are assuming structural "conformation." Note for proteins destined for the “ER” - during protein synthesis, the new protein is bound by the ‘signal recognition particle (SRP)’, which, in turn, binds to an ‘SRP receptor’ in the ER membrane. This anchors the ribosome to the ER. Note the polypeptide chains have an “N-terminus” and a “C-terminus.” (COOH) (Norman, 7/22/09) Also referred to as 'termination.'
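The elongation and termination steps described above can be sketched as a simple loop over codons. The Python sketch below uses a deliberately tiny codon table (a real but minimal subset of the standard genetic code) together with the stop codons UAA, UGA and UAG:

```python
# Minimal codon table: just enough of the standard genetic code to
# translate the example mRNA below (real assignments, tiny subset).
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "UGG": "Trp", "GCU": "Ala",
}
STOP_CODONS = {"UAA", "UGA", "UAG"}

def translate(mrna):
    """Find the AUG start codon, then read three bases at a time
    (elongation) until a stop codon triggers termination."""
    start = mrna.find("AUG")
    if start == -1:
        return []                      # no start codon: no protein
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        codon = mrna[i:i + 3]
        if codon in STOP_CODONS:
            break                      # termination: release the chain
        peptide.append(CODON_TABLE.get(codon, "Xaa"))  # Xaa = unknown
    return peptide

print(translate("GGAUGUUUGCUUGGUAAAA"))  # ['Met', 'Phe', 'Ala', 'Trp']
```

Note that UUU, the first codon deciphered by Nirenberg, codes for phenylalanine (Phe), as in the example above.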
Peppered Moths: An Example of Natural Selection
A species of moth in England called the peppered moth is found in two varieties: light gray and dark gray. The light gray version used to be far more common, but researchers observed that between 1848 and 1898 the dark colored ones were becoming more common. In fact, only 2% of the moths near one industrial city were light gray.
This change in moth coloration occurred at the same time that coal was becoming a major source of power in England. Coal is not a very clean energy source and burning vast quantities of it put large amounts of soot into the air in and near London and other industrial cities. The soot would settle over the land, buildings and even the trunks of trees. Tree trunks turned from light gray to black. Peppered moths are active at night but rely on places where they can blend in, avoiding predators, during the day. Light-colored peppered moths were no longer well camouflaged on the darkened tree trunks. The dark colored moths, however, were well camouflaged. Because predators were able to spot the light moths more easily, the dark moths were more likely to survive and reproduce. Eventually, moths in industrialized areas of England were predominantly the dark variety and moths in the non-industrialized regions (where tree trunks were still light in color) remained predominantly light gray in color.
Several scientific studies have tested the hypothesis that peppered moth coloration was due to natural selection. For example, a scientist named Kettlewell bred both varieties of moths and marked them so that he would know when he found them again. Then, he released some of each variety into a region where pollution was high, and some of each variety into a region where pollution was low. Kettlewell later went out to recapture as many of the moths as he could from both areas. He found more dark moths in the polluted area and more light gray moths in the low pollution area, suggesting that more of the dark ones survived in the soot covered industrial setting and more of the light colored ones survived where the tree trunks remained light in color. This supports the hypothesis that the change in moth color was caused by natural selection.
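Kettlewell's mark-release-recapture design can be mimicked with a toy simulation. In the Python sketch below, the survival and recapture probabilities are made-up numbers chosen only to illustrate the logic: equal numbers of marked dark and light moths are released into a polluted wood, dark moths survive predation at a higher assumed rate, and so more of them appear in the recapture sample.

```python
import random

def recapture_counts(released_each=500, seed=42,
                     survival=(("dark", 0.5), ("light", 0.25)),
                     recapture_rate=0.5):
    """Toy version of Kettlewell's polluted-wood experiment.
    Survival and recapture probabilities are assumed values."""
    rng = random.Random(seed)
    counts = {}
    for variety, p_survive in survival:
        # Each released moth independently survives predation...
        survivors = sum(rng.random() < p_survive
                        for _ in range(released_each))
        # ...and each survivor is independently recaptured.
        counts[variety] = sum(rng.random() < recapture_rate
                              for _ in range(survivors))
    return counts

print(recapture_counts())  # dark recaptures exceed light recaptures
```

Reversing the survival rates models the unpolluted wood, where the light variety would dominate the recapture sample instead.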
The peppered moth case is an example of natural selection. In this case, changes in the environment caused changes in the characteristics that were most beneficial for survival. The individuals that were well adapted to the new conditions survived and were more likely to reproduce. This particular type of natural selection, in which the frequencies of gene variants shift in a particular direction in response to a new factor in the environment, is called directional selection.
There are some unusual mechanisms that meet common needs in mechanical engineering. These tools serve a range of purposes, such as generating straight lines, transferring torque between non-coaxial shafts, self-centering steering, and reading mechanical punched cards.
Some mechanisms have special motion characteristics different from those of generic mechanisms. These mechanisms are used for special purposes and a few particular categories of motion, and are unusual enough to be called special mechanisms. Some common needs of mechanical engineering practice are:
- Generation of a straight line motion by linkage mechanism.
- Reproduction of a path traced by one point at another tracing point with a change in scale.
- Transfer of torque and motion between non-coaxial shafts with changing relative alignment.
- Automotive steering mechanisms and suspension mechanisms.
- Indexing: Intermittent timed motion.
Straight Line Mechanisms
Generation of straight line motion using linkage mechanisms has always been a common requirement in machine design practice. Although an exact straight line cannot be generated by simple mechanisms, some simple mechanisms are designed to produce an approximately straight line over a short range of motion. These approximate straight line mechanisms have important applications in machine design, and were used extensively in classical machines such as steam engines. Perfect straight lines can also be generated using linkage mechanisms, but those mechanisms are relatively complex.
There are two classes of straight line mechanisms:
- Approximate Straight Line Mechanisms
- Exact Straight Line Mechanisms
The straight line mechanisms were mostly developed in the days of the industrial revolution, when many machines required straight line paths in their operation, whether for guiding the pistons of engines or for operating valves. Straight line mechanisms were developed through continuous trial and error, with intelligent variations made to linkage mechanisms.
Approximate Straight Line Mechanisms
Watt's Straight Line Mechanism
Approximate straight line mechanisms can generate straight line motion to a good degree of accuracy over a short range. Such mechanisms are generally four bar linkage mechanisms. The straight line mechanism developed by James Watt to guide the pistons of steam engines along a straight line path is considered the best and simplest mechanism able to generate nearly straight line motion over a considerable distance. This mechanism is called Watt's straight line mechanism, or simply Watt's linkage.
Watt's linkage is a simple four bar mechanism of the double-rocker type, with the two rockers connected through a coupler. When the two rockers move, the midpoint of the coupler traces an almost straight line for motion close to the coupler's mean position. Anything hinged to the midpoint of the coupler is therefore constrained to move along a nearly straight path near the coupler's mean position.
This property of Watt's linkage is utilized in the rear axle suspension system of cars to prevent sideways motion of the car body relative to the rear axle.
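The near-straightness of the coupler midpoint can be checked numerically. The following is a minimal sketch (not from the original text): it models a Watt linkage as two circle constraints, sweeps one rocker, and measures how far the coupler midpoint wanders sideways. All link lengths and the sweep range are illustrative choices.

```python
import math

def circle_intersections(p0, r0, p1, r1):
    """Intersection points of two circles (centers p0, p1; radii r0, r1)."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    d = math.hypot(dx, dy)
    a = (r0**2 - r1**2 + d**2) / (2 * d)
    h = math.sqrt(max(r0**2 - a**2, 0.0))
    mx, my = p0[0] + a * dx / d, p0[1] + a * dy / d
    return [(mx + h * dy / d, my - h * dx / d),
            (mx - h * dy / d, my + h * dx / d)]

def watt_coupler_midpoints(rocker=2.0, coupler=1.0, half_sweep=0.25, steps=50):
    """Trace the coupler midpoint of a Watt linkage as rocker 1 sweeps."""
    o2 = (2 * rocker, -coupler)           # fixed pivot of rocker 2
    c_prev = (rocker, -coupler)           # coupler end C at the mean position
    points = []
    for i in range(steps + 1):
        theta = -half_sweep + 2 * half_sweep * i / steps
        b = (rocker * math.cos(theta), rocker * math.sin(theta))
        # C lies on circle(B, coupler) and circle(O2, rocker);
        # pick the intersection nearest the previous C for continuity.
        cands = circle_intersections(b, coupler, o2, rocker)
        c = min(cands, key=lambda p: (p[0]-c_prev[0])**2 + (p[1]-c_prev[1])**2)
        c_prev = c
        points.append(((b[0] + c[0]) / 2, (b[1] + c[1]) / 2))
    return points

pts = watt_coupler_midpoints()
xs = [p[0] for p in pts]
ys = [p[1] for p in pts]
x_dev = max(xs) - min(xs)      # sideways wander of the "straight" line
y_travel = max(ys) - min(ys)   # useful travel along the line
print(f"travel = {y_travel:.3f}, sideways deviation = {x_dev:.4f}")
```

With these proportions the midpoint travels roughly one unit along the line while drifting only a few thousandths of a unit sideways, which is why Watt's linkage served well enough to guide engine pistons.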
More than twenty five different tribes of Native Americans lived in the southwestern area of the United States. Many of these tribes lived in villages called pueblos and became known as the Pueblo people.
The Pueblo people lived in unusual houses made of adobe brick. Adobe is a type of mud which the people shaped into bricks and then let dry. Many adobe homes exist today in the Southwest.
Long ago, the adobe homes had no doors. The people entered through a type of trap door at the top. The homes were usually three or four stories high. The ground floor had no windows and was used for storage. Usually several adobe homes were centered around a plaza.
The Pueblo people were peaceful and seldom fought in wars. They were talented at crafts and the men of some of the tribes made beautiful jewelry. The women were very good at pottery and painting.
Pueblo people were also known for wearing special masks when they danced. These masks, called kachinas, represented the faces of dead ancestors. |
Sewers stitch fabric pieces together, and a garment is assembled
This is the main assembly stage of the production process, where sewers stitch fabric pieces together and a garment is assembled. Computerized sewing machines, though costly, can be programmed to sew a specific number of stitches to perform a standard operation, such as setting a zipper or sewing a collar. However, even though new machines mechanize and hasten the sewing process, sewing remains largely labour-intensive. There are four general types of sewing machines: single-needle machines, overlock machines, blind-stitch machines, and specialized machines. Single-needle machines are the most common, as are their operators. Because operating more complicated machines requires additional training, there is frequently an oversupply of single-needle operators and a shortage of sewers who can use other machines.
Sewers need to be familiar with many different types of fabric and how to stitch each, but they usually specialize in a particular fabric or a particular machine. Working with cotton knit fabrics is very different from working with denim, silk, or linen. Learning how to work with each fabric type is part of the usually informal training that sewers undergo. Sewers may also specialize in zipper-setting, embroidery, and other hand-stitching techniques.
Sewers may also affix labels. Certain labels identify the garment as belonging to a particular line and designer. Other labels inform the consumer of fabric content, care instructions, country of origin, size, or production by a union shop. |
The functions described here perform various operations on vectors and matrices.
Do a vector concatenation; this operation is written ‘x | y’ in a symbolic formula. See Building Vectors.
Return the length of vector v. If v is not a vector, the result is zero. If v is a matrix, this returns the number of rows in the matrix.
Determine the dimensions of vector or matrix m. If m is not a vector, the result is an empty list. If m is a plain vector but not a matrix, the result is a one-element list containing the length of the vector. If m is a matrix with r rows and c columns, the result is the list ‘(r c)’. Higher-order tensors produce lists of more than two dimensions. Note that the object ‘[[1, 2, 3], [4, 5]]’ is a vector of vectors not all the same size, and is treated by this and other Calc routines as a plain vector of two elements.
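Calc itself is implemented in Emacs Lisp, but the tagged-list representation just described is easy to sketch in another language. The following hypothetical Python sketch mirrors the ‘(vec 1 2 3)’ representation and the vec-length / mat-dimens behaviour described above, including the ragged-vector case; the function names and representation details here are illustrative, not Calc's actual code.

```python
def is_vec(x):
    """A Calc-style vector: a list whose first element is the tag 'vec'."""
    return isinstance(x, list) and len(x) >= 1 and x[0] == "vec"

def vec_length(v):
    # Analogue of vec-length: zero for non-vectors, the number of
    # elements (rows, for a matrix) otherwise.
    return len(v) - 1 if is_vec(v) else 0

def mat_dimens(m):
    # Analogue of mat-dimens: () for scalars, (n,) for plain vectors,
    # (rows, cols) for matrices, longer tuples for higher-order tensors.
    if not is_vec(m):
        return ()
    rows = m[1:]
    if rows and all(is_vec(r) for r in rows):
        inner = mat_dimens(rows[0])
        # A matrix requires all rows to share the same dimensions;
        # ragged nesting is treated as a plain vector of vectors.
        if all(mat_dimens(r) == inner for r in rows):
            return (len(rows),) + inner
    return (len(rows),)

m = ["vec", ["vec", 1, 2, 3], ["vec", 4, 5, 6]]
ragged = ["vec", ["vec", 1, 2, 3], ["vec", 4, 5]]
print(mat_dimens(m))       # (2, 3)
print(mat_dimens(ragged))  # (2,) -- ragged, so just a two-element vector
print(vec_length(m))       # 2 rows
```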
Abort the current function with a message of “Dimension error.” The Calculator will leave the function being evaluated in symbolic form; this is really just a special case of
Return a Calc vector with args as elements. For example, ‘(build-vector 1 2 3)’ returns the Calc vector ‘[1, 2, 3]’, stored internally as the list ‘(vec 1 2 3)’.
Return a Calc vector or matrix all of whose elements are equal to obj. For example, ‘(make-vec 27 3 4)’ returns a 3x4 matrix filled with 27's.
If v is a plain vector, convert it into a row matrix, i.e., a matrix whose single row is v. If v is already a matrix, leave it alone.
If v is a plain vector, convert it into a column matrix, i.e., a matrix with each element of v as a separate row. If v is already a matrix, leave it alone.
Map the Lisp function f over the Calc vector v. For example, ‘(map-vec 'math-floor v)’ returns a vector of the floored components of vector v.
Map the Lisp function f over the two vectors a and b. If a and b are vectors of equal length, the result is a vector of the results of calling ‘(f ai bi)’ for each pair of elements ai and bi. If either a or b is a scalar, it is matched with each value of the other vector. For example, ‘(map-vec-2 'math-add v 1)’ returns the vector v with each element increased by one. Note that using ‘'+’ would not work here, since defmath does not expand function names everywhere, just where they are in the function position of a Lisp expression.
Reduce the function f over the vector v. For example, if v is ‘[10, 20, 30, 40]’, this calls ‘(f (f (f 10 20) 30) 40)’. If v is a matrix, this reduces over the rows of v.
Reduce the function f over the columns of matrix m. For example, if m is ‘[[1, 2], [3, 4], [5, 6]]’, the result is a vector of the two elements ‘(f (f 1 3) 5)’ and ‘(f (f 2 4) 6)’.
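The broadcasting and reduction rules above can likewise be sketched. This hypothetical Python sketch (plain lists, no 'vec' tag) mimics the described semantics of map-vec-2, reduce-vec, and reduce-cols; it is an illustration of the behaviour, not Calc's implementation.

```python
from functools import reduce
import operator

def map_vec_2(f, a, b):
    """Sketch of map-vec-2 semantics: elementwise over two equal-length
    vectors, with a scalar on either side broadcast against the other."""
    a_vec, b_vec = isinstance(a, list), isinstance(b, list)
    if a_vec and b_vec:
        assert len(a) == len(b), "vectors must have equal length"
        return [f(x, y) for x, y in zip(a, b)]
    if a_vec:
        return [f(x, b) for x in a]
    if b_vec:
        return [f(a, y) for y in b]
    return f(a, b)

def reduce_vec(f, v):
    """Sketch of reduce-vec: a left fold over the elements (rows) of v."""
    return reduce(f, v)

def reduce_cols(f, m):
    """Sketch of reduce-cols: reduce each column of matrix m separately."""
    return [reduce(f, col) for col in zip(*m)]

print(map_vec_2(operator.add, [10, 20, 30], 1))             # [11, 21, 31]
print(reduce_vec(operator.add, [10, 20, 30, 40]))           # 100
print(reduce_cols(operator.add, [[1, 2], [3, 4], [5, 6]]))  # [9, 12]
```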
Return the nth row of matrix m. This is equivalent to ‘(elt m n)’. For a slower but safer version, use mrow. (See Extracting Elements.)
Return the nth column of matrix m, in the form of a vector. The arguments are not checked for correctness.
Return a copy of matrix m with its nth row deleted. The number n must be in the range 1 to the number of rows in m.
Flatten nested vector v into a vector of scalars. For example, if v is ‘[[1, 2, 3], [4, 5]]’ the result is ‘[1, 2, 3, 4, 5]’.
If m is a matrix, return a copy of m. This maps copy-sequence over the rows of m; in Lisp terms, each element of the result matrix will be eq to the corresponding element of m, but none of the cons cells that make up the structure of the matrix will be eq. If m is a plain vector, this is the same as copy-sequence.
Exchange rows r1 and r2 of matrix m in-place. In other words, unlike most of the other functions described here, this function changes m itself rather than building up a new result matrix. The return value is m, i.e., ‘(eq (swap-rows m 1 2) m)’ is true, with the side effect of exchanging the first two rows of m. |
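To make the in-place contract concrete, here is a hypothetical Python sketch of the swap-rows semantics: the matrix object itself is mutated and returned, so the result is the same object, not a copy. Rows are 1-based as in the description, and the 'vec' tag is omitted for brevity.

```python
def swap_rows(m, r1, r2):
    """Sketch of swap-rows semantics: exchange rows r1 and r2 (1-based)
    of m in place and return m itself rather than a new matrix."""
    m[r1 - 1], m[r2 - 1] = m[r2 - 1], m[r1 - 1]
    return m

m = [[1, 2], [3, 4], [5, 6]]
result = swap_rows(m, 1, 2)
print(result is m)  # True: same object, mutated in place
print(m)            # [[3, 4], [1, 2], [5, 6]]
```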
An intro to those devilish descriptors:
It seems that there is often confusion about descriptors – what they are and how to use them in your mixed methods or qualitative data analysis. Descriptors are powerful. They allow you to break out your qualitative work across demographic and other survey data for greater insights. When using descriptors in your qualitative data analysis, in addition to being able to analyze media files, you are able to break out the information that describes the source of your media files to see things from different perspectives and introduce new dimensions to your analysis. And since Dedoose allows for multiple sets of descriptors, you can add as many levels of analysis as your study needs.
Since descriptors are simply too powerful for you to ignore, or worse, use incorrectly, we are going to take this slow and steady. And we will count on your feedback and questions along the way. Most of the questions about descriptors we receive in the forum, on Facebook, Twitter, and via email are rooted in a lack of understanding of key terms and where to find them in Dedoose. So that is where we will start.
Our descriptor series will have multiple parts beginning with Part 1: The Lingo. Here we will give a brief overview of descriptors as well as a glossary, and image map showing where to find each key term in Dedoose. Next week we will be looking at how to create, delete, and edit descriptors manually. If you have any tips, tricks, or questions on this topic send them our way and we will do our best to address them in the coming weeks!
The brief overview: descriptors and your qualitative data analysis.
So, what are these descriptors anyway…? And what does it mean for my qualitative data analysis?
In Dedoose, descriptors are sets of information you use to identify and describe the sources of your resources/media (e.g., documents, video, audio, images). Generally, descriptors are the characteristics of the participants in your research (e.g., individuals, dyads, families), but can also be descriptions of settings in which observations are made (e.g., stores, schools, neighborhoods, cultures). The descriptor fields and variables that comprise each descriptor set may include demographic information, dates, scores from survey measures, test results, and any other information you gather that is useful in describing and distinguishing the source of your media—essentially your level(s) of analysis. Once you upload or manually add in your descriptor information, you ‘link’ each source with the corresponding media.
So, let’s look at an example that might be used in a qualitative data analysis. Say you are doing a study involving parents and children in which you have interviewed 10 children about reading at home. Based on your research questions and goals, you decide to look at three levels of analysis. You have information on each child (name, gender, ethnicity), the school they attend (e.g., location, faculty size, curriculum), and the district to which each school belongs (average family annual income, square miles of capture area, percent rural versus urban neighborhoods). Let’s look at the ‘Children’ set for now. For this descriptor set you might know the ID, name, gender, ethnicity, and home language of your participants. These will become the descriptor fields (or variables). Think of these fields as questions: What is your ID? What is your name? What is your gender? For each field you will list each participant’s answer, for example, ID #123, Dee Doose, female, Hispanic, and so on. These answers are called data points (or just data), and each specific response is a ‘value’; for example, the gender variable has two possible values, male or female (a categorical or option-list field), while ID can be any number that uniquely identifies a research participant.
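Dedoose itself is a point-and-click application, but the data model just described (fields as questions, descriptors as rows of values, links from media to descriptors) can be sketched as plain data to make the terms concrete. Everything below — the names, field labels, and file names — is purely illustrative and is not a Dedoose API.

```python
# Illustrative sketch of the descriptor data model only; not Dedoose code.
children_fields = ["ID", "Name", "Gender", "Ethnicity", "HomeLanguage"]

# Each descriptor is one row of values answering the field "questions".
children_descriptors = [
    {"ID": 123, "Name": "Dee Doose", "Gender": "female",
     "Ethnicity": "Hispanic", "HomeLanguage": "Spanish"},
    {"ID": 124, "Name": "Sam Sample", "Gender": "male",
     "Ethnicity": "White", "HomeLanguage": "English"},
]

# Linking ties a media file to one descriptor per descriptor set.
links = {"interview_123.doc": {"Children": 123}}

# Breaking analysis out by a field, e.g. grouping participants by gender:
by_gender = {}
for d in children_descriptors:
    by_gender.setdefault(d["Gender"], []).append(d["ID"])
print(by_gender)  # {'female': [123], 'male': [124]}
```

The "break out by descriptor field" step at the end is the payoff the blog describes: once interviews are linked to descriptors, any field can slice the qualitative excerpts into comparison groups.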
For each descriptor set the fields will change as will the possible values in the data. It is important to make sure you are in the proper descriptor set before adding, editing, or deleting fields or descriptors. In Part 2 and 3 of our blog series on descriptors we will outline the step by step process of creating/adding/editing fields for each set and variables for each descriptor.
But, how does this help my qualitative data analysis?
Descriptors allow you to take your analysis to another level. Once you add descriptors into Dedoose, you can link each participant (in this example, each child), and subsequent demographic information to the interview (e.g. document, video, or audio file) you wish to excerpt and code. This way, when you open up the interview media in the Media Workspace, you will be able to see the descriptor information in the top right corner of the screen in a green box labeled ‘Descriptors.’ Click on this box and you will see all the descriptor information linked to this particular interview. See below:
Selecting the descriptor icon will take you to the descriptor workspace. In the Descriptor Workspace there are three collapsible panels—the expanded ‘Set Fields’ panel, the expanded ‘Columns & Filters’ panel, and the collapsed ‘Charts’ panel (clicking on the arrows in any green panel header will serve to expand or collapse each panel).
Descriptor (SINGULAR): The data used to uniquely identify and describe the sources of your media (e.g., documents, video, audio, images). Commonly, these are the characteristics of your research participants (e.g., individuals, dyads, families), but can also be descriptions of settings in which observations are made (e.g., stores, schools, neighborhoods, cultures). When looking at the image above, you will see that a descriptor is the entire row of data. A Descriptor is also called a Case in some disciplines. The terms Case and Descriptor can be used interchangeably with terms like "individuals," for example, when your descriptors describe the individual participants in your study.
Descriptors/Cases (PLURAL): When used in the plural form, these words may refer to your total number of descriptors.
Descriptor Sets: These are containers in Dedoose that hold sets of descriptor fields and descriptors. Dedoose uses sets because you can have multiple sets in a study. Examples of descriptor sets include research participants, families, schools, other settings. Sets are most helpful when you have multiple levels of analysis – for example, your sets may include a school district, schools within each district, and students (or children) within each school.
Think of descriptor sets as folders, and descriptors as your files. Each set is a group that can be characterized. In the image above, the descriptor sets are: Children, School Districts, Schools etc. The image shows that the descriptor set labeled ‘Children’ is highlighted and therefore the descriptors shown in the image above are those that correlate to the ‘Children’ set. It is as though the folder marked ‘Children’ is open and we are able to view the “files” of descriptors.
A full list of the sets you have created are listed in the ‘Descriptor Sets’ panel at the top left of the Descriptor Workspace.
Descriptor Field: Think of fields as the questions you ask. What is your name? For example, “gender, ID, ethnicity” are all the descriptor fields. Also commonly referred to as ‘variables’ as individual cases ‘vary’ on their answer to the questions. The descriptor fields that comprise each descriptor set may include demographic information, dates, scores from survey measures, test results, and any other information you gather that is useful in describing the source of your resources—essentially your level(s) of analysis.
Dynamic Fields: These fields are used to show change over time. Dynamic fields should be used sparingly and only after careful review of your research questions. Typically we recommend using dynamic descriptors if you will speak to the same participant multiple times over the course of a study. In our example above you can see this field is labeled "Phase." Note: you can only enter data for dynamic descriptors when linking to a particular document. When you link to the document, a pop-up will reveal the phase options you can choose from. We will return to dynamic descriptors in a later installment of this series.
Data Point or Value: The data within each field for each descriptor. Using the question and answer analogy, the question might be, “What is your gender” for the gender field. The answer of “female” would be the data point or value. As fields can be of different types, i.e. number, text, data, option list, the values can be unique to each case (as with an ID number) or one of a set of valid values (as with ‘Income Level’ where the three choices might be ‘high,’ ‘moderate,’ and ‘low’).
Linked Media: This column shows how many times the given descriptor has been linked to a media file (e.g. to a document, video file, or audio file). Each document can only be linked to one descriptor per descriptor set unless a dynamic field is present for the purpose of showing change over time. In the image above, we have one dynamic descriptor field – for the phases of the project so we have two descriptors that have been linked to more than one media file. |