Understanding Sleep Apnea
Sleep apnea is a common sleep disorder that causes you to stop breathing momentarily while you are asleep. These pauses in breathing can last anywhere from 30 to 90 seconds, and sometimes longer. They can occur numerous times over a short period before a normal breathing pattern resumes, usually with a choking sound or a loud snort. The disorder is potentially serious and disrupts your sleep: as your breathing becomes shallower or is interrupted, you move out of deep sleep and into light sleep.
There are two common types of sleep apnea:
- Obstructive sleep apnea. This is the most common type of sleep apnea and generally occurs when the throat muscles relax and tissue obstructs the airway.
- Central sleep apnea. This occurs when the brain does not send the proper signals to the muscles that control your breathing.
Symptoms Associated with Sleep Apnea
Many people experience sleep apnea without realizing it, so it often goes undiagnosed. However, some common symptoms that could indicate sleep apnea include:
- Fatigue and excessive daytime sleepiness
- Morning headaches
- Abrupt awakenings with shortness of breath
- Loud snoring
- Cessation of breathing (witnessed by another)
- Concentration difficulties
If these symptoms become disruptive to your everyday routine (e.g., falling asleep at work or being unable to sleep well at night), you should consult your doctor about the possibility of sleep apnea. Untreated sleep apnea can lead to an increased risk of a number of health complications, including:
- High blood pressure
- Heart attack
- Heart failure
- Irregular heartbeats (arrhythmias)
- Daytime accidents due to fatigue
Causes of Sleep Apnea
Sleep apnea commonly occurs when the muscles in your throat relax, narrowing the airway so that you cannot take in an adequate amount of air when you breathe. It can also occur, in rarer cases, when your brain does not transmit the proper signals to the muscles that control breathing.
Your chances of developing sleep apnea are increased if:
- You are obese. Fat deposits around your upper airway can obstruct your air flow and breathing.
- You are male. Men are twice as likely to develop sleep apnea as women. (However, if a woman is obese or overweight, her chances are increased).
- You are older. Sleep apnea is more common in those who are over the age of 60.
- You smoke tobacco. Smoking increases the amount of inflammation and fluid retention in the upper airway.
- You consume alcohol or sedatives before sleep. These substances can cause the throat muscles to relax.
Treatment of Sleep Apnea
If you are struggling with obesity, sleep apnea is a common health concern that can arise. Maintaining a healthy body weight and a healthier lifestyle is key to treating most cases of sleep apnea. Obesity can be a difficult and debilitating disease, and surgical options such as sleeve gastrectomy or the LAP-BAND can reduce or resolve obesity-related disorders such as sleep apnea. Other treatments your doctor may recommend include breathing therapies, oral appliances to assist with your breathing, or surgery to open your airway.
If you are suffering from sleep apnea, Dr. Choi will determine the best treatment option for you based on your current lifestyle. |
Specialized Teaching Materials
“Quand apprendre, c’est être l’acteur conscient de son apprentissage”
Cuisenaire rods and words in color are among the different pedagogical materials used in class. They illustrate the importance given to « the subordination of teaching to learning »* and to activity based programs.
Simple problem solving stimulates children to observe, manipulate, explore, hypothesize, build and conclude through trial and error.
Important mathematical notions are compiled so as to create the foundation for new learning.
Used by the entire student body (Nursery through 5th grade) at Ecole Aujourd’hui, Cuisenaire rods contribute to the development of students’ mathematical thinking through the exploration of clear and tangible problems.
Words in Color
The underlying theory to the creation of this material is that learning occurs when acts of awareness are self-generated as opposed to learning through passive impregnation or imitation. The purpose of this material is to emphasize the exploration of language and an understanding of its functions.
The relationship between the written and spoken word is rendered tangible through the use of color and allows the student to focus on the phonemes. Each color corresponds to a particular phoneme and its different graphemes.
Each student progresses individually at his/her own pace. The student enriches his/her reading/writing by the physical act of pointing to the different colors and graphemes, thus integrating the particularities of spelling and conjugation so essential to the French language. This approach encourages each student:
- to use his/her mental capacities and previously acquired knowledge.
- to be aware of the work that needs to be undertaken.
- to be in a position of creativity and responsibility.
- to construct his/her own criteria and learning strategies.
- to manipulate language in a quick and simple manner.
- to explore the relationship between the oral and written language
- to ask questions regarding language
- to be constantly encouraged by relevant activities and realistic challenges.
English as a foreign language has been taught at l’école Aujourd’hui since its opening in 1975. Learning another language exposes the student to an alternative way of thinking. It encourages children to be globally aware.
Early and continual foreign language learning provides children with a useful tool for later in life. The presence of bilingual classmates (roughly 25% English speaking and 20% other languages) motivates children to find out more about different languages.
English is taught by two native speaking members of the faculty. It is integrated within the curriculum of the school, as required by the French National Education System. English is considered a means of communicating and sharing ideas and experiences. It is used naturally and as much as possible in every day situations. The English classroom is centrally located and is equipped with a library and an interactive Smart Board.
Pre-school, Kindergarten and 1st grade – class of 10-12, lessons of 30 minutes, 3 or 4 times per week
Children discover English through TPR (Total Physical Response) and the use of objects and stories adapted to their age.
Language is constantly recycled and reinforced through a variety of meaningful activities: songs, poems, games, books, dances, videos, cooking, artwork…
The homeroom teacher and the English teacher work together in order to integrate English in the class syllabus with common themes such as the body, the town, transport, toys, food…
As the English teacher participates in the life of the class (Physical Education, choir, lunchtime supervision) English can be used in authentic situations.
English speaking students take part in the English classes with their classmates. They are encouraged to participate at an appropriate level. The use of children’s literature provides a framework for vocabulary enrichment.
2nd to 5th grade – class of 10-12, lessons of 45 minutes, 3 or 4 times per week
In 2nd, 3rd and 4th grade, teaching is based on the well-known course « Kid’s Box » (Cambridge University Press). This course includes a pupil’s book, an activity book and an interactive DVD. Children learn key language items, useful phrases and vocabulary, and start to read and write in English.
The children of 5th grade take part in a three week residential field trip to the United States. The preparation for this project allows for a reinforcement of vocabulary and skills previously taught within the themes of everyday life – school, family, home, food… Some aspects of American history, day to day life and traditions are also covered.
The English as a Foreign Language curriculum taught at l’École Aujourd’hui is in conformity with the recommendations of the French Ministry of Education, aiming for the skill level of A1 of the Common European Framework of Reference for Languages (CEFR).
From 2nd to 5th grade, English speaking children follow a personalized syllabus of reading and writing skills adapted to their level. They work alongside the other members of their class and may also participate in special projects such as cooking, theatre and art. Each student has a personal weekly work assignment that includes reading, writing, grammar and spelling tasks. Learning tools used include: « Basic Skills English workbooks » (Collins), « Junior English » (Ginn) and a variety of reading materials.
An E.C.A. (English Classroom Assistant) is usually available to help the English speaking children once a week.
The teaching of the arts at Ecole aujourd’hui serves several objectives. Pupils can:
- develop a wide range of art and design techniques.
- learn about artists past and present.
- visit places of artistic interest.
- expand their own personal culture.
Visual art activities take place regularly within the class and occasionally with visiting artists or within a workshop session in a museum.
Music education is taught through weekly choir rehearsals.
Art activities are linked to the curriculum when possible – history, geography, literacy.
Some recent projects :
- Creating animated films.
- Drama workshops led by a parent.
- Light painting
- Land art
- Puppet making.
In all classes many interdisciplinary and interclass projects include art, drama and musical elements.
Physical education activities are crucial to the maturity of children and are an essential part of their education. The athletic activities offered at Ecole Aujourd’hui contribute to the development of motor skills, self-expression and the setting of personal goals.
Ecole Aujourd’hui makes use of the athletic facilities in the neighborhood (Huygens gymnasium, the Stanislas school pool) and our all-purpose room (le préau). Track and field activities as well as team sports also take place in the Jardin Atlantique.
Field Trips and Residential Class Trips
Field trips and residential class trips are an integral part of the school’s curriculum. These trips are incorporated into each class syllabus. They also offer excellent opportunities for approaching global awareness.
Ecole Aujourd’hui takes full advantage of the many cultural activities Paris has to offer.
- The arts: Musée en herbe, Musée Bourdelle, Forum des images, projet « Ecole et cinéma », Orsay, Beaubourg, Musée du Quai Branly…
- History : Musée de Cluny, Musée Carnavalet, Musée de la Tapisserie de Bayeux…
- Science: Muséum d’Histoire Naturelle, Cité des Sciences, Palais de la Découverte…
Every other year the students in the pre-K class through 4th grade (MS to CM1) go on a residential field trip. Some previous trips:
- Nature and Music in the Drôme
- Ecology in Picardy
- The Middle Ages or Prehistory in the Périgord
- Nature study in the mountains of Rhone Alpes
- Marine life in Brittany
The 5th grade class travels to the US every year. The children live with an American host family for three weeks, thus consolidating their years of instruction in English as a foreign language as well as discovering and learning about another culture and environment. |
Many green pot plants are grown with little direct sunlight. The reason lies in the plants’ origins: they live on the floor of the rainforest. But what happens to production and quality when a grower does allow more light into the greenhouse? The Business Unit Greenhouse Horticulture of Wageningen University & Research discovered that one tropical plant can handle it better than another.
Several varieties of two types of green plants, Dracaena and Calathea, were examined during the summer and autumn. The plants were grown in two different compartments with diffuse glass at the test location in Bleiswijk. In one compartment, sun screens were used to prevent an excess of sunlight. In the other, the plants received 80 percent more sunlight for five months.
The Calathea handled the extra light better than the Dracaena, producing 20 percent more leaf crop. That is good news for growers of this crop: by using the screens less often, production rises. Growers can thus shorten their growing cycle and grow more plants per year. But the fact that the gain was only 20 percent indicates that the plant sometimes had to deal with stress; otherwise, production could have increased by as much as 80 percent. After all, in other crops, such as tomatoes, the rule of thumb is '1 percent light equals 1 percent production'.
Stress means that the plant cannot use the sunlight for photosynthesis, and therefore cannot convert more CO2 from the air into sugars. The surplus of light is instead converted into heat and into light of other wavelengths (fluorescence). During the research, fluorescence was measured at various times to determine when stress occurs. The knowledge acquired provides tools for improving the cultivation of photosensitive pot plants.
The Dracaena is susceptible to stress. More sunlight hardly increased production in this plant, and quality suffered: the leaves yellowed. By temporarily giving these plants less light, the effects could be undone, so the damage was fortunately reversible.
Club of 100
The research into the influence of light on green pot plants has been funded by the Club van 100, a collaboration between supply companies in greenhouse horticulture, initiated by WUR Greenhouse Horticulture. |
Key point: The fall of Constantinople was a catastrophic moment for the West and it was made possible by new cannons called bombards. In the 15th century the great powers of medieval Europe paid talented gunsmiths to build massive bombards to batter walls and shorten the length of sieges. The introduction of bombards meant that artillery replaced mining as the surest way to breach a stronghold.
Bombards were massive guns, the largest of which weighed 20 tons or more. Smaller guns were referred to during the period as cannons. Bombards were transported on massive carts to the siege site where engineers transferred them by crane onto a wooden platform or frame. Wheeled carriages could not withstand the devastating recoil of these behemoths.
Upon the death of his father Sultan Murad II in 1451, Sultan Mehmet II began making preparations to capture Constantinople, the last and mightiest bastion of the Byzantine Empire. He hired a Hungarian named Urban to oversee the production of bombards and cannons for his campaign against Byzantine Emperor Constantine XI’s army at Constantinople.
The largest bombard made for the siege was a 27-foot-long bronze gun that fired a 1,500-pound stone ball. Urban oversaw the manufacture of 70 bombards and cannons specifically for the siege. The walls of Constantinople had withstood 20 earlier sieges, but the bombards Mehmet commissioned would give the Ottomans a major advantage.
For a 15th-century artillery piece to be effective, it had to use gunpowder made from purified saltpeter. The purified saltpeter was mixed with sulfur and charcoal to create gunpowder. The ingredients in the gunpowder used for bombards tended to separate during the bumpy ride to the battlefield, so the crews transported the ingredients separately and mixed them on site.
The smoothbore, muzzle-loading bombards had ranges of upward of 1,000 yards. However, to avoid barrel explosions crews used smaller powder charges. This meant firing from a range of 200-250 yards. Wooden barriers protected the crews from enemy archers, crossbowmen, and hand gunners.
The bombard crews set wooden blocks or beams behind the behemoths in an effort to contain the recoil. The recoil was tremendous; it routinely smashed the beams behind the gun. After each firing, the crew repaired damage to the beams and the firing platform. This fire-and-fix process meant that the largest bombards might fire as few as five times a day.
The bombards of the mid-15th century were either cast of bronze or forged of iron strips. Although it was far less expensive to make a bombard of iron, such a gun ran a far higher risk of exploding. The cast bronze bombards had walls of uniform thickness, whereas the forged cannons had numerous welded joints that were suspect.
Stone balls for bombards were made of limestone. Making the balls was a time-consuming process by which the ball was smoothed and rounded by hand. However, smaller caliber guns in the mid-15th century did use iron balls. It was not until the close of the century that manufacturers were able to perfect the production of cast iron balls. Cast iron balls were denser than stone balls and caused greater destruction.
Sultan Mehmet’s bombards gave him a decisive advantage in the 1453 siege of Constantinople. Yet the Ottoman victory owed as much to the tenacity of the crack Turkish soldiers as to the bombards. |
The porcupine and hedgehog are prickly mammals. They are often confused because they both have sharp, needle-like quills on their body. However, that’s about the only similarity between the two animals. Porcupines and hedgehogs differ in size, defensive behavior, diet and country of habitation. Their quills also have different features.
TL;DR (Too Long; Didn't Read)
Porcupines and hedgehogs differ in size and the structure and function of their quills. They live in different habitats. Porcupines eat plants while hedgehogs are carnivorous.
The porcupine and hedgehog are found in different areas of the world. Both are native to Africa, Europe and Asia. Hedgehogs have also been introduced to areas of New Zealand. Some species of porcupines are native to the New World, ranging from Canada and the United States to South America.
Tree and Grassland Habitats
The porcupine covers a range of habitats, unlike the hedgehog. North and South American species of porcupine spend a lot of their time in trees. They have gripping tails that help them climb. The species of porcupines in Europe, Asia and Africa inhabit deserts, grasslands and forests. On the other hand, the hedgehog prefers building a nest under vegetation around parks, farmlands and gardens. They live close to hedgerows, woodland edges and suburban gardens for a convenient food supply.
Long or Short Quills
The quills on each type of animal have different features. The hedgehog has shorter quills, about 1 inch in length. In contrast, a porcupine’s quills are 2 to 3 inches long, and African species can grow quills over 11 inches long. A hedgehog’s quills cannot easily come off its body, while a porcupine’s quills detach easily. The number of quills on the body also differs: on average a hedgehog has 5,000 quills, while a porcupine has approximately 30,000.
A hedgehog can grow between 4 and 12 inches in length. A porcupine can grow to triple that size, between 20 and 36 inches. Each animal has a tail, and these also differ in size. A hedgehog’s tail is up to 2 inches in length, while the porcupine’s is between 8 and 10 inches. The porcupine uses its long tail to climb trees, while the hedgehog stays on the ground.
The hedgehog and porcupine exhibit different behaviors when they feel threatened. When the hedgehog feels threatened, it curls up into a ball. Its quills act like a defensive barrier to stop predators attacking. On the other hand, the porcupine arches its back so the quills stick up when it feels threatened. It waves its tail around to hit an attacker, and the tail quills will detach and stick into a predator.
Herbivore and Carnivore
Porcupines are herbivores. Depending on the food available in their habitat, they will eat fruit, leaves, grass, buds, bark and stems. Hedgehogs are carnivores and act as useful garden helpers: they eat pests like slugs, which damage gardens. Other foods include insects, centipedes, worms, mice, snails, frogs and small snakes. |
• Parts of a Circle
• Radius, Diameter, Circumference
• Area of Shapes
• Volume of Figures
• Find the Surface Area
• Measure Angles
• Properties of Lines
Students will become experts on all things shapes through identification and measurement.
Our resource introduces the mathematical concepts taken from real-life experiences, and provides warm-up and timed practice questions to strengthen procedural proficiency skills. Learn the different parts of a circle and how to calculate the radius, diameter and circumference. Calculate the area of squares, rectangles, parallelograms, triangles, circles, and trapezoids. Then, find the volume of cubes and rectangular prisms. Measure the surface area of spheres, cylinders, cubes, and rectangular prisms. Use a protractor to measure angles. Identify pairs of lines as parallel, perpendicular, skew, or intersecting. The task and drill sheets provide a leveled approach to learning, starting with grade 6 and increasing in difficulty to grade 8. Aligned to your State Standards and meeting the concepts addressed by the NCTM standards, reproducible task sheets, drill sheets, review and answer key are included.
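As a taste of the calculations involved, the circle formulas the resource drills reduce to a few lines of arithmetic. A minimal sketch in Python (the sample radius is an illustrative value):

import math

radius = 5.0                        # any sample value
diameter = 2 * radius               # d = 2r
circumference = math.pi * diameter  # C = pi * d
area = math.pi * radius ** 2        # A = pi * r^2
print(diameter, circumference, area)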
|
Stimulation of serotonin neurons boosts the rate of learning in mice, an international team from the Champalimaud Centre for the Unknown (CCU), in Portugal, and the University College London (UCL) in the U.K. has found.
Serotonin is a monoamine neurotransmitter that nerve cells use to communicate with each other, and its effects on behavior are still unclear. For a long time, neuroscientists have been set on constructing an integrated theory of what serotonin actually does in the normal brain.
But it has been challenging to pin down serotonin’s function, especially for learning. Now, using a new mathematical model, the authors have found out why.
“The study found that serotonin enhances the speed of learning. When serotonin neurons were activated artificially using light, it made mice quicker to adapt their behavior in a situation that required such flexibility. That is, they gave more weight to new information and therefore changed their minds more rapidly when these neurons were active,”
said Zach Mainen, one of the study’s leaders. Serotonin has previously been implicated in boosting brain plasticity, and this study adds weight to that idea, thus departing from the common conception of serotonin as a mood-enhancer.
Combining Antidepressants With Behavioral Therapies
The new finding may help to better explain why selective serotonin reuptake inhibitors (SSRIs), a class of antidepressants that are thought to act by increasing brain levels of circulating serotonin, are more effective in combination with behavioral therapies based on the reinforced learning of behavioral strategies to stave off depressive symptoms.
In the experiments, mice had to perform a learning task in which the goal was to find water.
“Animals were placed in a chamber where they had to poke either a water dispenser on their left side or one on their right — which, with a certain probability, would then dispense water or not,”
explained Madalena S. Fonseca.
When they analysed the data, the scientists found that the amount of time the mice waited between trials was variable — either they immediately tried again, poking on one of the water-dispensers, or they waited longer before making a new attempt. It was this variability that allowed the team to reveal the likely existence of a novel effect of serotonin on the animals’ decision-making.
The long waiting intervals were more frequent at the beginning and at the end of a day’s session (run of trials). This probably happens because initially, the mice are more distracted and not very engaged in the task itself,
“perhaps hoping to get out from the experimental chamber,” the authors write.
At the end, having drunk enough water, they are likewise less motivated for seeking reward.
Whatever the case, the team found that, depending on the length of the interval between trials, the mice adopted one of two different decision-making strategies to maximize their chances of reward (obtaining water).
Specifically, when the interval between trials was short, the mathematical model that best predicted the animals’ next choice was based almost completely on the outcome (water or no water) of the immediately preceding trial: if a poke had provided water, the mice poked the same water-dispenser again; if it had failed to provide water, they switched to the alternative water-dispenser. This strategy is known as “win-stay-lose-switch”.
This, the authors write, suggests that when the interval between two trials was short, the animals were mostly relying on their working memory to make their next choice — that is, on the part of short-term memory concerned with immediate perceptions.
On the other hand, when the interval between two consecutive trials lasted more than seven seconds, the model that best predicted the mice’s next choice suggested that the mice were using the accumulation of several experiences of reward to guide their next move — in other words, their long-term memory kicked in.
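The two strategies are simple enough to state as decision rules. Below is a minimal sketch in Python; the variable names, payoff probabilities and learning rate are illustrative choices of mine, not values from the paper:

import random

def fast_choice(last_choice, last_rewarded):
    """Fast system (short intervals): win-stay-lose-switch."""
    if last_rewarded:
        return last_choice                               # win -> stay
    return 'right' if last_choice == 'left' else 'left'  # lose -> switch

def slow_update(values, choice, rewarded, learning_rate):
    """Slow system (long intervals): learn from accumulated reward history.
    The paper's finding corresponds to serotonin stimulation raising
    the learning rate for choices made after long intervals."""
    values[choice] += learning_rate * (rewarded - values[choice])
    return values

def slow_choice(values):
    """Pick the side with the higher long-run value estimate."""
    return max(values, key=values.get)

# Toy run: 'left' pays off 70% of the time, 'right' 30%.
values, choice, rewarded = {'left': 0.0, 'right': 0.0}, 'left', False
for trial in range(200):
    long_interval = random.random() < 0.2   # a minority of trials
    choice = slow_choice(values) if long_interval else fast_choice(choice, rewarded)
    rewarded = random.random() < (0.7 if choice == 'left' else 0.3)
    values = slow_update(values, choice, rewarded, learning_rate=0.1)
print(values)  # the long-run estimates drift toward the true payoff rates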
The CCU group also stimulated the serotonin-producing neurons in the animals’ brain with laser light, through a technique called optogenetics, to look for the effects of higher levels of serotonin on their foraging behavior. They sought to determine whether and how an increase in serotonin levels would affect each of the two different decision-making strategies they had just uncovered.
Something surprising then occurred.
When they pooled together all the trials in their calculations, without taking into account the duration of the preceding interval, the scientists found no significant effect of their serotonin manipulation on the behavior. It was only when they took into account the decision-making strategies that they were able to extract from the data an increase in the animals’ rates of learning.
Stimulation of serotonin-producing neurons boosted the effectiveness of learning from the history of past rewards, but this only affected the choices made after long intervals.
“Serotonin is always enhancing learning from reward, but this effect is only apparent on a subset of the animals’ choices,”
said Masayoshi Murakami.
“To our surprise, we found that animals’ choice behavior was generated from two distinctive decision systems,” said first author Kiyohito Iigaya. “On most trials, choice was driven by a ‘fast system,’ where the animals followed a win-stay-lose-switch strategy. But on a small number of the trials, we found that this simple strategy didn’t explain the animals’ choices at all. On these trials, we instead found that animals followed their ‘slow system,’ in which it was the reward history over many trials, and not only the most recent trials, that affected their choices. Moreover, serotonin affected only these latter choices, in which the animal was following the slow system.”
As to the role of SSRIs in treating psychiatric disorders like depression:
“Our results suggest that serotonin boosts [brain] plasticity by influencing the rate of learning. This resonates, for instance, with the fact that treatment with an SSRI can be more effective when combined with so-called cognitive behavioral therapy, which encourages the breaking of habits in patients,”
the authors conclude.
The work was supported by the Gatsby Charitable Foundation, the Joint Initiative on Computational Psychiatry and Ageing Research between Max Planck Society and UCL, the Japan Society for the Promotion of Science, the European Research Council, Fundação para a Ciência e a Tecnologia, and the Champalimaud Foundation.
Kiyohito Iigaya, Madalena S. Fonseca, Masayoshi Murakami, Zachary F. Mainen & Peter Dayan
An effect of serotonergic stimulation on learning rates for rewards apparent after long intertrial intervals
Nature Communications volume 9, Article number: 2477 (2018)
Image: Microphotography of serotonin-producing neurons (in pink) in the mouse brain. Credit: Matias, Lottem et al./CCU |
Copperheads are a species of venomous snake in North America, a member of the Crotalinae (pit viper) subfamily. Five subspecies are currently recognized. They get their name from the unique copper hue present on the scales of their heads.
Copperheads grow to an average length of 20–37 inches (including tail). They have unmarked copper-colored heads, reddish-brown coppery bodies and chestnut brown crossbands that constrict towards the midline. Their thick bodies feature keeled scales. Females grow to greater lengths than males, though males have proportionally longer tails. Their bodies are relatively stout and their heads are broad and distinct from their necks. Their snouts slope down and back, with the top of the head extending further forward than the mouth.
Copperheads live in the United States in the states of Alabama, Arkansas, Connecticut, Delaware, Florida, Georgia, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Ohio, Oklahoma, Maryland, Massachusetts, Mississippi, Missouri, Nebraska, New Jersey, New York, North Carolina, Pennsylvania, South Carolina, Tennessee, Texas, Virginia and West Virginia. In Mexico, copperheads are found in Chihuahua and Coahuila.
Copperheads occupy a variety of different habitats. In most of North America they favor deciduous forest and mixed woodlands. They are often found in rock outcroppings and ledges, and also in low-lying swampy regions.
Copperheads feed on small rodents, large insects, mice, small birds, frogs and lizards. They are generally ambush predators, waiting for prey to arrive. When hunting insects, however, copperheads actively pursue their prey. Some copperheads will climb trees to hunt insects. Young copperheads use their brightly colored tails to attract frogs and lizards.
Copperheads inject animals with venom to subdue the prey and make it easier to swallow. Using heat-sensitive pits, they detect animals that are warmer than their environment; this ability also lets them find nocturnal animals. Smaller prey are usually held in the mouth until they die. Larger animals are released and then tracked after the venom takes effect.
Copperheads in the southern United States are nocturnal during hot summer months, while active during the day during the spring and fall.
Copperheads are social animals. They hibernate in communal dens with other copperheads and other species of snakes. They usually return to the same den each year. They sun, eat and drink together. They also migrate together in the late spring to summer feeding territories, then return together in early fall.
During the spring and fall mating seasons, males seek females by using their tongues to detect pheromones in the air. They then move their heads or rub their chins on the ground. Males may engage in combat, elevating their bodies, swaying side to side, hooking necks, and intertwining their entire bodies. Courtship may last over an hour, with the male and female bodies aligned. Mating can last over 8 hours. Other males then pay little attention to a female who has mated, as she emits a pheromone that makes her unattractive to them. Females may store the sperm until they are ready to begin the gestation period, which lasts three to nine months.
Copperheads do not always breed every year. Females may give birth for several years in a row, then not breed for several years. Copperheads give birth to live babies, from 1 to 20 at a time, each of which is about 7.9 inches in length. Copperhead babies look similar to their parents, but are lighter in color and have a yellow mark on the tip of their tails. Copperhead parents provide no direct care for their babies.
During the winter copperheads hibernate in dens, in limestone crevices, often together with Timber Rattlesnakes and Black Rat Snakes.
Copperheads attempt to avoid humans, but unlike other viperids they will often "freeze" instead of slithering away. As a result, many bites occur from people unknowingly stepping on or near them. They usually will give a warning bite and inject only a small amount of venom, or none at all.
Copperheads live up to 18 years.
THREATS TO COPPERHEADS
Humans are the main predators of copperheads, clearing land for their own use while destroying the natural habitats of copperheads. Humans also hunt and kill copperheads out of fear, or for amusement. Copperheads are also victims of the unethical “pet” trade, sold as “exotic animals”. They are inhumanely kept in captivity for the amusement of humans. These wild animals are deprived of their natural lifestyle, confined to small enclosures, and endure stress and health ailments from their unnatural living conditions. |
I. The congruence notation:
It often happens that for the purposes of a particular calculation, two numbers which differ by a multiple of some fixed number are equivalent, in the sense that they produce the same result. For example, the value of $(-1)^n$ depends only on whether n is odd or even, so that two values of n which differ by a multiple of 2 give the same result. Or again, if we are concerned only with the last digit of a number, then for that purpose two numbers which differ by a multiple of 10 are effectively the same.
The congruence notation, introduced by Gauss, serves to express in a convenient form the fact that the two integers a and b differ by a multiple of a fixed natural number m. We say that a is congruent to b with respect to the modulus m, or in symbols,
$$a \equiv b \pmod{m}.$$
The meaning of this, then, is simply that $a - b$ is divisible by m. The notation facilitates calculations in which numbers differing by a multiple of m are effectively the same, by stressing the analogy between congruence and equality. Congruence, in fact, means “equality except for the addition of some multiple of m”.
A few examples of valid congruences are:
$$63 \equiv 0 \pmod 3, \qquad 7 \equiv -1 \pmod 8, \qquad 5^2 \equiv -1 \pmod{13}.$$
A congruence to the modulus 1 is always valid, whatever the two numbers may be, since every number is a multiple of 1. Two numbers are congruent with respect to the modulus 2 if they are of the same parity, that is, both even or both odd.
Two congruences can be added, subtracted or multiplied together, in just the same way as two equations, provided all the congruences have the same modulus. If
$$a \equiv \alpha \pmod m \quad \text{and} \quad b \equiv \beta \pmod m,$$
then
$$a + b \equiv \alpha + \beta \pmod m, \qquad a - b \equiv \alpha - \beta \pmod m, \qquad ab \equiv \alpha\beta \pmod m.$$
The first two of these statements are immediate: for example, $(a + b) - (\alpha + \beta)$ is a multiple of m because $a - \alpha$ and $b - \beta$ are both multiples of m. The third is not quite so immediate because $ab - \alpha b = (a - \alpha)b$, and $(a - \alpha)b$ is a multiple of m. Next, $\alpha b - \alpha\beta = \alpha(b - \beta)$ is a multiple of m, for a similar reason. Hence, $ab \equiv \alpha\beta \pmod m$.
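These rules are easy to check numerically. A minimal sketch in Python (the ranges are arbitrary):

import random

# Verify that sums, differences and products of congruent numbers
# are again congruent, for many random moduli and offsets.
for _ in range(1000):
    m = random.randint(2, 50)
    a, b = random.randint(-100, 100), random.randint(-100, 100)
    alpha = a + m * random.randint(-5, 5)   # alpha = a (mod m) by construction
    beta = b + m * random.randint(-5, 5)    # beta = b (mod m) by construction
    assert ((a + b) - (alpha + beta)) % m == 0
    assert ((a - b) - (alpha - beta)) % m == 0
    assert (a * b - alpha * beta) % m == 0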
A congruence can always be multiplied throughout by any integer: if $a \equiv \alpha \pmod m$ then $ka \equiv k\alpha \pmod m$. Indeed this is a special case of the third result above, where $b$ and $\beta$ are both $k$. But it is not always legitimate to cancel a factor from a congruence. For example, $42 \equiv 12 \pmod{10}$, but it is not permissible to cancel the factor 6 from the numbers 42 and 12, since this would give the false result $7 \equiv 2 \pmod{10}$. The reason is obvious: the first congruence states that $42 - 12$ is a multiple of 10, but this does not imply that $7 - 2$ is a multiple of 10. The cancellation of a factor from a congruence is legitimate if the factor is relatively prime to the modulus. For, let the given congruence be $ax \equiv ay \pmod m$, where $a$ is the factor to be cancelled, and we suppose that $a$ is relatively prime to m. The congruence states that $a(x - y)$ is divisible by m, and hence, since $a$ and m have no common factor, $x - y$ is divisible by m, that is, $x \equiv y \pmod m$.
An illustration of the use of congruences is provided by the well-known rules for the divisibility of a number by 3 or 9 or 11. The usual representation of a number n by digits in the scale of 10 is really a representation of n in the form
$$n = a + 10b + 100c + \cdots,$$
where a, b, c, … are the digits of the number, read from right to left, so that a is the number of units, b the number of tens, and so on. Since $10 \equiv 1 \pmod 9$, we have also $10^2 \equiv 1 \pmod 9$, $10^3 \equiv 1 \pmod 9$, and so on. Hence, it follows from the above representation of n that
$$n \equiv a + b + c + \cdots \pmod 9.$$
In other words, any number n differs from the sum of its digits by a multiple of 9, and in particular n is divisible by 9 if and only if the sum of its digits is divisible by 9. The same applies with 3 in place of 9 throughout.
The rule for 11 is based on the fact that $10 \equiv -1 \pmod{11}$, so that $10^2 \equiv +1 \pmod{11}$, $10^3 \equiv -1 \pmod{11}$, and so on. Hence,
$$n \equiv a - b + c - \cdots \pmod{11}.$$
It follows that n is divisible by 11 if and only if $a - b + c - \cdots$ is divisible by 11. For example, to test the divisibility of 9581 by 11 we form $1 - 8 + 5 - 9$, or $-11$. Since this is divisible by 11, so is 9581.
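Both digit rules translate directly into code. A short sketch in Python (the test value follows the example above):

def digit_sum(n):
    """a + b + c + ... : the digit sum used in the rules for 3 and 9."""
    return sum(int(d) for d in str(n))

def alternating_digit_sum(n):
    """a - b + c - ... with digits read from right to left (rule for 11)."""
    digits = [int(d) for d in str(n)[::-1]]
    return sum(d if i % 2 == 0 else -d for i, d in enumerate(digits))

n = 9581
print(alternating_digit_sum(n))                          # -11
print(alternating_digit_sum(n) % 11 == 0, n % 11 == 0)   # True True
print(digit_sum(n) % 9 == 0, n % 9 == 0)                 # False False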
Ref: The Higher Arithmetic by H. Davenport, Eighth Edition, Cambridge University Press. |
A set of four pages for practicing upper case and lower case letter recognition.
This teaching resource has a four-page download which includes two pages to match upper case letters and two pages to match lower case letters.
To save paper, put these into dry erase pockets and use them in your reading rotations!
National Curriculum alignment
- Early Years Foundation Stage
- Key Stage 1 (KS1)
Key Stage 1 (KS1) covers students in Year 1 and Year 2.
English has a pre-eminent place in education and in society. A high-quality education in English will teach pupils to speak and write fluently so that they can communicate their ideas and emotions to others and through their reading and listening, ot...
- Communication, language and literacy
Children read and understand simple sentences. They use phonic knowledge to decode regular words and read them aloud accurately. They also read some common irregular words. They demonstrate understanding when talking with others about what they...
Children use their phonic knowledge to write words in ways which match their spoken sounds. They also write some irregular common words. They write simple sentences which can be read by themselves and others. Some words are spelt correctly and o...
We create premium quality, downloadable teaching resources for primary/elementary school teachers that make classrooms buzz!
|
Today was a complete work day on the Pedigree project. It is my expectation that the rough draft is complete by the end of the class period. The full final draft is due next Thursday (10/19) at the beginning of class.
Today we discussed one way to represent a visual for inheritance of traits, a pedigree. A pedigree for a family is basically a family tree. We discussed three types of inheritance patterns: autosomal dominant, autosomal recessive, and sex-linked inheritance.
The mandatory project for this nine weeks, The Pedigree Project, was assigned. Students were given the story of a fictitious family and were tasked to create a pedigree for them. They were then given a trait to trace through this family and then determined the inheritance pattern for the trait. All information for the project can be found under "First Nine Weeks" -> "Unit 3" and at the bottom of that page. Students were given a half work day to start the rough draft; if not done today, it is my expectation that it is done tomorrow towards the beginning of class.
HW: 270-271 (by Monday)
We discussed Non-Mendelian genetics (traits that do not fall under Mendel's pattern of dominant vs. recessive) today, covering incomplete dominance, co-dominance, multiple alleles, polygenic traits, and sex-linked traits.
HW: pg 201-205
Also, do Punnett square practice parts 4-5
Today we began our discussion of Punnett squares and their advanced form, dihybrid crosses; these Punnett squares give the probability of inheriting two different traits at once (see the sketch below). The rest of class was work time for the Punnett square packet and tonight's notes.
HW: 208-216, 219; Punnett Square Packet Parts 1-3
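For anyone curious how the 16-box dihybrid grid comes about, here is a minimal Python sketch; the RrYy genotype is just an example, and the helper names are my own:

from itertools import product

def gametes(genotype):
    """All gametes a parent can produce, one allele per gene pair.
    'RrYy' is read as the gene pairs ('R','r') and ('Y','y')."""
    pairs = [genotype[i:i + 2] for i in range(0, len(genotype), 2)]
    return [''.join(combo) for combo in product(*pairs)]

def cross(gametes1, gametes2):
    """Return the Punnett square as a grid of offspring genotypes."""
    def offspring(g1, g2):
        # Pair alleles gene by gene, writing the dominant (upper case) first.
        return ''.join(sorted(g1 + g2, key=lambda c: (c.lower(), c.islower())))
    return [[offspring(g1, g2) for g2 in gametes2] for g1 in gametes1]

# Dihybrid cross of two RrYy heterozygotes: the classic 4x4 square.
for row in cross(gametes('RrYy'), gametes('RrYy')):
    print(row)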
Today we began by going over yesterday's test.
We then reviewed basic terminology regarding genetics, such as dominant/recessive, homozygous/heterozygous, and genotype/phenotype. We also discussed the importance of Gregor Mendel, the "father of genetics".
Use the DNA strand below to answer the following questions:
3' - TAC CCA GAT CGA TAT ATT - 5'
1. Create the strand of mRNA that would result after transcription (DNA to RNA; use the same beginning strand).
2. Use the genetic code chart (given Thursday) to determine what amino acids would be in this protein.
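A quick way to check answers like these is to script the two steps. A minimal sketch in Python; the codon table is deliberately abbreviated to the codons that occur in this strand, so it is illustrative rather than complete:

# Transcribe a 3'->5' DNA template strand into 5'->3' mRNA, then translate.
DNA_TO_MRNA = {'A': 'U', 'T': 'A', 'C': 'G', 'G': 'C'}

CODON_TABLE = {  # small subset of the standard genetic code
    'AUG': 'Met', 'GGU': 'Gly', 'CUA': 'Leu',
    'GCU': 'Ala', 'AUA': 'Ile', 'UAA': 'STOP',
}

def transcribe(template):
    """Pair each template base with its mRNA complement."""
    return ''.join(DNA_TO_MRNA[base] for base in template if base != ' ')

def translate(mrna):
    """Read the mRNA three bases at a time until a stop codon."""
    residues = []
    for i in range(0, len(mrna), 3):
        residue = CODON_TABLE[mrna[i:i + 3]]
        if residue == 'STOP':
            break
        residues.append(residue)
    return residues

mrna = transcribe('TAC CCA GAT CGA TAT ATT')   # the bellringer strand
print(mrna, translate(mrna))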
Today we reviewed protein synthesis from yesterday. Please see the powerpoint in the notes folder if absent.
We then did a lab activity called "The Mysterious Monster". Students were tasked with taking 7 DNA strands, converting them to mRNA, then to amino acids, and ultimately determining the traits that the DNA coded for. Students will then draw a monster that has the seven traits in question. The lab can be found in the Labs/Activities folder if absent (or if you lose yours). For absent students, you may use this set of DNA strands:
HW: 242, 245-247; cancer infographic (due Monday, 5pm)
Today we began class by reviewing the cell cycle and mitosis as a class and did a mini-lab where we looked at mitosis in an onion root tip (see Google classroom). For any students still having difficulties, please watch the videos below.
We then discussed how this process is controlled; all cells do not go through the cell cycle, nor do they do so at the same speeds. There are factors which control its timing, and if those factors go awry, bad things can happen, like cancer. The notes for this can be found towards the end of the Cell Cycle - Mitosis powerpoint in the "Notes/Lecture" folder under Unit 2.
HW: Finish Cell Cycle Book and study for tomorrow's quiz!!
Today we began class by taking a quiz over DNA Structure and Replication. We then discussed how a molecule as large as DNA can fit into such a tiny location, the nucleus. The video below helps explain.
We then worked on the Cell Cycle Flipbook. Students are to create a flip book (example seen below on the left) over the stages of the cell cycle. Directions for the booklet can be seen below (to the right), or are found in the documents folder in Unit 2, or are on Google classroom. This is due Tuesday.
HW: Egg Osmosis Lab Report (due Monday) and Cell Cycle Flipbook (due Tuesday). No notes!
All posts written by Samuel the cat in a catnip-induced haze. Forgive any spelling/grammatical errors. Do your homework or feel his wrath... |
Create table in Python using Tkinter
Here we are going to discuss creating a table in Python using Tkinter. Before moving further, let’s first understand what Tkinter is. Tkinter is a standard yet powerful GUI library in Python. GUI means ‘Graphical User Interface’, which provides a link between the user and the code running in the background. So how does it help? Tkinter provides a strong object-oriented interface that helps to create a user interface.
How to create a table in Python with Tkinter
from tkinter import Tk, Entry, Button, Label, Text, END

class Window(object):
    def __init__(self, master):
        self.master = master
        # Labels and entry fields for the desired table dimensions.
        self.label_cols = Label(self.master, text='Number of Columns')
        self.label_rows = Label(self.master, text='Number of Rows')
        self.entry_cols = Entry(self.master)
        self.entry_rows = Entry(self.master)
        # A button that triggers table generation, and a Text widget for output.
        self.btn = Button(self.master, text='Generate', command=self.create_table)
        self.out = Text(self.master)
        self.out.config(width=100)
        # Lay out the widgets in a grid.
        self.label_cols.grid(row=0, column=0, sticky='E')
        self.entry_cols.grid(row=0, column=1, sticky='W')
        self.label_rows.grid(row=1, column=0, sticky='E')
        self.entry_rows.grid(row=1, column=1, sticky='W')
        self.btn.grid(row=2, column=0, columnspan=2)
        self.out.grid(row=3, column=0, columnspan=2)

    def create_table(self):
        table = ''
        cols = self.entry_cols.get()
        rows = self.entry_rows.get()
        # Only build the table if both inputs are positive integers.
        if (rows.isdigit() and int(rows) > 0) and (cols.isdigit() and int(cols) > 0):
            for r in range(int(rows) + 2):   # header row + separator + body rows
                if r != 0:
                    table = table + '\n'
                for c in range(int(cols) + 1):
                    if r == 1 and c != int(cols):
                        table = table + '|---'   # separator row segment
                    else:
                        table = table + '| '
            self.out.delete(1.0, END)
            self.out.insert(END, table)

root = Tk()
root.title('Chart')
m = Window(root)
root.mainloop()
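Running the script and entering, for example, 3 columns and 2 rows fills the text widget with a Markdown-style table skeleton (an illustrative run):

| | | |
|---|---|---|
| | | |
| | | |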
Let us understand the working of the code:
The first step is always to import the desired classes from the library.
Next, we create a class, which I named Window.
- The Window class has a constructor and a method named create_table.
- Constructor: the constructor's main job is to build the widgets and accept the values typed by the user. This data will be used to create the table accordingly.
- create_table: this method of the Window class actually creates the table, based on the values read from the entry fields. Nested loops build the table text row by row until the requested numbers of rows and columns have been produced.
Finally, we create the Tk root object, named root. We then pass it to the Window class, whose instance is created as m, and start the event loop. |
With the Radiation Sensor, a versatile Radiation Cube and the Stefan-Boltzmann Lamp, four key experiments in thermal radiation can be performed.
Students begin with a study of thermal radiation from different types of surfaces at the same temperature. The Thermal Radiation Cube has four different surfaces which can be monitored (black matte, white matte, polished aluminum and dull aluminum). The cube is heated electrically with a 100-watt bulb (its output can be varied). The thick aluminum walls assure the same temperature on all four walls to within a fraction of a degree. The Radiation Sensor provides an accurate measure of thermal radiation throughout the infrared region. Its output is a voltage that is proportional to the intensity of radiation.
Another important introductory experiment is the Inverse Square Law. The Stefan-Boltzmann Lamp uses a special bulb that acts as a near-perfect point source, providing accurate results.
Finally, students can verify the Stefan-Boltzmann Law for both low and high temperatures, using the Radiation Cube for the low temperatures and the Stefan-Boltzmann Lamp for the high temperatures.
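As a quick sanity check on that last experiment, the expected scaling follows directly from the Stefan-Boltzmann law, $P = \epsilon \sigma A T^4$. A minimal sketch in Python; the emissivity and surface area below are illustrative placeholders, not apparatus specifications:

STEFAN_BOLTZMANN = 5.670e-8  # sigma, in W / (m^2 K^4)

def radiated_power(temperature_k, emissivity=1.0, area_m2=1.0e-4):
    """Total power radiated by a surface: P = e * sigma * A * T^4."""
    return emissivity * STEFAN_BOLTZMANN * area_m2 * temperature_k ** 4

# Doubling the absolute temperature multiplies the radiated power by 16,
# the kind of scaling the sensor readings should exhibit.
print(radiated_power(600.0) / radiated_power(300.0))  # 16.0

Plotting sensor voltage against $T^4$ should therefore yield a straight line. |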
Microscopes are used for a wide array of applications, ranging from scientific research to the diagnosis of health problems, from the study of forensic evidence to teaching science to children in schools. Most of the microscopes we know are upright microscopes, which have the objective lens at the top and the stage at the bottom. The objective lens magnifies the sample placed on the stage when the sample is lit from below by a light source, and the image is observed through an eyepiece, which further magnifies the already magnified sample image. This is the basic structure of the regular, upright microscopes that we know. So what are inverted microscopes, how do they differ from regular ones, and what advantages do they offer? Read on to know.
Inverted microscopes, contrary to common belief, have the light source and the condenser above the stage, with the objective lens placed below the stage, hence the name. Invented by J. Lawrence Smith in 1850, inverted microscopes have ever since been widely used for biological observations, including cells, organisms and blood samples. One of the main reasons researchers use inverted microscopes is that some samples are severely affected by gravity and settle down, which reduces visibility through the eyepiece placed above the stage in regular microscopes such as compound or stereo microscopes. In situations like this, an inverted microscope makes it easier to visualize the samples, as the eyepiece and the objective lens are placed below the stage and gravity does not affect the observation. Beyond this, researchers have found many more advantages of these microscopes, which has led to their widespread use in industrial applications as well. Here is a list of those advantages.
1. It’s quick and cost-effective
Yes, inverted microscopes are time-saving and cost-effective at the same time. Once you focus a sample in an inverted microscope, the sample stays focused for further magnifications, and the focus holds for all other samples with the same focal preferences. Hence it is easy to make multiple observations within a short span of time, and even with little or no training one can make observations without repeated adjustments. This reduces the workforce needed in big laboratories and saves the cost of training.
2. More freedom
Regular, upright microscopes have one annoying limitation that doubles the amount of work involved: the sample under observation is limited to a height of about 80 mm and a weight of about 3 kg. This hinders a lot of observations in industrial applications and requires that bigger samples be observed part by part. However, since the objective lens is placed below the sample, with inverted microscopes it is possible to observe samples that are as heavy as 30 kg.
3. It’s easier to prepare samples
As mentioned above, inverted microscopes offer flexible options to place samples as they are, without size or height restrictions. In addition, processing only one side of the sample is sufficient; there is no need to embed the samples or cut them down into smaller pieces. Also, while observing, there is no need to level the sample with a sample press for accuracy. Thus inverted microscopes make it very easy to prepare samples and place them on the stage.
4. Less Risky
Crashing the objective into the sample is one of the most annoying but inevitable risks while using upright microscopes. Such events will delay the entire process and meanwhile sensitive samples might alter their state and again fresh samples have to be prepared for accuracy. However, there’s no such risk of crashing the objective into the sample in inverted microscopes as the objective lens is placed below the stage in these microscopes.
5. Easy to use
Lastly, it is very easy to use inverted microscopes (www.wisegeek.com/how-do-i-choose-the-best-inverted-microscope.htm) without much training, as the sample moves in the same direction as the stage is moved by the observer. Thus, it is easier for trainees and beginners to get used to the equipment quickly and go about their work without much training.
|
Metamorphoses Notes & Analysis
The free Metamorphoses notes include comprehensive information and analysis to help you understand the book. These free notes consist of about 82 pages (24,309 words) and contain the following sections:
Metamorphoses Plot Summary
Metamorphoses is a collection of ancient stories of mythology written in poem form. Each of the stories that Ovid presents contains some sort of transformation, or metamorphosis, and that is the link that ties them all together. Ovid uses sources like Virgil's Aeneid, as well as the work of Lucretius, Homer, and other early Greek works, to gather his material. Although some of the stories he presents are true to these sources, Ovid also adds his own twist to many of them and changes details where it better suits his purpose.
The theme throughout the stories is the power of the gods, but towards the end, the poem seems to emphasize the greatness of Rome and its rulers. Many of the metamorphoses that take place throughout Ovid's work are changes wrought by the gods as punishment for something that a mortal has done. There are a few times, however, when a transformation takes place in order to save a mortal from death. Although there is considerable evidence to support the superiority of the gods in the poem, Ovid does include some moments that rattle that theory.
Ovid ties the stories together using characters as links from one transformation myth to the next. These characters, their interactions with each other and the gods, are the primary focus of Metamorphoses. Some of the more prevalent themes that recur throughout the poem are rape, revenge, and violence. A few of Ovid's stories echo others in the collection, but they all have some unique aspect that qualifies it for the poem.
Completed around A.D. 8, this poem has become one of the most important surviving Roman works. Its translation into English has made these myths a cornerstone of literature. |
1 INTRODUCTION TO OSI MODEL
Established in 1947, the International Standards Organization (ISO) is a multinational body dedicated to worldwide agreement on international standards. An ISO standard that covers all aspects of network communications is the Open Systems Interconnection (OSI) model, first introduced in the late 1970s. An open system is a set of protocols that allows any two different systems to communicate regardless of their underlying architecture. The purpose of the OSI model is to show how to facilitate communication between different systems without requiring changes to the logic of the underlying hardware and software. The OSI model is not a protocol; it is a model for understanding and designing a network architecture that is flexible, robust, and interoperable.
Figure: Seven Layers of OSI Model
The OSI model is a layered framework for the design of network systems that allows communication between all types of computer systems. It consists of seven separate but related layers, each of which defines a part of the process of moving information across a network. An understanding of the fundamentals of the OSI model provides a solid basis for exploring data communications.
The OSI model is composed of seven ordered layers: physical (layer 1), data link (layer 2), network (layer 3), transport (layer 4), session (layer 5), presentation (layer 6) and application (layer 7). Figure shows the layers involved when a message is sent from device A to device B. As the message travels from A to B, it may pass through many intermediate nodes. These intermediate nodes usually involve only the first three layers of the OSI model.
Figure: Interaction Between Layers In OSI Model
In developing the model, the designers distilled the process of transmitting data to its most fundamental elements. They identified which networking functions had related uses and collected those functions into discrete groups that became the layers. Each layer defines a family of functions distinct from those of the other layers. By defining and localizing functionality in this fashion, the designers created an architecture that is both comprehensive and flexible. Most importantly, the OSI model allows complete interoperability between otherwise incompatible systems.
Within a single machine, each layer calls upon the services of the layer just below it. Layer 3, for example, uses the services provided by layer 2 and provides services for layer 4. Between machines, layer x on one machine communicates with layer x on another machine. This communication is governed by an agreed-upon series of rules and conventions called protocols. The processes on each machine that communicate at a given layer are called peer-to-peer processes. Communication between machines is therefore a peer-to-peer process using the protocols appropriate to a given layer.
At the physical layer, communication is direct: In Figure, device A sends a stream of bits to device B (through intermediate nodes). At the higher layers, however, communication must move down through the layers on device A, over to device B, and then back up through the layers. Each layer in the sending device adds its own information to the message it receives from the layer just above it and passes the whole package to the layer just below it.
At layer 1 the entire package is converted to a form that can be transmitted to the receiving device. At the receiving machine, the message is unwrapped layer by layer, with each process receiving and removing the data meant for it. For example, layer 2 removes the data meant for it, then passes the rest to layer 3. Layer 3 then removes the data meant for it and passes the rest to layer 4, and so on.
Interfaces Between Layers
The passing of the data and network information down through the layers of the sending device and back up through the layers of the receiving device is made possible by an interface between each pair of adjacent layers. Each interface defines the information and services a layer must provide for the layer above it. Well-defined interfaces and layer functions provide modularity to a network. As long as a layer provides the expected services to the layer above it, the specific implementation of its functions can be modified or replaced without requiring changes to the surrounding layers.
Organization of the Layers
The seven layers can be thought of as belonging to three subgroups. Layers 1, 2, and 3-physical, data link, and network-are the network support layers; they deal with the physical aspects of moving data from one device to another (such as electrical specifications, physical connections, physical addressing, and transport timing and reliability). Layers 5, 6, and 7-session, presentation, and application-can be thought of as the user support layers; they allow interoperability among unrelated software systems. Layer 4, the transport layer, links the two subgroups and ensures that what the lower layers have transmitted is in a form that the upper layers can use. The upper OSI layers are almost always implemented in software; lower layers are a combination of hardware and software, except for the physical layer, which is mostly hardware.
Figure: An Exchange Using OSI Model
In Figure, which gives an overall view of the OSI layers, D7 means the data unit at layer 7, D6 means the data unit at layer 6, and so on. The process starts at layer 7 (the application layer), then moves from layer to layer in descending, sequential order. At each layer, a header, or possibly a trailer, can be added to the data unit. Commonly, the trailer is added only at layer 2. When the formatted data unit passes through the physical layer (layer 1), it is changed into an electromagnetic signal and transported along a physical link.
Upon reaching its destination, the signal passes into layer 1 and is transformed back into digital form. The data units then move back up through the OSI layers. As each block of data reaches the next higher layer, the headers and trailers attached to it at the corresponding sending layer are removed, and actions appropriate to that layer are taken. By the time it reaches layer 7, the message is again in a form appropriate to the application and is made available to the recipient.
Figure reveals another aspect of data communications in the OSI model: encapsulation. A packet (header and data) at level 7 is encapsulated in a packet at level 6. The whole packet at level 6 is encapsulated in a packet at level 5, and so on. In other words, the data portion of a packet at level N - 1 carries the whole packet (data and header and maybe trailer) from level N. The concept is called encapsulation; level N - 1 is not aware of which part of the encapsulated packet is data and which part is the header or trailer. For level N - 1, the whole packet coming from level N is treated as one integral unit. |
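To make the encapsulation idea concrete, here is a minimal sketch in Python (an addition to the text, not part of the original; the layer labels and header strings are illustrative assumptions, and trailers, which the text says are commonly added only at layer 2, are omitted for brevity):

```python
# Minimal sketch of OSI-style encapsulation/decapsulation.
# Each layer wraps the unit from the layer above with its own header;
# the receiving side strips headers in reverse order.

LAYERS = ["L7", "L6", "L5", "L4", "L3", "L2"]  # illustrative labels only

def encapsulate(message: str) -> str:
    packet = message
    for layer in LAYERS:  # moving down the stack on the sender
        packet = f"[{layer}-hdr]{packet}"
    return packet

def decapsulate(packet: str) -> str:
    for layer in reversed(LAYERS):  # moving up the stack on the receiver
        header = f"[{layer}-hdr]"
        assert packet.startswith(header), f"expected {header}"
        packet = packet[len(header):]  # each layer removes only its own header
    return packet

if __name__ == "__main__":
    wire = encapsulate("hello")
    print(wire)               # [L2-hdr][L3-hdr]...[L7-hdr]hello, L2 outermost
    print(decapsulate(wire))  # hello
```

As in the text, layer N - 1 treats everything it receives from layer N as one opaque unit; only its own header is meaningful to it.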
The metabolic syndrome refers to the constellation of risk factors that increases the likelihood of such potentially life-threatening conditions as coronary heart disease (CHD), stroke and diabetes. It is now estimated that some forty-seven million individuals currently suffer from metabolic syndrome within the United States. The size of this at-risk population is a cause for concern from a public health perspective, and the fact that this population is growing strongly suggests that environmental factors are involved and need to be explored.
The five conditions enumerated below are referred to as metabolic risk factors:
- “A large waistline. This also is called abdominal obesity. Excess fat in the stomach area is a greater risk factor for heart disease than excess fat in other parts of the body, such as on the hips.
- A high triglyceride level (or you're on medicine to treat high triglycerides). Triglycerides are a type of fat found in the blood.
- A low HDL cholesterol level (or you're on medicine to treat low HDL cholesterol). HDL sometimes is called "good" cholesterol. This is because it helps remove cholesterol from your arteries. A low HDL cholesterol level raises your risk for heart disease.
- High blood pressure (or you're on medicine to treat high blood pressure). Blood pressure is the force of blood pushing against the walls of your arteries as your heart pumps blood. If this pressure rises and stays high over time, it can damage your heart and lead to plaque buildup.
- High fasting blood sugar (or you're on medicine to treat high blood sugar). Mildly high blood sugar may be an early sign of diabetes.”
The projected risk for heart disease, diabetes, and stroke increases with the number of metabolic risk factors present. The risk of having metabolic syndrome is also closely linked to obesity and a lack of physical activity.
Insulin resistance is a definitive risk factor for the metabolic syndrome and is strongly associated with Type II Diabetes. Insulin is a small protein hormone produced by specialized cells of the pancreas (the Islets of Langerhans). Insulin is produced in response to the presence of elevated glucose in the blood, usually following a meal, and is responsible for regulating glucose levels in the blood. Insulin resistance refers to the condition in which the body fails to use this hormone properly.
Theory of Change and Common Core Standards
To prepare educators for the implementation of Common Core State Standards, state departments of education and school systems are launching efforts to promote the changes that need to be made. Underneath their efforts are either explicit or implicit theories of change driving their decisions.
A theory of change includes three elements. The first two are planned actions, selected after the study and application of research on large-scale change, curriculum implementation, and change in practice, sequenced to accomplish the intended outcomes. The third element is the set of assumptions that underpin the selection and sequence of the planned actions. These assumptions explain the rationale for the planned and sequenced actions to enact change.
For example, many state departments of education developed crosswalks between existing curriculum and Common Core standards. Those that did likely assumed that the crosswalk eases the transition between what exists and what's new. Other states chose to emphasize the difference between Common Core and existing standards, based on the assumption that emphasizing the differences enhances understanding of the change in student learning and instruction embedded in Common Core.
Assumptions about professional learning vary as well. Some states and districts turn to long-standing practices in professional learning as they plan the necessary learning for educators. The assumption is that existing practices build sufficient foundational knowledge to support implementation of Common Core standards. Some states and districts are using approaches to professional learning that include training cadres of leaders, facilitators, or trainers to disseminate information, develop practice, and broaden support. An assumption driving this action is that bringing the standards to, as one state DOE person described it, 800,000 students, 2,200 schools, and 25,000 educators, requires more hands on deck. Another assumption is that moving the support closer to the point of practice increases personalization and meaningfulness of the support.
Too few states and districts are thinking about what lies beyond the initial dissemination of fundamental knowledge. To make substantive changes in teaching and learning, professional learning must be a continuous process sustained over a period of time that engages educators in learning from experts and with and from one another.
State departments of education and school districts must consider a different set of assumptions and practices about professional learning. The research-based Standards for Professional Learning provide a solid foundation upon which to base decisions about professional development for Common Core. If the standards become the set of assumptions that drive actions to implement Common Core, professional learning will be transformed to model what teaching and learning will look like in classrooms.
Many districts and state departments of education are approaching the impending changes and the essential professional learning required to achieve full implementation of Common Core with deliberate planning and attention to long-term support. Achieving the necessary changes to realize the promise of Common Core requires explicit and publicly communicated theories of change based on clearly articulated assumptions. With a solid theory of change in place, state departments of education and school districts are far more likely to plan and enact intentional, purposeful, and coherent efforts to implement Common Core standards and to prepare all students to be college- and career-ready.
Senior Advisor, Learning Forward |
Soil… It’s Alive!
IN THE GARDEN for February 2013, Kendall Weyers, Nebraska Statewide Arboretum
The most important factor in growing healthy landscape and food garden plants may be the least appreciated or understood: soil. It's not easy to see, but good soil is a highly functioning and incredibly dynamic ecosystem. According to the USDA Natural Resources Conservation Service, “soil is by far the most biologically diverse part of the Earth.”
The complex soil food web is versatile and adaptable, and includes microorganisms (bacteria, protozoa and fungi), earthworms, spiders, beetles, springtails, pillbugs, ants and other arthropods. That list might make your skin crawl, but all these creatures work together to contribute to more productive soil. They:
- Process organic matter. Each organism on the list helps break down organic matter on top of and in the soil, improving the physical and chemical properties of the soil. The main benefit of this process is the nutrient cycling that occurs, recycling the organic residues into nutrients to make them available and accessible to plant roots.
- Create beneficial symbiotic relationships. There are specialized bacteria and fungi that form mutually beneficial relationships with plants. Rhizobia are bacteria that allow legumes to collect nitrogen from the air. Mycorrhizae are host-specific fungi that attach to roots and create extensive systems of filaments that act like an extension of the root system. This greatly increases the plant’s ability to take up water and nutrients and increases the plant’s tolerance of environmental stress.
- Filter and store water and air. As the creatures move through the soil, they create channels and space for water and air to move down into the soil. This increases the availability and capacity for these soil components, both of which are critical to plant health and survival.
- Control pests. A biodiverse soil does a tremendous job of keeping a wide range of pest organisms in check. Keep this in mind before using pesticides—they reduce the number of beneficial organisms, not just the intended target pest.
- Stabilize soil. One result of microorganisms breaking down organic matter is a highly stable material called humus. Humus binds soil particles together into aggregates or clumps. This improves soil structure and makes the soil more resistant to erosion. Humus also buffers the soil pH, helping to keep it in a range ideal for plant growth.
- Store carbon. Humus, the end product of organic matter breakdown, can contribute greatly to carbon storage. It is highly stable and can sequester carbon for decades.
There are a number of things you can do to increase the biodiversity of your soil and reap the many benefits diversity provides. The good news is that these organisms are probably already present. Even if their numbers are low, populations will increase rapidly with favorable conditions you can provide by:
- Adding organic matter. Organic matter is the key food in the soil food web. It can be increased by incorporating compost and plant residues into the soil.
- Mulching with organic materials. Mulch helps moderate soil moisture and temperature, and adds organic matter as it breaks down. Plus it helps prevent compaction from foot traffic and heavy rain.
- Watering properly. Soil organisms thrive in damp but not soggy conditions. Over-irrigating can create adverse conditions harmful to many beneficial organisms.
- Limiting use of pesticides. As mentioned earlier, applications of pesticides (insecticides, fungicides, herbicides) often kill beneficial organisms in addition to the target pest.
- Limiting tillage. Excessive tilling can be devastating to beneficial fungal networks and soil structure.
- Avoiding compaction. Compaction, whether from pets, people or vehicles, reduces the ability of soil to provide essentials of air and water.
Soil is a hidden miracle of nature, a complex web of self-sustaining interaction right under your feet. Considering the many benefits of a biodiverse soil, it is well worth the time to nurture and appreciate it.
Corals are in trouble, but they could soon receive the help they need.
The National Oceanic and Atmospheric Administration (NOAA) proposed listing 66 species of reef-building corals under the Endangered Species Act (ESA), which is a step in the right direction for coral conservation. Being added to the Endangered Species list is more than a title upgrade (or downgrade, really). Listing species as endangered would prohibit harming, wounding or killing the species. It also prohibits the extraction of listed species, which includes importing or exporting the corals.
What has made these corals candidates for the list? A number of things: pollution, warming waters, overfishing and ocean acidification threaten the survival of corals. These threats can make corals more susceptible to disease and mortality. Protections like endangered species listing are vital to preserving coral from threats and helping them cope with changing environmental conditions.
Corals are tremendously important economically and environmentally. Corals provide habitat to support fisheries that feed millions of people; create jobs and income for coastal economies through tourism, recreation and fisheries; and protect coastlines from storm damage. One independent study found that coral reefs provided about $483 million in annual net benefit to the U.S. economy from recreation and tourism activities. Marine life such as fish, crustaceans and sea turtles rely on corals for food, shelter and nursery grounds. Over 25% of fish in the ocean and up to two million marine species use coral reefs as their home. Because of their significance, supporting NOAA’s proposed ESA listing for 66 coral species is incredibly important to their survival and our local economies.
Just Graph It
Making your own graph is a fun way to look at information! Making it online could be even better! Students can use the free sites provided to learn and practice graphing strategies that will allow them to dig deeper into information and gain a stronger understanding of how to organize and interpret data.
- Be exposed to various sites that can be used for graphing data.
- Be able to explore various types of graphs that they can create.
- Be able to gather some data and create a graph of their choice.
- Graph: A graph is a diagram of values, usually shown as lines or bars.
- Data: Data is facts or information that is collected together and shown as numbers.
- Scale: A scale is the numbers that show the units used on a graph.
- Title: A title is the label at the top that tells what the graph is showing.
- Key: A key is a part of a map, graph, or chart that explains what the symbols mean.
- Table: A table is a visual representation that is used to organize information and to show patterns and relationships.
- Histogram: A histogram is a graphical display where the data is grouped into ranges (such as "40 to 49", "50 to 59", etc), and then plotted as bars. Similar to a Bar Graph, but in a Histogram each bar is for a range of data.
- Bar Graph: A bar graph is a graph that uses the height or length of rectangles to compare data (see the short code sketch after this vocabulary list).
- Pictograph: A pictograph is a graph that uses pictures or symbols to show data.
- Pie graph: A pie graph is a circular chart divided into sectors; each sector shows the relative size of each value.
- Line plot: A line plot is a diagram showing frequency of data on a number line.
- Vertical line (y axis): A vertical line is a line that moves up and down.
- Horizontal line (x axis): A horizontal line is a line that moves from side to side.
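For teachers who want to connect these terms to a digital tool beyond the websites listed below, here is a minimal sketch (an addition to the original lesson; the sample data values are invented) that builds a simple bar graph in Python with the matplotlib library:

```python
# Minimal bar-graph sketch with matplotlib (invented sample data).
import matplotlib.pyplot as plt

pets = ["Dogs", "Cats", "Fish", "Birds"]   # categories along the x axis
counts = [12, 9, 4, 3]                     # data collected from the class

plt.bar(pets, counts)
plt.title("Pets Owned by Our Class")       # the title tells what the graph shows
plt.xlabel("Type of Pet")                  # horizontal (x) axis label
plt.ylabel("Number of Students")           # vertical (y) axis label
plt.show()
```

The same vocabulary (title, scale, axes, data) applies whether the graph is made by hand, on a website, or in code.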
Decide which of these graphing tools you would like to demonstrate to your students.
Go to the National Library for Virtual Manipulatives. (NVLM)
Explore the graphing manipulatives. Manipulatives are under the section called Data Analysis and Probability and can be found by scrolling to the bottom of the page. The application requires Java in order to run.
Watch the General Tutorial from NLVM website.
Watch the NLVM Bar Chart Tutorial Video.
Go to the NCES Create A Graph website.
Go to the NCES Tutorial.
Go to Mr. Nussbaum’s Graphmaster website.
Watch the tutorial.
- See Accommodations Page and Charts on the 21things4students.net site in the Teacher Resources.
- The teacher will say to the students, “We use graphs all the time. What do we use them for?”
- Suggested responses from the students might be: to collect data, to find trends, to compare information.
- Teacher will then say to the students, "Today we will be looking at various tools that you can use to create graphs online."
- The teacher will demonstrate the 3 sites or sites of their choice.
- Go to the National Library for Virtual Manipulatives.
- Create a quick bar graph using the graph creator.
- Time permitting: Say, “Now you will have an opportunity to create a graph using the data I provide on the board.”
- Present students with some data sets and have them build a graph based on data set given.
5b. Students collect data or identify relevant data sets, use digital tools to analyze them, and represent data in various ways to facilitate problem-solving and decision-making.
5c. Students break problems into component parts, extract key information, and develop descriptive models to understand complex systems or facilitate problem-solving.
MITECS: Michigan adopted the "ISTE Standards for Students" called MITECS (Michigan Integrated Technology Competencies for Students) in 2018.
Devices and Resources
Device: PC, Chromebook, Mac, iPad
Browser: Chrome, Safari, Firefox, Edge, ALL
Google Play: Simple Graph Maker-Free
Bar Graphic Rubric
Create a Graph (NCES)
CONTENT AREA RESOURCES
Read a class graph and write a compare-and-contrast paper on the data.
Use Graphs to find mean, median and mode.
Graph data from a science experiment.
Graph demographic data.
This task card was created by Courtney Conley, Utica Community Schools, February 2018. |
Each year we do an activity that involves Archimedes' principle. You might wonder...why do this in chemistry? Leading up to the activity, students do a series of labs and activities that involve measuring, accuracy, precision, significant figures and density. The culminating guided inquiry activity has students take an object, find its volume in multiple fluids and find its mass in multiple fluids. An examination of class data starts to show that the volume of a solid does not change in fluids, but the apparent mass in air and the apparent mass in different fluids are different. They also use the density of the fluid and the volume of the fluid displaced by the submerged object to find the mass of the fluid displaced. The hope is to guide students' thinking to help them understand that the apparent loss of mass, or the buoyant force of the fluid against the object, is the same as the mass of the fluid displaced. In theory, this should be a great lab. The reality is that the instruments we have are less than ideal, it is tough to guide students with bad data, and there are many connections that need to be made.
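As a quick numerical sanity check, here is a minimal sketch in Python (my addition; the object and fluid values are invented round numbers) showing that the expected apparent mass loss equals the mass of the fluid displaced:

```python
# Archimedes' principle sanity check (invented example values).
# Apparent mass loss in a fluid = mass of fluid displaced = rho_fluid * V.

rho_water = 1.00    # g/cm^3, density of the fluid
volume = 25.0       # cm^3, volume of the submerged object (constant in any fluid)
mass_in_air = 67.5  # g, balance reading in air

mass_fluid_displaced = rho_water * volume           # 25.0 g
apparent_mass_in_fluid = mass_in_air - mass_fluid_displaced

print(f"Mass of fluid displaced:   {mass_fluid_displaced:.1f} g")
print(f"Expected reading in water: {apparent_mass_in_fluid:.1f} g")  # 42.5 g
```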
I started teaching in a chronological order when I began using Modeling Instruction in my classroom. During the second year of "walking in the footprints of the scientists that came before us", I wanted my students to see where they were walking and a colleague and I came up with the idea of making footprints for each of those scientists and posting them on a timeline.
How do teachers encourage building individual lab skills in classes of over 30 students where labs are done in groups of five or six students? My science department collaborates daily, and we have been discussing this concern for a few years now. Many trials and errors have occurred.
This school year my district is launching a 1:1 Chromebook initiative. 6th and 9th graders will receive their Chromebooks next semester as part of the rollout. In the meantime, I continue to have access to my Chromebook cart from the Blending Learning pilot I participated in last school year. My goal is to incorporate even more tech use when appropriate; so far, I have increased Chromebook use in my classroom for things like warm up questions, EdPuzzles, and quizzes. My experience with quizzes has been especially interesting.
College Board offers an excellent online resource for teachers and students. It's not free, but my school district pays the bill. AP Insight provides curriculum outlines, teaching ideas and resources, student handouts, and digitally-graded assessments. I have elected to begin using the resources in first semester honors chemistry.
Here is what I told my students as we were studying gas laws: I have a bag of potato chips at sea level and then I go to Denver, where the pressure is lower. What happens? Draw and build a model on your whiteboard.
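A minimal sketch of the arithmetic behind the chips-bag question (my addition; the pressures and starting volume are rough, assumed values, and temperature is assumed constant) using Boyle's law, P1·V1 = P2·V2:

```python
# Boyle's law estimate for the chips-bag demo (assumed round-number values).
p1 = 1.00   # atm, pressure at sea level
v1 = 1.00   # L, volume of gas sealed in the bag
p2 = 0.82   # atm, approximate pressure in Denver

v2 = p1 * v1 / p2   # constant temperature: P1*V1 = P2*V2
print(f"Bag volume in Denver: {v2:.2f} L")  # about 1.22 L -- the bag puffs up
```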
Is it possible to use materials found in high school chemistry labs to extract and subsequently detect cocaine on dollar bills? Let me know what you think after reading this blog post!
In the article “Reactions Catalyzed by an Assault on a Favorite Principle”1, Emeric Schultz (who incidentally taught me General Chemistry, was my undergraduate advisor, and is now a dear friend and colleague) argues the following:
“Although I have read and heard about ‘big ideas’ in chemistry, I have never seen a commensurate effort to work toward a high school chemistry program that starts from…big ideas and works down.” |
posted by Annabella Richards.
The points A, B, C, D and E are located on a straight line in that order. The distance from A to E is 20 cm. The distance from A to D is 15 cm. The distance from B to E is 10 cm. C is halfway between B and D. What is the distance from B to C?
First you draw a straight line (measured at 20 cm) with A at the beginning and E at the end. You know A to D is 15 cm, so you can mark D. Next, since B to E is 10 cm, you can mark B. Since C is halfway between B and D, you can get the distance from B to C.
AB = 20 - 10 = 10 cm, so BD = 15 - 10 = 5 cm, and BC = 5 / 2 = 2.5 cm.
Three years ago, researchers found evidence that Saturn’s moon Dione was home to an ocean located deep beneath its surface when it originally formed, and now, a new study appearing in Geophysical Research Letters suggests that said body of water might still be there.
Research published in the journal Icarus in March 2013 used images collected from the NASA Cassini spacecraft to hypothesize that the moon’s topography suggested that a subsurface ocean caused Dione’s crust to become cracked and stressed early on during the satellite’s lifespan.
Now, researchers from the Royal Observatory of Belgium and their colleagues used computer modeling techniques to demonstrate that gravitational data observed during Cassini’s flybys of the moon can be explained if its crust is floating above an ocean located roughly 62 miles (100 km) below its surface, the American Geophysical Union (AGU) explained in a blog post.
If confirmed, this would make Dione the third of Saturn’s moons found to harbor oceans under their surface, joining Titan and Enceladus in that increasingly less-exclusive group. It would also suggest that the subsurface waters have likely persisted throughout the moon’s history, meaning that Dione may be home to a long-term habitable zone for microbial life.
Computer model also sheds new light on Enceladus
In a statement, the authors explain that this newfound ocean is likely “tens of kilometers” deep and surrounds “a large rocky core” in the heart of the moon. From within, they explained, Dione is similar to but larger than Enceladus, suggesting that both of the satellites have icy shells that are made up of global icebergs immersed in water and supported by deep keels.
Similar models have been used by scientists in the past, but that research suggested that Dione lacked an ocean and Enceladus was home to an extremely thick crust. According to lead author Mikael Beuthe, the researchers “assumed that the icy crust can stand only the minimum amount of tension or compression necessary to maintain surface landforms,” as additional stress “would break the crust down to pieces.”
Based on this model, Beuthe’s team determined that Enceladus’ ocean is closer to the surface than Dione’s, especially near the moon’s south pole, where geysers have been spotted erupting through a thin layer of crust. The findings support the discovery made by Cassini last year that the moon undergoes extensive back-and-forth oscillations or libration during its orbit.
Dione, on the other hand, is home to a much deeper ocean located between its crust and core, according to the new study. It too undergoes libration, co-author Antony Trinh explained, but it does so at levels that the Cassini probe is unable to detect. Of course, this is only a prediction, Trinh noted, and will need to be confirmed or disproven by sending a future orbiter to analyze Saturn’s moons – one with more sensitive instruments than Cassini.
The study provides “the first clear evidence for a present-day ocean within Dione,” the authors wrote. With this discovery, there are now three “ocean worlds” orbiting Saturn, along with three orbiting around Jupiter and at least one believed to be in the Pluto system based on observations made recently by the New Horizons spacecraft. In light of these observations, Beuthe said that he believes future missions should be sent to explore the Uranus and Neptune systems.
Image credit: NASA/JPL
Broccoli (Brassica oleracea) is a cool-season crop grown in spring or fall for its vitamin-rich heads. Unlike some other garden crops, broccoli grows best when introduced into the garden as a transplant rather than from seeds. Properly preparing the site and caring for young plants encourages successful transplanting and rapid establishment. Young broccoli plants are ready for transplant about 5 or 6 weeks after seeds are sown, when plants have three or four true leaves.
Harden young broccoli plants off beginning about two weeks before planting. Reduce water and fertilizer applications and gradually expose them to the temperature, light, wind and moisture levels in their future site by first placing them outdoors in a protected, shady site for a few hours. Over several days, increase the time the broccoli plants spend outdoors and the light they receive until their growing conditions closely reflect those in the garden or planting bed.
Break up the top several inches of soil and work 2 to 4 inches of organic matter into the soil. Broccoli grows best in fertile, well-drained soil with a pH between 6.0 and 7.0. If necessary, work a balanced, slow-release fertilizer into the planting site. The best way to determine if a fertilizer is warranted is to conduct a soil test.
Dig a small hole for each transplant. Space holes for broccoli transplants 12 to 18 inches apart in rows spaced 2 to 3 feet apart.
Set a sturdy, healthy young broccoli plant carefully in each hole, moving it by the mass of growing medium around the roots or by holding the leaves. Do not hold the plant by its stem or growing tip. Plant the broccoli slightly deeper than it was previously grown and fill the space around the roots with soil, gently firming the soil down to hold the plant in place. If the broccoli was grown and transplanted in a peat pot, completely bury the lip of the pot to prevent wicking and rapid drying of soil around the roots.
Water the transplanted broccoli in thoroughly but gently to settle soil around the roots and remove air pockets. Regular moisture is necessary for establishment. Provide the broccoli with 1 to 1.5 inches of water weekly when rainfall is inadequate.
Spread a layer of mulch about 2 or 3 inches thick around the broccoli, leaving a few inches of open space around the broccoli stem. Mulch conserves soil moisture, blocks weeds and encourages faster broccoli growth.
Things You Will Need
- Garden fork or tilling implement
- Organic soil amendment such as well-rotted compost or aged manure
- Balanced fertilizer, if needed
- Time broccoli cultivation so it will grow and mature when daytime temperatures are between 65 and 80 degrees Fahrenheit, if possible. Some heat-tolerant cultivars can grow in slightly warmer conditions.
- Starting broccoli seeds in peat pots or other containers that will break down readily in the soil minimizes shock that occurs during transplanting.
- Avoid leaving young broccoli plants in their original container for too long, as this can cause plants to "button", or form small heads that flower early.
Grain Sorghum Facts
Grain Sorghum, also called milo, is a member of the grass family. The round starchy seed's tolerance for heat and drought plays a critical role in agricultural production throughout the state of Texas.
Not only is it an important grain crop, it is also very important as a forage, hay, and silage crop generating over $1 billion for Texas annually.
- Grain Sorghum is one of the oldest known grains originating in Africa and India.
- Benjamin Franklin is credited with introducing the first crop to the United States in the 1700s.
- Before the 1940s, most grain sorghums were 5 to 7 feet tall, which created harvesting problems.
- Today, sorghums have two or three dwarfing genes in them and are 2 to 4 feet tall.
Growing Grain Sorghum in Texas
- Grain Sorghum seeds are planted in rows during the spring, March to April, when soil temperatures exceed 65 degrees F.
- Growth is not very rapid until the plant is about 10 inches tall. This is because the plant is establishing a root system and taking up nutrients rapidly.
- Next, the plant begins to produce leaves and the stem begins to grow. The production of the head that holds the round seeds begins to develop at the top of the plant.
- The new leaves are a brilliant green and the seeds darken to a color depending on variety, usually red in Texas. Other varieties may be white, yellow, or bronze.
- When the grain sorghum plant reaches maturity and is ready for harvest, it is approximately four feet high, the leaves have turned to a light brown, and the seeds have hardened.
- Farmers use combines to harvest their grain sorghum. The combine cuts the seed head off and threshes, or removes, the seed from the head.
- The grain is loaded on to trucks and stored at the farm in a grain bin to sell later or delivered to a local grain elevator where it is then sold to many different industries.
- Grain that is stored in bins must be stored at specific temperatures and moisture content until it is used for seed, animal feed, or sold to industries for food and non-food uses, or to export to another country.
- While most other grains are sold by the bushel, grain sorghum is commonly sold by the hundredweight (cwt, increments of 100 pounds).
- Grain Sorghum is well suited for Texas because it does not require much water and it grows well during the long, hot summers. Most grain sorghum is not irrigated.
- Grain Sorghum is a drought-tolerant, versatile grain with many varieties.
- Some varieties can be used in the cereal, snack food, baking and brewing industries.
- These varieties have a white berry and tan glumes on a tan plant.
- Other varieties are used in the US for livestock feed, pet food, industry and ethanol.
- These may include yellow, red and bronze sorghums.
Sorghum’s Food Characteristics
- Gluten Free
- Antioxidant Dense
- Absorbs & Enhances Flavors
- Environmentally Friendly
- Baked Goods
- Grits & Couscous
Grain Sorghum Uses
- The seed can be ground or mixed into feed for dairy cattle.
- The entire plant can be made into high-moisture grain silage when cut at 25-30% moisture.
- After grain has been harvested, livestock can be pastured on sorghum stubble utilizing both roughage and dropped seed heads.
- Pet food manufacturers include this highly digestible carbohydrate grain in their feed formulations.
- Distillers grain, an ethanol by-product, is a valuable feed for both feedlot cattle and dairy cows.
- Used as a substitute for wood to make wallboard for the housing industry.
- Used in biodegradable packaging material that does not conduct static electricity. This is beneficial for the shipping of electronic equipment.
- About 15% of the U.S. grain sorghum crop currently is used for ethanol production, with one bushel producing the same amount of ethanol as one bushel of corn.
- Sorghum is the only crop that can effectively be utilized into starch, sugar, and cellulose ethanol production.
- Worldwide, sorghum is a food grain for humans.
- Used in snack foods in the U.S. and Japan such as granola bars and cereals, baked products, dry snack cakes, and more.
- Replaces wheat flour with a gluten-free flour for use in a variety of baked goods.
Worldwide, about 49% of the sorghum consumed is for food. Sorghum provides an important part of the diet for many people in the world in the form of unleavened breads, boiled porridge or gruel, malted beverages, and specialty foods such as popped grain and beer.
Sources: Texas Farm Bureau via txfb.org |
Introduction to Calculus - At A Glance:
Lots of fields can benefit from the concepts in calculus. In cases where relationships can be graphed, calculus can be used. Velocity, which is similar to speed but includes direction, is a result of the relationship between distance and time. How fast is a diver or a long jumper going upon impact (or at any point during the dive or jump)? What path does a gymnast follow when she releases the uneven bars? How long does it take for a car to drive from Point A to Point B?
All of these questions can be answered using calculus.
In a graph of distance and time, velocity is the derivative. Chemicals react with one another, and calculations about the rates at which they react involve calculus. Engineers might use calculus for optimization. For instance, they can find the largest volume that can be held by a soda and/or pop can, while using the smallest possible amount of aluminum. They can also figure out the best size can top and bottom for optimal stacking ability.
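As a minimal worked example (an addition to the text, using an invented distance function), the Python library sympy can show the velocity-as-derivative idea directly:

```python
# Velocity as the derivative of distance (invented example function).
import sympy as sp

t = sp.symbols("t")   # time
s = 5 * t**2          # distance traveled: s(t) = 5t^2

v = sp.diff(s, t)     # velocity is the derivative ds/dt
print(v)              # 10*t
print(v.subs(t, 3))   # velocity at time t = 3 is 30
```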
Video game engineers might use various forms of calculus to simulate real-life situations. Depending on the angle that a force is applied, where should those angry birds land after sling-shot release? Will the pigs pay? Video games are steeped in calculus simulations. |
Read this article to learn about the following three important types of unit hydrographs, i.e., (1) Normal or Average Unit Hydrograph (Av. UH), (2) Dimensionless Unit Hydrograph, and (3) Instantaneous Unit Hydrograph (IUH).
1. Average Unit Hydrograph:
To obtain a normal or average unit hydrograph for a basin, several storms are taken and unit hydrographs are plotted for each of them. Obviously these unit hydrographs will be of different unit durations. They are then reduced to a suitable unit duration by the ‘S’ hydrograph method.
All such unit hydrographs of equal duration are then plotted on the same sheet, overlapping each other with their peaks on the same ordinate. The unit hydrographs with wide peaks are neglected. Using the plottings of the other unit hydrographs, a mean curve is drawn which gives the normal or average unit hydrograph for that basin. This unit hydrograph may be tested by reproducing an observed flood by applying the corresponding effective rainfall. Such a unit hydrograph is considered quite adequate for all design floods except the probable maximum flood.
2. Dimensionless Unit Hydrograph:
It is a unit hydrograph for which the discharge is expressed as the ratio of discharge to peak discharge and the time is expressed as the ratio of time to lag time. Knowing the peak discharge and lag time for the duration of effective rainfall, the unit hydrograph can be estimated. The peak discharge qp and the lag time may be estimated by the Snyder method. A short computational sketch follows the advantages below.
1. It eliminates the effects of basin size and shape, and facilitates comparison of unit hydrographs of basins of different sizes and shapes.
2. It is an excellent means of averaging unit hydrographs.
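Here is a minimal sketch of the conversion in Python (my addition; the hydrograph ordinates and lag time below are invented): each discharge ordinate is divided by the peak discharge and each time by the lag time.

```python
# Convert a unit hydrograph to dimensionless form (invented ordinates).
times = [0, 1, 2, 3, 4, 5, 6]     # h, time since start of effective rainfall
flows = [0, 8, 20, 15, 9, 4, 0]   # m^3/s, unit hydrograph ordinates

q_peak = max(flows)               # peak discharge, here 20 m^3/s
t_lag = 2.0                       # h, assumed lag time for this example

dimensionless = [(t / t_lag, q / q_peak) for t, q in zip(times, flows)]
for t_ratio, q_ratio in dimensionless:
    print(f"t/t_lag = {t_ratio:.2f}, q/q_peak = {q_ratio:.2f}")
```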
3. Instantaneous Unit Hydrograph:
If the duration of the effective rainfall becomes infinitesimally small, the resulting unit hydrograph is called an instantaneous unit hydrograph (IUH). In other words, for an IUH the effective rainfall is applied to the drainage basin in zero time. Of course this is only a fictitious situation and a concept to be used in hydrograph analysis.
The Encyclopedia Americana (1920)/Evergreens
EVERGREENS. Those plants which imperceptibly shed their leaves and acquire new foliage, without noticeable change in their aspect, and those which, like certain biennials and alpines, maintain their leaves throughout the winter season so that they may make a quick start in the spring, are called evergreens. In the northern countries cultivated evergreens are roughly divided into two groups popularly called conifers and “broad-leaved” evergreens, the latter including laurels, rhododendrons, hollies, box, etc. The tropical flora is chiefly evergreen, and some trees, like the Magnolia glauca, that shed their foliage in the north, retain it in the south.
This evergreen character, especially where the plants are subjected to extremes of drought and wetness, or of heat and cold, has given many devices for regulating transpiration or the deleterious effects of too much moisture, such as the rolling of leaves, waxy deposits on the leaves, and various curious arrangements of pits, hairs and cells. Wherever the foliage is persistent for several years, as is the case of the holly and of many tropical trees and epiphytes, it is often thick and leathery, being provided with a thickened cuticle, especially where the leaf undergoes drought periodically. Other evergreens like cacti and rock-plants become fleshy or succulent, when living in arid conditions, storing water in their tissues and sometimes retaining it there with mucilaginous juices and salts. Furthermore they are apt to assume a more or less cylindrical shape in both leaf and stem, the foliage often being reduced to mere needles and scales, or being absent entirely. This rodlike, nearly leafless, condition is particularly noticeable in the so-called whip-plants of arid regions, which are reduced to switch branches with scales for leaves, thus greatly reducing the evaporating surface during the heated term. They often occur on the Mediterranean shores where another type of device for controlling exhalation is conspicuous; for there the evergreens are really gray, like the lavender, hoary with their envelopes of hair, just as some alpine plants, notably the edelweiss, are smothered in felted hairs. In the shadowless forests of Australia many trees reduce their evaporating surfaces by presenting only the edges of their leaves to the midday sun.
Coniferous evergreens furnish some of our most valuable forest products in the way of timber, naval stores and tanning materials, and also various food products as nuts and bark, chiefly of value to the aborigine. One or two, as the West Indian yacca and the yew, furnish cabinet woods, but the latter seems to have been used wherever it grows, chiefly for bows. Most of them also are useful for windbreaks, hedges or for ornamental planting, where shelter, concealment or winter-color is desired; various species being adapted for differing soils and climates. Some of them, as the arbor-vitae and yew, stand shearing well, and can be pruned into sundry geometrical forms; holly and box share this distinction, and the custom was formerly carried into grotesque excess in topiary gardening.
Laurel, rhododendrons and other “broad-leaved” evergreens are often valuable in shrubberies not only on account of their winter verdure but because they also have handsome blossoms or fruit; they moreover afford shelter for birds.
Their long life and perpetual verdure have caused many of the evergreen tribe, particularly the fir and mistletoe, to be included among sacred plants; and they have become adopted as symbols of immortality, of resurrection and of perennial remembrance, at funeral services and in graveyards. Several kinds, as the yew, served as “palms” on Palm Sunday. On the other hand, yews and cypresses, especially the latter, serve as emblems of eternal death and are frequently referred to in this connection in classical literature “with every baleful green denoting death.”
Evergreens are favorite plants for decorating during the Christmas holidays; in England a certain order was observed in their disposal, as we find in Herrick's ‘Ceremonies for Candlemas Eve’:
Down with the rosemary and bays,
Presumably these holiday garlands and decorations of evergreens — rosemary, ivy, laurel, box, holly and mistletoe — were survivals, with the Christmas tree, of pagan ceremonies and tree-worship, more or less incorporated in the rites of the early Christian churches; the mistletoe, however, was so intimately connected with Druidical rites that it was excluded from the Church decorations. There is a large trade in these Christmas greens, both of the foreign and native kinds, the latter including southern smilax, long-eared pine, ground-pine and hemlock. |
I am having an issue understanding the changes that take place when we decrease the pressure of a reaction. I have understood that when we increase the pressure, the side having the greater number of moles wants its moles to go to the other side where it is more empty (this is how I learnt it). But I fail to understand why, if we decrease the pressure by increasing the volume, the side having fewer moles goes to the side having more moles. Also, in the case of adding inert gases, why does the equilibrium shift in the direction in which a larger number of moles of gas are formed? Please help.
The system doesn't "want" anything. This is a major misconception with Le Chatelier's Principle.
Think about the following reversible reaction, everything in the gas phase:
A + B = C
We'll simplify the rate law of the forward reaction to: rate = kf[A][B]
The reverse rate law is rate = kr[C]
If the volume is doubled, what happens to the rate of the forward reaction? It is quartered. What happens to the rate of the reverse reaction? It is halved. So, the reaction "shifts to reactants".
If the volume is halved, what happens to the forward rate? It is quadrupled. The reverse rate is doubled. The reaction "shifts to products".
What happens when an inert gas is added? Since the inert gas appears in neither the forward nor the reverse rate law, nothing happens.
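Here is a minimal numerical sketch of that argument (my addition; the concentrations and rate constants are invented round numbers):

```python
# Rate comparison before/after doubling the volume (invented values).
kf, kr = 1.0, 1.0     # rate constants (unchanged by a volume change)
A = B = C = 0.10      # mol/L, starting concentrations

def rates(a, b, c):
    return kf * a * b, kr * c   # forward = kf[A][B], reverse = kr[C]

f1, r1 = rates(A, B, C)
f2, r2 = rates(A / 2, B / 2, C / 2)   # doubling V halves every concentration

print(f"forward: {f1:.4f} -> {f2:.4f} (quartered)")
print(f"reverse: {r1:.4f} -> {r2:.4f} (halved)")
# The forward rate falls more than the reverse rate,
# so the net reaction runs toward reactants.
```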
I'm afraid you have learned some misconceptions.
Le Chatelier's Principle was developed before much of what is studied in a general chem course today was known. It has been used to manipulate the outcome of reactions. All of the predictions made by Le Chatelier's Principle can be made by understanding how reactions work at the particle level. Le Chatelier's Principle is from the macroscopic perspective and leads many students to generate misconceptions. |
Definition - What does Adaptive Software mean?
Adaptive software is specialized software designed for physically challenged users. This software usually runs on specialized hardware.
This term is also known as assistive software.
Techopedia explains Adaptive Software
There are people who cannot use computers normally because of their physical disabilities. Adaptive software makes it possible for such persons to enjoy the benefits of using a personal computer. Otherwise, they may not be able to do so for either work or recreation.
Some examples of adaptive software include:
- Narrator (within the Windows Accessibility features): This software can read aloud menu commands, dialog box options and more. Plus, it can announce events on screen and read typed characters.
- Speech-to-Text Adaptive Software (also known as speech recognition software): This software can type spoken words into a computer.
April 11, 2013
Science Genius: Creating a Rhyme
A rhyming/rap activity for educators and students created by Christopher Emdin and Timothy Jones of Columbia University and Thaisi Da Silva and Allison McCartney of PBS NewsHour Extra.
Music, Science, Arts & Culture, Technology
One class period plus an assignment
In an effort to engage students, legendary rapper GZA has teamed up with Columbia University Teachers College professor Christopher Emdin to use hip-hop to teach everything from biology to physics.
Students will watch the PBS NewsHour report “Songs in the Key of Biology: Students Write Hip-Hop to Learn Science” and have a discussion about using hip-hop music as a tool in the classroom.
Warm up questions:
- What keeps you engaged in the classroom?
- How do you learn best? What techniques do you have for retaining information?
- What do you think could be done in the classroom to make learning science more fun?
- What did you find most interesting about this video?
- Do you think you would learn well in this program? Why or why not?
- Do any of your classes use interdisciplinary techniques to teach you information? If so, how do you feel about them?
Next, students will watch GZA’s science rap:
After students have watched both videos, ask them to work independently and research a science topic they’d like to write a rap about.
Students will need a chorus or hook for the rap they will create. A chorus has been provided for them. This will also help acclimate students to the process of writing and performing their raps.
Sometimes in the world it is hard to dream
Based on realities my eyes have seen
Formulate rhymes from life as a thesis
This is what makes me a Science Genius
(Each line is 10 syllables, which gives it a natural rhythm. Practicing this chorus will help students construct their rhymes.)
Some/times/ in/ the/ world/ it/ is/ hard/ to/ dream
Based/ on/ re/a/li/ties/ my/ eyes/ have/ seen
For/mu/late/ rhymes/ from/ life/ as/ a/ the/sis
This/ is/ what/ makes/ me/ a/ Sci/ence/ Ge/nius
After students have completed some research on a science topic, have them complete the following steps:
- Write down the first (8) words that come to mind. (These will most likely be science terms you have just learned.)
- After you have completed the first step, write down the first (5) words that come to mind when you think of each of the first (8) words.
- From your list of (40) words identify all the words that may rhyme and that you could use as “end words” in a rhyme sequence.
- Create sentences that connect the first (2) rhyming words.
- Create as many of these rhyming sentences as possible.
- Review the rhyme when you have exhausted your rhyming words and begin to check for the following:
- Sentence structure and flow (matching syllables in sentences, like the example provided in the chorus)
- Coherence/logic in the sentences (how connected they are to the science topic you want to get across)
- Revise your work for clarity and coherence, and recite it to perfect your performance
- Begin getting creative by thinking of analogies and metaphors you can use to get your point across, and different (more complex) rhyme patterns you can use to develop the initial text
- Revise the rap you have crafted
- Continue the entire process until you create a rhyme you are comfortable performing
The Materials You Need
- Access to the Internet
- Writing utensils and notepads
- PBS NewsHour report: Songs in the Key of Biology: Students Write Hip-Hop to Learn Science
- Wu Tang Clan’s GZA Raps About Science
Common Core Standards
Relevant National Standards:
McRel Compendium of K-12 Standards Addressed:
- Standard 1: Knows the characteristics and uses of computer hardware and operating systems
- Standard 3: Understands the relationships among science, technology, society, and the individual
- Standard 6: Understands the nature and uses of different forms of technology
- Standard 1: Uses the general skills and strategies of the writing process
- Standard 4: Gathers and uses information for research purposes
- Standard 5: Uses the general skills and strategies of the reading process
- Standard 7: Uses reading skills and strategies to understand and interpret a variety of informational texts
- Standard 8: Uses listening and speaking strategies for different purposes
- Standard 10: Understands the characteristics and components of the media
Listening and Speaking
We all understand the challenges of teaching poetry in ELT classrooms, a point that Kent Grosh addresses from a broader perspective about education in this issue. But if we are able to teach well, poetry can add important aspects to our students’ language skills, including understanding metaphors, connotations, symbolic meanings, and so on. In this entry, Mabindra Regmi shares a lesson plan for teaching a particular poem. – Ed.
(in reference to the poem “A Girl” by Ezra Pound)
– Mabindra Regmi
by Ezra Pound
The tree has entered my hands,
The sap has ascended my arms,
The tree has grown in my breast-
Downward,
The branches grow out of me, like arms.
Tree you are,
Moss you are,
You are violets with wind above them.
A child – so high – you are,
And all this is folly to the world.
Challenges for a teacher
A Girl by Ezra Pound is a product of what is called the imagist movement in modern poetry. Preference is given to the picture that the poem portrays rather than to the meaning. There are many challenges that a teacher can face while teaching this type of poem.
One of the complications that the teacher might face while teaching the poem “A Girl” by Ezra Pound is the structure of the poem itself, for it does not follow standard metric stanzas. The lack of stanzas is perhaps deliberate, as the whole poem centered on the page looks like the picture of a tree. This is one of the qualities that an imagist poet aspires to: giving the poem the form of the topic under discussion.
Since the poem is created in order to paint a picture of the poet’s expression, it will be very difficult for the teacher to come to a singular conclusion as to what the poem might mean. Nonetheless, this does not affect the beauty and creativity of the text or the profound impact that it will inevitably have on the reader. The teacher has to be careful not to ladle out preconceived meaning for this poem, but to encourage the students to come up with their own interpretations. It is also advisable not to consider any of the interpretations as totally wrong.
Literary criticism permeates any literary text. Many a time, it is the text itself that draws on one or another type of literary criticism. Even if the theories are not explicitly discussed in a secondary school literature classroom, certain inferences are unavoidable. Because “A Girl” is a poem of post-modern times, it can safely be assumed to be critically appreciated through the lens of post-modernism in literature. But again, post-modernism is not a clear-cut literary theory, and the resulting explanation will retain some of the ambiguities and surrealisms that post-modernist theoretical explanations propagate.
A Lesson plan for teaching the poem
Draw a picture of a tree and a picture of a girl side by side on a blank piece of paper.
Read the poem “A Girl” by Ezra Pound. Look at the drawings that you have made. Now consider the questions below.
- Does the poem evoke a sense of comparison between a girl and a tree?
- How is the tree compared to a girl?
Make two columns and list down the words related to a tree and those related to a girl.
A poem can be interpreted in many different ways. Read the poem once again. What do you think the poet is trying to say in this poem?
There are many metaphors used in this poem. Can you make a list of them?
Lesson Plan: Teacher’s Copy
“A Girl” by Ezra Pound
Talk about pictures and paintings and how they might be similar to poems.
Ask the students to draw a picture of a tree and a picture of a girl and ask them to compare and see if there are any similarities.
Discuss and preteach the following vocabulary:
sap, moss, violet
Metaphors are devices used in writing poems or any other form of literature where a comparison is made between two things which have some similarities but are essentially different. Comparing words like “like” or “as” are not used.
He is a dog.
In the sentence above, (he) is compared with a (dog).
Ask the students to create metaphors for another object or person. Since a girl was used as the subject of the topic, you can use a boy instead to create some metaphors.
Suggested activities in the lesson plan
A few activities have been suggested in the lesson plan for teaching the poem “A Girl” by Ezra Pound.
Activity 1: Scene setting
Setting the scene for the matter to be taught will enable the students to activate their schema and, in turn, make the subject matter easier to comprehend. Here, as a scene-setting activity, the teacher is encouraged to talk about the similarities between the text of poems and pictures. Moreover, words can be an effective means to express picture-like representations. As an imagist poem, “A Girl” also paints a picture of a comparison between a girl and a tree.
The teacher can further ask the students to draw a picture of a tree and a picture of a girl and try to see if there are any similarities that exist between them. The students can work individually, in pairs or in groups and give a short presentation on what kinds of similarities they have found.
Activity 2: Preteach vocabulary items
Although the vocabulary items presented in the selected poem are reasonably simple and most of the words can be understood by students of secondary level, it is a good idea to discuss some of the items so that the students will have a complete comprehension of the poem that they are reading. Here, three words (sap, moss and violet) are suggested for discussion. Of course the list can be contracted or extended based upon the students' knowledge of vocabulary.
It is suggested that the vocabulary items are discussed both in the context of the poem and in a context-free manner. The words might be used in a different way in the poem, as so often happens while studying poetry. The students should have a clear concept of what the words mean in the poem before they start reading.
Activity 3: Reading and comprehension
Ask the students to read the text of the poem, preferably aloud. Ask them to answer the comprehension check questions given at the end. This will enable the students to get a basic gist of the poem and what it is trying to say.
Activity 4: Vocabulary used for comparison in the poem
There are different words used to describe the tree and to describe the girl. Ask the students to make a list of these words so that they can get an idea of how these two unlikely items are compared using a more or less equal number of words.
Activity 5: Finding the metaphors
One of the figurative devices used in this poem is metaphors. After explaining what metaphors are, the students can be assigned to make a list of all the metaphors used to compare the girl with a tree.
Activity 6: Creating Metaphors
After the students are familiar with the concept of metaphors, they can be asked to create some metaphors of their own. Here, the students are asked to create metaphors for a boy since the poem deals with metaphors dealing with a girl. |
Bacterial and other microbial infections have become a serious problem, especially in hospitals and other clinical settings, where the use of antibiotics and sterilizing chemicals has given rise to strains that are resistant to standard treatments. Scientists have explored other solutions, such as silver nano-particles that have been included in everything from computer keyboards to socks, but these particles have found their way into various ecosystems, causing some conservationists to sound alarms.
A company in Colorado has come up with a purely mechanical solution to the problem. Sharklet Technologies has created a material with a nano-textured surface that prevents bacteria and other microbes from getting a foothold, so that they cannot colonize and spread infection. The diamond pattern of small rectangular ridges was inspired by the surface of sharkskin. Sharkskin naturally resists the growth of organisms in the ocean, such as barnacles or algae, that can establish themselves on other sea creatures.
The use of Sharklet-patterned films in a simulated operating room setting was shown to “significantly reduce” surface contamination, though the material has not yet been tested in clinical trials. It could be useful in reducing infections in applications ranging from catheters and trachea tubes to adhesive films.
Devils Postpile National Monument
About the Park
Devils Postpile National Monument, located on the western slope of the Sierra Nevada range, was established to protect and provide access to the Postpile and Rainbow Falls. The Postpile is a striking formation of columnar basalt, rising up to 60 feet in height, formed from eruption and uniform cooling of basalt lava. The San Joaquin River transforms throughout Devils Postpile, from a broad, low-gradient meander to scattered pools, fast-flowing rapids, cascades, and finally culminating at Rainbow Falls, a waterfall that stands 101 feet tall.
Climate change presents significant risks and challenges to the National Park Service and specifically to Devils Postpile National Monument. Scientists cannot predict with certainty the severity of climate change or its effects, but even a relatively modest increase in temperature is expected to affect precipitation, fire regimes, and organism habitats in the local ecosystems. The most pronounced changes are likely to be seen in snowpack volume, surface water dynamics, and hydrologic processes. For example, regional average temperature increases would cause earlier snowmelt runoff, reduce summer base flow in local streams and rivers, lower snowpack volume at mid-elevations, and increase the incidence and severity of winter and spring flooding. Changes in the type and timing of precipitation are already being observed within the park and surrounding areas, as flow in many western Sierra Nevada streams has been observed to begin one to three weeks earlier than in the mid-20th century. Prolonged summer droughts have altered natural fire regimes and increased the potential for high-severity wildfires.
Increasing temperature and changing precipitation patterns could also result in a shift of specific habitat to higher elevations. Local flora and fauna with specific needs and limited mobility could be locally extirpated, resulting in a decline of biodiversity. For example, high alpine habitat may shrink or even disappear, leading to an irreversible loss in species such as pika, Belding’s ground squirrel, yellow bellied marmot, and Sierra Nevada bighorn sheep. The 2009 Devils Postpile Wetland Inventory and Condition Assessment revealed that 8.5% of the Monument is wetlands, which are also at risk of being impacted by changes in temperature and hydrologic regimes. Additional effects from changes in climate and precipitation patterns in Devils Postpile could include diminished integrity of meadows, seeps, springs, tributaries, and the San Joaquin River, thus compromising the vitality, diversity, and distribution of native species and habitats.
This Action Plan identifies steps that Devils Postpile National Monument can undertake to reduce greenhouse gas (GHG) emissions and mitigate its impact on climate change. The plan presents the Park’s emission reduction goals, and associated reduction actions to achieve the park’s goals. The park’s Environmental Management System will describe priorities and details to implement these actions, integrating emission reduction strategies into regular park operations and activities.
GHG emissions result from the combustion of fossil fuels for transportation and energy (e.g., boilers and electricity generation), the decomposition of waste and other organic matter, and the volatilization or release of gases from various other sources (e.g., fertilizers and refrigerants). At Devils Postpile National Monument, the main sources of energy are propane and wood for heating buildings, purchased electricity, gasoline for the vehicle fleet and for gas-powered equipment.
In 2008, GHG emissions within Devils Postpile National Monument totaled 46 metric tons of carbon dioxide equivalent (MTCO2E). This includes emissions from park and cooperating association operations and visitor activities, including vehicle use within the park. For perspective, a typical single-family home in the U.S. produces approximately 12 MTCO2E per year (U.S. EPA, Greenhouse Gases Equivalencies Calculator – Calculations and References; http://www.epa.gov/cleanenergy/energy-resources/calculator.html). Thus, the combined emissions from the park, its cooperating association, and visitor activities within the park are roughly equivalent to the emissions produced by four U.S. households each year.
The largest emission sectors for Devils Postpile National Monument are transportation and waste, each totaling 19 MTCO2E. The transportation sector is the combined emissions from park operations and visitor vehicles. All visitors, with some exceptions, are required to ride the shuttle bus, which significantly reduces emissions from visitor vehicles. It is estimated that the required use of the shuttle bus reduced vehicle miles traveled (VMT) into the monument by 437,779 miles in the 2009 season; this reduction in VMT decreased the CO2 emissions of our visitors by approximately 118 MTCO2E.
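As a rough cross-check on these numbers, the per-mile emission factor implied by the Action Plan's own figures can be recovered directly. The short sketch below is our calculation, not the park's; actual factors vary by vehicle fleet:

```python
# Back-of-the-envelope check of the shuttle-bus savings cited above.
VMT_AVOIDED = 437_779      # vehicle miles not driven in the 2009 season
CO2_SAVED_MT = 118         # metric tons CO2e saved, per the Action Plan

grams_per_mile = CO2_SAVED_MT * 1_000_000 / VMT_AVOIDED
print(f"Implied emission factor: {grams_per_mile:.0f} g CO2e per mile")
# -> roughly 270 g/mile, a plausible per-vehicle-mile figure
```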
The graph below, taken from our Action Plan, shows our baseline emissions in 2008 broken down into sectors.
Devils Postpile National Monument intends to reduce emissions produced by park operations as follows:
- Energy use emissions to 35 percent below 2008 levels by 2016.
- Waste emissions to 35 percent below 2008 levels by 2016 through waste diversion and reduction.
- Maintain transportation emission levels.
To read more about what we are doing at Devils Postpile National Monument about Climate Change, check out our Action Plan! |
By Anupum Pant
The length of Australia’s coastline according to two different sources is as follows:
- Year Book of Australia (1978) – 36,735 km
- Australian Handbook – 19,320 km
There is a significant difference in the numbers. In fact, one is almost double the other. So, what is really happening here? Which one is the correct data?
Actually, it depends. The correct figure could be any one of them, or none of them. It depends entirely on the precision you decide to use while measuring the coastline. This is the coastline paradox.
The coastline paradox
The coastline paradox is the counter-intuitive observation that the coastline of a landmass does not have a well-defined length. – Wikipedia
The length of the coastline depends, in simple terms, on the length of the scale you use to measure it. For example, if you use a scale several kilometers long, you will get a total length much smaller than what you would get with a shorter scale. The longer scale, as explained neatly in this picture, skips the details of the coastline.
This is exactly what happened when the two sources measured the coastline of Australia: the Year Book of Australia used a much longer scale than the Australian Handbook did. Ultimately, the great disparity in the results came down to the precision of measurement. Had they used a scale just 1 mm in length, the result would have been a whopping 132,000 km.
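The effect can be sketched with Richardson's scaling law, L(s) = M * s^(1 - D), where s is the ruler length and D is the fractal dimension of the coast. The values of D and M below are illustrative assumptions, not figures fitted to Australia's coastline:

```python
# Richardson's scaling law: the measured length grows as the ruler shrinks.
def measured_length(ruler_km: float, D: float = 1.13, M: float = 36_000) -> float:
    """Estimated coastline length (km) for a given ruler length (km)."""
    return M * ruler_km ** (1 - D)

for ruler in (500, 100, 10, 1, 0.001):   # 0.001 km = a 1-metre ruler
    print(f"ruler {ruler:>7} km -> {measured_length(ruler):>10,.0f} km")
```

With a fractal coastline there is no limit: as the ruler length approaches zero, the measured length grows without bound, which is exactly why "how long is the coastline?" has no single answer.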
Another factor is whether to take estuaries into account when measuring the length. Then, what about the little islands near the coast, and the little rocks that protrude out of the water surface? Which ones do you include in the figure? And the majestic Bunda cliffs? Probably this article from the 1970s clarifies what was included and what was not at the time the results were published.
So, the next time someone decides to test your general knowledge and asks you the length of certain country’s coastline, your answer should be – “It depends.” |
Simple Mendelian Genetics: An interactive lecture using "DNA from the Beginning"
This activity has undergone a peer review process by which submitted activities are compared to a set of criteria; activities meeting or revised to meet these criteria have been added to the collection.
This activity uses a Flash-based animation, "DNA from the Beginning," to introduce students to the researchers and experiments that led to the current model for the inheritance of genes. Each of the five modules has an animation showing one experiment, a simple multiple-choice question based on the experiment, and a more challenging multiple-choice question that requires students to extend the concepts further. The modules help students learn about doing genetic crosses, that genes come in pairs, that alleles can be dominant or recessive, and how to use Punnett squares to predict ratios of phenotypes in offspring. All five modules can be covered in two to three hours, either as an interactive lecture in class or on the students' own time.
- Learn the meaning of the terms phenotype, genotype, allele, dominant, recessive, heterozygous, homozygous, hybrid, Punnett square
- Understand how crosses of plants and animals can be used to investigate the principles of inheritance
- Use data from crosses to determine which allele is dominant
- Use data from crosses to determine whether the parents are homozygous or heterozygous
- Deduce the expected ratios in the progeny given the alleles in the parents (see the short sketch below)
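To make that last goal concrete, here is a minimal Punnett-square sketch in Python. The code is ours, not part of the SERC activity; it simply enumerates the allele combinations from two parents and counts the offspring genotypes:

```python
# A minimal Punnett-square calculator: count offspring genotypes
# from two single-gene parent genotypes such as 'Aa'.
from collections import Counter
from itertools import product

def punnett(parent1: str, parent2: str) -> Counter:
    """Return genotype counts for a monohybrid cross."""
    return Counter("".join(sorted(pair)) for pair in product(parent1, parent2))

# Cross two heterozygotes, as in Mendel's F1 x F1 experiments:
print(punnett("Aa", "Aa"))   # Counter({'Aa': 2, 'AA': 1, 'aa': 1}) -> 1:2:1
```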
Context for Use
Description and Teaching Materials
To use this in a lecture, give the students a brief explanation of the topic and warn them that there will be questions along the way. Then go through the first animation, stopping at the end to let the students attempt to answer the first question. They can do this on their own and then vote on the correct answer (using cards, IR clickers, or just a show of hands) or they can work in small groups and then different groups volunteer, or are randomly selected, to explain their answer to the class. All five modules can be done in a couple of hours or, if students are struggling, the instructor can give additional questions and help and take three or four hours to go through all five modules.
The five modules on simple Mendelian genetics are:
- Children resemble their parents.
- Genes come in pairs.
- Genes don't blend.
- Some genes are dominant.
- Genetic inheritance follows rules.
Alternatively, the animations could be assigned as homework, and class time could be used to discuss the animations and tackle more difficult problems. As the animations reveal whether students have the correct answers to the questions and allow multiple tries, this assignment can't be graded other than as pass (for getting the answer to all questions) or fail (for not doing the assignment).
Teaching Notes and Tips
References and Resources
MERLOT description of the "DNA from the Beginning" resource that is used in this activity.
Direct links to DNA from the Beginning: |
MUSCULAR SYSTEM

Kaitlyn Skidmore, Qua’Shaya Hammon, Camille Torres

The Functions of the Muscular System

The muscular system:
- Provides structure
- Aids in movement
- Production of heat
- Stability of joints

SKELETAL MUSCLE STRUCTURE

MUSCLE FIBER STRUCTURE
Neuromuscular junction- Connection between the motor neuron and the muscle fiber
Motor End Plate-The flattened end of a motor neuron that transmits neural impulses to a muscle
Neurotransmitter- A chemical messenger (here, acetylcholine) stored in many tiny vesicles in the mitochondria-rich cytoplasm at the distal ends of the motor neuron axons
Muscle Contraction- A complex interaction of organelles and molecules in which myosin binds to actin and exerts pulling action.
Actin- A protein involved in cell movement; it is present in all cells and in muscle tissue, where it plays a role in contraction.
Myosin- A muscle protein that helps muscles contract when connected with the protein actin.
The process of contraction begins with the stimulation of a motor neuron, which releases acetylcholine, causing calcium ions (Ca++) to be released. The calcium shifts troponin, making a site available for the myosin heads to connect to the actin molecule, creating the contraction. Once the myosin filament attaches to the actin, ATP releases the bond between the two; then an enzyme, acetylcholinesterase, is released to digest the acetylcholine so that it may be recycled and used once again for the next contraction.
Muscle Fatigue: a muscle exercised strenuously for a prolonged period may lose its ability to contract.
In smooth muscle, contraction differs in several ways:
1. A protein, calmodulin, binds to calcium ions (there is no troponin) and activates the contraction mechanism.
2. Most calcium diffuses into smooth muscle cells from the extracellular fluid (the sarcoplasmic reticulum is reduced).
3. Norepinephrine and acetylcholine are smooth muscle neurotransmitters.
4. Contraction is slow and sustained.
Origin is the attachment site of the muscle’s tendon to the more stationary bone. It moves very little, and a muscle normally contracts toward it. Some muscles have more than one origin; for example, the biceps brachii.
Insertion is the attachment site of the muscle’s tendon to the more movable bone. It has the greatest motion when the muscle contracts and tends to be more distal.
An antagonist muscle is one that works in opposition to the movement initiated by an agonist muscle. The antagonist muscle in a muscle set brings a limb or other anatomical part back to its initial position of rest.
A synergist muscle is a muscle which works in concert with another muscle to generate movement. These muscles can work with the agonists or prime movers which surround a joint, or the antagonistic muscles, which move in the opposite direction.
Muscular dystrophies, or MD, are a group of inherited conditions, which means they are passed down through families. They may occur in childhood or adulthood.
The doctor's exam may show:
•Abnormally curved spine (scoliosis)
•Joint contractures (clubfoot, clawhand, or others)
•Low muscle tone (hypotonia)

Tests may include:

•Heart testing - electrocardiography (ECG)
•Nerve testing - electromyography (EMG)
•Blood testing - including CPK level
•Genetic testing for some forms of muscular dystrophy
There are no known cures for the various muscular dystrophies. The goal of treatment is to control symptoms.
Summation- Occurs in the neuromuscular junction; it is the additive effect of several electrical impulses.
Recruitment- activation of additional motor units so that more muscle fibers contract at the same time
Sustained contraction- also called a tetanic contraction, occurs when there is an accumulation of acetylcholine in the neuromuscular junction.
-It occurs rapidly, giving no time for the muscle to relax between stimuli
-Remains constant in a steady state; maximal muscle contraction
Muscle Tone- continuous and partial contraction of muscles
-recruitment and summation combined |
There has been much press and controversy in recent years about global warming or climate change. Opinions range from doomsday scenarios to absolute disbelief. As a meteorologist (not climatologist), I say that the truth is somewhere in between these extremes.
Although climatologists and meteorologists debate the merits of global climate change theory, there are some facts that are not disputed. The vast majority of scientists agree that global surface temperatures have increased since the mid-1800s and that humans are adding CO2 to the atmosphere. They also agree that CO2 and other greenhouse gases have a warming effect on the planet. There is disagreement, however, on whether the warming since 1950 has been primarily caused by human activities and just how much the planet will warm during the 21st century. There is also debate over whether the warming is dangerous and whether anything can or should be done to prevent it.
Just what are greenhouse gases anyway?
Greenhouse gases are those gases that allow the atmosphere to retain heat and thus warm the earth’s surface above what it would be from sunlight alone. The most significant greenhouse gas is water vapor, followed by CO2, methane, nitrous oxide and ozone. Sunlight heats the earth’s surface, and that heat is then partially absorbed by greenhouse gases. Without these greenhouse gases the earth’s temperature would be about 33°C (59°F) colder than it is now, and our world would likely be a giant snowball.
The issue today is that some of these greenhouse gases, particularly carbon dioxide and methane, have been increasing steadily since the beginning of the industrial revolution. Over the past approximately 150 years the level of carbon dioxide, for example, has undergone a very significant increase of about 40 percent, which cannot be accounted for by natural sources alone. Carbon dioxide is a powerful greenhouse gas; however, it is present in our atmosphere in very small quantities, about 400 parts per million or 0.04%. CO2 contributes between 9% (3°C) and as much as 26% (8.6°C) of the total 33°C greenhouse effect. Given that atmospheric CO2 has increased about 40% over the past 150 years, you might well expect some significant warming. Just how much warming is what is in dispute; the best estimates I can find are that doubling CO2 would result in a warming of 0.7°C to 1.2°C alone, or about 0.3°C to 0.5°C at the current 40% increase level. Add in various feedbacks and you might double those numbers.
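Those estimates can be reproduced with the widely used logarithmic forcing approximation, in which the radiative forcing is dF = 5.35 * ln(C/C0) W/m². The sketch below assumes a no-feedback sensitivity of roughly 0.3°C per W/m², a conventional ballpark rather than a settled value:

```python
import math

def no_feedback_warming(co2_ratio: float, sensitivity: float = 0.3) -> float:
    """Warming (C) before feedbacks for a given CO2 concentration ratio."""
    forcing = 5.35 * math.log(co2_ratio)   # radiative forcing, W/m^2
    return sensitivity * forcing

print(f"40% increase: {no_feedback_warming(1.4):.2f} C")   # ~0.5 C
print(f"doubling:     {no_feedback_warming(2.0):.2f} C")   # ~1.1 C
```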
The oldest instrumental temperature record comes from Central England and goes back to the 1600s. In that record there is a sharp warming from 1690 to 1740 and another from 1820 to 1840. Although we can’t infer too much from this about global temperature variations, the Central England temperature record does illustrate the magnitude of natural climate variability.
Solar activity also varies over time, which affects global temperatures. From about 1750 to 1950, total solar irradiance is estimated to have increased by about 1 to 1.5 watts per square meter, which can account for 0.2°C to 0.3°C of warming during that 200-year period. More recently, total solar irradiance has begun to slowly decrease.
Since 1998 there appears to be a pause in global warming that is not explained by the global climate models, as they all predicted 0.2°C per decade of warming. Are the models too sensitive to CO2, or is natural variability not modeled well? Some papers suggest that the pause is not really there; however, the satellite data and the NCEP 2-meter temperature data strongly suggest that it is.
The most recent (2013) IPCC report states that “Warming of the climate system is unequivocal, and since the 1950s, many of the observed changes are unprecedented over decades to millennia. The atmosphere and ocean have warmed, the amounts of snow and ice have diminished, sea level has risen, and the concentrations of greenhouse gases have increased“.
If this is true, we might then ask what caused the strong warming noted from about 1910 to about 1945? This warming seems very similar to the more recent run-up from the late 1970s to 1998, but it occurred prior to the 1950s. In the chart below, you will note that global temperatures were falling from before 1880 until 1910, and that there have since been two periods when temperatures were nearly steady or fell slightly: the first from the mid-1940s until the late 1970s, and the second from about 1998 until today. The most recent IPCC report, in my opinion, does NOT have a convincing explanation for the large warming between 1910 and 1945, the cooling between 1945 and 1975, and the flat temperatures in the 21st century.
This suggests to me that there is more in play here than just CO2, since the earlier warming occurred prior to when the majority of CO2 was emitted. Another theory offered by Dr. William Gray suggests that the ocean has multi-decadal and multi-century cycles that may have a significant influence on global warming. Below you see a similar trend with the ocean temperatures.
The role of the Oceans
The total heat content of the oceans is enormous when compared to the atmosphere. The atmosphere contains only about 2% of earth’s heat, the land masses another 2%, and the oceans about 93%, with the remainder locked up in ice. Given the above, one must ask just what role the oceans play in global warming.
The ocean-atmospheric interactions are not fully understood, and given that the vast majority of earth’s heat is stored in the oceans, understanding those interactions is vital to any predictions regarding global warming. One of the key questions is how much the mixing of cold deep ocean water with warmer surface water varies over time, and how that affects the global temperature.
One key ocean cycle that is well documented is the Pacific Decadal Oscillation (PDO). The PDO is a warming and cooling of the Pacific Ocean over a period of two to three decades. If you plot the PDO index (see above), you will see that the two periods of strong warming (1910-1945 and 1978-1998) correspond well with the positive (warm) PDO, while the two periods of no global warming or some cooling correspond well with the periods of negative PDO.
The theory that global warming is driven by ocean cycles seems to have some validity. Increasing atmospheric CO2 does have a role here; however, the longer-term ocean cycles may have an equal or possibly stronger effect. Besides multi-decadal ocean cycles like the PDO, it is theorized that the Thermohaline Circulation (the global ocean current conveyor belt) runs in cycles that can extend out 100 years or more. As this circulation increases you get more mixing and thus colder sea surface temperatures, and when it slows down you get less mixing and therefore warmer sea surface temperatures.
The spike in global temperature for 2014-2015 reflects, in part, a brief return to the positive PDO in late 2013; as we return to a negative PDO in the coming years, the current pause in warming should continue. By 2020 we may have a better idea of how large a role ocean cycles play in atmospheric warming.
Ocean Weather Services |
A rainbow is an arc-shaped spectrum of light caused by the reflection and refraction of sunlight in water droplets.
Rainbows are caused when rays of light from the sun hit water droplets which reflect some of the light back. The water droplets are usually rain drops, but could also be spray from a waterfall, a fountain, or even fog. To see a rainbow, you must have the sun shining behind you and the water droplets in front of you.
Sunlight is made up of a spectrum of different colours that look white when we see them all mixed together. These colours get refracted and reflected at slightly different angles inside the raindrop, so they get spread out. This is why we see the familiar colours of the rainbow.
The rainbow's shape is a circle whose centre is at the anti-solar point. This is the point in the sky that is at the end of an imaginary line that passes through the sun and your head. We can usually only see a part of the rainbow's circle however, because the rest of it is below the horizon.
The amount of the rainbow circle that is visible therefore depends on how high the sun is in the sky. When the sun is very high, you may see a rainbow that only just appears above the horizon. On the other hand, if you are lucky enough to see a rainbow from a plane or the top of a mountain you might be able to see the whole circle.
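The angular radius of that circle, about 42 degrees, comes from Descartes' classic minimum-deviation geometry, a detail the article doesn't spell out. A small sketch (the refractive indices are approximate textbook values; the slight difference between red and violet is what spreads the colours):

```python
import math

def rainbow_angle(n: float) -> float:
    """Angle (degrees) from the anti-solar point for the primary bow."""
    i = math.acos(math.sqrt((n * n - 1) / 3))   # incidence at minimum deviation
    r = math.asin(math.sin(i) / n)              # refraction angle (Snell's law)
    deviation = math.degrees(math.pi + 2 * i - 4 * r)
    return 180 - deviation

print(f"red    (n = 1.331): {rainbow_angle(1.331):.1f} degrees")   # ~42.3
print(f"violet (n = 1.344): {rainbow_angle(1.344):.1f} degrees")   # ~40.5
```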
Sometimes we can see a second, larger, rainbow outside the main one. This is called a "secondary" rainbow, and it is formed by rays of light that are reflected inside the rain drop twice.
If you look carefully, you will see that the extra reflection means that the colours in the secondary rainbow are in the opposite order to the first (or "primary" rainbow). The secondary rainbow is also less bright because the light is being spread over a larger area of the sky.
The area between the two rainbows is known as 'Alexander's Band', named after Alexander of Aphrodisias, who first described its occurrence in 200 AD.

Moonbows
Although quite rare, it is possible to see a rainbow at night. If the moon is shining brightly enough, light can be reflected through water droplets in the same way a rainbow is created. Because the moon is much less bright than the sun, 'moonbows' are much fainter than daytime rainbows.
This makes the colours difficult to see so they usually look white to the human eye, but you may be able to see the colours in a photograph taken with a long exposure time as in the example to the right.
Similar in appearance to rainbows, but different in their formation, you may have observed an upside-down rainbow or perhaps a bright circle that surrounds the sun. These are not rainbows but are examples of halos, formed by ice crystals high in the atmosphere.
CBS Local — Before George Washington became the nation’s first president, the famed Revolutionary War general entered the history books by creating a revered national symbol: the Purple Heart. August 7 marks National Purple Heart Day, the 235th year since the military award’s creation.
In 1782, General Washington, commander-in-chief of the Continental Army, established a “badge for military merit.” The award was originally given for “any singularly meritorious action” and was reportedly presented to only three soldiers during the Revolutionary War. Washington’s heart-shaped badge of merit was largely forgotten until the 20th century.
In 1932, the 200th anniversary of Washington’s birth, the U.S. War Department created the “Order of the Purple Heart.” A picture of the first president was added to the Purple Heart, and it would now be awarded to members of the armed forces who had been killed or wounded in combat. The Purple Heart is also given to soldiers who have been held and mistreated as prisoners of war.
It’s estimated that over 1.8 million Purple Hearts have been awarded in the country’s history. Many were retroactively given to soldiers who fought in World War I and the Civil War. In April, President Trump awarded his first Purple Heart to Army sergeant Alvaro Barrietos at Walter Reed National Medical Center. Barrietos had suffered a severe leg injury while serving in Afghanistan.
Anaphylaxis is a set of serious symptoms triggered by a severe Type I hypersensitivity allergic reaction.
Anaphylaxis can be triggered by even tiny amounts of an allergen in people who are susceptible to it.
Anaphylaxis involves multiple systems in the body, including the respiratory (upper and lower), gastro-intestinal, skin and cardiovascular systems.
Anaphylaxis is usually diagnosed in childhood. It can also begin later in life for some people.
Anaphylaxis is most commonly caused by food, but can also be triggered by medication, insect stings, as well as other substances such as latex.
The most common foods to trigger anaphylaxis are:
Symptoms range from mild to severe. In their worst case, they are life-threatening (when symptoms are so bad that they cause anaphylactic shock).
Symptoms can be delayed or start off as slight, but rapidly develop into anaphylactic shock.
Anaphylaxis can result in some, many or all of the following symptoms:
- Abdominal pain
- Angioedema (swelling of the lips, face, neck and throat)
- Bronchospasm (constriction of the airways)
- Encephalitis (acute inflammation of the brain)
- Flushed appearance
- Hypotension (low blood pressure)
- Polyuria (passage of large volumes of urine)
- Rapid heartbeat
- Respiratory distress
- Tears (due to angioedema and stress)
- Throbbing ears
- Urticaria (hives)
- Vasodilation of arterioles (small-diameter blood vessels dilate, causing a rapid drop in blood pressure)
People have explored the topic of gender rights for many decades as women’s conventional role in modern society has drastically changed. This evolution changed how genders interact with one another and challenged conventional norms of patriarchy that went unchecked for centuries. Women’s rights in Ghana are important socially and economically. Although the country is ahead of its neighboring counterparts economically, politically and developmentally, there is still a wide gender gap that needs bridging.
Beginning of Women’s Independence
Ghana is a West African country located on the Gulf of Guinea and enjoys a tropical climate. Ghana gained independence from British colonial rule in 1957. There is no denying Ghanaian women’s contribution to the outcome of this freedom, which segued into the establishment of the National Council of Ghana Women in 1960. The council’s intent was to empower women and advance women’s rights in Ghana by developing vocational training centers and daycare facilities.
Efforts to propel women to the forefront of the country’s progression were lacking. The numbers show how far behind women were in comparison to their male counterparts. Ghana is “in the bottom 25% worldwide for women in parliament, healthy life expectancy, enrolment in tertiary education, literacy rate, and women in the professional and technical workforce.”
Enrollment in Tertiary Education
Tertiary education illustrates the gender gap in Ghana best. Looking at the reasons keeping women from pursuing higher learning exposes the patriarchal ideology woven into society. In general, keeping girls in education raises a country’s GDP. According to a report by Water.org on increasing accessibility for children in Ghana, “on a global scale, for every year a girl stays in school, her income can increase by 15-25%.”
Impact of Literacy Rates
Low literacy can be severe enough to reduce a country’s GDP, so with such devastating numbers related to the gender gap in Ghana, sinking literacy rates had to be addressed. Women in Ghana do not necessarily obtain the ability to read and write from receiving a formal education, a consequence of the rapid development of schools in low-income countries such as Ghana: the exponential growth of education systems disrupts instruction and keeps schools from reaching their full potential. However, the literacy rate for women in Ghana has made significant progress over the years. According to the World Bank’s data report in 2018, the literacy rate for females aged 15 or older is 74.47%, while the literacy rate for females aged 15 to 24 is 92.2%, increasing young girls’ independence.
Women’s Employment and Labor Force
Currently, 46.5% of the labor force in Ghana is female. However, many of these women perform domestic labor, such as in the agricultural field, without any pay, which limits their independence. Despite the rights Ghanaian women have gained since the 1960s, the country has recognized that economic growth does not necessarily reduce gender-based employment and wage gaps.
In contrast to the women who receive no pay, women who earn a subsistence wage through agriculture are at risk of significant health issues due to the physically demanding nature of the work. Ghana is a tradition-based society, which explains its gender-based roles. However, one nongovernmental organization defending women’s rights in Ghana is Womankind, which emerged in 1991 with the goal of ending violence against all women in Ghana and of increasing their social rights and political power within the government. Over the past five years, more than 600 women in Ghana have received professional training to inform their own political decisions, and 30 young girls have studied management through the organization’s secondary school leadership roles. As a result, the chances of independence and rights for women in Ghana increase.
Developing Women’s Rights in Ghana
Women and men are legally equal in Ghana, and women’s rights have made significant progress. However, multiple aspects of traditional society affect gender equality and impact women's rights. With educational empowerment, and with recognition that economic growth does not necessarily mean women receive the same job opportunities as men, gender equality will become more promising in Ghana.
– Montana Moore |
Emerald Wonders: Exploring The Science Behind Green-Hued Comets
The ethereal beauty of a green comet streaking across the night sky is a captivating sight, but what causes this unusual hue? The answer lies in the intricate dance involving sunlight, molecules, and the distinct composition of a comet’s nucleus.
Comets are essentially cosmic time capsules composed of ice, dust, and volatile compounds. As a comet follows its elliptical orbit toward the Sun, heightened solar radiation induces the sublimation of its icy core. This process liberates gases and dust, giving rise to a radiant coma (a misty, luminous shroud encircling the nucleus) and a tail that extends in the direction opposite to the Sun, driven by the solar wind.
The vibrant green color in certain comets is primarily attributed to the presence of diatomic carbon (C2) molecules within the coma. When sunlight strikes the C2 molecules, it excites them; intense ultraviolet (UV) radiation also gradually breaks them apart through photodissociation, which is why the green glow stays close to the nucleus rather than stretching down the tail.
The glow itself comes from the excited C2 molecules. As they transition from an excited state back to a lower energy state, they emit light in the so-called Swan bands, centered near 516 nanometers, squarely in the green part of the visible spectrum. (The coma also contains other carbon-bearing species, such as cyanogen (CN), which emits mostly in the violet.)
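For a sense of scale, that emission wavelength can be converted into photon energy with E = hc/λ. A quick sketch using standard physical constants and the Swan-band wavelength mentioned above:

```python
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
wavelength = 516e-9  # green Swan-band emission, m

energy_j = h * c / wavelength
print(f"{energy_j:.3e} J, or about {energy_j / 1.602e-19:.2f} eV")  # ~2.40 eV
```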
Not all comets exhibit this green coloration, as their composition varies based on factors such as their origin and history. Additionally, other gases like carbon monoxide (CO) and neutral carbon (C) can also contribute to the overall color of a comet’s coma. |
Cancer spreads by the process of metastasis, in which malignant cells break off from tumors in one part of the body and travel to other parts of the body via the bloodstream. It had been thought that when a rather large clump of cells breaks off a tumor, its progress through narrow blood vessels would not be possible due to size. Simply put, clumps of cells that were too large to fit could not spread: much like a square peg not fitting through a round hole, it was believed that lumpy clusters of cells could not access tiny capillaries.
Researchers have now found that's not always the case. A team at Harvard Medical School has published research in the Proceedings of the National Academy of Sciences showing that clusters of cells actually break apart and change shape to fit through small spaces. Individual cells from these clusters transform into oval shapes and almost march, one by one, through vessels as small as 10 micrometers across. Once through, they can reform into clusters and continue to spread the disease. The work at Harvard was done using live zebrafish implanted with clusters of cancer cells.
These days, in our ever more complex society, it's becoming increasingly important to be able to think critically, particularly for young minds that can be easily molded. Understanding cognitive biases, logical fallacies and mental models at an early age may well give them, and us, the hope of a better and brighter future. The Decision-Making Blueprint by Patrik Edblad lists 45 of these important tools, though I will focus only on several that young learners in particular struggle with.
The status quo bias and the homeostasis model work hand in hand for many students. The status quo bias is the tendency to prefer that things stay the same, while homeostasis is the state of a system that wants to maintain internal stability. For example, kids who say they don't like art or PE or math continue to maintain that viewpoint. This could come about because of confirmation bias, where people tend to favor information that confirms their existing beliefs. So if a student does poorly on a math quiz, that confirms the "fact" that they are poor at math. All of these ways of thinking run counter to the growth mindset, where effort and dedication lead to success, which is what teachers try to instill in learners. Connected to this idea is that of compounding. Most kids have trouble seeing too far into their future, but modest gains on a daily or weekly basis can achieve dramatic results, according to James Clear, author of Atomic Habits. If you memorize your times tables for 5 minutes, 3 times a week, for one week, the results might be negligible. However, if you maintain that for a whole year, then you will probably memorize them quite easily.
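Clear's compounding arithmetic is easy to verify. A minimal sketch (the 1% figure is his illustration, not a measured learning rate):

```python
# Getting 1% better (or worse) each day for a year, per Atomic Habits.
print(f"1.01**365 = {1.01 ** 365:.1f}x better after a year")       # ~37.8x
print(f"0.99**365 = {0.99 ** 365:.2f}x after a year of 1% worse")  # ~0.03x
```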
In some cases, students suffer from the Dunning-Kruger effect, the tendency to be more confident the less you know. "If you're incompetent, you can't know you're incompetent," says David Dunning. I think the simple graphic organizer K-W-L helps visualize the gaps in learning. The W, which stands for "want," is critical: what do students want to learn? Curiosity is one of the main bulwarks against remaining ignorant or incompetent.
Young kids also suffer from self-serving bias. For example, if a student gets a good mark, it's because of their effort or intelligence, but if they do poorly, it's because of the tough teacher or unfair tests. This goes hand-in-hand with the fundamental attribution error: when someone else makes a mistake, it's entirely their fault, but when you make an error, it's due to circumstance. A classic example: if you hit a student in the head with a dodgeball, it was an accident; the ball slipped out of your hand or they ducked at the inopportune time. But if they hit you in the head, it was intentional and mean-spirited. This segues aptly into Hanlon's Razor (a subset of Occam's Razor): never attribute to malice what can be explained by neglect. Yet when you listen to countless arguments between kids, malice is often the default assumption their thinking rests on. The availability bias, basing our judgments on what easily comes to mind, is a contributing factor: kids (along with adults) often keep a mental record of all the wrongs done to them. Instead of seeing a pattern of bad behavior, Hanlon's Razor would treat each incident as an isolated occurrence.
The 80/20 Principle, based on the observations of Italian economist Vilfredo Pareto in the late 1800s, has proven to be a valuable principle today: essentially, 80% of the effects come from 20% of the causes. It would be wise to figure out which 20% of effort in the classroom produces 80% of the learning. It could be the basics of the curriculum (reading, writing and math), or instilling curiosity, hard work, passion, self-directed learning and the growth mindset. Or identifying which 20% of the students need the most support. Or something entirely different in another classroom.
On a final note, of course there are caveats for all of these biases, logical fallacies and mental models. Sometimes the opposite can be true. They are not 100% accurate all the time, so treat them as principles or guidelines, not laws or commandments.
Source: The Decision-Making Blueprint, Patrik Edblad, 2019
Is ultralearning the new method of learning? Can anyone do it? How effective is it? What is it, exactly?
The author, Scott Young, begins the book with a bang: he essentially completed the equivalent of an MIT engineering degree in one year using his ultralearning strategy. The book also describes his other experiences, such as learning four languages in a year, as well as numerous stories of friends and acquaintances who have learned in this unique manner. These include Roger Craig of Jeopardy! fame and Eric Barone, who spent five years of his life creating a computer game called Stardew Valley entirely on his own. It sold over 10 million copies, and he is now a multimillionaire. Of course, not all ultralearners achieve fame and fortune, but many achieve their goals of learning something new in an accelerated and intensive way.
So what is ultralearning? It is a rigorous, self-directed strategy of learning. Right away this should tell you that it is not for the faint of heart. But it may well continue to gain momentum, for several reasons. First, Tyler Cowen, in his book Average Is Over, talks about "skill polarization," where only the top and bottom of the income spectrum remain, so more specialized, advanced skills are needed to succeed in this society (unless you want to be in the bottom layer). Second, as post-secondary education costs skyrocket, unless you need a required professional degree, this learning strategy is a cheap alternative. Finally, technology and endless resources allow self-directed learning to soar to new heights.
Young discusses nine principles of ultralearning:
Principle #1: Metalearning; First Draw a Map
First, answer the 3 W(H)s: Why, What and How. Why? Is your project instrumental (extrinsic) or intrinsic? For instrumental goals, you'll need to do extra research: find an expert and get advice. What? Get a piece of paper and write down the relevant Concepts, Facts and Procedures. How? Use benchmarking to compare what you want to learn with existing programs; then you can emphasize or exclude elements accordingly. Spend about 5-10% of your time planning (this is essential).
Principle #2: Focus: Sharpen Your Knife
Problem #1: Failing to get started (procrastinating)
First find out why you're procrastinating. The main solution is simply to start: commit to five minutes, and later use the Pomodoro Technique of 25 minutes of work followed by a 5-minute break.
Problem #2: Failing to sustain focus (Getting Distracted)
Mihaly Csikszentmihalyi pioneered the concept of flow, the sweet spot of an activity that is neither too hard nor too easy. K. Anders Ericsson, the psychologist behind deliberate practice, said flow did not occur during deliberate practice. Young feels that during ultralearning you may or may not be in the flow state, and that this is not of great importance. Chunks of about 50 minutes are ideal for learning, if possible. Try to eliminate the distractions of the environment, the task and the mind.
Problem #3: Failing to create the right kind of focus
High arousal (energy, alertness) is good for simple tasks or intense concentration activities. These can be done in a slightly noisier setting, such as a coffee shop. Complex tasks (solving math problems or writing essays) require a more relaxed kind of focus; a quiet room is a good place for that.
Principle #3: Directness: Go Straight Ahead
Directness means tying the learning as closely as possible to the actual situation or context you want to use it in. He gives the example of a recent architecture graduate, Vatsal Jaiswal, whose program focused mostly on design and theory. After submitting hundreds of resumes with zero interest, Jaiswal decided to learn two things: Revit (a current design software package) and the conventions of working architectural drawings. He then designed his own building using his newfound knowledge and skills. After applying to just two firms with his new portfolio, he was offered both jobs.
Educational psychology deals with the idea of transfer, and its failings. Psychologist Robert Haskell says the research has shown transfer of learning to be minimal at best. For example, college students who have taken a high school psychology course do no better than those who haven't taken one.
Here are some possible solutions:
Tactic #1: Project-based Learning
At the end of your project, you will have something to show for it. As well, a number of other subskills will be gained during the process.
Tactic #2: Immersive Learning
When possible, try to seek the environment or situation of the desired goal. If you are learning a language, then speak the language only in that location or with native speakers.
Tactic #3: The Flight Simulator Method
Of course, when the actual experience is impossible, then a simulation is fine. So Skype tutoring is better than flash cards.
Tactic #4: The Overkill Approach
Try to increase your directness by increasing the challenge. That means more risk-taking and putting yourself in uncomfortable situations. But if you can overcome your fears and anxieties, you will achieve much more, much quicker.
Principle #4: Drill: Attack Your Weakest Point
Young highlights the rate-determining step, the "bottleneck" in the learning process. For example, in language learning, if you can increase your vocabulary dramatically, then your ability to speak with your existing language skills expands greatly. This is where drills come in: you can simplify a skill enough to focus your cognitive resources on one area.
Direct-Then-Drill Approach: First practice the skill directly; for example, learning programming by writing software. Analyze the skill and try to isolate components to improve on and create drills. Finally, go back to direct practice and integrate what you've learned.
Tactics: First, figure out when and what to drill, and what would be of most benefit. The key is to experiment: make a hypothesis, do some drills, then get feedback. Second, design the drill to produce improvement and transfer. Finally, remember that drills can be hard, so be prepared to work hard and not quit.
Principle #5: Retrieval: Test to Learn
Psychologists Jeffrey Karpicke and Janell Blunt conducted a study in reading, examining students' choice of learning strategy: 1) review the text once; 2) review it repeatedly; 3) free recall; 4) concept mapping. The clear winner? Free recall (retrieving information without looking at the text), with students remembering almost 50% more than the other groups. Surprisingly, even when the final test was to produce a concept map, the free recall group performed better.
So if free recall is the best method of retrieval, why isn't it used more? That's because of our judgements of learning (JOLs). If we feel the learning task is easy, we believe we've learned it; on the other hand, the harder it feels, the less we think we know it.
Psychologist R.A. Bjork talks about the concept of desirable difficulty. Free recall tests tend to result in better retention than cued recall tests (multiple-choice). Giving a test immediately after learning is less effective than delaying it a bit; however, too long a wait results in the information being completely forgotten. Testing on more difficult material before you are "ready" is also more efficient. Even giving the final exam first (a pre-test) has benefits, known as the forward-testing effect. The analogy is that of laying down a road leading to a building that has yet to be built. The mechanism could also be one of attention: your mind uses its attentional resources to spot relevant information you encounter later on.
Methods of Recall:
Principle #6: Feedback: Don't Dodge the Punches
Why does famous comedian Chris Rock perform at the modest Comedy Cellar in Greenwich Village, NY, from time to time? He wants honest, sometimes brutal feedback--an essential component of ultralearning.
Feedback can be a tricky thing. In a large meta-analysis, Avraham Kluger and Angelo DeNisi found that although the overall effect of feedback was positive, in over 38% of cases the effect was negative.
There are three types of feedback: 1) outcome: an aggregate or broad-scale form, like a letter grade; 2) informational: this explains what's going wrong but not how to fix it, like an error message in coding; 3) corrective: this is the best form and it comes from a coach, mentor or teacher who can pinpoint mistakes and correct them.
How quick should feedback be? According to James A. Kulik and Chen-Lin C. Kulik, in applied studies immediate feedback is usually more effective than delayed feedback, yet in lab studies delaying the correct response was more effective.
Tactics to improve feedback:
Principle #7: Retention: Don't Fill a Leaky Bucket
Psychologist Hermann Ebbinghaus discovered the forgetting curve, an exponential decay in knowledge that is steepest right after learning. The reasons why we forget: 1) time: memories decay with time; 2) interference: new memories overwrite old ones; 3) forgotten cues: memories become inaccessible.
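Ebbinghaus-style forgetting is often modeled as exponential decay, R = e^(-t/s), where s is a stability parameter. A small sketch with an illustrative (assumed) stability of two days:

```python
import math

def retention(days: float, stability: float = 2.0) -> float:
    """Fraction of material still recallable after the given delay."""
    return math.exp(-days / stability)

for d in (0, 1, 2, 7, 30):
    print(f"day {d:>2}: {retention(d):.0%} retained")
```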
Memory mechanism #1: Spacing: Find the right gap between learning sessions. Spaced-repetition systems (SRS) are tools to help; both tech and paper tools work.
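To show the idea of growing gaps, here is a toy scheduler in the spirit of SM-2 (the algorithm behind many SRS tools). Real systems adjust the multiplier per review based on how well you answered; this fixed-ease version is a deliberate simplification:

```python
def next_interval(review_number: int, ease: float = 2.5) -> float:
    """Days to wait before reviewing a card again."""
    if review_number == 1:
        return 1.0
    if review_number == 2:
        return 6.0
    return next_interval(review_number - 1, ease) * ease

for n in range(1, 6):
    print(f"review {n}: wait {next_interval(n):.0f} days")
# -> 1, 6, 15, 38, 94 days: the gaps grow as the memory strengthens
```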
Memory mechanism #2: Proceduralization: Declarative skills often become procedural, so emphasize a core set of reusable information whose effects last longer.
Memory mechanism #3: Overlearning: If you study and practice beyond what is adequate, you can remember the material for a longer period of time. Personally, that's probably why I still remember my multiplication facts instantly even after four decades or more.
Memory mechanism #4: Mnemonics: Overall, they are rigid and specific but powerful tools that work as intermediaries to memory, though not a strong foundation to base learning efforts on.
Principle #8: Intuition: Dig Deep Before Building Up
Rule 1: Don't Give Up on Hard Problems Easily: Push yourself even beyond frustration. Even if you fail, you'll be more likely to remember the route to the solution when you eventually find it.
Rule 2: Prove Things to Understand Them: Rebecca Lawson describes the "illusion of explanatory depth": people think they know more than they do. For example, most people couldn't draw a bicycle properly or explain how it works.
Rule 3: Always Start with a Concrete Example: We move from the concrete to the abstract. Also, how we think about something matters more than how much time we spend on it, a finding known as the levels-of-processing effect.
Rule 4: Don't Fool Yourself: The Dunning-Kruger effect is when a person with limited knowledge believes he or she knows more than the experts do.
Principle #9: Experimentation: Explore Outside Your Comfort Zone
Vincent van Gogh was not a child prodigy who suddenly started painting sunflowers and stars. In fact, he started late, at 26, and tried countless styles, resources and techniques. The lesson is that experimentation is critical for ultralearning. Young considers experimentation an extension of the growth mindset, a concept from psychologist Carol Dweck: experimentation creates a plan for reaching those potential opportunities.
All in all, I think ultralearning has its place, particularly in non-school settings, with motivated and self-directed learners, although there are definitely a number of strategies and techniques that could be applied in any educational setting. The only way to know for certain how effective it is for you, of course, is to try it.
Source: Ultralearning, Scott H. Young, 2019
Benefits of curiosity:
How to maintain curiosity?
Role models are important. In an experiment with kindergarten children behind one-way glass, they saw their parents do one of three things: 1) play with objects on a table; 2) look at the table; 3) ignore the objects as they chatted. Later on, the children whose parents touched the objects did so themselves when given the opportunity.
Children want to talk; they just need the opportunity. Dinner table conversations also vary in the number of questions a child asks. Naturally, more interest and more open-ended questions result in a more curious and engaged child, compared to kids whose parents simply tell them the "answers." Interestingly, toddlers may ask up to 26 questions per hour at home but just two per hour at school. Even worse, one researcher observing grade 5 lessons often found that two hours would pass without a single expression of active interest by students. On a surprising note, students' expression of interest is directly correlated with the number of times a teacher smiles during the lesson.
In 2016, a study of 10th graders in Chile showed that the poorest children with a growth mindset performed as well as the richest children in the sample. In other words, a growth mindset may be able to compensate for many of the built-in disadvantages of being poor.
Even for someone whose genius seems almost "magical," Nobel Prize-winning physicist Richard Feynman made his groundbreaking discovery when he decided to just have fun and play and experiment with questions he personally was interested in. It was this renewed curious mindset that led him to watch a man in the Cornell cafeteria throw, spin and catch plates in the air. He connected that with an electron's orbit, which eventually led to his theory of quantum electrodynamics. Feynman also acknowledged all the blood, sweat and tears (and drudgery) needed to achieve his accomplishments.
Source: The Intelligence Trap, David Robson, 2019
This entry will focus on how to improve memory. But first, we need to know why we forget: 1) not interested; 2) not concentrating; 3) too stressed; 4) too much information; 5) poorly organized information; 6) weak links; 7) too long ago; 8) interference.
The brain operates at different frequencies in its four levels of consciousness: 1) beta (awake); 2) alpha (relaxed but alert); 3) theta (meditative/falling asleep); 4) delta (deep sleep). The alpha state is the best for learning.
Generally you only recall about 20% of new information within one or two days of learning it, because of all similar existing overlaid information. This is known as the confusion factor.
Stress is hazardous to our memory. First, it shuts down the part of the brain responsible for long-term memory. Second, after an extended period of time, it can actually destroy brain cells related to memory.
How do you remember where you put your keys? Say it out loud. "I'm putting my keys in my jacket pocket." It brings it from subconscious to conscious awareness. Make it a habit!
How can we remember better? Think of Pavlov's dogs and their conditioning with the bell and food; we just need a powerful reminder, using the mnemonic REMIND: 1) Review what to do and visualize it clearly. 2) Exaggerate the picture of the trigger event; the more bizarre, the better. 3) Maximize the recall power of the image with senses and memory visualization. 4) Install the link by repeating the association. 5) Note whether the trigger works or not. 6) Deepen the power by affirming it will work.
Source: Instant Recall, Michael Tipper, 2018
Sometimes doing nothing is actually doing something--something good for your memory, that is. New research suggests that when trying to memorize new information, taking a break, dimming the lights and sitting quietly can reap benefits. This is known as reduced interference.
Of course, this is actually nothing new. In 1900, German psychologist Georg Elias Muller and his student Alfons Pilzecker conducted experiments on memory consolidation. When studying meaningless syllables, half the group was given a six-minute break. When tested 2.5 hours later, the group with the break remembered nearly 50% of their list, compared to 28% for the group with no break.
In the early 2000s, a study was conducted by Sergio Della Sala at the University of Edinburgh and Nelson Cowan at the University of Missouri. They followed Muller and Pilzecker's original design, but with a 10-minute break, and participants with neurological injury (e.g., stroke) improved from 14% to 49% recall, similar to healthy people. More impressive results came with listening to stories and answering questions: without rest, participants could recall only 7% of the facts; with rest, this jumped to 79%.
The process is not yet fully understood, but generally memories, after encoding, are consolidated into long-term memory. This seems to occur during sleep, as communication between the hippocampus and the cortex builds and strengthens the new neural connections for later recall. Perhaps surprisingly, Lila Davachi at New York University found in 2010 that similar neural activity occurs during periods of wakeful rest, just lying down and letting your mind wander.
In terms of education, this could mean the difference between rapidly switching from one subject to the next and giving students a five-minute break just to sit and contemplate and reflect on their learning (with dimmed lights, of course).
Source: David Robson, BBC Future, February 11, 2018
Zest is enthusiasm for and enjoyment of something. John Holt wrote in 1967 that since we don't know what people will need to know in the future, "we should try to turn out people who love learning so much and learn so well that they will be able to learn whatever needs to be learned." This makes a lot of sense and is similar to lifelong learning.
The authors break down zest for learning into two bodies of knowledge: psychology of flourishing (psychology traits, theories of intelligence, positive psychology and psychology of motivation) and education for flourishing (purpose and pedagogy). So they cover a lot of varied but interrelated topics. This entry will be more of a brief summary of key ideas.
The psychological traits needed for zest can be summed up with the mnemonic OCEAN: openness, conscientiousness, extraversion, agreeableness and neuroticism. Embracing novel experiences, taking risks and taking on new challenges are a big part of zest.
Theories of intelligence related to zest include cognition, experiential learning, deliberate practice (expertise) and flow. Zestful learners find meaning, both in body and mind. Experiential learning began with John Dewey, Kurt Lewin and Jean Piaget, and the experiential learning cycle involves thinking, acting, experiencing and reflecting. The authors argue that the learning that happens in classrooms is as valuable as "real-life" learning. As much as teamwork and collaboration are held in high esteem in learning and society, deliberate practice is often best done alone, as in the case of elite athletes (Anders Ericsson). There is some tension between deliberate practice and flow, a term coined by Csikszentmihalyi. Angela Duckworth, who focuses on grit, explains that deliberate practice dominates preparation, while flow happens during performance. Cal Newport and Jordan Peterson feel that flow might keep people in thrall but not actually improving. Duckworth is more positive about flow and feels grit is necessary to stay in flow. Grit develops in four ways: 1) cultivate your interests; 2) develop a habit of daily challenge-exceeding-skill practice; 3) connect your work to a purpose beyond yourself; 4) learn to hope when all is lost!
Positive psychology studies well-being, self-actualization and flow. Similar to flow is the notion of "the zone" (Ken Robinson), when activities are "completely absorbing." The authors offer a word of caution regarding the self-esteem movement: carried to the extreme, it can lead to narcissism. They believe self-esteem based not on subjective feelings but rather on the innate value and dignity of human beings is a wiser approach. Peterson and Seligman have identified 24 character strengths, based on six virtues, related to zest: 1) wisdom and knowledge; 2) courage; 3) humanity; 4) justice; 5) temperance; 6) transcendence. Carol Dweck's growth mindset also connects to this area of positive psychology; as well, maintaining optimism, physical activity and social well-being are all important aspects of zest.
The psychology of motivation is related to performance (though there are differences, of course). Performance has been turned into an equation by Campbell and Pritchard (1976):
performance = f(aptitude level × skill level × understanding of the task × choice to expend effort × choice to persist × choice of degree of effort to expend × facilitating and inhibiting conditions not under the control of the individual)
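Because the factors in this equation multiply rather than add, a single factor near zero drags performance toward zero no matter how strong the others are. A minimal Python sketch makes the point; the factor names are paraphrased from the equation above and the 0-to-1 scores are invented purely for illustration:

```python
# Toy illustration of a multiplicative performance model (invented scores).
def performance(factors):
    # Multiply all factors together; one near-zero factor sinks the product.
    result = 1.0
    for value in factors.values():
        result *= value
    return result

learner = {
    "aptitude": 0.8,
    "skill": 0.7,
    "understanding_of_task": 0.9,
    "effort": 0.6,
    "persistence": 0.7,
    "degree_of_effort": 0.8,
    "external_conditions": 1.0,
}

print(round(performance(learner), 2))   # 0.17 -- every factor matters

learner["persistence"] = 0.0
print(round(performance(learner), 2))   # 0.0 -- no persistence, no performance
```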
Four key factors for motivation are based on two habits:
1) performing well (feedback from good performance; expectation of good performance; goals worth pursuing) and 2) finding meaning (a sense of purpose).
Maslow (1970) identified 15 characteristics of people who are able to self-actualise. The following 8 are related to zest:
Source: Zest for Learning, Bill Lucas & Ellen Spencer, 2020
Among teens aged 13-17, 1 in 3 struggled with anxiety, and 8.3% suffered from a severe impairment. What's the cause? Well, about 30-40% stems from genetics. Nearly 1 in 5 adults have suffered from an anxiety impairment in the past year, and anxious parents and their reactions and behavior towards their child can create anxiety in that child. A parent who recalls his own experience of falling off a bike, for example, will be more reluctant and fearful about his child riding, for both the child's safety and the parent's peace of mind. Unfortunately, all this protectiveness eventually leads to a child's accumulated disability: the inability to cope, adapt and function with life skills.
One example is a boy named Theo, whose separation anxiety in kindergarten was followed by worries about the future, which led to sleep issues in the elementary grades, so his parents took turns sleeping with him. Eventually, his life's needs were being catered to and met, which led to more anxiety and fragility and stress on Theo's part. One part of the solution is for both parents and child to receive treatment for anxiety disorders. There is a 77% success rate in that case, compared to 39% if only the child is treated. Another treatment method is progressive desensitization, whereby the child takes incremental steps to face her fear or anxiety. Instead of avoiding dogs all the time, walk past one; then the parent can pet one. This builds the muscle of tolerating anxiety and develops competency. It is somewhat similar to the gradual release of responsibility: I do it, we do it, you do it.
Anxiety disorders usually appear between the ages of 6 and 10. Some big trigger areas include sleeping, eating, using the bathroom and playdates. Developing social skills from K-3 is critical, as most anxiety disorders at ages 8-10 stem from social problems, not academic ones. If kids are not able to spend time in their peer groups, they will not develop the interpersonal and conflict-resolution skills needed as they get older. Then in middle school and high school, with higher academic and future educational stakes, parents may continue to provide accommodations and make excuses to account for their child's lack of sleep, cleanliness, use of tech late at night, and inability to cope academically. All the while, having responsibility and the maturity to do chores would actually aid in their overall development.
Source: Ready or Not, Madeline Levine, 2020
This book is chock-full of excellent ideas, strategies and techniques, as well as great stories, to help people both acquire good habits and eliminate bad ones. I will attempt to note as much as I possibly can and relate it to education and learning.
The title and the thrust of the book involve scaling down to the "atomic" level: to steps or blocks of action as small and manageable as possible. The author, James Clear, points out the difference between improving by 1% a day over a year (compounding to 37.78 times better) and declining by 1% a day over the same span (shrinking to 0.03). So thinking small in the short term can amount to huge gains in the future. Of course, Clear notes that people's desire for immediate gratification (the now) is stronger than their desire for delayed gratification (the future). Yet short-term gains often carry longer-lasting negative consequences, while long-term gains give you long-lasting benefits.
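Clear's compounding figures are easy to verify; here is a two-line check in Python (purely illustrative):

```python
# 1% better or 1% worse every day, compounded over a year.
print(round(1.01 ** 365, 2))  # 37.78 -- roughly 37 times better
print(round(0.99 ** 365, 2))  # 0.03  -- nearly down to zero
```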
The waiting game is difficult, something the author calls the Plateau of Latent Potential. An ice cube sits in a room getting ever warmer, with little visible change. Suddenly, at 0 degrees C, it begins to melt. If pursuing our goal feels like waiting for the ice to melt, we may be sorely disappointed. Instead of goals, focus on systems. If you're a teacher, your goal is to teach students the curriculum. Your system is the way you manage the class, assess students, and create engaging and effective lessons.
I like how he talks about the importance of identity, and not just processes and outcomes. If we start from the outcomes and move towards identity, we may never reach our core identity. Instead, think: every time you write a paragraph, you are a writer. The process is simple: 1) decide the type of person you want to be; 2) show it with small wins. Who do you want to be? Then do the small actions that demonstrate that kind of person. Are you a teacher who believes students should have a voice and a choice? Do your actions reflect that belief system?
Clear describes the four stages of habit as a feedback loop: problem (cue, craving) and solution (response, reward). For example, a student gets stuck on a math problem (cue); she wants to relieve the frustration (craving); she asks to go to the washroom (response); and she escapes from the problem, satisfying the craving and avoiding the work (reward).
How to Create a Good Habit
To break a bad habit, we do the opposite.
Law #1: Make it obvious
To begin a good habit, use an implementation intention, essentially stating specifically what you plan on doing: I will [behavior] at [time] in [location]. I will exercise for one hour at 5pm in my gym. Once this habit is established, move on to BJ Fogg's habit stacking formula: After I [current habit], I will [new habit]. For example: After I hang up my coat, I will sit down and work on the morning questions. Then add another habit, beginning a cascading effect of habits: After I finish the problem, I will hand it in. Then I will read a book at my desk.
In 1936, psychologist Kurt Lewin wrote the equation B = f(P, E), where behavior is a function of the person in their environment. A clear example is the phenomenon tested by economist Hawkins Stern in 1952, called Suggestion Impulse Buying: essentially, the more available a product or service is, the more likely it is to be bought or used. That's why more expensive brand-name items sit at eye level and at the ends of aisles. This makes clear sense when you realize that about 10 million of our 11 million sensory receptors are for vision. Many teachers have realized that fact and have designed their classrooms as environments that accentuate their values and desired behaviors. More books mean more reading. More tech means more virtual learning. More sports equipment means more active children. If you put student art or work on the walls, you're sending the message that their efforts are worthy of display.
Law #2: Make it attractive
You can use temptation bundling, which applies professor David Premack's principle that "more probable behaviors will reinforce less probable behaviors."
To add a habit that is not as desired, use the habit stacking + temptation bundling formula: After I [current habit], I will [habit I need]. After [habit I need], I will [habit I want].
For example, if you want to watch YouTube, but you have to do homework:
1. After I open my web browser, I will do 20 minutes of homework (need).
2. After I do the homework, I will watch 10 minutes of YouTube (want).
Another important facet is realizing the power of peer pressure from three groups: the close, the many, and the powerful. First, we tend to imitate the behaviors of those closest to us, our family and friends. Your chances of becoming obese are 57% greater if you have a friend who became obese. So a good idea is to join a group where your desired behavior is the normal behavior and you have something in common with the members. Second, the influence of the many (the tribe) is seen with reviews on Amazon or Yelp. Third, we copy those who are powerful or successful.
So, how do we enjoy hard habits, things we dislike doing? One way is to shift your mindset. Instead of saying you have to go to work, say you get to go to work. A man in a wheelchair was asked how it felt to be confined to it. He replied instead that he was liberated! Without it he would be bed-bound and stuck in his house. It's a shift in perspective and mindset, and counting your blessings.
Law #3: Make it easy
Habits are formed when behaviors become automatic through repetition. This is known as long-term potentiation, described by neuropsychologist Donald Hebb in 1949 and summed up in Hebb's Law: "Neurons that fire together wire together."
The most effective form of learning is practice, not planning; action, not merely being in motion. A film photography class at the University of Florida was conducted in an unusual way. Half the class would be graded on quantity (100 photos for an A, 90 for a B, 80 for a C, etc.) while the other group would be graded on "quality": they only needed to produce one photo for an A, but it had to be nearly perfect. What happened? The quantity group produced the best photos, through all their practice with lighting and composition and learning from their mistakes, while the quality group spent all their time thinking about the best photo but ultimately produced a mediocre one.
So practice, practice, practice to create a habit.
Reduce the friction involved in doing good habits. The Law of Least Effort states that between two similar options, people will choose the one requiring the least work. That's why scrolling on our phones or checking email is so commonplace: it takes little to no effort. Meal delivery services reduce the friction of shopping for groceries.
So to make your habit have less friction, prime your environment. Want to draw more? Then put your pencils and paper on your desk. Want to send a card to a friend? Have a box of cards all ready for all occasions. The opposite holds true. Want to use your phone less? Put it in a different room or tell a friend to hide it for a few hours. Out of sight, out of mind.
Clear talks about decisive moments in our day, and we have many; each individual choice leads to further choices (good or bad), which ultimately decide how good our day was. So choose wisely. Also, the Two-Minute Rule is key: a new habit should take less than two minutes to do. Start tiny. To start exercising, change into workout clothes. That's it! The next phase is to step outside, and maybe walk. Eventually, you'll get to exercising three times a week. This also prevents procrastination.
Law #4: Make it satisfying
The Cardinal Rule of Behavior Change: what is immediately rewarded is repeated; what is immediately punished is avoided. For a habit to stick, you need to feel some kind of reward, however small, immediately. You can move money into a jar to save for a vacation. You can track your habit with a measurement tool as a motivator. Just be careful that you're tracking the right thing.
There's a ton more, but that's about all I can manage. Plus I have to return it to the library.
Dr. Paula Kluth - Supporting Inclusion in Challenging Times & Creating Schools for All
Below is a summary of my notes and thoughts based on Dr. Kluth's keynote message on inclusion.
First of all, I immediately liked the live speech-to-text (real-time captions) on the screen. It was a perfect example of inclusion, as well as UDL (Universal Design for Learning), as all people could partake in the presentation despite any sound issues.
Right away, Dr. Kluth showed us a video of a young musician, Feng E, and told us to remember this one thing if nothing else: remember the chorus (of teaching); after all, kids will remember the human interaction, not necessarily the technology and all the little details. Belonging and inclusion are the key. Connection and community--that’s what kids will remember in these challenging times.
In fact, we did a brief but insightful activity where the teacher participants wrote what they remembered most from their high school days. Invariably it wasn't primarily academics, like math and chemistry; instead it was the good times together with friends, lunchtimes, PE, band, clubs, and the like. (Right now I'm listening to Feng E on YouTube and he's older and even more talented. Amazing!)
Five Big Ideas:
1. Keep “doing inclusion” - We are all doing it already, so keep it up. For example, a teacher named Sarah Brady started a virtual lunch table with a few of her students twice a week on Zoom, a form of AAC (Augmentative and Alternative Communication): the communication devices, systems, strategies and tools that replace or support natural speech.
2. Focus on inclusion as a process
Figure out how to include all students: Over, under, around or through. Find a way, make a way.
Essentially, what the speaker was saying was: do not quit until you've tried every possible avenue, and then try something else. It may take a long time to figure out the specific needs, because every child is unique and different. I love that when teachers would say "it" didn't work, Dr. Kluth would reply, "What is your 'it'?" In other words, you need to keep going until you find that "it" for that particular learner.
She also gave an example of a student whom she thought was her match. But then she realized that maybe we can't solve the problem, but we can get to a better problem: something closer to the finish line, an incremental improvement. After all, Rome wasn't built in a day, and some of your most challenging students are like gladiators, battling with you day in and day out. But eventually, there will be cracks in the armor and you will find a way to work alongside them instead of head-to-head.
Keep in mind some of these ideas:
Learners need supports, not just a space (like the classroom). Teachers and support staff need to try all supports, not just some, including ones that don't even exist yet! Technology and peer support are two ideas. Also, keep in mind that inclusion means different things for different learners, so keeping a student in the classroom without being an active participant might be defeating the purpose. If you're stuck, brainstorm a "20 ways" list with other educators. Remember, kids aren't elastic, so structures need to be.
3. Provide access to academics
Dr. Kluth showed a poignant example of a student as an adult and asked how we would have done things differently had we known her future: a woman named Kailey with Down syndrome who was now working in government. We need to presume competence in learners and then help them find it. Kids are complex and competent, so they deserve rich and meaningful learning opportunities. Let's encourage joyful learning and give our learners lots of entry points, making adaptations where and when necessary. What's really fascinating is that inclusion seems to improve overall class results.
“Sometimes being realistic isn’t being realistic.” Norman Kunc
WHAT IS POSSIBLE? Don't limit yourself.
4. Focus on all
UDL supports help one student but also benefit all. Currently social-emotional learning is being used for all students, though previously it was reserved for students with autism. UDL opens multiple pathways to success for learners.
5. Let them lead
A rising tide lifts all ships. When we give learners agency, self-determination, self-direction, self-advocacy, and choices, that is when we will truly see success. Let kids lead!
This workshop was timely and significant for teachers and students returning to school in these unsettling times. I enjoyed the idea of meaningful texts acting as windows, mirrors or sliding glass doors. Some texts allow us to see through a window into another world from a safe distance, yet still feel empathy and connection with those we come across. Other texts act as mirrors, reflecting who we are and allowing us to understand ourselves better. Finally, some texts are sliding doors, which allow us to actually step into another world, experience something life-changing, and bring that "experience" back to our real world and life.
More than ever, this year's start will need to foster shared experiences through texts. With shared connections and vocabulary, a community can be formed. This can come in the form of read alouds, heart maps/identity webs or the classroom library.
For texts to be most effective, keep several things in mind. Choice is important. If you give students a focal topic or theme, they can choose any type and level of text--poems, novels, picture books, graphic novels--and still come together to talk and share their opinions on the common theme. Relevance is another key component: the text needs to be significant to them and engage their senses and mind.
What do we as educators want learners to become? Critical, creative problem-solvers. Instead of simply extracting information, students need to be able to transact and interact with the text. What do they connect with? What are they interested in or frustrated by? What's important to them? They need to feel safe to express their opinions, ideas and viewpoints. Building the courage and the capacity to share with others is essential. Disruptive thinking interprets the book on different levels: the book, the head, and the heart. Being able to ask questions, not just answer them, is what matters most.
Source: Celine Feazel, Sept. 1, 2020, Summer Institute workshop
Daniel H. Lee
This blog will be dedicated to sharing in three areas: happenings in my classroom and school; analysis and distillation of other educators' wealth of knowledge in various texts; insights from other disciplines and areas of expertise that relate and connect with educational practices.
Lab 3: Water Quality and Availability
Suppose you were hiking along a stream or lake and became thirsty. Would it be safe to drink the water? In many cases, it wouldn’t. Contaminants affect fresh water on or beneath Earth’s surface. Though the sources of these contaminants vary, all can make water unfit to drink if they are allowed to increase beyond safe limits.
In this lab, you will:
- Analyze the test results of water samples from a variety of fresh water sources.
- Determine how to treat the water samples to make them safe to drink.
Encircled in purple stratospheric haze, Titan appears as a softly glowing sphere in this colorized image taken one day after Cassini’s first flyby of that moon.
This image shows two thin haze layers. The outer haze layer is detached and appears to float high in the atmosphere. Because of its thinness, the high haze layer is best seen at the moon’s limb.
The image was taken using a spectral filter sensitive to wavelengths of ultraviolet light centered at 338 nanometers. The image has been falsely colored: The globe of Titan retains the pale orange hue our eyes usually see, and both the main atmospheric haze and the thin detached layer have been brightened and given a purple color to enhance their visibility.
The best possible observations of the detached layer are made in ultraviolet light because the small haze particles which populate this part of Titan’s upper atmosphere scatter short wavelengths more efficiently than longer visible or infrared wavelengths.
Images like this one reveal some of the key steps in the formation and evolution of Titan’s haze. The process is thought to begin in the high atmosphere, at altitudes above 400 kilometers (250 miles), where ultraviolet light breaks down methane and nitrogen molecules. The products are believed to react to form more complex organic molecules containing carbon, hydrogen and nitrogen that can combine to form the very small particles seen as haze. The bottom of the detached haze layer is a few hundred kilometers above the surface and is about 120 kilometers (75 miles) thick.
The image was taken with the narrow angle camera on July 3, 2004, from a distance of about 789,000 kilometers (491,000 miles) from Titan and at a Sun-Titan-spacecraft, or phase, angle of 114 degrees. The image scale is 4.7 kilometers (2.9 miles) per pixel.
The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Cassini-Huygens mission for NASA’s Office of Space Science, Washington, D.C. The imaging team is based at the Space Science Institute, Boulder, Colorado.
Designing a program around food fermentation can offer a good opportunity to introduce basic principles of microbiology and their application in food preservation.
“Yogurt making”: a possible DIY fermentation process
A recent publication in the Journal of Microbiology and Biology Education presented a protocol for using “yogurt making as a tool to understand the food fermentation process” with non-science participants. Developed and tested among Indonesian families, it can be adapted for schools to let children assess the fermentation process and go beyond a well-known commercial product.
The approach shows that yogurt fermentation can be used as an active learning tool in which participants will learn the principles of aseptic technique, hygiene and sanitation (of kitchen equipment, preparation of substrate and bacterial culture) and the control of fermentation.
Inspired by the publication, and using a commercial yogurt-maker, you may need:
- Different types of milk, as the authors suggest dividing participants into several groups. Each group will prepare its own yogurt recipe with a specific substrate:
- Milk A: Dissolve 75 g of skim milk powder into 500 ml boiled water and add 20 g of sugar.
- Milk B: Full-cream pasteurized milk
- Milk C: UHT full cream milk
- Milk D: UHT low fat milk
- Commercial yogurt starter culture: Lactobacillus delbrueckii subsp. bulgaricus and Streptococcus thermophilus
The fermentation process will follow the yogurt-maker’s directions for use, but with the several substrates (different types of milk).
At the end of the fermentation process, participants can be invited to analyze the pH (with universal pH indicator strips), observe the texture, color and aroma, and finally taste the yogurts.
The sanitation of equipment and participants is essential, as yogurt fermentation requires good personal hygiene and sanitation conditions.
Yogurt is a fermented milk that contains lactic acid bacteria (LAB) and provides nutritional benefits to human health. Certain LAB are typically used in yogurt fermentation, Lactobacillus delbrueckii subsp. bulgaricus and Streptococcus thermophilus. They convert the milk carbohydrate, lactose, into lactic acid. The combination of these LAB in yogurt fermentation contributes to the acidity, taste and texture of the final product.
Teaching your child the importance of recycling goes a long way toward helping to preserve the planet.
The future of recycling lies in the hands of our youth. It’s up to us to empower our children by educating them about the positive effect that recycling will have on our environment, and how they can make a difference.
The Glass Recycling Company suggests these tips to educate your family about going green:
- When buying packaged products, think about how you can reuse or recycle the packaging. Glass is 100% recyclable and can be recycled again and again without losing its purity or strength.
- Plan your trips to glass banks so it fits into your daily schedule. Take the kids along and show them how and where to put their bottles.
- Explain to your kids what is recyclable and what’s not. Glass containers like those used for food and drinks can be recycled.
- Other types of glass like window glass, ovenware, Pyrex, crystal, and light bulbs are manufactured through a different process and cannot be recycled through South Africa’s glass manufacturers.
- Reuse old containers. They are great for storing paint, crayons, buttons and arts and crafts tools such as paintbrushes, rulers, and more.
How to make recycling fun for kids
- Invent games that involve recycling. When sorting recyclables with your kids, let them toss items such as plastic bottles into a bin from a short distance to see if they can score a ‘basket’.
- Do recyclable crafts with your kids. You can use printer paper, cardboard boxes, glass and plastic bottles, cans, old clothing, newspaper and wood scraps for your projects. You can make masks from the cardboard, aeroplanes from the paper, and animals from the cans and plastic bottles.
Listen to Maria Pränting explain the mechanisms behind antibiotic resistance and the spread of resistant bacteria.
Emergence of resistance
The picture below illustrates the mechanisms through which bacteria can become resistant to antibiotics:
1. alteration of the target site for the antibiotic
2. production of enzymes that inactivate the antibiotic
3. alterations in the cell membrane resulting in decreased permeability and thus decreased uptake of the antibiotic
4. removal of the antibiotic by active transport out of the bacterial cell through so-called efflux pumps
5. use of alternative pathways, which compensate for the action inhibited by the antibiotic
Spread of resistant bacteria
Resistant bacteria spread via many routes. Poor hygiene, poor sanitation and poor infection control are three interconnected key factors contributing to the spread of resistant bacteria in health care facilities as well as in the community. Bacteria know no boundaries and international traveling and trade help disseminate resistant bacteria across the world. This contributes to the complexity of the antibiotic resistance problem and underpins the fact that it is a global issue. Here follows an overview with descriptions of some of the ways resistant bacteria can spread.
Within health-care facilities
Health care facilities are hot spots for resistant bacteria, since many sick people are in close proximity to each other and antibiotic usage is high, resulting in the selection and spread of resistant strains. Poor hygiene practices may facilitate the spread of resistant bacteria via the hands or clothes of doctors, nurses and other health care staff, as well as via patients or visitors. Other risk factors include crowded wards, few isolation rooms, and improper cleaning of the facilities and the instruments used in patient care.
Between people in the community
Bacteria can spread from one person to another through direct contact between people. Transmission can also occur indirectly, for example when someone coughs. If a person contaminates a surface (such as a doorknob) with bacteria, these bacteria can be transferred to another person who touches the same surface. Good hand hygiene is important to limit spread of pathogens and the risk of becoming a carrier of resistant bacteria. Still, even with good hygiene practices, bacteria are a normal part of our surroundings that we will be continuously exposed to.
International travellers help spread resistant bacteria across the world. On any given day, several million people catch a flight, and anyone carrying a resistant bacterium brings it along. Many studies have demonstrated that a large proportion of international travellers acquire resistant bacteria during visits to areas with a high prevalence of resistant bacteria. In some studies, more than 70% of people travelling to certain geographical areas were colonized with multidrug-resistant ESBL-producing bacteria upon return. The risk is even higher for hospitalized patients, who are exposed to additional risk factors such as surgery and antibiotic therapy. Several hospital outbreaks have originated from patients transferred from another hospital with a higher prevalence of resistant bacteria.
From animals to humans and from humans to animals
Bacteria can spread from animals to humans, but also the other way around. Many people come in close contact with animals in their daily life as we keep them as pets in our homes or raise animals for food. Resistant bacteria are common in livestock and there are several examples of how farmers and their families have become colonized with the same resistant bacteria as their animals. Likewise, livestock veterinarians are at risk of carrying livestock-associated resistant bacteria. The bacteria may then spread further in society. Resistant bacteria are also found in wildlife and migratory birds but this probably has a limited impact on the increasing rates of resistance in humans.
In many animal farms, antibiotics are used in large quantities to prevent and treat infections as well as for growth promotion, and therefore many farm animals have become colonized with antibiotic-resistant bacteria. During slaughter or when processing the meat, these bacteria can potentially be transferred to the product. Furthermore, fruits and vegetables can become contaminated with animal feces directly from the animals or via contaminated water that is used for irrigation of the crops. Eating food contaminated with bacteria may directly cause an infection, such as diarrhea caused by Salmonella, Campylobacter and E. coli. Resistant bacterial strains, or genes encoding resistance, may also be transferred to the normal flora of the consumer without causing an infection. The resistant bacteria can potentially cause infections later on and spread to other people.
Resistant bacteria are frequently detected in chicken and meat. However, the impact this has on human health is currently not known and may differ in different parts of the world. Some studies demonstrate similarities between the antibiotic-resistance genes found in meat and those found in human pathogens, while other studies have not seen this connection. More research is needed to determine the scale of the problem. Proper cooking and handling of food helps to decrease spread of infections as well as resistant bacteria.
Bacteria can spread via drinking water or water supplies that are used for irrigation, washing cooking utensils or for hygienic purposes. There are many ways resistant bacteria can end up in the water; release of untreated waste from animals and humans is one important source. Resistant bacteria have been found in many water sources such as drinking wells, rivers and effluents from wastewater treatment plants. Several bacterial diseases can spread via contaminated water, including typhoid fever and cholera.
© Uppsala University
Article 1: Sociolinguistic Factors in the History of American Negro Dialects, by William A. Stewart.
Questions for further study:
1. Stewart suggests that sensitivity to language variety differences (at least in the USA) has, in part, been the result of “the growth of a cadre of specialists in the teaching of English as a second language”. Can you elaborate on this connection? Why would a group of second or foreign language teachers positively influence a country's attitudes towards dialect or variety differences?
The problem of dialect differences is handled by specialists in the teaching of English to speakers of other languages. This is so because Nonstandard Negro English shows a grammatical system which must be treated as a foreign language. These specialists are aware of the most important patterns and features of the English language that they deem problematic for foreign speakers, and therefore they know how to cope with those dialect problems that are similar for teaching English as a foreign language and for nonstandard dialects: for instance, grammatical patterns or the correct pronunciation of the language. These teachers are the ones who really know about the problems that arise from the differences between two languages, one of them (standard English) being very important for social life and for economic success. Therefore it is positive for dialect speakers, as for foreign speakers, to master the language used by most people of this country.
4. Summarize the reasoning Stewart offers for the resentment (especially among minority group leaders) felt at having social class and ethnic background linguistic correlates pointed out.
“Linguists are finding their observations on language variation among the disadvantaged received with uneasiness and even hostility by many community leaders. The reason for this is undoubtedly that the accurate description of dialect variation in American communities (particularly in urban centers) is turning out to show a disturbing correlation between language behavior on the one hand and socio-economic and ethnic stratification on the other...”
This means that the differences within the English language in America, at least between Negro dialects and white standard language, correlate with the social and economic situation of the speaker and, less acknowledged but probably most important, with race.
7. The earlier plantation creole in the United States disappeared (except, perhaps, in the island areas where Gullah is spoken) under a number of influences which led to “decreolization”. List the factors which contributed to the drift of the creole towards the standard and nonstandard forms around it.
I can clearly identify the three most important factors in the drift of the creole towards the standard and nonstandard forms:
-The breakdown of the plantation system due to the abolition of slavery.
- The influence of the nonstandard dialects of whites with whom they or their ancestors came into contact.
- The prospect of social mobility in modern American society and an improvement in their economic situation.
Article 2: Some Illustrative Features of Black English, by Walt Wolfram.
Dialect geography, as its name implies, is mainly concerned with regional variants, chiefly in the areas of vocabulary and pronunciation. What are the factors that need to be taken into account in the study of Black English? Is the term social dialect applicable, and why?
Factors to be taken into account in the study of Black English:
a) The linguistic history of Black English. This is independent and very different from the history of the rest of American English.
b) A significant factor in the development and maintenance of Black English: the social distance from White English.
c) The dialects close to Black English which influence it, such as White Southern dialects, Standard English and Spanish-influenced English. Very important too is the influence of a Caribbean Creole language, which may have been spoken by the early plantation slaves.
d) The social class factors in the development of Black English. Sex: within the different social classes of Negroes, women tend to approximate the standard English norm more than men do. Age: adults generally use socially stigmatized features less than teenagers. Race (in relation to language acquisition): a Black child who has predominantly White peers will speak like his peers, not his parents.
e) Its grammatical system, though it can be considered very near and similar to Standard English: “... Nonstandard Negro English will show a grammatical system which must be treated as a foreign language.” (Extracted from Marvin D. Loflin, “A Teaching Problem in Nonstandard Negro English”.)
The term social dialect is applicable from the point of view that it is a dialect derived from the English language and used by a specific part of American society, Black people. But it is probably more than that, since it has a grammatical system very different from that of the English language, and therefore it can be considered an independent language.
Article 3: Black English/Ebonics: What it be like?, by Geneva Smitherman.
The term Ebonics is over two decades old. It was coined by a group of Black scholars as a new way of talking about the language of African slave descendants. Ebonics refers to linguistic and paralinguistic features which, on a concentric continuum, represent the communicative competence of the West African, Caribbean, and United States slave descendants of African origin. “Ebonics” represented a way to begin repairing the psycholinguistically maimed psyche of Blacks in America.
What gives Black Language its distinctiveness is the nuanced meanings of some English words, the pronunciations, the way in which words are combined to form grammatical statements, and the communicative practices of the USEB-speaking community. In short, USEB may be thought of as the Africanization of American English.
Patterns of USEB: a) aspectual be (denotes iterativity); b) stressed been (She been married); c) multiple negation (Don't nobody don't know God can't tell me nothing!); d) adjacency/context in possessives (Sista nose); e) copula absence; f) camouflaged and other unique lexical forms.
Styles of speaking: a) call-response; b) tonal semantics; c) narrativizing; d) proverb use/proverbializing; e) signification/signifying; f) the Dozens.
Signification is a form of ritualized insult in which a speaker puts down, talks about, needles (signifies on) other speakers. The speaker deploys exaggeration, irony, and indirection as a way of saying something on two different levels at once. Characteristics of signifying: indirection; metaphorical-imagistic; humorous, ironic; rhythmic fluency; teachy, but not preachy; directed at a person present in the speech situation; punning, play on words; introduction of the semantically or logically unexpected.
There are two types of Signification. One type is leveled at a person's mother: “The Dozens”. The second type is aimed at a person, action, or thing, either just for fun or for corrective criticism. Today, the two types of Signification are being conflated under a more general form of discourse, referred to as “snappin”.
To speak Ebonics is to assume the cultural legacy of U.S. slave descendants of African origin. It symbolizes a new way of talking the walk about language and liberatory education for African Americans.
I have found this study of the culture and the language of Black people in America, and of how languages influence one another, very interesting. I never thought that Black English had its own grammatical system, different from American Standard English. I thought that Black English would be a kind of regional accent of English spoken by Black people, just as happens in Spain with the Spanish language and the people of Andalusia, who speak with a different accent. So it has been a little surprising for me, and a very interesting topic to learn about.
Submitted by: Carlos Duro Sanchez
Forests underpin life on Earth. Globally, around 2.4 billion people or one third of the entire human population depend directly on forests for wood for cooking their daily meals. And we all depend on forests in less tangible ways to provide the services that support life such as food, oxygen and pollination. Forested watersheds provide three quarters of accessible freshwater, and forests can provide 30% of the greenhouse gas mitigation required by 2030 to keep global warming below 2°C. We need to find sustainable ways to manage forests, more urgently than ever, to nurture rather than undermine the life-supporting services that forests provide and on which life on this planet depends – services that will play a key role in our resilience during and after the Covid-19 pandemic by supporting life and livelihoods in so many different ways.
But to do so in a targeted manner, we need to understand the status of our forests. In mid-May 2020, the Key Findings of the Global Forest Resources Assessment were published, quantifying the global trends in terms of forest extent and rates of loss. This enables us to gauge how effective efforts made so far have been, and how far we still have to go to secure forests and their ecosystem services for current and future generations.
To read the full story, visit UN-REDD.org.
Webinar Event: Vocabulary Development
13 August 2020 @ 3:30 pm - 4:30 pm AEST. Free.
Vocabulary is the understanding and use of words. Having a strong vocabulary bank is essential for students to access the curriculum. Vocabulary is important for making connections and following instructions in the classroom. It is also a foundational skill for future success in literacy. This webinar aims to discuss vocabulary and provides strategies to support students in the classroom.
The key learning objectives in this webinar include:
- Overview of Vocabulary
- Identifying vocabulary difficulties
- Vocabulary development strategies to support students
- Exploring a range of vocabulary interventions
This webinar will be recorded. If you are not able to make the live event, please RSVP and a link to the recording will be sent to you following the event.
Presenters: Alexandra H, Speech Language Pathologist & Grace E, Speech Language Pathologist, from SALDA School Support Services.
Why do Thunderstorms Often Occur on Summer Afternoons? Credit: NOAA National Severe Storms Laboratory / The Weather Prediction.com
Thunderstorm on May 26th, 2012 in Alliers, France
Thunderstorms are a weather phenomenon that occur and develop due to high amounts of moisture in the air combined with warm, rising air. These storms typically last less than thirty minutes and occur within a 15-mile radius. According to NOAA, nearly 100,000 thunderstorms occur in the United States each year, with ten percent of them becoming severe thunderstorms. Thunderstorms occur most often in the afternoon and evening during the spring and summer months, and bring with them thunder, lightning, heavy rain, and the potential risk of flash flooding.
A thunderstorm forms when warm, moist air becomes unstable and begins rising. As this warm air rises, the water vapor within it cools and condenses, releasing latent heat and creating a cloud that grows until it forms a towering cumulonimbus. Ice particles within the cloud carry both positive and negative charges, and lightning is created when leaders extend from these charge regions within the cloud. These charged channels connect with opposing charges of electricity rising up from the ground, creating a strong electric discharge. Thunder follows lightning because the lightning heats the surrounding air, causing it to expand rapidly; this expansion creates sound waves heard as a loud crack after the lightning strikes.
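As an aside not in the original article: since light from the flash reaches you almost instantly while sound travels at roughly 343 m/s in warm air, the delay between lightning and thunder gives a rough distance to the storm. A small illustrative sketch:

```python
# Estimate storm distance from the flash-to-thunder delay.
# Assumes sound travels about 343 m/s in warm air.
SPEED_OF_SOUND_M_PER_S = 343.0

def storm_distance_km(delay_seconds):
    return delay_seconds * SPEED_OF_SOUND_M_PER_S / 1000.0

for delay in (3, 6, 9):
    print(f"{delay} s delay -> about {storm_distance_km(delay):.1f} km away")
```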
Thunderstorms occur more often in the afternoon and evening because, in order for there to be high amounts of moisture in the air along with warm rising air, there must be instability in the atmosphere. During the warmer months the humidity is much higher, and on days with fewer clouds in the sky temperatures can rise to very high values. Because of this daytime heating, the late afternoon and evening hours are when radiational heating and instability are at their highest points, and thus there is a steep temperature gradient between the mid-levels and the boundary layer. This daytime heating is often strong enough to completely overcome significant capping inversions, releasing Convective Available Potential Energy (CAPE) that can spur even severe thunderstorms. The intense heating that can occur during the daytime in the spring and summer months is very conducive to afternoon and evening thunderstorms. As the summer comes to a close, be sure to be aware of the potential for afternoon thunderstorms and the risks that come along with them.
©2019 Weather Forecaster Christina Talamo |
For starters, skin is responsible for protecting our inner organs and it also carries out a number of bodily functions that help us to maintain a healthy life. But that’s not all! There is much more to know about skin and this article on 50 interesting skin facts intends to provide you with some really fascinating information about this unique organ in our body.
Interesting Human Skin Facts: 1-10
1. Skin is actually an organ. In fact, it is the largest organ in our body.
2. Just like every other organ in our body, the skin also has a set of very specialized functions that no other organ can perform.
3. The skin helps to regulate body temperature by detecting cold and hot.
4. It is responsible for keeping our internal organs, muscles and bones protected from outside diseases and infections.
5. Our skin is blessed with the ability to renew itself. The entire skin is renewed in 28 days.
6. In order to renew itself, the skin needs to shed dead cells. It does so at the rate of 30,000 to 40,000 dead cells per minute.
7. An average human sheds 9 pounds of dead skin cells in one year!
8. The skin of any average human has nearly 300 million cells. There are nearly 19 million cells per square inch of skin.
9. Every square inch of skin also holds up to 300 sweat glands.
10. An average adult human has about 21 square feet of skin. The entire skin weighs 9 lbs (pounds).
Interesting Skin Facts: 11-20
11. 11 miles of blood vessels run throughout the skin to provide oxygen and blood to the skin cells.
12. Dead skin cells from humans make up 1 billion tons of the total dust found in Earth’s atmosphere.
13. 15% of the total body weight comes from the skin.
14. There are 5 different types of receptors found in skin. These receptors are responsible for responding to touch and pain.
15. Of the total dust found in a home, half is actually made of dead skin cells.
16. Human skin is home to more than a 1000 species of bacteria.
17. When exposed to pressure or friction repeatedly, the skin can form additional toughness and thickness, known as a callus.
18. Changes in skin conditions can sometimes mean changes in health conditions as a whole and hence, any skin condition should be taken seriously.
19. Some nerves in human skin don’t connect directly with the brain. Instead, they connect with the muscles and send signals directly through the spinal cord. This allows quicker transmission of signals, letting us respond more quickly to stimuli like pain and heat.
20. The thickest skin on the human body is 1.4 mm deep and is found on the feet. The thinnest is found on the eyelids, where skin is only 0.02 mm deep.
Interesting Human Skin Facts: 21-30
21. Severely damaged skin often attempts to heal all by itself by the formation of scar tissue. The scar tissue is way different from the normal skin in the sense that it will not have sweat glands and hair.
22. Babies take up to 6 months to develop permanent skin tone.
23. In hot weather conditions, sweat glands in skin can produce up to 3 gallons of sweat in a day.
24. The tip of the penis, the eardrums, the margins of the lips and the nail beds do not sweat.
25. A special type of sweat gland known as the apocrine sweat gland can be found in the anus, genitals and armpits. These glands produce a fatty secretion that causes body odor.
26. It is not that the fatty secretion has a smell of its own. It is the bacteria present on the skin that feed on the fatty secretion and digest it. The byproduct left after digestion is what causes the odor.
27. Fingerprints (fine ridges seen on the skin of our fingers) may never develop in some people. This is actually a result of two rare genetic defects known as dermatopathia pigmentosa reticularis and Naegeli syndrome.
28. Fingerprints are responsible for helping with a better grip on objects. These fine ridges help to increase friction and thereby helping with better grip.
29. Touch receptors of the skin (known as Meissner corpuscles) are very sensitive, but they are most sensitive on the tongue, lips, palms, fingertips, clitoris, penis and nipples. The touch receptors in these areas respond to slight pressure of just 20 milligrams (about the weight of a common housefly).
30. The visual cortex of the brain in blind people is rewired so that it responds to sound and touch stimuli. Thus, blind people literally ‘see’ the world using hearing and touch.
Interesting Human Skin Facts: 31-40
31. Each square inch of human skin contains 50 million bacteria. On oily surfaces like the face, the number can ramp up to 500 million.
32. The scientific name of skin is Cutaneous Membrane.
33. Human skin has a pigment known as melanin. The more the melanin content, the darker is the skin. Lighter skin means melanin content is low.
34. Skin is made up of three different layers. The outermost layer is known as epidermis, the middle layer is known as dermis and the innermost layer is known as subcutis.
35. 4 acres of skin tissue can be grown in the laboratory from 1 sq. inch of foreskin from a circumcised young boy.
36. Melanin is produced and distributed by tentacle-shaped cells known as melanocytes.
37. The number of melanocytes is the same for all humans, but the amount of melanin produced by these cells differs.
38. Again, there are two different types of melanin – Eumelanin and Pheomelanin.
39. The Eumelanin is either black or dark brown in color and the Pheomelanin is either red or yellow in color.
40. Some people may not have melanocytes. This is a medical condition known as albinism. Only 1 out of 110,000 people suffers from albinism.
Interesting Human Skin Facts: 41-50
41. Overproduction of the cells lining the sweat glands leads to acne formation.
42. One out of every hundred adult men suffers from acne, compared to one out of every twenty adult women.
43. Four out of every five teenagers suffer from some form of acne.
44. There are special glands in our ears that produce wax. These glands are actually specialized sweat glands.
45. Between our toes, there are around 14 different fungi species living on the skin.
46. The skin’s outer layer remains healthy and moist because of a special kind of natural fat known as a lipid. Alcohols and detergents are known to destroy lipids.
47. Every hair we see on our skin has a small muscle attached to it, known as the arrector pili. In response to stimuli such as a heightened emotional state or cold, this muscle makes the hair rise and stand; commonly we call this goose bumps.
48. Every inch of skin has its own stretchiness and strength designed especially for its position. So, the skin that you see on your belly is very different in strength and elasticity compared to skin that you see on your knuckles.
49. Staphylococcal bacteria are responsible for producing skin boils. These bacteria enter the skin through very tiny cuts and travel all the way down to the hair follicles in the dermis (the second layer of the skin), resulting in boils.
50. Artificial skin has been produced by INTEGRA using silicone and bovine collagen. This artificial skin can be used for complete skin replacement.
Learn to recognize and use unity of effect, a writing method that can produce an emotional response in your readers.
Read on to get a step-by-step introduction to Edgar Allan Poe’s description of this writing strategy.
Definition of Unity of Effect
If you’ve read a poem or short story that really left you with a chill or a tear in your eye, then you understand how good writing can affect people’s emotions. Edgar Allan Poe believed that you could recognize certain common features of writing that leave an emotional impact on readers. Not only that, but he outlined an approach to writing that can help you touch your readers’ emotions as well. This approach is the unity of effect.
Poe was an American author and literary critic of the 19th century. He is generally considered a mystery, horror, and science fiction writer, and is well known for both poems and short stories. As one of the American Romantic writers (which is different from a romance writer), he strove to write pieces that evoked beauty and brought forth emotion. To that end, he promoted what is called the unity of effect in writing. Poe wrote about the unity of effect in his essay, ‘The Philosophy of Composition.’ He makes clear in this work that despite the romantic view of writers as being struck by inspiration, most must have some kind of structure in place for how they go about composing their work. The unity of effect is supposedly a method that he used in his own writing. Put simply, it is determining what effect you would like to have on a reader and carrying that effect through all the elements of your story or poem. The effect on the reader is, essentially, the purpose of your piece. In fact, in ‘The Philosophy of Composition,’ Poe quite clearly outlines the method he used to write his famous poem, ‘The Raven.’ The main thing to remember about this method is that it requires consistency. Poe, however, felt this unity was difficult, if not impossible, to achieve with longer works because of the discontinuity of having to put a book down in the middle of reading.
Achieving Unity of Effect
Think of the unity of effect like a target. Each concentric circle focuses your writing down to a point: the bullseye. When every aspect of your writing is focused down to a consistent point, then your piece hits the bullseye. Employing the unity of effect focuses your writing to a central effect, like aiming at a target. Poe laid out the different parts of the unity of effect that a writer would use in order to maintain the desired emotions in readers:
- Determine the ending. You should know where you are going with your writing and keep the ending in mind as you write the piece. Poe emphasized that the dénouement, or the resolution of the plot, should determine all else that happens in your story.
- Determine extent. This means that you should know ahead of time about how long your finished piece should take your reader to complete. You are choosing the length of your writing, and according to Poe, the unity of effect is best achieved in a piece that can be read in one sitting.
- Determine the effect. The effect is the atmosphere of your writing. You consciously decide upon the emotional impression you want your piece to have on readers.
- Determine tone.
Poe argues for a consistent tone throughout a piece of writing. This involves carefully choosing your words to portray excitement or melancholy, for instance, in your overall piece.
- Determine and establish your artistry. Poe refers here to utilizing techniques that work with your desired effect, such as foreshadowing in a mystery. He gives the example of the refrain he used in ‘The Raven.’ The repetition of the refrain ‘Nevermore’ set a rhythm to his poem and kept readers returning to the mood.
- Invoke originality. Although he utilized a popular refrain in ‘The Raven,’ Poe encourages writers to experiment with such things as meter and stanzas. If you plan to use Poe’s techniques, look to literary traditions, but don’t be afraid to do something bold and original.
The unity of effect is determining what effect you would like to have on a reader and carrying that effect through all the elements of your story or poem. The effect on the reader is essentially the purpose of your piece.
Edgar Allan Poe wrote about the unity of effect in his essay, ‘The Philosophy of Composition’. The six elements of the unity of effect are:
- Determine the ending.
- Determine extent.
- Determine the effect.
- Determine tone.
- Determine and establish your artistry.
- Invoke originality. |
Remote sensing is the science and art of obtaining information about an object, area or phenomenon through the analysis of data acquired by a device that is not in contact with the object, area or phenomenon under investigation. If the information is collected through satellites, it is called Satellite Remote Sensing (SRS). Remote sensing, along with its allied technologies, has become an industry in itself. The basic principle of remote sensing is based on the interaction of electromagnetic radiation with the atmosphere and the earth. Electromagnetic radiation reflected or emitted from an object is the usual source of remote sensing data; however, other media, such as gravity or magnetic fields, can also be utilized. The characteristics of objects can be determined using the electromagnetic radiation they reflect or emit, and each object has unique and different characteristics of reflection or emission under different environmental conditions.
Remote sensing is the technology used to identify and understand the objects under different environmental conditions. A sensor receives the electromagnetic radiation emitted and reflected by various earth surface features. These received radiations are analyzed and converted into information about the object under investigation. Therefore, remote sensing offers an efficient and reliable means of collecting the information required for various purposes. Due to its unique ability to furnish synoptic views of larger areas, satellite remote sensing is being effectively utilized in several areas for sustainable agricultural development and management. These areas include cropping system analysis; agro-ecological zonation; quantitative assessment of soil carbon dynamics and land productivity; soil erosion inventory; integrated agricultural drought assessment and management.
Pakistan Space and Upper Atmosphere Research Commission (SUPARCO) is committed to the peaceful uses of space and space technologies. It has been pursuing Satellite Remote Sensing (SRS) and Geographic Information System (GIS) application programs for the past 30 years. It has established the requisite facilities and developed necessary expertise. SUPARCO acquires and archives SRS data from French SPOT series of satellites. The satellite data and related services are provided to different users within and outside the country. Information pertaining to status of crops is acquired through satellite remote sensing. This information and its calibration through ground truth surveys, provides timely and accurate data on crop acreage estimates, yield forecasts, early warning and crop stress. This helps in food security and better management of agriculture sector in Pakistan.
There are two main crop growing seasons in Pakistan: the Kharif and the Rabi. By tradition, crops are counted in their season of harvest. All crops harvested in or around spring are called Rabi, as the word literally means spring. The crops harvested in autumn, by the same analogy, are called Kharif crops. Similarly, crops are counted in the financial year of their harvest. Major Rabi crops include wheat, brassica, gram, fodders and others. The Kharif crops include sugarcane, cotton, rice, maize, fodders and legumes. The crop calendar plays a vital role in determining the satellite data acquisition schedule for crop area, yield and production estimation in Pakistan.
Satellite data vary in terms of their sensitivity to ground features with respect to sensors, coverage in a single scene, spectral, spatial and temporal resolution, and topographic effects. The SPOT satellite data cover an area of 3600 km² to 4800 km² in a single observation with different acquisition angles. Spatial resolution is an important parameter of satellite data for the estimation of crop area. Various land surface features such as field boundaries, roads and canals can be differentiated once high-resolution satellite data are available. SUPARCO uses SPOT 5 data for crop area estimation. Data with a resolution of 10 meters or better can be used to estimate crop area with more than 90% accuracy and the lowest sampling and non-sampling errors. SUPARCO generally uses 5 meter multispectral data produced through spatial enhancement of 10 meter multispectral images with a 5 meter panchromatic image.
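SUPARCO's exact spatial-enhancement workflow is not described here, but the general idea can be sketched as a generic Brovey-style pan-sharpening step: resample the 10 meter multispectral bands onto the 5 meter grid, then rescale each band by the ratio of the panchromatic band to the mean multispectral intensity. The following is a minimal sketch only, assuming co-registered numpy arrays and a clean 2x resolution factor; the function names and shapes are illustrative, not SUPARCO's actual code.

```python
import numpy as np

def upsample2x(band):
    # Nearest-neighbour resampling of a 10 m band onto the 5 m grid.
    return np.repeat(np.repeat(band, 2, axis=0), 2, axis=1)

def brovey_pansharpen(ms_10m, pan_5m):
    """Sharpen multispectral bands with a panchromatic band.

    ms_10m: array of shape (bands, H, W); pan_5m: array of shape (2H, 2W),
    co-registered with the multispectral scene. Returns (bands, 2H, 2W).
    """
    ms = np.stack([upsample2x(b) for b in ms_10m]).astype(float)
    intensity = ms.mean(axis=0)             # synthetic low-resolution intensity
    ratio = pan_5m / (intensity + 1e-6)     # guard against division by zero
    return ms * ratio                       # inject panchromatic spatial detail
```

Production systems typically add radiometric matching between the panchromatic and multispectral intensities and use better resampling, but the ratio step above is the core of the technique.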
Data quality control through a validation system makes the acquisition more reliable for area estimates. This helps to work out crop area estimates with better accuracy. The acquisition dates or times of the satellite data are extremely crucial for the final crop area estimates, as they remove the impact of other crops. Acquisition dates correspond to the start and peak of photosynthetic activity and to times of maturity. In Pakistan, there is a 6-8 week difference in cropping pattern from the southern latitude of 24 degrees to 34 degrees in the north. These two elements play an important role in defining the time schedule for satellite data acquisitions. These dates change from zone to zone and between cropping seasons. High resolution satellite data for crop area estimation are acquired twice during the growing season, once at sowing and again during the peak growth season.
REMOTE SENSING AND FLOOD 2010 IN PAKISTAN
Natural disasters of any kind play havoc and cause huge losses to both people and property. The current flooding in Pakistan is a true example of how floods of such a magnitude can put an entire country in chaos and adversely affect its economy. These floods badly affected all the provinces of the country. In the wake of such a widespread disaster, remote sensing data once again proved its importance for both relief and rescue efforts. The availability of temporal remote sensing data from Landsat and ASTER has made it possible to map the flood extents in Pakistan and track the movement of the flood from the north to the south of the country. Temporally mapped flood extent helps the authorities monitor the progress of the floods and decide how and from where to access the affected urban areas to provide relief and rescue in a timely manner. It also helps in determining which of the flood protection bunds (levees) need reinforcement, and in deciding where to breach the bunds to save cities, barrages and other important installations. In future, these mapped flood extents can help the concerned authorities with damage assessment of urban areas, road infrastructure and crops, as well as with demarcating and designating floodplain boundaries where none currently exist.
The Climate Change Cell at the University of Agriculture Faisalabad is working on satellite remote sensing under the "Indus River Basin Research Activities" project to estimate crop yield. The activity addresses potential impacts of climate change on water resources and crop productivity, with the ultimate goal of disaster risk reduction. Director of External Linkages and Head of the Climate Change Cell Prof. Dr. Ashfaq Ahmad and his team are committed to confronting the variable impacts of climate change; he rates Pakistan a "high risk" country in the global rankings of the Climate Change Vulnerability Index. There is therefore a pressing need for extra measures, such as the remote sensing approach, for disaster risk reduction.
Athletic performance is a complex trait that is influenced by both genetic and environmental factors.
There are many cognitive factors that come into play when evaluating things that could potentially affect an individual’s cognitive performance. Cognitive factors are those characteristics of a person that affect the way they learn and perform.
Testing has regularly shown that people exposed to high levels of negative ions perform better in mentally challenging tasks than those breathing normal positive ion dense air. Pierce J. Howard PhD at the Center for Applied Cognitive Sciences says in the Owner’s Manual for the Brain – “Negative ions increase the flow of oxygen to the brain; resulting in higher alertness, decreased drowsiness, and more mental energy.”
As noted in the research undertaken by Toyota in 2002, negative-ion exposure can increase cognitive performance. Years before this publication, however, researchers R. A. Duffee and R. H. Koontz were investigating ions and witnessing similar results. In 1965, they published a study in Psychophysiology that tested the effects of negatively ionized air on the cognitive functioning of rats. The study found that the rats’ abilities to navigate a water maze improved by an average of 350 percent with negative ion exposure, suggesting a significant boost in cognitive ability. Moreover, the performances of the older rats living in a negatively ionized atmosphere showed even more improvement.
Improve your mental abilities, including learning, thinking, reasoning, remembering, problem solving, decision making, and attention through Oxy-Angel, your personal negative ionizer. |
Dr. Kent Brantly was drawn to Africa with Samaritan’s Purse to help the people of Liberia live longer, healthier lives. During his time there, the unimaginable happened: there was an outbreak of Ebola, and he contracted the virus while treating patients with this incurable disease.
Fortunately, Dr. Brantly and Ms. Writebol (an American aid worker) were given the experimental drug ZMapp and quickly evacuated to the U.S. for further treatment.
The best news possible was announced this week; they have been released from the hospital. In a televised news conference, Dr. Kent Brantly was all smiles, and grateful for the treatment and kindness he has received. So, what is Ebola and what should you know about this deadly virus?
1. The Ebola Virus Or Ebola Hemorrhagic Fever
The rare but deadly Ebola virus has been in the headlines for months, but few in the U.S. actually understand what it is and how it is transmitted. This virus causes both internal and external bleeding while damaging the immune system and vital organs.
The internal storm caused by this virus results in blood not clotting, leaving victims to bleed to death. Currently, it is believed that the Ebola virus is fatal in up to 90% of infected individuals. That is why the recent news about the recovery of Dr. Brantly and Ms. Writebol is such welcome news.
2. How Ebola Virus Spreads
While Ebola is a virus, like the common cold, the measles, or the flu, fortunately it doesn’t spread as easily or quickly as these everyday maladies. Researchers believe that Ebola spreads through the direct contact with bodily fluids and waste from both humans and infected animals.
In Africa, monkeys, chimps, bats, and other animals carry Ebola, and their spit, feces and other fluids are believed to spread it to humans. While some viruses have a fairly short life outside their host, it is believed that Ebola can survive quite some time. This virus can be spread through casual contact with tables, beds, floors, lamps, door handles, and more.
The Ebola virus is not airborne, cannot be spread throughout the water supply, or in properly handled and cooked foods.
3. Signs & Symptoms
Part of the problem with Ebola is that it presents like so many other diseases and infections. The first signs are much like the flu – high fever, headaches, achy joints and muscles, weakness, lethargy and lack of appetite.
However, as Ebola progresses, individuals start to bleed internally and externally. Common areas of bleeding are gums, nose, eyes, ears, and around fingernails or toenails, while internally, tissues are bleeding as well.
4. Travel Precautions
If you travel for business, and it takes you to Europe, Africa or beyond, it is important to understand that not all countries and cultures practice the same level of hygiene that we do in the western part of the world. It is important to be mindful of your surroundings, and of the people you come into contact with.
Drink water only from sealed bottles; avoid raw fruits and vegetables; clean your hands thoroughly with disinfectants, and avoid touching your face or mouth whenever possible. If you are planning a trip soon, please review the Centers for Disease Control and Prevention’s article “Ebola Hemorrhagic Fever – prevention” for more information on how to protect yourself, and your family.
Until individuals are symptomatic, they are not spreading the disease. The incubation period can be anywhere from 2 to 21 days.
Of course, the best news of all is that the two Americans who were treated with ZMapp and tender care are, by all accounts, cured of the Ebola virus.
Most athletes experience the anaerobic threshold zone when they put in some serious work and add power to their workouts. When you reach it, it feels like a burn in the muscles, and you will truly have to push yourself to continue. When a lot of power is used over a short period - like in weightlifting, sprints or those exhausting HIIT workouts - your muscles need more oxygen than your bloodstream can provide. The anaerobic threshold kicks in when exercise intensity increases sharply and the aerobic system can no longer keep up with the energy demand.
Each muscle consists of contractile tissue, and each muscle fiber is composed of thick and thin filaments that act like cylindrical hydraulics, making the muscles contract and enabling movement. Everything needs energy, including our muscles. The energy used for muscle contraction is called adenosine triphosphate (ATP).
To fully explain the relationship between aerobic and anaerobic threshold and why a strong anaerobic threshold is good, we need to get a bit scientific and show the basics of how muscles work.
There are two forms of cell metabolism: aerobic and anaerobic. The more common and slower of the two is aerobic metabolism, accounting for 90% of your cellular metabolism. Aerobic metabolism occurs when we need energy for daily activities and slower forms of exercise, where the need for energy is relatively low.
The required energy is produced when our body converts food into energy by using oxygen - carbohydrates and fats are the two primary sources. They are stored in our bodies in the form of glycogen, which, during a process called glycolysis, is broken down into pyruvic acid. This acid is then used to create the much-needed ATP for our muscles. This process is constantly occurring in our bodies.
The by-products carbon dioxide and water are also produced in the process, and the more demanding the exercise, the more by-products generated. That is when we start breathing harder and sweating to get rid of those by-products.
As we start to pick up the pace, things change. To work harder, our muscles start to require more energy than can be produced using oxygen. Our bodies simply cannot supply enough oxygen to the muscles for the burst of energy needed at such high performance. When this happens, we go into anaerobic exercise, a condition in which energy is supplemented by contributions from anaerobic metabolism. This means that the body burns glucose and creates energy without oxygen; energy is produced very quickly here to keep up the high intensity.
In this process, a by-product of energy generation is lactic acid, which breaks down into lactate and hydrogen ions and starts to build up in the blood. These ions contribute to fatigue by interfering with and changing the pH of the muscle cell.
Unlike aerobic metabolism, which provides long-lasting energy, the anaerobic system is far from sustainable. At about 95% effort, work above the anaerobic threshold lasts for about 120-240 seconds as it burns through muscle glycogen and accumulates lactic acid. Once we reach the limits of our anaerobic tolerance, the pain and burn level increases; our muscles stiffen up and we need to slow down or stop. The bottom line of these physiological processes is that when we push hard, we use more oxygen than we can physically inhale.
The tolerance of lactate and a decreasing pH level, which is needed in the anaerobic threshold zone, is limited. However, it can be trained, making you faster and able to perform longer at high intensities. Training the anaerobic threshold is usually associated with various interval exercises. However, by focusing directly on the lungs, training your respiratory muscles will strengthen your ability to hold your breath, called apnea. This ability is also needed in the anaerobic threshold zone, where we lose breath. Therefore, respiratory muscle training benefits your anaerobic threshold.
By increasing anaerobic tolerance we also increase our bodies’ resistance toward lactate and we will be able to perform at higher intensities for longer periods, decreasing muscle fatigue and lowering our recovery time.
At Airofit, we have developed special respiratory training programs that allow you to focus directly on apnea, so you can boost your anaerobic tolerance without disrupting your ongoing training schedule.
No Bullies Allowed! B.A.N.D. Together
“B.A.N.D. Together” is a 45-minute program filled with lots of “WOW” moments that evoke screams and laughter. Magic is just the vehicle to impart the message of your students Banding Together to combat all types of bullying. With role playing and audience participation, students learn how to: Be a Buddy, not a bully; Attitude: avoid, ask for help; Nobody deserves to be bullied; Don’t Join In, help instead.
Transform your school into a positive and empowered learning environment.
BAND Together is an exciting, magical, and fun-filled program designed to educate about bullying. Through the use of comedy, magic, and role-playing with fun masks, students will learn what bullying is and how to identify it. They also learn techniques to help themselves and friends when being bullied and how to prevent bullying.
Students learn and practice techniques in this highly interactive program.
Students participate in the program through role-playing using masks. Games and role-playing are used to help them master the anti-bullying techniques introduced. This program also discusses: the different types of bullying behavior; how working together lessens bullying; how to open lines of communication; and how bystanders can help.
A complete, comprehensive, packed-with-fun 45 minutes!
Unique types of bullies are identified and discussed. Students learn tools to address psychological, emotional, and cyber bullying. Effective strategies leave students and staff feeling empowered and closer together.
Bullying happens when a person or group hurts or scares another person on purpose, and the person being picked on is unable to defend himself or herself. Bullying is recurring; it usually happens over and over again.
We can’t discern or tell who a bully is by the way a person looks. The way a person behaves makes someone a bully. Some of the most common bullying behaviors include verbal bullying, physical bullying, intellectual bullying, social bullying and cyber bullying. All of these are serious behaviors and need to be addressed!
Bullying happens most frequently at school. Bullies choose places at school where they can find victims and where there is limited supervision. These places may include: the bus, the playground, the gym, the cafeteria, the bathroom, and hallways.
S.A.F.E. is an acronym to help remember what to do and how to act when being bullied. S.A.F.E. stands for: Speak up, Ask an adult for help, Find your role, End it quietly.
Sometimes we know about bullying happening to someone else. The person being bullied needs a H.E.R.O. to step in! H.E.R.O. is another acronym to help remember what to do and how to act when bullying happens to someone else. H.E.R.O. stands for: Help out, Empathize, Report, Open communication. |
New observations indicate that massive, star-forming galaxies during the peak epoch of galaxy formation, 10 billion years ago, were dominated by baryonic or “normal” matter. This is in stark contrast to present-day galaxies, where the effects of mysterious dark matter seem to be much greater. This surprising result was obtained using ESO’s Very Large Telescope and suggests that dark matter was less influential in the early Universe than it is today. The research is presented in four papers, one of which will be published in the journal Nature this week.
We see normal matter as brightly shining stars, glowing gas and clouds of dust. But the more elusive dark matter does not emit, absorb or reflect light and can only be observed via its gravitational effects. The presence of dark matter can explain why the outer parts of nearby spiral galaxies rotate more quickly than would be expected if only the normal matter that we can see directly were present.
Now, an international team of astronomers led by Reinhard Genzel at the Max Planck Institute for Extraterrestrial Physics in Garching, Germany have used the KMOS and SINFONI instruments at ESO’s Very Large Telescope in Chile to measure the rotation of six massive, star-forming galaxies in the distant Universe, at the peak of galaxy formation 10 billion years ago.
What they found was intriguing: unlike spiral galaxies in the modern Universe, the outer regions of these distant galaxies seem to be rotating more slowly than regions closer to the core — suggesting there is less dark matter present than expected.
“Surprisingly, the rotation velocities are not constant, but decrease further out in the galaxies,” comments Reinhard Genzel, lead author of the Nature paper. “There are probably two causes for this. Firstly, most of these early massive galaxies are strongly dominated by normal matter, with dark matter playing a much smaller role than in the Local Universe. Secondly, these early discs were much more turbulent than the spiral galaxies we see in our cosmic neighbourhood.”
Both effects seem to become more marked as astronomers look further and further back in time, into the early Universe. This suggests that 3 to 4 billion years after the Big Bang, the gas in galaxies had already efficiently condensed into flat, rotating discs, while the dark matter halos surrounding them were much larger and more spread out. Apparently it took billions of years longer for dark matter to condense as well, so its dominating effect is only seen on the rotation velocities of galaxy discs today.
Comparison of rotating disc galaxies in the distant Universe and the present day. The imaginary galaxy on the left is in the nearby Universe and the stars in its outer parts are orbiting rapidly due to the presence of large amounts of dark matter around the central regions. On the other hand the galaxy at the right, which is in the distant Universe, and seen as it was about ten billion years ago, is rotating more slowly in its outer parts as dark matter is more diffuse. The size of the difference is exaggerated in this schematic view to make the effect clearer. The distribution of dark matter is shown in red. Credit: ESO/L. Calçada
This explanation is consistent with observations showing that early galaxies were much more gas-rich and compact than today’s galaxies.
The six galaxies mapped in this study were among a larger sample of a hundred distant, star-forming discs imaged with the KMOS and SINFONI instruments at ESO’s Very Large Telescope at the Paranal Observatory in Chile. In addition to the individual galaxy measurements described above, an average rotation curve was created by combining the weaker signals from the other galaxies. This composite curve also showed the same decreasing velocity trend away from the centres of the galaxies. In addition, two further studies of 240 star forming discs also support these findings.
Detailed modelling shows that while normal matter typically accounts for about half of the total mass of all galaxies on average, it completely dominates the dynamics of galaxies at the highest redshifts.
The disc of a spiral galaxy rotates over a timescale of hundreds of millions of years. Spiral galaxy cores have high concentrations of stars, but the density of bright matter decreases towards their outskirts. If a galaxy’s mass consisted entirely of normal matter, then the sparser outer regions should rotate more slowly than the dense regions at the centre. But observations of nearby spiral galaxies show that their inner and outer parts actually rotate at approximately the same speed. These “flat rotation curves” indicate that spiral galaxies must contain large amounts of non-luminous matter in a dark matter halo surrounding the galactic disc.
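That argument is easy to reproduce numerically. Below is a minimal sketch with toy masses and scale lengths that are not fitted to any real galaxy: circular velocity follows v = sqrt(G M(<r) / r), so a baryon-only galaxy whose enclosed mass converges produces a falling outer curve, while adding a halo whose enclosed mass grows as M ∝ r keeps the curve flat.

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def v_circ(r_kpc, m_enclosed_msun):
    # Circular velocity from the enclosed mass: v = sqrt(G M(<r) / r).
    return np.sqrt(G * m_enclosed_msun / r_kpc)

r = np.linspace(1.0, 30.0, 30)               # radii in kpc
m_disc = 5e10 * (1.0 - np.exp(-r / 3.0))     # toy disc: enclosed mass converges
m_halo = 1e10 * r                            # toy isothermal halo: M grows as r

v_baryons_only = v_circ(r, m_disc)           # falls roughly as 1/sqrt(r) far out
v_with_halo = v_circ(r, m_disc + m_halo)     # stays approximately flat
```

In this picture the distant galaxies in the study behave like the first curve, and local spirals like the second.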
The data analysed were obtained with the integral field spectrometers KMOS and SINFONI at ESO’s Very Large Telescope in Chile in the framework of the KMOS3D and SINS/zC-SINF surveys. It is the first time that such a comprehensive study of the dynamics of a large number of galaxies spanning the redshift interval from z~0.6 to 2.6, or 5 billion years of cosmic time, has been carried out.
This new result does not call into question the need for dark matter as a fundamental component of the Universe or the total amount. Rather it suggests that dark matter was differently distributed in and around disc galaxies at early times compared to the present day.
Open Access Publication 1: Strongly baryon-dominated disk galaxies at the peak of galaxy formation ten billion years ago
Open Access Publication 2: Falling outer rotation curves of star-forming galaxies at 0.6 ≲ z ≲ 2.6 probed with KMOS3D and SINS/zC-SINF
Open Access Publication 3: The evolution of the Tully-Fisher relation between z ∼ 2.3 and z ∼ 0.9 with KMOS3D
Open Access Publication 4: KMOS3D: Dynamical constraints on the mass budget in early star-forming disks
Featured Image: Schematic representation of rotating disc galaxies in the early Universe (right) and the present day (left). Observations with ESO’s Very Large Telescope suggest that such massive star-forming disc galaxies in the early Universe were less influenced by dark matter (shown in red), as it was less concentrated. As a result the outer parts of distant galaxies rotate more slowly than comparable regions of galaxies in the local Universe. Credit: ESO/L. Calçada
Provided by: ESO |
Mathematics, Problem Solving
Grade 1- 3
Students will learn to draw conclusions from facts in a logical way; that is, to reason.
Have the students draw and cut out paper shapes of rectangles, squares, trapezoids, and triangles. Discuss their different properties. For example, talk about the number of equal and unequal sides.
Then, ask questions about the shapes. For example, How are they alike or different? or Which do you see more/less of? Have the students look around their houses, apartments, or neighborhoods for examples of shapes.
Review the following fact: the place of a digit in a numeral tells its value.
In 325, the digit 3 has the place value of hundreds, 2 has the place value of tens, and 5 has the place value of ones.
Have students look for numbers in everyday experiences. For example, the values of page numbers in a book or on a restaurant menu. Then, ask questions about the place value of digits, such as What place do you think the 3 is in the number 36?
Review the following facts:
Even numbers end with 0, 2, 4, 6, or 8.
Odd numbers end with 1, 3, 5, 7, or 9.
Help the child to recognize even/odd numbers in everyday life. Ask how the child applied reasoning to his or her recognition.
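For teachers who like to check examples quickly, the two facts above (place value and even/odd endings) are easy to demonstrate mechanically. This is a small illustrative sketch; the function names are my own:

```python
def place_values(n):
    """Decompose a numeral into digit values, e.g. 325 -> [300, 20, 5]."""
    digits = str(n)
    return [int(d) * 10 ** (len(digits) - i - 1) for i, d in enumerate(digits)]

def parity(n):
    # Even numbers end with 0, 2, 4, 6, or 8; odd numbers with 1, 3, 5, 7, or 9.
    return "even" if n % 2 == 0 else "odd"

print(place_values(325))  # [300, 20, 5]
print(parity(36))         # even
```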
copies of the activity sheets
In addition to their sweet, succulent fruit, peach trees (Prunus persica) add an ornamental touch to the landscape with their lavender flowers and shiny green leaves. Peach trees thrive in United States Department of Agriculture plant hardiness zones 5 to 9, but even in the perfect environment, you may encounter issues as diverse as insect pests, stunted blooms or dying leaves. Although the effort pays off in spades, peach trees require careful observation and maintenance.
Leaf Curl Tips
Peach leaf curl is a serious disease that affects peach trees, causing the leaves to distort and die. If left untreated, it may lead to the tree's death. This affliction first appears in spring. Leaves pucker and exhibit reddish spots, which eventually turn yellow and then grayish white as fungal spores appear, causing some leaves to fall off. To prevent leaf curl, treat your peach tree with a fungicide -- such as a copper, Bordeaux or chlorothalonil mixture -- after it sheds its leaves and again as flower buds swell, but before they bloom. Prune any infected leaves and gather any fallen leaves. Remove them from the vicinity of the tree and burn them. For new plantings, plant resistant varieties such as Frost, Indian Free or Muir.
Overwatering causes yellowed leaves or, in extreme cases, root rot, which can lead to the death of the tree. Never allow standing water, which is the clearest sign of overwatering, to accumulate at the base of the tree. To prevent over-wet roots and fungal root rot, mix soil from the surrounding area with the growing medium when you plant the peach tree. This introduces a gradual soil change and encourages root growth, rather than boxing in roots and moisture.
Deficiency of a certain element, such as nitrogen or potassium, in your peach tree's soil may cause leaves to wilt. Take a soil sample and have it analyzed by a local Cooperative Extension office. If the soil exhibits a deficiency, amend it with fertilizer, adding high-phosphorus fertilizer to low-phosphorus soil, for example. In general, peach trees require 10-10-10 general-use fertilizer. For optimum growth, apply 1/2 pound of fertilizer 10 days after planting and again 40 days later. After the first year, apply 3/4 pound of fertilizer once in early spring and once in late spring. Up the amount to 1 pound for mature trees.
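The schedule above reduces to a simple lookup by tree age. In the sketch below the age cutoffs are assumptions (the text does not say when a tree counts as mature), so treat it as illustrative rather than prescriptive:

```python
def peach_fertilizer_lb(tree_age_years):
    """Approximate pounds of 10-10-10 per application (age cutoffs assumed)."""
    if tree_age_years < 1:
        return 0.5    # 10 days after planting, and again 40 days later
    if tree_age_years < 3:
        return 0.75   # once in early spring and once in late spring
    return 1.0        # mature trees
```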
Other Care Tips
If your peach tree doesn't get the care it needs, its leaves could wilt, or the tree could die. Peach trees require full sunlight and elevated planting sites with sandy loam soil. They need about 6 feet of space and should be planted in spring. Prune your peach tree each year, building a balanced frame of scaffolds using the open center system. To prevent the spread of disease, disinfect your pruning shears with a bleach solution before use. Apply organic mulch in late winter or early spring. In winter, apply dormant oil spray to control insects.
Plants are among the most important materials for effective landscape design. Yet the fundamentals of plant biology and growth; their morphology, colour, and functional assets; and details such as planting, pruning techniques, and maintenance practices are surprisingly absent from our education and training, which tend to focus on other core principles like drainage, grading, and spatial relationships.
What do you need to know about how plants grow and function? How can you determine appropriate plants for a particular site? How can you use their distinct design features effectively? What are the real design considerations to keep in mind? This book, a Botany 101 course for professionals and students alike, walks you through all the answers, equipping you with the ability to be not just an informed landscape designer but also an effective planting designer.
Kimberly Duffy Turner, a landscape architect and horticulturalist, explains the essentials of planting design, exploring form and function and showing how various characteristics of plants and trees (shape, pigment, leaf venation, texture, fragrance, sound, height, and more) can be used to achieve effective site-appropriate designs.
Specifying appropriate plant material and examining stock at the nursery, drawing up a planting schedule of the species or cultivar, sizes, and quantities, and evaluating modes of transplantation (when to ask for bare root, balled and burlapped, or containerized) are other key on-the-job concepts covered.
A chapter on green design outlines some of the sustainable trends in botany: the role of LEED certification in landscape design; mitigating environmental problems with plants and open space; the emergence of green roofs and vertical gardens; biomimicry; and sensitive material selection, like composite wood products and plant-derived, soy-based paints.
Both a handy appendix of common Latin and Greek terms used in horticulture and a comprehensive list of plant palettes are included.
With more than 150 colour photographs and schematic drawings illustrating key strategies, Botany for Designers is the professional's go-to guide, showing you how an appreciation of plant fundamentals can lead to more inspired, well-designed landscapes.
A brain stroke occurs when blood flow to an area of the brain is cut off. The brain depends upon its arteries to bring blood from the heart and lungs. The blood carries oxygen and nutrients to the brain and takes waste and CO2 away.
Why is medical treatment crucial? What causes it?
Many different types of ailments can cause ischemic stroke. The problem usually lies in the blood vessels of the neck or head, most often due to atherosclerosis, the gradual deposition of cholesterol in the vessel walls. Blood clots can obstruct the artery in which they form, or may dislodge and become trapped in vessels closer to the brain. Another cause of stroke is blood clots from the heart, which may form as a result of abnormalities of the heart valves, myocardial infarction, or an irregular heartbeat.
While these are the most typical causes of ischemic stroke, there are numerous other possible causes. Examples include disorders of blood clotting, injury to the blood vessels of the neck, or use of street drugs.
Types of stroke
Stroke can be divided into two types: embolic and thrombotic. When damaged or diseased cerebral arteries become blocked within the brain, a thrombotic stroke happens. Referred to as thrombosis or cerebral infarction, this type of event is responsible for 50% of all strokes. Thrombosis may be divided into two groups that relate to the location of the blockage: small vessel thrombosis and large vessel thrombosis.
Large vessel thrombosis is the term used when the blockage is in one of the brain's larger blood-supplying arteries, such as the carotid or middle cerebral artery, while small vessel thrombosis involves one of the brain's smaller, yet deeper, penetrating arteries. This latter type of stroke is also called a lacunar stroke. An embolic stroke is also caused by a clot in an artery, but in this case the clot forms somewhere other than in the brain itself. Often formed in the heart, these emboli travel through the bloodstream until they become lodged and cannot travel any farther. This naturally restricts the blood flow to the brain and results in near-immediate physical and neurological deficits.
Ischemic stroke is the most common type of stroke, accounting for approximately 88 percent of all strokes. Stroke may affect individuals of all ages, including children, but most individuals with ischemic strokes are older, and the risk of stroke increases with age. Every year, about 55,000 more women than men have a stroke, and it is more common among African Americans than members of other ethnic groups. Many individuals with stroke have other problems or conditions that put them at higher risk, such as high blood pressure, heart disease, smoking, or diabetes.
The oceanic slow carbon cycle.
The seafloor is absorbing carbon dioxide, the greenhouse gas most associated with climate change, according to researchers at the University of Sydney in Australia. But while the ocean bottom might someday help reverse global warming, the process would take millions of years.
A previously unknown connection between geological atmospheric carbon dioxide cycles and the fluctuating capacity of the ocean crust to store carbon dioxide has been uncovered by two geoscientists from the University of Sydney.
Prof Dietmar Müller and Dr Adriana Dutkiewicz from the Sydney Informatics Hub and the School of Geosciences report their discovery in the journal Science Advances.
The slow carbon cycle predates humans and takes place over tens of millions of years, driven by a series of chemical reactions and tectonic activity. The slow carbon cycle is part of Earth's life insurance, as it has maintained the planet's habitability throughout a series of hothouse climates punctuated by ice ages.
One idea is that when atmospheric carbon dioxide rises, the weathering of continental rock exposed to the atmosphere increases, eventually drawing down carbon dioxide and cooling the Earth again.
Less well-known is that weathering exists in the deep oceans too. Young, hot, volcanic ocean crust is subject to weathering from the circulation of seawater through cracks and open spaces in the crust. Minerals such as calcite, which capture carbon in their structure, gradually form within the crust from the seawater.
Recent work has shown that the efficiency of this seafloor weathering process depends on the temperature of the water at the bottom of the ocean--the hotter it is, the more carbon dioxide gets stored in the ocean crust.
Prof Müller explains: "To find out how this process contributes to the slow carbon cycle, we reconstructed the average bottom water temperature of the oceans through time, and plugged it into a global computer model for the evolution of the ocean crust over the past 230 million years. This allowed us to compute how much carbon dioxide is stored in any new chunk of crust created by seafloor spreading."
The computer model reveals that the capacity of the ocean crust to store carbon dioxide changes through time with a regular periodicity of about 26 million years.
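The papers' actual analysis is not reproduced here, but detecting such a periodicity in a model time series is conceptually straightforward: remove the mean, compute a periodogram, and look for the dominant spectral peak. The toy sketch below uses synthetic data standing in for the model output, with a 26 Myr sinusoid plus noise.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 230, 1.0)                          # time in Myr
series = np.sin(2 * np.pi * t / 26.0) + 0.5 * rng.standard_normal(t.size)

power = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=1.0)              # cycles per Myr

peak = freqs[np.argmax(power[1:]) + 1]              # skip the zero frequency
print(f"dominant period ~ {1.0 / peak:.1f} Myr")    # close to 26 Myr
```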
Several geological phenomena including extinctions, volcanism, salt deposits and atmospheric carbon dioxide fluctuations reconstructed independently from the geological record all display 26 million-year cycles.
A previous hypothesis had attributed these fluctuations to cycles of cosmic showers, thought to reflect the Solar System's oscillation about the plane of the Milky Way Galaxy.
Prof Müller says: "Our model suggests that characteristic 26 million-year periodicity in the slow carbon cycle is instead driven by fluctuations in seafloor spreading rates that in turn alter the capacity of the ocean crust to store carbon dioxide. This raises the next question: what ultimately drives these fluctuations in crustal production?"
Subduction, the sinking of tectonic plates deep into the convecting mantle, is regarded as the dominant plate driving force of plate tectonics. It follows that cyclicities in seafloor spreading rates should be driven by equivalent cycles in subduction.
An analysis of subduction zone behaviour suggests that the driving force in the 26 million-year periodicity originates from an episodicity in subduction zone migration. This component of the slow carbon cycle needs to be built into global carbon cycle models.
Better understanding of the slow carbon cycle will help us predict how the Earth will react to the human-induced rise in atmospheric carbon dioxide. It will help us answer the question: To what extent will the continents, oceans and the ocean crust take up the extra carbon dioxide in the long run?
The above story is based on materials provided by University of Sydney. |
Setting up a café is exciting for students and helps build community. Poetry is an accessible and personal genre through which to express ideas and feelings. Oral storytelling brings history and culture to the communication process. This task encourages classroom communities to come together and share writing in a safe, supportive and inspiring space.
- Review or explore elements of both storytelling and poetry.
- Select and read various poems and stories together as a class.
- Assess student interest in artistic modes of expression. Students may work individually, in partnerships or in small groups depending on their preferred medium.
- Make note of literary devices and strategies used to convey ideas and feelings such as:
- Descriptive language to help the reader or listener visualize the feeling or message
- Repetition for emphasis
- Comparisons such as similes and metaphors
- Introduce students to the Do Something Student Planning Guide. Instruct them in mapping the steps necessary to complete poems and stories.
- Share the sample rubric or adapt it into a checklist for students. Refer to the rubric to define expectations.
- As a class, generate topics that connect to central text themes.
- Show students video clips of poetry and storytelling performances.
- Help students decide whether to write a poem or an original story (fiction or creative nonfiction).
- Students will work individually on writing but can work with partners to peer review each other’s work.
- Have students practice telling their stories or reciting their poems aloud.
- Ask students to provide feedback to each other related to performance elements such as eye contact, expression, delivery clarity and volume.
- Students finalize their poems and stories and prepare to perform in the cafe showcase.
- Decide location for the cafe based on your school community, resources and schedule. If possible, invite families, other grades and community members.
- Think about creative ways to transform space into the cafe.
- Consider mood lighting, tablecloths, flowers for decoration, etc.
- Consider serving snacks or beverages.
- Students can help with set up and decorations.
- Students should be encouraged to show support for each performer.
- Throughout the showcase, tie poems and stories back to the literacy work being done in class, the central texts and the overall social justice themes.
- Take photographs of the cafe event and use them in a digital or paper scrapbook that celebrates the event.
Students can journal about how their poems and stories reflected central text themes. Some suggested reflection questions include:
- What topic or theme from the central text was included in your poem or story?
- What important message did your poem or story express to your audience?
- How can poems or stories be a form of social action?
English language learners
The heavy language focus of this task can be challenging for students learning English, so be sure to check in and scaffold the experience throughout. Graphic organizers can help with the poetry or story-writing process. This project engages linguistic and intra-personal learning modalities throughout the writing process, and the inter-personal modality during the performances.
Connection to anti-bias education
Poetry and storytelling are personal, expressive forms of writing that help students develop their voices and convey thoughts, feelings and understandings related to social justice topics. By hearing multiple performances, students can learn from and appreciate other perspectives, a foundational element of anti-bias curriculum. |
- A logic gate is a basic building block of a digital circuit. Most logic gates have two inputs and one output. At any given moment, every terminal is in one of the two binary conditions low (0) or high (1), represented by different voltage levels. The logic state of a terminal can, and generally does, change often, as the circuit processes data. In most logic gates, the low state is approximately zero volts (0 V), while the high state is approximately five volts positive (+5 V).
- There are seven basic logic gates: AND, OR, XOR, NOT, NAND, NOR, and XNOR.
- The AND gate is so named because, if 0 is called "false" and 1 is called "true," the gate acts in the same way as the logical "and" operator. The following illustration and table show the circuit symbol and logic combinations for an AND gate. (In the symbol, the input terminals are at left and the output terminal is at right.) The output is "true" when both inputs are "true." Otherwise, the output is "false."
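As a minimal sketch, all seven gates can be written as Boolean functions on the 0/1 logic states described above, and the AND truth table printed directly:

```python
# The seven basic gates as functions of 0/1 inputs.
AND  = lambda a, b: a & b
OR   = lambda a, b: a | b
XOR  = lambda a, b: a ^ b
NOT  = lambda a: 1 - a
NAND = lambda a, b: NOT(AND(a, b))
NOR  = lambda a, b: NOT(OR(a, b))
XNOR = lambda a, b: NOT(XOR(a, b))

# Truth table for the AND gate: the output is 1 only when both inputs are 1.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b))
```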
Example of a logic Diagram: |
Key: subject = yellow, bold; verb = green, underline. Subjects and verbs must agree in number: if the subject is singular, the verb must be singular too (example: she writes every day); if the subject is plural, the verb must also be plural. As Rozakis (2003) puts it, "The basic rule of sentence agreement is simple: a subject must agree with its verb in number. Number means singular or plural."
Finding and fixing subject-verb agreement errors starts with definitions. The subject of a sentence is the actor or idea of the sentence; the verb is the action or state of being. Some nouns and pronouns seem to be plural but function as singular, so the verb must agree with these trick singulars. An example is "everybody," a singular pronoun that refers to a group but must take a singular verb. Another key rule for finding subjects: a subject will come before a phrase beginning with "of"; the word "of" is the culprit in many, perhaps most, subject-verb mistakes. The subject and verb of every clause (independent or dependent) must agree in person and number, and with irregular verbs such as "be" and "have" the singular and plural present-tense forms are themselves irregular, so extra care is needed.
The basic rules to ensure subject-verb agreement are: (1) identify the real subject, the person or thing that is described by the verb or that performs its action; (2) determine whether the subject is singular or plural; (3) use the matching singular or plural form of the verb. Choosing a singular or plural verb usually comes naturally, but some tricky subjects cause problems even for native speakers. Try an example: Anna and Louie ___ (is/are) afraid of cows.
Research bears out how persistent these errors are. One paper investigates the subject-verb agreement (SVA) errors made by fourth-semester (year two) diploma students at a public university in Malaysia; it found that students have difficulty with subject-verb agreement because their first language, Malay, has no rule requiring the subject to agree with the verb, and this mother-tongue interference affects their performance in English grammar. Another experiment was designed to simulate the conditions for subject-verb agreement errors, which are rarely but regularly observed in highly educated adults: twenty-four adults and 24 children (12 years old) were orally presented with sentences to write. A checklist for research papers in writing-intensive courses likewise identifies subject-verb agreement, verb tense shift, pronoun-antecedent agreement, and diction errors as progress points in the successful completion of the paper.
This picture provides a 3D graphical representation of a generic influenza virion’s ultrastructure, and is not specific to a seasonal, avian or 2009 H1N1 virus.
There are three types of influenza viruses: A, B and C. Human influenza A and B viruses cause seasonal epidemics of disease almost every winter in the United States. The emergence of a new and very different influenza virus able to infect people can cause an influenza pandemic. Influenza type C infections cause a mild respiratory illness and are not thought to cause epidemics.
Influenza A viruses are divided into subtypes based on two proteins on the surface of the virus: the hemagglutinin (H) and the neuraminidase (N). There are 16 different hemagglutinin subtypes and 9 different neuraminidase subtypes. Influenza A viruses can be further broken down into different strains. Current subtypes of influenza A viruses found in people are influenza A (H1N1) and influenza A (H3N2) viruses. In the spring of 2009, a new influenza A (H1N1) virus emerged to cause illness in people. This virus was very different from regular human influenza A (H1N1) viruses, and the new virus caused an influenza pandemic.
Influenza B viruses are not divided into subtypes, but they can be further broken down into different strains.
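The subtype naming scheme itself is purely combinatorial; with the 16 hemagglutinin and 9 neuraminidase proteins mentioned above, 144 influenza A subtype labels are possible, as this trivial sketch shows:

```python
# Enumerate every possible influenza A subtype label from 16 H and 9 N proteins.
subtypes = [f"H{h}N{n}" for h in range(1, 17) for n in range(1, 10)]
print(len(subtypes))   # 144 possible influenza A subtype labels
print(subtypes[:3])    # ['H1N1', 'H1N2', 'H1N3']
```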
Science Fair Projects and Geyser Models
Glassware Geyser Model
Before you start please read these important notes on building a geyser model.
There are several good ways of making a working model of a geyser. Since most of these methods involve a heat source and pressure, safety is very important. Experiments should be conducted with adult supervision. Care should be taken to conduct these experiments in a lab or an area where there is enough room to stand away from the model. Safety goggles should be worn by participants who will be near the model. The experiments should be conducted in an area where water spillage can not cause damage. A school lab is the perfect location. If done at home, a good work location in which to do these experiments must be chosen. My children and I have built these models on our patio where we can safely conduct the experiments without incurring the wrath of my wife. A garage workshop can also be a good place. Please be safe and always keep small children away from the experiment area.
A simple model can be constructed using laboratory glassware. Items needed (inferred from the steps below): an Erlenmeyer flask, a one-hole rubber stopper, a length of glass tubing, a hot plate (or a Bunsen burner and ring stand), a second ring stand, a small, strong plastic bowl or Tupperware container, plumber's putty, and a drill.
Fill the flask with water to about 3/4 full. As shown in the photo above, insert the stopper in the flask. Carefully insert the glass tube into the rubber stopper so it extends down into the flask about 3/4 of the way to the bottom. Place the flask on the hot plate or on a ring stand just above a Bunsen burner. Drill a hole in a strong small plastic bowl or Tupperware container. Work the glass tube into the hole in the bowl and place the bowl on a ring stand as shown in the photo above. Plumber's putty can be applied to the bottom of the bowl around the glass tube entry point to keep the bottom of the bowl from leaking. The glass tube should extend up an inch or so into the bowl. The bowl should catch the water from an eruption and also allow the water to flow back into the model.
Fill the bowl until water runs down the tube into the flask. Keep pouring water until the flask and tube are full of water. Do not fill the bowl above the top of the glass tube. Turn the heat on and allow the water to heat up. Observe how long it takes for an eruption to occur. Water should erupt into the air inside the bowl. Observe how the eruption occurs. After the eruption, water from the bowl should run back down the tube into the flask. One can design several experiments. How does the length of the glass tube affect the amount of time required for an eruption? What other factors could affect the eruption?
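One factor worth exploring is the water column itself: the pressure at the bottom of the tube raises the boiling point there, so the bottom water superheats before flashing to steam. The sketch below gives a rough Clausius-Clapeyron estimate; the constants are textbook approximations, and the 100 m case stands in for a natural geyser conduit rather than this tabletop model.

```python
import math

R  = 8.314     # gas constant, J/(mol K)
L  = 40660.0   # molar heat of vaporization of water, J/mol (approximate)
T0 = 373.15    # boiling point at standard pressure, K
P0 = 101325.0  # standard atmospheric pressure, Pa
RHO, G = 1000.0, 9.81  # water density (kg/m^3) and gravity (m/s^2)

def boil_temp_at_depth(depth_m):
    # Pressure under the water column, then Clausius-Clapeyron for T_boil.
    p = P0 + RHO * G * depth_m
    return 1.0 / (1.0 / T0 - R * math.log(p / P0) / L)

for d in (0.3, 1.0, 100.0):   # tabletop tube vs. a natural geyser conduit
    print(f"{d:6.1f} m deep: boils at {boil_temp_at_depth(d) - 273.15:.1f} C")
```

At tabletop depths the shift is under a degree, which is why the narrow tube (trapping rising steam bubbles) matters so much in the model; at 100 m the boiling point climbs above 180 C, which is what lets a natural geyser store so much energy before erupting.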
This model is essentially the same as the glassware model above, but with common household items (or at least things you can get at the local hardware store) substituted for the more expensive and sometimes harder to come by (for parents anyway) glassware.
1. Instead of the flask, use a can with a screw-on lid like the one in the photo. Make sure the can was not used to hold flammable or hazardous material like lantern fuel or paint thinner. Make sure the can is thoroughly cleaned.
2. Use a rubber stopper or cork that snugly fits in the hole and plugs it up. The stopper or cork will need a hole in the middle big enough to fit small copper tubing in. Use the copper tubing instead of the glass tube above.
3. The small copper tubing needs to reach about 3/4 of the depth of the can and needs to extend 6 to 10 inches above the can.
4. You will need a Tupperware dish or a plastic tray with sides. Drill a hole the same size as the tubing in the dish and fit the tubing in it. The dish will catch the falling water during an eruption. You can put plumber's putty around the hole to minimize leaks.
5. As above, you will need a Bunsen burner or a hot plate. If you use a Bunsen burner you will need a ring stand. If you use a hot plate you can put the can directly on the hot plate. Be careful, though, because the can will get hot.
Fill the can 3/4 of the way up with water. Put the cork/tube in place. Heat it up and wait for the eruption. Make sure you note how long it takes. This model is not self-filling. Once the eruption is over, turn off the heat and let the can cool. Then, if you want to do the experiment again, take the cork out and refill. Be careful: don't burn yourself. Potholders or potholder gloves are helpful if you need to touch the hot items. The cooks in your family can help you with how to handle the hot can and tube and how to safely use the hot plate. Remember there is water involved, so hot plates need to be chosen and handled carefully. I prefer the Bunsen burner but have used hot plates and even a propane stove. I also prefer doing this in a lab or outside on a patio.
A new paper in Current Anthropology suggests that Neanderthals were as gifted at hunting as modern humans.
CURRENT ANTHROPOLOGY Volume 47, Number 1, February 2006
Ahead of the Game
Middle and Upper Palaeolithic Hunting Behaviors in the Southern Caucasus
by Daniel S. Adler, Guy Bar-Oz, Anna Belfer-Cohen, and Ofer Bar-Yosef
Over the past several decades a variety of models have been proposed to explain perceived behavioral and cognitive differences between Neanderthals and modern humans. A key element in many of these models and one often used as a proxy for behavioral "modernity" is the frequency and nature of hunting among Palaeolithic populations. Here new archaeological data from Ortvale Klde, a late Middle–early Upper Palaeolithic rockshelter in the Georgian Republic, are considered, and zooarchaeological methods are applied to the study of faunal acquisition patterns to test whether they changed significantly from the Middle to the Upper Palaeolithic. The analyses demonstrate that Neanderthals and modern humans practiced largely identical hunting tactics and that the two populations were equally and independently capable of acquiring and exploiting critical biogeographical information pertaining to resource availability and animal behavior. Like lithic techno-typological traditions, hunting behaviors are poor proxies for major behavioral differences between Neanderthals and modern humans, a conclusion that has important implications for debates surrounding the Middle–Upper Palaeolithic transition and what features constitute "modern" behavior. The proposition is advanced that developments in the social realm of Upper Palaeolithic societies allowed the replacement of Neanderthals in the Caucasus with little temporal or spatial overlap and that this process was widespread beyond traditional topographic and biogeographical barriers to Neanderthal mobility. |
Non-destructive Testing is one part of the function of Quality Control and is complementary to other long-established methods. By definition, non-destructive testing is the testing of materials for surface or internal flaws or metallurgical condition, without interfering in any way with the integrity of the material or its suitability for service. The technique can be applied on a sampling basis for individual investigation or may be used for 100% checking of material in a production quality control system.
Non-destructive Testing is not just a method for rejecting substandard material; it is also an assurance that the supposedly good is good. The technique uses a variety of principles; there is no single method around which a black box may be built to satisfy all requirements in all circumstances.
What follows is a brief description of the methods most commonly used in industry, together with details of typical applications, functions and advantages.
COMMON NDT METHODS
While there are many different methods of NDT only the more common NDT methods used for the evaluation of materials and welds will be outlined here. These methods are the following:
(1) Visual inspection
(2) Liquid penetrant inspection
(3) Magnetic particle testing
(4) Radiographic inspection
(5) Ultrasonic testing
(6) Eddy current testing
Non-Destructive Testing (NDT) training is provided for people working in many industries. It is generally necessary that the candidate successfully completes a theoretical and practical training program and has performed several hundred hours of practical application of the particular method they wish to be trained in. At that point, they may sit a certification examination. While online training has become more popular, many certifying bodies will require additional practical training.
Levels of certification
Most NDT personnel certification schemes specify three “levels” of qualification and/or certification, usually designated as Level 1, Level 2 and Level 3. The roles and responsibilities of personnel at each level are generally as follows (there are slight differences or variations between different codes and standards):
- Level 1 personnel are technicians qualified to perform only specific calibrations and tests under close supervision and direction by higher-level personnel, and they can only report test results. Normally they work following specific work instructions for testing procedures and rejection criteria.
- Level 2 are engineers or experienced technicians who are able to set up and calibrate testing equipment, conduct the inspection according to codes and standards (instead of following work instructions) and compile work instructions for Level 1 technicians. They are also authorized to report, interpret, evaluate and document testing results. They can also supervise and train Level 1 technicians. In addition to testing methods, they must be familiar with applicable codes and standards and have some knowledge of the manufacture and service of tested products.
- Level 3 personnel are usually specialized engineers or very experienced technicians. They can establish NDT techniques and procedures and interpret codes and standards. They also direct NDT laboratories and have a central role in personnel certification. They are expected to have wider knowledge covering materials, fabrication and product technology.
Piping is crucial for transporting fluid from one piece of equipment to another in any process plant. There are many aspects to piping, and it can be a daunting and time-consuming task to understand how everything fits together. This course provides a broad overview of piping engineering from design to construction, including an introduction to piping process diagrams (PFD, UFD, P&ID, line list, etc.).
Process piping interconnects various instruments in a project and carries the chemicals that flow through the plant. A piping system uses different types of pipe fittings, such as valves, tees, flanges, reducers, elbows and gaskets, in its fabrication. Hence engineers should be familiar with those fittings and their quality control.
- Introduction to Piping design
- (ASME B31.1/B31.3)
- Classification of straight pipes
- Classification of Pipe Fitting
- “O” Lets
- Pipe Supports
- Symbols used in drawing
- Interpretation of Pipe drawings
Welding is used to fuse together pieces of metal to create or repair various metallic structures. There are many ways welding can be done; welding equipment can operate using lasers, open flames, or an electric arc. Welding inspectors examine the connections and bonds between metals, using visual tools and electrical instruments to check and ensure the quality and safety of connections. In addition to working in the field completing their examinations of welding projects, inspectors spend time in an office setting compiling their reports. The majority of inspectors work on a full-time basis, primarily during business workdays. Some risk may be associated with this profession; welding inspectors utilize protective gear during their evaluations to keep themselves from harm on welding sites.
Main features included in the course of Welding Inspection are
- Inspect welds
- Physical Condition
- Knowledge of Welding
- Knowledge of Drawings, Specifications, and Procedures
- Knowledge of Testing Methods
- Education and Training
- Welding Experience
- Inspection Experience
- Certification of Qualification
Some of the processes included in the course are
WPS – Weld Procedure Specification:
- Qualified instructions on how to complete the weld.
- A WPS is a written (qualified) welding procedure prepared to provide direction for the making of production welds.
PQR – Procedure Qualification Record (ASME) & WPAR – Weld Procedure Approval Record:
- Record of the welding parameters and test results
- A PQR is a record of welding data used to weld a test coupon
Welder Qualification Test certificate & Welder Performance Qualification (ASME):
- Record of Welder test results and range of approval
The ASNT NDT Level II program provides third-party certification for nondestructive testing (NDT) personnel whose specific jobs require knowledge of the technical principles underlying the nondestructive tests they perform, witness, monitor or evaluate. The program provides a system for ASNT NDT Level II certification in accordance with Recommended Practice No. SNT-TC-1A.
Certification under this program results in the issuance of an ASNT certificate and wallet card attesting to the fact that the certificate holder has met the published guidelines for the Basic and Method examinations as detailed in Recommended Practice No. SNT-TC-1A.
Topics covered: destructive and non-destructive testing (definitions and types), testing and inspection, levels of certification, ASNT Level II responsibilities, NDT responsibilities, methods of NDT (conventional and advanced), the stress–strain curve, and mechanical properties.
COMMON NDT METHODS
The six most frequently used test methods are
- Magnetic Particle Testing (MT)
- Liquid Penetrant Testing (PT)
- Radiographic Testing (RT)
- Ultrasonic Testing (UT)
- Electromagnetic Testing (ET)
- Visual Testing (VT)
SPECIALISED NDT METHOD
- Acoustic Emission Testing (AE)
- Guided Wave Testing (GW)
- Laser Testing Methods (LM)
- Leak Testing (LT)
- Magnetic Flux Leakage (MFL)
- Neutron Radiographic Testing (NR)
- Thermal/Infrared Testing (IR)
- Vibration Analysis (VA)
ASNT NDT LEVEL II EXAMINATIONS
Successful ASNT NDT Level II certification candidates must complete the General examination and at least one Specific examination.
The 50-question multiple-choice Level II General examinations cover the fundamentals, principles and theory listed in the applicable Level II topical outlines of the ANSI/ASNT American National Standard CP-105, ASNT Standard Topical Outlines for Qualification of Nondestructive Testing Personnel. The General exam is the same for all Sectors for any one test method.
The specific examinations determine the industry Sector. ASNT currently offers Specific examinations for the General Inspection and Pressure Equipment Sectors. These examinations consist of 40 multiple-choice questions based on an NDT procedure covering the equipment, operating processes and NDT techniques commonly used in the applicable industry Sector.
QA/QC in Civil
Quality Assurance and Quality Control are extremely important aspects of any engineering or construction project; without them, successful completion of the project cannot be imagined. In fact, the two are integral parts of virtually any project one can think of. Proper implementation of Quality Assurance and Quality Control not only results in a sound project but also leads to greater economy by means of optimisation. It is hence important to understand the meaning and definitions of the terms Quality Assurance and Quality Control. QA/QC certificates are issued by NACEL, Government of India; NDT certificates are issued by the American Society for Nondestructive Testing (ASNT).
Duties and Responsibilities of a QA/QC Engineer:
- Codes & QC documentation
- QC Reporting
- Quality review meetings
- Quality audits
- Training QC personnel on quality matters
- Improving the existing QA framework
- Calibration and maintenance of testing and measuring equipment
- Quantity survey
- Site safety
- Non Destructive Tests(NDT)
- QC laboratory and Field tests
- Project Quality Plan (PQP)
- Quality Assurance Plan (QAP)
- Inspection Test Plans (ITP)
- Job Procedures (JP)
- Pile load testing
- Concrete core tests
- Permeability tests
- Strength tests
- Corrosion assessment
- Carbonation tests
- Pile integrity tests
Civil engineering deals with planning, designing, constructing, maintaining, and operating infrastructure, while civil quantity surveying deals with estimation (quick methods and estimation for a complete project), standard material consumption for various items, modes of measurement, rate analysis, data used by site engineers, etc. In short, quantity surveying provides expert advice on construction costs.
A Quantity Surveyor (QS) is a professional working within the construction industry concerned with construction costs and contracts. Services provided by a Quantity Surveyor may include:
- Cost planning and commercial management throughout the entire life cycle of the project from inception to post-completion
- Value engineering
- Risk Management and calculation
- Procurement advice and assistance during the tendering procedures
- Tender analysis and agreement of the Contract Sum
- Commercial Management and Contract Administration
- Assistance in dispute resolution
- Asset Capitalisation
- Interim valuations and payment assessment
- Cost Management process
- Assessing the additional costs of design variations
- Assessing the tenders
- Estimating the cost of variations
- Preparing valuation statements for interim certificates
- Preparing regular cost reports.
- Completing the final account.
Civil NDT includes two sections
- Non Destructive Testing (NDT) Section As Per (IS Codes and ASTM Codes)
- QC Section
Non Destructive Testing (NDT) Section As Per (IS CODES and ASTM CODES)
- Rebound hammer test
- Ultrasonic pulse velocity testing of concrete
- Penetration methods
- Corrosion assessment and thickness of concrete bridges
- NDT for detection of cracks and voids in concrete bridges
- Radar method
- For steel bridges (Ultrasonic Testing, Magnetic Particle and Liquid Penetrant Testing, and Radiographic Testing)
- NDT on masonry bridges
- Infrared thermography test
- Pile Integrity Test
- Introduction to quality control and civil engineering
- Planning, assuring and process quality
- Construction materials and material control
- Destructive testing
- Mechanical inspection
- Safety requirements for testing
- Results and interpretation
- Tests on soils
- Tests on bitumen
- Tests on cements
- Tests on aggregates
- Tests on fresh concrete
- Moisture content
- Liquid limit
- Plastic limit
- Specific gravity
- Particle size analysis
- Sedimentation analysis
- Consolidation test
- In situ bulk density and dry density
- Plate load test
- Standard penetration test.
- Setting time
- Compressive strength
- Storage of cement
- In-situ Sampling & Preparation
- Flakiness & Elongation Index
- Specific Gravity
- Water Absorption
- Bulk Density
- Aggregate Impact Value
- Aggregate crushing value
- Aggregate abrasion value
- Silt & Clay content
- Compressive strength
- Flexural strength
- Compaction factor
- Slump test
- Flow test
- Non destructive tests
- Carbonation Test
- Pile Load test
- Flash point
- Fire point
- Softening point
- Ductility
- Silt & Clay content
Non Destructive Test:
- Surface Hardness Test
- Penetration resistance tests
- Pull out test
- Pull off test
- Maturity method
- Ultrasonic pulse velocity test
- Resonant frequency test
- Infra red thermography
- Corrosion detection & Analysis
- Permeability tests
- Reinforcement detection & verification
- Material calculation
- Tendering & Billing
- Method statements
- Duties and responsibilities of a quantity surveyor
- Use of PPE (personal protective equipment)
Electrical Design & Drafting Engineering
Electrical Design Engineering is a field of engineering that deals with the study and application of electricity, electronics, and electromagnetism. Because formal study programs rarely cover it in practical depth, students often lack hands-on skills in electrical design systems. These days there is a huge demand for Electrical Design Engineers in numerous areas, such as the design, manufacture and control of power and distribution systems, safety system design, substation design, commercial and domestic interior lighting, low-current system design, selection of shielding devices, CCTV system design, fire alarm system design, and sound system design. All courses on Electrical Design are designed by the Engineering Design & Power Training Institute with the help of advanced technology and techniques.
- Introduction to EPC industry
- Introduction to role of Electrical Engineering in EPC/Plant establishment.
- Wiring and cable management systems
- Lighting Management System
- Introduction to common terminology used across industry
- Overview of international codes & standard used in industry
- Introduction to P & ID & symbols.
- Calculation of load & preparation of load schedule (see the sketch after this list)
- Selection of motor with respect to connected machine & usage.
- Estimation of power supply capacity & stand by capacity
- Power distribution system
- House wiring concept (designing)
- Apartment & Industrial designing
- Revit, AutoCAD, and Relux software training
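To make the load-schedule topic above concrete, here is a hypothetical, simplified calculation of maximum demand and design current for a small facility. The feeder loads, demand factors, voltage and power factor are invented for illustration and are not taken from any standard:

```python
# A minimal load-schedule sketch: connected load x demand factor per feeder,
# then apparent power and design current. All numbers below are assumptions.

LOADS_KW = {              # connected load per feeder, kW (assumed)
    "lighting": 12.0,
    "hvac": 45.0,
    "process motors": 110.0,
    "sockets/small power": 18.0,
}
DEMAND_FACTOR = {         # fraction of connected load expected to run together
    "lighting": 0.9,
    "hvac": 0.8,
    "process motors": 0.75,
    "sockets/small power": 0.4,
}
VOLTAGE = 415.0           # line-to-line voltage, V (three-phase, assumed)
POWER_FACTOR = 0.85       # assumed overall power factor

demand_kw = sum(LOADS_KW[k] * DEMAND_FACTOR[k] for k in LOADS_KW)
kva = demand_kw / POWER_FACTOR
current_a = kva * 1000.0 / (3 ** 0.5 * VOLTAGE)  # I = S / (sqrt(3) * V_LL)

print(f"Maximum demand : {demand_kw:7.1f} kW")
print(f"Apparent power : {kva:7.1f} kVA")
print(f"Design current : {current_a:7.1f} A")
```

A real schedule follows the project's design basis and applicable code, but the arithmetic pattern is the same.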
Instrumentation can be defined as the art and science of measurement and management of process variables within a fabrication or manufacturing area. Instrumentation plays a particularly important role in chemical process plants, where instruments are used to measure and monitor diverse operations. The control system is considered the main part of instrumentation, and instrumentation design concerns itself with equipment specifications, layouts, wiring schematics, instrument indexes and much else. All these major activities related to the process are handled by a professional instrumentation design engineer. The Instrumentation Design Engineering course is offered by professionals to improve students' skills by providing practical knowledge of the selection, installation and commissioning of industrial instrumentation and control valves, together with their specifications, layouts, wiring schematics and instrument index.
- Introduction to industry & EPC contractor.
- Role of Instrumentation Engineer in various types of Industry.
- General requirements from Clients & Supplier.
- Relevant Codes & Standards.
- Basic Design requirement based on the type of plant e.g. Chemical, Petrochemical Industrial, power plant etc.
- Designing & selection of various instruments
– Pressure Instruments (Gauge, Indicator, Transmitter)
– Temperature Instruments (Gauge, Indicator, Transmitter)
– Flow Instruments (Gauge, Indicator, Transmitter)
– Control Valves
– Shutoff Valves
– DCS for whole plant
- Process Data sheets and Specifications, Instrument Data Sheets
- Instrument Wiring Layout, Logic Diagrams
- Loop Drawing, Loop Wiring Diagram, JB Layout
- Cable Schedule, Cable Tray Layout
- Hook-Up Drawing
- Introduction to PLC hardware
- Role of PLC in automation
- Types of Inputs & outputs
- Architectural Evolution of PLC
- Introduction to the field devices attached to PLC
- Various ranges available in PLCs
- Source Sink Concept in PLC
- PLC Communication etc
- Uses of SCADA software
- Different packages available with I/O structure
- Features of SCADA software
- Introduction to AC/DC Drives
- Selection criteria of the drives for particular application
- Preparation of cable layout diagram
- LT/HT Panel design
- Wireless Technology
QA (Quality Assurance)/QC (Quality Control) is the process of reviewing the quality of all factors involved in electrical production and instrumentation. In every industry, QA/QC has a major role: developing and maintaining the standards used to perform inspections and tests, overseeing all testing methods, and maintaining high standards of quality for all processes. QA/QC engineers review the quality of all materials at site, ensure compliance with all project specifications, and collaborate with other departments on material procurement to maintain material quality. They supervise the effective implementation of all test and inspection schedules, ensure adherence to all procedures, and coordinate with various teams to perform quality audits on processes. They also assist employees to ensure knowledge of all quality standards, ensure compliance with the quality manual and procedures, and collaborate with contractors and suppliers to maintain the quality of all systems.
Duties and Responsibilities of a QA/QC Engineer:
- Preparation of Lighting & Power outlet layouts covering light points, A/C points, Telephone points, speaker & Music System, CCTV points, Computer (LAN) points, fan fixtures etc.
- Preparation of Electrical Load Calculation to arrive at the ratings / requirements of Capital items such as Transformer & DG set.
- Preparation of detailed Estimate for Budgetary purposes.
- Preparation of Substation / Electrical Room / DG set room drawings as well as Electrical Equipment layout drawings.
- Preparation of Electrical Schematic Diagrams, Technical specifications for materials to be purchased, Schedule of Quantities and Tender Documents.
- Invitation of quotations and assistance in finalization of Electrical Contract as well as material to be purchased by the owner.
- Scrutinizing and approval of contractors working drawings.
- Supervision for installation, testing and commissioning.
- Duties & Role of an Electrical QC Engineer
- Important Indian Standards & International Standards (IEC, DEWA, NEC)
- MV Installation, Transformer, Cables, Generator, motors
- Switch gear and Busbar selection
- Fault level calculation and Earthing design
- Lightning Protection
- Preparation of drawings (House, industrial)
- Substation room designing
- Quality control orders
- Testing and Inspection of CT, PT, Panel boards ,earthing conductors etc.
- Project and BOQ preparation using software: AutoCAD, Relux, Ecodial
HVAC (Heating, Ventilation and Air-Conditioning) is a major subdiscipline of mechanical engineering. The goal of HVAC design is to balance indoor and vehicular environmental comfort with other factors such as installation cost, ease of maintenance and energy efficiency, while providing thermal comfort and acceptable indoor air quality.
The diploma certification program specializes in full analytical load calculations, ventilation system design, duct design, air distribution system design, sizing of pipes and pumps, estimation of requirements, and selection of all HVAC equipment as per ASHRAE and ISHRAE standards. We offer high-quality training in design and drafting along with cost-effective and prompt HVAC design system services. The training provided by the academy is fruitful as it includes on-site training and practical knowledge.
HVAC is a major subdiscipline of mechanical engineering that deals with the technology of indoor air quality management of buildings and human comfort. HVAC system design deals with heating, ventilation system design, air conditioning, refrigeration system design and equipment selection.
- Basics of design; heat load calculation, manual & software (see the sketch after this list)
- Machine parts, Coil selection, chiller parts
- Pumps schematic diagram and estimation
- Cooling tower selection, pump selection
- Low side material selection
- Drafting Project 1
- Drafting Project 2
- Quantity Survey
- HAP (hourly analysis program)
- Duct Sizes
- Pipe Sizes
- Mechanical CAD Training
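As a taste of the manual heat-load calculation mentioned in the first item above, here is a toy sensible-load estimate. All areas, U-values, airflow and internal gains are assumed numbers; a real design follows ASHRAE/ISHRAE procedures and also includes latent and solar loads:

```python
# A toy sensible heat-load estimate (conduction + ventilation + internal
# gains). Every number below is an assumption made up for illustration.

DELTA_T = 12.0  # outdoor-indoor design temperature difference, K (assumed)

SURFACES = [             # (name, area m^2, U-value W/(m^2 K)) - assumed
    ("roof",  120.0, 0.5),
    ("walls", 180.0, 1.2),
    ("glass",  30.0, 3.0),
]

conduction_w = sum(a * u * DELTA_T for _, a, u in SURFACES)  # Q = U*A*dT

ventilation_ls = 150.0                          # outside air, litres/second
ventilation_w = 1.2 * ventilation_ls * DELTA_T  # ~rho*cp of air per (L/s K)

internal_w = 10 * 100.0 + 2000.0                # 10 occupants + equipment

total_w = conduction_w + ventilation_w + internal_w
print(f"Conduction : {conduction_w:8.0f} W")
print(f"Ventilation: {ventilation_w:8.0f} W")
print(f"Internal   : {internal_w:8.0f} W")
print(f"Total      : {total_w:8.0f} W (~{total_w / 3517:.1f} tons refrigeration)")
```

Tools such as HAP automate exactly this kind of accounting, hour by hour, with far more detailed inputs.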
Graduate in any discipline.
COURSE DURATION & DETAILS
Mechanical, electrical, and plumbing services (MEP) are a significant component of the construction supply chain. MEP design is critical for design decision-making, accurate documentation, performance and cost-estimating, construction planning, managing and operating the resulting facility. Our mechanical and electrical engineering, HVAC, and plumbing design services team members are proven leaders in innovation and conscious of the need to balance functionality, cost, energy conservation and integration with aesthetic architectural elements. We proactively meet the challenges of achieving occupant comfort, meeting code-mandated operating parameters and concealing large equipment both visually and audibly.
MEP Engineer is a single-level professional classification responsible for planning and design in the areas of mechanical, electrical, and plumbing (MEP) systems, including developing policies, standards, inspection procedures, and evaluations for various projects. The mechanical aspect focuses on heating, cooling and ventilation (HVAC); the electrical aspect focuses on providing power to all outlets and appliances; and the plumbing aspect focuses on the delivery of water and the draining of waste water.
Access to water supply and basic sanitation systems is an essential human need. In order for our society to function efficiently and in a safe environment, it is important that these basic needs are met, and plumbers and professionals working in the plumbing industry help us achieve this goal. A modern-day plumber's job is wide in scope: plumbers lay down pipes for the distribution of water and disposal of waste water; design, repair, and maintain the drain, waste and vent system; install fixtures; and also work with gas pipelines.
- Pump selection
- Head calculation
- Rain Water Harvesting
- Drainage System
- Soak Pit Calculation
- Water supply fixture unit
- Water calculation
- Electrical designing
- Plumbing systems
- Fire fighting
The Drilling Technology course gives thorough knowledge of the exploration process. It covers major topics such as rig components, drilling fluids and rotary drilling practices, and reviews well planning, rig systems, BOPs, well control and well completion. It gives students a solid grounding in drilling.
A drilling rig is a machine that creates holes in the earth's sub-surface. Drilling rigs can be massive structures housing equipment used to drill water wells, oil wells or natural gas extraction wells, or they can be small enough to be moved manually by one person, in which case they are called augers. Drilling rigs can sample sub-surface mineral deposits, test rock, soil and groundwater physical properties, and can be used to install sub-surface fabrications, such as underground utilities, instrumentation, tunnels or wells. Drilling rigs can be mobile equipment mounted on trucks, tracks or trailers, or more permanent land- or marine-based structures. The term “rig” therefore generally refers to the complex equipment that is used to penetrate the surface of the Earth's crust.
- Basic of drilling
- Mud Circulating System
- Drilling fluids
- Drilling operations
- Mud logging
- Well testing
- Well completion
- Well Head Maintenance
- Basics of rig safety
The American Welding Society (AWS) was founded in 1919, as a nonprofit organization with a goal to advance the science, technology and application of welding and related joining methods. From factory floor to high-rise construction, from military weaponry to home products, from pressure vessels to pipelines, AWS continues to lead the way in supporting welding education and technology.
AWS is known internationally for having documented best practices and reliable ways to test the skill sets of several job profiles in the engineering industry. When AWS tests and certifies a technician or an engineer, the international engineering community accepts that this person is suitable for a particular job profile. An AWS certification therefore helps widen the range of jobs and assignments a candidate can take up.
The Certification Scheme for Personnel (CSWIP) is a comprehensive scheme which provides for the examination and certification of individuals seeking to demonstrate their knowledge and/or competence in their field of operation. The scope of CSWIP includes all levels of Welding Inspectors, Welding Supervisors, Plant Inspectors, Welding Instructors, Underwater Inspectors and NDT personnel. CSWIP is managed by the Certification Management Board, which acts as the Governing Board for Certification, in keeping with the requirements of the industries served by the scheme. The Certification Management Board, in turn, appoints specialist Management Committees to oversee specific parts of the scheme. All CSWIP Boards and Committees comprise member representatives of relevant industrial and other interests. The current document covers the Certification of Visual Welding Inspectors (Level 1), Welding Inspectors (Level 2) and Senior Welding Inspectors (Level 3). There are two categories of Senior Welding Inspector: one with Radiographic Interpretation and one without.
Pipe Design Engineering is counted as an important department in diverse streams of engineering such as Mechanical, Chemical, Petroleum and Production Engineering. Pipes and their related equipment are responsible for about 25% of total investment in varied sectors including chemical and pharmaceutical plants, power plants, LPG/CNG plants, distribution systems, and oil and petrochemical plants. That is why piping design engineering plays an imperative role in plant design and construction.
Professionals such as mechanical engineers, system designers, and production & manufacturing engineers with a grounding in piping design engineering are equipped to produce effective and competent designs and to prepare accurate equipment specifications, route layouts, etc., in any EPC company.
A large number of global challenges, including protection against corrosion, safeguarding pipelines, pipe coating, pipe bending, pipe welding, leak detection in long pipelines and offshore services, cleaning of pipelines, and efficient use of pipe, create the requirement for specialized piping design engineers.
- Basics of Oil & Gas/Power Industry & introduction to commonly used terminology
- Role of a Piping Engineer in various fields of industry
- Introduction to various piping components (Pipe, Fittings, Flanges, Valves, Gaskets, Strainer, Steam Trap, etc.)
- Codes & Standards
- Pipe Wall Thickness Calculation (see the sketch after this list)
- Branch reinforcement calculation
- Pipe Hydraulics & Line Sizing
- Introduction to equipment used in Process/Power Plants & study of equipment datasheets/GADs
- PMS & VMS
- Introduction to P&ID symbols
- PFD, P&ID
- Layout: design of piping for various equipment, equipment layout, pipe rack, nozzle orientation, steam tracing, steam piping, drip leg piping, underground piping, etc.
- Study of GAD isometric drawings
- Guidelines for preparation of as-built drawings
- Preparation of Isometric drawings and Bill of Materials
- Steam Tracing & Insulation
- Piping Supports
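As a concrete taste of the wall-thickness topic above, here is a minimal sketch following the familiar ASME B31.3 form t = PD/(2(SE + PY)) for straight pipe under internal pressure. The design conditions, allowances and factors below are invented for illustration; actual work must take values from the current code edition:

```python
# Straight-pipe wall-thickness sketch, t = P*D / (2*(S*E + P*Y)).
# All design inputs below are assumed values, not from any real project.

P = 2.0        # internal design pressure, MPa (assumed)
D = 168.3      # pipe outside diameter, mm (6 in NPS)
S = 137.9      # allowable stress at design temperature, MPa (assumed)
E = 1.0        # weld joint quality factor (seamless pipe)
Y = 0.4        # wall-thickness coefficient for the temperature range

t_pressure = P * D / (2.0 * (S * E + P * Y))  # pressure design thickness
corrosion_allowance = 3.0                      # mm (assumed)
t_required = t_pressure + corrosion_allowance
t_nominal = t_required / (1.0 - 0.125)         # cover 12.5% mill tolerance

print(f"Pressure design thickness: {t_pressure:5.2f} mm")
print(f"Required thickness       : {t_required:5.2f} mm")
print(f"Minimum nominal to order : {t_nominal:5.2f} mm")
```

The next commercial schedule at or above the nominal figure would then be selected from the pipe tables.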
When did it all start?
There are various theories on the origin of Valentine's Day, but the most popular dates back to the time of the Roman Empire during the reign of Claudius II, 270 A.D. Claudius didn't want men to marry during wartime because he believed single men made better soldiers. Bishop Valentine went against his wishes and performed secret wedding ceremonies. For this, Valentine was jailed and then executed by order of the Emperor on Feb. 14. While in jail, he wrote a love note to the jailor's daughter, signing it, "From your Valentine." Sound familiar?
More Valentine's Day-related history
- The ancient Romans celebrated the Feast of Lupercalia on Feb. 14 in honor of Juno, the queen of the Roman gods and goddesses. Juno was also the goddess of women and marriage.
- Many believe the X symbol became synonymous with the kiss in medieval times. People who couldn't write their names signed in front of a witness with an X. The X was then kissed to show their sincerity.
- Girls of medieval times ate bizarre foods on St. Valentine's Day to make them dream of their future spouse.
- In the Middle Ages, young men and women drew names from a bowl to see who would be their Valentine. They would wear this name pinned onto their sleeves for one week for everyone to see. This was the origin of the expression "to wear your heart on your sleeve."
- In 1537, England's King Henry VIII officially declared Feb. 14 the holiday of St. Valentine's Day.
The decade of the 1870s marked a radical break with the earlier political economy; this break was called the marginalist revolution, promulgated by three economists: the Englishman William Stanley Jevons, the Austrian Carl Menger, and the Frenchman Léon Walras. Their great contribution consisted in replacing the labor theory of value with a theory of value based on marginal utility. In the long run, it has been shown that the concept of the marginal unit, or final unit, is far more important than the concept of utility itself. This contribution of the notion of marginality was what marked the rupture between classical theory and the modern economy. The classical political economists considered that the main economic problem was to predict the effects that changes in the quantities of capital and labor would have on the rate of growth of national production. The marginalist approach, however, focused on understanding the conditions that determine the allocation of resources (capital and labor) among different activities so as to achieve optimal results, i.e., maximizing the utility or satisfaction of consumers.
This new perspective is characterized, in the first place, by its initial theme: reflections on the diminishing marginal utility of consumer goods. But the authors immediately discovered that the principles of this particular domain are easily generalizable. Hence the main theme: marginalism applies procedures of maximization to the different economic variables by reasoning at the margin, that is to say, on the last unit of the good consumed, produced, exchanged or retained. If one were to summarize the marginalist reasoning in a sentence, we would say that the optimal use of a given resource is obtained when there is no longer any net gain to be had from shifting a unit of that resource from one use to another. The optimum is born of the equalization at the margin of the utilities of the resources in their different possible uses. This is a universal principle, from which a theory of the behavior of the individual agents of the economy is built, based on the rationality of economic decisions.
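A small numerical sketch (ours, not the marginalists') makes the rule tangible: spend a fixed budget one unit at a time, always on the good whose next unit yields the greatest marginal utility. The preferences below are invented for illustration:

```python
# Greedy allocation under diminishing marginal utility. With u(x) = w*log(1+x),
# the marginal utility of the next unit is roughly w / (1 + x).

WEIGHTS = {"bread": 3.0, "wine": 2.0, "books": 1.0}  # invented preferences

def marginal_utility(good: str, units: int) -> float:
    return WEIGHTS[good] / (1 + units)

allocation = {g: 0 for g in WEIGHTS}
for _ in range(12):  # a budget of 12 identical-price units
    best = max(allocation, key=lambda g: marginal_utility(g, allocation[g]))
    allocation[best] += 1

print(allocation)  # -> {'bread': 6, 'wine': 4, 'books': 2}
print({g: round(marginal_utility(g, n), 2) for g, n in allocation.items()})
# The remaining marginal utilities are as nearly equal as the discrete units
# allow: no unit can be moved to another use for a net gain.
```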
During the last three decades of the nineteenth century the English, Austrian and French marginalists moved away from each other, creating three new schools of thought. The Austrian school focused on analyzing the importance of the concept of utility as a determinant of the value of goods, attacking the thinking of the classical economists, which, for them, was outdated. A leading second-generation Austrian economist, Eugen von Böhm-Bawerk, applied the new ideas to the determination of interest rates, forever marking the theory of capital. The English school, led by Alfred Marshall, tried to reconcile the new ideas with the work of the classical economists. According to Marshall, the classical authors had concentrated on analyzing supply; the theory of marginal utility focused more on demand; but prices are determined by the interaction of supply and demand, just as scissors cut with their two blades. Marshall, seeking practical utility, applied his partial-equilibrium analysis to certain markets and industries. Walras, the leading French marginalist, deepened this analysis by studying the economic system in mathematical terms.
In addition, since it is a question of maximizing objective functions, one should not be surprised at the use of mathematics, admitted and claimed by most of these authors, although exceptions can be cited (among them the so-called Austrian school). In summary, the three essential characteristics of marginalism are: maximization as a reference for behavior, calculation at the margin as a principle of rationality, and mathematics as a technique of analysis. Marginalism thus has the ambition of both rigor and generality. But this ambition would not be achieved without changing the issues raised by economic analysis, and it may lead to reductionism. We have seen that classical theory, constructed from the opposition between labor and the niggardliness of nature in a context of competition, emphasizes the problems of economic development and distribution and was therefore fundamentally macroeconomic and dynamic. Marginalist thinking, dedicated to the search for the best possible use of given resources, tends to consider as fixed what the classics considered variable and to make economics essentially microeconomic and static.
For each product there is a demand function that shows the quantities demanded by consumers at the different possible prices of that good, given the prices of other goods and consumers' incomes. Each product also has a supply function that shows the quantities manufacturers are willing to offer depending on production costs, the prices of productive services and the level of technological knowledge. In the market there will be an equilibrium point for each product, similar to the balance of forces in classical mechanics. It is not difficult to analyze the equilibrium conditions to be met, which depend, in part, on equilibrium in the other markets. In an economy with countless markets, general equilibrium requires the simultaneous determination of the partial equilibria occurring in each one. Walras's attempt to describe in general terms the functioning of the economy led the economic historian Joseph Schumpeter to describe Walras's work as the 'Magna Carta' of economics. The Walrasian economy is rather abstract, but it provides an adequate framework of analysis for a global theory of the economic system.
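Walras also imagined a price-adjustment process ("tatonnement") in which each price rises when demand exceeds supply and falls otherwise, until excess demand vanishes in every market at once. The toy sketch below is our own illustration, with made-up linear coefficients linking two markets:

```python
# Tiny tatonnement: adjust each price in proportion to its excess demand
# until both markets clear. All coefficients are invented for illustration.

def excess_demand(prices):
    p1, p2 = prices
    d1, s1 = 100 - 8 * p1 + 2 * p2, 10 + 4 * p1  # market 1 demand, supply
    d2, s2 = 80 - 6 * p2 + 1 * p1, 5 + 5 * p2    # market 2 demand, supply
    return d1 - s1, d2 - s2

prices = [1.0, 1.0]
STEP = 0.01  # raise a price when demand exceeds supply, lower it otherwise
for _ in range(5000):
    z = excess_demand(prices)
    prices = [max(p + STEP * zi, 0.0) for p, zi in zip(prices, z)]

z = excess_demand(prices)
print(f"p1 = {prices[0]:.3f}, p2 = {prices[1]:.3f}")
print(f"excess demands ~ ({z[0]:.4f}, {z[1]:.4f})")  # both near zero
```

The cross-price terms are what make this "general" rather than partial equilibrium: neither price can settle without the other.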
Allium cepa, the onion (also called bulb onion or common onion) and the shallot (A. cepa var. aggregatum), is a monocot bulbous perennial (often biennial). It is the most widely cultivated species of the genus Allium, which includes other important species such as garlic (A. sativum) and leeks (A. ampeloprasum). The name "wild onion" is applied to various Alliums.
Allium species are among the oldest cultivated crops. Diverse representations in Egyptian artifacts dating to 2700 B.C suggest that onions had been cultivated and in wide use by that time (Fritsch and Friesen 2002). The present species, A. cepa, is known only from cultivation, but appears to have been domesticated from wild ancestors in the Central Asian mountains (Brewster 1994).
Numerous cultivars have been developed for size, form, color, storability, resistance to pests and pathogens, and climatic adaptations. Cultivars are divided into the Common Onion Group (A. cepa var. cepa), which contains most of the economically important varieties (including cultivars grown for green or salad onions) and the Aggregatum Group, which includes shallots and potato onions, and typically produce clusters of small bulbs (Brewster 1994).
Onions are widely used in cooking in nearly all regions of the world, and have been used in diverse cultures and rituals throughout history. (See Wikipedia article in full entry; additional details in Block 2010 and Brewster 1994.)
Onions produce various sulfur-containing compounds (such as cysteine sulfoxide), probably for defense against fungi and insects, that, together with their breakdown products, produce their distinctive odor, flavor, and lachrymatory (tear-stimulating) properties (Brewster 1994). Throughout history, onions have been used in folk medicine for purposes ranging from treating wounds and stomach ailments to treating infertility (Wikipedia 2011). Scientific and pharmacological studies since World War II have found evidence that onions or their derived compounds have antimicrobial and antifungal properties, and may also be of benefit in preventing or treating heart disease and atherosclerosis, diabetes, cancer, and possibly asthma (Brewster 1994, Griffiths et al. 2002).
Despite their benefits to humans, onions are toxic to cattle, cats, and dogs, and, to a lesser extent, sheep and goats (Cope 2011, Merck 2011). Consumption by these animals of large amounts of onion may lead to anemia and impaired oxygen transport.
Global production of onions in 2008 was second only to tomatoes among horticultural crops: more than 73 million metric tons harvested from 3.6 million hectares. China alone produced more than 20 million metric tons; other leading producers were India, Australia, the United States, Pakistan, and Turkey (FAOSTAT 2011). A. cepa has escaped cultivation or naturalized in much of eastern North America as well as California and the Pacific Northwest (USDA PLANTS 2011), but generally remains localized. It is classified as a noxious weed in Arkansas (along with all Alliums).
Onions have large cells visible under low magnification, so onion tissue is often used in high school science laboratories for learning about microscope use and cell structure, as shown in this lesson from Rice University (http://teachertech.rice.edu/Participants/dawsonm/cells/microlab4.htm) and this video of onion cells from YouTube: http://www.youtube.com/watch?v=Tdch3mxQ4oU.
Sealants, also referred to as dental sealants, consist of a plastic material that is placed on the chewing (occlusal) surface of the permanent back teeth — the molars and premolars — to help protect them from bacteria and acids that contribute to tooth decay. The plastic resin in sealants is placed by a dental hygienist into the depressions and grooves of the chewing surfaces of back teeth and a light is utilized to cure it to the enamel which acts as a barrier, protecting the enamel surface of the teeth from plaque and acids.
Thorough brushing and flossing helps remove food particles and plaque from the smooth surfaces of teeth, but toothbrushes can't reach all the way into the depressions and grooves to extract all food and plaque. Plaque accumulates in these areas, and the acid from bacteria in the plaque attacks the enamel, causing cavities to develop. While fluoride helps prevent decay and helps protect all the surfaces of the teeth, dental sealants add extra protection for the grooved and pitted areas. Sealants can help protect these vulnerable areas by "sealing out" plaque and food debris from the occlusal surfaces of the teeth.
Placing dental sealants is usually painless and doesn't require drilling or numbing medications.
First, the dental hygienist will polish the surface of the tooth with a pumice material to remove plaque and food debris from the pit and fissure surfaces of the teeth selected for sealant placement.
Next, the hygienist will isolate and dry the tooth so that saliva doesn't cover the pit and fissure surfaces. Then the hygienist will etch the surface of the tooth in the pit and fissure areas, rinse off the etching material and dry the tooth.
The hygienist will apply the dental sealant material to the surface of the tooth with a brush; a curing light is then used for about 30 seconds to bond the sealant to the tooth surface.
Finally, the dental hygienist and dentist will evaluate the dental sealant and check its occlusion. Once the dental sealant has hardened it becomes a hard plastic coating, and you can chew on the tooth again.
Most of the time, the dental sealant is applied soon after the tooth has erupted through the gums, normally between six and twelve years of age. Sealants can be used for older children and even adults whose teeth have deep grooves and pits in them. Your dentist can help you decide when the right time is to undergo the treatment.
As long as the sealant remains intact, the tooth surface will be protected from decay. Sealants hold up well under the force of normal chewing and usually last several years before a reapplication of the sealant is needed. During your regular dental visits, your dentist will check the condition of the sealants and reapply them when necessary. |
Lemmings: key actors in the tundra food web
Arctic lemmings include true lemmings, of the genus Lemmus, and collared lemmings, of the genus Dicrostonyx. Although both have a circumpolar – though not identical – distribution, the two genera differ in many respects. Dicrostonyx is much more resistant to extreme low temperatures than Lemmus. Consequently, its distribution extends farther north: collared lemmings are found on the northernmost islands of the Canadian Arctic and in northern Greenland, whereas the range of true lemmings reaches down to the boreal zone and does not include Greenland or the northern Canadian islands. Lemmus lemmings eat mosses, supplemented by grasses and sedges. Dicrostonyx lemmings prefer forbs and shrubs like avens and willows. This distinction is reflected in habitat use. In the Arctic tundra, Lemmus is usually found on wet lowlands or moist patches. Dicrostonyx lives almost exclusively on dry and sandy hills and ridges.
Lemmings are well-known for their violent population fluctuations, the impacts of which are reflected in various ways. High numbers of lemmings have a strong impact on the vegetation and affect the nutrient cycle. Shallow permafrost slows down soil processes, but when lemmings at peak densities graze away much of the vegetation cover, the summer thaw can penetrate deeper into the soil, making more nutrients available. Lemmings also have a strong impact on the biomass and species composition of the vegetation.
The numbers of both mammalian and avian predators depend on the lemming fluctuations. In low lemming years, resident mammalian predators, including the arctic fox, ermine, and least weasel, hardly breed. During peak years, arctic foxes can have litters of up to 20 kits, and ermine and weasel numbers increase rapidly. Many species of birds of prey move nomadically, or briefly visit their breeding grounds in the Arctic in the search of lemming peaks. Snowy owls and jaegers (or skuas) are particularly well-known for their dependence on lemmings, breeding only in years when lemmings are abundant.
The causes of these population cycles have long been a topic of research and often heated discussion. Food, predation, and intrinsic physiological factors are possible explanations, each of which has some scientific support. It may well be that in different areas different factors play the critical role. In the Canadian Arctic, for example, the long-term low density of Dicrostonyx is explained by heavy predation. In Fennoscandia, on the other hand, lemming crashes in alpine areas seem to be caused by shortage of food, but when lemmings migrate into the boreal forests, they are regulated by predation.
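The predation explanation is often illustrated with the classic Lotka-Volterra predator-prey equations, which reproduce exactly this kind of boom-and-bust cycling. The sketch below is our own illustration with invented parameters, not a model fitted to lemming data:

```python
# Minimal Lotka-Volterra simulation (forward Euler). All parameters and
# starting densities are arbitrary round numbers chosen for illustration.

def simulate(years=30.0, dt=0.001):
    lemmings, predators = 40.0, 9.0  # arbitrary starting densities
    r, a = 2.0, 0.1                  # prey growth rate, predation rate
    b, m = 0.02, 1.0                 # predator conversion, predator death
    t, history = 0.0, []
    for _ in range(int(years / dt)):
        dl = (r * lemmings - a * lemmings * predators) * dt
        dp = (b * lemmings * predators - m * predators) * dt
        lemmings, predators, t = lemmings + dl, predators + dp, t + dt
        history.append((t, lemmings, predators))
    return history

for t, lem, pred in simulate()[::4000]:  # coarse sample of the cycles
    print(f"year {t:5.1f}: lemmings {lem:7.1f}, predators {pred:6.1f}")
```

Prey peaks are followed, with a lag, by predator peaks that then crash the prey, much as fox and owl numbers track the lemming highs described above.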
In Northern Fennoscandia, the famous migrations of the Norwegian lemming (Lemmus lemmus) can temporarily extend its range some 200 kilometers into the boreal forest, though these migrations occur only about three times a century. Norwegian lemmings have both spring and fall movements. Spring movements are caused by snow melt and last only two or three weeks, whereas fall migrations are density-dependent and may last two to three months. Such remarkable movements in the fall have not been reported for other lemming species.
Heikki Henttonen, Vantaa Research Centre, Finnish Forest Research Institute, Finland. From the CAFF publication, Arctic Flora and Fauna, www.caff.is |
The Alvarez hypothesis is the theory that the mass extinction of the dinosaurs and many other living things was caused by the impact of a large asteroid on the Earth sixty-five million years ago, called the Cretaceous-Tertiary extinction event. Evidence indicates that the asteroid fell in the Yucatán Peninsula, Mexico. The hypothesis is named after the father-and-son team of scientists Luis and Walter Alvarez, who first suggested it in 1980.
In 1980, a team of researchers led by Nobel prize-winning physicist Luis Alvarez, his son geologist Walter Alvarez and chemists Frank Asaro and Helen Michels discovered that sedimentary layers found all over the world at the Cretaceous–Tertiary boundary contain a concentration of iridium hundreds of times greater than normal. Iridium is extremely rare in the earth's crust because it is very dense, and therefore most of it sank into the earth's core while the earth was still molten. The Alvarez team suggested that an asteroid struck the earth at the time of the K–T boundary. There were other earlier speculations on the possibility of an impact event, but no evidence had been uncovered at that time.
The Alvarez impact theory is supported by the fact that chondritic meteorites and asteroids contain a much higher iridium concentration than the earth's crust. The isotopic ratio of iridium in asteroids is similar to that of the K–T boundary layer but significantly different from the ratio in the earth's crust. Chromium isotopic anomalies found in Cretaceous–Tertiary boundary sediments are similar to those of an asteroid or a comet composed of carbonaceous chondrites. Shocked quartz granules, glass spherules and tektites, indicative of an impact event, are common in the K–T boundary, especially in deposits from around the Caribbean. All of these constituents are embedded in a layer of clay, which the Alvarez team interpreted as the debris spread all over the world by the impact. The location of the impact was unknown when the Alvarez team developed their theory, but later scientists discovered the Chicxulub Crater in the Yucatán Peninsula, now considered the likely impact site.
Using estimates of the total amount of iridium in the K–T layer, and assuming that the asteroid contained the normal percentage of iridium found in chondrites, the Alvarez team went on to calculate the size of the asteroid. The answer was about 10 kilometers (6 mi) in diameter, about the size of Manhattan. Such a large impact would have had approximately the force of 100 trillion tons of TNT, i.e. about 2 million times as great as the most powerful thermonuclear bomb ever tested.
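The arithmetic is easy to check. A back-of-envelope sketch (ours; the density and impact velocity are assumed round numbers) recovers the same order of magnitude:

```python
import math

# Kinetic energy of a ~10 km stony asteroid at a typical impact speed.
# Density and velocity are assumptions; the diameter is from the text.

DIAMETER_M = 10_000.0         # ~10 km, as estimated by the Alvarez team
DENSITY = 3000.0              # kg/m^3, roughly chondritic (assumed)
VELOCITY = 20_000.0           # m/s, a typical impact speed (assumed)
TNT_JOULES_PER_TON = 4.184e9  # energy content of one ton of TNT

volume = (4.0 / 3.0) * math.pi * (DIAMETER_M / 2.0) ** 3
mass = DENSITY * volume
energy_j = 0.5 * mass * VELOCITY ** 2

print(f"mass   ~ {mass:.2e} kg")
print(f"energy ~ {energy_j:.2e} J ~ {energy_j / TNT_JOULES_PER_TON:.1e} tons TNT")
# ~3e23 J, i.e. tens of trillions of tons of TNT - the same order of
# magnitude as the figure quoted in the text.
```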
The most obvious consequence of such an impact would be a vast dust cloud which would block sunlight and prevent photosynthesis for a few years. This would account for the extinction of plants and phytoplankton and of all organisms dependent on them (including predatory animals as well as herbivores). But small creatures whose food chains were based on detritus would have a reasonable chance of survival. It is estimated that sulfuric acid aerosols were injected into the stratosphere, leading to a 10–20% reduction of solar transmission normal for that period. It would have taken at least ten years for those aerosols to dissipate.
Global firestorms may have resulted as incendiary fragments from the blast fell back to Earth. Analyses of fluid inclusions in ancient amber suggest that the oxygen content of the atmosphere was very high (30–35%) during the late Cretaceous. This high O2 level would have supported intense combustion. The level of atmospheric O2 plummeted in the early Tertiary Period. If widespread fires occurred, they would have increased the CO2 content of the atmosphere and caused a temporary greenhouse effect once the dust cloud settled, and this would have exterminated the most vulnerable survivors of the "long winter".
The impact may also have produced acid rain, depending on what type of rock the asteroid struck. However, recent research suggests this effect was relatively minor. Chemical buffers would have limited the changes, and the survival of animals vulnerable to acid rain effects (such as frogs) indicate this was not a major contributor to extinction.
Impact theories can only explain very rapid extinctions, since the dust clouds and possible sulphuric aerosols would wash out of the atmosphere in a fairly short time — possibly under ten years.
Although further studies of the K–T layer consistently show the excess of iridium, the idea that the dinosaurs were exterminated by an asteroid remained a matter of controversy among geologists and paleontologists for more than a decade. |
Nuclear magnetic resonance spectroscopy of nucleic acids
Nucleic acid NMR is the use of nuclear magnetic resonance spectroscopy to obtain information about the structure and dynamics of nucleic acid molecules, such as DNA or RNA. It is useful for molecules of up to 100 nucleotides, and as of 2003, nearly half of all known RNA structures had been determined by NMR spectroscopy.
NMR has advantages over X-ray crystallography, which is the other method for high-resolution nucleic acid structure determination, in that the molecules are being observed in their natural solution state rather than in a crystal lattice that may affect the molecule's structural properties. It is also possible to investigate dynamics with NMR. This comes at the cost of slightly less accurate and detailed structures than crystallography.
Nucleic acid NMR uses techniques similar to those of protein NMR, but has several differences. Nucleic acids have a smaller percentage of hydrogen atoms, which are the atoms usually observed in NMR, and because nucleic acid double helices are stiff and roughly linear, they do not fold back on themselves to give "long-range" correlations. Nucleic acids also tend to have resonances distributed over a smaller range than proteins, making the spectra potentially more crowded and difficult to interpret.
Two-dimensional NMR methods are almost always used with nucleic acids. These include correlation spectroscopy (COSY) and total coherence transfer spectroscopy (TOCSY) to detect through-bond nuclear couplings, and nuclear Overhauser effect spectroscopy (NOESY) to detect couplings between nuclei that are close to each other in space. The types of NMR usually done with nucleic acids are 1H NMR, 13C NMR, 15N NMR, and 31P NMR. 19F NMR is also useful if nonnatural nucleotides such as 2'-fluoro-2'-deoxyadenosine are incorporated into the nucleic acid strand, as natural nucleic acids do not contain any fluorine atoms.
1H and 31P have near 100% natural abundance, while 13C and 15N have low natural abundances. For these latter two nuclei, there is the capability of isotopically enriching desired atoms within the molecules, either uniformly or in a site-specific manner. Nucleotides uniformly enriched in 13C and/or 15N can be obtained through biochemical methods, by performing polymerase chain reaction using dNTPs or NTPs derived from bacteria grown in an isotopically enriched environment. Site-specific isotope enrichment must be done through chemical synthesis of the labeled nucleoside phosphoramidite monomer and of the full strand; however these are difficult and expensive to synthesize.
Because nucleic acids have a relatively large number of protons which are solvent-exchangeable, nucleic acid NMR is generally not done in D2O solvent as is common with other types of NMR, because the deuterium in the solvent would replace the exchangeable protons and extinguish their signal. Instead, H2O is used as the solvent, and other methods are used to eliminate the strong solvent signal, such as saturating the solvent signal before the normal pulse sequence ("presaturation"), which works best at low temperature to prevent exchange of the saturated solvent protons with the nucleic acid protons; or exciting only resonances of interest ("selective excitation"), which has the additional, potentially undesired effect of distorting the peak amplitudes.
The exchangeable and non-exchangeable protons are usually assigned to their specific peaks as two independent groups. For exchangeable protons, which are for the most part the protons involved in base pairing, NOESY can be used to find through-space correlations between protons on neighboring bases, allowing an entire duplex molecule to be assigned through sequential walking. For non-exchangeable protons, many of which are on the sugar moiety of the nucleic acid, COSY and TOCSY are used to identify systems of coupled nuclei, while NOESY is again used to correlate the sugar to the base and each base to its neighboring base. For duplex DNA, the non-exchangeable H6/H8 protons on the base correlate to their counterparts on neighboring bases and to the H1' proton on the sugar, allowing sequential walking to be done. For RNA, the differences in chemical structure and helix geometry make this assignment more technically difficult, but still possible. The sequential walking methodology is not possible for non-double-helical nucleic acid structures, nor for the Z-DNA form, making assignment of resonances more difficult.
Parameters taken from the spectrum, mainly NOESY cross-peaks and coupling constants, can be used to determine local structural features such as glycosidic bond angles, dihedral angles (using the Karplus equation), and sugar pucker conformations. The presence or absence of imino proton resonances, or of coupling between 15N atoms across a hydrogen bond, indicates the presence or absence of basepairing. For large-scale structure, these local parameters must be supplemented with other structural assumptions or models, because errors add up as the double helix is traversed, and unlike with proteins, the double helix does not have a compact interior and does not fold back upon itself. However, long-range orientation information can be obtained through residual dipolar coupling experiments in a medium which imposes a weak alignment on the nucleic acid molecules.
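The Karplus equation mentioned above has a simple closed form relating a three-bond coupling constant to the intervening dihedral angle. Below is a minimal Python sketch; the A, B, and C coefficients are illustrative values from a common empirical parameterization for proton–proton couplings, not values taken from this article, and real analyses use coefficients fitted for the specific nuclei and bond type involved.

```python
import math

def karplus_coupling(phi_degrees, A=7.76, B=-1.10, C=1.40):
    """Estimate a three-bond J-coupling constant (Hz) from a dihedral angle
    via the Karplus equation: J(phi) = A*cos^2(phi) + B*cos(phi) + C.
    The default A, B, C are one common empirical parameterization for
    H-C-C-H couplings; other bond types need their own coefficients."""
    phi = math.radians(phi_degrees)
    return A * math.cos(phi) ** 2 + B * math.cos(phi) + C

# Couplings are large near 0 and 180 degrees and small near 90 degrees,
# which is what lets a measured J value constrain the dihedral angle.
for angle in (0, 60, 90, 120, 180):
    print(angle, round(karplus_coupling(angle), 2))
```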
NMR is also useful for investigating nonstandard geometries such as bent helices, non-Watson–Crick basepairing, and coaxial stacking. It has been especially useful in probing the structure of natural RNA oligonucleotides, which tend to adopt complex conformations such as stem-loops and pseudoknots. Interactions between RNA and metal ions can be probed by a number of methods, including observing changes in chemical shift upon ion binding, observing line broadening for paramagnetic ion species, and observing intermolecular NOE contacts for organometallic mimics of the metal ions. NMR is also useful for probing the binding of nucleic acid molecules to other molecules, such as proteins or drugs. This can be done by chemical-shift mapping, which is seeing which resonances are shifted upon binding of the other molecule, or by cross-saturation experiments where one of the binding molecules is selectively saturated and, if bound, the saturation transfers to the other molecule in the complex.
Dynamic properties such as duplex–single strand equilibria and binding rates of other molecules to duplexes can also be determined through their effect on the spin–lattice relaxation time T1, but these methods are insensitive to intermediate exchange rates of 10⁴–10⁸ s⁻¹, which must be investigated with other methods such as solid-state NMR. Dynamics of mechanical properties of a nucleic acid double helix such as bending and twisting can also be studied using NMR. Pulsed field gradient NMR experiments can be used to measure diffusion constants.
Early nucleic acid NMR studies were performed as early as 1971, and focused on using imino proton resonances to probe base pairing interactions, such as in tRNA. With the advent of oligonucleotide synthesis, the first NMR spectrum of double-helical DNA was published in 1982, and methods for sequential assignment of the resonances were published the following year.
Preschool art projects: when learning about animals, the children could make these paper-plate lions! They work on their cutting skills by cutting the mane, and on coloring by doing the rest. Then explain to them what lions do and eat, and what types of animals are similar. Maybe make these along with a book about a lion.
3.6.5 Regional Differences
Comparative regional studies of agricultural emissions per unit of GDP, per hectare, or per capita would show significant differences, but because of differences in local farming systems, climate, and management techniques, a useful comparison between regions is not possible. Standard methods for measuring and reporting agricultural emissions are being developed and will enable more accurate and useful comparisons to be made between alternative production systems in the future (Kroeze and Mosier, 1999).
Developing countries are slowly moving towards modern food and fibre production techniques. Economies in transition are also implementing modern production methods, encouraged by foreign investors, but many challenges remain. From a sustainability point of view, traditional methods may well be preferable.
3.6.6 Technological and Economic Potential
A summary of the technical and market potential for reducing greenhouse gas emissions from the agricultural industry is given in Table 3.36. If agricultural production per hectare in developing countries could be increased to meet the growing food and fibre demand through a greater uptake of new farming techniques, modern technologies, and improved management systems, then there would be less incentive for deforestation to provide more land.
Banana and citrus, two of the region's major fruit crops, are seriously threatened by virus diseases. Once a plant is infected with such diseases, it will remain infected until it dies, and so will its progeny. Insect and aphid vectors feeding on infected plants will go on to spread the disease further.
Some of the worst diseases of banana and citrus are transmitted through planting materials. New techniques are now available which can produce seedlings free of virus and other diseases. There are also new laboratory techniques for the diagnosis and indexing of virus diseases. However, problems still remain.
Indexing is expensive in relation to farm incomes in less industrialized countries. Such countries must develop systems of disease management which use laboratory tests as an occasional backup, not as a standard procedure for all orchards. Furthermore, new viruses are appearing every year, and new laboratory tests must be devised to identify them.
Distribution is another problem - often disease-free planting materials are not available in sufficient numbers when farmers need them, or else they are so expensive that farmers do not want them. Since disease-free seedlings are quite vulnerable to infection, another major problem is how to keep them free of disease after they are planted out in the field.
Major Disease Problems of Citrus
Almost 90% of citrus trees in Asia are infected with tristeza, a virus disease transmitted by several species of aphid. Many strains are mild ones which do little damage to the plant, but new virulent strains have arisen in recent decades which are a major threat to the citrus industry world-wide. The symptoms of tristeza vary according to the virus strain and the scion-rootstock combination. The use of resistant rootstock such as mandarin or trifoliate orange is the main method of control, plus the certification of budstock. Pre-immunization, whereby plants are inoculated with mild strains of virus to protect them from severe strains, has sometimes also proved useful.
Greening is the most destructive citrus disease in tropical Asia. It is caused by a fastidious bacterium, and is spread by the citrus psyllid. Control of this disease is based on early detection, which in turn depends on accurate diagnosis, and on the early removal of infected shoots and trees. This includes the detection of greening organisms in symptomless trees.
Control of greening disease also depends on control of the psyllid vector. Psyllids prefer to lay their eggs on new shoots. For this reason, the titer of the greening organism tends to rise with the spring flush. It is important to synchronize chemical sprays with the growth of new shoots. Growers applying chemical pesticides should also try to protect the natural enemies of the psyllid. One natural enemy, the parasitoid wasp Tamarixia radiata, is mass-produced in Taiwan for release in citrus orchards.
Major Disease Problems of Banana
All banana viruses are transmitted in infected planting materials. After planting, the viruses are usually spread further by aphids.
Banana Bunchytop (BBTV)
This is the most serious virus disease affecting banana and plantain in Asia. It has been present in the region since the last century, and probably much earlier than this. The virus has a limited range of vectors, and is found in only a few species. Apart from abaca, the main spread is from banana to banana. This means that the eradication of infected plants is an effective control measure.
Cucumber Mosaic Virus (CMV)
Unlike banana bunchytop, both the virus and its vectors occur on a wide range of plant species. It is found in common crops such as bean, cucumber, pepper and tomato. Eliminating the sources of virus outside the crop is important in controlling this disease.
Banana Streak (BSV)
This virus was only identified recently, being first described in Africa in 1974. The late diagnosis is probably because it is a highly variable virus, which makes detection and indexing difficult. The virus is widespread, and has probably existed for a very long time. Symptoms tend to be more severe in poorly managed plantations.
A startling recent discovery has been that all banana and plantain species contain segments of the DNA of this virus. When virus-free varieties of banana are propagated by tissue culture, the integrated DNA of the virus in the DNA of the plant sometimes becomes activated. As a result, the new plants are infected with BSV.
Detection of Virus Diseases
The reliability of virus indexing depends on the detection method used, the serological diversity of the virus, and the sampling method followed.
Viruses are generally not distributed evenly throughout the plant. Concentrations of BBTV, for example, are much higher in younger leaves than in older ones, and are higher in the midrib than in the rest of the leaf. In contrast, BSV is present in higher concentration in old leaves than in young ones. The distribution of CMV is very uneven, and can vary greatly between different leaves of the same plant, or even different parts of the same leaf. There seems to be no relationship between the amount of CMV virus in the plant and the severity of the symptoms.
The distribution of virus particles in the plant must be taken into account when collecting samples for testing. Otherwise, an infected plant may test negative for virus.
How to Detect Virus
Just looking at the plant is the easiest way of diagnosing a virus, but it is not very reliable. Mild strains of virus disease, or virus disease in its early stages, may show no symptoms. Mixed infections with several viruses are common, while the same virus may produce different symptoms in different plants. Laboratory tests are needed for an accurate diagnosis. There are two kinds of laboratory tests: serological assays and PCR.
Serological assays are based on antibodies produced by animals (usually rats or rabbits). When suitable antibodies are available, the most sensitive and efficient practical assay for plants is the enzyme-linked immunosorbent assay (ELISA). In most cases, ELISA is the indexing method of choice. It is sensitive, easy to use, and needs minimal equipment.
Polymerase Chain Reaction (PCR) is a very sensitive assay, and can be used to test for 3-4 viruses at a time. There are two main problems in using it. The reagents used are expensive, and a tedious process of preparation is needed for each sample. It is used mainly for foundation stock, or in other situations where accuracy is very important.
Use of Tissue-Cultured Seedlings
Plantlets produced by tissue culture have the advantage, not only of being free of disease, but of being relatively uniform. In general, they give higher quality fruit, and production costs are lower. However, there are also some drawbacks. Banana plantlets grown by tissue culture are more susceptible to CMV than suckers. They are also vulnerable to herbicide damage.
Plantlets produced by tissue culture are fairly expensive. Banana farmers who use suckers get their planting materials free, as do citrus producers who produce their own seedlings by grafting or marcotting. Growers should buy disease-free planting materials only in areas which are relatively free of both virus and vectors. It is a waste of money to use these materials in areas where the virus still persists.
Management of Disease-Free Orchards
Protecting seedlings from infection in the field involves spraying with insecticide to control the vectors, disinfection of pruning tools, and in the case of citrus, the use of resistant rootstock. Cultural practices which can help keep orchards and plantations free of disease include windbreaks to reduce the number of windborne vectors, and the use of catch crops such as curryleaf. Both curryleaf and jasmine orange are preferred hosts for citrus psyllids, and can be grown around citrus orchards as trap plants.
Techniques of indexing diseases and diagnosing virus diseases are improving rapidly. Virus diseases can be identified and indexed with a speed and precision that would have been impossible a decade ago. Techniques of producing virus-free planting material have also developed rapidly. However, there are still many difficulties in applying these techniques in a cost effective way, and in keeping seedlings free of disease after they are planted out in the field.
Since virus diseases and citrus greening are easily transmitted by insect vectors, early detection and early removal of infected plants are critical. However in practice early detection is often difficult, since there may be few or no symptoms. Even where plants are definitely diagnosed with virus disease or greening, eradication may be difficult. Infected plants may still give a profitable yield which farmers are unwilling to sacrifice, particularly if the plants show only minor symptoms.
FFTC International Workshop Disease Management for Banana and Citrus: The Use and Management of Disease-Free Planting Materials
Location: Davao City, Philippines
Date: October 14-16 1998
No. Participating Countries: 9 (Australia, Indonesia, Japan, Malaysia, Philippines, Taiwan ROC, Thailand, USA, Vietnam)
No. Papers: 18
No. Participants: 75
Co-sponsors: International Network for the Improvement of Banana and Plantain - Asia and Pacific Network (INIBAP-ASPNET)
Philippine Council for Agriculture, Forestry and Natural Resources Research and Development (PCARRD), Department of Science and Technology
Davao National Crop Research and Development Center, Bureau of Plant Industry, Department of Agriculture, Philippines
List of Papers
1. Epidemiological review on citrus greening and virus diseases of citrus and banana, with special reference to disease-free nursery system
- Hong-Ji Su
2. Production and cultivation of virus-free banana tissue-culture plantlets in Taiwan
- Shin-Chuan Hwang
3. Viruses of banana and methods for their detection
- John Thomas
4. Virus and virus-like diseases of banana and citrus in Malaysia: Status and control strategies
- Ching-Ang Ong
5. Disease management in citrus orchards planted with disease-free seedlings in Thailand
- Suchat Vichitrananda
6. Recent progress in the research on citrus greening in Asia, including serological diagnosis
- Yoshihiro Ohtsu
7. Ecology of the insect vectors of citrus systemic diseases and their control in Taiwan
- Chiou-Nan Chen
8. Citrus greening control project in Okinawa, Japan
- Shinji Kawano
9. Management of viral streak in banana and plantain: Understanding a new challenge
- Ben Lockhart
10. Pathological and molecular characterization of BBTV strains in Asia
- Hong-Ji Su
11. The impact of tissue culture plants in the ongoing eradication and rehabilitation program in the Philippines
- Lydia Magnay
12. Epidemiology and integrated management of Abaca bunchytop in the Philippines
- Avelino Raymundo
13. Rehabilitation of BBTV affected areas in the Philippines: Experiences and problems
- Rene Rafael Espino
14. Status of disease management of citrus in the Philippines
- Ceferino Baniqued
15. Management of disease-free citrus seedlings in southern Vietnam
- Le Thi Thu Hong
16. Management of disease-free citrus seedlings in the North of Vietnam
- Ha Minh Trung
17. Establishment of disease-free foundation stock and nursery for controlling greening disease and citrus tristeza virus: The Sarawak experience
- Chan Hock Teo
18. Status of disease management of virus diseases of banana and citrus in Indonesia
- A. Nurhadi
Index of Images
Figure 1: Leaf symptoms of pummelo with greening disease (yellowing and mottling). A healthy leaf is shown on the right.
Figure 2: Citrus tree with tristeza virus, showing dieback and poor fruit set.
Figure 3: Citrus brown aphid (Toxoptera citricida Kirkaldy), the primary vector for citrus tristeza virus.
Figure 4: Banana plant with bunchytop. Note the small size of the hand.
A packet is a basic unit of communication over a digital network. A packet is also called a datagram, a segment, a block, a cell or a frame, depending on the protocol. When data has to be transmitted, it is broken down into similarly structured blocks of data, which are reassembled into the original data once they reach their destination.
Packets and protocols
Packets vary in structure depending on the protocols implementing them. VoIP uses the IP protocol, and hence IP packets. On an Ethernet network, for example, data is transmitted in Ethernet frames.
The structure of a packet depends on the type of packet it is and on the protocol. Normally, a packet has a header and a payload.
The header holds overhead information about the packet, the service, and other transmission-related metadata. For example, an IP packet header includes:
- The source IP address
- The destination IP address
- The sequence number of the packets
- The type of service
The payload is the data it carries. |
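To make the header/payload split concrete, here is a minimal Python sketch that unpacks the fixed 20-byte IPv4 header from raw packet bytes. The field layout follows the standard IPv4 specification rather than anything specific to this article; header options and checksum verification are omitted for brevity.

```python
import socket
import struct

def parse_ipv4(raw: bytes):
    """Split a raw IPv4 packet into its header fields and its payload."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    header_len = (ver_ihl & 0x0F) * 4        # IHL field counts 32-bit words
    header = {
        "version": ver_ihl >> 4,             # 4 for IPv4
        "type_of_service": tos,
        "total_length": total_len,           # header + payload, in bytes
        "identification": ident,
        "ttl": ttl,
        "protocol": proto,                   # e.g. 6 = TCP, 17 = UDP
        "source": socket.inet_ntoa(src),
        "destination": socket.inet_ntoa(dst),
    }
    payload = raw[header_len:]               # the data the packet carries
    return header, payload
```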
Time Period: 460-370 BC
Background: Born in Abdera, Thrace, Democritus wrote extensively on the subject of ethics, promoting happiness as the highest good and insisting that it was to be achieved through moderation, tranquility, and freedom from fear. He came to be known as the Laughing Philosopher for his jovial spirit, in contrast to the pessimistic Heraclitus (the Weeping Philosopher). He based much of his theories on those of his teacher, Leucippus.
Belief: Democritus wrote that all things were composed of atoms - small, minute, and indestructible particles of pure matter, with a void between each and every atom. He wrote that atoms were solid, with no internal structure, and that they were different in size, shape, and weight.
Contribution: Although some points of Democritus's beliefs were incorrect (atoms are not completely solid and do not lack an internal structure), he nevertheless helped us to understand the basic atomic theory of matter. We now know and accept that atoms are indeed the basic building blocks of matter, and that they differ innumerably in shape, size, weight, position, and sequence.
Why is it important to improve dietary assessment methods?
Food frequency questionnaires, which measure a person's usual intake over a defined period of time, and 24-hour recalls, in which a person records everything eaten or drunk during the previous 24 hours, are commonly used to collect dietary information. Short screeners, which include just a few questions about consumption of selected items, can be useful in situations that don't require assessment of the total diet or when resources are limited.
Accurately measuring dietary intake through these methods is crucial to understanding the role of diet in causing and preventing chronic diseases such as cancer, heart disease, and diabetes. Dietary recommendations aimed at encouraging people to follow dietary patterns to promote health and reduce disease risks are based in part on information gathered through these means.
The problem is that these dietary assessment instruments are subject to substantial error, both random and systematic. In addition, people don't always report accurately. So, it's important to design these instruments so that they collect the most accurate information possible and to validate them before use. NCI staff used extensive cognitive testing in developing the Diet History Questionnaire (DHQ) and these short screeners so as to make them easier to use and to improve their performance.
Even in sterile space craft, bacteria thrive
Unwelcome Visitors Spread as in Horror Movie
After the craft returned to Earth, officials were disturbed to discover a host of bacteria and fungi covering the porthole. Even worse, the organisms had corroded the window even though it was made of quartz glass inserted in a titanium frame encased in enamel, a combination previously thought able to withstand almost anything.
Aggressive microorganisms damaged electronic equipment, oxidizing copper cables. Fungus was also found to be flourishing on polyurethane surfaces.
Germs Outwit Sterilization Measures
To prevent the risk of contamination of outer space with Earth germs, as well as the introduction of foreign organisms to our own planet, space vehicles are exhaustively cleaned.
The craft is usually pumped full of ethylene oxide and methyl chloride, a mixture lethal to microorganisms. A few days before departure, astronauts are often quarantined to reduce exposure to germs. During flight, the crew vacuums the vehicle regularly and wipes all surfaces with disinfectant.
Radiation Encourages Mutation
Despite these measures, such life forms thrive. It is believed bacteria escape fumigation by hiding under plastic parts where the gas does not penetrate. Once in flight, they emerge into a sterile atmosphere with few competitors to stop them. By contrast, Earth's environment is so full of microorganisms they usually keep each other in check.
Once in space, germs mutate, partly due to radiation levels 500 times higher than on Earth. They sometimes become disturbingly aggressive, growing rapidly in unexpected places. Solar activity often causes the fungi to grow more actively. They get nourishment from the breath, perspiration, and dead skin of astronauts.
Stored in Sealed Places
Samples of the mutated microorganisms are kept in sealed containers stored in secure facilities because scientists cannot be sure how they would react on earth. Since some of the bacteria can "eat" metal, they could become a potentially serious weapon, rendering guns or machines useless. |
2. Planning for Water and Watersheds
“A new relationship between people and water needs to be established to ensure that there will be water supplies for human use, thriving ecosystems and a healthy economy.... both now and in the future.”
Every living organism, ecosystem and community requires water for life, health and functioning. The water cycle also influences the availability of many natural resources for communities and the exposure of those communities to water-related hazards. Biodiversity, ecosystem functioning and most sectors of the economy are highly dependent on water resources and watershed health.
A watershed refers to a region or area of land that drains into a stream, river system or other body of water. Watersheds capture precipitation, filter and store water, and influence the timing and volume of water flows. They are integrated systems, with actions in one part of a watershed often impacting other parts of the watershed; therefore, the watershed is an important unit for planning for and managing water. Watershed planning and management seeks to ensure the wise and effective use of water and land resources, and in particular, the quantity, quality and timing of water flows.
Communities rely on water and watersheds for a safe, secure and adequate supply of water for many uses; a receiving environment for wastewater discharges; provision of fish, wildlife, habitat and biodiversity; moderation of flooding, erosion and sedimentation processes; and a host of other social, cultural, economic and spiritual values. Communities and watersheds are experiencing unprecedented changes at many different scales related to population growth, settlement patterns, use of natural resources, release of waste products into the environment and a changing climate. These changes can lead to impacts on water resources, watershed health and community sustainability.
Water and watershed planning is about defining and achieving a desired future vision for water resources and watersheds. Planning plays a critical role in how communities define their vision of the future and their path to achieve this vision.
The Educational Legacy of Alcott, Emerson, Fuller, Peabody and Thoreau
Transcendental Learning discusses the work of five figures associated with transcendentalism concerning their views on education. Alcott, Emerson, Fuller, Peabody and Thoreau all taught at one time and held definite views about education. The book explores these conceptions with chapters on each of the five individuals and then focuses on the main features of transcendental learning and its legacy today. A central thesis of the book is that transcendental learning is essentially holistic in nature and provides a rich educational vision that is in many ways a tonic to today’s factory-like approach to schooling. In contrast to the narrow vision of education that is promoted by governments and the media, the Transcendentalists offer a redemptive vision of education that includes:
- educating the whole child: body, mind, and soul;
- happiness as a goal of education;
- educating students so they see the interconnectedness of nature;
- recognizing the inner wisdom of the child as something to be honored and nurtured;
- a blueprint for environmental education through the work of Thoreau;
- an inspiring vision for educating women of all ages through the work of Margaret Fuller;
- an experimental approach to pedagogy that continually seeks more effective ways of educating children;
- a recognition of the importance of the presence of the teacher, encouraging teachers to be aware and conscious of their own behavior;
- a vision of multicultural and bilingual education through the work of Elizabeth Peabody.
The Transcendentalists, particularly Emerson and Thoreau, sowed the seeds for the environmental movement and for non-violent change. Their work eventually influenced Gandhi and Martin Luther King Jr., and it continues to resonate today in the thinking of Aung San Suu Kyi and the Dalai Lama. The Transcendentalists’ vision of education is worth examining as well, given the dissatisfaction with the current educational scene.
"A Transcendental Education provides a powerfully hopeful, integrative, and holistic vision that can help guide education out of its current vacuum. The book is thoughtfully explicated, expertly synthesized and completely relevant for anyone interested in helping education find itself. Like the transcendentalists themselves, this is both down-to-earth and soaring in its potential implications."
Tobin Hart author of "The Secret Spiritual World of Children" and "From Information to Transformation: Education for the Evolution of Consciousness."
"The secret to a vital, renewed America lies in the life and writings of the Transcendentalist community of Concord, Massachusetts in the 19th century. Jack Miller, who I know has been devoted to a new, living form of education throughout his career, has written a book that could inspire a revolution in teaching. It goes against the tide, as do Emerson and Thoreau. But it offers a blueprint and a hope for our children."
Thomas Moore, author of "Care of the Soul."
"A timely account of great thinking on genuine education. Reading this, today's beleaguered teachers should experience a renewal of spirit and commitment."
Nel Noddings, author of "Happiness and Education."
1. Transcendental Learning 2. Ralph Waldo Emerson: Visionary and Mentor 3. Bronson Alcott: Pioneer in Spiritual Education 4. Margaret Fuller: Voice for and Educator of Women 5. Henry David Thoreau: Environmental Education/Holistic Educator 6. Elizabeth Peabody 7. A Transcendental Pedagogy 8. The Legacy: Holistic Education
"John P. Miller’s vision of the current state of the world is bleak, characterized by increasing corporate corruption, financial instability, distrust of politicians, environmental destruction, and “an empty lifestyle based on materialism and consumption” (Miller, 2011, p. 4). In addition, he suggests that overall, contemporary educational systems’ pervasive emphasis on preparation for competition in the global economy is only intensifying the fragmentation and alienation felt by many youth. However, throughout history, some advocates for children have called for radical changes in methods of educating, towards a more balanced holistic approach. In a time in which it is essential to marshal all potential resources to support spirit based education, the “American transcendentalists” are a potent and overlooked source. Miller’s book demonstrates that this small group of American philosopher/educators has much to offer.
Miller’s style is clear and straightforward so that ideas presented can be easily understood; thus the book is appropriate for a wide audience, from undergraduates to academics. It is a book well worth pondering, a distinctive addition to the holistic/spiritual educator’s library. In concluding, Miller says: “Their work should encourage us to look within and trust our own intuitions so we can “build our own world” (Miller, 2011, p. 122). This is crucial wisdom echoing across centuries to a world that, more than ever, cries out for inspired rebuilding." Dr. Aostre N. Johnson Saint Michael's College in International Journal of Children's Spirituality
"One can point to alternative schools, such as Waldorf and Montessori schools, and certain private schools, but hardly any of our public schools, where a holistic educational model is so desperately needed. However, for those who view teaching as a subversive activity, Miller offers valuable advice for bringing the principles of Transcendental learning into the classroom." Barry Andrews in Thoreau Society Bulletin
"Useful for educational history or philosophy classes, the book would also be appropriate for those exploring outdoor learning, environmental education, feminist pedagogy or peace studies. Summing Up: Highly recommended. All readership levels." S.T. Schroth Knox College in CHOICE
Yale simulations probe the unstable recipe behind "intergalactic pancakes"
Galaxies are well studied, but far less is known about the vast stretches of space between them. Though it seems empty, the intergalactic medium (IGM) actually contains more matter than galaxies do – it's just hard to see because it isn't shining brightly like stars. Now, astronomers have used simulations to reveal new details about the structure of this matter, showing that "intergalactic pancakes" spanning millions of light-years tend to collapse into a cosmic fog.
The universe appears to be missing quite a lot of matter – the vast majority of it, in fact, is unaccounted for. Dark matter, of course, is the mysterious stuff that's hypothesized to make up roughly 80 percent of all matter, and while we can tell indirectly that it's out there, we haven't yet been able to detect it directly.
But even of the remaining 20 percent, which is made up of "normal" matter or baryons, large swathes are hard to find. It's been suggested that as little as 10 to 20 percent of baryons reside in galaxies, while the rest of them are probably swirling around in huge gas clouds between galaxies.
Because this matter doesn't shine like stars, it's hard to spot directly, but astronomers have their ways. Usually, they can tell what's there and how much by measuring the absorption of background light and other radiation like X-rays.
For the new study, a team of astronomers from Yale University, the Max Planck Institute for Astrophysics and Heidelberg Institute for Theoretical Studies has examined the physics and structure of intergalactic gas clouds using simulations. Their results have revealed a few new features about this strange stuff.
"These are flattened distributions of matter, known as 'pancakes,' that extend across many millions of light years," says Frank van den Bosch, co-author of the study. "We found that rather than being smoothly distributed, the gas in these pancakes shatters into what resembles a 'cosmic fog' made up of tiny, discrete clouds of relatively cold and dense gas."
Previously it's been believed that these denser clouds of gas only formed close to galaxies, where the gas is already naturally denser. But this new simulation is the first to show that it can also happen in deep intergalactic space, as the cooling gas triggers an instability.
The simulation also showed that this cosmic fog is "pristine," meaning it doesn't contain heavy metals. That makes sense, since it's far enough away from galaxies that heavy metal "pollution" can't reach them.
The team says it's important to study the IGM because it feeds into galaxies and provides fuel for new stars to fire up. Knowing what's in it, and how it works, can tell us a lot about the large scale structure and composition of the universe.
"The reason galaxies are able to form stars continuously is because fresh gas flows into galaxies from the IGM," says Nir Mandelker, lead author of the study. "It is clear that galaxies would run out of gas in very short order if they didn't accrete fresh gas from the IGM."
The research was published in the Astrophysical Journal Letters.
Source: Yale University |
“The Birder’s Handbook” (Ehrlich, Dobkin, and Wheye) states “Birds are defined by feathers—no bird lacks them, no other animal possesses them.”
Feathers are very important to birds. They make flight possible. They provide insulation and protection from the cold and the heat. They protect from sunburn and rainfall.
Woodpeckers have strong, pointed tail feathers that they use to prop themselves up while they whack at a tree. Colorful feathers are used by some birds for display. Drab feathers help to provide camouflage. The authors conclude “Feathers not only define the bird but are essential to its existence.”
Birds have different feather types on their body, head, wings and tail. They're all made of beta-keratin, the same protein that's in their beaks and claws, and a close relative of the keratin in our hair and fingernails. Here are the most common types of feathers.
Flight feather: The tube that runs up the entire flight feather is called the rachis. It's centered in most feather types, but it's offset in the flight feather. The little vanes that come out from the rachis are shorter on the leading edge of the feather. This design lets one feather overlap the next, forming a seamless, aerodynamic wing that also sheds rain. In addition, the vanes have tiny hooks and ridges that keep each feather in proper shape for flight. The bird can simply run its beak along the feather to zip up the hooks and ridges again.
Contour feather: Contour feathers are the body feathers that shape the bird. House sparrows have about 1,800 feathers in the summer, about 1,400 of them are contour feathers. Contour feathers are symmetrical with vanes of equal length on each side of the rachis. There are no contour feathers on the wings or the tail.
Semiplume: Semiplumes have a central rachis, but the vanes are frilly and not interlocked. These feathers help to provide insulation between the contour feathers and the down feathers.
Down feather: The down feather doesn’t have a rachis; the vanes or plumes emanate right from the rim of the quill or calamus at the bottom of the feather. The plumes are elongated, not interlocking. Their fluffiness provides insulation by creating air pockets.
Bristles: A bristle is essentially a stiff rachis with no or few vanes. Bristles serve as sensors or in a protective capacity, for example, protecting the bird’s eyes from incoming bugs.
Filoplume: The filoplumes act like little wind gauges surrounding each flight feather. They don’t have any muscles in the socket, unlike the other feathers listed above, but their movement is reported to the bird’s central nervous system and helps the bird take off, fly, land, and maneuver.
Powder down feathers: Most birds have a preen gland above the base of the tail which secretes an oil that the bird uses to groom its feathers.
But some birds such as pigeons, hawks, herons, bitterns and parrots don’t have that gland. They have what are known as powder down feathers, that break down into fine powder the bird uses for grooming and waterproofing its feathers. The powder down feathers are concentrated in dense patches in herons, for example, but scattered in hawks. Sounds like a serious dandruff problem.
Summary: Now that we’ve described several different kinds of feathers, let’s stick them onto an imaginary bird to see where they go and what they do.
If we start with a naked bird, we’d first add a layer of down feathers on the body. This is the thermal underwear that keeps birds cozy when it’s chilly. But it’s not very effective if it gets wet. That’s why the down layer must be protected by the contour feathers.
The contour feathers, also on the body, interlock and form a waterproof layer above the down feathers. Any rain is shed off without dampening the down. Wind doesn’t get through the contour feathers, either.
Between the contour feathers and the down feathers are the semiplume feathers. They’re like a frilly contour feather that’s got some downiness to it. They help improve the insulation of the down layer.
The flight feathers are on the wings, of course. These are the feathers that are asymmetrical with longer vanes on one side of the rachis. That allows the feathers to overlap with their neighbors, producing a strong aerodynamic instrument for flight.
Filoplumes surround the base of each flight feather. They send signals that move or rotate each flight feather while in flight.
Not all birds have bristles, but those that do are most likely using them to protect their eyes from incoming bugs.
And the powder down feathers are used by some birds as a source of preening powder. Not really dandruff at all.
So, the next time you find a feather on your walk you can now additionally impress your friends and family by identifying its type and purpose.
Clay Christensen lives and writes in Lauderdale, Minnesota. |
Trying to trace the evolutionary history of armadillos is hampered both by the highly fragmentary nature of the earliest fossils, and by some confusion as to how exactly they should be classified. Historically, armadillos were considered to belong to a single family of animals, technically termed the Dasypodidae. For much of the 20th century, this family was placed in a broader group consisting of a number of mammals that either had teeth with an unusually simple structure or none at all. Since this clearly wasn't a primitive feature (in that teeth were already reasonably complex even in the sort of creatures they must have descended from) the reasoning went that they had lost the more usual dental features, perhaps initially to feed on small invertebrates that don't need much chewing.
These mammals, collectively called edentates, could be found in Africa, Eurasia, and both the Americas and, historically, in Europe, too. That seemed a rather broad distribution, but if they really were incredibly ancient, it perhaps wasn't surprising. It was also, as became clear once we had decent molecular and genetic evidence from around the 1990s, wrong.
It turned out that the Old World edentates were entirely unrelated to the American sort, a case of parallel evolution. Instead, we now recognise that the closest living relatives of the armadillo are the anteaters and sloths, and that the three families taken together are only distantly related to... well, anything, really.
Exactly where they do fit is still a matter of debate, although it's clearly somewhere pretty close to the base of the placental family tree. What we can say is that the modern group, now called the xenarthrans, seems to have originated in South America with species only crossing over into the North after the two continents collided towards the end of the Pliocene. Even then, most of them subsequently died out, with only a single species of armadillo currently found wild in the United States.
There's also debate as to whether all living armadillos really do belong to the same family. Since what constitutes a family and what doesn't is essentially arbitrary, it's a fairly abstruse argument. A molecular study in 2016 suggested that, as had previously been suspected, glyptodonts - giant armadillo-like animals with a heavy bony shell - were just an unusual form of armadillo, and not merely close relatives.
The upshot of that was that a number of scientific papers written since 2016 have split the traditional armadillo family into two. The Dasypodidae itself still includes the animal that's so familiar to inhabitants of the southern US, along with its closest relatives. A second family, the Chlamyphoridae, includes the glyptodonts along with a bunch of living South American species, some of which are pretty odd-looking even by the standards of armadillos. Even so, not all modern scientists necessarily use this scheme... and since the two families are each other's closest relatives, there's nothing to say that they have to.
Nonetheless, whether they constitute one family or two, armadillos (in a sense that includes glyptodonts and the rather similar-looking pampatheres) are a single evolutionary group, which makes it reasonable to ask what the first ones looked like.
The oldest known armadillo fossil belongs to an animal called Riostegotherium. This lived in southern Brazil a little over 50 million years ago... and that's almost the only thing we know about it, given that all we have are a few individual chunks of bony scale. Unfortunately, much the same is true of most other early armadillo fossils, which tend to be fragmentary at best.
Most, but not all. Just two species of Eocene armadillo are known from reasonably complete remains. The older of the two is Lumbreratherium, which was first described in 2017 on the basis of a fossil found in northwest Argentina. This was joined last year by Pucatherium, previously known only from a few pieces of carapace. Those pieces had been distinctive enough to indicate that it was likely a close relative of the slightly older species, and, while the complete fossil dates to around 40 million years ago, some of the isolated pieces found elsewhere suggest that it may have lived on until as late as 35 million years ago, almost at the end of the Eocene.
The Pucatherium fossil shows an animal roughly the size of the nine-banded armadillo (Dasypus novemcinctus), the living species found everywhere from the southern US to Uruguay, but it clearly belongs to an entirely different lineage. If we do divide living armadillos into two families, this and Lumbreratherium belong to neither, representing an early branch that died out early on.
We already know, from some more fragmentary remains, that the two living families (or subfamilies, if you prefer) already existed at this point, so these Eocene species can't be the direct ancestors of modern armadillos, but they do retain some decidedly "primitive" features. For example, the carapace of Pucatherium consists of no less than 36 bands of scales, with no sign of the solid shields that the nine-banded armadillo has over its shoulders and hips. Since the scales (more technically "scutes" or "osteoderms") were made of solid bone, they would have provided armour to the animal, but it would have been much more flexible than in most living species, something that presumably had both advantages and disadvantages.
The rest of the skeleton shows a mixture of the features we'd expect to see in the two living groups, and that the animal was more heavily built than a nine-banded armadillo, with relatively strong and thick bones. The teeth were also different from those of modern species. As noted above, armadillo teeth are relatively simple in structure, all appearing essentially identical along the length of the jaw, rather than the usual division into incisors, molars, and so on seen in most mammal species. But, while Lumbreratherium had no incisors, it did have teeth that look like canines, separated by a gap from the more typical armadillo-type teeth behind them.
The history of armadillos must stretch at least ten million years further back in time than these Argentinian animals, and almost certainly much further than that. And it's unlikely that the real ancestors of the group looked exactly like these, more modern, specimens. But they do give us a partial glimpse into what at least some of these strange animals were like in the early days of their evolution.
Encyclopædia. Æon. Anæsthesia. What do these words have in common? They refer back to a letter we don’t really use anymore.
Today, on the anniversary of Encyclopaedia Britannica’s first publication in 1768, we’re taking a look at where that squished-up “ae”—visible in older editions of this and many other encyclopedias—comes from.
Æ is technically called an “ash,” and it makes a noise like the “a” in “fast.” It’s what linguist-types call a ligature, or two letters joined together. Take a look at the ash in action in this first passage of the Old English epic Beowulf.
The ash originally appeared in Old English texts written using an adapted Latin alphabet. Eventually, the ash began to be associated with Latin itself, even though it was never used in the original Roman alphabet.
Old English (that is, English as it was spoken between 400 and about 1100 AD) was written using an adapted Latin alphabet introduced by Christian missionaries, write Jonathan Slocum and Winfred P. Lehmann of the University of Texas at Austin. But because the alphabet wasn't designed for the new language it was being used to describe, words were written phonetically and spelling was not standardized. Scribes added a few letters to capture sounds, including æ. It was called "ash" after the Anglo-Saxon rune, writes M. Asher Cantrell for Mental Floss.
Words that used æ included: æfter (it means “after”); ǣfre (ever); and āhwæþer (either). They’re not that different from their modern counterparts: more than 80 percent of the thousand most common words in today’s English come from Old English.
But encyclopedia isn’t an Old English word, however it’s spelled. In fact, although “encyclopædia” sounds like an old word, according to the Oxford English Dictionary, it has its origins in the sixteenth century, not ancient Rome. When the first encyclopedias were written, Europe was taking a fresh interest in the classical world and classical thinking, and therefore a fresh interest in Latin.
The “ae” spelling of encyclopedia would have become obsolete earlier, writes the OED in a longer, paywalled entry, but it stayed alive because many of the works that used the word (notably, Encyclopaedia Britannica) wanted that authoritative, Latin-ey look.
The ash has more or less vanished from American spellings. In some words the æ has become uncoupled, like in “archaeology.” In others, the American English spelling drops the e, like in “encyclopedia.” But the “ae” spelling that parallels the medieval letter is alive and well in England. Take a look at this 2015 article from The Telegraph about a man who just needs to correct Wikipedia, the “online encyclopaedia.” |
SCIENTISTS have developed an unlikely treatment for the dry skin condition eczema — a cream that is packed with human skin bacteria.
Studies suggest a healthy strain of skin bacteria can effectively treat flare-ups in people with the most common form of eczema, called atopic dermatitis, which affects 15 million people in the UK.
The theory is that these healthy bacteria applied to the skin will kill the harmful bacteria and the toxins they produce, which are the cause of the painful inflammation.
This is because good and bad bacteria compete against each other: by flooding the area with good bacteria, the hope is that the good bugs win.
This innovative approach, known as bacteriotherapy, is already used in other areas of medicine. For example, persistent gut infections such as C. difficile, a cause of food poisoning, can be treated with a pill containing good bacteria from faecal material of healthy donors.
Now, research published in the journal Nature Medicine suggests that bacteriotherapy could be a treatment for eczema.
One in five children has atopic dermatitis, and for one in 12 cases it persists into adulthood. The condition tends to run in families and there is no cure.
Inflammation causes a chronic, itchy rash on the arms, legs and cheeks. Flare-ups can be triggered by heat, detergents, pets, foods and stress.
The bacterium Staphylococcus aureus, which tends to live in greater abundance on the skin of people with eczema, is often the cause of the irritation.
While human skin is naturally a hotbed of bacteria, most cause no harm because the immune system keeps them under control. In those with atopic dermatitis, the immune system goes into overdrive and the bacteria can turn nasty.
Professor Carsten Flohr, a consultant dermatologist at Guy’s and St Thomas’ NHS Foundation Trust in London, says: ‘Staphylococcus aureus is a key organism that causes infections in eczema. Toxins released by the bacterium drive the skin inflammation.’ Conventional treatments include emollients (moisturisers) to repair and protect the skin barrier, steroids to reduce redness of the skin by blocking the body’s inflammation process, and antibiotics to kill the bacteria.
However, long-term use of some treatments is associated with side-effects. Steroids, for example, can thin the skin, cause impaired kidney function and raise blood pressure.
As a result, scientists have been looking for safer alternatives. Researchers from the University of California, San Diego first screened 8,000 types of bacteria taken from the skin of people without eczema to identify which ones were able to kill the harmful Staphylococcus aureus bug.
A shortlist of around ten strains was then further assessed to check they were safe and wouldn’t become harmful. The scientists were left with a single strain of bacteria, Staphylococcus hominis A9, which was chosen as the treatment for atopic dermatitis.
‘This has several different ways to get rid of the harmful bacteria,’ Richard Gallo, a professor of dermatology who led the research, told Good Health. ‘It produces a type of antibiotic that kills bad bacteria, it produces a gene that blocks the toxins from bad bacteria, and it helps the body fight the bad bacteria by boosting the immune system.’
The chosen strain was added to an unscented lotion, and a double-blind trial involving 54 people with eczema began.
Two thirds were given the lotion to apply to their arms twice a day for seven days, while the rest were given a dummy cream.
Results showed those treated with the bacteria lotion had a reduction in Staphylococcus aureus on their skin and reported fewer complaints of inflammation. Larger trials are planned to see if the lotion works for longer periods.
Professor Flohr welcomed the new research, saying it used ‘Nature’s gift’ of bacteria to treat the problem. But he warned it could be several years before such a treatment is widely available.
Other new treatments for severe eczema include dupilumab, a monoclonal antibody. For this, molecules are produced in a laboratory and engineered so that they target the pathways of the skin’s immune system which have gone into overdrive.
A 2014 U.S. study, published in the New England Journal of Medicine, found 85 per cent of patients taking dupilumab had a 50 per cent reduction in eczema symptoms after 12 weeks, compared with 35 per cent taking a placebo.
In 40 per cent of cases, the eczema cleared up altogether (compared with 7 per cent on the placebo). |
Accounting is the process of measuring and communicating financial information about business entities. It is a specialized field, known as accountancy, in which individuals, businesses, and other economic entities record, analyze, and report financial information. Over time, it has become an integral part of modern society: whether a business is small or large, accountants track the state of the company’s financial health. There are many aspects to accounting.
As the name suggests, business accounting involves interpreting and reporting a company’s financial data. It is often used for strategic planning and evaluation, and it also has regulatory and fiscal dimensions. Most large companies use this form of accounting to meet external reporting standards. There are many different kinds of accounting software, but there is no single best way to perform business accounting, and some accountants specialize in one area rather than another. The key differences between business accounting and other types of financial reporting are described below.
There are two main types of accounting. Financial accounting focuses on analyzing and reporting financial data about business transactions. Unlike finance, which is largely forward-looking, accounting requires the accurate recording of past data, and financial records must be verifiable to reduce the risk of fraud. The cost principle, under which items are recorded at their original cost, is the basis for recording business costs, and applying it consistently is also important in managing liabilities.
A key difference between financial and business accounting is purpose. Financial accounting includes the assessment of assets and liabilities, uses historical data, and focuses on meeting external standards. Business (managerial) accounting is focused on the future performance of the business: it helps managers make decisions about the company’s operations and provides the information needed for growth and for meeting internal business needs.
There are two basic methods of business accounting. The first is cash accounting, the simplest type and common in small businesses, which records the cash that the business receives and pays out. The second is accrual accounting, which records the sale of goods and services when the transactions occur, whether or not cash has changed hands. In either case, the company’s assets, liabilities, and equity are separated into accounts, all of which are listed on the balance sheet; liabilities and equity appear on the same side of that statement.
Accounting involves three main activities: recording, classifying, and summarizing financial transactions. To carry these out, a business needs to know how to handle its finances. Managerial accountants oversee a company’s operations, while financial accountants keep track of the costs of goods and services and of how cash flows are managed. The costs of services and products are recorded in the expense registers.
Financial transactions are first recorded as journal entries. They are then posted to the general ledger, the master record of the business’s assets, liabilities, equity, income, and expenses. The owner of a business must see that all financial transactions are recorded there. The accounts themselves are organized into a chart of accounts, which has many different categories.
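To illustrate how journal entries flow into the general ledger, here is a minimal Python sketch of double-entry recording. The account names and amounts are hypothetical, and a real bookkeeping system would also track dates, descriptions, and account types.

```python
from collections import defaultdict

ledger = defaultdict(float)  # running balance per account (debits positive)

def post(entry):
    """Post one journal entry: a list of (account, debit, credit) lines.
    Double-entry bookkeeping requires total debits to equal total credits."""
    debits = sum(d for _, d, _ in entry)
    credits = sum(c for _, _, c in entry)
    assert abs(debits - credits) < 1e-9, "unbalanced journal entry"
    for account, debit, credit in entry:
        ledger[account] += debit - credit

# Hypothetical transactions: sell $500 of services for cash, pay $120 rent.
post([("Cash", 500, 0), ("Service Revenue", 0, 500)])
post([("Rent Expense", 120, 0), ("Cash", 0, 120)])

# Because every entry balances, all ledger balances sum to zero:
# Cash 380 (asset), Rent Expense 120, Service Revenue -500 (a credit balance).
print(dict(ledger))
```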
Revenues and expenses are reported on the income statement: expenses are recorded as debits and revenues as credits, and profit is the amount by which total revenues exceed total expenses. A business’s assets, in turn, are financed by debt and equity, so a company’s equity is the difference between its assets and its liabilities. The two statements work together, with profit from the income statement flowing into equity on the balance sheet.
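A short worked example, using made-up figures, shows both relationships at once:

```python
# Made-up figures for one accounting period.
revenues = 12000.0
expenses = 9500.0
profit = revenues - expenses          # income statement: 2,500 profit

assets = 20000.0
liabilities = 8000.0
equity = assets - liabilities         # balance sheet: 12,000 equity

# The accounting equation must always hold.
assert assets == liabilities + equity
print(profit, equity)                 # 2500.0 12000.0
```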
The basic difference between accrual accounting and cash accounting is timing: cash accounting recognizes income and expenses only when cash actually changes hands, while accrual accounting recognizes them when they are earned or incurred. Intuit’s financial software is widely used by small businesses for this kind of bookkeeping. The most important figure in a cash book is net income: a profit results whenever receipts exceed payments, and when the two are equal the business merely breaks even.
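To see the timing difference in miniature, consider this sketch; the invoice and payment dates and the amount are invented for illustration.

```python
# One sale invoiced in January but paid in February (invented example).
sale = {"earned": "2023-01", "paid": "2023-02", "amount": 1000.0}

def revenue_for(month, basis):
    """Recognize the sale's revenue in a month under the chosen basis."""
    when = sale["earned"] if basis == "accrual" else sale["paid"]
    return sale["amount"] if when == month else 0.0

print(revenue_for("2023-01", "accrual"))  # 1000.0 - earned in January
print(revenue_for("2023-01", "cash"))     # 0.0    - no cash until February
```

The same transaction produces different January revenue under the two bases, which is exactly the distinction described above.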
The Future of Time: UTC and the Leap Second
Earth’s clocks have always provided Sun time. But will that continue?
Before atomic timekeeping, clocks were set by the skies. Starting in 1972, however, radio signals began broadcasting atomic seconds, and leap seconds have occasionally been added to that stream to keep the broadcasts synchronized with the actual rotation of Earth. These adjustments are necessary because Earth’s rotation is less regular than atomic timekeeping. In January 2012, a United Nations-affiliated organization, the International Telecommunication Union, was set to consider redefining Coordinated Universal Time in a way that could permanently break this link. To understand the importance of this potential change, it helps to understand the history of human timekeeping.
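As a rough sketch of how such an adjustment works (the 0.9-second tolerance is the real rule for UTC, but the monthly drift figures below are invented), a timekeeping service would schedule a leap second whenever the accumulated gap between Earth-rotation time, UT1, and atomic UTC approaches the limit:

```python
# Cumulative UT1 - UTC difference in seconds; monthly drift values are invented.
monthly_drift = [-0.1, -0.15, -0.12, -0.2, -0.18, -0.14]

diff = 0.0
for month, drift in enumerate(monthly_drift, start=1):
    diff += drift
    # The IERS keeps |UT1 - UTC| below 0.9 s by inserting leap seconds.
    if abs(diff) > 0.7:  # schedule early, before the 0.9 s limit is reached
        print(f"Month {month}: schedule a leap second (diff = {diff:.2f} s)")
        diff += 1.0  # inserting a second into UTC raises UT1 - UTC by one
```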
Pests on a Gerbera Daisy
The Gerbera daisy (Gerbera jamesonii), also commonly known as the Barberton or Transvaal daisy, grows as a perennial across U.S. Department of Agriculture plant hardiness zones 8 through 11, but is often grown as an annual within this range and beyond. A handful of different types of pests can potentially prove problematic on Gerbera daisies.
Aphids are small, pear-shaped insects that vary in color, have a pair of tube-like structures projecting from their rear end and use slender mouthparts to feed on Gerberas and other plants. Feeding by these pests can cause leaf curling, yellowing and distortion. Aphids also excrete a sticky, sweet substance known as honeydew that hosts the development of unsightly sooty mold and attracts ants. Avoiding excessive or fast-release nitrogen fertilizer, keeping nearby areas free of weeds, avoiding broad-spectrum insecticides and controlling ants all help to limit aphid problems. Where aphids are present, a forceful spray of water, pruning of heavily infested leaves and, if necessary, an application of insecticidal soap or narrow-range horticultural oil offer control.
Leafminers, the larvae of small black and yellow flies, feed between the upper and lower leaf surface of the Gerbera and many other flowering hosts. The larval feeding appears as a winding tunnel or blotch, while adults puncture leaves and sometimes petals to feed, creating a light-colored stippling. Damage is usually not serious, but a heavy infestation can slow plant growth. Natural parasites, such as the small parasitic wasps in the genus Diglyphus, often control the leafminer population. Removing and disposing of infested leaves will provide further relief.
Multiple types of mites, including spider, broad and cyclamen mites, can feed on Gerberas. Spider mites look like tiny moving dots to the naked eye, while the others cannot be seen without a microscope. Spider mite damage appears as a stippling and bronzing or yellowing of leaves and premature leaf drop, while feeding by the other mites shows as distorted or dwarfed foliage. Providing the daisies with adequate irrigation, avoiding broad-spectrum pesticides and, if possible, isolating infested plants away from healthy ones are viable control techniques.
Whiteflies are tiny, whitish insects that tend to appear in clusters on the undersides of leaves where they feed on sap. Whitefly feeding causes leaf yellowing and drop. Like aphids, whiteflies also excrete honeydew. A heavy whitefly infestation is difficult to treat. In many cases, natural whitefly enemies will offer control unless disrupted by dusty conditions, broad-spectrum pesticides or ants. Where needed, control efforts may include the removal of heavily-infested leaves, water sprays, the use of yellow sticky traps and the application of an insecticidal soap or narrow-range oil.
Thrips are tiny, delicate-looking insects with fringed wings that puncture Gerbera leaves and flowers to suck out cell contents. Thrips feeding causes stippling, color break and papery leaves, and thrips leave speck-like black feces where they feed. Controlling nearby weeds, laying reflective mulch around the plants and, if necessary, applying a narrow-range oil, insecticidal soap or pyrethrin as soon as damage is noticed can offer some control.
- University of California Statewide Integrated Pest Management Program: Gerbera Daisy
- Cornell Cooperative Extension of Oneida County: Gerbera Daisy
- University of California Statewide Integrated Pest Management Program: Leafminers
- University of California Statewide Integrated Pest Management Program: Aphids
- University of California Statewide Integrated Pest Management Program: Spider Mites
- University of California Statewide Integrated Pest Management Program: Broad Mite and Cyclamen Mite
- University of California Statewide Integrated Pest Management Program: Thrips
- University of California Statewide Integrated Pest Management Program: Whiteflies
Angela Ryczkowski is a professional writer who has served as a greenhouse manager and certified wildland firefighter. She holds a Bachelor of Arts in urban and regional studies.
Writing 5-Digit Numbers in Words Worksheets
Take your writing numbers in words skill one big notch up with our free, printable writing 5-digit numbers in words worksheets. The key, as always, is a near-thorough knowledge of the place values (units, tens, hundreds, thousands, and ten thousands) that form a five-digit number. Remember, each successive place value is ten times the one before it. If this still sounds Greek to you, which is highly unlikely, go straight to the worksheets, where loads of 5-digit numbers are waiting for you. Try writing each number in words and vice versa.
These pdf resources work brilliantly well for grade 3, grade 4, and grade 5.
We now move on to 5-digit numbers, a bunch that excites learners and trips them up a wee bit during conversion. Our free printable 3rd grade worksheet helps big time. This is how to do it: begin the name with the number of thousands and then move on to the remaining place values, as the sketch below illustrates.
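For anyone curious to see the same thousands-first recipe as a program, here is a minimal sketch in Python; the helper names are invented for the example.

```python
ONES = ["", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen",
        "fourteen", "fifteen", "sixteen", "seventeen", "eighteen",
        "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty",
        "sixty", "seventy", "eighty", "ninety"]

def under_hundred(n):
    """Spell out a number from 0 to 99."""
    if n < 20:
        return ONES[n]
    word = TENS[n // 10]
    if n % 10:
        word += "-" + ONES[n % 10]
    return word

def under_thousand(n):
    """Spell out a number from 0 to 999."""
    parts = []
    if n >= 100:
        parts.append(ONES[n // 100] + " hundred")
    if n % 100:
        parts.append(under_hundred(n % 100))
    return " ".join(parts)

def five_digit_in_words(n):
    """Spell out a 5-digit number (10,000 to 99,999), thousands first."""
    assert 10000 <= n <= 99999, "expected a 5-digit number"
    thousands, rest = divmod(n, 1000)
    words = under_hundred(thousands) + " thousand"
    if rest:
        words += " " + under_thousand(rest)
    return words

print(five_digit_in_words(47025))  # forty-seven thousand twenty-five
```

The divmod split mirrors the worksheet’s advice exactly: name the thousands first, then whatever remains in the hundreds, tens, and units places.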
Provide more practice in writing equivalent number names for the given numerals with this exclusive free pdf and transform the young learners into math whizzes. Look carefully at each digit, identify the place value of each digit, and jot down the number name.
If writing names for numbers is here, can writing numbers for names be far behind? The key is to attentively read and accurately identify each number name so writing its numeral counterpart is hardly a thing to worry about.
Keep the grade 3 and grade 4 children engaged with this worksheet on writing the corresponding 5-digit numbers for the given number names. This exercise helps improve the learner’s flexibility and fluency with 5-digit numbers.
Encourage children to dive deeper into number names with this free printable worksheet. Let them first correctly spell the number names and then read the number names to represent them in numerical notation. Remember to verify the answers with the answer key.
The more you review number names, the brighter your prospects become. Jam-packed with number names and 5-digit numbers, this worksheet is designed to help 4th grade and 5th grade students learn to convert between number names and numbers and become well-versed in the process.