NASA will explore the ocean worlds Europa and Enceladus, moons of Jupiter and Saturn respectively, using the infrared capabilities of its James Webb Space Telescope.
Geronimo Villanueva, a planetary scientist at NASA's Goddard Space Flight Center in Greenbelt, Maryland, is the lead scientist on the Webb telescope's observations of Europa and Enceladus.
Villanueva's team is part of a larger effort to study our solar system with the telescope, spearheaded by astronomer Heidi Hammel, the executive vice-president of the Association of Universities for Research in Astronomy (AURA).
Here are 7 things to know:
1. The new findings will add more details to data previously collected by the Galileo and Cassini orbiters of the space agency. The new data collected by the powerful James Webb Space Telescope can also help guide future missions to the icy moons.
2. The NASA astronomers are primarily interested in the plumes that breach the surface of these two moons, which consist of water vapour and simple organic chemicals.
3. The main aim of this analysis is to search for signs of life on Jupiter's moon Europa and Saturn's moon Enceladus. The team will use the telescope's near-infrared camera, or NIRCam, to capture high-resolution images of Europa, analyse its surface, and detect hot surface regions that point towards plume activity and active geologic processes.
4. If a plume is present, its composition can be analysed spectroscopically with the near-infrared spectrograph, or NIRSpec, and the mid-infrared instrument, or MIRI.
"Are they made of water ice? Is hot water vapour being released? What is the temperature of the active regions and the emitted water?" Villanueva asked. "Webb telescope's measurements will allow us to address these questions with unprecedented accuracy and precision."
5. NASA had previously accumulated details about these moons from the Cassini-Huygens and Galileo missions as well as its Hubble Space Telescope; those observations revealed evidence of large subsurface oceans shaped by geologic processes.
"We chose these two moons because of their potential to exhibit chemical signatures of astrobiological interest," astronomer Hammel was quoted as saying in a NASA statement.
6. Saturn's moon Enceladus is almost 10 times smaller than Jupiter's moon Europa, which will make it difficult to capture high-resolution images of Enceladus' surface. However, it will still be possible to determine the molecular composition of the moon's plumes and to characterise its surface features.
7. The team plans to use NIRSpec to search for organic signatures such as methane, methanol and ethane in the plumes of these two moons, provided the observations take place at the right time. |
4 Year Old VPK Class
This group is exposed to a variety of experiences and an increased opportunity for fine motor skills. Scissors, manipulatives and cooking are greatly enjoyed by the four year old. Attention span and memory skills begin to mature (through games of color recognition, object size and shape), as does the development of basic math concepts and problem solving skills. Four year olds learn through active participation with adults, other children and materials, while enjoying indoor and outdoor play.
Four year olds are learning to make choices through planned activities. Teacher prepared learning centers will include housekeeping, blocks, science, math, manipulatives, music and art. Through weekly activities, the four year olds are introduced to the alphabet. Math concepts include patterning, sequencing, opposites, shapes, likes and differences, and counting. Other media, such as social studies, art, cooking, and computers, reflect the letter theme of the week.
The Florida Early Learning and Developmental Standards for Four-Year-Olds create a common framework and language for providers of school readiness and VPK programs.
Based on collaboration with a state panel of experts, public input from citizens across Florida, and national and state expert reviewers, Standards for Four-Year-Olds reflect the latest research on child development and developmentally appropriate practices for 4-year-old children. Standards and benchmarks are organized into five domains.
– Physical development
– Approaches to learning
– Social and emotional development
– Language, communication and emergent literacy
– Cognitive development and general knowledge
Benchmarks are available for two domains – Language, Communication and Emergent Literacy and the Mathematical Thinking section in Cognitive Development and General Knowledge – to help explain what Florida’s children should know and be able to do by the end of prekindergarten.
4 Year Old VPK Class Available Hours and Days
Classic Day: 8:30 AM – 1:30 PM
Full Day: 8:30 AM – 4:00 PM
Early Drop-off Available: 7:30 AM |
Meaningful learning is the new norm
“It’s frustrating, but we look at the bright side. This is about our children’s safety and health. I will just let my children study at home like what they have been doing since the MCO (movement control order). They can do activities from previous examination questions.”
Exams play a huge role in our education system, and for many students the reason they work so hard is to do well and get good grades. If we took exams out of the equation, would students still be as motivated to learn?
The alternative to exams when learning remotely
Students need to first understand the purpose of learning before they can fully grasp what they are about to learn. They need to be guided in their learning goals and this calls for a more creative approach to encourage students to learn from home.
One of the challenges for students when learning from home is the distraction that comes from not being in a classroom.
We encourage teachers to rethink how lessons are taught and recommend keeping lessons simple, fun and quick with FrogSchool lessons!
1. Keep learning goals simple
When students study for an exam, the amount of information they need to learn can be overwhelming.
To help students achieve more meaningful learning, a great way is to keep learning goals simple and minimal!
For example, if a student is learning about the science of soap, instead of focusing on how well they can remember technical terms and formulas, shift the focus to something more tangible, like learning how soap reacts to water!
This will bring us to our next point...
2. Learning happens through fun activities
Instilling a love of learning is unrealistic if we focus only on how well a student can regurgitate facts. Encouraging students to be more physically involved in their lessons is a better way to engage with the ‘why’ and ‘how’ of learning.
An example of the type of experiments teachers can encourage students to do at home is taken from a FrogSchool lesson.
3. Assess through quizzes
Even though exams are cancelled, it doesn’t mean that teachers and parents can’t monitor student progress. To encourage learning from home also means to assess if students understand what they’re learning.
A great way for teachers and parents to quickly assess student understanding is to introduce more low-stakes questions. For example, instead of preparing short-answer questions or essays, we recommend using online quizzes to your advantage. Not only is this a quicker way to find out whether students are understanding the learning concepts, it also produces quick feedback on where they can improve!
FrogSchool lessons make it easy for busy parents to monitor student learning from home!
There are many FrogSchool lessons that are simple, fun and quick!
They are simple for students to understand and easy for you to help guide them through their lessons for the day. We understand that there’s a lot to do and we’re here to help busy parents to help their child learn from home!
For more information on how parents and teachers can help students learn with their busy schedules, we’ve shared some tips for you in our previous blog post! |
Earth has volcanoes — and Mars has volcanoes. Earth has actively erupting volcanoes — and Mars has... none.
Or so it seems. Scientists point out that the Martian surface has more volcano-related rocks and features than Earth does. In addition, Mars is home to the largest volcano in the solar system: Olympus Mons, 600 kilometers (370 miles) in diameter, about the size of the entire state of Arizona, and 21 km (13 mi) high, over twice as high as Mt. Everest. Olympus Mons is the largest volcano in Tharsis, a vast volcanic province with several other volcanoes not much smaller than Olympus Mons.
Furthermore, Tharsis is not the only Martian volcanic center. There's Elysium, Syrtis Major, and a cluster of low-profile volcanic structures near the Hellas impact basin, to name the larger volcanic areas alone.
It's clear that Mars has seen much volcanic activity in the past. But has its volcanism actually shut down? Scientists don't know because they are still just beginning to explore the planet. They can see that the slopes of the giant Tharsis volcanoes show only a few impact craters, which says that their lava flows cannot be ancient. Yet scientists can only guess how old they are in years. Ages estimated by the number of impact craters seen on lava flows go as low as a few million years.
Here's a question to think about as you study Mars: If its last volcanic eruption occurred a million years ago, is Mars truly dead today? Perhaps Martian volcanic activity has ended permanently. However, it's also possible that we are simply seeing Mars at a geologically quiet moment, and activity may resume any time.
Volcanic activity begins when a planet's internal heat causes a body of rock to become molten. Geologists call molten rock magma; it's a mix of silicate and other minerals, plus gases, including water vapor and carbon dioxide (CO2), that are dissolved in it. Magma rises because it is less dense than the surrounding rock. Moving slowly upward, magma will rise until its density matches the surrounding rock. This may occur at some depth within the planet, or the magma may rise all the way to the surface.
If magma reaches the surface it will erupt, sometimes quietly, sometimes explosively. The most important ingredient in determining whether an eruption will be quiet and effusive (gushing), or violent and explosive, is the amount of gases and water dissolved in the magma. Basically, the more of these, the more explosive the eruption.
The explosions happen because as magma rises near the surface, the pressure keeping the water and gases dissolved goes away. Lowered pressure causes the gases to form bubbles in the magma. If enough gas is dissolved in the magma, this can cause a runaway effect. At that point, the superheated water vapor and gases burst out of the lava like popping the top of a soda can after shaking it. (The 1980 eruption of Mount St. Helens occurred in this way, after an earthquake shook off the volcano's upper part and exposed gas-rich magma.)
Most Earth volcanism is basaltic. Basalt (bah-SALT) is an igneous rock rich in iron and magnesium minerals. It is usually dark gray or black. The most iron-rich basalts erupt at high temperatures and flow easily, like maple syrup. When magma has more silica in its composition, however, it moves stiffly, like cookie dough or peanut butter. Being thicker and stickier, high-silica magmas make it difficult for gas bubbles to escape smoothly once they form. Again, this makes explosive eruptions more likely.
Volcanic eruptions on Mars have built numerous kinds of features. The main types of features are what scientists call shield volcanoes and flood basalts. These volcanic features also occur on Earth, although the Martian versions have some differences.
Basalt flows are the most common kind of flow on Earth, and they are widespread on Mars as well. But because Mars is a smaller planet with less gravity and a thin atmosphere, its volcanic activity does not exactly match that of Earth. For example, the lower gravity (38% of Earth's) causes lava flows to travel farther, all else being equal. Flows also tend to go farther because the thinner atmosphere means they cool more slowly. In addition, the lower gravity and thinner atmosphere means that volcanic ash thrown from explosive eruptions likewise travels farther and spreads out more.
No one has ever seen a Mars volcano erupt, yet scientists can look at the results of past eruptions and make calculations and predictions. For example, Martian basalt volcanoes could erupt a lot of volcanic ash in plumes shooting many kilometers into the sky. The ash would fall back to the surface over a wide area, building up layers of poorly compacted material.
On the other hand, the erupting column might collapse and send a superhot cloud of ash and gases flying from the vent at ground level. When this mixture cools, it would leave a deposit of pyroclastic debris. On Earth, such eruptions come mostly from silica-rich magmas, not basaltic ones. But, on Mars, the lesser gravity would make such explosive eruptions more common.
Scientists think that when Mars was young, explosive eruptions were more common than in recent eras. The oldest volcanic areas on Mars (the Circum-Hellas Volcanic Province) tend to have more loosely compacted materials such as ash than hardened lava flows. If more research confirms this finding, it says Mars experienced a major change in volcanic style in its past. The change meant that later magmas contained far less water. |
Cooling Degree-days (CDD) and Heating Degree-days (HDD) are meteorological indices defined as integrated temperature deviations from a base temperature over time. Formally, degree-days are defined as a summation of the differences between the outdoor temperature and some threshold (or reference base) temperature over a specified time period (such as at annual time scales).
Put simply, Cooling Degree-days measure how much, and for how long, the outdoor temperature rises above the base temperature, and hence how much cooling a building is likely to require.
Conversely, Heating Degree-days measure how much, and for how long, the outdoor temperature falls below the base temperature, and hence how much heating a building is likely to require.
Naturally, the choice of the threshold (base) outdoor temperature becomes important when calculating the CDD and HDD for a particular region. Furthermore, because the degree Celsius (˚C) and degree Fahrenheit (˚F) scales differ in both unit size and zero point, degree-days computed in one unit are not directly interchangeable with those computed in the other.
Mathematically, HDD and CDD are represented as temperature sums in ˚C (or ˚F)*day:
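In their standard daily form, with T_i the mean outdoor temperature on day i, T_base the chosen base temperature, and N the number of days in the period:

HDD = \sum_{i=1}^{N} \max\left(T_{\mathrm{base}} - T_i,\ 0\right), \qquad CDD = \sum_{i=1}^{N} \max\left(T_i - T_{\mathrm{base}},\ 0\right)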
The degree-days methodology is commonly applied in the energy sector for planning energy systems and predicting seasonal load demands. Moreover, traders and economists also utilize degree-days for market instruments such as weather derivatives and insurance, to iron out short-term to seasonal weather-related variations in energy demand. Because thermal comfort in buildings relates to both cooling and heating systems, degree-days have been developed with the corresponding dual concepts of CDD and HDD.
A key issue in the application of degree-days is the definition of the reference base temperature, which should ideally reflect region-specific thermal comfort. For instance, a widely used common base temperature of 18.3 ˚C (65 ˚F) for computing degree-days over Texas (U.S.A.) and Siberia (Russia) may not account for the very different mean climates of the two regions, where residents have adapted to different thermal comfort levels. As the choice of base temperature can be made both subjectively and through statistical reasoning, defining an appropriate region-specific threshold often creates a dilemma, especially in large countries with diverse temperature ranges (such as the United States of America, India and China).
Additionally, accounting for population weighting of degree-days at regional or country scales is equally important. For instance, population-weighted degree-days for Beijing (China), with an urban population of ~21 million, and Philadelphia (U.S.A.), with a similar annual climate but a relatively sparse population (~1.5 million), would better represent the true energy demands for heating and/or cooling than un-weighted degree-days.
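One common formulation (an illustrative choice rather than the exact weighting used in any particular study) weights each region's or grid cell's degree-days by its share of the total population:

DD_{\mathrm{weighted}} = \sum_i \frac{p_i}{\sum_j p_j}\, DD_i

where p_i is the population of region or grid cell i and DD_i its degree-day total.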
Data and methodology used in ENERGYA for assembling a historical global gridded degree-day dataset
The degree-day methodology is widely used as a tool for assessing weather-related energy requirements in buildings. A measure of changes in both the duration and magnitude of degree-days under future projected climate can provide important basic information for formulating energy policies. Understanding these changes at both regional and global scales requires assembling a comprehensive spatio-temporal dataset of CDD and HDD at fine-scale gridded resolution, weighted by the population density at equivalent spatial scales.
For assembling a historical global dataset of CDD and HDD, we utilize the daily minimum and maximum surface temperature (˚C) from the Global Land Data Assimilation System Version 2.1 (GLDAS-2.1) dataset. The degree-days are computed at the native 0.25˚ (~27 x 27 km) global gridded resolution, for the years 1971-2016, using Climate Data Operators (CDO) ver 1.9.0. As discussed above, employing a constant reference base temperature across the global domain is both implausible and of little practical application. In our preliminary analysis, we therefore employ a range of widely used thresholds adopted in the literature, ranging from 15-23 ˚C.
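As a minimal illustration of the per-grid-cell arithmetic only (this is not the GLDAS/CDO processing chain; the variable names, the made-up temperatures, and the use of the simple daily mean are assumptions for the sketch), the computation for a single location can be written as:

    using System;

    class DegreeDayDemo
    {
        // Annual HDD and CDD for one location from daily minimum and maximum
        // temperatures (deg C), using the daily mean and a single base temperature.
        static (double hdd, double cdd) DegreeDays(double[] tMin, double[] tMax, double tBase)
        {
            double hdd = 0.0, cdd = 0.0;
            for (int day = 0; day < tMin.Length; day++)
            {
                double tMean = 0.5 * (tMin[day] + tMax[day]);  // simple daily mean
                hdd += Math.Max(tBase - tMean, 0.0);           // heating demand for the day
                cdd += Math.Max(tMean - tBase, 0.0);           // cooling demand for the day
            }
            return (hdd, cdd);
        }

        static void Main()
        {
            double[] tMin = { 2.0, 4.5, 15.0, 21.0 };            // made-up example days
            double[] tMax = { 10.0, 12.5, 27.0, 33.0 };
            foreach (double tBase in new[] { 15.0, 18.3, 23.0 }) // a range of base temperatures
            {
                var (hdd, cdd) = DegreeDays(tMin, tMax, tBase);
                Console.WriteLine($"base {tBase} C: HDD = {hdd:F1}, CDD = {cdd:F1}");
            }
        }
    }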
- Day, T. Degree-days: Theory and Application [Butcher, K. (ed.)] [1–98] (The Chartered Institution of Building Services Engineers, London, 2006).
- https://en.wikipedia.org/wiki/Beijing and https://en.wikipedia.org/wiki/Philadelphia
- Scott, M. J. & Huang, Y. J. [Effects of Climate Change on Energy Use in the United States] Effects of Climate Change on Energy Production and Use in the United States [7–28] (CCSP, Washington, 2008).
- Collins, M. et al. [Long-term Climate Change: Projections, Commitments, and Irreversibility] Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. [Stocker, T. F. et al. (ed.)]. [1029–1136] (Cambridge University Press, Cambridge, 2013).
- Rodell, M. et al. [The Global Land Data Assimilation System (GLDAS)], (Bulletin of the American Meteorological Society 85: 381-394, 2004).
- Climate Data Operators (CDO): http://www.mpimet.mpg.de/cdo |
White-backed vultures are highly social and diurnal. They are gregarious in their feeding habits, and large numbers of them gather when food is abundant. They are very tame birds and will venture into towns looking for food. They are adapted for feeding on soft tissue and are not able to tear open large carcasses that have thick skin. They search for food by soaring high in the air, using their keen eyesight. Once an individual sees a freshly killed animal, it will wheel in the sky as a signal to other vultures to fly down and eat. After gorging on food, a vulture may bathe at a favorite site with other species, or rest with its wings spread and its back to the sun.
White-backed vultures are monogamous breeders and pairs stay together for life.
Food poisoning does not affect a vulture because its stomach acid is extremely acidic, with a pH of nearly zero. This acidity kills pathogens in carrion and helps prevent the spread of disease. To keep cool, vultures will urinate on their feet and legs. This also kills parasites and bacteria and helps keep the birds healthy.
The main threats to the White-backed vulture are: the conversion and loss of their habitat for agriculture, less available carrion due to declining wild ungulate populations, being hunted for traditional medicine use, illegal capture for the live trade, drowning in farm reservoirs, electrocution from electricity pylons, persecution and poisoning. According to the IUCN Red List, this species is the most widespread and common vulture in Africa, and the total population size of the White-backed vulture is 270,000 individuals. Currently this species is classified as Critically Endangered (CR) and its numbers today are decreasing.
White-backed vultures have a very important role in the ecosystem. By removing animal remains, these scavengers clean up the environment, helping to prevent diseases from spreading. |
A heart attack, also called a myocardial infarction, occurs when vessels that supply oxygenated blood to the heart become blocked. The part of the heart muscle starved for oxygen may die or become permanently damaged.
Most heart attacks are caused by a blood clot that forms a blockage in the coronary arteries. Blood clots are often the result of coronary artery disease (CAD), also called atherosclerosis, in which hardened fat deposits called plaques build up inside blood vessels. The plaque may cause fissures, or tiny tears, causing blood clots to form. A piece of plaque may also break loose and lodge in a small blood vessel, blocking blood flow.
How does molecular imaging help people who have experienced a heart attack?
Following a heart attack, heart function is assessed using either echocardiography or nuclear imaging.
Molecular and functional imaging procedures assess heart function and provide valuable information on specific biochemical and structural changes in heart tissue including:
Images and information provided by myocardial perfusion imaging, nuclear functional heart studies and other molecular imaging procedures help physicians: |
What was the Reformation?
The Reformation was a European-wide conflict over the hearts and minds of Christendom which gave rise to the distinction between Catholic and Protestant. The struggle over the true practices and beliefs of the Church played out in every aspect of sixteenth-century life: from high-politics to the parish church. At its heart lay the question of how a Christian was to be saved: via a believer’s own good works or through faith in God alone?
The universal Church challenged
Throughout its history, the Church had been periodically called to a renewal of faith. However from 1517 the German monk and professor, Martin Luther, would issue a series of increasingly radical attacks on the authority of the Pope and the nature of faith itself.
His message was twofold: salvation was by faith alone, and only scripture held merit in theological dispute. The suggestion that a person’s works earned them nothing towards salvation proved galvanising to figures across Europe, who took up Luther’s mantle in their own lands.
When Luther died in 1546, his was only one of a number of reformed doctrines known collectively as Protestantism.
Reformation in England
The 1520s saw the first reformers in England persecuted for propagating their faith. Largely confined to urban areas of the south, their fate changed in the 1530s when King Henry VIII broke from Rome in order to divorce Catherine of Aragon.
Though Henry’s successors would maintain the Protestant faith (with a brief reversal under Mary I), it is unlikely that the majority of English people were identifiably Protestant until the later years of Elizabeth I’s reign.
But Protestantism was never uniform. Well beyond Elizabeth’s death in 1603, debate raged about the true nature of the English Church. Catholicism, though outlawed, also lingered on, even if its full practice was increasingly confined to gentry households.
Conflict characterised much of the Reformation experience. This was most obvious in Ireland where the repercussions of violent Reformation remain visible today.
Elsewhere parishioners were asked to unravel the collective work of centuries: smashing idolatrous images and whitewashing church walls. Catholics fell foul of government legislation, whether defying it openly or hidden in a priest-hole.
But the Reformation did not only take away. Works such as the King James Bible remain testimonies to the period’s transformation of the English language. The period also provided the nation with some of its foundational myths.
As Catholic Spain’s Armada floundered in the Channel, English nationality was injected with a sense of separateness, one that would develop both at home and in the New World. |
Microsoft Office Excel in Windows 10 can be configured to display a number with the default currency symbol. In addition to the option for the currency symbol, the format has options for the number of decimal places and for negative number handling too. The case in point here is how you add a currency symbol before a number in a cell, since simply typing the symbol at the beginning of a value may not be recognised as a number.
Let’s see how to do it.
Format Number as Currency in Excel
Excel users wanting to display numbers as monetary values must first format those numbers as currency.
To do this, apply either the Currency or Accounting number format to the cells that you want to format. The number formatting options are visible under the Home tab of the ribbon menu, in the Number group.
Next, to display a number with the default currency symbol adjacent to it, select the cell or range of cells, and then click the Accounting Number Format button in the Number group on the Home tab. (If you want to apply the Currency format instead, select the cells and press Ctrl+Shift+$.)
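For example, if a cell contains the plain number 1234.5, applying the Currency format with two decimal places will typically display it as $1,234.50 (or with whatever symbol your system locale uses as the default currency), while the value stored in the cell remains the number 1234.5 and can still be used in calculations.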
If you would like to change other aspects of the formatting for your selection:
Select the cells you want to format.
Next, on the Home tab, click the Dialog Box Launcher adjacent to Number.
Then, in the Format Cells dialog box, in the Category list, click Currency or Accounting.
Thereafter, under the Symbol box, click the currency symbol that you want. If you do not wish to display a currency symbol, simply choose the None option. If required, enter the number of decimal places that you want for the number.
As you make changes, the number in the Sample box updates, showing how changing the decimal places affects the display of the number. |
How to treat hypothyroidism
The thyroid gland is located in the front of the neck, below the larynx. It is wrapped around the trachea and releases the hormones thyroxine and triiodothyronine, also known as T4 and T3, respectively. These hormones require the chemical element iodine to form, which can be obtained through the diet or, if necessary, a supplement. T3 is released from the thyroid in much smaller amounts than T4; most T3 is instead produced when the body converts T4 into T3. T3 is the hormone that stimulates cell metabolism, so a low concentration of T3 can lower your metabolism.
Like many glands in the body, the thyroid works under a feedback mechanism. In other words, when there are small amounts of a particular hormone, the body signals the other glands in the chain to release hormones that act on the gland to release additional hormones. For example, a low thyroid hormone level in the blood sends a signal to the hypothalamus that sends a signal to the pituitary to release TSH or the thyroid stimulating hormone.
Hypothyroidism can result from an inactive pituitary gland that does not excrete enough TSH. Because it is the hormone that stimulates the production of T4 and T3, a low TSH level leads to low T4 and T3 levels. There are other reasons for low thyroid hormone levels, which can be related to hypothalamic or pituitary disorders or iodine deficiency, radiation exposure, or other conditions.
Treatment for hypothyroidism depends on the underlying cause of the disease. Most people are treated with a synthetic drug called levothyroxine sodium, commonly sold as Levoxyl or Synthroid. These drugs are synthetic T4. Remember that T4 is converted to T3 in the body, which is what actually boosts cell metabolism. In some cases, liothyronine sodium, a synthetic T3, is used to treat hypothyroidism. However, since T3 has a much shorter half-life, the dose has to be taken several times a day.
For hypothyroidism, T4 is taken in the morning, 30 minutes before eating. Taking other medicines together with T4 is not recommended, as this may reduce the effectiveness of T4. Once drug therapy begins, thyroid level tests should be done every 6 weeks to ensure that there are sufficient levels of hormones in the bloodstream. An excessive amount of hormones can damage the heart and cause palpitations. It can also raise blood pressure, so it is very important to monitor thyroid medication.
Once the hypothyroidism treatment is found to be effective and hormone levels are maintained, an annual blood test is sufficient for continuous monitoring. If you experience symptoms such as weight gain, fatigue, depression, dry skin, muscle cramps or constipation, contact your doctor as a dose adjustment may be required. The blood levels of T4, T3 and TSH provide a complete picture of the appropriateness of drug therapy.
Underactive thyroid: causes and symptoms
Your thyroid, whose main function is to produce hormones that control the body's energy consumption, is shaped like a butterfly and is located in the front of the neck. If the thyroid becomes underactive or does not produce enough hormones, there is hypothyroidism.
Too little thyroid hormone affects your entire body. It increases your cholesterol level, making you more prone to stroke or heart attack. It reduces your energy, which leads to weakness and exhaustion. Pregnant women should also be aware that, if left untreated, this condition can be harmful to the baby.
What is the good news? Hypothyroidism can be treated easily!
What exactly is causing this condition?
The most common cause in Australia is Hashimoto's thyroiditis. Because it forces the immune system to attack the thyroid tissue, the gland can no longer produce enough hormones.
This condition does not choose an age; regardless of how old you are, you can be affected. However, women aged 60 and over are more susceptible. More than that, if this condition runs in your family, it can increase your susceptibility to developing it as well.
Other common factors that can lead to hypothyroidism are radiation treatment for cancer and surgery to remove the gland. Less common causes include lithium medication and viral infections.
How would you know if there is hypothyroidism?
Some symptoms of this disease are:
1. Dry skin
2. Brittle and fragile nails
3. Irregular or heavy menstruation (in women)
4. Sudden physical weakness and depression
5. Inability to withstand cool temperatures
6. Constipation
7. Memory problems
The above symptoms appear gradually. Because of this, you may not recognize them for what they are; you may even think they are signs of aging. It is recommended to consult a doctor before the situation worsens. |
October 24, 2018
SOLOMONS, MD (October 24, 2018)—Noise levels in the world’s oceans are on the rise, but little is known about its impact on marine mammals like dolphins that rely on sound for communication. Researchers from the University of Maryland Center for Environmental Science laid underwater microphones on the bottom of the Atlantic Ocean to find out more about the ambient noise levels in the area off the coast of Maryland. They found that dolphins are simplifying their calls to be heard over noise from recreational boats and other vessels in nearby shipping lanes.
“It’s kind of like trying to answer a question in a noisy bar and after repeated attempts to be heard, you just give the shortest answer possible,” said Dr. Helen Bailey, a marine biologist with the University of Maryland Center for Environmental Science who studies protected species in order to understand their habitat use and inform conservation and management. “Dolphins simplified their calls to counter the masking effects of vessel noise.”
Helen Bailey and her assistant Leila Fouda studied underwater ambient noise levels and whistle calls by bottlenose dolphins in the western North Atlantic, which experiences relatively high levels of vessel traffic between shipping lanes and recreational boaters off the coast of Maryland. Acoustic recordings were collected using hydrophones deployed to the bottom of the ocean in the leased Wind Energy Area, approximately 20 miles off the coast.
They found that increases in ship noise resulted in higher dolphin whistle frequencies and a reduction in whistle complexity, an acoustic feature associated with individual identification. “The simplification of these whistles could reduce the information in these acoustic signals and make it more difficult for dolphins to communicate,” Fouda said.
Dolphins are social animals, and they produce calls for many different reasons. They talk to each other to stay together as a group, they whistle when they feed, and they even call out their names when different groups of dolphins meet.
“These whistles are really important,” Bailey said. “Nobody wants to live in a noisy neighborhood. If you have these chronic noise levels, what does this mean to the population?”
Normally dolphin calls have a complex sound pattern with rises and falls in the pitch and frequency in their whistles. The researchers found that ambient noise had a significant effect on the whistle characteristics. Bailey and her team analyzed the duration, start and end frequencies, presence of harmonics, and inflection points. With background noise, such as the low frequency chug-chugging of a ship’s engine, their usually complex whistle signatures flatlined.
“We need to be working to engineer quieter boats,” said Bailey. The sound from the ships that were recorded had decibel levels up to 130, just like being near a noisy road. “We need to ask, ‘Is there a way shipping can do it that is more environmentally friendly’.”
Regulations and voluntary incentives to reduce the sound from vessels, with speed limits or quieter engines, could help to decrease the effects on dolphins and other species sensitive to sound.
Maryland Department of Natural Resources secured the funding for this study from the Maryland Energy Administration’s Offshore Wind Development Fund and the Bureau of Ocean Energy Management.
The paper, “Dolphins simplify their vocal calls in response to increased ambient noise,” by Helen Bailey and Leila Fouda of the University of Maryland Center for Environmental Science, was published October 24 in Biology Letters.
UNIVERSITY OF MARYLAND CENTER FOR ENVIRONMENTAL SCIENCE
The University of Maryland Center for Environmental Science leads the way toward better management of Maryland’s natural resources and the protection and restoration of the Chesapeake Bay. From a network of laboratories located across the state, UMCES scientists provide sound evidence and advice to help state and national leaders manage the environment, and prepare future scientists to meet the global challenges of the 21st century. www.umces.edu
# # # |
HIVES AND URTICARIA
What is urticaria?
Urticaria, also known as hives or nettle rash, is an itchy, painful rash with patches of raised red or white skin (known as wheals). It is caused by fluid leaking from blood vessels in the skin. Urticaria is classed into acute urticaria (which lasts for only a short period of time) and chronic urticaria (which lasts longer). There are also different types of urticaria, which may be chronic or acute. These include: contact urticaria (which is caused by something touching the skin), urticarial vasculitis (in which wheals last longer than 24 hours, often with burning or pain), and physical urticaria (which has several different subtypes). Urticaria may occur with angioedema, which is swelling due to fluid building up, and typically affects the lips, face, and throat.
What do the wheals of urticaria look like?
The wheals in urticaria are often a pinky-red, but may have a paler centre. They can range in size from a few millimetres to a few centimetres, and can occur anywhere on the body. They are usually circular, but wheals which are close to each other can fuse to form many different irregular shapes. These wheals can be itchy, but are often relieved by rubbing rather than scratching. Usually, these wheals only last a few hours.
Approximately 15 % of people get urticaria every year.
What is acute urticaria?
Acute urticaria is urticaria lasting less than 6 weeks, with individual wheals usually lasting less than 24 hours. Acute urticaria is more common in children than adults, and studies have found a number of causes:
- No cause can be found in 50 % of people with urticaria.
- 40 % of urticaria can be caused by upper respiratory tract infections, such as the common cold, flu, coughs etc.
- 9 % of urticaria can be caused by various drugs, such as certain types of antibacterial drugs, muscle relaxants, and certain types of painkillers.
- Only 1 % of urticaria is caused by foods. However, the actual figure may be a little higher than 1 %. Common foods include fish and seafood, eggs, dairy, and nuts.
Between 20 % and 30 % of people with acute urticaria go on to develop chronic urticaria at some point in their lives. Acute urticaria may not require treatment as it is not usually serious. However, a bath or shower might help ease the itching, and antihistamines such as loratadine or cetirizine hydrochloride might be able to ease some of the symptoms too.
What is chronic urticaria?
Chronic urticaria lasts longer than 6 weeks. The wheals appear regularly (usually daily), and the individual wheals usually last between 4 and 6 hours. Chronic urticaria is also twice as common in women than in men.
In the majority of cases, the cause is not known. In 35 - 50 % of cases, the problems may be due to the body's own defence mechanisms causing the blood vessels to become leaky. Drugs such as aspirin and certain other painkillers can alter body chemistry and also cause chronic urticaria. Food allergies rarely cause chronic urticaria.
A bacteria known as Helicobacter pylori (which causes peptic ulcers in the stomach) may indirectly cause chronic urticaria. In addition, chronic urticaria might run in some families. Intestinal parasites such as roundworms can also cause chronic urticaria.
There are several diseases in which the body's immune system attacks parts of the body that are associated with chronic urticaria. They include:
- Graves' disease and Hashimoto's thyroiditis - conditions in which the immune system attacks the thyroid gland, an important part of the body involved with metabolism.
- Vitiligo - a disease where white or pale patches appear on the skin.
- Rheumatoid arthritis - a condition where the body's immune system attacks joints (particularly those in the fingers), leading to pain in the joints.
- Pernicious anaemia - a condition where vitamin B12 cannot be absorbed into the body.
- Insulin-dependent diabetes mellitus - a disease where the immune system attacks certain cells in the pancreas involved in regulating the amount of sugar in our blood.
Treatment for chronic urticaria involves antihistamines and possibly H2-blockers (which act on histamine receptors) such as ranitidine or cimetidine. If any particular trigger can be identified, that should be avoided as well to prevent future episodes.
What is contact urticaria?
Contact urticaria develops where the skin or body surface has been touched by something. There are allergic and non-allergic types; the difference between the two is in how the body reacts to the substance causing the urticaria. In allergic cases, it is because the body has become sensitised to a particular substance such as grass, foods or latex. Non-allergic contact urticaria happens when certain chemicals directly act on blood vessels. These chemicals may include ingredients in cosmetics, eye solutions, and nettle stings.
What is urticarial vasculitis?
Urticarial vasculitis differs from urticaria in that the wheals tend to last longer than 24 hours. They can also be associated with burning and pain in addition to itching. Urticarial vasculitis is relatively rare - it is estimated to affect 1 - 10 % of people with chronic urticaria. It is usually part of another disease affecting the whole body, such as:
- Systemic lupus erythematosus - a condition where the skin and various internal organs become inflamed. It typically presents with a “butterfly rash,” which covers the cheeks and bridge of the nose.
- Sjögren's syndrome - a disease where glands are affected, leading to a dry mouth, dry eyes, a dry throat, and dryness in other areas.
What are physical urticarias?
Physical urticarias occur in response to certain stimuli and occur only in those areas affected by those stimuli. The wheals tend to resolve within 2 hours.
Dermatographism is the most common physical urticaria. In dermatographism, wheals develop over parts of skin that are stroked firmly. Dermatographism isn't associated with any form of allergy or atopy, food, autoimmunity (where the body's immune system attacks parts of the body), or any other diseases.
Cholinergic urticaria is the second most common form of physical urticaria. Wheals form in response to exercise, stress, or emotions, and are surrounded by a reddened flare on the skin. Adrenergic urticaria, on the other hand, has wheals which are small and pink and surrounded by paler skin.
Delayed pressure urticaria develops on parts of the body which have had pressure applied to them for a long period of time. The wheals can develop after 30 minutes or even as long as 12 hours after the pressure has been lifted. Common areas are around the waist after wearing a tight pair of trousers or a tight belt, or at the ankles or lower leg where the top of socks are.
Other types of physical urticarias can occur in response to water, sunlight, cold, or pressure.
What is angioedema?
Angioedema is a swelling caused by fluid leaking from blood vessels. It is usually caused as a result of allergy, and common triggers include foods and insect stings. Other causes of angioedema include:
- Heat, cold, or vigorous exercise (angioedema caused by these factors is associated with physical urticarias)
- Drugs, such as aspirin, certain types of painkillers, anti-inflammatory drugs, and ACE inhibitors like enalapril.
- Physical injury can cause angioedema in people with hereditary angioedema (C1 inhibitor deficiency). In these cases, the swelling can last from a few hours to a few days. Often there is no accompanying urticaria, and antihistamines do not work against it. Angioedema can also occur spontaneously in people with hereditary angioedema.
The lips, face, genitalia, extremities, throat, and tongue are often affected - angioedema in the tongue or throat can cause a blockage in the airways and can be fatal. Angioedema can also affect the gut, and here it can lead to symptoms such as severe abdominal pain.
Angioedema is treated with oral antihistamines such as loratadine. However, this is ineffective in people with hereditary angioedema or angioedema caused by ACE inhibitors. The swelling usually subsides in several hours. If someone gets a blockage of their airways because of angioedema, seek urgent medical help.
How can urticaria be diagnosed?
Urticaria can be diagnosed easily clinically by talking to the patient and finding out what the rash is like. However, there are a number of investigations that can help doctors to find out what type of urticaria you have and what might possibly be causing it. They include:
- Blood tests. Blood tests would look for many different things. Doctors might look for eosinophils in your blood - eosinophils are a type of white blood cell which defend your body against parasites, and more eosinophils are found when parasites are in the body. Doctors might also look for IgE, a chemical involved in allergy, and might try to find out what you are allergic to using your IgE (this test is known as a RAST). Doctors might also look for various other proteins involved in inflammation and immunity.
- A test known as erythrocyte sedimentation rate (ESR) can be useful in detecting the presence of any infections or inflammation.
- Investigations which look at how well your liver and kidneys are working. This can reveal any underlying diseases which might be causing or contributing to your urticaria.
- A biopsy of the skin might be useful if doctors think you might suffer from urticarial vasculitis. A biopsy of the skin involves taking a small sample of skin from your body and looking at it under a microscope.
- Allergy Treatment
- Bee Stings
- Cow's Milk Allergy
- Drug Allergies
- Egg Allergies
- Food Allergies
- Hives And Urticaria
- House dust Mite Allergy
- Latex Allergies
- Mould Allergies
- Poison Plant Allergies
- Peanut Allergy
- Pet Allergies
- Seafood Allergies
- Shellfish Allergy
- Soya Allergy
- Tree Nut Allergy
- Wheat Allergies |
1.) What is inheritance?
- It is the mechanism of creating a new class from an already existing class.
- Inheritance is used to establish a relationship between two or more classes.
- It is the mechanism by which the variables and methods of one class are made available to another class.
- The class which provides the variables and methods is called the "Base Class" (or "Super Class", or "Parent Class").
- The class which receives the variables and methods is called the "Derived Class" (or "Sub Class", or "Child Class").
- In C#, the colon (:) is the inheritance operator.
- Inheritance always flows from the Base Class to the Derived Class, as in the sketch below.
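A minimal sketch of this relationship (the class and member names are made up for illustration):

    class Animal                        // base / super / parent class
    {
        protected string name = "generic animal";
        public void Eat() { System.Console.WriteLine(name + " is eating"); }
    }

    class Dog : Animal                  // ':' is the inheritance operator; Dog is the derived class
    {
        public void Bark() { System.Console.WriteLine(name + " says woof"); }
    }

    // A Dog object now has the inherited members as well as its own:
    //   var d = new Dog();
    //   d.Eat();    // inherited from Animal
    //   d.Bark();   // defined in Dog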
2.) What are the advantages of inheritance?
The main advantage of inheritance is code reusability: common variables and methods are written once in the base class and reused by every derived class, which also makes the code easier to extend and maintain.
3.) What are different types of inheritance?
- Single Inheritance
- Multi-Level Inheritance
- Hierarchical Inheritance
- Hybrid Inheritance
- Multiple Inheritance
4.) Is multiple inheritance possible in C#? Why not?
- It is not supported by the CLR, since the CLR supports many different languages and not all of them have the concept of multiple inheritance.
- It also avoids the complexity that arises when two different base classes declare a method with the same name; C++ provides mechanisms to resolve such clashes, but C# does not.
- In C#, a form of multiple inheritance can be achieved by using "Interfaces", as in the sketch below.
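A hedged sketch of the interface approach (all names are illustrative):

    interface IPrinter { void Print(); }
    interface IScanner { void Scan(); }

    // A class may implement any number of interfaces, which is how C#
    // approximates multiple inheritance of behaviour contracts.
    class MultiFunctionDevice : IPrinter, IScanner
    {
        public void Print() { System.Console.WriteLine("printing"); }
        public void Scan()  { System.Console.WriteLine("scanning"); }
    }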
5.) How do you prevent a class from being inherited?
Mark the class as sealed.
6.) What do you mean by the sealed keyword?
If you mark a class as sealed, it means that you cannot inherit from it, but you can still create objects of that class.
7.) Can you mark a method as sealed?
Yes, but for a method to be marked as sealed it must also have the override keyword, as in the sketch below.
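An illustrative sketch covering questions 6 and 7 together (the names are made up):

    class Shape
    {
        public virtual void Draw() { System.Console.WriteLine("drawing a shape"); }
    }

    class Circle : Shape
    {
        // 'sealed' is legal here only because the method also uses 'override';
        // classes deriving from Circle can no longer override Draw.
        public sealed override void Draw() { System.Console.WriteLine("drawing a circle"); }
    }

    sealed class UnitCircle : Circle { } // no class can inherit from UnitCircle,
                                         // but 'new UnitCircle()' still works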
8.) What do you mean by upcasting and downcasting?
Upcasting is assigning a derived-class object to a base-class reference; this is implicit. Downcasting is assigning a base-class reference back to a derived-class variable; this is explicit and throws a run-time error if the object is not really of the derived type. Given:

    class DerivedClass : BaseClass { }

    a) BaseClass b = new BaseClass();
       DerivedClass d = b;                // compile-time error: an explicit cast is required
    b) DerivedClass d = (DerivedClass)b;  // compiles, but throws a run-time error (InvalidCastException)
                                          // because b refers to a plain BaseClass object
    c) BaseClass b = new DerivedClass();  // upcasting, implicit
       DerivedClass d = (DerivedClass)b;  // downcasting, explicit and valid here because the
                                          // object really is a DerivedClass
       d.Method();                        // will always call the DerivedClass method

(In practice there is rarely a good reason to write code like this.) |
Beacon Lesson Plan Library
Is the Sun our Heater?
Santa Rosa District Schools
Why is it warm in Florida and cold in Alaska? Students explore and discover how the sun provides heat to the earth, depending on the surface as well as the angle of the sun's rays. (This lesson focuses on the sun as a source of heat only.)
The student knows how the energy of the Sun can be captured as a source of heat and light on Earth (for example, plants, solar panels).
-A textbook that covers how sunlight affects air temperature
-2 similar pans, one filled with sand and the other with water
-Several styrofoam (or other substance) balls of different sizes
-Several flashlights for student use
-Construction and copy paper for demonstrations
-Chart paper or board
-Crayons, colored pencils, and markers
-Checklist for Formative Evaluation of Presentation (See Associated File)
1. On the morning of the lesson, fill one flat pan with sand and another flat pan with water. Place these outside in a sunny spot.
2. Put student names and/or numbers on Checklist. (See Associated File)
3. Gather materials and check that flashlights work.
4. Divide class roster into groups of 4 or 5 students.
Note: This lesson focuses on the sun as a source of heat only.
1. Take class outside where you earlier placed a pan with water and a pan with sand.
2. Ask students to guess which one would be warmer.
3. Allow students to touch each and discover if they guessed correctly.
4. Ask students why the temperature is different. Allow students to offer explanations one at a time, steering them to suggest that some surfaces absorb more heat than others.
5. Return to the classroom for further instructions.
6. (optional) Use a textbook that covers how sunlight affects air temperature.
7. Explain to students that our lesson today will help them understand how the energy of the sun can be captured as a source of heat on Earth.
8. Remind students to consider that the temperature of the air is warmer in the afternoons than in the mornings.
9. Ask students to consider why the temperature is different at different times of day. Allow responses.
10. Explain to students that sunlight passes through the air without heating it at all; it only generates heat when it comes in contact with different surfaces: solids and/or liquids such as forests, highways, lakes, grassy areas, etc.
11. Explain to students that the sun's rays warm liquids and solids, and then the heat rises from those objects.
12. Discuss comparisons of asphalt and grass and other surfaces.
13. Explain to students that Earth's surface does not heat evenly; it depends on the angle of the sun's rays striking the earth.
14. Ask the students why they think that is.
15. Encourage students to decide that the angle at which the sun's rays hit would make a difference. The rays are most direct at noon, so they generate more heat.
16. Carry the concept even further by explaining to students that the rays heat the surface (whatever it is), and then the surface warms the air. Encourage students to understand that this is why the temperature is often warmest in the afternoon.
17. Explain to students that you have models for their use in exploring how the sun's rays hit the earth (flashlights, balls, etc.). Place students in groups of 4 or 5 students. Allow groups ten minutes for exploration. Circulate among groups.
18. Instruct groups of students that they have twenty minutes (longer if needed) to prepare a presentation for the class which demonstrates each the following three concepts:
a. The sun warms liquids and solids on the earth, and then the heat rises from those objects warming the air.
b. The earth's surface does not heat evenly; it depends on the angle of the sun's rays as they strike the surface.
c. The air temperatures on the earth are often highest in the afternoon.
19. List these concepts in writing on the board or hang a poster stating them. Emphasize that this is where their grades are coming from.
20. Explain to students that they may use skits, posters, demonstrations, drawings, or any manner that they might choose for their presentations.
21. Provide plenty of materials to encourage creativity.
22. Circulate as the students work on presentations.
23. Allow students to make presentations in groups.
24. As the groups do their presentations, check off each required criteria on the Checklist. (See Associated File)
25. Commend those accomplishing all three criteria, and provide suggestions to those who did not accomplish one or more, emphasizing how they might reach that criteria. The observing class members might even make suggestions.
26. Explain to students that during their free time, center time or at home they might want to check out the sun movie at www.brainpop.com/science/space/sun/index.weml. (See Weblinks)
Formative assessment is done as students in groups use models to explore how the energy of the sun can be captured as a source of heat on Earth.
At the end of the exploration, each group demonstrates through skits, posters, etc. their knowledge of how the energy of the sun can be captured as a source of heat on Earth.
In group presentations, students demonstrate (using provided materials) some discovered evidence of use of the sun for heat on Earth demonstrating each of the following:
1. The sun warms liquids and solids on the earth, and then the heat rises from those objects warming the air.
2. The earth's surface does not heat evenly; it depends on the angle of the sun's rays as they strike the surface.
3. The air temperatures on the earth are often highest in the afternoon.
Use this sun movie as an extension for students to watch during free time or center time after the lesson. Brainpop.com |
Exactly how the circadian rhythm — our 24-hour internal clock — and mitochondria — our cells' energy producers — cross-talk to coordinate their work and maintain cellular health appears to rest on the workings of a protein called DRP1, researchers report.
Their study, “Circadian Control of DRP1 Activity Regulates Mitochondrial Dynamics and Bioenergetics,” was published in the journal Cell Metabolism.
Mitochondria are small organelles that carry out cellular metabolism to produce the cell's energy (in the form of ATP) through a process known as mitochondrial respiration. Scientists have suggested that cellular metabolism is coordinated by the circadian clock.
That clock, an organism’s internal biological clock, is a network that synchronizes the metabolic pathways of cells to the optimal time of day, by anticipating periodic changes in the external environment.
“The time of day determines the design of the mitochondrial network, and this, in turn, influences the cells’ energy capacity,” Anne Eckert with the University of Basel’s Transfaculty Research Platform Molecular and Cognitive Neurosciences, and the study’s leader, said in a press release.
Disruptions to the circadian clock is known to affect mitochondrial respiration and cellular energy, but exactly how the two interact — the mechanism that links them — is not well understood.
Mitochondria are very dynamic structures, constantly adapting to a cell’s changing conditions. In this way, mitochondria have the ability to fuse together (fusion) and then divide (fission), depending on a cell’s needs. Things that work against the ability to fuse and divide can lead to health problems.
The researchers showed that the circadian clock and mitochondria interact through a protein called the dynamin-related protein 1 (DRP1), a key mediator of mitochondrial fission. Specifically, they found that the mitochondrial fission-fusion cycle is controlled by this fission protein, which — in turn — is synchronized by circadian rhythm.
To show this, the researchers conducted in vitro experiments (in cells in the lab) and then in vivo experiments in mouse models. In this case, they used DRP1-deficient or clock-deficient mice.
Results showed that blocking or slowing DRP1 activity, whether by altering genes or through chemical compounds, halted (“abolished” is the term the scientists used) the rhythm of cellular energy production, affecting the circadian clock.
Likewise, blocking pathways that regulate circadian metabolism and mitochondrial function affected DRP1 activity.
“Our findings provide new insight into the crosstalk between the mitochondrial network and circadian cycles,” the researchers concluded.
These findings may help in treating diseases characterized by a poorly regulated circadian rhythm and inadequate mitochondrial function, like Alzheimer’s, they suggested. |
This lesson teaches the engineering method for testing wherein one variable is changed while the others are held constant. Students compare the performance of a single paper airplane design while changing the shape, size and position of flaps on the airplane. Students also learn about control surfaces on the tail and wings of an airplane.
Search Results (6)
7th Grade Historical Literacy consists of two 43 minute class periods. Writing is one 43 minute block and reading is another. The teacher has picked themes based on social studies standards, and a read-aloud novel based on social studies serves as the mentor text for writing and reading skills. More social studies content is addressed in reading through teaching nonfiction reading skills and discussion.
Standards reflect CCSS ELA, Reading, and Social Studies Standards.
This unit is focused on the examination of a single topic: the Native Americans of the inland Northwest and the conflict that arose when non-native people started to settle in the region. The materials were created to be one coherent arc of instruction focused on that topic. The module was designed to include teaching notes that signal the kind of planning and thinking such instruction requires: close reading with complex text, along with specific instructional strategies or protocols that support students’ reading and writing with evidence, described in enough detail to make it very clear what is required of students and how to support them in doing this rigorous work. Materials include summative assessment of content and process, central texts, key resources, and protocols that support and facilitate student learning.
This activity emphasizes the importance of teaching reading and writing strategies for students to use with informational text.
The goal of this project is to stimulate students' critical thinking and investigative skills. We will lead students in an investigative process utilizing forensic science to uncover and solve a theft in the school building. Students will research forensic instrumentation. Next, they will move into discovery, exploring and designing a basic fingerprinting tool kit. Students will get the opportunity to build their own prototype of their tools. |
In this article, we will learn about how to use the DELTA function in Excel.
In simple words, this function returns 1 only when the values are an exact match with each other, and 0 otherwise.
The DELTA function returns 1 if the two given numbers are equal and 0 if they are not.
Number1 : first value.
Number2 : second value.
Let’s understand this function by using it in an example.
Here we performed 2 operations in the DELTA function.
4 + 2 = 6
3 * 2 = 6
As you can see from the above operations, the results are equal, so the function returns 1.
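The worksheet formula behind this comparison is not shown in the text, but based on the two operations above it would presumably be something like the following (the exact cell contents of the original example are an assumption here):
=DELTA(4+2, 3*2)
Both arguments evaluate to 6, so DELTA returns 1; by contrast, a formula such as =DELTA(4+2, 3*3) would return 0.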
Now let’s use the function, giving the arguments as cell references.
Here we have some numbers in Columns 1 & 2, and we need to find the exact number matches.
Use the formula:
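The formula itself appears to be missing from the text at this point; given the argument descriptions that follow, it is presumably:
=DELTA(A2, B2)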
A2 : first value, given as a cell reference.
B2 : second value, given as a cell reference.
As you can see, the DELTA function returns 1 only if both values are an exact match.
Now copy the formula to the other cells using the Ctrl + D shortcut key.
|
Why Do Many Flowers Have a Sweet Scent?
The sweet scent of flowers is designed to attract insects who seek food in the shape of pollen and the fragrant-smelling nectar. This nectar is a solution of sugars produced in little sacs called nectaries at the base of the flower petal. The insects have a part in the process of fertilization.
Almost all plants perpetuate themselves by means of sexual reproduction, during which a male reproductive cell or sperm fuses with the female reproductive cell or egg. When bees or other insects visit flowers in search of the sweet-smelling nectar, parts of their hairy bodies become dusted with pollen which contains the male reproductive cells. This rubs off on the flowers’ carpels which contain the egg or ovule.
Insects seem to be strongly attracted by sweet scents. In fact, some flowers, such as the Meadow Sweet, are so highly scented that insects are attracted to them although they have no nectar to offer. Most insect-pollinated flowers have evolved wonderful devices for guiding the movements of the insect. In this way they ensure that pollen is dusted over the insect’s body.
Although flowers can be identical in their color or shape, there are no two floral scents that are exactly the same because of the large diversity of volatile compounds and their relative abundances and interactions. Thus, scent is a signal that directs pollinators to a particular flower whose nectar and/or pollen is the reward.
Volatiles emitted from flowers function as both long- and short-distance attractants and play a prominent role in the localization and selection of flowers by insects, especially moth-pollinated flowers, which are detected and visited at night. Species pollinated by bees and flies have sweet scents, whereas those pollinated by beetles have strong musty, spicy, or fruity odors.
To date, little is known about how insects respond to individual components found within floral scents, but it is clear that they are capable of distinguishing among complex scent mixtures. In addition to attracting insects to flowers and guiding them to food resources within the flower, floral volatiles are essential in allowing insects to discriminate among plant species and even among individual flowers of a single species.
For example, closely related plant species that rely on different types of insects for pollination produce different odors, reflecting the olfactory sensitivities or preferences of the pollinators. By providing species-specific signals, flower fragrances facilitate an insect’s ability to learn particular food sources, thereby increasing its foraging efficiency. At the same time, successful pollen transfer (and thus, sexual reproduction) is ensured, which is beneficial to plants.
Plants tend to have their scent output at maximal levels only when the flowers are ready for pollination and when their potential pollinators are active as well. Plants that maximize their output during the day are primarily pollinated by bees or butterflies, whereas those that release their fragrance mostly at night are pollinated by moths and bats.
During flower development, newly opened and young flowers, which are not ready to function as pollen donors, produce fewer odors and are less attractive to pollinators than are older flowers. Once a flower has been sufficiently pollinated, quantitative and/or qualitative changes to the floral bouquets lead to a lower attractiveness of these flowers and help to direct pollinators to un-pollinated flowers instead, thereby maximizing the reproductive success of the plant. |
Garden Literature: Discover Community Gardens with Seedfolks
Download: SeedFolks Discover Community Gardens
Overview: Engage your class in thinking about the power of community gardens by reading Seedfolks by Paul Fleischman.* Community gardens provide residents with many benefits, such as food, a way to beautify their neighborhoods and the opportunity to get to know each other.
*Parents and Teachers: this book contains realistic, mature themes. Please read first and decide if it is appropriate for your child or class.
Grade Level/Range: 6th- 12th Grade
Objective: Students will:
- Read Seedfolks by Paul Fleischman
- Discuss the ways a community garden changes a neighborhood
- Discuss the impact of the garden on the book’s characters
Time: 1 week
- Seedfolks book
- Chalkboard and chalk or dry erase board and markers
Laying the Groundwork
Ask students to define a community. What does it mean? Can you be part of more than one community? Share some examples. Ask students to create a Venn diagram depicting the different communities they are part of.
- Instruct students to read the book Seedfolks by Paul Fleischman. As they read, ask them to think about and write down the answers to the following questions:
- What are some of the reasons the characters become involved with the garden?
- What benefits did the characters receive from the garden?
- Did the characters need to be physically involved in gardening to receive benefits?
- What challenges did the characters face?
- What does the title of the book mean?
- What ethnic groups are represented in the book?
- Lead a classroom discussion sharing the students’ answers to the questions above. Write the responses on a chalkboard or dry-erase board. Responses may include (although are not limited to):
What are some of the reasons the characters become involved with the garden?
- To deal with sorrow from death of a family member (Kim)
- To take advantage of the opportunity to change/improve life (Wendell)
- To participate in an activity enjoyed in past (Tío Juan)
- To honor a dead relative (Leona)
- To support activities to bring neighbors together (Sam)
- To make money (Virgil and his father)
- To be around people (Sae Young)
- To win the love of a girl (Curtis)
- To use the garden as therapy (Nora and Mr. Myles)
- To complete a school requirement (Maricela)
- To connect with neighbors (Amir)
What benefits did the characters receive from the garden?
- Beautification of the neighborhood (All)
- Removal of unsightly and unhealthy piles of garbage (All)
- A connection with a deceased relative (Kim)
- Hope for a better life and the feeling of making a difference (Wendell)
- A chance to relate to an older family member (Gonzalo)
- A chance to share expertise and a reminder of native country and former life (Tío Juan)
- Opportunities to bring people from all cultures together for a common purpose (Sam)
- Healed feelings of fear and loneliness; felt like part of a family again (Sae Young)
- Renewed interest in life (Mr. Myles)
- Discovered the wonder of nature and found a personal place in the bigger picture of life (Maricela)
- An opportunity to meet and really get to know neighbors (Amir)
- Improvement in environment and scenery (Florence)
Did the characters need to be physically involved in gardening to receive benefits?
No. Ana and Florence were both observers, but watching the gardeners and the gardens improved their outlook on the community and on life.
What challenges did the characters face?
- Finding the right governmental office to clean up the lot
- Locating a water source
- Ethnic groups separating themselves in the lot and initial hesitation in getting to know each other
- Litter and vandalism
- Lack of gardening information
- Stealing of harvest
- Animals eating plants
- Criminal activity
What does the title of the book mean?
‘Seedfolks’ is a term used by the character Florence to describe the first people to take residence in an area. In an interview with NPR, the author Paul Fleischman shared that it was an old term for ancestors.
What ethnic groups/countries of origin are represented in the book?
Vietnamese, Rumanian, Guatemalan, Haitian, Korean, British, Mexican, Indian and American
- As a last question, ask the students: who started the garden? The answer is the character Kim, a nine-year-old child. Ask your students to discuss their thoughts on the impact a child’s actions can have on a community. Ask them to brainstorm ways they might be able to help improve their community.
Find a way for your students to contribute to their community, either as a group or individually. During your discussion, make sure to consider actions that can help mitigate human impact on the environment. Ideas may include organizing a trash pick-up, weeding or renovating a current garden area, creating a new garden area or planting container gardens, leading a litter awareness campaign, or planting street trees. Let the students brainstorm ideas and involve them in all planning stages of the activity.
As a class, read or listen to the NPR Backseat Book Club featuring Seedfolks. This spotlight includes an interview with author Paul Fleischman.
The characters from Seedfolks arrived in Cleveland from many different countries and the plants in our gardens are the same way. Assign each student a common garden plant to investigate. Ask them to find out the plant’s origin, history, uses and how it moved throughout the world. Have each student prepare a written and oral report on their plant. To compile the collected information, place a picture of each plant (or of its fruits or flowers) on the country of origin on a large world map poster. Pictures can be found in old seed catalogs or drawn by the students.
Invite a representative from a local community garden to your school and conduct a class interview to help the students learn more about the garden and how it benefits the community, in addition to learning how to conduct an interview. As a follow-up, ask students to practice their skills by finding a gardener in their own neighborhood to interview and creating a newspaper article with the information they collect.
Introduce students to the fact that different cultures eat different types of foods, including different fruits and vegetables. Research information about common fruits and vegetables in the cultures mentioned in Seedfolks or the different ethnic groups represented in your area. Research methods can include Internet searches, cultural reference books or personal interviews. If possible, host an International Festival in your classroom or school, giving students the opportunity to taste the foods from different cultures.
Link to Standards
MS-ESS3 Earth and Human Activity
MS-ESS3-3. Apply scientific principles to design a method for monitoring and minimizing a human impact on the environment.
HS-LS2 Ecosystems: Interactions, Energy, and Dynamics
HS-LS2-7. Design, evaluate, and refine a solution for reducing the impacts of human activities on the environment and biodiversity. |
LITERACY YEAR 8 TEST
Year 8 Literacy Test 1 - ProProfs Quiz
The correct term for making sure the words have the correct tense depending on whether it is singular or plural is...
Year 8 Literacy – National Curriculum - FutureSchool
Year 8 Literacy topics: 1. Conjunctions – Sentences (Objective: To use conjunctions to increase the variety of sentence structure in written expression). 2. Sentence Improvement (Objective: To construct expressive sentences by using adjectives, adverbs, phrases and clauses). 3. Transitive and Intransitive Verbs.
National Reading and Numeracy Tests | Literacy
As part of the National Literacy and Numeracy Framework, all KS3 pupils (Y7, Y8 & Y9) in Wales will be expected to do one reading and two numeracy tests. A leaflet explaining the tests in further detail is available on the ‘Learning Wales’ website, and previous years’ tests are available to download below. Results of the tests will be communicated to you at a later date.
National Reading and Numeracy Tests -Years 7, 8 and 9
National Reading and Numeracy Tests -Years 7, 8 and 9 – KS3. As part of the National Literacy and Numeracy Framework, all KS3 pupils in Wales will be expected to do one reading and one reasoning paper based test. The tests will be administered in school between Tuesday 30th April and Tuesday 7th May 2018.
The Year 8 Grammar and Punctuation Quiz | Studiosity
Test your understanding of grammar and punctuation with this practice quiz, suitable for students in Year 8 of the Australian Curriculum.
k10outline - Year 8 Syllabus
Year 8 Achievement Standard Science Understanding At Standard, students compare physical and chemical changes and use the particle model to explain and
Could you pass the new literacy and numeracy tests?
Mar 13, 2018: Last year's NAPLAN results revealed that only 32 per cent of year 9 students achieved a band 8 in all three domains, and 68 per cent would need to sit at least one online test to qualify for their... (Author: Pallavi Singhal)
Literacy test - Wikipedia
A literacy test assesses a person's literacy skills: their ability to read and write. Literacy tests have been administered by various governments to immigrants. In the United States, between the 1850s and 1960s, literacy tests were administered to prospective voters, and this had the effect of disenfranchising African Americans and others.
Year 8 Grammar Booklet 1 and tasks - Burford School
8. mist 9. morning 10. mourning 11. patience 12. patients 13. piece 14. peace 15. scene 16. seen 17. stair 18. stare 19. witch 20. which Vocabulary Task: Choose ten of your spellings and write a sentence for each, demonstrating you know the meaning of the word. Use a dictionary to look up the meaning if you are unsure. 1. aisle 11. cell 2. isle 12. sell
KS3 Literacy Skills Builder Booklets | Teaching Resources
Feb 22, 2018: KS3 Literacy Skills Builder Booklets. A series of booklets designed for years 7-9 containing all key terms, definitions and worksheets to test knowledge.
|
San Francisco (Web Desk) – It is hard to imagine you could reconstruct a record of fog dating back thousands of years, but this is exactly what Chilean scientists have done.
The low-lying cloud is seemingly so transient and intangible, and unlike rivers and glaciers it leaves no easy-to-read impressions on the landscape.
BBC reported that a Santiago team has been able to trace the fog history of the Atacama Desert by studying Tillandsia plants.
Their chemistry suggests strongly that this local fog has increased over time. It is a period covering the last 3,500 years.
“I don’t think there’s any other place in the world where I’ve actually seen a record of fog, even spanning the last hundred years,” said Claudio Latorre Hidalgo from the Catholic University of Chile.
“What little we know about fog is from measurement instrumental data that we have, and from satellite data that only spans the last 20 years.
“So, this is actually a unique opportunity to study the evolution of a fog ecosystem over the Late Holocene, and what are the major drivers and controls of the mechanisms that produce that fog in the long term – the very long term.”
The palaeoclimate expert was discussing his team’s research here at the Fall Meeting of the American Geophysical Union – the world’s largest annual gathering of Earth scientists.
The Atacama is famous for its super-arid conditions; there are places where it has not rained for years. But life can eke out an existence if it can exploit the fog that rolls in off the Pacific. Tillandsia are perfectly adapted opportunists.
These wiry, grey plants have no roots. They clutch weakly at sand dunes, but arrange themselves at every spatial scale to maximise their capture of the fog.
They derive everything they need from the damp air – not simply the must-have water, but also all the chemical nutrients required to underpin their biology.
Dr Latorre Hidalgo and colleagues have dug deep into the dunes to uncover a multi-millennia succession of Tillandsia; and they have described a pronounced trend: the younger the plants, the more of the lighter type, or isotope, of nitrogen atom that they have incorporated into their tissues.
Analysis of modern fog suggests this lighter nitrogen is favoured, and so the observed trend in the Tillandsia would strongly indicate the fogs of the Atacama have increased over time… with some complications.
“How the nitrogen gets into the fog is a much more complex question,” said Dr Latorre Hidalgo.
“I suspect a lot of that nitrogen is of marine origin. There is a huge oxygen-minimum zone off the coast of northern Chile, where there is a lot of denitrification going on.
“So, there is a lot of molecular nitrogen going into the air and a lot of nitrogen oxide as well.
“We know there is both ammonia and nitrate in the fog. So, you get both organic and inorganic forms of nitrogen.”
Oxygen-minimum zones are mid-water regions in the ocean that are extremely low in oxygen abundance, in part because marine organisms are removing it very fast and also because the waters that move into the zone fail to replenish the oxygen as they themselves are depleted. This is usually cold, upwelling water. And, again, this fits the overall picture because cold coastal waters will produce more fog.
“Our monthly fog collector data shows there is a significant trend with the coastal sea-surface temperatures and the fog. So, when you get El Nino events (and local surface waters warm), this warm water dissipates the thermal inversion that’s holding in the low-lying cloud and this dissipates the fog.
“We think that over the last three thousand years, the coastal waters have gotten much colder, much more productive and that’s releasing nitrogen from this oxygen-minimum zone to fertilise the plants.” |
Top Facts About the New GCSEs
Courses and exams are changing to ensure that young people have the knowledge and skills they need to succeed in the 21st Century. The new GCSEs ensure that students leave school better prepared for work or further study. They cover more challenging content and are designed to match standards in the strongest performing education systems elsewhere in the world.
- The new GCSEs in England have a new 9 to 1 grading scale, to better differentiate between the highest performing students and distinguish clearly between the old and new exams.
- Grade 9 is the highest grade and will be awarded to fewer students than the current A*.
- The first exams in new English language, English literature and maths GCSEs were sat in summer 2017 and the rest of the new GCSEs will be rolled out over the next three years.
- The old and new GCSE grading scales do not directly compare but there are three points where they align, as the diagram shows:
- The bottom of grade 7 is aligned with the bottom of grade A;
- The bottom of grade 4 is aligned with the bottom of grade C; and
- The bottom of grade 1 is aligned with the bottom of grade G.
- Although the exams will cover more challenging content, it is right that pupils are not disadvantaged simply by being the first to sit the new GCSEs. The approach used by Ofqual, the exams regulator, ensures that, all things being equal, broadly the same proportion of pupils get grades 1, 4 and 7 and above in any subject as would have got G, C or A and above respectively in the old system.
- The Department for Education recognises grade 4 and above as a ‘standard pass’; this is the minimum level that students need to reach in English and maths, otherwise they need to continue to study these subjects as part of their post-16 education. There is no re-take requirement for other subjects.
- Employers, universities and colleges will continue to set the GCSE grades they require for entry to employment or further study. We are saying to them that if you previously set grade C as your minimum requirement, then the nearest equivalent is grade 4. The old A* to G grades will remain valid for future employment or study.
- For measuring school performance, the Department for Education will publish the proportion of students achieving a grade 5 and above in English and maths. A grade 5 and above in English and maths is recognised as a “strong pass”, a benchmark in line with the expectations of top performing education systems around the world – this is one of the headline measures of school performance. The Department for Education will also publish the proportion of students achieving a grade 4 or above in English and maths for transparency and to enable schools to show their students’ achievements. |
One of the most valuable skills we can teach our history students is to use evidence from the past to develop their own opinions about historical events. One popular program that many high schools use is the DBQ Project. Students use textual and visual primary and secondary scholarly sources to answer a question. For example, my sophomores recently had a class debate based in the evidence from the DBQ entitled "North or South: Who Killed Reconstruction?" Essentially, students use evidence from experts and first-hand witnesses to solve problems, just like a detective would. The program has a fabulous reputation and student essays that result are well-thought-out and evidence-based. Unfortunately, one small pitfall of the program is that it requires a lot of paper and not much technology.
Why not combine technology with historical evidence analysis?
I found a great website that enables students to do the same kind of analysis in a webquest-style environment. Surprisingly, students are more enthusiastic about the same tasks when they can simply use a computer instead of writing everything out with pencil and paper. Historical Scene Investigation puts famous dilemmas from history into "case files" and asks students to solve the mysteries. Students analyze primary sources, similarly to the DBQ Project, but the entire task can be done online.
Recently, my freshmen were finishing up their unit on the causes of the American Revolution. As a review of some of the events, we spent two class periods in the computer lab where they chose to work on one of two case files:
- The Boston "Massacre": Students read about the event, sifted through both American and British first-hand accounts, and decided whether justice was served at the trial where 6 of the 8 accused British regulars were acquitted.
- Lexington and Concord: After reading both American and British first-hand accounts, students had to decide the historical question: Who fired the first shot? Was it the Minutemen or the British regulars?
In Document D, created by John Bufford, it shows colonists are attacking, while others are getting slaughtered by the soldiers' guns. In Document E by Alonzo Chapel, it shows colonists holding weapons attacking the soldiers. In Paul Revere's depiction, it shows innocent colonists being brutally killed. I believe that the colonists were not innocent. They did somewhat attack the soldiers. But shooting the colonists was not justified. I believe that justice was not served. How can branding someone's thumb be a justified exchange for someone's life? All of the soldiers should have been put in jail and branded because they killed a group of people over a small conflict that could have been solved a different way.
The British had well disciplined soldiers who would not fire without an order. This is clear as one British soldier commented on their intent not to fire and said, "we still continue advancing, keeping prepared against an attack though without attacking them." This line clearly represents how the British did not want to fire on the Patriots.

It was satisfying for me, as their teacher, to read that these 14- and 15-year-old students were combining their own opinions and reasoning skills with evidence from the past. Their number grade was based on a rubric I developed according to the assignment description and class standards we have developed throughout the school year. Overall, however, I think they did pretty well, don't you? |
A primary goal of ADHD therapy is to reduce undesirable symptoms associated with the disorder and improve performance of daily tasks and responsibilities. In addition to treatment with stimulant prescription ADHD medications, ADD therapy can enhance and complement the positive effects of ADHD medication in children.
How ADHD Therapy for Children Works
ADD therapy provides counseling and tools that the child can use to help manage his ADHD symptoms. Stimulant drugs, commonly used in treatment of ADHD, seem to work by bringing brain neurotransmitter levels to normal. ADHD medications, while very effective in reducing symptoms, provide only physiological relief to the child. To reach his potential and achieve success, the child must learn a variety of skills, behavior modifications, and how to change destructive thought patterns. That's where ADHD therapy for children can prove very effective.
Types of ADHD Therapy
Social workers, psychologists, psychiatrists, or other mental health professionals provide ADHD therapy for children (see Where to Find ADD Help). These professionals use a variety of techniques in counseling the ADD child, but ADHD behavioral therapy and ADHD cognitive therapy techniques represent the most common types of ADHD therapy.
- ADHD Behavior Therapy - The mental health professional teaches parents and children behavior modification strategies that help them cope with challenging situations. Think of these techniques in the context of ABC; wherein, A represents Antecedents, B represents Behaviors, and C represents Consequences. Essentially, ADHD behavior therapy utilizes a basic token-reward system. Antecedents are triggers that occur prior to behaviors. Behaviors are negative things the child does that parents and therapists work to change. Consequences are the interventions consistently imposed by the parents to effectively change the behavior in the future.
- ADHD Family Therapy - Counselors help parents and siblings of the ADHD child as a group by teaching them how to cope with the pressures and issues that emerge from living with a child with ADHD.
- Psychotherapy - The discipline of psychotherapy uses ADHD cognitive therapy techniques in addition to other therapeutic strategies. Many children with ADD have co-morbid mental disorders, such as anxiety and depression. The psychotherapist can discuss issues that bother the child and explore negative behaviors, as well as provide ways to reduce the effects of ADD symptoms.
- Support Groups and Skills Training – Parents and children can attend ADD support group meetings, which include skills training and education about ADHD therapy for children. The meetings provide a support network of other families coping with the disorder. Together, they can discuss common issues and experiences with using the various coping skills and strategies.
Issues Addressed in ADHD Therapy
ADHD therapy techniques address a variety of issues associated with ADHD in children. Common issues dealt with during therapy sessions include:
- destructive thought patterns
- emotional outbursts
- learning challenges
- difficulties maintaining friendships and other social relationships
- impatience and impulsiveness
Finding a Qualified ADD Therapy Professional
Finding a skilled mental health professional with years of experience providing ADHD therapy for children is very important. You can start with a referral from your child's pediatrician. Another referral source is your county psychological association. Also check with other parents of ADHD children. There's a good chance their child is receiving ADD therapy and you can get feedback on that particular therapist and their ability to provide ADHD therapy for children.
Parents can also search through several ADHD practitioner referral sites on the Internet. Physicians and therapists listing on these services do so because they have experience providing ADHD therapy and likely specialize in it. |
Twenty-seventh in a seventy-five part series sponsored by the Oklahoma Heritage Association as its contribution to the Diamond Jubilee Celebration in 1982.
1934 – Year of Disaster
by Mac McGalliard
“Okies, Grapes of Wrath, Dust Bowl” are all terms suggestive of the years of the Great Depression in Oklahoma. The depression got under way in 1930 and extended through the decade of the 1930s and beyond, but the year when the worst disaster struck most families was 1934.
That was the year it forgot to rain. Dust storms blotted out the sun, and the low prices and lack of money hit bottom. That was the year that the trickle of people out of the state developed into the greatest flood of outmigration the nation has ever seen. By the end of the decade, it was estimated that some 250,000 people left Oklahoma, and 1934 was the year the flood began.

In the face of barren fields and parched pastures, water wells and springs drying up, families by the thousands loaded their most necessary possessions on their automobiles, and left their homes and dreams behind to head to more prosperous states. Five years later, in 1939, their continued sufferings and disappointments were to be chronicled for all time by John Steinbeck in his novel, The Grapes of Wrath.
What brought on the disaster of 1934 was a combination of factors, some natural and some man-made. Decades of poor farming practices had begun to take their toll in the late 1920s and reached a peak in 1934. The natural cover, grass sod and timber, had been destroyed on the land, leaving the soil exposed to wind and water erosion. This destruction was made worse by running rows up and down hills, by clean cultivation, and the destruction of crop residues. Soil fertility was used up, and none put back.
Soon after the onset of the nationwide economic depression with the stock market crash of 1929, jobs in the towns and industries began to be eliminated, and there was nowhere for the people to go but back to the land, and this increased the pressure on the already abused land. Every old house or shanty on the countryside was occupied by a distressed family who hoped to survive with a garden, a cow or two, and a few chickens.
The weather was drier than normal in 1933, and it developed into a major drought in 1934. Windstorms blew across the bare prairies and plains of western Oklahoma, west Texas, and portions of New Mexico, Colorado, and Kansas. That area was the heart of the “Dust Bowl.” Blown dust and fine sand piled up along fence rows until only the tops of the posts were visible. Sand dunes built up against farm buildings. Crops and pastures were devastated.
The only help available for the suffering families was the beginnings of the state and federal drought and depression relief programs. 1934 was the year of the infamous killing of cattle and hogs, “little pigs and calves,” with a mere pittance paid to the owners, but it was better than nothing. “Made work” jobs were being provided through the federal WPA (Works Progress Administration) and the CCC (Civilian Conservation Corps). The Governor at that time was William H. “Alfalfa Bill” Murray, and the state-sponsored relief program (mostly on rural roads) was called “Murray Work.”
But the relief programs were not enough for many families. The West beckoned, and they went. Mostly to California, but also to Arizona, Oregon, and Washington. Generally, they were not welcomed in those states, and many were exploited by employers, but most of them survived and became permanent residents. Not all were “Okies,” there were also “Arkies, Kansies, Texicans,” etc.
Now with development of Sunbelt jobs and prosperity, some of them are “coming back home.”
Newspaper clipping from Blanchard, Oklahoma newspaper celebrating 1982 Oklahoma Diamond Jubilee. Found in scrapbook belonging to Raymond R. Stone, born 1919 in Hastings, Jefferson County, Oklahoma |
Germs and children seem to go hand-in-hand despite a parent's best efforts. An estimated 40 percent of kids between the ages of 5 to 17 missed at least three days of school in 2007 due to injury or sickness, according to the Centers for Disease Control and Prevention. Help cut down the number of school days your preschooler misses once he enters kindergarten by teaching him about hygiene and infection control.
Turn hand washing into an enjoyable activity by singing songs, reciting the ABCs or playing a game while teaching your preschooler the importance of proper hygiene. For example, sing a song featuring every member of the family, or tell your preschooler a funny story about how germs live on his dirty, unwashed hands. The CDC urges parents to encourage their preschoolers to wash their hands with warm water and soap after playing outdoors, sneezing, using the toilet and before eating.
Play a game to teach your preschooler how easily germs are spread. Fill two plastic baggies with colorful stickers. One baggie is for your preschooler, the other is for you. Start the game by giving your preschooler a hug. While hugging him, place a sticker on his back. Explain that you're sick with the flu or a cold, and now he is, as well, and everything you touch becomes infected by germs. Instruct your preschooler to keep the baggie with him all day and place a sticker on everything he touches, including the television remote or his favorite stuffed toy. Use this game to emphasize how easily germs are spread, and why it's so important for your preschooler to wash his hands, especially before eating.
Hand Sanitizer or Hand Washing Chart
Create a chart that tracks each time your preschooler washes his hands with soap and water, or eliminates germs with hand sanitizer. Using a poster board and markers, create a chart featuring all the instances when your preschooler should wash his hands, including after using the toilet, before eating, after sneezing or coughing and after playing outside. Each time your preschooler washes his hands or uses hand sanitizer without being told, place a gold star on the chart. After your preschooler gathers a set number of stars, such as 20 or 30, provide him a reward.
Will You Get Sick?
Teach your preschooler about the various ways people can catch the cold or flu by playing a game called “Will You Get Sick?” Walk through your home or neighborhood and provide your preschooler with a variety of scenarios. For example, tell your preschooler he was just outside playing with a friend who has the sniffles and a cough. Your preschooler then grabs a snack and begins eating, but he forgets to wash his hands. Ask your preschooler if he could get sick. For each correct answer, give your child a point. After achieving a set amount of points, provide your preschooler a reward.
|
Insects were important religious symbols in ancient Egyptian culture and mythology. They were featured prominently in hieroglyphs, seals, and carvings. Depictions of insects were used as talismans for protection, and even placed in burial tombs. To the ancient Egyptians, these tiny creatures were powerful symbols of resurrection and eternal life.
The scarab beetle is an interesting little insect that rolls a ball of manure into a perfect sphere in the direction of east to west, the solar route. The ancient Egyptians associated the ball of dung with the sun. The scarab was personified by Khepri, a sun god. The deity, Khepri is depicted having the head of a scarab, and symbolizing the morning. Khepri was also associated with the sun god Ra, and the creator god Atum. Ancient Egyptians believed the scarab would ward off evil and protect the owner. Scarab talismans were placed in burial chambers as a symbol of eternal life. Scarabs were also a symbol of good luck and prosperity.
The jewel beetle is another sacred insect of ancient Egypt. The jewel beetle has a brightly colored appearance, and comes in shades of green, gold and reddish-purple. The symbolism of the jewel beetle remains somewhat obscure; however, it may have been associated with Osiris, lord of the underworld. According to Egyptian myth, Osiris was tricked by his brother Seth, and became trapped in a tamarisk tree. Isis split the tree open, releasing Osiris. The jewel beetle may have been a symbol of rebirth because it emerges from split logs.
Click beetles are common in Egypt, and are named for the clicking sound they make when they jump. Click beetles were associated with Neith, the goddess of protection. Ancient Egyptian artifacts and carvings depict the click beetle. Although the symbolism is unclear, the beetle may have had some religious significance.
Honey bees played an important role in ancient Egyptian society since honey was the main source of sweetener. Honey was also used in medicinal preparations. The Egyptians used beeswax for purposes such as forming molds, and for paint and varnish. Honey was such a valuable commodity in ancient Egypt that it was used for payment. Both bees and honey were considered sacred. Honey was given as an offering to the dead. Honey was a symbol of protection and resurrection. Honey bees were associated with the sun god Ra, and the protective goddess Neith. According to Egyptian myth, honey bees were Ra's tears.
In ancient Egypt, flies represented courage and tenacity. Stone carvings in the form of flies have been found and dated to approximately 3500 B.C. According to Egyptian mythology, flies protected against misfortune and disease.
- Insect Mythology; Gene Kritsky
- The Sacred Bee in Ancient Times and Folklore; Hilda M. Ransome
- Museum.UNL.edu: Ancient Egypt
- Insects.org: Beetles as Religious Symbols
- The Mythical Zoo: An A to Z of Animals in World Myth, Legend and Literature; Boria Sax
- Insect Biodiversity: Science and Society; Robert G. Foottit, Peter H. Adler, eds.
|
Theories of Flight - An Overview
During the centuries before the Wright brothers' first flight in 1903, physical scientists had developed a large body of theory concerning fluid flow. Much of their work had focused on understanding the flow of water, an incompressible fluid, and the science of fluid flow was originally called hydrodynamics. Only a small number of these researchers were interested in studying airflow, largely because human flight was believed to be impossible. Yet because air and water are both fluids, some important concepts for the science of aerodynamics came from studies of water.
The first of these was Bernoulli's Principle, which states that in a fluid in motion, as the fluid's velocity increases, the fluid's pressure decreases. Derived by Daniel Bernoulli during the 1730s from an examination of how water flowed out of tanks, this principle is often used (not entirely correctly) to explain how wings generate lift. Because of the way wings are shaped, air flowing across the top of the wing must move faster than the air across the wing's bottom. The lower air pressure on top of the wing generates a “suction” that lifts the airplane. Bernoulli's principle was an incomplete description of how lift works, but it was a beginning.
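For reference (the article itself does not quote the formula), Bernoulli's principle for steady, incompressible, inviscid flow along a streamline is usually written as

p + \tfrac{1}{2}\rho v^2 + \rho g h = \text{constant}

where p is the static pressure, \rho the fluid density, v the flow speed, and h the height. For airflow over a wing the \rho g h term is usually negligible, leaving the familiar trade-off: where the velocity is higher, the pressure is lower.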
Bernoulli's student, Leonhard Euler, made what was probably the 18th century's most important contribution to 20th century aerodynamics, the Euler equations. During a 25-year period in St. Petersburg, Russia, Euler constructed a set of equations that accurately represent both compressible and incompressible flow of any fluid, as long as one can assume that the flow is inviscid—free of the effects of viscosity. Among other things, Euler's equations allow accurate calculation of lift (but not drag). The equations were published in a set of three papers during the 1750s and were well known to individuals interested in experimenting with flying machines later in that century, such as George Cayley. Unfortunately, neither Euler nor anyone else had been able to solve the equations during the 18th or early 19th centuries. This did not stop theoreticians from continuing to seek yet more powerful analytic descriptions of fluid flows. The key issue missing from Euler's description of fluid motion was the problem of friction, or what modern aerodynamicists call skin drag. During the early 19th century, two mathematicians, the Frenchman Louis Navier and the Irish-born George Stokes, independently arrived at a set of equations that were similar to Euler's but included friction's effects. Known as the Navier-Stokes equations, these were by far the most powerful equations of fluid motion, but they were unsolvable until the mid-20th century.
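As a point of reference (these equations are not written out in the article), the incompressible Navier-Stokes momentum equation is commonly stated as

\rho \left( \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} \right) = -\nabla p + \mu \nabla^2 \mathbf{u}, \qquad \nabla \cdot \mathbf{u} = 0

where \mathbf{u} is the velocity field, p the pressure, \rho the density, and \mu the viscosity. Setting \mu = 0 removes the friction term and recovers the inviscid Euler equations described above.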
The unsolvability of the highly complex Euler and Navier-Stokes equations led to two consequences. The first was that theoreticians turned to trying to simplify the equations and arrive at approximate solutions representing specific cases. This effort led to other important theoretical innovations, such as Hermann von Helmholtz's concept of vortex filaments (1858), which in turn led to Frederick Lanchester's concept of circulatory flow (1894) and to the Kutta-Joukowski circulation theory of lift (1906). (see fig) The second consequence was that theoretical analysis played no role in the Wright brothers' achievement of powered flight in 1903. Instead, the Wrights relied upon experimentation to figure out what theory could not yet tell them.
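The circulation theory mentioned here culminates in the Kutta-Joukowski theorem, which (stated here for completeness, not quoted from the article) gives the lift per unit span of a two-dimensional airfoil as

L' = \rho_\infty V_\infty \Gamma

where \rho_\infty is the freestream density, V_\infty the freestream velocity, and \Gamma the circulation bound to the airfoil.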
Experimentation with airfoil shapes had its own long history. Researchers had devised two different instruments with which to conduct airfoil experiments. The earlier device was called a whirling arm, which spun an airfoil around in a circle in order to generate lift and drag data. The second instrument, the wind tunnel, became the primary tool for aerodynamic research during the first half of the 20th century. Invented by Francis Wenham in 1870, the wind tunnel was not initially well regarded as a scientific instrument. But that changed when the Wright brothers used one of their own design to demonstrate that data produced by numerous other respected and methodical researchers using the whirling arm was wrong. The discredited whirling arm vanished as a research tool after 1903, while a vast variety of wind tunnels sprang up across the western world.
After the Wrights' success, theory and theoreticians began to play a larger role in aeronautics. One major reason why was Ludwig Prandtl, who finally explained the two most important causes of drag in 1904. Prandtl argued that the fluid immediately adjacent to a surface was motionless, and that in a thin transitional region (the boundary layer), as one moved away from the surface the fluid velocity increased rapidly. At the edge of this boundary layer, the fluid velocity reached the full, frictionless velocity that researchers had been studying for the past two centuries. Thus the effects of friction, or skin drag, were confined to the boundary layer. Under certain circumstances, this boundary layer could separate, causing a dramatic decrease in lift and increase in drag. When this happens, the airfoil has stalled. Prandtl's boundary layer theory allowed various simplifications of the Navier-Stokes equations, which in turn permitted prediction of skin friction drag and the location of flow separation for simple shapes, like cones and plates. While Prandtl's boundary layer simplifications still did not make calculation of complex shapes possible, the boundary layer theory became very important to airfoil research during the 1920s.
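To give one concrete example of what the boundary layer simplification made possible (this specific result is not quoted in the article): for laminar flow over a flat plate, the Blasius solution of Prandtl's boundary layer equations predicts a local skin friction coefficient of

c_f = \frac{0.664}{\sqrt{Re_x}}

where Re_x is the Reynolds number based on distance from the plate's leading edge. Results of this kind allowed skin friction drag to be estimated for simple shapes long before the full Navier-Stokes equations could be solved.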
The 1920s also saw the beginning of research focused on what was called the compressibility problem. Because air is a compressible fluid, its behavior changes substantially at high speeds, above about 350 miles per hour (563 kilometers per hour). Airplanes could not yet go that fast, but propellers (which are also airfoils) did exceed that speed, especially at the propeller tips. Airplane designers began to notice that high-speed propellers were suffering large losses in efficiency, causing researchers to investigate. Frank Caldwell and Elisha Fales, of the U.S. Army Air Service, demonstrated in 1918 that at a critical speed (later renamed the critical Mach number) airfoils suffered dramatic increases in drag and decreases in lift. In 1926, Lyman Briggs and Hugh Dryden, in an experiment sponsored by the National Advisory Committee for Aeronautics (NACA), demonstrated that a dramatic increase in pressure occurred on the airfoil's top surface at the critical speed, indicating that the airflow was separating from the surface. Finally, the NACA's John Stack found the cause of this flow separation in 1934. Using a special camera, Stack was able to photograph the formation of shock waves above the airfoil's surface. As the figure shows, the shock wave was the termination of a pocket of supersonic flow caused by the air's acceleration over the airfoil. The shock wave, in turn, caused the boundary layer to separate, essentially stalling the airfoil.
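A sense of how compressibility entered the theory of the period (this particular correction is not cited in the article): the Prandtl-Glauert rule estimates compressible pressure coefficients from incompressible ones as

C_p = \frac{C_{p,0}}{\sqrt{1 - M_\infty^2}}

where C_{p,0} is the low-speed value and M_\infty the flight Mach number. The correction grows without bound as M_\infty approaches 1, one way of seeing why the transonic regime defied simple theory and had to be attacked experimentally, as described here.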
Over the subsequent decades, several individuals found ways to delay and weaken shock wave formation to permit higher speeds. The first of these was Adolf Busemann's 1935 idea of swept wings, initially ignored but rediscovered in the 1940s by Robert T. Jones and now used on all modern jet airliners. During the 1950s, NACA researcher Richard T. Whitcomb developed the transonic area rule, which showed that one could reduce shock strength by careful tailoring of an aircraft's shape. In the 1960s, Whitcomb also demonstrated that one could design an airfoil that could operate well above the critical Mach number without encountering severe flow separation—a supercritical wing.
Long before Whitcomb worked out the supercritical wing, however, the quest for higher performance had led the US Air Force to demand true supersonic aircraft. From the standpoint of aerodynamic theory, supersonics posed an easier problem. On a transonic aircraft, shockwaves formed on top of the wings, meaning that part of the wing had supersonic flow and part of it had subsonic flow—a very difficult problem to resolve mathematically. In supersonic flight, however, the shockwaves formed at the aircraft's leading edges, meaning that the entire airflow around the vehicle was supersonic. This eliminated a large source of complexity. During the 19th century and the first two decades of the 20th century, researchers Leonhard Euler, G.F.B. Riemann, William Rankine, Pierre Henry Hugoniot, Ernst Mach, John William Strutt (Lord Rayleigh), Ludwig Prandtl, and Theodor Meyer had developed a solid methodology for calculating the behavior of supersonic shockwaves. During the 1920s, Swiss scientist Jakob Ackeret, working in Prandtl's laboratory at Goettingen, succeeded in simplifying this body of theory enough so that it could be used to calculate the lift and drag of supersonic airfoils. Supersonic theory thus preceded supersonic flight substantially.
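As an illustration of just how tractable purely supersonic flow is (the formulas are not given in the article), Ackeret's linearized theory yields closed-form results for a thin flat-plate airfoil at small angle of attack \alpha and freestream Mach number M_\infty > 1:

c_l = \frac{4\alpha}{\sqrt{M_\infty^2 - 1}}, \qquad c_{d,wave} = \frac{4\alpha^2}{\sqrt{M_\infty^2 - 1}}

simple expressions for lift and for the wave drag discussed below, of a kind that the transonic regime never permitted.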
The major challenge aerodynamicists faced in making supersonic flight reasonably efficient was in finding ways to reduce the one unique kind of drag supersonic aircraft experienced: wave drag. Sonic shock waves were really compression waves, which meant that the air behind the shock was at a higher pressure than the air in front of the shock. The higher pressure behind the shock was exerted directly on the aircraft's leading edges and tended to slow it down—in other words, the higher pressure produced more pressure drag. In 1932, again well before supersonic flight was possible, Hungarian scientist Theodore von Kármán developed a method to calculate wave drag on simple bodies. It could also be used on more complex shapes, but the calculations necessary quickly became overwhelming. Through the 1960s, wave drag calculations for complex aircraft shapes were so laborious they were rarely done. Instead, aerodynamicists involved in supersonic research primarily experimented with wind tunnel models until electronic digital computers powerful enough to do the calculations became available in the 1960s.
If the challenges of designing supersonic aircraft helped motivate aerodynamicists to adopt the digital computer as a design tool, hypersonic vehicles sparked a new subdiscipline, aerothermodynamics. Hypersonic flight, traditionally defined as speeds above Mach 5, meant new problems for aerodynamicists, one of which was the role of heating. At high speeds, friction causes the surface of a vehicle to heat up. At Mach 6.7, the speed NASA's X-15 research aircraft reached in 1967, temperatures exceed 1300° F (704° C). Vehicles returning from space hit the atmosphere at speeds above Mach 18, producing temperatures above those at the Sun's surface. This places enormous heat loads on vehicles that can destroy them if their aerodynamic characteristics are not very carefully chosen.
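One way to see why heating dominates the problem (a standard compressible-flow relation, not quoted in the article): for a perfect gas, the temperature of air brought to rest near the vehicle surface scales with the stagnation temperature

\frac{T_0}{T_\infty} = 1 + \frac{\gamma - 1}{2} M_\infty^2

with \gamma \approx 1.4 for air, so the recoverable temperature rise grows roughly as the square of the Mach number; at re-entry speeds, real-gas effects make the picture even more severe.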
After World War II, as the United States began to develop rockets for use as weapons and for space flight, the need to design vehicles for heat began to supplant the need to design them for aerodynamic efficiency. The earliest, and simplest, example of how important heating is to hypersonic aircraft design was the late 1950s recognition that for vehicles re-entering the earth's atmosphere, aerodynamicists should deliberately choose aerodynamically inefficient shapes. H. Julian “Harvey” Allen of the NACA's Ames laboratory is generally credited with this realization. Engineers designing missiles in the 1940s and 1950s expected to copy the aerodynamics of artillery shells—cones flying point first—for the missiles' warheads. Allen proposed that this was exactly backward. Warheads could still be conical, but they should fly blunt-end first. Allen based his reasoning on the behavior of the shock wave that formed in front of the vehicle. Shock waves dissipate energy, and the stronger the shock wave, the more energy it would dissipate away from the vehicle structure. A pointed vehicle would form a weak shockwave and therefore would experience maximum heating. A blunt vehicle would produce a much stronger shockwave, reducing the heat loading the vehicle had to withstand. In essence, Allen's blunt-body theory required aerodynamicists to discard their long-standing emphasis on aerodynamic efficiency and embrace deliberately inefficient shapes for hypersonic flight.
One unusual concept that emerged from the demands of hypersonic flight was the lifting body—an airplane without wings. In the United States, this idea was first proposed at the same 1958 NACA conference on High Speed Aerodynamics that witnessed presentation of the space capsule idea used by both the United States and Soviet Union for their space programs of the 1960s. A lifting body-based hypersonic vehicle would be shaped like a blunt half-cone, to mitigate heating, and would offer the benefit of maneuverability during landing, something the space capsule couldn't do. During the 1960s and 1970s, researchers at NASA's Dryden Flight Research Center flew a variety of lifting bodies to demonstrate the idea's feasibility, including the one prominently featured crashing at the beginning of a popular television series, The Six Million Dollar Man.
Finally, interest in hypersonic flight has led aerodynamicists to revisit the 19th century's theoretical achievements. Because the Navier-Stokes equations can handle heat-conductive air flows as well as viscous, compressible flows—at least they can if aerodynamicists can find solutions to them—they offer the hope of designing reasonably efficient hypersonic vehicles. During the late 1970s, a new subdiscipline in aerodynamics formed around the use of supercomputers to approximate solutions to the Navier-Stokes and Euler equations. Called computational fluid dynamics, or CFD, the practitioners of this discipline are turning the number-crunching power of supercomputers into a virtual wind tunnel able to fully analyze the aerodynamics of any vehicle, in any speed range.
Computational fluid dynamics is actually a very broad research program encompassing all of flight's speed ranges, from subsonic to re-entry, and because it is relatively recent, it is far from complete. But it promises to have its greatest impact on hypersonic flight due to the combination of inadequate test facilities and high design complexity. An example will help illustrate CFD's promise while also underscoring how far aerodynamicists have to go before hypersonic flight is well understood. During the 1980s, the US Air Force and the National Aeronautics and Space Administration ran a program to develop a hypersonic vehicle that could replace the Space Shuttle, but would use air-breathing engines instead of rockets. In the early 1990s, however, it became clear that the development effort had been premature. Aerodynamicists did not know exactly how air would behave during a key part of the vehicle's flight. The CFD analysis had produced an answer, but due to the lack of test facilities no one knew whether the computer was correct. If the CFD analysis was wrong, even slightly, the vehicle would not achieve orbit. And at a cost of more than $10 billion, failure due to a lack of basic knowledge was not acceptable to anyone. Hence NASA is currently trying to verify the computer's answer by flying a CFD-designed working model, the X-43A, atop a solid-fuel booster rocket. If the X-43A performs as CFD predicts it will, then aerodynamicists will be one significant step closer to one of aviation's ultimate goals, an airplane that can reach space.
Allen, Oliver E. Planet Earth: Atmosphere. Alexandria, Va.: Time-Life Books Inc., 1983
Anderson, Jr., John D. A History of Aerodynamics. New York: Cambridge University Press, 1998.
Anderson, Jr., John D. and Lewis, Mark. “Hypersonic Waveriders—Where Do We Stand?” AIAA paper 93-0399, January 1993.
Baals, Donald D. and Corliss, William R. Wind Tunnels of NASA. SP-440. National Aeronautics and Space Administration. http://www.hq.nasa.gov/office/pao/History/SP-440/cover.htm
Becker, John V. The High Speed Frontier: Case Histories of Four NACA Programs, 1920-1950. SP-445. Washington, D.C.: NASA, 1980. http://www.hq.nasa.gov/office/pao/History/SP-445/cover.htm
Bilstein, Roger E. Orders of Magnitude: A History of the NACA and NASA, 1915 – 1990. NASA NP-4406. Washington, D.C.: Government Printing Office, 1989. Also at http://www.hq.nasa.gov/office/pao/History/SP-4406/cover.html.
Dalton, Stephen. The Miracle of Flight. Willowdale, Ontario, Canada: Firefly Books Ltd., 1999.
Dwiggins, Don. The SST: Here It Comes Ready or Not. Garden City, New York: Doubleday, 1968.
Gablehouse, Charles. Helicopters and Autogiros; A History of Rotating-wing and V/STOL Aviation. Philadelphia: J.B. Lippincott Company, 1969.
Hansen, James R. Engineer in Charge: A History of the Langley Aeronautical Laboratory, 1917-1958. Washington, D.C.: NASA, 1987.
Hewitt, Paul G. Conceptual Physics. Sixth Edition. Glenview, Ill.: Scott, Foresman and Company, 1989.
Jacobs, Eastman N., Ward, Kenneth E., and Pinkerton, Robert. The Characteristics of 78 Related Airfoil Sections From Tests in the Variable-Density Wind Tunnel. National Advisory Committee on Aeronautics (NACA) Technical Report 460, 1933. Available at http://naca.larc.nasa.gov/reports/1933/naca-report-460/naca-report-460.pdf.
Jakab, Peter L. Visions of a Flying Machine, Washington DC: Smithsonian Institution Press, 1990.
Katz, Joseph and Plotkin, Allen. Low-Speed Aerodynamics, 2nd edition. Cambridge, England: Cambridge University Press, 2001.
Loftin Jr., Laurence K. Quest for Performance: The Evolution of Modern Aircraft SP-468. Washington, D.C.: NASA, 1985. http://www.hq.nasa.gov/office/pao/History/SP-468/cover.htm.
Looking at Earth From Space: Glossary of Terms. National Aeronautics and Space Administration. Office of Mission to Planet Earth. August 1994.
Montgomery, Jeff, exec. ed. Aerospace: The Journey of Flight. Maxwell Air Force Base, Ala.: Civil Air Patrol: 2000.
“NACA Conference on High Speed Aerodynamics: A Compilation of the Papers Presented,” Ames Aeronautical Laboratory, Moffett Field, CA, 18-20 March 1958.
Prandtl, Ludwig, Tietjens, O.G., and Hartjog, J. Applied Hydro and Aeromechanics. London, England: McGraw-Hill Book Company, Inc., 1934.
Reed, R. Dale. Wingless Flight: The Lifting Body Story. Washington, D.C.: NASA, 1997. http://www.dfrc.nasa.gov/History/Publications/WinglessFlight/
Shurcliff, William. S/S/T and Sonic Boom Handbook. New York: Ballantine Books, 1970.
Smith, H.C. “Skip.” The Illustrated Guide to Aerodynamics. 2nd edition. Blue Ridge Summit, Pa.: TAB Books, 1992.
Talay, Theodore A. Introduction to the Aerodynamics of Flight SP-367. Washington, D.C.: NASA, 1975. http://history.nasa.gov/SP-367/cover367.htm.
U.S. Department of Transportation, Federal Aviation Administration. Pilot's Handbook of Aeronautical Knowledge. Washington, D.C.: Government Printing Office, 1997.
Vincenti, Walter G. What Engineers Know and How They Know It. Baltimore: The Johns Hopkins University Press, 1990.
Wegner, Peter. What Makes Airplanes Fly? New York: Springer-Verlag, 1991.
Williams, Jack. The Weather Book. USA Today. New York: Vintage Books, 1992.
Young, Warren R. The Helicopters. Alexandria, Va.: Time-Life Books, 1982.
Aerodynamics for Students. http://www.ae.su.oz.au/aero/aerodyn.html
Aerodynamics in Car Racing. http://www.nas.nasa.gov/About/Education/Racecar/aerodynamics.html
Ames Aerospace Team Online. http://quest.arc.nasa.gov/aero/teachers/learning.html
“The Beginner's Guide to Aerodynamics.” http://www.grc.nasa.gov/www/k-12/airplane/bga.html
“A Brief History of Hydrodynamics: Ludwig Prandtl.” http://www.icase.edu/~luo/hydrodynamics.html
“Air Force Supersonic Research Airplane XS-1 Report No. 1.” January 1948. NASA Historical Reference Collection, NASA History Office, NASA Headquarters, Washington, D.C. http://www.hq.nasa.gov/office/pao/History/x1/afsrax.html
“Boundary Layer Separation and Pressure Drag.” University of Virginia Department of Physics. http://www.phys.virginia.edu/classes/311/notes/fluids2/node11.html
Denker, John S. “See How It Flies.” http://www.monmouth.com/~jsd/how/htm/4forces.html
“Drag.” Lego Design and Programming System. http://ldaps.ivv.nasa.gov/Physics/drag.html
“Flow Conditions.” Allstar Project. http://www.allstar.fiu.edu/aero/Hydr15.htm
Houston, Robert S., Hallion, Richard P. and Boston, Ronald G. “Transiting from Air to Space - The North American X-15.” National Aeronautics and Space Administration. From The Hypersonic Revolution, Case Studies in the History of Hypersonic Technology. Air Force History and Museums Program, 1998. http://www.hq.nasa.gov/office/pao/History/hyperrev-x15/ch-7.html
“Jet Engines” and “Reciprocating Engines” http://library.thinkquest.org/25486/english/
“Ludwig Prandtl: Father of Aerodynamic Theory” http://www.allstar.fiu.edu/aero/prandtl.htm
“Proceedings of the F-8 Digital Fly-by-Wire and Supercritical Wing First Flight's 20th Anniversary Celebration” (May 27, 1992). NASA Conference Pub 3256, Vol. 1 at http://techreports.larc.nasa.gov/cgi-bin/NTRS (search on supercritical on the Dryden Technical Report Server).
“Shock Waves.” Encyclopedia Britannica. http://www.britannica.com/eb/article?eu=69210&tocid=0&query=shock%20wave. Available on CD, on-line through subscription, and in print version.
Stillwell, Wendell H. X-15 Research Results - Aerodynamic Characteristics of Supersonic-Hypersonic Flight. http://www.hq.nasa.gov/office/pao/History/SP-60/ch-5.html
“Wing Design: Other Wing Additions.” http://www.allstar.fiu.edu/aero/Wing33.htm
X-29 Fact Sheet. National Aeronautics and Space Administration. Dryden Flight Research Center, April 1998. http://trc.dfrc.nasa.gov/PAO/PAIS/HTML/FS-008-DFRC.html |
Body Length. The bald eagle (Haliaeetus leucocephalus) has sharp, curved talons, an inch and a half long, for grasping its prey. Its beak is two inches long. You can tell males and females apart because a female's beak is bigger. Bald eagles have bare legs. Bald eagles can see small animals such as a mouse or rabbit moving in the grass from a mile away. The largest bald eagles can spread their wings as far as 8 feet from tip to tip. Adult eagles generally weigh 9-12 pounds. Bald eagles can fly up to 30 mph.
Fish. Bald eagles mostly eat fish. An eagle may eat four to six fish in a day. Bald eagles mostly fish in the morning and late afternoon. In winter, a bald eagle needs to eat about a pound of food (one or two fish) each day to maintain its weight. Young eagles have to find and hunt food for themselves because they have to learn to survive on their own.
Other food they eat. Bald eagles also supplement their diet with jackrabbits, ducks, other waterfowl, and carrion.
Body Color. Not all bald eagles have white heads and tails. Many bald eagles are dark brown all over while others have patches of white feathers here and there.
Babies Color. Baby bald eagles' beaks and eyes are also often brown. They will not grow their white head and tail feathers until they are four or five years old.
Nest. Bald eagles mate for life. Bald eagles return to the same area each spring because it is hard to build a nest. An eagle's nest may be eight or nine feet wide, twenty feet deep, and weigh as much as two tons. Bald eagles usually build their nests in treetops.
Eggs. Bald eagles lay two ivory-white eggs. In the Southeast, nesting activities generally begin in early September; egg laying begins as early as late October and peaks in late December. The eggs hatch between May and early June after a 34- to 38-day incubation period.
Scientists Protection. Scientists attach radio transmitters and lightweight, brightly colored plastic tags to bald eagles so they can find and track them and see what they do. Each tag has a number.
Where Bald Eagles can be found. Once bald eagles could be seen in every state except Hawaii. Today this magnificent bird is an endangered species in forty-three states and a threatened species in five states. Parts of Alaska and Canada are now the only homes for thriving populations of bald eagles.
Environment to Live in. Bald eagles need big old trees for their heavy nests but lots of trees have been cut down for lumber or replaced by towns and vacation homes.
How they die. Bald eagles die from landing on and flying into power lines, which electrocute them. Some people kill bald eagles for their feathers, which are worth money. From 1917 to 1962, about 100,000 bald eagles were shot in Alaska. The bald eagle has been the national bird of the United States since 1782. The American public supported a series of laws in 1972 that sharply reduced the use of pesticides such as DDT. The bald eagle population was decimated by habitat destruction, hunting, pesticide use, and lead poisoning (from eating waterfowl containing shotgun pellets).
By 1995 the population had rebounded to over 4,500 nesting pairs, allowing the US Fish and Wildlife Service to remove the bird from its list of endangered species.
Habitat protection more important now than ever before
Habitat
All animals need safe places to grow, reproduce, and find food. Marine animals are no different. In the ocean, their habitats can be the sandy bottom, a seamount rising from the ocean floor, or a deep canyon carved into the continental shelf. These places are affected by pollution and other human activities such as oil and gas drilling and use of destructive fishing gear, which research shows can have negative consequences. The National Oceanic and Atmospheric Administration is tasked with regulating ocean fishing and protecting our nation's ocean resources.
Protected ocean areas would help fish populations thrive |
Francis Crick, one of the discoverers of the structure of the DNA molecule, lecturing ca. 1979. Source: Wellcome Library for the History and Understanding of Medicine. Photograph: Bradley Smith.
Scientists over decades have explored and mapped lands, oceans and the heavens with the expectation of increasing our awareness of the environment in which we live. Underlying this search for knowledge is also the desire to improve human existence through the discovery of beneficial resources. The Human Genome Project (HGP) has served to explore our genetic environment to make us aware of the beneficial resources that might contribute to understanding and improving our lives.9 The HGP involves the discovery and sequencing of the full DNA complement in a single human somatic cell. Its primary goal is a listing and location of our genes — the units of heredity responsible for how we develop from conception, how we grow and mature, how we live, and how we die.
Dr. James Watson, one of the most well-known proponents of the Human Genome Project, contributed significantly along with Francis Crick, Rosalind Franklin and Maurice Wilkins to our understanding of the nature of DNA through the discovery of the structure of the DNA double helix.11 This discovery changed the focus of modern genetics and influenced the direction of many other disciplines in that the foundation of all life processes could now begin to be explored.12
Since then, technological advances have enabled scientists to study DNA and its structure in detail:
DNA to be sequenced undergoes a lengthy process, using computer programs like this one to “read” DNA fragments. Source: DOE Joint Genome Institute.
- Computer-generated analysis tools designed specifically to understand the significance of the base sequence in this large macromolecule have aided the Human Genome Project tremendously. These tools also aid in understanding how the biochemical processes encoded in the sequence of bases are maintained, controlled, duplicated and terminated. With the development and modernization of the Fred Sanger dideoxy chain-terminating automatic sequencing method, the bacterial artificial chromosome (BAC) and the polymerase chain reaction (PCR), scientists have within 13 years been able to finish determining the order of 98% of the 3 billion nucleotide base pairs that compose the human genome.6 Simply knowing the sequence of bases at any given place or locus on a chromosome is not sufficient to understand its function. As important as the sequence is to the function of genes, their distribution, location and structure among the 23 chromosome pairs is just as valuable when determining their role in different life processes. The estimate of between 30,000 and 40,000 genes is based on the fact that exons (gene segments) within the genome are flanked by known marker sequences (e.g., splice sites) that are located along the linear DNA sequence. Some computer programs can now recognize and label these segments and marker sequences, while other programs can predict the location and structure of genes in genomic sequences from a variety of organisms; a simplified sketch of this kind of sequence scanning appears after this list. [Editor’s Note: See Computational Biology “learn more links” at end of this article.]
- Through great effort and expense, scientists in molecular biology, biochemistry, math, computer science, engineering and the health care industry have worked together to turn what began in 1985 as a simple campus improvement project at the University of California, Santa Cruz into an international scientific consortium. This cooperative effort, now known as the Human Genome Project and begun in 1989, was led by the U.S. Department of Energy (DOE), formerly the Atomic Energy Commission. The DOE was charged to investigate genetic mutations and genome structural integrity after observing the consequences of the development of the atomic bomb. Many universities, private industries and non-profit organizations from around the world have worked together to produce a complete reconstruction of the human genome for public display. The institutions involved in this consortium are often referred to as “sequencing centers.” These centers:3,7,10
- offer facilities that allow scientists to determine the sequence of the DNA of many different organisms, including humans
- spend time and money in disseminating sequence information into publicly accessible databases
- also develop computer programs that attempt to make biological sense out of the vast amount of sequence data being generated
- The accelerated development of the Internet is due in no small part to the need for communication between scientists at various DNA sequencing centers and to provide public accessibility to a DNA sequence database initially set up at the National Institutes of Health (NIH) at National Center for Biotechnology Information (NCBI). The database called GenBank is the major warehouse of genome sequence information from many different species and is accessible from many other web sites devoted to the utilization of sequence information.1,4
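To make the gene-finding idea described above concrete, here is a deliberately simplified Python sketch that scans a DNA string for open reading frames (stretches that begin with a start codon and end with a stop codon). The sequence and the length threshold are invented for illustration; real gene-prediction software used by the sequencing centers also models splice sites, exon/intron structure and statistical signals.

```python
# Toy example: scan a DNA string for simple open reading frames (ORFs) on the
# forward strand. Real gene-prediction programs also model splice sites,
# exon/intron structure and statistical signals; this only finds start/stop codons.

START = "ATG"
STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(dna, min_length=9):
    """Return (start, end) index pairs of simple ORFs, end index exclusive."""
    dna = dna.upper()
    orfs = []
    for frame in range(3):              # three possible reading frames
        i = frame
        while i + 3 <= len(dna):
            if dna[i:i + 3] == START:   # found a start codon
                j = i + 3
                while j + 3 <= len(dna):
                    if dna[j:j + 3] in STOPS:   # found an in-frame stop codon
                        if j + 3 - i >= min_length:
                            orfs.append((i, j + 3))
                        break
                    j += 3
            i += 3
    return orfs

if __name__ == "__main__":
    sequence = "CCATGGCTTGATTTATGAAACCCTAGGG"   # invented sequence for illustration
    print(find_orfs(sequence))                  # -> [(2, 11), (14, 26)]
```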
April 25, 2003 marked 50 years since the publication in the journal Nature of the letter by James Watson and Francis Crick describing the DNA double helix structure. This day also marked the completion of the human genome sequence to 99.9% accuracy as announced by the National Human Genome Research Institute (NHGRI).8 In the wake of this enthusiasm for the completed project lies information that has only begun to provide science and medicine with clues to how to combat heritable diseases, how to improve medical applications, and how some of life’s seemingly most insignificant organisms like the fly, roundworm and mouse give us clues to better understand our natural selves.2,5,13,14 The door of discovery and knowledge has been opened and it is up to responsible individuals to use this information to improve our collective lives today and in the future.
The ethical issues raised by the human genome project can be grouped into two general categories: genetic engineering and genetic information.
The first category consists of issues pertaining to genetic manipulation or what is sometimes called “genetic engineering.” The map of the human genome provides information that will allow us to diagnose and eventually treat many diseases. This map will also enable us to determine the genetic basis of numerous physical and psychological traits, which raises the possibility of altering those traits through genetic intervention. Reflection on the ethical permissibility of genetic manipulation is typically structured around two relevant distinctions:
- the distinction between somatic cell and germline intervention, and
- the distinction between therapeutic and enhancement engineering
Somatic cell manipulation alters body cells, which means that resulting changes are limited to an individual. In contrast, germline manipulation alters reproductive cells, which means that changes are passed on to future generations. Therapeutic engineering occurs when genetic interventions are used to rectify diseases or deficiencies. In contrast, enhancement engineering attempts to extend traits or capacities beyond their normal levels.
The use of somatic cell interventions to treat disease is generally regarded as ethically acceptable, because such interventions are consistent with the purpose of medicine, and because risks are localized to a single patient.
Germline interventions involve more significant ethical concerns, because risks will extend across generations, magnifying the impact of unforeseen consequences. While these greater risks call for added caution, most ethicists would not object to the use of germline interventions for the treatment of serious disease if we reach the point where such interventions could be performed safely and effectively. Indeed, germline interventions would be a more efficient method for treating disease, since a single intervention would render both the patient and his or her progeny disease-free, thus removing the need for repeated somatic cell treatments across future generations.
Enhancement engineering is widely regarded as both scientifically and ethically problematic. From a scientific standpoint, it is unlikely that we will soon be able to enhance normally functioning genes without risking grave side effects. For example:
- Enhancing an individual’s height beyond his or her naturally ordained level may inadvertently cause stress to other parts of the organism, such as the heart.
- Moreover, many of the traits that might be targeted for enhancement (e.g., intelligence or memory) are genetically multifactorial, and have a strong environmental component. Thus, alteration of single genes would not likely achieve the desired outcome.
- These problems are magnified, and additional problems arise, when we move from somatic cell enhancements to germline enhancements.
In addition to the problem of disseminating unforeseen consequences across generations, we are faced with questions about whether future generations would share their predecessors’ views about the desirability of the traits that have been bequeathed to them. Future generations are not likely to be ungrateful if we deprive them of genes associated with horrible diseases, but they may well feel limited by choices we have made regarding their physical, cognitive, or emotional traits. In short, there is a danger that social-historical trends and biases could place genetic limitations on future generations.
The second general category consists of ethical questions pertaining to the acquisition and use of genetic information. Once we pinpoint the genetic basis for diseases and other phenotypic traits, what parameters should be set for the acquisition and use of genetic information? The key issue to be considered here is the use of genetic screening. Screening for diseases with the due consent of a patient or a legal proxy is generally viewed as ethically permissible, but even this form of screening can create some significant ethical challenges. Knowledge that one is or may be affected by a serious disease can create difficult situations for both patients and their families. Consider:
- If a test is positive, what options, medical or otherwise, will be available to ameliorate the condition?
- Will the patient’s relatives be informed that they too may be affected by the ailment?
It is the job of genetic counselors to educate patients about the implications of genetic knowledge, and to help patients anticipate and deal with these challenges.
Mandatory genetic screening of the adult population raises serious ethical questions about personal liberty and privacy, and thus is not likely to garner widespread support. Nevertheless, we are likely to hear calls for mandatory genetic testing in specific social contexts, and existing practices will no doubt be cited as justifications for such testing. For example, in the justice system, longstanding practices of fingerprinting, urine testing, and blood testing are already being supplemented by DNA testing.
Of particular concern is the specter of genetic testing in the insurance industry. When individuals apply for insurance policies, they are often required to provide family medical history, as well as blood and urine samples. At present, however, insurance companies in the United States cannot require genetic testing of applicants. While this prohibition is designed to prevent genetic discrimination, insurance industry lobbyists will surely be pressing the following kind of argument in coming years:
- If it is considered fair and proper to identify applicants with high cholesterol and/or a family history of heart disease, and to charge those applicants higher premiums, why should it be considered unfair to utilize genetic testing to accomplish the same goals?
Such questions will have to be seriously considered by ethicists and lawmakers, in the attempt to achieve a fair balance between individual rights and the rights of insurance companies. Indeed, the development of genetic screening for a broad array of diseases and conditions may eventually lead us to rethink the principles that are used to determine insurability and the apportionment of payment burdens.
Additional ethical questions arise when we consider genetic screening of newborns, young children, and others who cannot give valid consent for such procedures:
- As more genetic tests become available, which ones should be universally administered to newborns?
- What role should parental consent play in determining when children are screened?
Decisions about the implementation of universal genetic screening for newborns will likely follow existing policies, which perform tests for serious, early-onset diseases that are susceptible to treatment. The paradigm case for such universal screening is phenylketonuria (PKU). Newborns are routinely tested for PKU without the explicit consent of parents, under the assumption that parents want to know if their child is afflicted with this potentially devastating but easily treatable condition. Of course, the moral propriety of newborn screening becomes more complicated when we begin to deviate from this paradigm case. Determining whether screening should be pursued in cases like this will not always be easy:
- What if the disease is not easily treatable, or can only be treated at great expense that parents may not want to incur?
- What if an ailment is late onset and untreatable, as is the case with Huntington’s disease? What if a test can only determine a probability, not a certainty, that a child will develop a disease?
Of course, from a legal standpoint parents have broad discretion when it comes to decisions about their children’s health and welfare, and this will no doubt hold true for decisions about both genetic testing and genetic engineering as these procedures become increasingly available. While this broad discretion is based on respect for parental autonomy and on a desire for minimal government intrusion into family life, we must acknowledge the potential for conflict between a parent’s choice and a child’s welfare.
- What if a parent refuses to consent to a test that is clearly in their child’s best interest?
- What if a parent decides to pursue a genetic “enhancement” that involves significant risks for a child, or that may limit a child’s life prospects?
While these questions may seem far-fetched to some, it is worth noting that current laws in most states allow parents to opt out of testing for PKU, despite the fact that this may leave their child exposed to a devastating disease.
Today, we face many important challenges pertaining to the use and distribution of genetic research and information. As our capabilities for genetic screening and genetic engineering increase, we are likely to encounter more difficult ethical questions, including questions about the limits of parental autonomy and the application of child welfare laws.
© 2007, American Institute of Biological Sciences. Educators have permission to reprint articles for classroom use; other users, please contact [email protected] for reprint permission. See reprint policy. |
Earth Science Projects
This group of earth science projects is based on a broad definition of earth science, which includes areas besides geology: bodies of water (both limnology, the study of fresh water, and oceanography), rocks and minerals, and soil science.
These projects are adaptable to make them useful for different purposes, and can be used in homeschools, as jumping off points for public or private school science projects, or as the foundation of science fair projects. You can feel free to include, reconfigure, or ignore the extensions. You can, if necessary, also adapt them to a younger or older, more sophisticated audience.
Bodies of Water
Collect water from a rainfall, from your tap, and from several local bodies of water, including ponds, lakes, rivers, or oceans. Label it carefully. Describe each water sample.
- Look at each sample under a microscope. What organisms do you find?
- Test the pH of each water sample. Explain the results.
- Test the water for other products, including arsenic, chlorine, copper, hardness, iron, lead, nitrates, nitrites, and pesticides. You can purchase water sample test kits online, including ones designed for student and science project use, like the one here: waterfiltersonline.com
Rocks and Minerals
Collect a bunch of small rocks from around your neighborhood, local parks, and anywhere you’re allowed to take rocks, and also, if possible, purchase a small set of assorted gems and minerals. How many ways can you find to categorize the sets individually and when they’re mixed together?
- Classify the rocks as sedimentary, igneous, and metamorphic.
- Identify your rocks. What do the rocks you found outside reveal about the areas in your community where you found them? Use the information at this University of Wisconsin website if you like: uwgb.edu
- Look at your rocks and minerals under a magnifying glass. What do you see that you couldn’t see with just your unaided eyes?
Sand
Go to the International Sand Collectors Society website - sandcollectors.org - and read the page about becoming a sand collector. Set yourself up as a sand collector, adapting the suggested system to suit your purposes, and begin collecting samples. Don’t forget to look in your neighborhood - there is sandy soil in places other than beaches and deserts.
- Sand grains are too small to measure easily. Develop a system for sorting sand by size. Are the grains in each sample of similar size? Explain differences you notice within and between samples.
- Use a standardized color identification system, like Pantone, to describe the colors of the sand you have. Explain color differences you notice within and between samples.
- In the spring, go around your yard or a similar area of your choice. Take note of plants growing where they were not planted, for example, dandelions and violets in lawn areas, oaks and maples sprouting in the garden, irises that are feet away from where they were planted. Make a positive identification of each one. Do your best to figure out how the plants got to their new location. What conclusions can you draw about why they’re growing in the new place?
- Find out about the growing guidelines for each of the plants you discovered. Check to see if the places where these “unplanted” specimens have landed meet their expected requirements. As part of this process, describe the soil conditions and test the soil. Explain any discrepancies you find between what the plants’ ideal growing conditions are and where they have actually grown. What conclusions can you draw?
- Choose a case in which you have two specimens of the same plant growing unplanted in two different places. Alter the location of one of them (carefully dig it up if you have to) and add nutrients to the soil, water more frequently, or do whatever you need to do to create a situation close to its ideal conditions. Compare the growth of the two plants. What conclusions can you draw?
- Choose a kind of seed to plant. Dig up a bit of dirt from your yard or garden, place it in a flower pot, and plant several seeds in it. Stick the pot in the garden or yard and leave it to nature. For the second pot, prepare the soil to suit the plant’s needs and put it next to the first pot. What difference does the prepared soil make to the plant?
- Take a bit of the soil from each plant and look at it under the microscope. How do the two soil samples differ?
Equivalent Numerical Expressions, Day 1 of 2
Lesson 13 of 16
Objective: SWBAT: • Find the area of squares and rectangles. • Simplify an expression using the order of operations. • Write and evaluate numerical expressions from area models.
These two lessons are based on the Laws of Arithmetic lesson that is part of the Mathematics Assessment Project.
I give the pre-assessment task as homework a few days before I teach these lessons. I explain to students that they may not know how to do everything on this assignment, but that is okay because they will be working on a similar task in class. The important thing is that they try their best and explain their thinking. When I review the pre-assessment task I write 1-2 questions on each student’s work, following the MAP Assessment recommendations. I do not give students a grade.
Teacher's Note: For an extensive list of common student misconceptions and questions, read the Before the lesson section of MAP's Laws_of_Arithmetic.
Here are a few of the common issues that my students have:
- They do not recognize the function of parentheses
- They do not understand the distributive law of multiplication
- They fail to recognize the commutative property
- They do not see the link between multiplication and addition
- They assume that squaring a number is the same as multiplying by two.
- They do not understand the significance of the fraction bar.
- Student needs an extension.
I use this pre-assessment to create homogeneous groups for students to work in during these two lessons. I look to pair students up with a partner who has a similar level of understanding. The partners may differ with the specific topics they understand, but they are around the same level. I want to prevent a student with a relatively low level of understanding being a partner to a student with a relatively high level of understanding. I have found that high-low pairings often result in the higher student either tutoring the lower student or the higher student completing the task without waiting for the lower student (see my Creating Homogeneous Groups reflection).
Today's Do Now reviews the measurement concepts of perimeter and area. In the Do Now Task, my students will be representing the area of squares and rectangles with numerical expressions, based on their interpretation of geometric figures.
I plan to ask my students to explain how they found the area of the rectangle. If a student says he/she multiplied 3 x 2, I ask if multiplying 2 x 3 would also work. For the square, I ask for a student to explain how they found the area. If he/she multiplied 3 x 3, I will ask if 3^2 would also work. I want my students to start to recognize that there are multiple ways to represent the same area and to apply the Commutative Property of Multiplication fluently.
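Written out (my own note, not part of the lesson materials), the equivalences I want students to see are:

```latex
3 \times 2 = 2 \times 3 = 6
\qquad \text{and} \qquad
3 \times 3 = 3^{2} = 9 .
```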
"Review Your Work."
After the Do Now I hand back my students' Pre-Assessment with my questions. I give them a couple minutes to read my questions and write a response that reflects their thinking on the Review Your Work worksheet. It is okay if they don’t have a clear response at this time. My hope is that reading my questions will spark an idea, or, that they will think about the question throughout the lesson.
After students respond to my question, I will ask them to think about Question 1. I offer that they can use what they wrote about their pre-assessment as a starting place. It may help some students to find the actual area of the diagram and use that to compare with the expressions.
Once the class works on Question 1, I will ask several students to identify one expression that correctly models the area of the figure. I am looking for students to clearly explain/show their thinking, so I will ask them to explain their answer to the class. Here is what I am hoping to hear:
- I want my students to recognize that (b) works because it is finding the area of the two rectangles and adding them together.
- I want my students to understand that (c) works because 3 + 5 + 3 + 5 is the same as 3 + 3 + 5 + 5, or 3 x 2 + 5 x 2.
- I want my students to recognize that (d) works because it is finding the area of the larger rectangle that has the dimensions of 2 and 3+5.
After we discuss this task, I again collect the pre-assessments. I will return them to my students after they have completed the post-assessment (see Equivalent Numerical Expressions, Day 2 of 2).
In order to maximize participation during the next section of the lesson, I pass out Whiteboards and markers to my students.
During this section of the lesson I will be displaying the diagrams included in Whiteboard Practice using a document camera (or LCD projector). As I display each figure, I ask students to look carefully at the diagram. After they have had time to observe, think, and recall, I ask them to write an expression that shows how they would find the area of the figure in each diagram.
I will circle the room as students work observing what they are writing on their whiteboards. If I find a student who is struggling, I will ask him/her to try to find the area first, then write an expression showing how they figured out the area. If some students easily come up with one expression, I will ask them to write a second expression that also models the area of the diagram.
After a couple minutes I plan to ask students to hold up their whiteboards so I can see all of their responses. If I see a common mistake, I will make note of it. Eventually, I will write the expression on the board and ask students if they agree/disagree with it. I expect that I will be able to ask a couple students who have written different expressions to explain their thinking to the class. I am looking for students to share and explain 3 x 4 + 3 x 5 and 3 (4+5). During this time I tell students to record these expressions on their paper with their pencil.
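For my own reference (this summary is mine, not part of the MAP materials), the equivalence I am hoping students will articulate can be written out as a worked equation:

```latex
3 \times 4 + 3 \times 5 = 12 + 15 = 27
\qquad \text{and} \qquad
3\,(4 + 5) = 3 \times 9 = 27 .
```

Both expressions count the same area: a 3-by-4 rectangle joined to a 3-by-5 rectangle along their sides of length 3, which together form a single 3-by-9 rectangle.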
Next, I will have students look at the next question. I give them a minute to work on their whiteboards. When most students have an answer, I ask them to show me their whiteboards. Again, if I see a common mistake, I will ask students to debate that answer. I ask a student to explain their thinking and ask another student to add to their thinking. I ask students to come up with other expressions that would also represent the area of diagram A.
If I have time, I ask students to come up with expressions that represent the area of diagram B and C. I want them to recognize that they are equivalent because they both have a rectangle that is 4 units by 1 unit and a rectangle that is 5 units by 2 units.
Before moving on, I will have student volunteers collect the whiteboards and markers.
Matching Part 1
For this section, students work in their homogeneous partner pairs. I have a volunteer read the rules and expectations from Matching Part 1 to the class. I ask students what questions they have. Then I pass out the materials and students start working.
- Before this lesson, I print and cut out one set of Expressions Cards and one set of Area Diagrams Cards for each partner pair. I like to print them on card stock and label the sets and place them in envelopes. For example, for set 1 I put #1 on the back of each card and label the envelope #1. That way when (inevitably) a card falls on the floor, it can easily be returned to the proper envelope.
- Today, I only give partner pairs an envelope with the following cards: A1, A2, A7, E1, E2, E7, E8, E13, and 3 blank E cards (they will need at least one to create a matching expression for A2). I do this so students can focus on a smaller amount of cards. When I gave students all the cards at once, many of them were overwhelmed. Adjust the amount of cards to meet the needs of your students.
As my students work on the task I walk around and monitor student progress. I am observing the strategies students are using and commenting on appropriate partner work. Many of my students may struggle at first, and this is okay. I want my students to engage in real mathematical practices (MP1, MP2, MP3). If a pair is stuck, I will not intervene immediately. I want students to find ways of applying what they know to find matches. If students raise their hand and ask for help, I may ask some of the following questions:
- What area card are you working with?
- What is the total area of the diagram? How do you know?
- How do you know if an expression card matches this diagram?
If students successfully find all the matches, I will ask them to use blank cards to create a different expression that is equivalent to each area diagram. Once they complete this task, I will have them pair up with another partner pair to compare their matches.
I begin the Closure by asking students to share their matches for A1 with the class. I want students to show and explain their thinking. Then I ask if 2 (3 + 4) would represent the area of A1. I want students to understand that that expression wouldn’t work since the rectangles don’t have the same width.
I will ask students to share out their different strategies for finding matches.
- Do you find matching expressions first and then match with an area diagram? Or vice versa?
- Do you find the area of the diagram and then simplify the expressions?
As we come to the end of the lesson I want my students to hear about the diverse strategies that their classmates are using. Then, I will have students clean up and organize their cards. Instead of giving a ticket to go, I will collect and look at my students' work to prepare for Part 2 tomorrow. |
Complement is a blood test that measures the activity of certain proteins in the liquid portion of your blood.
The complement system is a group of proteins that move freely through your bloodstream. The proteins work with your immune system and play a role in the development of inflammation.
There are nine major complement proteins. They are labeled C1 through C9.
- Complement component 3 (C3)
- Complement component 4 (C4)
Complement assay; Complement proteins
How the Test is Performed
Blood is drawn from a vein, usually from the inside of the elbow or the back of the hand. The site is cleaned with germ-killing medicine (antiseptic). The health care provider wraps an elastic band around the upper arm to apply pressure to the area and make the vein swell with blood.
Next, the health care provider gently inserts a needle into the vein. The blood collects into an airtight vial or tube attached to the needle. The elastic band is removed from your arm.
Once the blood has been collected, the needle is removed, and the puncture site is covered to stop any bleeding.
In infants or young children, a sharp tool called a lancet may be used to puncture the skin and make it bleed. The blood collects into a small glass tube called a pipette, or onto a slide or test strip. A bandage may be placed over the area if there is any bleeding.
How to Prepare for the Test
There is no special preparation.
How the Test Will Feel
When the needle is inserted to draw blood, some people feel moderate pain, while others feel only a prick or stinging sensation. Afterward, there may be some throbbing.
Why the Test is Performed
Total complement activity (CH50, CH100) looks at the overall activity of the complement system. Typically, other tests that are more specific for the suspected disease are performed first. C3 and C4 are the most commonly measured complement components.
A complement test may be used to monitor patients with an autoimmune disorder and to see if treatment for their condition is working. For example, patients with active lupus erythematosus may have lower-than-normal levels of the complement proteins C3 and C4.
Complement activity varies throughout the body. For example, in patients with rheumatoid arthritis, complement activity in the blood may be normal or higher-than-normal, but much lower-than-normal in the joint fluid.
Patients with gram-negative septicemia and shock often have very low levels of C3 and of the components of what's known as the alternative pathway. C3 is often also low in fungal infections and some parasitic infections such as malaria.
Normal Results
- Total blood complement level: 41 to 90 hemolytic units
- C1 level: 16 to 33 mg/dL
- C3 levels:
- Males: 88 to 252 mg/dL
- Females: 88 to 206 mg/dL
- C4 levels:
- Males: 12 to 72 mg/dL
- Females: 13 to 75 mg/dL
Note: mg/dL = milligrams per deciliter.
Note: Normal value ranges may vary slightly among different laboratories. Talk to your doctor about the meaning of your specific test results.
The examples above show the common measurements for results for these tests. Some laboratories use different measurements or may test different specimens.
What Abnormal Results Mean
Increased complement activity may be seen in:
- Certain infections
- Ulcerative colitis
Decreased complement activity may be seen in:
- Hereditary angioedema
- Kidney transplant rejection
- Lupus nephritis
- Systemic lupus erythematosus
Risks
Veins and arteries vary in size from one patient to another and from one side of the body to the other. Obtaining a blood sample from some people may be more difficult than from others.
Other risks associated with having blood drawn are slight but may include:
- Excessive bleeding
- Fainting or feeling light-headed
- Hematoma (blood accumulating under the skin)
- Infection (a slight risk any time the skin is broken)
The "complement cascade" is a series of reactions that take place in the blood. The cascade activates the complement proteins. The result is an attack unit that creates holes in the membrane of bacteria, killing them.
Michael E. Makover, MD, professor and attending in Rheumatology at the New York University Medical Center, New York, NY. Review provided by VeriMed Healthcare Network. Also reviewed by David Zieve, MD, MHA, Medical Director, A.D.A.M., Inc.
The information provided herein should not be used during any medical emergency or for the diagnosis or treatment of any medical condition. A licensed medical professional should be consulted for diagnosis and treatment of any and all medical conditions. Links to other sites are provided for information only -- they do not constitute endorsements of those other sites. © 1997-
A.D.A.M., Inc. Any duplication or distribution of the information contained herein is strictly prohibited. |
One of the three meninges, the protective membranes that cover the brain and spinal cord.
It is interposed between the two other meninges, the more superficial and much thicker dura mater and the deeper pia mater, from which it is separated by the subarachnoid space. The delicate arachnoid layer is attached to the inside of the dura and surrounds the brain and spinal cord. It does not line the brain down into its sulci (folds), as does the pia mater, with the exception of the longitudinal fissure, which divides the left and right cerebral hemispheres. Cerebrospinal fluid (CSF) flows under the arachnoid in the subarachnoid space. The arachnoid mater makes arachnoid villi, small protrusions through the dura mater into the venous sinuses of the brain, which allow CSF to exit the subarachnoid space and enter the blood stream.
The arachnoid mater and dura mater are very close together throughout the cranium all the way to S2, where the two layers fuse into one layer and terminate. Sandwiched between the dura and arachnoid maters lie some veins that connect the brain's venous system with the venous system in the dura mater.
The arachnoid mater covering the brain is referred to as the “arachnoidea encephali,” and the portion covering the spinal cord as the “arachnoidea spinalis.” The arachnoid and pia mater are sometimes considered as a single structure, the leptomeninx, or the plural version, leptomeninges. (“Lepto”- from the Greek root meaning “thin”). Similarly, the dura in this situation is called the pachymeninx.
There are two subdivisions of arachnoid mater surrounding the subarachnoid space, the dorsal layer and the ventral layer. The dorsal layer covers internal cerebral veins and fixes them to the surrounding tela choroidea. The ventral layer of arachnoid membrane, on the other hand, is a direct anterior extension of this arachnoid envelope that the dorsal layer forms over the pineal region.
The arachnoid membrane is a selective barrier.
During direct cortical observation, it is sometimes difficult to understand the cortical anatomy of the sulci and gyri because of the arachnoid mater.
see Arachnoid cyst |
DNA Databases and Human Rights
Using DNA to trace people who are suspected of committing a crime has been a major advance in policing. When DNA profiling is used wisely it can help to convict people who have committed serious crimes or exonerate people who are innocent. However, concerns arise when individuals’ tissue samples, computerized DNA profiles and personal data are stored indefinitely on a DNA database. There are concerns that this information could be used in ways that threaten people’s individual privacy and rights and that of their families.
Forensic DNA databases are now well established in many countries in the world. Rules on what data can be collected and stored and how it can be used differ greatly between different countries. As DNA sequencing technology advances and becomes cheaper, there are plans to set up new databases or expand existing databases in many countries.
In some countries, databases that used to contain records only from people convicted of serious crimes are being expanded to include many innocent people who have been arrested but not convicted and people convicted or given police warnings or other sanctions for minor crimes. These people are treated as a ‘risky population’ who may commit future offences. In other countries, a DNA database of the whole population is proposed. Data-sharing, involving the transfer of information across international borders, is also on the increase.
Anyone who can access an individual’s forensic DNA profile can use it to track the individual or their relatives. Access to a DNA sample can reveal more detailed information about a person’s health. DNA evidence is not foolproof and mistakes can be made in laboratories or in court. However, there are currently no international safeguards that would protect people’s privacy and rights and prevent miscarriages of justice.
This briefing is intended to provide people with the information that they need in order to understand how DNA databases are built and used and the implications for their rights. Its starting point is that safeguards are needed and that ordinary citizens should have a say in how these safeguards are developed.
• How DNA databases are built, by the collection and retention of DNA samples and computer records
• Their role in solving crimes
• Expansions in uses
• The implications for privacy and human rights
• Impacts on children, ethnic minorities and other vulnerable people
• Safeguards that can be adopted
What is special about DNA?
DNA is a chemical that occurs inside every cell of a person’s body. The DNA is contained in 22 pairs of structures known as chromosomes, shaped like an X, plus an extra pair – the sex chromosomes – which determine whether someone is male or female. In this final pair, women have two X chromosomes, but men have one X and one Y chromosome. Each chromosome consists of two long strings of chemical letters, twisted together in the famous shape of the double-helix. The chemical letters occur in pairs as rungs on this twisted chemical ladder. The four chemical letters of the genetic code spell out instructions to the cell about how to make the proteins that allow the human body to grow and function normally. The parts of the DNA sequence that contain the instructions for making proteins are known as genes.
DNA is useful to identify an individual because everyone’s genetic code is thought to be unique, unless they have an identical twin. The string of chemical letters in a person’s DNA can therefore act like a unique bar code to identify them. Because a person inherits half their DNA from their mother and half from their father, it can also be used to identify their relatives. Close relatives have a DNA sequence that is more alike than distant relatives or than someone who is unrelated.
Biological identifiers such as DNA, fingerprints, iris scans and digital photographs are known as ‘biometrics’. In recent years there has been a lot of interest in developing biometrics to track and identify individuals as they enter or leave different countries or as they use public or private services, such as banks, computers, workplaces or hospitals.
Unlike iris scans and photographs, DNA and fingerprints can be left wherever a person goes: for example, on a glass or cup that they have been drinking from. This means that they can be used to track individuals – i.e. to find out whether they have been at a particular place, such as a crime scene or meeting place – where there might not be a scanner or a camera.
DNA differs from fingerprints in two main ways:
• Because DNA has a biological function, some of the information in a person’s DNA may be relevant to their health or other physical characteristics, such as their eye colour.
• Because DNA is shared with relatives, a person’s DNA can be used to help identify their parents or children and perhaps more distant relatives.
However, DNA profiles used by the police are not based on the whole sequence of someone’s DNA, but only on parts of it. This means that the information contained in them is more limited than that contained in a person’s whole genetic make-up.
What role can DNA play in solving crimes?
People can leave traces of their DNA at a crime scene because it is inside every cell of their body. DNA can be extracted from blood, semen, saliva or hair roots left at a crime scene using a chemical process. Tiny amounts of DNA can sometimes be extracted from a single cell – such as cells shed from someone’s skin when they touch an object – using new sensitive techniques (known as ‘low copy number’ DNA).
Police can also collect biological samples from suspects, usually by scraping some cells from inside their cheek.
When biological samples are collected by the police from a crime scene or an individual, they are sent to a laboratory for analysis. The laboratory extracts the DNA, amplifies it using a chemical reaction, and creates a string of numbers based on part of the sequence of chemical letters: this is known as a DNA profile. The DNA profile is not based on the whole sequence of the DNA (which would currently be very expensive) but on parts of it known as ‘short tandem repeats’ (STRs), where the chemical letters of the DNA are known to be repeated a different number of times in different people. The final DNA profile consists of a string of numbers based on the number of repeats at each of the STRs, plus the results of a test of the sex of the person from whom the sample came.
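As a simplified illustration of how a profile built from repeat counts can be stored and compared (the locus names and numbers below are invented, not those of any real profiling system), a DNA profile can be treated as a small table of values:

```python
# Toy model of an STR-based DNA profile: a pair of repeat counts for each
# locus plus a sex-test result. Locus names and numbers here are invented.

CRIME_SCENE_PROFILE = {"locus_1": (12, 14), "locus_2": (9, 9),
                       "locus_3": (15, 17), "sex": "XY"}

SUSPECT_PROFILES = {
    "suspect_A": {"locus_1": (12, 14), "locus_2": (9, 9),
                  "locus_3": (15, 17), "sex": "XY"},
    "suspect_B": {"locus_1": (11, 14), "locus_2": (8, 10),
                  "locus_3": (15, 16), "sex": "XY"},
}

def profiles_match(profile_a, profile_b):
    """Two profiles 'match' only if every locus gives the same result."""
    return all(profile_a[locus] == profile_b[locus] for locus in profile_a)

for name, profile in SUSPECT_PROFILES.items():
    if profiles_match(CRIME_SCENE_PROFILE, profile):
        print(name, "matches the crime scene profile")
    else:
        print(name, "does not match")
```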
DNA profiles are not unique but the probability that two people’s DNA profiles match by chance is low. If the DNA profile from an individual matches the DNA profile from a crime scene it is therefore highly likely (but not certain) that the blood, semen or saliva left at the crime scene came from them.
If the police have a number of suspects for a crime a DNA match can help them to identify who was at the crime scene and who wasn’t. The value of this evidence in solving the crime will vary: DNA on a cigarette butt could have been dropped earlier in the day or have been planted by someone who wanted to implicate an innocent person in the crime; in contrast, DNA in semen from a woman who has been raped can show that a particular man was or was not likely to have been involved. However, even a rape case may not be straightforward: for example, if the man argues that the woman agreed to have sex.
When DNA samples are collected at a murder scene, many DNA matches will occur with DNA from the victim or with others who may have been there earlier in the day, not with the perpetrator of the crime. However, these matches can still help to provide important clues that will help to solve the crime. For example, a DNA match between the victim’s blood and a blood stain on someone’s shoes or clothes might be part of the evidence that leads to a criminal’s conviction. In this example, a DNA database is not required because the victim’s DNA can be obtained easily from him or her.
How does a DNA database help to solve more crimes?
Because a DNA profile is a string of numbers it can be stored on a computer database. If there is a group of known suspects for a crime, a DNA database is not needed to help the investigation. Any DNA profile collected from the crime scene can be compared directly with the DNA profiles of all the suspects to find out which one it comes from. However, a DNA database can be useful to bring new (unexpected) suspects into an investigation if there are no known suspects, or if the crime scene DNA does not match anyone who has already been identified.
A DNA database is a computer database containing records of DNA profiles. Usually there are two different sources of these DNA profiles: crime scene DNA samples and individuals’ DNA samples. When a new crime scene DNA profile is added to the database it is searched against all the other DNA profiles stored on the database. The crime scene profile might match with stored DNA profiles from other crime scenes, indicating a link between these crimes. Or it might match with an individual’s DNA profile, suggesting that they could be a suspect for the crime. When a new DNA profile from an individual is added to the database it is searched against all the stored crime scene DNA profiles on the database. Again, a match may indicate that the individual may be a suspect for the crime. This process is known as ‘speculative searching’ and it results in reports of matches that can be sent back to the police for further investigation.
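A minimal sketch of speculative searching, using invented records (a real national database holds millions of records and far more metadata, but the matching step is conceptually this simple):

```python
# Toy 'speculative search': compare a newly added profile against every stored
# profile and report matches for police follow-up. All records are invented.

stored_crime_scene_profiles = {
    "crime_ref_0417": {"locus_1": (12, 14), "locus_2": (9, 9), "sex": "XY"},
    "crime_ref_0923": {"locus_1": (10, 13), "locus_2": (8, 11), "sex": "XX"},
}

stored_individual_profiles = {
    "person_001": {"locus_1": (10, 13), "locus_2": (8, 11), "sex": "XX"},
}

def speculative_search(new_profile, stored_profiles):
    """Return the reference numbers of every stored profile that matches."""
    return [ref for ref, profile in stored_profiles.items()
            if profile == new_profile]

# A new individual's profile is searched against all stored crime scene profiles...
new_individual = {"locus_1": (12, 14), "locus_2": (9, 9), "sex": "XY"}
print(speculative_search(new_individual, stored_crime_scene_profiles))   # ['crime_ref_0417']

# ...and a new crime scene profile is searched against stored individuals too.
new_crime_scene = {"locus_1": (10, 13), "locus_2": (8, 11), "sex": "XX"}
print(speculative_search(new_crime_scene, stored_individual_profiles))   # ['person_001']
```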
Although DNA can undoubtedly be useful to exonerate the innocent, a database of individual DNA profiles (as opposed to crime scene profiles) is never necessary to exonerate an innocent person, since this can always be done by comparing the DNA profile of the innocent suspect directly with the crime scene DNA profile. The Innocence Project in the USA has helped free a number of innocent people – including many on death row – by ensuring that crime scene DNA evidence is analyzed and used correctly. However, individuals who have been wrongly convicted of a crime do not need their DNA profile to be on a database in order to be exonerated: their DNA profile can be taken from them at any time provided the relevant crime scene evidence has been retained.

When a match report is sent to the police they will need to do further work to use the information to try to solve the crime. The ‘added value’ of putting individuals on the database is only to introduce new suspects into a past or future investigation. This depends on the number of ‘cold hits’ (unanticipated DNA matches) and the extent to which these matches lead to successful prosecutions. Matches with known suspects do not require a database, although they do require the ability to collect and use DNA during an investigation.
Stored DNA profiles are useless to the police unless some information about where they came from is also kept. Crime scene DNA profiles must be stored with information about when they were collected and where they came from, or linked to databases which contain this information by a crime reference number. The type of information that is stored with an individual’s DNA profile will typically include their name and some other information about their appearance, suspected crime, and when the sample was taken. The individual’s record on the DNA database may also be linked to other records on different databases, such as the record of their arrest, which can also include more personal data to make it easier for the police to track them down. A unique bar code can also be used to link the computer record containing the DNA profile back to the original DNA sample, stored in a laboratory.
Some countries keep databases only of crime scene DNA profiles, but others also keep databases of individuals’ DNA profiles. If only DNA profiles from crime scenes are stored, a new individual’s profile can still be searched against all past crime scene profiles to see if they are a suspect for any of these offences. However, retention of individuals’ DNA profiles allows them to remain suspects for any future crime. The largest databases of individuals’ DNA profiles are in the UK and USA which each store
the DNA profiles of about 5 million people. Currently, laboratories in both these countries also store individuals’ DNA samples linked to the person’s record on the database.
What are the concerns about DNA databases?
The retention of DNA profiles and samples taken from crime scenes can be readily justified because they might be useful if an investigation needs to be re-opened in the future (either to convict a perpetrator, or to exonerate an innocent person). The major human rights concerns relate to the widening of the group of individuals (not crime scene samples) from whom DNA can be taken and then retained. This is because:
• DNA can be used to track individuals or their relatives, so a DNA Database could be misused by Governments or anyone who can infiltrate the system;
• In order to be useful to track suspects, DNA records are linked to other computer records such as records of arrest, which can be used to refuse someone a visa
or a job, or lead to them being treated differently by the police;
• DNA samples and profiles contain private information about health and genetic relationships (including paternity and non-paternity).
Expanding DNA databases to include many persons who have merely been arrested represents a significant shift in which the line between guilty and innocent is becoming blurred. It undermines the presumption of innocence by treating people who have merely been arrested as somehow less innocent than others who have not been convicted of any offence. DNA databases also shift the burden of proof because people with records on them may be required to prove their innocence if a match occurs between their DNA profile and a crime scene DNA profile at some point in the future.
DNA is not foolproof so procedures need to be in place to ensure that matches between individuals’ DNA profiles and stored DNA profiles do not result in miscarriages of justice. The more DNA profiles that are compared the more likely errors are to occur, and problems can also result due to poor laboratory procedures, failure to require corroborating evidence, or if DNA evidence is planted at a crime scene.
These concerns are exacerbated by wider problems within many criminal justice
systems, which may result in racial, religious or political bias in whose DNA and personal information is kept, or insensitivity to the impacts on vulnerable people, including
children and the mentally ill.
The benefits of DNA databases in solving crimes must be weighed against these downsides. The main issues are outlined in more detail below.
DNA databases, privacy and human rights
DNA is left at crime scenes, but it is also left elsewhere. The retention of DNA and fingerprints from an individual on a database therefore allows a form of biological tagging or ‘biosurveillance’, which can be used to attempt to establish where they have been.
This means that DNA databases can be used to track individuals who have not committed a crime, or whose ‘crime’ is an act of peaceful protest or dissent. For example, in a state where freedom of speech or political rights are restricted, the police or secret services could attempt to take DNA samples from the scene of a political meeting to establish whether or not particular individuals had been present.
Paper-based databases of individuals’ records have been a powerful force in facilitating oppressive regimes and genocide, from the Nazis and the Stasi to Rwanda. DNA
databases link searchable computer records of personal demographic information, such as name and ethnic appearance, with the ability to biologically tag an individual and track their whereabouts using their DNA profile. An individual’s relatives may also be identified through partial matching with their DNA. Thus, DNA databases significantly shift the balance of power from the individual to the state.
These concerns do not relate solely to the storage of DNA profiles and samples, but also to the other information that may be kept. For example, if DNA is collected on arrest and retained indefinitely, there is additional information kept in the police records of arrest
and in the samples which may be stored in the laboratories which analysed them. The former may be accessed to assess someone’s suitability for a job or visa, leading to potential erosion of their rights purely as a result of being arrested, even if they have not been convicted by a court. The latter contain additional personal information, such as whether someone is a carrier for a genetic disorder, that could be accessed and revealed if the sample is re-analyzed.
Concerns about ‘biosurveillance’ extend beyond the state to anyone who can infiltrate the system and obtain access to an individual’s DNA profile. This might include
organized criminal or terrorist groups, or anyone seeking to track down an individual. For
example, individuals on witness protection schemes may have their appearance altered but cannot change their DNA. If someone becomes suspicious about them and collects their DNA, their identity could be revealed by matching this to a stored DNA profile on a database, if this is accessible and linked to their old identity. Their relatives might also be found through ‘familial searching’ (looking for partial matches with the DNA profiles of other people on the database). Children who have been separated from an adult for their own protection could also be tracked down by someone with access to a DNA database
if the adult has a sample of their DNA (taken from an old toothbrush, for example), or who shares part of their DNA profile because they are related to them.
Expansion in uses: familial searching, research uses and counter-terrorism
Familial searching is a process by which investigators look for partial matches between crime scene DNA profiles and the DNA profiles of individuals stored on a DNA database. This can be used to identify a relative of the suspect who can then be interviewed, potentially leading to the suspect’s identification and perhaps a successful prosecution. Familial searching leads to a long list of partial matches which must be shortened by additional DNA testing and/or other policework. It has been pioneered in the UK, where it has helped to solve a number of serious crimes. However, it raises additional concerns about the privacy of individuals who are not suspects but who may be related to a suspect. In particular, instances of non-paternity might inadvertently be revealed through the process of familial searching. If used routinely, familial searching could lead to significant abuses by allowing investigators or anyone who infiltrates the database to track down the relatives of political dissenters or to pursue enemies or identify paternity and non-paternity for personal, commercial or criminal reasons.
DNA databases consist of collections of biological samples (if stored), computerized DNA profiles and other information (such as criminal history and ethnicity) that may be valuable to genetic researchers. However, much research in this area is contentious due to the history of eugenics. In particular, attempts to link genetic characteristics to discredited concepts of race or to identify ‘genes for criminality’ are controversial. Unlike
databases set up for research purposes, forensic DNA databases contain data collected
without consent and/or sometimes with consent for policing purposes only. Any attempt to use such databases to draw inferences about genetic characteristics is therefore in breach of established ethical standards. Such breaches have already occurred with some existing databases.
The use of DNA databases in criminal investigations requires an individual’s identity to be revealed only if there is a match between their DNA profile and a crime scene DNA profile. Until recently, uses of DNA databases were restricted largely to looking for matches with crime scene DNA profiles. However, this is now changing. For example, in the UK DNA collected and retained under the Counter-Terrorism Act 2008 can now be used for “identification…of the person from whom the material came”. This is a recent change of use which allows biological surveillance of certain individuals (i.e. the ability to use an individual’s DNA to track and identify them, whether or not they are suspected of committing a crime). Clearly this may be useful to security services but it is also potentially open to abuse. UK Government proposals to collect DNA and fingerprints routinely on arrest for any offence (including dropping litter and parking fines) and use them routinely for identification purposes (i.e. by matching the individual to their details on the DNA and fingerprint databases, using facilities set up in shopping centers for
such purposes) were dropped in 2008 following public outcry. However, this remains a potential use for DNA databases in the future, particularly as new technology develops
which may allow on-the-spot, real-time DNA testing and matching with database records.
A variety of techniques to predict individual characteristics from a DNA sample (hair, eye and skin color and surnames) are also under development, with a view to identifying individuals who do not have a record on a DNA database. Scientific opinions differ on
the likely value of such techniques, due to their fairly limited predictive value.
DNA is not foolproof
False matches between an individual’s profile and a crime scene DNA profile can occur by chance, or due to poor laboratory procedures, and the implications of someone’s DNA being at a crime scene can also be misinterpreted.
The chance of a false match between an individual’s DNA profile and a crime scene DNA profile depends on the system of DNA profiling that is used. The standards used to create a DNA profile have changed with time and vary from country to country: the US uses 13 STRs at different places in the genetic sequence, but most other countries use fewer STRs. The UK system (which uses 10 STRs) is estimated to have about a 1 in a billion ‘match probability’: this is the likelihood that an individual’s DNA profile matches a crime scene DNA profile by chance even if the DNA at the crime scene did not come from them. Although this likelihood is very low, the number of false matches that occur depends on the number of comparisons that are made between different DNA profiles. If every crime scene DNA profile is compared against every stored DNA profile on a large database by speculative searching, a small number of false matches are expected to occur simply by chance. False matches are more likely to occur with relatives, as the brother or cousin of someone who has committed a crime will share some of their relative’s DNA sequence. This problem is exacerbated if some crime scene DNA profiles are not complete, as the likelihood of a false match can then increase considerably.
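The effect of database size on chance matches can be illustrated with some rough arithmetic. The figures below are either taken from this text (the 1 in a billion match probability, a database of about 5 million people) or assumed purely for illustration (the number of stored crime scene profiles and the partial-profile match probability); they are not official statistics. The point is only that the expected number of coincidental matches grows with the number of comparisons made.

```python
# Back-of-the-envelope illustration; the inputs are assumptions, not official statistics.
match_probability = 1e-9           # ~1 in a billion for a full UK-style profile
stored_individuals = 5_000_000     # individuals' profiles on the database (from the text)
crime_scene_profiles = 400_000     # assumed number of stored crime scene profiles

comparisons = stored_individuals * crime_scene_profiles
expected_false_matches = comparisons * match_probability
print(f"{comparisons:.2e} comparisons -> ~{expected_false_matches:.0f} chance matches")

# A partial or mixed crime scene profile has a much higher match probability,
# so the expected number of chance matches rises sharply.
partial_match_probability = 1e-6   # assumed, for illustration only
print(f"With partial profiles: ~{comparisons * partial_match_probability:.0f} chance matches")
```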
The quality of DNA profiles taken from a crime scene can vary according to the source of the DNA, whether it has become degraded over time, and whether the DNA is a mixture
from more than one person. Tiny samples of DNA from a single cell are more prone to errors in analysis and can also be easily transferred to a crime scene, even if an individual was not present. In contrast, a large quantity of blood found at the scene of a murder or burglary can give very reliable results. A mixture can be interpreted in many ways since there is no clear way to tell which part of the profile comes from which individual: this means that mixed DNA profiles are open to interpretation, particularly if a forensic laboratory is biased by trying to find a match with a particular suspect. Many DNA profiles taken from crime scenes are not complete or contain mixtures of more than one person’s DNA: this increases the likelihood of a false match with the wrong person.
As the size of a DNA database increases the number of false matches is expected to increase: this can waste police time following false leads, and lead to potential miscarriages of justice.
DNA samples can also be wrongly analyzed or mixed up during laboratory procedures, resulting in a match with the wrong person if quality assurance procedures are not followed.
Comparing DNA profiles from different countries can be complicated by the fact that not all countries test STRs at the same places along the DNA. This means that the crime scene DNA profile and the individual’s DNA profile can often be compared at a smaller number of places than would usually be the case, leading to a higher likelihood that a false match will occur by chance. Routine cross-border speculative searching of crime scene DNA profiles against stored DNA profiles from individuals arrested in other countries is therefore likely to throw up many more false matches than if such searches are restricted to one country or limited to only a small number of profiles.
Even if a DNA match is genuine, a person’s presence at a crime scene may not mean that they committed the crime. The weight attached to the match should depend on whether there is additional corroborating evidence: as well as the potential for false matches there may be a credible alternative explanation for the person’s presence at the crime scene, or the DNA evidence could have been planted. Using speculative
searching to identify suspects can mean that the balance of evidence is shifted: the onus is on the individual to prove they did not commit the crime, rather than the other way
around. Any individual with a record on a DNA database may also be vulnerable to being falsely implicated in a crime by the planting of evidence: by corrupt police officers,
powerful government agencies, or by criminals. Even if a miscarriage of justice does not occur, an individual who is falsely accused of a crime as a result of a DNA match may be subjected to a stressful police inquiry, pre-trial detention, or extradition to a foreign country.
Racial bias, mass screens and impacts on children and vulnerable people
In many countries, ethnic minorities are more likely to be arrested and prosecuted for criminal offences. DNA databases often include disproportionate numbers from such minorities and therefore the impacts on their privacy and rights may be greater than on others. Records on DNA databases, or linked records on police databases, often contain information about ethnicity. Records of names can also be searched for typical
surnames associated with a particular country of origin or religion, and the computer records of such individuals can therefore be identified. This opens the possibility of such records being used to facilitate discrimination – restricting access to jobs, visas or
housing – or more serious abuses of human rights, including ethnic cleansing and even genocide.
DNA samples may also be requested during an investigation on a voluntary basis. This may be necessary for example to check that a DNA profile from the crime scene is not that of the victim or of friends or relatives with a legitimate reason to be present. Sometimes this process is extended to mass screens of everyone living in a particular area, in an attempt to narrow down an investigation. There have been many instances in the US and UK where such ‘DNA dragnets’ have been racially targeted, people have been coerced into taking part and/or their records have been kept on databases without their consent following the investigation. Mass screens are rarely effective unless there
is a specific reason to test a specific group of people (for example, the suspect is known to work in a particular office), and they can lead to loss of trust in policing in targeted communities.
People who are mentally ill and children are often arrested for minor offences. If their DNA is taken routinely on arrest (as has happened in England and Wales since 2004) their privacy and rights can also be disproportionately affected. Vulnerable individuals can find having their DNA taken and their records kept particularly disturbing and some individuals have even become suicidal as a result. Stigmatizing children and young people for minor crimes or on the basis of false accusations can also be counter- productive: some evidence suggests that this may make them more likely to commit offences in the future.
A good use of police resources?
DNA is undoubtedly a valuable tool in criminal investigations and has helped to catch the perpetrators of some very serious crimes, including rapes and murders. However, the idea that there would be no more rapes or murders if everyone had their DNA profile recorded on a database is totally mistaken. In addition to concerns about privacy and rights, the main limitations to this idea are: (i) the difficulties in collecting relevant and useful crime scene DNA evidence; (ii) the very low likelihood of most people committing serious crimes for which DNA evidence might be relevant; (iii) the costs and practical difficulties associated with collecting and keeping reliable computer records of DNA profiles and associated information from large numbers of people; (iv) the impacts on public trust in policing.
For example, in England and Wales a major government project to expand the use of DNA began in 2000. Positive benefits were achieved by improving the collection of DNA from crime scenes and speeding up its analysis. However, there were still real practical limits in how much useable DNA could be collected in this way: despite improvements in procedures, DNA profiles are still loaded to the DNA database from less than 1% of recorded crimes. Many crime scenes do not reveal any useable DNA, or may include DNA from multiple passers-by. Thus there will always be a real practical limit to how many crimes can be solved using DNA.
More importantly, a massive increase in the number of individuals’ DNA profiles collected and stored on the DNA database in England and Wales did not increase the likelihood of being able to prosecute someone for a crime. This appears to be because DNA profiles were collected and stored from too wide a pool of people (everyone arrested for any offence for which the police keep records, regardless of whether they
were ultimately charged or convicted). The likelihood of any of these individuals committing a crime for which DNA evidence was relevant was very low, so most of these stored profiles did not help to solve any crimes. The inclusion of hundreds of thousands of innocent people’s records on the DNA database also resulted in a loss of public trust
in policing. Although it is difficult to quantify the impacts, this may have made some crimes more difficult to solve by making some people less cooperative with police investigations. In Scotland, stricter rules on the retention of DNA profiles maintained public support and Scotland’s DNA database remained an effective tool in criminal investigations despite most innocent people’s records being deleted.
In 2010, putting someone’s DNA profile on the database in England and Wales was estimated to cost £30 to £40 (USD 46-62) and storing one person’s DNA sample cost about £1 (USD 1.54) a year. Running the computer database itself cost £4.29 million (USD 6.6 million) in 2008/09 and additional unknown policing costs were associated with crime scene examination and the police time spent taking DNA and fingerprints from people who had been arrested. If DNA were to be collected from the whole population, rather than only from people who have been arrested, this would obviously cost substantially more and also raise practical and ethical difficulties about how to collect DNA from everyone without consent. Collecting DNA from foreign visitors would add further to the costs and difficulties and could have negative impacts on people willing to travel to the country. Collecting DNA from babies at birth would raise serious ethical issues about consent and the role of the medical profession. Large databases of children’s DNA profiles together with their contact information would be attractive to abusers or to anyone wishing to establish paternity or non-paternity.
In addition, bigger databases – and more comparisons between DNA profiles stored in different countries – increase the likelihood of false matches, as described above. These can waste police time following false leads, even if they do not lead to miscarriages of justice.
This means that, rather than bigger databases being better, there is a trade-off between the pros and cons: restrictions on whose DNA is collected and stored are needed if a database is to be cost-effective. The main benefits of DNA in criminal investigations appear to have been delivered by: improving the collection and analysis of crime scene DNA (including the number of crime scenes visited and the speed and quality of the analysis of samples); ensuring that known suspects for a crime have their DNA analyzed and compared with relevant DNA evidence in a way that does not misrepresent the
value of this evidence in court; and retaining the DNA profiles of repeat offenders. Reanalyzing evidence from old crime scenes has also helped to correct some serious
miscarriages of justice.
Safeguards and Standards
Different ethical, legal and technical standards are set for DNA databases in each country. Important questions include:
• Under what circumstances should the police be allowed to collect DNA and store samples and profiles?
• Are there any procedures to destroy individuals’ samples or records when they are no longer needed?
• What data is sent to whom and is it kept securely?
• What technical standards must be met by the DNA profiles before they are loaded to the database?
• Are quality assurance procedures being followed in the labs that analyse the samples?
• How are DNA matches used in court and is corroborating evidence needed?
• Can the database and samples be used for additional purposes other than solving crimes?
• Is there any independent oversight and information about how the database operates?
• Are safeguards included in legislation, or only in guidelines that can easily be changed?
Some of the relevant issues are considered in more detail below.
Collection of DNA
Most DNA samples collected by the police are taken without consent, usually using a mouth swab whilst the individual is in police custody. In such circumstances, the police may be allowed to use ‘reasonable force’ if someone refuses to give a sample. A DNA sample can be taken without consent by pulling a few hairs from the person’s head: the hair roots, but not the hair itself, contain their DNA.
One important safeguard is legislation that restricts the collection of DNA by the police without consent to circumstances where it is necessary for solving crimes, and where
the interference with a person’s rights is not disproportionate to any benefit that might be achieved. There is a wide range of views in different countries about when DNA should
be collected by the police. Issues include:
• Should the DNA be directly relevant to the crime for which an individual has been arrested, or can DNA be taken just to search an individual’s DNA profile against stored DNA profiles from other crimes?
• If DNA can be taken when it is not needed for a specific investigation, are there other restrictions on when it should be collected (depending, for example, on the seriousness of the alleged offence, whether the individual has been charged or merely arrested, or their age or other circumstances)?
• Should there be any independent oversight for these decisions, should they be left to the discretion of the police, or should DNA collection be routine if certain conditions are met so that everyone is treated fairly?
Additional safeguards are needed for people who give their DNA to the police on a voluntary basis during the course of an investigation: their consent to this should be fully informed and freely given, without coercion from the police or others.
DNA analysis and reporting
DNA may be sent to police or commercial labs for processing. Data security and privacy policies are critical to ensure that private information is not revealed to unauthorised persons during processing, or accessed by someone who wants to infiltrate the system.
Laboratory quality assurance procedures are essential if people are not to be falsely accused of crimes due to sample mix-ups or poor quality DNA profiles. New procedures,
such as procedures to interpret very small samples of DNA or mixed DNA samples also need careful independent evaluation.
Data loading and match reporting
Procedures need to specify: the DNA profiling system to be used; how complete a DNA profile needs to be before it can be uploaded to the Database; and reporting requirements for matches with partial DNA profiles.
Reporting procedures to the police and to the courts need to ensure both privacy and reliability of the information that is provided. Investigators also need to understand the limitations of the technology and how and why it can be misinterpreted.
Retention of DNA profiles, samples and other data
Because of the impacts on privacy and human rights, one of the most contentious issues has been the question of when biological samples, DNA profiles and other police records can be retained.
Some countries, such as Germany, destroy each individual’s DNA sample as soon as the computerised DNA profile that is needed for identification purposes has been obtained from it. This protects privacy by preventing the sample from being re-analysed to obtain personal health information. However, other countries retain samples, in some cases indefinitely.
A separate question is how long individuals’ DNA profiles and other personal information should be stored on computer databases. Most countries with DNA databases keep the DNA profiles of people who have committed serious crimes such as rape and murder on the database indefinitely, but there are a wide variety of rules for entering and removing people who are convicted of more minor crimes.
When DNA databases were first set up, DNA profiles, samples and police computer records were legally required to be deleted and destroyed if someone was acquitted or charges against them were dropped. In England and Wales the law was changed in
2001 to allow the indefinite retention of all this information, but in December 2008 the
European Court of Human Rights found unanimously that this practice was in breach of the European Convention on Human Rights. The law in England and Wales has not yet
been changed, although the new Government has promised to implement the judgment.
Safeguards are also needed to ensure that people who have given their DNA voluntarily during an investigation should not be entered into databases or have their data retained against their will.
Access to and uses of stored data and samples
Access to DNA samples and to the DNA database must be restricted to a small number of authorised persons if security breaches are not to occur.
Uses of DNA database records and samples should also be restricted. Key issues include:
• When speculative searches of the database can be made;
• Under what circumstances DNA profiles from overseas can be searched against a database and how decisions are made to exchange data internationally;
• Whether and under what circumstances ‘familial searches’ of a database can be made (looking for partial matches with the DNA profile of a relative);
• Whether the database can be used for research purposes and if so, whether research on people’s genetic characteristics can take place without their consent;
• Whether the database can be used for the identification of individuals who are not suspects for a crime.
Use of DNA evidence in court
A key issue is whether prosecution requires corroborating evidence, or whether a person can be convicted on the basis of DNA evidence alone. The process of explaining DNA evidence to the court is also crucial: the value of DNA evidence can easily be overstated by using misleading statistics, particularly when the crime scene DNA profile is not complete. Expert forensic witnesses must not be under pressure to misrepresent evidence in cases where the interpretation may be in doubt (for example, when a mixed DNA profile is involved).
Another important issue is whether extradition or transfer of suspects to other countries can take place on the basis of a DNA match alone.
Oversight and governance
Legislation and policies can only safeguard privacy and rights and prevent miscarriages of justice if there is sufficient scrutiny of whether policies are being properly implemented and what the outcomes are. This requires independent oversight as well as the regular publication of public information about the size, costs and effectiveness of the database in solving crimes.
The impacts of a DNA database on privacy, human rights and justice will also depend on the context in which it operates, i.e. on the integrity of the criminal justice system in the country as a whole.
DNA databases raise important issues about privacy and human rights. Safeguards are essential because:
• DNA can be used to track individuals or their relatives, so a DNA Database could be misused by governments or anyone who can infiltrate the system;
• In order to be useful to track suspects, DNA records are linked to other computer records such as records of arrest, which can be used to refuse someone a visa
or a job or otherwise discriminate against them;
• DNA samples and profiles contain private information about health and genetic relationships (including paternity and non-paternity).
Essential safeguards include legal restrictions on the circumstances in which DNA and associated information can be collected and retained.
DNA is not foolproof, so procedures also need to be in place to ensure that misleading interpretations of DNA evidence do not result in miscarriages of justice. |
Using Preformatted Text
Browsers usually ignore invisible formatting that doesn’t affect page content, such as tabs, extra spaces, extra line feeds, and the like. If you need to display text exactly as entered, however, you can use the Preformatted paragraph format, which wraps the text in the <pre>...</pre> tags and makes browsers display all of the text characters.
Originally, preformatted text was meant to display tabular data in rows and columns, as in the output of a spreadsheet. To make the information line up, browsers display preformatted text in a monospaced font such as Courier.
Preformatted text lines up neatly, as in this table.
To apply preformatting
- Select the text you want to change.
- From the Format pop-up menu of the HTML mode of the Property inspector, choose Preformatted, or choose Format > Paragraph Format > Preformatted Text.
- The text changes appearance.
Breaking the Poverty Cycle through Fair Trade
Since the beginning of the Fair Trade Movement in the 1940s, there have been continued debates over the effectiveness of Fair Trade, and numerous studies have been conducted. Some of the most recent studies found that Fair Trade has had the following positive impacts:
- more capacity building, allowing small-scale producers to increase efficiency and expand their markets;
- more children having access to school as a result of less financial pressure, and more schools built in rural communities through the Fair Trade premium;
- more stable income enabling families to invest in crops and family business; and
- more collaboration between small-scale producers that build relationships and foster idea exchange.
All the above help a family to break the poverty cycle.
There have been criticisms of Fair Trade. The main points of contention are whether producers get their equitable share of the Fair Trade premium, whether studies of the effectiveness of Fair Trade are based on large enough samples, whether producer cooperatives are operated efficiently, and whether retailers adhere to Fair Trade principles. Most of these issues are more or less addressed by the certification system for Fair Trade food, textile and handicraft producer groups. Like any other system, it has imperfections. But we believe that the Fair Trade Movement has given tangible help to the world's most disadvantaged people. Having Fair Trade is better than having no Fair Trade. As for the imperfections, we can all help to make it better.
Themes are the fundamental and often universal ideas explored in a literary work.
Steinbeck consistently and woefully points to the fact that the migrants’ great suffering is caused not by bad weather or mere misfortune but by their fellow human beings. Historical, social, and economic circumstances separate people into rich and poor, landowner and tenant, and the people in the dominant roles struggle viciously to preserve their positions. In his brief history of California in Chapter 19, Steinbeck portrays the state as the product of land-hungry squatters who took the land from Mexicans and, by working it and making it produce, rendered it their own. Now, generations later, the California landowners see this historical example as a threat, since they believe that the influx of migrant farmers might cause history to repeat itself. In order to protect themselves from such danger, the landowners create a system in which the migrants are treated like animals, shuffled from one filthy roadside camp to the next, denied livable wages, and forced to turn against their brethren simply to survive. The novel draws a simple line through the population—one that divides the privileged from the poor—and identifies that division as the primary source of evil and suffering in the world.
The Grapes of Wrath chronicles the story of two “families”: the Joads and the collective body of migrant workers. Although the Joads are joined by blood, the text argues that it is not their genetics but their loyalty and commitment to one another that establishes their true kinship. In the migrant lifestyle portrayed in the book, the biological family unit, lacking a home to define its boundaries, quickly becomes a thing of the past, as life on the road demands that new connections and new kinships be formed. The reader witnesses this phenomenon at work when the Joads meet the Wilsons. In a remarkably short time, the two groups merge into one, sharing one another’s hardships and committing to one another’s survival. This merging takes place among the migrant community in general as well: “twenty families became one family, the children were the children of all. The loss of home became one loss, and the golden time in the West was one dream.” In the face of adversity, the livelihood of the migrants depends upon their union. As Tom eventually realizes, “his” people are all people.
The Joads stand as exemplary figures in their refusal to be broken by the circumstances that conspire against them. At every turn, Steinbeck seems intent on showing their dignity and honor; he emphasizes the importance of maintaining self-respect in order to survive spiritually. Nowhere is this more evident than at the end of the novel. The Joads have suffered incomparable losses: Noah, Connie, and Tom have left the family; Rose of Sharon gives birth to a stillborn baby; the family possesses neither food nor promise of work. Yet it is at this moment (Chapter 30) that the family manages to rise above hardship to perform an act of unsurpassed kindness and generosity for the starving man, showing that the Joads have not lost their sense of the value of human life.
Steinbeck makes a clear connection in his novel between dignity and rage. As long as people maintain a sense of injustice—a sense of anger against those who seek to undercut their pride in themselves—they will never lose their dignity. This notion receives particular reinforcement in Steinbeck’s images of the festering grapes of wrath (Chapter 25), and in the last of the short, expository chapters (Chapter 29), in which the worker women, watching their husbands and brothers and sons, know that these men will remain strong “as long as fear [can] turn to wrath.” The women’s certainty is based on their understanding that the men’s wrath bespeaks their healthy sense of self-respect.
According to Steinbeck, many of the evils that plague the Joad family and the migrants stem from selfishness. Simple self-interest motivates the landowners and businessmen to sustain a system that sinks thousands of families into poverty. In contrast to and in conflict with this policy of selfishness stands the migrants’ behavior toward one another. Aware that their livelihood and survival depend upon their devotion to the collective good, the migrants unite—sharing their dreams as well as their burdens—in order to survive. Throughout the novel, Steinbeck constantly emphasizes self-interest and altruism as equal and opposite powers, evenly matched in their conflict with each other. In Chapters 13 and 15, for example, Steinbeck presents both greed and generosity as self-perpetuating, following cyclical dynamics. In Chapter 13, we learn that corporate gas companies have preyed upon the gas station attendant that the Joads meet. The attendant, in turn, insults the Joads and hesitates to help them. Then, after a brief expository chapter, the Joads immediately happen upon an instance of kindness as similarly self-propagating: Mae, a waitress, sells bread and sweets to a man and his sons for drastically reduced prices. Some truckers at the coffee shop see this interchange and leave Mae an extra-large tip.
Motifs are recurring structures, contrasts, and literary devices that can help to develop and inform the text’s major themes.
When the novel begins, the Joad family relies on a traditional family structure in which the men make the decisions and the women obediently do as they are told. So invested are they in these roles that they continue to honor Grampa as the head of the family, even though he has outlived his ability to act as a sound leader. As the Joads journey west and try to make a living in California, however, the family dynamic changes drastically. Discouraged and defeated by his mounting failures, Pa withdraws from his role as leader and spends his days tangled in thought. In his stead, Ma assumes the responsibility of making decisions for the family. At first, this shocks Pa, who, at one point, lamely threatens to beat her into her so-called proper place. The threat is empty, however, and the entire family knows it. By the end of the novel, the family structure has undergone a revolution, in which the woman figure, traditionally powerless, has taken control, while the male figure, traditionally in the leadership role, has retreated. This revolution parallels a similar upheaval in the larger economic hierarchies in the outside world. Thus, the workers at the Weedpatch camp govern themselves according to their own rules and share tasks in accordance with notions of fairness and equality rather than power-hungry ambition or love of authority.
Symbols are objects, characters, figures, and colors used to represent abstract ideas or concepts.
Rose of Sharon’s pregnancy holds the promise of a new beginning. When she delivers a stillborn baby, that promise seems broken. But rather than slipping into despair, the family moves boldly and gracefully forward, and the novel ends on a surprising (albeit unsettling) note of hope. In the last few pages of his book, Steinbeck employs many symbols, a number of which refer directly to episodes in the Bible. The way in which Uncle John disposes of the child’s corpse recalls Moses being sent down the Nile. The image suggests that the family, like the Hebrews in Egypt, will be delivered from the slavery of its present circumstances.
When the Joads stop for gas not long after they begin their trip west, they are met by a hostile station attendant, who accuses them of being beggars and vagrants. While there, a fancy roadster runs down their dog and leaves it for dead in the middle of the road. The gruesome death constitutes the first of many symbols foreshadowing the tragedies that await the family.
Tom, after he gets turned away from the northern town, decides to go around the angry Californians to a work camp that is safe for his family and away from the cops.
Ma Joad is basically the only reason the family is still together. She gives support to the family and carries most of the burden.
I do appreciate Steinbeck's powerful insight on migrant work in California, despite a small resentment at his shaming of my state.
What to do with this activity?
Have you ever seen lightning strike the ground during a thunderstorm? Or perhaps you have experienced an electric shock after rubbing your feet on a synthetic carpet and then touching something? They are caused by the same thing - static electricity.
Static electricity usually happens when things get rubbed together. Electrons are tiny particles that move inside each atom (the building blocks of our world) and between atoms. When things are rubbed together the electrons move about and mix. Things become either positively charged if they lose some of their electrons, or negatively charged if they get extra electrons. Then things need to re-balance. The negatively charged object is attracted to the positively charged object.
In the case of lightning, the rain clouds rubbing together builds up a very big negative charge, until the extra electrons are attracted to the nearest positively charged object - something on the ground.
Watch this video from Hoopla Kidz Lab which shows you how to do an experiment with static electricity involving your hair, a balloon and some tissue paper.
Talking and listening helps your child build their language and thinking skills – this is a great foundation for them to learn more. Asking questions, finding out answers and looking up words together will help build your child’s vocabulary and knowledge of the world around them.
Encourage your child to give their opinions and to ask questions about things they see around them. Help your child to make decisions by discussing their ideas. Check if your child understands different things they hear. Encourage your child to teach you new words and phrases they have learnt.
FOUR SIMILARITIES BETWEEN WOMEN AND ASIAN AMERICANS DURING WWII
- both groups faced discrimination
- both communities served in the war effort
- they're both communities that fought for equal rights
- Asian American women worked at home taking care of the kids and cooking, and American women in general were expected to do the same
THREE DIFFERENCE BETWEEN WOMEN AND ASIAN AMERICANS DURING WWII
- Asian Americans worked as cooks, waiters, gardeners, shopkeepers and technicians while women worked as nurses in the war.
- Many Japanese Americans worked in industries (selling produce and flowers) while American women normally worked in factories.
- Asian Americans were ranked lower in status than American women
TREATMENT OF ASIAN AMERICANS DURING WWII
- The Japanese American community was suspected of spying because of the surprise attack on Pearl Harbor, Hawaii on the morning of Sunday, Dec. 7, 1941.
- After Roosevelt signed Executive Order 9066, Japanese Americans were incarcerated and relocated to internment camps holding up to 120,000 people.
- Korematsu v. United States was a court battle against Executive Order 9066. Korematsu faced discrimination, lost the case, and was sentenced to five years of probation.
ASIAN AMERICAN RESPOND DURING WWII
- 3,600 Japanese Americans entered the armed forces
- 22,00 live in Hawaii or away from relocation zone
- About 77,000 people received compensation for the unjust loss of their liberties
- Japanese Americans stayed loyal to the United States
ASIAN AMERICANS CONTRIBUTION TO WWII
- China was a vital ally to the U.S., and Roosevelt agreed with the Chinese government to purchase its resources. China went from being known as the "sick man of Asia" to being an important ally of the United States.
- Chinese Americans benefited from these events, expanding their influence socially and economically. Japanese Americans, by contrast, were known for selling produce and flowers.
ANALYSIS/OPINION OF ASIAN AMERICANS DURING WWII
- Japanese Americans were targeted the most. They stayed loyal even when they were discriminated against.
- The internment camps were like concentration camps where people were kept separated.
SLOGAN: "IF NO ONE IS WILLING, WE WILL"
Women during WWII
Treatment of Women during World War II
- Women faced inequality because they didn't have the same opportunities as men did (e.g. jobs)
- Women were considered "second-class citizens"
- They were paid less than men
Response to Treatment
- Women went on strike in October 1943 for one week. The strike was supported by men as well. It worked: women and men received the same wage after this movement.
Contributions from Women during WWII
- Women weren't the cause of WWII, but there was propaganda everywhere recruiting men to join the army and go to war. Many women's husbands left to fight, and as a result, plenty of jobs opened up for women to take.
Slogan: "Keep on Trying"
Analysis/Opinion to the actions of Women
- I am inspired by how the women took a stand against injustice. I'm glad that it worked out in their favor. They were very strong and brave during this time.
Propositional logic, also known as sentential logic ("sentential" means "relating to sentences") and statement logic, is the branch of deductive logic that involves drawing conclusions from premises that are in the form of propositions. As discussed on the logic page, propositions are statements that are either true or false, but not both.
In contrast to branches of logic such as term logic (also called syllogistic logic) or predicate logic, propositional logic looks at propositions as a whole and does not study logical properties and relationships that depend on parts of a statement, such as the subject or predicate of a statement. For example, taking the proposition "Socrates is a man," propositional logic treats the entire sentence as an indivisible unit; we can't use propositional logic to draw conclusions about, for example, the subject of the sentence ("Socrates") or the predicate ("a man"). Rather, propositional logic is used to combine individual propositions together into compound propositions using various operators, and examine the truth value of these compound propositions. Individual propositions are combined with operators called sentence connectives. The four most common operators are:
|Negation||"not" or "it is not the case that"|
|Conjunction||"and"|
|Disjunction||"or"|
|Material Conditional||"if ... then"|
Note that the negation operator operates on a single proposition. For example, taking the negation of "The sky is blue" would result in "It is not the case that the sky is blue". The other three operate on a pair of propositions. Examples of such connectives are: "Roses are red and violets are blue," "You are in Sweden or you are in New York," and "If you ate the last piece of cake, then there is no cake left."
Before discussing the truth values of compound propositions, it can be helpful to introduce a suitable notation. While there are several different symbols that can be used, the following symbols are in common use:
|Individual propositions||p, q, r, etc., or any other variable name.|
|Negation||¬ (e.g., ¬p)|
|Conjunction||∧ (e.g., p ∧ q)|
|Disjunction||∨ (e.g., p ∨ q)|
|Material Conditional||→ (e.g., p → q)|
|p||q||p → q|
|T||T||T|
|T||F||F|
|F||T||T|
|F||F||T|

|p||q||p ∨ q|
|T||T||T|
|T||F||T|
|F||T||T|
|F||F||F|

|p||q||p ∧ q|
|T||T||T|
|T||F||F|
|F||T||F|
|F||F||F|

|p||¬p|
|T||F|
|F||T|
We can represent the values that these operators return in truth tables. A truth table lists every possible value of the individual propositions, as well as the result of the operation. The truth tables for the four operators are shown above.
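The same tables can be generated programmatically. The short Python sketch below (with function names chosen just for this illustration) evaluates each connective for every combination of truth values:

```python
from itertools import product

# The four connectives, defined on Python booleans.
def negation(p):        return not p
def conjunction(p, q):  return p and q
def disjunction(p, q):  return p or q
def conditional(p, q):  return (not p) or q   # false only when p is true and q is false

# Print a combined truth table for p and q.
headers = ["p", "q", "p→q", "p∨q", "p∧q", "¬p"]
print("  ".join(f"{h:<5}" for h in headers))
for p, q in product([True, False], repeat=2):
    values = [p, q, conditional(p, q), disjunction(p, q), conjunction(p, q), negation(p)]
    print("  ".join(f"{str(v):<5}" for v in values))
```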
It is important to keep in mind that these logical symbols do not function in the exact same way in which the words "not", "and", "or", or "if ... then" are used in English. For example, the English "or" is often exclusive ("soup or salad" usually means one or the other, but not both), whereas the disjunction p ∨ q is true even when both p and q are true; likewise, the material conditional p → q counts as true whenever p is false, which does not always match how "if ... then" is used in everyday speech.
Truth tables, like the ones above, can be used systematically to consider all possible combinations of truth values. They can still be used when evaluating the truth value of a proposition that consists of several individual propositions; however, the number of rows required grows exponentially as the number of individual propositions increases, so rules of inference come in handy. Here is a list of rules of inference and mutual inference that can be used. The symbol "::" implies mutual inferability or logical equivalence; for example, given "p :: p∧p", we can conclude p∧p given p, or p given p∧p. The symbol ∴ means "therefore". Wherever it is used, it means that, given whatever is on the left-hand side, we can conclude whatever is on the right-hand side.
|p→q, p ∴ q||Modus Ponens|
|p→q, ¬q ∴ ¬p||Modus Tollens|
|p∨q, ¬p ∴ q||Disjunctive Syllogism|
|p→q, q→r ∴ p→r||Hypothetical Syllogism|
|p→q ∴ (q→r)→(p→r)||Hypothetical Syllogism|
|p→(q→r) ∴ (p∧q)→r||Importation|
|p→q ∴ (p∧r)→(q∧r)||Addition of a Factor|
|p, q ∴ p∧q||Conjunction|
|p∧q ∴ p||Simplification|
|¬p ∴ ¬(p∧q)||Negative Conjunction|
|¬p∨¬q ∴ ¬(p∧q)||De Morgan's Theorem|
|¬p∧¬q ∴ ¬(p∨q)||De Morgan's Theorem|
|p :: p∧p||Tautology|
|p→q :: ¬p∨q||Conditional Disjunction|
|p→q :: ¬q→¬p||Transposition|
|p :: ¬¬p||Double Negation|
|p∧q :: q∧p||Commutation|
|p∨q :: q∨p||Commutation|
|p∧(q∧r) :: (p∧q)∧r||Association|
|p∨(q∨r) :: (p∨q)∨r||Association|
|p∨(q∧r) :: (p∨q)∧(p∨r)||Distribution|
As an aside, note that the last five are similar in form to the laws of arithmetic. If you have any doubts about these rules, you may want to try to create the corresponding truth tables and verify that they are correct.
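If you would rather not write out the truth tables by hand, the equivalences above can be checked by brute force over all truth-value assignments. The following Python sketch verifies one of De Morgan's theorems as an example, and shows how an invalid "rule" fails the same test:

```python
from itertools import product

def equivalent(f, g, variables=2):
    """Return True if two propositional formulas agree on every truth assignment."""
    return all(f(*values) == g(*values)
               for values in product([True, False], repeat=variables))

# De Morgan's theorem: ¬(p∧q) is logically equivalent to ¬p∨¬q.
lhs = lambda p, q: not (p and q)
rhs = lambda p, q: (not p) or (not q)
print(equivalent(lhs, rhs))   # True

# The same check exposes an invalid "rule": p→q is NOT equivalent to q→p.
print(equivalent(lambda p, q: (not p) or q,
                 lambda p, q: (not q) or p))   # False
```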
Using all of these rules, we can apply them in proofs. Take the following example:
|Given||If you did not mow the lawn, you did not receive $10 from me.|
|If you did not receive $10 from me, you did not go to the movies.|
|You went to the movies.|
|Prove||You mowed the lawn.|
We could let p represent "You mowed the lawn," q represent "You received $10 from me," and r represent "You went to the movies." Symbolically, we would have:
|Given||~p → ~q|
|~q → ~r|
|r|
|Prove||p|
We could represent a proof as follows:
|1||~p → ~q||Premise|
|2||~q → ~r||Premise|
|3||r||Premise|
|4||~p → ~r||1, 2, Hypothetical Syllogism|
|5||p||3, 4, Modus Tollens|
See also Formal Fallacies and Converse, Inverse, and Contrapositive. |
When students have emotional, mental, learning and physical disabilities that require special attention, they may not be able to learn successfully in a traditional classroom setting. Special education teachers ensure that these students receive the education they require, but may also impart additional training, such as on independent living skills and communication.
Special education teachers begin by assessing their students’ knowledge as well as their strengths, challenges and needs. They adapt lessons on required courses, such as English, math, science and social studies, so that students can learn more effectively, and develop Individualized Education Programs that describe what accommodations and services the students receive. They meet with parents, teachers, counselors and other school staff to discuss student progress and make sure that schools comply with the Individuals with Disabilities Education Act. They may work exclusively in special education classes or work to integrate students into general education settings.
In public schools, special education teachers need a minimum of a bachelor’s degree in elementary education, in a content area such as math, or in special education. The program of study combines classroom learning with fieldwork, such as student teaching. Teachers also need a license, with requirements that vary by state. Typically, the qualifications include the degree, a teacher preparation program and supervised experience in teaching. Some states grant different licenses for specialties within special education. Annual professional development classes may be needed to maintain the credential. Private school teachers also require the degree but do not need the license.
Special education teachers in the preschool through elementary levels earned a mean $56,460 per year, with the lowest 10 percent earning under $35,170 annually, and the highest 10 percent receiving over $84,320 yearly. This was as of May 2011, according to the Bureau of Labor Statistics. Most worked in elementary and secondary schools to earn a mean yearly $56,660, with the highest pay in grant-making and giving services at a mean annual $64,670. Special education teachers in middle schools earned a mean $48,910 per year, with an annual range of below $30,230 and above $74,230. Their salaries in elementary and secondary schools averaged $49,130 per year; these schools were also their highest-paying employers. In high school, special education teachers averaged $56,420 per year, with a yearly range of below $35,840 to above $83,590. Wages in elementary and secondary schools averaged $56,560 per year, and their highest wages were in outpatient care centers at a mean annual $60,630.
The BLS predicts that jobs for special education teachers will increase by seven percent in secondary schools, 20 percent in middle schools and 21 percent in preschool through elementary from 2010 to 2020. Compare this to the 14 percent growth predicted for all jobs in all industries. Increasing enrollment due to a growing population will fuel demand. The areas with the best opportunities will be in the South and West because of population increases. Jobs in the Northeast are predicted to decline.
- U.S. Bureau of Labor Statistics: What Special Education Teachers Do
- U.S. Bureau of Labor Statistics: How to Become a Special Education Teacher
- U.S. Bureau of Labor Statistics: Occupational Employment and Wages, May 2011, Special Education Teachers, Preschool, Kindergarten, and Elementary School
- U.S. Bureau of Labor Statistics: Occupational Employment and Wages, May 2011, Special Education Teachers, Middle School
- U.S. Bureau of Labor Statistics: Occupational Employment and Wages, May 2011, Special Education Teachers, Secondary School
- U.S. Bureau of Labor Statistics: Job Outlook for Special Education Teachers
What is Glaucoma?
Glaucoma is the name for a group of eye conditions in which the optic nerve is damaged at the point where it leaves the eye. This nerve carries information from the light-sensitive layer, the retina, to the brain, where it is perceived as a picture.
In some people, the glaucoma damage is caused by raised eye pressure. Others may have an eye pressure within normal limits but damage occurs because there is weakness in the optic nerve.
Different types of Glaucoma
Open angle glaucomas (chronic glaucoma): This is the most common type. The eye is anatomically normal, but blockage or malfunction of the drainage channels slowly raises eye pressure over many years. There is no pain, but the field of vision gradually becomes impaired. We need to use a chemical cleaner (eye drops) to open the drain or turn down the faucet. If this is insufficient, we can snake the drain (laser trabeculoplasty), and if that doesn’t work, we need to put in new plumbing (surgery / implants).
Angle closure glaucoma (acute glaucoma): The trabecular meshwork is normal, but the iris is pushed against the meshwork, causing a sudden and more complete blockage to the flow of aqueous. In effect, the drainage channels are covered by a stopper, and we need to remove the stopper (laser iridotomy). This glaucoma can be quite painful and will cause permanent damage to sight if not treated promptly.
Secondary and developmental glaucoma: When a rise in eye pressure is caused by another eye condition, it is called secondary glaucoma. Glaucoma in childhood is called developmental or congenital glaucoma and is caused by a malformation in the eye.
Who is at Risk of Glaucoma?
> People over the age of 45.
> People with family history of glaucoma.
> People with myopia are more prone to develop open angle glaucoma & those with hyperopia are more prone to develop angle closure.
Warning Signs of Glaucoma
> Trouble adjusting to dark rooms
> Difficulty focusing on near or distant objects
> Squinting or blinking due to sensitivity to light or glare
> Recurrent pain in or around eyes
> Double vision
> Dark spot at the center of viewing
> Lines and edges appear distorted or wavy
> Excess “watery eyes”
> Dry eyes with itching or burning
Surgical facilities include
> Trabeculectomy with anti-fibrotic agents (MMC)
> Trabeculotomy for congenital glaucoma
> Glaucoma valve implant / glaucoma drainage device for complicated cases
Determining the main idea is a skill that some students acquire much more easily than others. It can also be a challenging skill to introduce and to help students successfully grasp.
I did a MAJOR revision of a paid-for resource and created a 50+ page free download (available at the end of this post) to support the development and practice of determining the main idea of a text.
Before I get to sharing all of the activities in the free download, I wanted to give you an overview of a lesson on main idea that I taught for an observation. This was not the first time my second graders were introduced to main idea, since most of them first encounter it in first grade, so the focus of my lesson was not only to strengthen their understanding of main idea but also to highlight the importance of strong supporting details and how to determine them.
I wanted to give my students a real-life example showing the relationship between the main idea and supporting details. We had recently read a Scholastic News issue on reindeer, which gave me the idea of using a young deer walking for the first time. I thought this would be a great motivator at the beginning of the lesson. I showed students a video from YouTube (just search baby deer first steps) and asked them to describe to me what they saw. Students shared that the baby deer’s legs weren’t strong enough, the baby deer didn’t understand which muscles to use to make them move, etc. This was exactly what I was looking for 🙂 I shared with students that just like the baby deer needed the strength to move his legs and lift his body, a main idea must have strong supporting details in order to stand as the main idea.
After we read the text, which I am unable to share on here because of copyright, we then were going to work to determine the main idea and supporting details. For this blog post, I am utilizing a text from my Winter Main Idea Practice Pack pictured below.
We read the text together and then I had the students buddy read. As they read I gave them supplies so that they could build the main idea and supporting details just like the deer example we discussed at the beginning. I gave each pair of students 4 Jenga blocks and strips of paper (tape was available).
After students worked with their buddy, we came together as a group to work on an anchor chart together to see how everyone did. I sketched the reindeer on big chart paper. But if you want to use this example and don’t want to draw the reindeer, I have included a chart you could use digitally or display using a projector and write on.
The Levels of Main Idea Thinking
James Baumann, David Pearson, and Dale Johnson wrote about the levels of main idea thinking and shared their own tweaks on the idea. I have created resources to support each level. According to Baumann, Pearson, and Johnson main idea can be taught using the following steps or levels.
With our youngest students, or those students that struggle with the concept of main idea, teachers can begin with the first level – realizing a thought that runs through sentences and links them together. Many of our learners are very visual, and/or hands-on, and benefit from concrete examples. I want to share with you something I used to do with my 2nd grade students.
Materials needed: string, clothespins, index cards (or cards from free download)
For this model, you can utilize some of the object cards provided in the free download at the bottom of this post, or you can jot down words on index cards. Pick one card to pin up to the string and ask students what else can be included on this string. Only items that share a common idea or theme may go on. If necessary, start with an example: hang up a few different animals. Share with students that these can all hang on the same string because they are animals. Let’s try another one. Ask a student to give you an object. Jot it down and hang it up. Have students brainstorm other things that could hang with that object.
To continue in the Level 1 phase, you can utilize a section of the 50+ page free download available at the bottom of this post.
Categorize it (Level 1)
Print out the 8 cards that are included and laminate them. You can use these in small groups, students can work on them independently at a center or even with a classmate.
Students are tasked with reading a list of items and deciding what they could categorize those items as. If the cards are laminated, they can simply use a dry erase marker.
Shrink the Sentence (Level 2)
Challenge your students with reducing a long sentence to its basic idea, or main idea. These sentence cards are included in the free download at the end of this post. Laminate the cards so that students can simply cross off the “extra words” with a dry erase marker. You could also have students just write the simplified sentence on dry erase boards, chalkboards, or in a notebook.
Stepping Stones to Level 3
Three activities are provided in the free download that I feel are stepping stones to level 3 of main idea thinking. The first is Guess the Main Idea.
7 sets of main idea cards are included and I have given each set a different border for easy organization and implementation. There are a few different ways that you could utilize these resources.
- Only allow the student or group of students working on a bag to pull one picture out at a time. Each time a picture is drawn have them guess the main idea and record. Students will see how their guess for the main idea will change and develop as they have more “supporting details.”
- Have the class look at the objects together and then break off in groups to guess the main idea based on the objects pictured in the bag. Have the students write the main idea their group develops and then defend it to the rest of the class. Have the class vote for the best main idea.
- Have a student or group of students work to determine the main idea and draw what that would look like.
Multiple recording sheet options are provided with this activity so that you can choose what will work best for your students or groups of students!
The next activity challenges students with determining the main idea and supporting details of familiar objects or people. 6 options are provided which include determining the main idea of a pet, friend, family member, principal, telephone, and an object of your choice!
The final activity that I feel is a stepping stone to level 3 is practicing determining the main idea from a character and scene card. In the free download, character and scene photo cards are provided. I recommend laminating these so you can continue to reuse them.
These cards could be utilized in small group. You could select a scene and character photo card and discuss what the main idea could be if these two photos were combined. Students can practice inferring what the main idea would be.
You could also display a set of cards for the entire class and have them work to develop what the main idea could be.
Lastly, you could provide these as a center or early finisher activity. Students could select their own cards and infer the main idea.
A story organizer and story activity sheet have been included in case you wanted students to develop a story based around the main idea they inferred based on the scene and character photo cards.
Level 3 and 4
Level 3 and 4 activities are included in my revised Winter Main Idea Practice Pack. This pack provides 10 passages, one fiction and one nonfiction for each topic. These passages provide practice for students with some providing a topic sentence and some not providing a topic sentence.
Each passage is provided in color and in black and white.
One of the new additions to this resource is stacked questions. These part A/part B questions are being used in many tests administered across grade levels. I refer to them as stacked questions as an easy way for students to understand their relationship. These questions are provided for each passage on a colored full-page version and also on a printable student version.
You will find that in this resource there are many options provided which will allow for easy differentiation for your students. I have provided colored main idea choice cards for each passage. These would be best utilized if working with a small group and discussing the options. Or you could use these if you set up this activity as a center.
Student printable pages are included. They all provide the passage but then you have a choice of activities. The first option has the passage and multiple choice main idea.
The next option has the passage, stacked questions, multiple choice main idea, and writing a new title with support.
The last option has the passage, stacked questions, multiple choice main idea, and supporting details.
In addition, three recording sheets are provided to give more options for your students.
See this resource in my TPT Shop by clicking below!
Interested in the 50+ page free download? |
What is a Fiber Optic Network?
A fiber optic network is a telecommunications infrastructure that utilizes fiber optic cables to transmit and distribute data, voice, or video signals. It consists of a network of interconnected fiber optic cables, switches, routers, and other equipment to enable the transmission of information.
Key Points About Fiber Optic Networks:
- Fiber Optic Cable Backbone: The backbone of a fiber optic network comprises high-capacity fiber optic cables that form the main transmission lines. These cables typically have a large number of individual fibers bundled together and are designed to handle high data traffic. The backbone cables connect various network nodes, such as data centers, central offices, or distribution points.
- High Bandwidth and Speed: Fiber optic networks offer high bandwidth capacity and data transmission speeds. Fiber optic cables can carry vast amounts of data over long distances with minimal signal loss or degradation. This enables the transmission of large files, high-definition video streams, and real-time communications with low latency.
- Data Transmission Technology: Fiber optic networks employ various data transmission technologies to maximize efficiency and performance. The most common technology used is called “optical networking,” which involves converting electrical signals into light pulses that travel through the fiber optic cables. This allows for fast and reliable transmission of data over long distances.
- Long-Distance Connectivity: Fiber optic networks are capable of long-distance connectivity without significant signal degradation. Unlike traditional copper-based networks, which suffer from signal attenuation and interference, fiber optic networks can transmit data over hundreds or even thousands of kilometers with minimal loss or distortion (see the illustrative link-budget sketch after this list).
- Scalability and Future-Proofing: Fiber optic networks offer scalability and future-proofing capabilities. With the ability to handle high data volumes, fiber optic networks can accommodate increasing bandwidth demands as technology advances. They provide a foundation for emerging technologies such as cloud computing, Internet of Things (IoT), and high-definition video streaming.
- Reliability and Signal Integrity: Fiber optic networks provide high reliability and signal integrity. Fiber optic cables are immune to electromagnetic interference, ensuring consistent and high-quality signal transmission. Additionally, fiber optic networks are less susceptible to physical damage caused by factors like weather, moisture, or electrical disturbances.
- Versatility of Applications: Fiber optic networks are used for a wide range of applications, including:
- Telecommunications: Fiber optic networks serve as the backbone of global telecommunications infrastructure, enabling high-speed internet connections, voice calls, and data transfer between different locations.
- Data Centers: Fiber optic networks connect data centers, facilitating the fast and secure transmission of data between servers, storage systems, and other network devices.
- Video Streaming: Fiber optic networks support the transmission of high-definition video streams, allowing for high-quality video conferencing, streaming services, and digital media distribution.
- Smart City Infrastructure: Fiber optic networks form the foundation for smart city initiatives, supporting applications such as traffic management, public safety systems, and smart grids.
- Industrial Applications: Fiber optic networks are used in industrial environments to enable real-time monitoring, control systems, and automation in sectors such as manufacturing, oil and gas, and utilities.
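To put the Long-Distance Connectivity point above into rough numbers, here is an illustrative link-budget sketch. The figures used (0.2 dB/km attenuation at 1550 nm, an 80 km span, 0 dBm launch power, 2 dB of connector and splice loss, and a -28 dBm receiver sensitivity) are common textbook assumptions, not values taken from this article:

```latex
\begin{aligned}
\text{Fiber loss} &= 0.2~\mathrm{dB/km} \times 80~\mathrm{km} = 16~\mathrm{dB} \\
\text{Received power} &= 0~\mathrm{dBm} - 16~\mathrm{dB} - 2~\mathrm{dB} = -18~\mathrm{dBm} \\
\text{Link margin} &= -18~\mathrm{dBm} - (-28~\mathrm{dBm}) = 10~\mathrm{dB}
\end{aligned}
```

Under these assumptions the span closes with a comfortable 10 dB margin; longer runs would need the optical amplifiers described later in this article.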
Fiber optic networks have revolutionized the way information is transmitted and distributed, providing high-speed, reliable, and secure connectivity for various industries and applications. Their superior bandwidth capacity, long-distance capabilities, and resistance to interference make them essential for underground utility infrastructure, telecommunications, data centers, and other critical network deployments.
Additional Points About Fiber Optic Networks:
- Network Topology: Fiber optic networks can be designed in various topologies to suit different needs:
- Point-to-Point: In a point-to-point topology, two endpoints are directly connected by a single fiber optic link. This type of configuration is common in long-distance connections, such as connecting two buildings or locations.
- Ring: A ring topology connects multiple network nodes in a closed loop, where each node is connected to its adjacent nodes. This configuration provides redundancy, ensuring that if one section of the ring is disrupted, the data can still flow in the opposite direction.
- Star: In a star topology, all network nodes are connected to a central hub or switch using individual fiber optic links. This configuration offers simplicity and ease of management, as each node can be easily added or removed without affecting the entire network.
- Mesh: A mesh topology involves multiple interconnected links between network nodes, providing multiple paths for data transmission. This configuration enhances network reliability and fault tolerance, as data can be rerouted through alternative paths if a link or node fails (see the rerouting sketch below).
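To illustrate the rerouting idea behind the ring and mesh topologies, here is a minimal sketch. It is not how production optical networks compute protection paths (real deployments rely on dedicated protection-switching protocols); the node names, links, and breadth-first re-route below are purely hypothetical:

```python
# Minimal sketch: traffic in a ring can still reach its destination after a link fails.
from collections import deque

def shortest_path(links, src, dst):
    """Breadth-first search over an undirected link list; returns a node path or None."""
    graph = {}
    for a, b in links:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Four hypothetical nodes connected in a ring: A-B-C-D-A
ring = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]
print(shortest_path(ring, "A", "C"))          # ['A', 'B', 'C']

# If the A-B link is cut, traffic is rerouted the other way around the ring.
broken_ring = [link for link in ring if link != ("A", "B")]
print(shortest_path(broken_ring, "A", "C"))   # ['A', 'D', 'C']
```

The same search over a mesh would simply have more alternative paths to choose from.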
- Fiber Optic Network Components: A fiber optic network comprises various components and equipment, including:
- Optical Transceivers: Optical transceivers are devices that convert electrical signals into optical signals for transmission over fiber optic cables. They are installed in network devices such as switches, routers, and media converters.
- Optical Switches: Optical switches allow for the routing of optical signals between different fiber optic cables or paths. They provide flexibility in network design and allow for efficient use of network resources.
- Multiplexers and Demultiplexers: These devices enable the transmission of multiple signals over a single fiber optic cable by combining or separating the signals at different wavelengths. They are used to increase the capacity and efficiency of fiber optic networks.
- Optical Amplifiers: Optical amplifiers are used to boost the strength of optical signals over long distances. They are particularly useful in long-haul or submarine fiber optic cable installations, where signal loss can occur due to transmission distance.
- Optical Splitters: Optical splitters divide an incoming optical signal into multiple output signals, allowing for the distribution of data to multiple network nodes. They are commonly used in passive optical networks (PONs) for fiber-to-the-home (FTTH) deployments.
- Network Management and Monitoring: Fiber optic networks require effective management and monitoring to ensure optimal performance and reliability. This includes:
- Network Management Systems (NMS): NMS software allows network administrators to monitor and manage the fiber optic network infrastructure, including configuration, performance monitoring, and troubleshooting.
- Fiber Optic Testing and Measurement: Various testing and measurement tools, such as optical time-domain reflectometers (OTDR), optical power meters, and optical spectrum analyzers, are used to verify signal quality, detect faults, and locate issues within the fiber optic network.
- Fault Localization: When a network issue occurs, fault localization techniques help identify the exact location of the problem, whether it’s a fiber break, connector issue, or other network component failure. This facilitates faster troubleshooting and resolution.
- Network Security: Fiber optic networks offer inherent security advantages due to their physical characteristics:
- Signal Interception: Fiber optic cables do not radiate electromagnetic signals, making it difficult for unauthorized parties to intercept or tap into the transmitted data.
- Fiber Tapping Detection: Specialized techniques and equipment can be used to detect any attempts to physically tap into a fiber optic cable, ensuring the integrity and confidentiality of the transmitted data.
- Data Encryption: In addition to the physical security measures, data encryption techniques can be employed to secure the information transmitted over the fiber optic network, further protecting it from unauthorized access.
Fiber optic networks have revolutionized communication and data transmission, providing high-speed and reliable connectivity for various industries and applications. As technology continues to advance, fiber optic networks are expected to evolve, offering even higher data transfer rates, increased capacity, and enhanced network management capabilities. |
Resistance: Ross's Legacy
Related Images: See the photographs related to this lesson.
Begin the class by asking students, individually or in groups, to create a working definition for the word "resistance." Working definitions are not found in the dictionary; rather, students create them based on what they know. To help the class reach these definitions, consider the following question: In what ways do people resist?
Have the students participate in a Gallery Walk of the photographs. To frame this gallery walk, ask the students to think about how the photographs might change their definitions of "resistance." When the students have completed their gallery walk, ask them to revise their working definition to include any new ideas they have gathered from the photographs. Have each group share their original working definitions and their new definitions. Ask them to explicitly identify the ways the photographs affected their definitions of "resistance."
Transition to a whole group debrief of the exercise using these questions: It was very risky for Henryk Ross to take and hide these photographs. Why do you think he chose to take this risk? How is photography, particularly Henryk Ross's, an act of resistance? What other examples of resistance occurred during the Holocaust? How are these acts of resistance similar to or different from Ross's actions?
Photograph of Henryk Ross and his photographs at the Eichmann Trial in Jerusalem.
The Ringelblum Archive was clandestinely compiled between 1940 and 1943 under the leadership of historian Emanuel Ringelblum in the Warsaw Ghetto. |
It’s easy to think that vocabulary relates just to English Lessons, but this is not the case, as language is at the heart of education and every subject taught in schools. Using and understanding words does not only help pupils to achieve academically, it is also fundamentally important in helping them to develop into well-rounded individuals.
A leading academic, Professor Maggie Snowling CBE, President of St John’s College, Oxford, stated: “Language is the foundation of education and is vital for social and emotional development. Children with poor oral language are at high risk of poor literacy and hence, educational failure. They can also experience difficulty in communicating to make friends, to join in activities and to express their feelings.”
Vocabulary is one of the threads that runs through every curriculum area. In order to explain a science investigation or describe what they see, pupils need to have a bank of scientific words. Oxford University Press conducted research into why closing the word gap matters. During their research, they questioned over 1,000 teachers, and 69% of Primary school teachers felt the word gap is increasing.
Jane Harley, Strategy Director, UK Education, Oxford University Press, said: “Over half the teachers surveyed reported that at least 40% of their pupils lacked the vocabulary to access their learning.”
There is evidence to suggest that pupils with poor vocabulary at the age of thirteen are less likely to achieve during their GCSEs. It has been evident for years that pupils are coming into Primary school with limited vocabulary and poor communication skills.
As Andrea Quincey (Head of English, Primary, at Oxford University Press) states: “Talk to anyone involved in primary education and most will tell you limited vocabulary and poor communication is the ‘number one issue’. The reasons for this are many and complex but one thing is clear: this word gap affects EVERYTHING.”
What can we, as Primary school teachers, do to close the word gap?
We need to address vocabulary in every subject area taught. Science is a great place to start. All Science Coordinators will be ensuring there is progression across year groups, which should include scientific vocabulary. Teachers should have words displayed in their classroom or role play corners with word mats available for pupils to use when they are predicting, experimenting, investigating, discussing and evaluating. We have started to develop posters and knowledge organisers for each area of the science curriculum. Here is a free example to download…
They are all year group-specific and there are definitions on the reverse of the word mat, alongside key information the children need to know.
As stated previously, it is important that language is embedded throughout the subjects, and our teaching resources will provide the perfect links with reading. In our reading spreads, we ensure that we use the correct scientific vocabulary, explaining how to pronounce a tricky word by placing the phonetic spelling next to it. For example, nephrons (say neff-rons). Linking science and reading is a great way to deepen children’s science vocabulary knowledge.
“Research has shown that children are more likely to read texts that are meaningful and enjoyable. Schools, therefore, can play a major role in children’s lives by developing a love of reading and making available a wide range of interesting and accessible texts,” stated Dr Ian Thompson, Associate Professor of English Education and Nicole Dingwall, a Curriculum Tutor on the PGCE English course at the University of Oxford.
Without doubt, research shows us that vocabulary is key to academic success and personal wellbeing. As Ofsted’s new framework focuses on the whole curriculum, it is important to demonstrate a clear progression of vocabulary throughout each of the different areas and not just English.
To find out more about our teaching resources, click here.
Read the OUP’s full report here: |
Wondering where that brightly colored songbird that visited your yard during the summer disappeared to when the temperature dropped? Many songbirds and other migratory birds spend the cooler months in Latin America’s tropical rainforests, so preserving their winter habitat is essential to their survival. That’s one reason why NRDC partnered with the group Osa Conservation to help Revive a Rainforest on Costa Rica’s Osa Peninsula, one of the most biodiverse places on Earth. With the support of our members we’ve been helping to restore 50 acres of degraded tropical rainforest by planting carefully selected native tree species.
Six hundred and fifty species of birds make North America their home and breeding ground. While some of these birds are permanent residents, many are migratory, with migration paths ranging from short to medium to long. Approximately 350 species breed in the US and Canada and then winter as far away as Latin America and the Caribbean, where they need to find sufficient food and safe nesting locations. The Yellow Warbler, Tennessee Warbler, and the Canada Warbler are just three of the many species that journey long distances during their seasonal migrations to Costa Rica’s Osa Peninsula.
The Yellow Warbler, known for its brilliant egg-yolk yellow color and prominent black eyes, was one of the earliest winter migrants to arrive in the Osa Peninsula this year. Some of the Yellow Warblers can even make this long journey without stopping! Back in its northern breeding grounds, this bird’s nests are often taken over by the Brown-headed Cowbird, but the warbler comes back and re-builds its nest right on top of the old one, resulting in nests that are sometimes six tiers high.
The Tennessee Warbler migrates from the Canadian Boreal Forest to South and Central America, including Costa Rica. Despite its name, this bird only passes through Tennessee during its migration; the closest its breeding range actually gets to Tennessee is northern Michigan, and it winters in southern tropical forests. This deceptively named warbler is known as a nectar thief that gathers nectar from the base of the flower tube, consuming the nectar without actually helping in pollination.
The Canada Warbler also breeds in the Canadian Boreal Forest, but spends much of its time in its wintering grounds further south. Male-female pairs of this colorful songbird have been observed together as far away as Central America, suggesting these active little warblers remain together year round.
Unfortunately, these beautiful backyard songbirds and their habitat are threatened by expanding unsustainable development. In the Canadian Boreal Forest, the nesting ground for more than half of all North American migratory birds, tar sands oil development threatens critical migratory bird breeding grounds. Meanwhile, in Costa Rica’s Osa Peninsula, rapid tourism growth and unregulated development put pressure on their winter forest homes.
Thanks to our members’ support, we’re helping our partner Osa Conservation restore and protect 50 acres of degraded land in the Osa that was deforested decades ago for use as a low-grade cattle pasture and later as a cultivation area for exotic species (check out our new video). Using up to fifty different native tree and plant species, Osa Conservation is reviving a critical green corridor to help strengthen the Osa’s network of protected areas. Osa Conservation’s experts utilize innovative reforestation techniques to bring back the biodiversity of this important forest habitat. They’ve created tree nurseries with over 100 native species, some of which are quite rare or even endemic. To accomplish this, they carefully collect seeds by hand from native trees in nearby healthy forest lands and plant species they know will help attract the region’s most spectacular wildlife. Osa Conservation’s experts also employ the help of local forest dwellers by building nesting boxes for fruit-eating bats and birds – expert re-foresters in their own right that contribute to the natural seeding of the forest.
Doing this work helps protect the winter homes of birds like small songbirds by ensuring they have places to nest and food supplies to sustain them until they make the long journey back north in the spring. So while you’re waiting for your favorite backyard birds to return, help us Revive a Rainforest to keep the Osa Peninsula wild!
This blog post was co-written with Denée Reaves
Photo credit for all images: Nick Saunders |
Generally speaking, patients infected with the hepatitis C virus are unaware that the infection even exists in their body. They often only find out when a physical examination detects abnormal liver function and blood tests reveal the infection. When the hepatitis C virus enters the body, there is an incubation period of 6–8 weeks during which the virus penetrates liver cells and begins to reproduce.
Once a person is first infected with hepatitis C, they will develop the initial stage of the disease, which is known as acute hepatitis. Symptoms during this stage are not severe and generally tend to come and go. The patient may experience symptoms that are shared with many other common illnesses, such as being easily tired and feelings of weakness, exhaustion or confusion. Because of this, those infected with hepatitis C rarely know they have it, and it is often ignored until it has developed into chronic hepatitis.
From there, the disease continues to develop until it reaches the cirrhosis stage, which may take 10–30 years. For some patients, by the time they seek the help of a doctor, they are already in the last stages of cirrhosis. Worse yet, in some cases cirrhosis of the liver can result in liver cancer.
Similar to the hepatitis B virus, hepatitis C can be transmitted via blood or through sexual contact. It does not, however, spread via coughing, sneezing, eating or drinking together, or by using the same dishes.
The virus often results in chronic hepatitis and, unlike the hepatitis B virus, there is still no vaccine available to help prevent it. Because patients with acute hepatitis C rarely experience symptoms, they often do not receive the treatment they need and thus the disease may develop into chronic hepatitis.
Because the hepatitis C virus is transmitted via blood, those within the at-risk group can become infected in a variety of different ways. These include patients with a history of illness requiring blood or platelet transfusions, including heart surgery or extreme blood loss, as well as patients with chronic kidney disease or those who have injected illicit drugs, etc. Additionally, those who have received tattoos, ear or body piercings in an unclean environment using unsterile equipment, as well as those who change sexual partners frequently, are all at risk of becoming easily infected.
Because there is currently no vaccine available for hepatitis C, the key to preventing it is avoiding known risk factors for infection. For example, refrain from sharing any sharp objects, needles or syringes with other people. Don’t share razors or toothbrushes with others, avoid blood transfusions unless absolutely necessary, etc. The hepatitis C virus, however, does not spread by eating together with others or by sharing plates, bowls or cutlery. Breastfeeding, hugging, kissing, touching as well as sneezing or coughing don’t pose any risk.
Initially, the doctor will perform a blood test to determine whether or not there is any infection present. If an infection is detected, the doctor will then order an ultrasound test to see whether there are any signs of cirrhosis or liver cancer. In cases where ultrasound results are unclear, the doctor may order additional tests, such as a CT scan or MRI. For some patients, doctors may recommend a liver biopsy which involves inserting a thin needle through the abdominal wall to remove a small sample of liver tissue for laboratory testing. This is another method used for accurate diagnosis before treatment.
Liver stiffness can also be assessed using a FibroScan machine, which measures how quickly vibration waves pass through the liver. Results of the FibroScan are then translated and explained to the patient. This test is used to assess the severity of liver cirrhosis, liver fibrosis or fatty liver disease, and allows doctors to determine the severity of the disease without having to do a liver biopsy to examine liver tissue. A FibroScan test takes only 5-10 minutes before results are available. It is safe and painless and can be carried out without the patient needing to refrain from eating or drinking. FibroScan is also used as a substitute for biopsies in patients with contraindications or who refuse biopsy.
The physician will consider the best treatment plan according to the stage and condition of the disease, and take any other diseases or illnesses the patient may have into account. The disease can be cured with oral medications. These are designed to get rid of the infection permanently. Response to the medications can be assessed by testing for the virus levels in the blood after treatment is completed. This type of treatment helps not only to improve the patient’s condition but also to clear the hepatitis C virus from the body and thus prevent cirrhosis and liver cancer. |
New “Inverse Vaccine” Developed to Combat Autoimmune Diseases, Promising Fewer Side Effects
Researchers at the University of Chicago’s Pritzker School of Molecular Engineering have made a significant breakthrough in the field of autoimmune diseases. They have developed an “inverse vaccine” that has the potential to reverse conditions such as multiple sclerosis, type 1 diabetes, and Crohn’s disease. This groundbreaking discovery could revolutionize the way these diseases are treated.
Unlike traditional vaccines that train the immune system to attack viruses or bacteria, the inverse vaccine works differently. It targets the immune system’s memory of a specific molecule that triggers autoimmune reactions. By removing this memory, the vaccine prevents the immune system from attacking healthy cells and tissues.
To do this, the vaccine takes advantage of the body’s natural process of marking molecules from broken-down cells with “do not attack” flags. This mechanism essentially tells the immune system to leave these molecules alone, preventing autoimmune reactions. The vaccine has already shown promising results in animal models, particularly in stopping the immune system from attacking myelin, which is the protective coating around nerves.
One of the key advantages of this inverse vaccine is its specificity and targeted approach. Currently, autoimmune diseases are primarily treated with drugs that broadly suppress the immune system. While this approach may alleviate symptoms, it often leads to unwanted side effects. The inverse vaccine could offer a more precise solution, significantly reducing these side effects.
The research team has conducted Phase I safety trials in humans with celiac disease, and these trials have shown promising results. Currently, Phase I trials are underway in multiple sclerosis patients as well. Although further research is necessary to determine the vaccine’s effectiveness in humans, the researchers and their collaborators are optimistic about its potential.
Autoimmune diseases affect millions of people worldwide, causing chronic pain and debilitating symptoms. The development of the inverse vaccine brings hope to those suffering from these conditions. With its potential to reverse autoimmune responses and reduce side effects, it has the potential to significantly improve patients’ quality of life.
The research team is now focused on advancing the technology and conducting further research. If successful, this breakthrough could have a profound impact on the treatment of autoimmune diseases, offering new possibilities for those affected by these conditions. As scientists continue to explore the potential of the inverse vaccine, the future looks brighter for individuals battling autoimmune diseases. |
Artificial ground freezing is a construction technique used in the construction of shafts, mines and tunnels to provide temporary earth support and groundwater control when other conventional methods such as dewatering, shoring, grouting or soil mixing are not feasible. Ground freezing is also used to provide regional groundwater barriers around mining operations for gold and other minerals, oil sands or oil shales. It is often referred to as ground freezing, soil freezing, or a freeze wall. The ground freezing process involves drilling and installing a series of relatively closely spaced pipes and circulating a coolant through these pipes. The refrigerated coolant extracts heat from the ground, converting the soil pore water to ice and resulting in an extremely strong, impermeable material. It is the most positive method of ground improvement used in the underground construction and mining industries.
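For a rough sense of the heat that must be extracted, consider an illustrative estimate (the porosity, temperatures and property values below are assumptions for this sketch, not figures from this article). Freezing the pore water in one cubic metre of saturated soil with about 30% porosity means freezing roughly 300 kg of water:

```latex
\begin{aligned}
Q_{\text{sensible}} &\approx 300~\mathrm{kg} \times 4.2~\mathrm{kJ/(kg\,K)} \times 10~\mathrm{K} \approx 12.6~\mathrm{MJ} \quad (\text{cooling from 10 °C to 0 °C}) \\
Q_{\text{latent}} &\approx 300~\mathrm{kg} \times 334~\mathrm{kJ/kg} \approx 100~\mathrm{MJ} \\
Q_{\text{total}} &\gtrsim 113~\mathrm{MJ\ per\ cubic\ metre}
\end{aligned}
```

That figure excludes cooling the soil grains themselves and chilling the frozen zone further below 0 °C, which is why sizeable refrigeration plants, or large volumes of liquid nitrogen, are required.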
The freezing process uses an entirely closed system. There are no chemicals injected into the ground. The coolants used can be environmentally friendly glycols, calcium chloride brine or liquid nitrogen. These coolants are chilled with several different types of either above or below ground electrically powered refrigeration plants. In the case of liquid nitrogen, the liquid is delivered to the project site in tankers and vented to the atmosphere immediately after circulating through the pipes.
Ground freezing for deep shaft excavation
Deep shafts are the most common application of ground freezing. The freeze pipes are drilled and installed around the perimeter of the proposed shaft to the required depth. Circulation of the coolant is initiated and continued until a frozen zone ranging from one to ten meters thick is formed. The inside of the shaft is then excavated and lined, and the freezing system is turned off.
Ground freezing in the tunneling industry
Ground freezing is used extensively in the tunneling industry. Tunnel applications use several different approaches. The most common involves horizontally drilling the freeze pipes around the tunnel perimeter, very similar to the frozen shaft approach. This horizontal configuration is used to tunnel beneath roads or railways or to construct safety cross passages between two existing tunnels. Another method of constructing tunnels is to freeze the entire alignment solid and mine through a frozen mass of soil. This approach is often coupled with the Sequential Excavation Method (SEM) and used for small diameter tunnel adits.
Ground freezing and Tunnel Boring Machines
Ground freezing is also used in conjunction with Tunnel Boring Machines (TBMs). The ground in front of or around the TBM can be frozen in advance to create a pre-planned safe haven for tunneling interventions or used in emergencies for TBM repair.
Ground freezing to isolate groundwater from mining operations
Ground freezing has been proposed for regional groundwater barriers up to 10 km long to isolate groundwater from mining operations, instead of large-scale dewatering operations that have environmental consequences or require complex and expensive treatment operations.
Ground freezing success
Ground freezing is successful when completed by experienced contractors that have the required specialized equipment. It is an interactive process requiring advanced engineering, accurate drilling as well as custom made refrigeration and instrumentation equipment. |
Although the water molecule as a whole is neutral, it has an uneven charge distribution. The oxygen atom pulls the shared electrons toward itself, giving it a partial negative charge, while the hydrogen atoms carry partial positive charges. A hydrogen atom is attracted to the oxygen atom of another water molecule, forming a hydrogen bond. This is not an especially weak force; its strength lies between that of Van der Waals forces and covalent bonds. Hydrogen bonding gives water properties that other solvents do not have.
Strong surface tension
Due to hydrogen bonding, water molecules at the interface between liquid and gas are subjected to stronger inward forces, which gives water a strong surface tension. Some small insects can walk on the surface of water.
Water has a high specific heat
The irregular thermal motion of particles is accelerated by the absorption of heat. Water molecules, however, have to break hydrogen bonds before they can speed up their motion. This consumes some extra energy, so the kinetic energy of water molecules does not increase as much as expected. Compared to other liquids, water absorbs more heat for the same rise in temperature and releases more heat for the same drop in temperature. This is essential for maintaining a moderate, stable temperature suitable for life. For example, the ocean absorbs heat from the sun while its temperature increases by only a few degrees during the day; at night, the ocean releases the heat it absorbed during the day, preventing the temperature from dropping quickly. This keeps the Earth's temperature from changing drastically (the greenhouse effect is another reason).
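As a simple worked comparison (using approximate handbook values rather than figures from this article), raising 1 kg of water by 10 °C takes roughly 70 percent more energy than doing the same to 1 kg of ethanol:

```latex
\begin{aligned}
Q &= m\,c\,\Delta T \\
Q_{\text{water}} &= 1~\mathrm{kg} \times 4.18~\mathrm{kJ/(kg\,K)} \times 10~\mathrm{K} \approx 41.8~\mathrm{kJ} \\
Q_{\text{ethanol}} &= 1~\mathrm{kg} \times 2.44~\mathrm{kJ/(kg\,K)} \times 10~\mathrm{K} \approx 24.4~\mathrm{kJ}
\end{aligned}
```

The difference comes from the extra energy spent loosening hydrogen bonds rather than speeding the molecules up.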
Since hydrogen bonding strengthens the attraction between water molecules, extra energy is consumed when water molecules escape into the air. The evaporation of water therefore carries away a large amount of heat. In summer or during intense exercise, people lose a lot of heat through the evaporation of sweat, so that their body temperature quickly returns to normal levels.
Ice floats on the water surface.
For most substances, the solid is denser than the liquid. Liquid water is the denser phase, however, because its molecular thermal motion is more intense and water molecules can overcome hydrogen bonds to pack closer together. When the temperature drops below the freezing point, reduced thermal motion lets the hydrogen bonds push adjacent water molecules further apart, arranging them into a regular hexagonal lattice. This causes ice to be less dense than water, so it floats on the surface. This property is essential for life. If ice sank to the bottom, it would freeze the benthos and destroy the ecosystem. Heat from the sun would be blocked by the water above, and the ice at the bottom might not fully melt during the summer. In the long run, there would be more and more ice in rivers, lakes and oceans, and eventually life would not survive.
Water is a good solvent
The negatively charged oxygen atoms of water attract cations, and the positively charged hydrogen atoms of water attract anions. Ions are easily stripped off by this attraction and dissolved in water, so ionic compounds such as NaCl and MgCl₂ dissolve easily. Although sugar is not composed of ions, it has hydrophilic groups such as -OH or -CHO. The unevenly distributed charge of these groups attracts water molecules, making sugars easily soluble in water. Even some large molecules like proteins can dissolve in water if they have hydrophilic or ionic regions.
Because it dissolves polar molecules and ions easily, water is an ideal medium for the various chemical reactions of life, and nutrients and wastes in cells are transported through water.
In 1998, University of Oregon researcher Avinash Singh Bala was working with barn owls in an Institute of Neuroscience lab when the birds’ eyes caught his attention.
The usual research done in the lab, led by Terry Takahashi, explores, at a fundamental level, how barn owls process sounds, with the idea that such knowledge could lead to improved hearing devices for people.
But those eyes. Every time the owls heard an unexpected sound, their eyes dilated.
“So, we asked, might this work in humans?” Bala said. “We thought, if so, it would be a great way to assess hearing in people who cannot respond by pushing a button, raising a hand or talking, such as babies, older children with developmental deficits and adults who are suffering from a debilitating disorder or are too sick to respond.”
Over the next decade, Bala and Takahashi, as free time outside their primary research allowed, pursued ideas on how to use the eyes as a window to hearing. They experimented, finding similar involuntary dilation in humans. They tweaked a possible approach, aiming for sensitivity that might equal that achieved with traditional tone-and-response testing.
“We presented early data analyses at conferences, and there was a lot of resistance to the idea that by looking at an involuntary response we could get results as good as button-press data.”
Last month, the two UO neuroscientists published a freely accessible paper in the Journal of the Association for Research in Otolaryngology that solidifies their case. They used eye-tracking technology while simultaneously conducting traditional hearing exams with 31 adults in a quiet room.
Dilation was monitored for about three seconds as participants stared at a dot on a monitor while a tone was played. To avoid being fooled by pupil reactions generated by pushing a response button, subjects’ responses were delayed until the dot was replaced by a question mark, when eye-tracking stopped.
Levels of dilation seen throughout the testing directly reflected the participants’ subsequent push-button responses on whether or not a tone was heard. That, Bala said, allowed his team, which also included former doctoral student and co-author Elizabeth Whitchurch, “to see and establish causality.”
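As a toy illustration of the kind of trial-by-trial comparison described above (this is not the study's actual analysis, and every number below is fabricated for demonstration), one could score how often a simple pupil-dilation threshold agrees with the button-press report:

```python
# Toy comparison of pupil-based detection vs. button-press reports (fabricated data).
trials = [
    # (peak pupil dilation in arbitrary units, did the participant report hearing the tone?)
    (0.9, True), (0.1, False), (0.7, True), (0.2, False),
    (0.8, True), (0.3, False), (0.6, True), (0.15, False),
]

THRESHOLD = 0.5  # hypothetical cutoff separating "dilated" from "baseline"

matches = sum((dilation > THRESHOLD) == pressed for dilation, pressed in trials)
print(f"Pupil-based call matched the button press on {matches}/{len(trials)} trials")
```

In the real study the relationship was of course established with many more trials and proper statistics; the sketch only shows the shape of the comparison.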
“This study is a proof of concept that this is possible,” Bala said. “The first time we tested a human subject’s pupil response was in 1999. We knew it could work, but we had to optimize the approach for capturing the detection of the quietest sounds.”
Takahashi said the initial discovery was completely accidental.
“If we hadn’t been working with owls, we wouldn’t have known about this possible human diagnostic technique,” he said. “This is a really good example of how animal-based research can benefit advances in human diagnostics.”
The testing in the newly published research, funded initially by internal grants, was done using conventional, commercially available hearing and eye-tracking technologies.
Bala and Takahashi are now collaborating with Dare Baldwin, a professor of psychology, on developing their own technology for testing with babies. The effort is being supported by a 2015 Incubating Interdisciplinary Initiatives award from the Office of the Vice President for Research and Innovation and a recent grant from the University Venture Development Fund.
Advanced Placement Economics: Chapters 12-16 Test Review
- What is the difference between fixed and variable costs? What is their relationship to short-run and long-run operations? (12.1)
- On a graph, be able to identify the point of diminishing marginal returns. (12.2)
- Know the relationships between marginal, total, and average product. (12.2)
- On a graph, be able to identify economies of scale, diseconomies of scale, and constant returns. (12.3)
- Know pricing in perfect competition. (13.1)
- Know the function of short-run marginal cost curves. (12.2)
- Be able to interpret and analyze profit-maximizing in perfect competition. (13.1)
- Explain and determine marginal cost. (12.3)
- Be able to identify profit, loss, and break-even points graphically in perfect competition. (13.2)
- Know short- and long-run equilibrium in perfect competition. (13)
- Know the elasticity of demand in pure monopoly. (14.2)
- Be able to interpret profit-maximization in a monopoly on a graph. (14.1)
- Be able to identify consumer surplus, quantity produced, price setting, etc. on a graph for an unregulated pure monopoly. (14.1, 14.2, 14.3)
- What is allocative and productive efficiency? Think of these in regards to the 4 market types.
- What are the general characteristics of each of the 4 market types?
- What is a “cartel” and how do firms in a cartel operate? What are their goals? What would this look like on a graph?
- Be able to analyze a payoff matrix and identify Nash equilibrium (see the sketch after this list).
- Why do firms in perfect competition shut down?
- What is price discrimination?
- What is product differentiation?
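For the payoff-matrix item above, here is a minimal sketch of checking every cell of a 2x2 game for a pure-strategy Nash equilibrium. The game and its payoffs are invented for illustration (a prisoner's-dilemma-style matrix), not taken from the review itself:

```python
# Find pure-strategy Nash equilibria in a 2x2 game (illustrative payoffs only).
# Payoffs are (row player, column player).
payoffs = {
    ("Cooperate", "Cooperate"): (3, 3),
    ("Cooperate", "Defect"):    (0, 5),
    ("Defect",    "Cooperate"): (5, 0),
    ("Defect",    "Defect"):    (1, 1),
}
strategies = ["Cooperate", "Defect"]

def is_nash(row, col):
    """Neither player can gain by unilaterally switching strategies."""
    row_best = all(payoffs[(row, col)][0] >= payoffs[(alt, col)][0] for alt in strategies)
    col_best = all(payoffs[(row, col)][1] >= payoffs[(row, alt)][1] for alt in strategies)
    return row_best and col_best

equilibria = [(r, c) for r in strategies for c in strategies if is_nash(r, c)]
print(equilibria)  # [('Defect', 'Defect')] for this particular payoff matrix
```

Cells where each player's payoff is a best response to the other's choice are the equilibria; for these payoffs, both players defecting is the only one.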
Thanksgiving Day is celebrated on the fourth Thursday of November in the United States and the second Monday of October in Canada. It is celebrated as a day of giving thanks for the blessing of the harvest and of the preceding year.
See the fact file below for more information on the Thanksgiving Day or alternatively, you can download our 22-page Thanksgiving Day worksheet pack to utilise within the classroom or home environment.
Key Facts & Information
ORIGIN OF THANKSGIVING IN THE US
- In 1578, explorer Martin Frobisher was believed to be the first to celebrate Thanksgiving in Canada, giving thanks for surviving his journey from England. Some believe that explorer Samuel Champlain held Thanksgiving celebrations with the Native Americans in New France during the 1600s.
- By the fall of 1621, only half of the Pilgrims, who had sailed on the Mayflower, survived. The survivors, thankful to be alive, decided to have a Thanksgiving feast. The Plymouth Pilgrims were the first to celebrate Thanksgiving.
- The Pilgrim leader, Governor William Bradford, organized the first Thanksgiving feast in the year 1621 and invited the neighboring Wampanoag Indians to the feast.
- The first Thanksgiving feast was held in the presence of around ninety Wampanoag Indians and it lasted three days.
- President George Washington issued the first national Thanksgiving Day Proclamation in the year 1789 and again in 1795.
- Sarah Josepha Hale, an editor with Ladies’ Magazine, started a Thanksgiving campaign in 1827. Due to her efforts, in 1863 Thanksgiving was observed as a day for national thanksgiving and prayer.
- Abraham Lincoln issued a ‘Thanksgiving Proclamation’ on October 3, 1863, and officially set aside the last Thursday of November as the national day for Thanksgiving. Before President Lincoln made this happen, each president used to make an annual proclamation to specify the day when Thanksgiving would be held.
- In 1939, President Franklin D. Roosevelt moved Thanksgiving up a week, to the next-to-last Thursday of November. He did so to make the Christmas shopping season longer, which would stimulate the economy.
- Congress passed an official proclamation in 1941 and declared that Thanksgiving would be observed as a legal holiday on the fourth Thursday of November every year.
- Traditional foods like turkey, stuffing, and pumpkin pie are served by many on Thanksgiving Day.
- Despite the simplicity of the first Thanksgiving feast, the food on Americans’ tables today includes the traditional roasted turkey with stuffing, gravy, sweet potatoes, mashed potatoes, cornbread, and cranberry sauce.
- Others include apple cider, custard, hot chocolate, buttered rum, ham, fruit cake, candy canes, plum pudding, mixed nuts, fudge, pies, and eggnog.
- During this day, families gather to have a Thanksgiving meal together. Most do the breaking of the turkey wishbone during the meal wherein the one who gets the larger piece is granted a wish.
- There are many churches in the United States that hold a special Thanksgiving service to give thanks for the blessings they have received.
- In 1963, US President John F. Kennedy officially pardoned a turkey before the Thanksgiving dinner at the White House. Since then, succeeding presidents continue the tradition of saving a turkey’s life.
- The president of the United States does the annual pardoning of a turkey that will not end up on the platter.
- The first Thanksgiving feast was celebrated with lobster, chestnuts, dried fruit, onions, leeks, cabbage, chicken, carrot, rabbit, honey, and maple syrup. Corn on the cob, turkey, mashed potatoes, and pumpkin pie eventually became part of the fare.
- Since 1924, Macy’s, a department store chain in the US, holds its annual Thanksgiving Parade which includes marching bands, gigantic floats, balloons, and broadway musicals.
- Macy’s tradition started as a Christmas parade in celebration of the expansion of its flagship store in Manhattan. They introduced floats highlighting Little Miss Muffet, Little Red Riding Hood, and an animal parade.
- Felix the Cat was the first character balloon flown during the Macy’s parade in 1927, while Mickey Mouse made his debut in 1934. At the height of WWII in 1942 until 1944, the parade was halted.
- The day after Thanksgiving is called Black Friday and is considered the largest and busiest shopping day of the year. This day marks the beginning of the many holiday sales before Christmas.
- Thanksgiving is also a day to watch football live or on the television.
- Also during Thanksgiving, many Americans give back and share their blessings with the less fortunate by creating food drives as a form of charity.
- As part of the turkey tradition, there are approximately 46 million turkeys eaten every year during Thanksgiving. Some even call the celebration Turkey Day.
- For some, the Wednesday before Thanksgiving is the busiest day for travelers as many Americans use this holiday to enjoy family trips.
THANKSGIVING AROUND THE WORLD
- In Germany, Thanksgiving is called “Erntedankfest,” and is celebrated in early October.
- It is similar to American Thanksgiving, and it also includes large dinners made with harvest vegetables, as well as a parade (similar to the Macy’s parade in the US).
- In German churches, the service includes an observance, some singing, and the presenting of the “harvest crown” to a “harvest queen.”
- Labor Thanksgiving Day is celebrated annually in Japan on the 23rd of November to celebrate labor and production, as well as to give thanks to one another.
- It is a national holiday and was adopted during the American occupation after World War II.
- Many children in Japan draw pictures on the holiday and give them to neighborhood policemen to thank them for their service to the community.
- Korea celebrates a day of thanks called “Chuseok,” which occurs in late September.
- Rome’s version of Thanksgiving is a harvest festival called “Cerelia.” It is a day to honor the Goddess of Corn, Ceres. Cerelia is celebrated on October 4th every year.
- The foods produced for Cerelia are a symbol of thanks from the Romans.
- In Ghana, “Homowo” is celebrated to give thanks in August and September. Ceremonies for this festival include a procession of chiefs through the major roads in the area, finely dressed. “Homowo” means “hooting at hunger.”
- Newly harvested crops are “blessed” and the people who eat them are purified before consuming them.
- The Chinese version of Thanksgiving is called “Chung Ch’ui,” also referred to as the August Moon Festival.
- It is a 3-day celebration that occurs in the middle of August and sees Chinese families celebrating the end of the harvest season with a roast pig and mooncakes, symbols of family unity and perfection.
- The Chinese also give their cakes to friends and family.
- In the Southern parts of India, people celebrate their harvest at the Pongal festival. This festival takes place in January.
Thanksgiving Day Worksheets
This is a fantastic bundle which includes everything you need to know about the Thanksgiving Day across 22 in-depth pages. These are ready-to-use Thanksgiving Day worksheets that are perfect for teaching students about the Thanksgiving Day which is celebrated on the fourth Thursday of November in the United States and the second Monday of October in Canada. It is celebrated as a day of giving thanks for the blessing of the harvest and of the preceding year.
Complete List Of Included Worksheets
- Thanksgiving Day Facts
- Around the World in Thanksgiving
- The Pilgrims
- Thanksgiving Feast
- The Mayflower
- Wampanoag Indians
- US Holidays
- Presidents and Thanksgiving
- In Painting
- How They Do It?
- Give Thanks!
Use With Any Curriculum
These worksheets have been specifically designed for use with any international curriculum. You can use these worksheets as-is, or edit them using Google Slides to make them more specific to your own student ability levels and curriculum standards. |
Character Strengths for Students
What Are They?
Character strengths are the positive qualities individuals have—as reflected in their thoughts, feelings, and actions—that promote the well-being of themselves and others. Though people may value different strengths to different extents, in general, parents and educators across cultures value these qualities and try to cultivate them in children and youth.
The idea of desirable character traits has existed since ancient times, but research on them is more recent, spurred by the rise of positive psychology—a movement that endeavors to use the tools of psychology not only to identify and fix problems, but also to recognize and foster positive qualities and flourishing.
Research on character strengths in both adults and youth tends to use the Values in Action (VIA) Classification, a framework that identifies 24 character strengths, which are often organized under six core virtues. The virtues are broader characteristics that have been valued in philosophical and spiritual traditions across time and place, while the character strengths function as components of or pathways to the virtues. The six virtues and their corresponding character strengths of the VIA are:
- Wisdom (creativity; curiosity; judgment; love of learning; perspective)
- Courage (bravery; perseverance; honesty; zest)
- Humanity (love; kindness; social-emotional intelligence)
- Justice (teamwork; fairness; leadership)
- Temperance (forgiveness; humility; prudence; self-regulation)
- Transcendence (appreciation of beauty and excellence; gratitude; hope; humor; spirituality)
In this view, good character is not a single attribute, but is multidimensional, a “family” of positive traits that may each be evident to different extents in different people. Each student has a unique profile of strengths, with some strengths being more developed and others less so, regardless of how they compare to other students.
One student may be particularly strong in curiosity, love of learning, and perseverance, while another may be strongest in kindness, humility, and fairness; yet another could have zest, social-emotional intelligence, and teamwork as top strengths.
Why Are They Important?
Research with young people has found that character strengths relate to multiple aspects of well-being, including happiness, mental/emotional health, social relationships, and academic achievement.
Character strengths help make kids happier.
- Many character strengths are associated with higher satisfaction with life. In one study, for young children (ages 3-9), the strengths of love, hope, and zest were particularly associated with happiness; for older kids, happiness was most related to these same strengths (love, hope, and zest), plus gratitude.
- Strengths relating to transcendence and temperance generally relate to higher life satisfaction in children and youth.
- Adolescents who participated in character strength-based exercises at school showed improvements in life satisfaction compared to other students.
Character strengths promote better psychological health.
- Studies have shown that certain character strengths are associated with fewer psychological problems among youth, both internalizing (e.g., hope, zest, and leadership associated with lower levels of anxiety and depression) and externalizing (e.g. perseverance, honesty, prudence, and love associated with less aggression).
- Other-directed strengths such as kindness and teamwork predict fewer symptoms of depression over time among youth.
Kids with character strengths get along better with peers.
- Students rated as more popular by their teachers tend to rate more highly on leadership and fairness, as well as on temperance strengths such as self-regulation, prudence, and forgiveness.
- Other-directed strengths such as kindness, teamwork, and social-emotional intelligence are associated with better social functioning at school.
Character strengths increase academic adjustment and success.
- Character strengths seem to help students adjust to school, from the beginning of elementary school through middle school and beyond; they have been associated with satisfaction with school, academic self-efficacy, and positive classroom behavior.
- Various strengths relate to academic achievement across ages, from middle school to college, above and beyond the effect of IQ. |
Health Statistics provide data on the health status of the population. Diagnostic information is classified according to the World Health Organization International Classification of Diseases (ICD-10). Health Statistics also provide data on health resources and facilities' utilization.
The information is compiled using service and administrative data from health facilities and supplemented by data collected from household-based surveys (Botswana AIDS Impact Surveys, Demographic Health Surveys and Family Health Surveys, as well as the Population and Housing Census). The statistics include facility services, staffing patterns and access to health services, and non-institutional vital events. The non-institutional information is based on reported live births and deaths that occur outside of the formal health system. Reported data are used to calculate the level of outpatient and inpatient morbidity and mortality and to examine trends over time.
Indicators produced include:
- Statistics on health resources (facilities and personnel)
- Outpatient services (curative services, ante-natal care, post-natal care, family planning, etc.)
- Inpatient services (number of beds, patient discharges, patient days, etc.) and mental health care, by demographic characteristics, region, etc.
- Mortality (including causes) and fertility trends
Religious and cultural difference was part of the landscape of America long before the period of European arrival and settlement. The indigenous peoples of this land Europeans called the “new world” were separated by language, landscape, cultural myths, and ritual practices. Some neighboring groups, such as the Hurons and the Iroquois, were entrenched in rivalry. Others, such as the nations that later formed the Iroquois League, developed sophisticated forms of government that enabled them to live harmoniously despite tribal differences. Some were nomads; others settled into highly developed agricultural civilizations. Along the Ohio and Mississippi rivers, ancient communities of Native peoples developed ceremonial centers, and in the Southwest, cliff-dwelling cultures developed complex settlements.
When Europeans arrived in the Americas, most did not even consider that the peoples they encountered had cultural and religious traditions that were different from their own; in fact, most believed indigenous communities had no culture or religion at all. As the “Age of Discovery” unfolded, Spanish and French Catholics were the first to arrive, beginning in the sixteenth century. Profit-minded Spanish conquistadors and French fur traders competed for land and wealth, while Spanish and French missionaries competed for the “saving of souls.” By the mid-century, the Spanish had established Catholic missions in present-day Florida and New Mexico and the French were steadily occupying the Great Lakes region, Upstate New York, Eastern Canada and, later, Louisiana and the Mississippi Delta.
Many of the European missionaries who energetically sought to spread Christianity to Native peoples were motivated by a sense of mission, seeking to bring the Gospel to those who had never had a chance to hear it, thereby offering an opportunity to be “saved.” In the context of the often brutal treatment of Native peoples by early Spanish conquistadors, many missionaries saw themselves as siding compassionately and protectively with the indigenous peoples. In 1537, Pope Paul III declared that Indians were not beasts to be killed or enslaved but human beings with souls capable of salvation. At the time, this was understood to be an enlightened view of indigenous people, one that well-meaning missionaries sought to encourage.
Letters from missionaries who lived among the Indians give us a sense of the concerns many held for the welfare of tribal peoples. A letter by Franciscan friar Juan de Escalona criticizes the “outrages against the Indians” committed by a Spanish governor of what is now New Mexico. The governor’s cruelty toward the people, de Escalona wrote, made preaching the Gospel impossible; the Indians rightly despised any message of hope from those who would plunder their corn, steal their blankets, and leave them to starve. The writings of Jean de Brébeuf, a French Jesuit missionary who lived and worked among the Hurons for two years without securing a single convert, reveal the powerful force of religious devotion that compelled missionaries to leave their homes for unknown lands and difficult lives in North America.
Newcomers from England during the sixteenth century also brought many expressions of Protestant Christianity to the new world. Among them were profit-seeking explorers, with allegiances to the Church of England, and Puritan reformers, rebelling against the Church and in search of religious freedom. Others included English Quakers, Catholics, and Scotch-Irish Presbyterians—all seeking a place to practice their religious commitments free of interference from the state. On the whole, these English settlers saw themselves as settling in a “virgin land” where real “civilization” had not been established. They understood their right to conquest in terms of old English legal traditions based on industry and utility, in which constructing houses, building fences, and laying out plantations constituted legitimate claims to land. They took their Biblical warrant from Genesis 1:28: “Be fruitful and multiply, and fill the earth and subdue it.”
The early history of the colonies reveals a complex story of relations with the Native peoples. Some colonial settlers, like those on Plymouth Plantation, had positive relations with Native peoples. In Puritan Massachusetts, John Eliot mastered Algonkian and then translated the Bible into that language in 1663. His “The Indian Covenanting Confession” was printed in 1669 in both Algonkian and English. He intended to place missionary efforts in the hands of the Indians themselves. With its regard for Indian autonomy, his approach was considered novel for its time. For the most part, however, the many Indian Wars dominated the encounter between Europeans and Native peoples. They were often complicated by the wrenching divisions within tribes caused by the increasing numbers of “praying Indians,” who had been converted by the missionaries.
From today’s perspective one might argue that even under the best of circumstances, colonial attitudes toward their indigenous neighbors were colored by paternalism, ignorance of tribal cultures, and desire for profit. Underneath even the most positive assessments lay a romanticism about the “noble savage.” It should be remembered, however, that even in the early years of settlement, European colonists often criticized one another for dealing too harshly or too greedily with their Native neighbors.
From the colonial period on, relations between European and Native peoples were predominantly expressed and negotiated in terms of land. The issue of land became, in many ways, the deepest “religious” issue over which worldviews collided. Many of the colonists saw the new land as a “wilderness” to be settled, not as already inhabited, or as Michael Wigglesworth described it in 1662, “a waste and howling wilderness, where none inhabited but hellish fiends, and brutish men that devils worshipped.” The founders of some colonies, such as Massachusetts and Connecticut, wholly disregarded Indian land-rights. Others drew up well-meaning treaties and purchase agreements. For example, Roger Williams and William Penn, in founding Rhode Island and Pennsylvania respectively, explicitly criticized the founders of other colonies for their self-justified acquisition of lands.
From the perspective of the Native peoples, the European discovery of the new world was more aptly an invasion. Most were deeply connected to the land but had no traditions of land ownership or private property. They often expressed astonishment that land could be sold or negotiated through treaties, since to them land was not a source of private profit but of life, including the life of the spirits. Some lands were also sacred as they bore the graves of the dead. Over the course of nearly three centuries, the terms “removal,” “displacement,” and “cession” came to be used by European settlers. Native peoples were to be “removed” from the lands they had occupied, “displaced” to other lands, and their lands “ceded” to the newcomers. Finally, Indian tribes were forcibly “settled” on “reservations,” lands set apart.
The religious encounter of Christian missionaries and Native peoples cannot be separated from the progressive seizure and settlement of tribal territories by European colonists. Through most of American history, however, there has been little recognition of the distinctively religious claims of Native peoples to the land and its sacred sites.
The encounter of Christians and Native peoples is too complex and varied to be characterized in general. There are surprising instances, such as the late eighteenth century Russian mission in Alaska, where early missionaries saw the Tlingit or Sugpiaq people of Kodiak Island as deeply religious, understanding that faith in terms of their own. More often, however, Christian missionaries did not recognize the customs of the Native peoples as spiritual or religious traditions in their own right and many mission schools effectively removed Native young people from their cultures. Many Christian colonists and missionaries, even those most sympathetic to the lifeways of the Native peoples, categorized Native Americans as “heathen” who either accepted or resisted conversion to Christianity. They did not place Native American traditions under the protection of religious freedom that had been enshrined in the Constitution. It was not until 1978, almost two hundred years after the Constitution was signed, that the American Indian Religious Freedom Act gave specific legal recognition to the integrity of Native American religions. |
A team of researchers from Cornell and IBM is calling the new processor TrueNorth, and it is something special: its 5.4 billion transistors make up over 4,000 individual cores, each of which contains a collection of circuitry that behaves like a set of neurons.
The goal of the new technology is to create a processor that can act more asynchronously, handling erratic spikes in activity so that it behaves more like the neurons in a biological brain.
According to arstechnica each core has over 100,000 bits of memory, which store things like the neuron’s state, the addresses of the neurons it receives signals from, and the addresses of the neurons it sends signals to. The memory also holds a value that reflects the strength of various connections, something seen in real neurons. Each core can receive input from 256 different “neurons” and can send spikes to a further 256.
Computer transistors work in binary; they’re either on or off, and their state can only directly influence the next transistor they’re wired to. Neurons don’t work like that at all, and the goal of designing a processor to mimic a neuron is to create a more flexible processing architecture. Neurons can accept inputs from an arbitrary number of other neurons via a structure called a dendrite, and they can send signals to a large number of other neurons through structures called axons. Finally, the signals they send aren’t binary; instead, they’re composed of a series of “spikes” of activity, with the information contained in the frequency and timing of these spikes.
Again, according to arstechnica, while it’s possible to model this sort of behavior on a traditional computer, the researchers involved in the work argue that there’s a fundamental mismatch that limits efficiency. While the connections among neurons are physically part of the computation structure in a brain, they’re stored in the main memory of a computer model of the brain, which means the processor has to wait while information is retrieved any time it wants to see how a modeled neuron should behave. The TrueNorth processor allows each “neuron” to behave semi-independently and communicate with a different number of other “neurons,” depending on the operation. |
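To illustrate the kind of spiking behavior described above, here is a minimal Python sketch of a leaky integrate-and-fire neuron, a standard simplified neuron model. It is only an illustration of the concept; the threshold, leak, and weights are invented values, not parameters of the actual TrueNorth hardware.

```python
# Minimal sketch of a leaky integrate-and-fire (spiking) neuron.
# All parameters (threshold, leak, weights) are illustrative assumptions,
# not values from the actual TrueNorth chip.

def simulate_lif(input_spikes, weights, threshold=1.0, leak=0.1):
    """input_spikes: per-timestep lists of 0/1 spikes from upstream neurons.
    weights: synaptic strength for each upstream neuron.
    Returns the timesteps at which this neuron fires."""
    potential = 0.0
    output_spikes = []
    for t, spikes in enumerate(input_spikes):
        # Integrate weighted input spikes into the membrane potential.
        potential += sum(w * s for w, s in zip(weights, spikes))
        # Leak a little charge every timestep.
        potential = max(0.0, potential - leak)
        # Fire when the threshold is crossed, then reset.
        if potential >= threshold:
            output_spikes.append(t)
            potential = 0.0
    return output_spikes

# Three upstream neurons; information is carried by when and how often they spike.
inputs = [[1, 0, 0], [0, 1, 0], [1, 1, 0], [0, 0, 1], [1, 0, 1]]
print(simulate_lif(inputs, weights=[0.4, 0.5, 0.2]))  # -> [2]
```

Note that the output is a list of spike times rather than a single on/off value, which is the key difference from a binary transistor.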
Parents As Partners in Puberty Education
When teachers and parents form a partnership, children are more likely to form positive attitudes and behaviors regarding puberty and sexuality.
In fact, more adolescents say they would like to learn about sexuality from their parents than from any other source. Yet, the educator can do much to engage and equip parents to take on the challenge of talking with their children about sex. Let’s face it: many parents did not have positive models for talking about puberty and sex when they were children. Now that it’s their turn, they might be at a loss for ways to initiate discussions and places to access accurate information that can be shared with their children. Many parents think their children are not ready to learn about puberty and sexuality; so, they may wait too long to begin discussions and miss the window of time when their children are most open to learning about their developing bodies, changing emotions, relationships with peers, and other important topics. Parents and other family members and trusted adults can instill family and spiritual values that impact sexual health and behavior; educators instill the more universally held values and develop skills in the classroom setting that support what children learn at home. Parents as partners with teachers increase the effectiveness of puberty education.
How to enlist parents as partners
Puberty: The Wonder Years is very committed to supporting teachers and parents in forging a partnership that benefits children and youth. The curriculum includes many activities that encourage parents to talk with their children about family values and guide their children toward behavior that is consistent with their beliefs and values. The family resources that go home to parents and other trusted adults are provided in Spanish and English in order to be useful for more family members. To engage parents and other family members and trusted adults, Puberty: The Wonder Years includes the following family engagement resources for use with families:
- Family Letter: Schools send home a letter that explains the puberty education program and notifies parents of their rights. An outline of the topics that will be taught each year and a permission form, if one is to be used, will be attached. A sample letter has been provided for schools to modify, personalize, and print on school stationery. Keeping parents informed about the puberty lessons can help develop trust and support for the curriculum.
- Family Preview Meeting: Schools are encouraged to invite parents and other family members to attend a meeting at the school to learn about Puberty: The Wonder Years. This provides parents an opportunity to preview the media, review the curriculum, ask questions, and meet the school staff who will implement the lessons. A sample invitation has been provided.
- Parent Notification: Offer parents the opportunity to excuse their children from any lessons they find objectionable. Meet with parents to discuss the reasons they are considering having their children excused. Often, with an opportunity to see the curriculum and talk more about the topics that will be offered, parents will reconsider. By respecting each parent’s right to decide what is appropriate for his or her child, you are also maintaining the integrity of the curriculum that was selected by the school district as being most suitable for your community. A sample exclusion or “opt out” form has been provided.
- Family Partnership Fliers: These informational fliers describe the topics covered in the lessons, such as puberty, hygiene, and communication, to equip parents and other trusted adults for conversations with their children. They provide parents with information and helpful tips for maintaining a positive relationship with their children during puberty. The Family Partnership Fliers are also provided in Spanish.
- Family Activity Sheets: Homework assignments encourage the students to discuss issues related to puberty and sexual behavior with their families. If a parent is not available for these assignments, students are asked to identify another trusted adult with whom they can talk. Families have been grateful for this tool for triggering family discussions. The Family Activity Sheets are also provided in Spanish.
- Student Activity Sheets: Some of the classroom activities utilize activity sheets. When the students take their activity sheets home, they may be helpful as discussion starters.
- Home Discussions: During the lessons, some issues or topics that are not approved for classroom discussions may arise. At times like these, the students are encouraged to ask their families for help in clarifying the topic.
More helpful links for engaging parents in sex education:
- KidsHealth: Talking to Your Child About Puberty
- Teaching Sexual Health: The Parent’s Role
- Advocates for Youth: Parent-Child Communication
- Parents Matter: The Role of Parents in Teens’ Decisions About Sex
- Books for Parents and Children
Sign up to receive a FREE sample lesson to see what Puberty: The Wonder Years is all about. |
Viking Helmet- Steel- Gjermundbu- Viking Age, Vendel Helmet
The Viking helmet, also known as the Gjermundbu helmet, originates from the 10th century in central Norway. This helmet has a rounded cap, and the neck was protected with a mail (chainmail) aventail. It has a spectacle guard around the nose and eyes, forming a mask. This design is similar to Vendel-period helmets. Five types of Viking helmets have been discovered, but only the Gjermundbu helmet could be reconstructed completely enough to see the entire design. Archaeologists believe that Vikings may have only rarely used metal helmets; instead, the majority of their battle helmets were likely made of leather. Helmets with metal horns may have been used for ceremonial purposes millennia earlier, but there is no evidence that Vikings used horned helmets in battle at any point in history. It has been suggested that horned helmets became associated with Vikings due to a 19th-century opera.
One size fits most |
Poultry Farmer Job Description
Poultry farmers are responsible for the daily care of chickens, turkeys, ducks, or other poultry species that are raised for meat production purposes. Approximately nine billion broiler chickens and 238 million turkeys are consumed in the U.S. each year. These birds are raised in over 233,000 poultry farms, many of which are small-scale operations.
Duties of a Poultry Farmer
Routine responsibilities for a poultry farmer include:
- Distributing feed
- Administering medications
- Cleaning enclosures
- Ensuring proper ventilation
- Removing dead or sick birds
- Maintaining facilities in good working order
- Monitoring flock behavior to detect any signs of illness
- Transporting birds to processing plants
- Restocking enclosures with young birds
- Keeping detailed records
- Overseeing various poultry farm employees
Poultry producers work in conjunction with poultry veterinarians to ensure the health of their flocks. Livestock feed sales representatives and animal nutritionists may also advise poultry producers on how to create nutritionally balanced rations for their facilities.
As is the case with many animal farming careers, a poultry farmer may be required to work long hours that can include nights, weekends, and holidays. Work may be carried out in varying weather conditions and extreme temperatures. Workers may also be exposed to diseases that are commonly found in poultry waste products, such as salmonella or E. coli.
Most poultry farmers raise one species of fowl for a specific purpose. Nearly two-thirds of poultry revenues come from the production of broilers, which are young chickens raised for meat. Approximately one-quarter of poultry revenues come from egg production. The remaining poultry revenues are derived from the production of other species such as turkeys, ducks, game birds, ostriches, or emus.
According to the USDA, most U.S. poultry farms involved in meat production are concentrated in the Northeast, Southeast, Appalachian, Delta, and Corn Belt regions, which places them in close proximity to the majority of poultry processing centers. The state with the highest number of broiler farms is Georgia, followed by Arkansas, Alabama, and Mississippi. The U.S. is the second largest exporter of broilers, second only to Brazil.
Most farms that produce broilers are large commercial operations involved in indoor broiler production. Other types of broiler farming are free-range broiler production or organic broiler production.
Education and Training
Many poultry farmers hold a two- or four-year degree in poultry science, animal science, agriculture, or a closely related area of study. However, a degree is not necessary for entrance to the career path. Coursework for these animal-related degrees can include poultry science, animal science, anatomy, physiology, reproduction, meat production, nutrition, crop science, genetics, farm management, technology, and agricultural marketing.
Many poultry farmers learn about the industry in their younger years through youth programs such as Future Farmers of America (FFA) or 4-H. These organizations expose students to a variety of animals and encourage participation in livestock shows. Others gain hands-on experience by working with livestock on the family farm.
The Earning Potential of a Poultry Farmer
The income a poultry farmer earns can vary widely based on the number of birds kept, the type of production, and the current market value of poultry meat. The Bureau of Labor Statistics (BLS) reports that the median wage for agricultural managers was $68,050 per year ($32.72 per hour) back in May of 2014. The lowest earning tenth of agricultural managers made under $34,170 while the top paid tenth in the category earned over $106,980.
Chicken manure may also be collected and sold to gardeners for use as fertilizer, which can serve as an additional source of revenue for poultry farmers. Many smaller non-corporate poultry farmers engage in other agricultural enterprises on their farms–from raising crops to producing other livestock species–to provide additional income to the farm.
Poultry farmers must factor in various expenses when calculating their total earnings. These expenses may include feed, labor, insurance, fuel, supplies, maintenance, veterinary care, waste removal, and equipment repair or replacement.
The Bureau of Labor Statistics predicts that there will be a very slight decline of about 2 percent in the number of job opportunities for farmers, ranchers, and agricultural managers over the next several years. This is due primarily to the trend towards consolidation in the farming industry, as smaller producers are being absorbed by the larger commercial outfits.
While the total number of jobs may show a slight decline, the USDA’s industry surveys indicate that poultry production will post steady gains through 2021 due to increasing demand for broilers. |
Hemoglobin is the oxygen-carrying compound of the red cells.
Hemoglobin can be measured chemically, and the amount of hemoglobin per liter (or per 100 mL) of blood can be used as an index of the oxygen-carrying capacity of the blood. Total blood hemoglobin depends on the number of RBCs (the hemoglobin carriers), but also (to a much lesser extent) on the amount of hemoglobin in each RBC. A low hemoglobin level indicates anemia.
Hemoglobin reference values are most frequently quoted as 13.5-17.5 g/dL for males and 12.0-16.0 g/dL for females. Infants and children have considerably different hemoglobin values than do adults. The local range of reference must also be considered; for example, increase in altitude causes a physiologic increase in hemoglobin, such that the normal hemoglobin level in Denver will be higher than in Omaha.
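As a simple illustration of how such reference ranges are applied, the sketch below flags an adult hemoglobin result against the ranges quoted above. It is illustrative only; in practice, locally validated ranges adjusted for age, sex, and altitude must be used.

```python
# Illustrative only: classifies an adult hemoglobin value against the
# reference ranges quoted above (13.5-17.5 g/dL men, 12.0-16.0 g/dL women).
# Real interpretation must use locally validated, age- and altitude-adjusted ranges.

ADULT_RANGES_G_DL = {"male": (13.5, 17.5), "female": (12.0, 16.0)}

def classify_hemoglobin(value_g_dl, sex):
    low, high = ADULT_RANGES_G_DL[sex]
    if value_g_dl < low:
        return "low (suggests anemia)"
    if value_g_dl > high:
        return "high"
    return "within reference range"

print(classify_hemoglobin(11.2, "female"))  # low (suggests anemia)
```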
The CRISPR (Clustered, Regularly Interspaced, Short Palindromic Repeats)–Cas system evolved in microbes as a defense mechanism to protect against invasive phages. Today it forms the basis for the most exciting and fastest developing set of tools in biotechnology that give us the ability to edit any gene in any cellular organism or tissue. CRISPR-Cas promises to enable unprecedented advances in everything from health and diagnostics to agriculture and energy.
If you are exploring how to harness this technique in your research, this article will help you understand the basics to get you started. This article is the first in a series designed to carry you through your CRISPR journey, please check out our other articles here.
Understanding CRISPR Gene Editing
The CRISPR gene-editing process is driven by a complex consisting of a bacterially-derived nuclease (e.g. Cas9) and guide RNA (gRNA). The gRNA is a specific RNA sequence designed to recognize and direct the Cas nuclease to the target DNA region. The gRNA is made up of two parts: CRISPR RNA (crRNA) and trans-activating crRNA (tracrRNA).
- crRNA is a 17-20 nucleotide sequence complementary to the target DNA and therefore varies depending on your target gene.
- The tracrRNA is an invariable sequence that serves as a scaffold attaching the Cas nuclease to the crRNA.
The original CRISPR editing systems were driven by two-part gRNA complexes consisting of crRNA and tracrRNA. More recently, a single gRNA (sgRNA) approach, where the crRNA and the tracrRNA are combined into one RNA molecule, has become popular due to its increased ease and simplicity of use. The general schematic of the CRISPR gene-editing system can be seen in Figure 1.
While targeting of the nuclease is directed by the gRNA, a protospacer adjacent motif (PAM) of ~3-8 nucleotides must also be present downstream of the target site. Recognition of a PAM allows the nuclease to cleave the DNA, creating a double-strand break (DSB). The most commonly used nuclease, Cas9 derived from Streptococcus pyogenes, recognizes a PAM sequence of 5’-NGG-3’ (where ‘N’ is any nucleotide). However, the specific PAM required varies depending on the nuclease used, with nucleases isolated from different species requiring different PAMs.
Once DSBs are generated, the native cellular DNA repair machinery attempts to repair the cut via non-homologous end-joining (NHEJ) or homology-directed repair (HDR).
NHEJ is the native cellular mechanism for repairing DSB. However, because NHEJ ligates the DNA ends back together without the use of a homologous DNA template, it is very error-prone. When these errors result in the insertion or deletion of nucleotides (indels), frameshift mutations often occur, leading to the generation of premature stop codons and a loss-of-function (LOF) mutation. This method, utilizing NHEJ, is the primary means by which CRISPR is used to disrupt (knockout) a gene.
In contrast, HDR is the mechanism by which CRISPR is employed to guide the replacement and expression of a specific genetic sequence (knock-in). Although the same basic CRISPR components are used, HDR utilizes a DNA donor template containing the new desired sequence flanked by regions of homology. When this donor template is introduced along with the CRISPR components, the cells will use this template to repair the cut DNA via homologous recombination. The result is the incorporation of the new sequence into the target gene.
Getting Started with CRISPR Gene Editing
A typical CRISPR gene-editing workflow can be broken into the following key steps:
- gRNA design.
- Delivery of the editing complex into your cells or embryo.
- Cell selection.
- Analysis of successfully edited cells.
Guide RNA (gRNA) Design
Careful design of the gRNA is critical to the success of your CRISPR experiment. A poorly designed gRNA may fail to generate an effective knockout or it may result in unwanted off-target effects by binding other regions of genomic DNA. Here are a few tips for designing effective gRNA:
- Ensure the presence of a PAM motif. As mentioned, nucleases require PAMs to cleave their target DNA. Therefore, it is essential that a PAM specific to your intended nuclease is present immediately downstream of your intended gRNA binding site.
- Get the GC content right. The GC content of the gRNA has been reported to affect the activity of gRNAs: GC contents that are either too low or too high result in lower cleavage efficiency [2,3]. It is therefore recommended that you design your gRNAs with a GC content of 40-60%, where possible.
- Consider chromatin accessibility. It has been reported that chromatin accessibility is a major determinant of sgRNA binding in vivo with successful binding occurring more frequently in open chromatin regions of DNA.
- Target essential exons. For generating knockouts it’s important to design your gRNA to target an exon that is essential for protein function, and successfully generate a loss-of-function mutation.
- Minimize off-target complementarity. Your gRNA sequence should ideally be unique to your target DNA. However, gRNAs may still bind other regions, even if complementarity is not 100%. There are various computational tools available that can help you determine if your gRNA has other potential binding sites.
Designing your gRNAs may seem complex, but online resources like the Sigma-Aldrich® CRISPR design tools, make the process straightforward.
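As a rough illustration of how the first two design rules can be screened computationally, the Python sketch below checks a candidate SpCas9 target for an NGG PAM immediately downstream and for a 40-60% GC content in a 20-nucleotide protospacer. It is a toy filter based only on the rules stated above; it does not evaluate chromatin accessibility or genome-wide off-target sites, for which dedicated design tools should be used.

```python
# Toy pre-filter for SpCas9 guide candidates, assuming a 20-nt protospacer
# followed immediately by an NGG PAM. This does NOT replace full design tools
# (no off-target search, no chromatin accessibility check).

def gc_content(seq):
    seq = seq.upper()
    return 100.0 * sum(base in "GC" for base in seq) / len(seq)

def check_candidate(protospacer, downstream):
    """protospacer: candidate 20-nt target sequence matched by the gRNA.
    downstream: the first 3 genomic bases immediately 3' of the protospacer."""
    issues = []
    if len(protospacer) != 20:
        issues.append("protospacer should be 20 nt for SpCas9")
    if downstream[1:3].upper() != "GG":
        issues.append("no NGG PAM immediately downstream")
    gc = gc_content(protospacer)
    if not 40.0 <= gc <= 60.0:
        issues.append(f"GC content {gc:.0f}% is outside the 40-60% guideline")
    return issues or ["passes these basic checks"]

print(check_candidate("GCTGATCTAGGTACCATGCA", "TGG"))  # good candidate
print(check_candidate("ATATATATATATATATATAT", "TCA"))  # fails PAM and GC checks
```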
Delivering CRISPR Into Your Cells
There are multiple ways to deliver CRISPR nucleases (e.g., Cas9) and gRNAs into cells, including transient transfection (DNA, RNA, or RNP), lentiviral transduction, PiggyBac integration, and ribonucleoprotein (RNP) transfection. The primary feature that distinguishes these methods is whether they express your CRISPR components transiently or stably.
Transient methods work best for individual gene knockouts as they are associated with fewer off-target effects, while stable gRNA integration is needed for large-scale screening applications to enable recovery and quantification at the end of the screening process.
Certain cell types, such as T cells, are hard to transfect with a plasmid vector, either because the viral vector elicits an immune response or the plasmid delivery ends up killing the target cell. In such cases, a ribonucleoprotein (RNP) system is often used. Here, the RNP, consisting of purified Cas9 protein complexed with the gRNA, is assembled in vitro and delivered directly into the target cells by electroporation or transfection. RNP offers faster gene editing since the functional nuclease is immediately available in the cell. RNP complexes are quickly degraded by cellular proteases making the editing activity short-lived. The reduced time of available RNP in the cells also reduces off-target effects. Overall, RNP provides an effective and straightforward method for achieving high levels of gene knockouts with efficiency rates reaching 70-80%.
CRISPR Selection Markers
To select those cells that have been successfully transduced, selection marker genes are often incorporated into the vector that expresses the CRISPR components.
The two most common selection markers are:
- Fluorescent proteins that allow enrichment of transduced cells via fluorescence-activated cell sorting (FACS).
- Antibiotic resistance genes that enable the selection of the transduced cells using an appropriate antibiotic.
Selection markers are not limited to vector-mediated CRISPR delivery methods. Fluorophore-tagged Cas9 RNP complexes, such as MISSION™ Cas9-GFP Fusion Proteins, as well as fluorophore-tagged gRNAs are available. As discussed above, a benefit of using RNP complexes is that they are quickly removed from the cell. Using tagged RNP complexes enables you to observe this clearance of the nuclease-gRNA complex in the transfected cell in real-time.
An important consideration when choosing your selection marker is if your chosen cell line includes any pre-existing antibiotic resistance or fluorescent tags that would interfere with the selection.
Measurement and Analysis of Successful Gene Editing
Knowing how you will measure the success of your CRISPR gene-editing experiment is critical. There are many options for doing so, including Sanger sequencing, mismatch detection assays, next-generation sequencing (NGS), phenotypic assessment, and measuring mRNA and protein levels for your targeted gene. These methods differ in their sensitivity, scalability, resolution, and cost.
For example, although NGS offers extremely high sensitivity and resolution, it is costly and requires significant technical expertise to carry out. Mismatch detection, on the other hand, is easy to perform but lacks the sensitivity of Sanger sequencing and NGS.
Determining the presence of an indel is often considered best practice. However, simply measuring the changes made to the genome is not sufficient to determine if they have disrupted gene function and created a gene knockout that causes a phenotypic response. It is also important to measure protein levels, ideally using a well-validated antibody.
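For example, once sequencing reads spanning the cut site have been classified (by whichever analysis method you choose), the editing outcome can be summarized with simple arithmetic. The read counts below are invented purely for illustration:

```python
# Hypothetical summary of sequencing results around a CRISPR cut site.
# Assumes reads have already been classified elsewhere (e.g. by an NGS pipeline);
# the counts here are invented for illustration.

reads = {"wild_type": 412, "in_frame_indel": 96, "frameshift_indel": 492}

total = sum(reads.values())
edited = total - reads["wild_type"]
editing_efficiency = 100.0 * edited / total
# Frameshift indels are the ones most likely to yield a loss-of-function allele.
likely_knockout = 100.0 * reads["frameshift_indel"] / total

print(f"Total editing efficiency: {editing_efficiency:.1f}%")          # 58.8%
print(f"Likely loss-of-function (frameshift) alleles: {likely_knockout:.1f}%")  # 49.2%
```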
Target Cell Ploidy and CRISPR Editing
The ploidy of your chosen cell line is another important consideration when using CRISPR. This is vital information because it dictates the number of mutations needed to obtain a homozygous LOF mutant. Many transformed or cancer cell lines possess more complex genomic configurations (e.g., triploid), making them more difficult substrates for complete gene knockout. In contrast, homozygous mutants can be obtained from haploid cells with relative ease.
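A quick back-of-the-envelope calculation shows why ploidy matters. If each allele is disrupted independently, the chance that every copy in a cell is disrupted falls rapidly as the number of copies grows; the 80% per-allele rate below is purely an assumed figure for illustration.

```python
# Illustrative only: probability that ALL copies of a gene are disrupted,
# assuming each allele is edited independently with the same probability.
per_allele_rate = 0.80  # assumed per-allele disruption rate, not a measured value

for ploidy, label in [(1, "haploid"), (2, "diploid"), (3, "triploid")]:
    complete_knockout = per_allele_rate ** ploidy
    print(f"{label}: {100 * complete_knockout:.0f}% of cells fully knocked out")
```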
Controls for CRISPR Gene Editing
Choosing the right controls for your gene-editing experiment is essential in determining the validity of your results and to facilitate troubleshooting, when necessary.
- Positive controls: gRNAs that have been shown to successfully target another gene in your system can be used to confirm that CRISPR is working correctly in your setup.
- Negative controls: Non-targeting gRNAs, or gRNAs that don’t target any gene in your cell’s genome, can be used to confirm that your LOF phenotype is not a technical artifact.
Additional controls can be used to simplify the interpretation of your CRISPR gene-editing experiment. For example:
- Generating multiple null mutant cell lines using different gRNAs that target the same gene will drastically decrease the chance that your LOF phenotype is caused by an off-target effect.
- Complementary approaches such as CRISPRa and RNAi can be used to confirm the same phenotype.
- If your cells permit, consider performing a rescue experiment in which you, either transiently or stably, reintroduce the knocked-out gene in your cell line to confirm function is regained. Re-expression of a gene can be achieved effectively through ORF (open reading frame) over-expression to confirm the phenotype observed is the result of the gene knockout and not due to off-target effects.
Summary of CRISPR Gene Editing
CRISPR-driven gene editing is one of the most powerful tools available today. While the number of considerations and design choices required may seem daunting, you’ll find that a modest degree of planning will go a long way in improving the likelihood of success in your experiments.
This article is intended only as an introduction to the basic concepts and considerations of conducting a CRISPR experiment. We encourage you to explore the additional educational resources listed below.
More CRISPR Resources
- How to Validate a CRISPR Experiment (Article)
- How to Understand CRISPR Formats and Their Applications (Article)
- Engineering Vero Cell Lines Using CRISPR to Increase Production of Viral Vaccines (Webinar)
- Efficient Generation of Gene-edited Mouse Models and Cell Lines Using Synthetic sgRNA (Webinar)
- CRISPR Gene Editing 101 (eBook)
And you should also consider the following Sigma-Aldrich® tools:
- Sigma-Aldrich® CRISPR
- Sigma-Aldrich® Advanced Genomics
- Advanced Genomics Resource Center
- CRISPR Essentials
- Additional CRISPR Webinars
References
- Chatterjee, P., et al. Minimal PAM specificity of a highly similar SpCas9 ortholog. Science Advances, 4, 10 (2018). DOI: 10.1126/sciadv.aau0766
- Liu, X. et al. Sequence features associated with the cleavage efficiency of CRISPR/Cas9 system. Sci Rep. 6: 19675 (2016). DOI: 10.1038/srep19675
- Doench, J. G. et al. Rational design of highly active sgRNAs for CRISPR-Cas9-mediated gene inactivation. Nat Biotechnol. 32(12): 1262–7. (2014). DOI: 10.1038/nbt.3026
- Wu, X. et al. Genome-wide binding of the CRISPR endonuclease Cas9 in mammalian cells. Nat Biotechnol. 32(7): 670–676. (2014) DOI: 10.1038/nbt.2889
- Kosicki, M. et al. Dynamics of indel profiles induced by various CRISPR/Cas9 delivery methods. Prog Mol Biol Transl Sci 152, 49-67 (2017). DOI: 10.1016/bs.pmbts.2017.09.0 |
If you want to challenge your kids’ vocabulary skills and their ability to find things, this time we will share with you a large selection of kid word searches to test and train their brains to be more active! In these word search worksheets, there are words hidden in a pool of letters.
To finish the worksheets, your kids simply need to meticulously look for the words listed in the worksheets! Check out these word search worksheets provided in the images below!
Children sometimes find it hard to concentrate. They are easily distracted by everything in their surroundings. Using a word search or puzzle is one way to help train your kids to concentrate while finding words. These kid word searches contain many words hidden horizontally, vertically, diagonally, forward, or backward. Your kids have to carefully find and spot the words hidden in these worksheets. The words that they have to find are listed to the left, right, above, or below the word search grid.
By using these word search puzzles, your kids will get used to concentrate as they are trying to find the hidden words in the pools of letters. There are many themes of the word search that you can choose for your kids in this post.
Word searches are one of the best alternative ways to exercise your children’s brains and make them more active in concentrating. Therefore, print all these word search sheets and give them to your children!
4 Math Measurement - Welcome aboard the journey into the world of educational printable worksheets in Math, English, Science and Social Studies, coordinated with the CCSS and applicable to pupils of all grades.
Vibrant charts, engaging tasks, practice exercises, online quizzes and templates, together with clearly laid-out information, illustrations and a variety of tasks with varied levels of difficulty, help pupils in classroom and homework activities. Get started with our free sample worksheets and sign up to access the full treasure trove. The math measurement worksheets come together with answer keys that assist in instant validation.
Our math measurement worksheets cover the full range of basic school mathematics skills, from counting and numbers through fractions, decimals, word problems and more.
Whether your child needs a small math boost or is interested in learning more about the solar system, our free worksheets and printable activities cover most of the educational bases. Every worksheet was made by a professional educator, so you know your child will learn crucial age-appropriate details and concepts. Best of all, lots of math measurement worksheets across many different topics feature vibrant colours, cute characters, and intriguing story prompts, so children become excited about their learning experience.
These math measurement worksheets are a perfect learning tool for youngsters who are only just learning to write or want to practice at home. Turtle Diary recognizes the importance of practicing educational content through writing; therefore, we offer a variety of free printable worksheets in topics like language arts, mathematics, and science. Worksheets familiarize pupils with displaying their work in a written format and offer them the opportunity to receive feedback on mistakes or tasks well done. Be sure to check out our interesting and colorful worksheets for kids below.
NASA is preparing to launch New Mexico chile pepper plants off Earth in March 2020 and grow the fruiting plants on the International Space Station. Researchers hope it will result in better food for astronauts, as well as a deeper understanding of how to someday grow food on the Moon and Mars.
These particular peppers were chosen for many reasons. “We were looking for varieties that don’t grow too tall, and yet are very productive in the controlled environments that we would be using in space,” NASA plant physiologist Ray Wheeler told the Rio Grande Sun newspaper. “The astronauts have often expressed a desire for more spicy and flavorful foods, and so having a bit of hot flavor also seemed to be a good thing. Plus, many peppers are very high in vitamin C, which is important for space diets.”
Matthew Romeyn, the lead scientist on the pepper project, told the AP that his team selected the hybrid chile plant because it has a shorter growth cycle than other varieties, and because it can thrive within the smaller confines of the Advanced Plant Habitat, the garden where astronauts grow their produce. It can be eaten either while it is still green, before it has ripened, or when it reaches the final red stage.
However, there may be a further reason that this particular variety was ultimately favored. One of the NASA researchers on the project, Jacob Torres, is an Española native, just like the chile plant itself. He believes the spicy fruit from his region could lift astronauts’ moods. “Just by having something fresh to eat, a kind of crop you grew yourself, being away from home for a very long time, that picks up your morale, it brings positivity and adds to the mission that you are doing,” he told the Albuquerque Journal. “That’s one important aspect of the research that we’re doing.”
The Advanced Plant Habitat provides plants with the CO2, humidity, and lighting that are otherwise absent in space. The chile pepper project will demonstrate how fruiting plants manage in these conditions. “If we do go on a deep space mission, or we do go to the moon or a mission to Mars, we have to figure out a way to supplement our diets,” Torres says. “Understanding how to grow plants to supplement the astronauts’ diet will be important to our mission of going to Mars. So that kind of fuels the research that we’re doing now.”
Brightness and Emittance
The research that can be carried out in the Beamlines is closely linked to the quality of light produced by the Source. The quality of a Synchrotron Light Source is characterized by its brightness, defined as the number of photons emitted by the source in a determined spectral range of energy, per unit time, per unit size and angular divergence of the source. The higher the brightness, the better the quality of the light source.
Some scientific applications and experimental methods can only be carried out in light sources with high brightness and coherence. Thus, there is a constant search for building synchrotrons increasingly bright, from which Sirius stands out for being designed to have the highest brightness in the world among the sources with its energy range.
One of the most effective ways to increase the brightness of light sources is to reduce a quantity called emittance. The emittance of a synchrotron light source is a measure of the size and angular divergence of the electron beam. The better collimated the electron beam is, that is, the lower the emittance, the higher the brightness of the source.
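To make these definitions concrete, they can be written out as formulas. The notation below follows common accelerator-physics conventions and is our own shorthand for the quantities described above, not an official Sirius specification:

```latex
% Brightness B: photons emitted per unit time, per unit source size and divergence,
% within a standard 0.1% spectral bandwidth (common convention, assumed here).
B = \frac{\text{photons / second}}
         {(\text{mm}^2\ \text{source size})\,(\text{mrad}^2\ \text{divergence})\,(0.1\%\ \text{bandwidth})}

% Horizontal emittance: roughly the product of the beam size and its angular divergence,
% so a smaller, better-collimated beam means a smaller emittance and a brighter source.
\varepsilon_x \approx \sigma_x \, \sigma_{x'}
```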
In turn, emittance is a constant feature of the machine that depends only on the configuration of the magnetic lattice of the Storage Ring, and it is one of the main parameters of a Synchrotron Light Source.
The Weimar Republic was formed
The Weimar Republic was the first time Germany had ever been governed as a democracy. It was designed to give the German people a voice. However, there were major flaws in its constitution that made it weak.
1. The Council of People's Representatives organised elections in January 1919 to create a new parliament. Germany was now a democracy - the people would say how the country was run.
2. Friedrich Ebert became the first President, with Philip Scheidemann as Chancellor. Ebert was leader of the SPD, a moderate party of socialists.
3. February 1919, the members of the new Reichstag met at Weimar to create a new constitution for Germany. This was the beginning of a new period of Germany's history that historians call the Weimar Republic.
The constitution established how the government would be organised and established its main principles.
The Weimar Constitution made Germany More Democratic
The new constitution reorganised the German system of government. The President was elected by the German people, and so were the parties in the Reichstag. The president had the most power, but the Chancellor was in charge of day-to-day running of government.
The President:
- Elected every 7 years.
- Chooses the Chancellor and is head of the army.
- Can dissolve the Reichstag, call new elections and suspend the constitution.
The Reichstag:
- The new German Parliament.
- Members elected every 4 years using proportional representation.
- Proportional representation is where the proportion of seats a party wins in parliament is roughly the same as the proportion of total votes it wins.
The Reichsrat:
- Second house of parliament.
- Consists of members from each local region.
- Can delay measures passed by the Reichstag.
1. The new constitution was designed to be as fair as possible. Even very small political parties were given seats in the Reichstag if they got 0.4% of the vote or above.
2. The constitution allowed women to vote for the first time, and lowered the voting age to 20 - more Germans could vote and the German public had greater power.
The Constitution had Weaknesses
Even though it was more democratic, it wasn't very efficient.
1. Proportional representation meant that even parties with a very small number of votes were guaranteed to get into the Reichstag. This meant it was difficult to make decisions because there were so many parties, and they all had different points of view.
2. When a decision couldn't be reached, the President could suspend the constitution and pass laws without the Reichstag's consent (Article 48 gave him the ability to force through his own decisions).
3. This power was only supposed to be used in an emergency, but became a useful way of getting around disagreements that took place in the Reichstag. This meant it undermined the new democracy. |
1. Introducing the Passive
Look at the following sentences.
Sentence A: People eat sushi in many parts of the world.
Sentence B: Sushi is eaten by people in many parts of the world.
Sentence A is considered active because the doer of the action (or agent) is the subject of the sentence.
Sentence B is passive. That is, the object of the active verb (eat) in the first sentence is the subject of the passive verb (is eaten) in the second sentence. This means that only verbs which have objects (transitive verbs) can be made passive.
Active sentences are usually regarded as stronger than passive sentences. Passive sentences are common, however, especially in academic writing. In fact, there are three situations when it is better to use a passive sentence instead of an active one. They are listed below.
Using the Passive
Situation One: When we don't care or don't know who performs the action.
The injured workers were rushed to the hospital.
Situation Two: When we can't or don't want to say who performed the action.
Has the truth been hidden from us?
Situation Three: When we want to place emphasis on the receiver of the action rather than the doer, or agent.
Thousands of homes were destroyed by the hurricane.
2. Forming the Passive
We form the passive by using the correct form of the verbs “be” (is, am, are, be, been, being, was, were) or “get” (get, gets, getting, got, gotten) plus a past participle. Be careful. Passive sentences with get plus a past participle are less formal than those with be plus a past participle. Consequently, they are most often used in spoken English and informal writing.
Most of the trash got taken to the recycling centre. (informal)
Most of the trash was taken to the recycling centre. (formal)
Using the “by phrase”
As you have learned, passive sentences are used when writers don't know or don't care who actually performed the action. Thus, the majority of passive sentences do not include “by phrases”. Only when it is important for readers to know who performed the action do writers include a “by phrase” in passive sentences. Consider the following examples.
That building was designed by a famous architect.
(The “by phrase” is important, so it is included.)
“Macbeth” and “King Lear” were written by William Shakespeare.
(The “by phrase” is included because it is important.)
The mail is usually delivered before noon.
(The “by phrase” is not necessary because we know who delivers the mail each day.) |
This coastal location in Chile’s northern area, in the Tarapacá Region, is currently a small fishing town that left the saltpeter riches behind yet hides other treasures. It is home to one of the richest and most varied marine ecosystems of northern Chile, which Oceana decided to research.
The expedition in Pisagua began precisely with the purpose of protecting the spawning areas of one of Chile’s most heavily fished species, the anchovy. This is the area where the fish release their eggs, and it is crucial for this resource to remain healthy. The ecosystem’s main characteristic, and what makes it so important for the anchovy, is the vast macroalgae forests that grow on rock surfaces along the area’s entire coast, serving as a safe shelter for young fish that need to protect themselves from predators. Chilean jack mackerel and a large diversity of rockfish species typically seen along Chile’s coast are also protected.
Before Oceana’s expeditions, there wasn’t much resource material available on this area. There were only a few references from past research conducted by Universidad Arturo Prat, already revealing that Pisagua could be considered a priority conservation area. In addition, there was a large colony of sea lions in Punta Pichalo, near Pisagua, where many sea birds and even Humboldt penguins could be found. In fact, the high concentration of birds in the area can be demonstrated by the guano accumulated along the coast. This material was highly exploited in the past and promoted the construction of docks to facilitate its sale.
For local authorities, other crucial sites to be preserved were Río Loa, Punta Patache, Punta Pichalo and Chipana. However, there wasn’t enough information about these ecosystems either, except for whale sightings, and the presence of sea lions and sea birds. So, in 2017, Oceana and researchers from Universidad Arturo Prat started out on the first expedition to the area to begin collecting scientific information. For one week, life on the ocean floor revealed itself; the ROV Camera gathered footage 550 meters deep, while a drop cam registered images at 700 meters. The expedition was divided into key points: Rio Loa’s estuary, Punta Chipana and Punta Pichalo. Of all the explored areas, Punta Pichalo in Pisagua, was characterized as having the largest diversity of species.
Pisagua’s bay is known for its high productivity and marine diversity. The abundance of phytoplankton and crustaceans such as krill and prawns create the perfect conditions where larger organisms such as fish, mammals and sea birds can proliferate. Macroalgae forests also protect the growth of coastal fish and important resources like anchovies and jack mackerels.
There’s a high concentration of nutrients consumed by other species in upwelling areas; the leftovers are distributed by currents and descend to the ocean floor. As they reach the bottom and cover the floor, oxygen is lost, creating the perfect environment for bacteria such as Thioploca to process this waste. According to experts, this true nutrient-recycling process is the reason why the coasts of Chile and Peru are among the most productive in the world, a condition that can be found throughout our country, from Arica to Concepción. Doctor Ariel Gallardo, a renowned professor at Universidad de Concepción, discovered these bacteria and, coincidentally, his first research was conducted on a scientific cruise off Punta Pichalo during the 1960s.
The fields of bacteria in the depths of Pisagua make this a unique area of northern Chile. This is a resilient ecosystem that was able to recover after the El Niño phenomenon wreaked havoc on the coasts of Peru and Chile in 1984.
With this information on the table, Oceana organized a second expedition in 2018. This time the objective was to not only conduct research at the deepest level, but to also investigate the coastal shallow waters. A group of documentary divers produced visual records while researchers of Arturo Prat University collected live species such as fish, crustaceans and snails that can currently be seen at the Ocean Museum aquarium in Iquique.
After both expeditions, and with a large number of samples at their disposal, Oceana, the local community, the regional government and Universidad Arturo Prat are working hard on a proposal that aims to protect the Pisagua area through a Multiple Purpose Marine Coastal Protected Area (AMCP-MU in Spanish), allowing this ecosystem to continue to be a focal point of biodiversity and abundant marine life.
The size of Punta Pichalo can be seen from above; this is one of the areas of Chile’s Norte Grande that is most exposed to the sea. Below, on the bay, lies the historic town of Pisagua, settled at the shoreline.
A large forest of kelp constitutes a fundamental habitat for several species of fish and marine invertebrates. Finding them in such good condition is very significant, considering the high amount of fishing pressure inflicted on this species.
Several types of anemones could be seen during diving expeditions in Pisagua. In this case, this red specimen stands out, contrasting with the sea’s green-bluish color.
Aboard the “Stella Maris II”, Dr. Matthias Gorny, Science Director at Oceana Chile, prepares the Remote Operated Vehicle (ROV). Life existing in the deep waters of this area could be explored and recorded thanks to this equipment.
This wonderful animal has a large oval head that harbors several organs; however, its mouth is located beneath its eight arms. The image shows an octopus whose eyes are perfectly camouflaged with the color of its skin.
The area of Punta Pichalo is home to a sea lion colony where the common sea lion and the South American fur seal coexist, which is possible because of the abundance of food. The image shows several females leaning out to see the vessel.
Forests of algae are a fundamental ecosystem for many species, providing food and habitat. The picture shows how various species coexist in a very small space, from fish such as the bilagai and damselfish, to sunflower sea stars, sea urchins and snails.
This small decapod lives in symbiosis with the orange anemone, whose tentacles protect this species. Its shell measures up to 20 millimeters long and it can be found throughout the coasts of Chile and Peru.
This small anemone is very common in the coasts of Chile; it stings and varies in color. It can be found in rocky areas at a depth of about 28 meters. It’s very abundant in the waters of Pisagua.
This great rockfish is typical of the coastal waters of the Chilean shoreline. Due to underwater hunting, this species is already hard to find, except in Pisagua. Several male and female specimens were observed; the females are smaller and reddish, while the males are black and have a yellow spot on their side.
During the expedition we were able to see a few Humboldt Penguin couples among the sea lion colonies of Punta Pichalo. However, the surprise was greater when several more couples were seen swimming in the water while the ship navigated.
One of the great surprises found in Pisagua were the large schools of Chilean jack mackerel. Many specimens were observed during diving expeditions.
The image shows the base or disc of Chilean kelp, a subtidal species, meaning it is not left uncovered during low tide. Many species live and feed in this disc, such as snails and this orange actinia.
Crew members of the “Stella Maris II” open a grab. This heavy tool is thrown overboard, free falling into the sea; once it lands on the ocean floor it automatically closes keeping a sample of the ocean bottom’s surface sediment in its shovel.
A curious sea lion swims in front of the camera. Although their movements on land are rough, under water they are daring swimmers. Despite their intimidating aspect and large size, they are curious and friendly with divers, as long as their space is not invaded, or their pups are not disturbed.
This is one of the signature fish of Chile’s northern shore, associated with areas of algae at depths of up to 20 meters. Even though it’s harder to find in other areas, large numbers of this species could be seen during almost all of the expedition’s dives.
This beautiful bird is one of the most abundant in the area. They group together in large families, covering entire rocks. Their primary food source is anchovies and other small fish they find in the Humboldt Current.
At the end of the expedition, a picture portrays the cooperation and good atmosphere experienced during those days. In the image, representatives of Universidad Arturo Prat, Pisagua Sumergido, Environment SEREMI of the Tarapaca Region, Stella Maris II and Oceana Chile. |
Students explore molecular data from Homo sapiens and four related primates and develop hypotheses regarding the ancestry of these five species by analyzing DNA sequences, protein sequences, and chromosomal maps.
Lents, Nathan, et al
1 to 4 periods
This activity, suitable for laboratory, discussion, or any other group work setting, is broken into three parts. Each individual part could be modified, done at different times or stand entirely on its own. The discussion at the end of each activity is critical.
“The inquiry-based student activity described herein is a novel approach toward the instruction of the practice of molecular phylogeny and systematics.”
The activities are appropriate for introductory biology curriculum, for nonmajors, and even secondary education levels.
Correspondence to the Next Generation Science Standards is indicated in parentheses after each relevant concept. See our conceptual framework for details.
- Through billions of years of evolution, life forms have continued to diversify in a branching pattern, from single-celled ancestors to the diversity of life on Earth today.
- Life forms of the past were in some ways very different from living forms of today, but in other ways very similar.
- Similarities among existing organisms (including morphological, developmental, and molecular similarities) reflect common ancestry and provide evidence for evolution.
- A hallmark of science is exposing ideas to testing.
- Scientists test their ideas using multiple lines of evidence.
- Scientists may explore many different hypotheses to explain their observations.
- Accepted scientific theories are not tenuous; they must survive rigorous testing and be supported by multiple lines of evidence to be accepted.
- Our understanding of life through time is based upon multiple lines of evidence.
- Classification is based on evolutionary relationships.
- Scientists use multiple lines of evidence (including morphological, developmental, and molecular evidence) to infer the relatedness of taxa.
- Evolutionary trees (i.e., phylogenies or cladograms) portray hypotheses about evolutionary relationships.
- Evolutionary trees (i.e., phylogenies or cladograms) are built from multiple lines of evidence. |
This post is one of many in a series explaining energy for a technical and non-technical audience. Previous posts include topics such as the Ambient Temperature Impact on Gas Turbine Performance or the difference between kW and kWh.
High return temperatures are a major problem in district heating (DH) networks. High return temperatures lead to:
- An increased flow rate of water pumped around the network.
- A reduced capacity of the network to deliver heat.
- Increased heat losses from the network.
- Reduced heat recovery from gas engines and biomass boilers.
Before we dive into why, let’s give a brief overview of how a district heating network operates. Figure 1 shows a simple flow diagram for a district heating network.
Figure 1 – District heating system operating with a return temperature of 50 °C
The system delivers heat to the building heating system via a heat exchanger. Hot water is pumped around the district heating network and then returned to the energy centre for heating.
What then are the four negative impacts of a high return temperature?
1 – Increasing flow rate of water pumped around the DH network
Most district heating networks operate with a fixed flow temperature. This is set by the temperature of the water generated in boilers or CHP plants.
A high return temperature means that the temperature difference across the network (T_flow – T_return) will decrease.
A smaller temperature difference means pumping more water to deliver the same amount of heat. See this earlier post if you are not clear how this relationship works.
Pumping more water means more electricity consumed by the pumps. This means increased electricity cost and carbon emissions from the scheme.
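As a rough illustration, a few lines of R (the flow and return temperatures here are assumed example values, not figures from any particular scheme) show how much more water has to be pumped when the temperature difference shrinks:

# Heat delivered: Q = flow * cp * dT, so flow = Q / (cp * dT)
cp <- 4.18                      # specific heat of water, kJ/(kg.K)
Q  <- 1000                      # heat to deliver, kW (1 MW)
flow_design <- Q / (cp * 45)    # kg/s at a design dT of 45 C (e.g. 95 C flow, 50 C return)
flow_poor   <- Q / (cp * 15)    # kg/s when a high return cuts dT to 15 C (e.g. 95 C flow, 80 C return)
round(c(flow_design, flow_poor), 1)   # roughly 5.3 kg/s versus 15.9 kg/s

Tripling the flow rate for the same heat delivery translates directly into higher pumping power, electricity cost and carbon emissions.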
2 – Lowering the capacity of the network to deliver heat
Pipe sizes limit the capacity of a DH network to deliver water.
At peak flow rate a small temperature difference means we can deliver much less heat than the same network with a high temperature difference.
A scheme operating with a temperature difference half of the design value effectively doubles the capital cost of the network per MW of heat delivery capacity.
A larger temperature difference means we may be able to avoid installing new pipework (and the associated capital cost!) as our network expands.
Design of new networks with large temperature differences would mean smaller pipes. Smaller pipes means less capital cost and lower heat losses.
3 – Increasing heat losses
Heat losses are a function of the pipe surface area and the difference in temperature between the pipe and ambient. A higher return temperature means more heat losses in the return pipes.
Heat losses are a drawback of DH schemes versus local gas boilers. DH schemes lose a lot more heat due to the long length of the network pipes versus local systems. Minimizing heat losses is crucial in operating an efficient DH network.
Increased heat losses means more heat generation required in the energy centre. This means higher gas consumption and carbon emissions.
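A minimal sketch of the relationship (the heat loss coefficient, pipe area and temperatures below are illustrative assumptions only):

# Heat loss from a pipe is roughly U * A * (T_pipe - T_ambient)
U <- 0.5                          # overall heat loss coefficient, W/(m2.K) - assumed
A <- 2000                         # surface area of the return pipework, m2 - assumed
loss_low  <- U * A * (50 - 10)    # return at 50 C, ground at 10 C
loss_high <- U * A * (80 - 10)    # return at 80 C
c(loss_low, loss_high) / 1000     # 40 kW versus 70 kW lost from the return leg

In this example the higher return temperature increases the losses from the return pipework by 75%.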
4 – Decreasing heat recovery from gas engines and biomass boilers
District heating schemes bring a net benefit to customers and the environment by the use of low carbon generation in the energy centre.
The efficient use of technologies such as gas engines or biomass boilers is central to the success of district heating. The benefits of using low carbon generation can offset heat lost from the DH network.
District heating schemes use gas engines to generate heat and power together. Gas engines generate roughly half of their recoverable heat as hot exhaust gases (> 500 °C) and half as low temperature (<100 °C). Biomass boilers generate only a hot exhaust gas.
The thermodynamic reason for the loss of heat recovery is the same for all three of these heat sources. An increased DH return temperature raises the final temperature to which the heat source can be cooled.
This means that less heat is transferred between the heat source and the DH water. Below we will look at the example of recovering gas engine low temperature heat.
Gas engines operate with a low-temperature hot water circuit. This circuit removes heat from the engine’s jacket water and lube oil. This heat can be used to generate hot DH water for the scheme.
Figure 2 shows that a high network return temperature (85 °C) means we can only cool the engine circuit down to 85 °C. This limits heat recovery in the heat exchanger.
Figure 2 – Gas engine low temperature waste heat recovery with a high return temperature
It also forces us to use a dump radiator to cool the engine circuit to the 70 °C required by the engine. If the scheme were not fitted with a dump radiator, the engine would be forced to reduce generation or shut down.
Figure 3 shows the temperature versus heat (T-Q) diagram for the heat exchanger when the return temperature is low (50 °C). Operating with a low return temperature means we recover a full 1 MW from the engine water circuit.
Figure 3 – Heat recovery from engine with a low network return temperature (50 °C)
Now look what happens when return temperature is high (80 °C). Figure 4 shows that we now only recover 400 kW of heat.
Figure 4 – Heat recovery from engine with a high network return temperature (80 °C)
Gas boilers will need to generate the additional 600 kW of heat required by the network. This means increased gas consumption and carbon emissions.
The same principle applies to the recovery of heat from higher temperature sources such as gas engine exhaust or biomass boiler combustion products. A high DH return temperature will limit heat recovery.
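A simplified sketch of the engine example above (the circuit flow rate, the 95 °C circuit temperature and the 5 °C heat exchanger approach are assumptions chosen only to reproduce the round numbers in the figures):

# Heat recovered from the engine low temperature circuit, limited by the DH return
cp       <- 4.18                  # kJ/(kg.K)
m_engine <- 9.6                   # engine circuit flow, kg/s - assumed
t_hot    <- 95                    # circuit temperature entering the exchanger, C - assumed
recovered_kw <- function(t_return, approach = 5) {
  t_cooled <- max(70, t_return + approach)   # cannot cool below return + exchanger approach
  m_engine * cp * (t_hot - t_cooled)
}
recovered_kw(50)   # low return: circuit cooled to 70 C, roughly 1,000 kW recovered
recovered_kw(80)   # high return: circuit only cooled to 85 C, roughly 400 kW recovered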
Why do high return temperatures occur?
High network return temperatures can occur for a variety of reasons. Most commonly, they arise when building heating systems designed for local gas boilers are connected to DH networks.
A major issue is the use of bypasses. Bypasses divert a small amount of the hot DH water being fed to a heat exchanger directly from the flow into the return. Figure 5 shows a bypass increasing network return temperature from 80 to 95 °C.
Figure 5 – Bypass causing high return temperature
Bypasses are installed to maintain a minimum flow around the network when demand for heat is low. This prevents the pumps from being starved of flow at low heat demands.
Bypasses cause no issues in local boiler building heating systems but are a major problem in district heating.
These bypasses are pipes designed to allow only a small amount of water to bypass the heat exchanger. However, when network flow is low, they have a proportionally large effect on the return temperature!
Instead of installing bypasses, pump systems should operate with higher turndowns. This can be achieved through multiple pump systems.
Another reason for high network return temperatures is building circuits which use higher temperature water than they require. For example local hot water cylinders require temperatures above 60 °C to prevent legionella.
Local water storage does not make sense on a DH network – heat storage should occur in the energy centre. This will allow the DH network operators to optimally manage the heat storage.
Local hot water cylinders can also cause peaks in demand if they are set to charge at the same time. This will be seen as a huge peak in heat demand on the entire network. Peak demands can be difficult for DH network operators to deal with. |
In Hanoi alone, over 50,000 households have been using inefficient beehive stoves for purposes such as cooking, food and drink services, and preparing cattle feed. Exposure to the smoke poses significant health risks, particularly for women and children, and given that households spend more than two hours a day cooking, that exposure is considerable. The WHO estimates that short- and long-term health effects associated with the smoke produced by these fuels have contributed to approximately 45,000 deaths per year in the country.
Picture of an old beehive cookstove
The traditional fuel used for beehive stoves in Hanoi is what is called “beehive coal”. Beehive coal is similar to conventional coal but is mixed with mud and other substances. When burning, it emits a number of hazardous substances.
Converting beehive stoves into advanced clean cookstoves (see pictures below) means burning biomass rather than fossil fuels, which contributes to reducing greenhouse gas (GHG) emissions. The clean cookstoves have also been checked and tested by Smart Development Works Vietnam (SNV Vietnam) against international standards for efficiency, emissions, safety and fuel savings. These advanced clean cookstoves have greater heating efficiency and need less fuel; they therefore emit less CO2 and fewer particles, reducing harmful impacts on users’ health.
What is the innovation? How does it work?
To improve the quality of the urban environment and citizens’ health, Hanoi has committed to eliminate beehive stoves by 2020, encouraging citizens to use innovative stoves which are fuel efficient, have less carbon emissions, and benefit users’ health and safety.
Since April 2017, the city has assessed cookstove usage in every district of Hanoi, surveying users’ habits and demands so that policies and financial mechanisms can be proposed to provide incentives for conversion to improved stoves.
Statistics show that cookstoves are used mainly in middle and high-income areas. Awareness of cookstoves’ impacts is limited among local people. Only 14-30% of cookstove users want to replace traditional cookstoves with cleaner ones. Most of the interviewees did not know about advanced cookstoves with a lower price. This is why researchers, local and international NGOs and civil society have been working together to tackle the issue. Thanks to an effective network and awareness-raising campaigns, many Hanoi inhabitants now have clean cookstoves that drastically reduce smoke levels and carbon emissions.
At present, SNV Vietnam has provided financial support for qualified clean cookstove producers in order to promote clean advanced cookstoves across the country, including Hanoi.
Hanoi will continue to work with SNV Vietnam to increase access to and availability of advanced clean cookstoves for households, while increasing the capacity of local producers to distribute clean cookstoves.
What are the CO2 reduction goals?
According to estimates, it is possible to achieve emission reductions of up to 3,667 million tons of CO2 by replacing all Hanoi’s 50,000 fossil fuel-burning cookstoves with cleaner stoves burning biomass.
Le Thanh Thuy
Hanoi Environmental Protection Agency |
Rationale
Science provides an empirical way of answering interesting and important questions about the biological, physical and technological world. The knowledge it produces has proved to be a reliable basis for action in our personal, social and economic lives.
Aims
The Australian Curriculum: Science aims to ensure that students develop:
an interest in science as a means of expanding their curiosity and willingness to explore, ask questions about and speculate on the changing world in which they live.
Key ideas
In the Australian Curriculum: Science, there are six key ideas that represent key aspects of a scientific view of the world and bridge knowledge and understanding across the disciplines of science, as shown in Figure 1 below. These are embedded within each year level description and guide the teaching/learning emphasis for the relevant year level.
Structure
The three interrelated strands of science
The Australian Curriculum: Science has three interrelated strands: science understanding, science as a human endeavour and science inquiry skills.
Content and achievement sequences
Resources and support materials for the Australian Curriculum: Science.
Year 8 Level Description
The science inquiry skills and science as a human endeavour strands are described across a two-year band. In their planning, schools and teachers refer to the expectations outlined in the achievement standard and also to the content of the science understanding strand for the relevant year level to ensure that these two strands are addressed over the two-year period. The three strands of the curriculum are interrelated and their content is taught in an integrated way. The order and detail in which the content descriptions are organised into teaching and learning programs are decisions to be made by the teacher.
Incorporating the key ideas of science
Over Years 7 to 10, students develop their understanding of microscopic and atomic structures; how systems at a range of scales are shaped by flows of energy and matter and interactions due to forces, and develop the ability to quantify changes and relative amounts.
In Year 8, students are introduced to cells as microscopic structures that explain macroscopic properties of living systems. They link form and function at a cellular level and explore the organisation of body systems in terms of flows of matter between interdependent organs. Similarly, they explore changes in matter at a particle level, and distinguish between chemical and physical change. They begin to classify different forms of energy, and describe the role of energy in causing change in systems, including the role of heat and kinetic energy in the rock cycle. Students use experimentation to isolate relationships between components in systems and explain these relationships through increasingly complex representations. They make predictions and propose explanations, drawing on evidence to support their views while considering other points of view.
Year 8 Content Descriptions
Earth and space sciences
Nature and development of science
Use and influence of science
Questioning and predicting
Planning and conducting
Processing and analysing data and information
Year 8 Achievement Standards
By the end of Year 8, students compare physical and chemical changes and use the particle model to explain and predict the properties and behaviours of substances. They identify different forms of energy and describe how energy transfers and transformations cause change in simple systems. They compare processes of rock formation, including the timescales involved. They analyse the relationship between structure and function at cell, organ and body system levels. Students examine the different science knowledge used in occupations. They explain how evidence has led to an improved understanding of a scientific idea and describe situations in which scientists collaborated to generate solutions to contemporary problems. They reflect on implications of these solutions for different groups in society.
Students identify and construct questions and problems that they can investigate scientifically. They consider safety and ethics when planning investigations, including designing field or experimental methods. They identify variables to be changed, measured and controlled. Students construct representations of their data to reveal and analyse patterns and trends, and use these when justifying their conclusions. They explain how modifications to methods could improve the quality of their data and apply their own scientific knowledge and investigation findings to evaluate claims made by others. They use appropriate language and representations to communicate science ideas, methods and findings in a range of text types. |
There are many different soil types. The basic ingredients of all soils are variable proportions of solid particles (sands, silts, and clays), organic material, water, and atmospheric gases (oxygen, nitrogen, argon, and carbon dioxide). Arizona’s state soil – each state has a designated state soil – is the Casa Grande soil from near the city of the same name.
Soil Hazards in the U.S. According to the American Society of Civil Engineers, about half of the homes in the United States are built on expansive soils. Of these homes, nearly half suffer some damage because of the soil. Each year in the U.S., expansive soils are responsible for more damage to homes than floods, tornadoes, and hurricanes combined!
The geology and semi-arid climate of the Desert Southwest provide near ideal conditions for the formation of expansive and collapsing soils. And, unfortunately, problem soils are found throughout Arizona, from Yuma in the southwest to the northeast corner of the Colorado Plateau.
Expansive and Collapsing Soils. Expansive soils contain clays – microscopic-sized minerals – that are capable of large volume changes in the face of changing water conditions. Add a little water – say during a monsoon storm — to expansive smectite clay and it swells to many times its original volume. Remove that water during the hot, dry summer and the clay component of the soil shrinks. The resulting changes in soil volume can cause considerable damages to homes, sidewalks, pipelines, and streets.
Collapsing Soils consist of loose, dry, low-density material – i.e., undercompacted – that shrinks in volume when wetted (hydrocompaction) and/or when loaded with a great weight, such as a building or street. These types of soils are particularly common in the semi-arid southwestern U.S., where wind and ephemeral streams deposit loose, unconsolidated, and undersaturated (i.e., dry) sediments that are prone to sudden collapse.
Expansive Soils in Phoenix & Tucson. Visit the Natural Resources Conservation Service website for maps showing the distribution of shrink/swell soils (i.e., expansive soils) for the greater metropolitan areas of Phoenix and Tucson. |
Static pressure is the pressure when water is motionless. In a closed, level piping system the static pressure is the same at every point. There are two ways to create static pressure: by elevating water in tanks and reservoirs above where the water is needed, and by utilizing a pump. Pumps can be used to increase or boost the pressure.
Working pressure is pressure at any point in the sprinkler system when water is moving through the sprinkler system. Working pressure is always less than static pressure because the movement of water through a sprinkler pipe always results in a loss of pressure due to friction.
Pressure loss (friction loss) is equal to static pressure minus working pressure. For example, if the static pressure is 60 psi and the working pressure measured while the sprinklers run is 45 psi, the pressure loss is 15 psi.
Even though the water running through the water main in your particular city is probably never at rest, it is still commonly referred to as static pressure. Pressure in the city main will vary depending on demand. This is important to you because it may be necessary to water your yard at times of the day or night when demand on the city supply is lower.
As the flow of water increases, so does the friction and the resulting loss of pressure. In summary, when water demand from the city is at its highest, the pressure will be at its lowest.
Friction loss, loss of pressure, pressure drop, and pressure loss all mean the same thing. The more water that is being forced through a sprinkler system, the higher the flow velocity, and the higher the friction loss.
Friction loss is one of the most important factors when designing or troubleshooting a sprinkler system. I will discuss this topic in much more detail in upcoming posts.
There’s lots of evidence that humans have a specialized mechanism for identifying and responding to faces; for example, people with a condition called prosopagnosia have difficulty recognizing faces but not other objects. A few years ago, researchers showed that individual paper wasps of the species Polistes fuscatus recognize each other’s faces; the same team has now gone on to show that, like humans, P. fuscatus accomplishes this via a specialized mechanism for facial recognition rather than through general shape or pattern recognition. This story is an excellent example of a complex cognitive ability being exhibited by a creature with a relatively simple nervous system.
Sheehan & Tibbetts studied the recognition abilities of paper wasps using a T-shaped maze with pictorial cues at the intersection. Wild-caught adult female wasps were introduced into the maze and chose to go down one arm or the other; the entire floor of the maze was electrified except for a “safe zone” in one arm which was consistently associated with one of a pair of images of wasp faces. Each wasp was tested 40 times with a pair of images; depending on how well the wasp could distinguish and recognize the two images, she would learn to go down the arm with the “safe” image. The researchers used changes in the speed and accuracy of this decision to measure the wasps’ ability to learn. In order to compare facial recognition with other kinds of discrimination, the researchers also used paired images of geometric patterns, caterpillars (the wasps’ prey) or wasp faces that had been digitally manipulated (either rearranged or antenna-less).
The researchers found that Polistes fuscatus females were quicker and more accurate at learning to distinguish pairs of faces than paired patterns or paired caterpillars. The wasps also had trouble with antenna-less or rearranged faces, learning to recognize them about as well as they did the patterns or caterpillars. This suggests that the digitally altered faces were being recognized and learned by the same general process, but that these wasps have a specialized mechanism geared specifically towards facial recognition, allowing them to more quickly and accurately learn faces.
By contrast, wasps of another species (P. metricus) were unable to learn to recognize images of faces; after 40 trials, they still performed no better than chance. This isn’t due to a general difference in visual learning, since P. metricus learned to recognize patterns and caterpillars about as well as P. fuscatus did. It’s also unlikely to be due to a difference in visual systems; in fact, the researchers suggest that P. metricus should have more acute vision than P. fuscatus (based on morphological measurements). The difference in performance seems to result from P. metricus lacking a specific facial recognition system.
P. fuscatus faces are more variable than those of P. metricus, so it’s possible that these results are due to how recognizable the individual images are rather than to a cognitive difference in recognition ability. In order to test for this, the researchers tested the ability of individuals of each species to recognize faces of the other species. P. fuscatus learned to recognize individuals of either species more quickly than P. metricus did. Interestingly, P. metricus individuals were eventually able to learn to recognize P. fuscatus faces, despite being unable to distinguish individuals of their own species. However, since they recognized the faces about as well as they could recognize caterpillars and did just as well even when the antenna were digitally removed, it’s unlikely that they were using a specific facial recognition mechanism; it may be that the more variable P. fuscatus faces are easier to distinguish with a general pattern recognition mechanism.
The researchers suggest that the difference in recognition abilities may be because P. fuscatus are social wasps, unlike P. metricus. P. fuscatus nests are established by a co-operative group of queens; there is a strict dominance hierarchy determining reproduction, making it important to be able to recognize other individuals. By contrast, P. metricus usually nests alone, meaning there isn’t a similar pressure to evolve individual recognition.
In mammals, facial recognition involves several brain regions and even specialized neurons. Wasps have a much simpler nervous system, yet this research shows that they have been able to evolve a similar facial recognition ability. The neurological mechanism behind this ability isn’t known and the authors highlight this as an avenue for further research. It’s also interesting that complex cognitive abilities often seem to have evolved in response to the needs of social interaction in animals as diverse as bees, ravens and dolphins. Evolution is remarkably effective at generating solutions to a problem; maybe results like these should serve as a reminder not to appraise other organisms and their abilities on the basis of things like neural complexity.
Ref: Sheehan, M., & Tibbetts, E. (2011). Specialized Face Learning Is Associated with Individual Recognition in Paper Wasps Science, 334 (6060), 1272-1275 DOI: 10.1126/science.1211334 |
Attention Deficit Hyperactivity Disorder (ADHD) Symptoms
The primary characteristic of attention deficit hyperactivity disorder (ADHD) is a persistent pattern of inattention and/or hyperactivity-impulsivity that interferes with functioning or a child’s development.
The problems usually occur in two or more areas of a person’s life: home, work, school, and social relationships. ADHD is also referred to as attention deficit disorder (ADD) when hyperactivity or impulsivity is not present.
Attention deficit disorder begins in childhood. The symptoms of inattention and hyperactivity need to show themselves in a manner and degree which is inconsistent with the child’s current developmental level. That is, the child’s behavior is significantly more inattentive or hyperactive than that of his or her peers of a similar age.
Several symptoms must be present before age 12 (which is why ADHD is classified as a neurodevelopmental disorder, even if not diagnosed until adulthood). In the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV), symptoms were required before age 7. Now the age of 12 is seen as an acceptable criterion because it is often difficult for adults to look retrospectively and establish a precise age of onset for a child. Indeed, adult recall of childhood symptoms tends to be unreliable. Thus, the DSM-5 has added some leeway to the age cut-off.
A person can present with symptoms that are predominantly characterized by inattention, predominantly by hyperactivity-impulsivity, or by a combination of the two. To meet the criteria for each of these ADHD specifiers, a person must exhibit at least 6 symptoms from the appropriate categories below.
Symptoms of Inattention
- Often fails to give close attention to details or makes careless mistakes in schoolwork, work, or other activities
- Often has difficulty sustaining attention in tasks or play activities
- Often does not seem to listen when spoken to directly
- Often does not follow through on instructions and fails to finish schoolwork, chores, or duties in the workplace (not due to oppositional behavior or failure to understand instructions)
- Often has difficulty organizing tasks and activities
- Often avoids, dislikes, or is reluctant to engage in tasks that require sustained mental effort (such as schoolwork or homework)
- Often loses things necessary for tasks or activities (e.g., toys, school assignments, pencils, books, or tools)
- Is often easily distracted by extraneous stimuli
- Is often forgetful in daily activities–even those the person performs regularly (e.g., a routine appointment)
Symptoms of Hyperactivity/Impulsivity
- Often fidgets with hands or feet or squirms in seat
- Often leaves seat in classroom or in other situations in which remaining seated is expected
- Often runs about or climbs excessively in situations in which it is inappropriate (in adolescents or adults, may be limited to subjective feelings of restlessness)
- Often has difficulty playing or engaging in leisure activities quietly
- Is often “on the go” or often acts as if “driven by a motor”
- Often talks excessively
- Often blurts out answers before questions have been completed
- Often has difficulty awaiting turn
- Often interrupts or intrudes on others (e.g., butts into conversations or games)
Symptoms must have persisted for at least 6 months. Some of these symptoms need to have been present as a child, at 12 years old or younger. The symptoms also must exist in at least two separate settings (for example, at school and at home). The symptoms should be creating significant impairment in social, academic or occupational functioning or relationships.
This criteria has been updated for DSM-5. See next page for diagnostic codes and related resources for ADHD.
Diagnostic Codes for ADHD (consider past 6 months of symptoms)
- 314.01 for both combined presentation (i.e., inattention with hyperactivity/impulsivity) and for predominantly hyperactive/impulsive presentation (i.e., inattention criteria is not met).
- 314.00 for Predominantly inattentive presentation (hyperactivity-impulsivity criteria is not met).
Bressert, S. (2016). Attention Deficit Hyperactivity Disorder (ADHD) Symptoms. Psych Central. Retrieved on April 19, 2016, from http://psychcentral.com/disorders/attention-deficit-hyperactivity-disorder-adhd-symptoms/ |
Basic Operations and Generating Equivalent Expressions
Aligned To Common Core Standard: Grade 6 Expressions and Equations - 6.EE.A.3
Printable Worksheets And Lessons
- Simplifying Expressions Step-by-step Lesson - Expressions that include exponents and
- Guided Lesson - Rewriting expressions and using the distributive property to create equivalent expressions.
- Guided Lesson Explanation - I might not have provided enough steps; I just never really see students have trouble with these types of problems.
- Practice Worksheet - The entire sheet is dedicated to creating equivalent expressions.
- Visual Expressions Five Pack - I haven't seen questions like these before. So, I made
- Matching Worksheet - Match the expressions that are equal. A really nice way to practice.
View Answer Keys- All the answer keys in one file.
You will find that each sheet is progressively easier, 1 being the hardest in the set.
I tried to make this section slightly more difficult than the basic old problems I see everywhere. |
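As a quick illustration of what generating equivalent expressions involves (this example is separate from the worksheets themselves): using the distributive property, 3(x + 4) can be rewritten as 3x + 12, and combining like terms turns 2x + 5x + 7 into 7x + 7, which can in turn be factored as 7(x + 1). Each pair of expressions is equivalent because both forms give the same value for every value of x.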
Washington: Researchers have discovered an evolutionary function in wild tomato plants that could be used by modern plant breeders to create pest-resistant tomatoes.
Researchers have traced the evolution of a specific gene that produces a sticky compound at the tips of the hairs of the Solanum pennellii plant, found in the Atacama Desert in Peru.
These sticky hairs act as natural insect repellents to protect the plant, ensuring that it will survive to reproduce.
"We have identified a gene that exists in this wild plant, but not in cultivated tomatoes. The invertase-like enzyme creates insecticidal compounds not found in the tomato variety garden. This defensive trait could be created in modern plants, "said Rob Last, a professor of plant biochemistry in the study published in the Journal of Science Advances.
"We want our current tomatoes to adapt to stress like this wild tomato, but we can only do this by understanding the traits that make them resistant. We are using evolution to teach us how to be better breeders and biologists. For example, how can we increase crop yields by creating a pest-resistant plant and eliminating the need to spray the fields with insecticides, "said Leong, co-lead author.
This discovery is a step towards understanding the natural insect resistance of Solanum pennellii plants, which could allow this characteristic to be introduced into cultivated tomatoes using traditional breeding practices.
PREVALENCE: Very common
ACTIVE PERIOD: Active at night
KEY ID FEATURES: Dark grey with yellow belly scales, small black head and yellow collar at the neck
BEHAVIOR: Docile, may squirm and musk if handled but almost never bites
SIZE: Very Small - 25 - 35cm
QUICK ASSESSMENT 0-10
IMPORTANT: Many snakes have significant variance in coloration and pattern even within the same species. There can also be extreme differences in appearance from juveniles to adults so it is important to never assume you have properly identified a snake.
A small inoffensive snake, the Northern Ringneck Snake is very common across its geographical range. Often dark gray with peach or yellow belly scales and a light yellow or peach collar around the neck. Large scales on the head relative to its size and uniform smooth scales on the body. This species technically has a weak venom it uses on its prey but is harmless to humans.
A largely fossorial snake, the Northern Ringneck spends almost all its time underneath rocks, logs and leaf litter hunting its prey and rarely comes out in the open. They can sometimes be found out during the day, though more commonly at night. They eat small invertebrates such as worms, as well as salamanders and other small snakes. Breeding normally occurs in the fall, with 2-7 eggs laid in late spring and early summer. They can be social and are often found together in large numbers under rocks, logs, boards, etc.
Often found in wooded areas at both low and medium elevation. Tend to be found in higher abundance near rock piles but can also be found under logs and other debris.
NO SNAKE SHOULD EVER BE HANDLED BY ANYONE BUT EXPERTS: Given its small size, it could be mistaken for the juveniles of several other snake species, but it is generally unique in appearance with respect to the neck collar. It should be noted that small snakes can be particularly dangerous due to the difficulty of properly identifying and handling them. As such, they should never be handled or approached.
The Dinosaur Death Pose
As a rule, vertebrate fossils are not discovered as neatly connected skeletons that were tranquilly buried. This is certainly true for the dinosaurs, most of whose bones are found disarticulated (not connected in such a way as to demonstrate normal relationships), dispersed over a broad area, or only preserved in small part.
When a complete skeleton is found intact, it provides much more information to the paleontologist than a loose pile of bones. One of the striking characteristics of many of these connected skeletons is that they exhibit a peculiar contorted position popularly called the “death pose.” This pose of many fossilized dinosaurs, with wide-open mouth, head thrown back and recurved tail, likely resulted from the agonized death throes typical of brain damage from asphyxiation, according to paleontologists. “An extreme, dorsally hyperextended posture of the spine (opisthotonus), characterized by the skull and neck recurved over the back, and with strong extension of the tail, is observed in many well-preserved, articulated amniote skeletons (birds and other dinosaurs, pterosaurs, and at least placental mammals). Postmortem water transport may explain some cases of spinal curvature in fossil tetrapods, but we show how these can be distinguished from causes of the opisthotonic posture, which is a biotic syndrome.” (Faux, Cynthia Marshall, Padian, Kevin, “The Opisthotonic Posture of Vertebrate Skeletons: Postmortem Contraction or Death Throes?,” Paleobiology, Spring 2007.)
Others emphasize how a watery grave worked with the dinosaur physiology to create this unique posture: “A strong ligamentum elasticum was essential for all long necked dinosaurs with a long tail. The preloaded ligament helped them [by] saving energy in their terrestrial mode of life. Following their death, at which they were immersed in water, the stored energy along the vertebra was strong enough to arch back the spine, increasingly so as more and more muscles and other soft parts were decaying. It is a special highlight that in the Compsognathus specimen, these gradual steps of recurvature can be substantiated, too. Therefore, biomechanics is ruling the postmortem weird posture of a carcass in a watery grave, not death throes.” (Reisdorf and Wutteke; cited in “Why Do Dinosaur Skeletons Look So Weird?,” ScienceDaily.com, Feb. 16, 2012.)
To the right is an image of a Spinophorosaurus so contorted in this familiar death pose as to almost form a circle! Creationist Ian Juby has collected a large sample of such cases (including some of the pictures on this page) and presents this as evidence of fossilized creatures meeting their demise during the Genesis Flood. “I probably have close to a hundred examples now of the death pose, both in land animals and marine organisms. Obviously the marine organisms didn’t lay out in the desert and shrivel up…In fact, this is a sign lifeguards look for, to spot if someone is drowning–even people, when they are drowning, pull their heads back.” (Private correspondence, used with permission, 2009.)
The evidence of drowning causing the dinosaur’s demise continues to mount from all over the world. “When palaeontologists are lucky enough to find a complete dinosaur skeleton—whether it be a tiny Sinosauropteryx or an enormous Apatosaurus—there’s a good chance it will be found with its head thrown backwards and its tail arched upwards.” (Switek, B., “Watery Secret of the Dinosaur Death Pose,” New Scientist, November 23, 2011.) The worldwide Genesis Flood provides a straightforward explanation for all these dinosaur fossils and ties in well with the fact that dinosaur bones are very often mixed with marine sediments. For example, sharks have been found alongside the dinosaurs in Montana’s famous Hell Creek fossil beds and the popular Spinosaurus has been found in the same Moroccan rock layers as sharks, sawfish, ray-finned fishes, and coelacanths. (Ibrahim, Nizar, et al., “Semiaquatic Adaptations in a Giant Predatory Dinosaur,” Science 345, 2014, pp. 1613-1616.)
A similar phenomenon to what we have discussed above involves the question of why the massive, armored Ankylosaurus dinosaurs are almost always found fossilized lying on their backs? For example, 26 out of 37 fossil ankylosaurs discovered in Alberta, Canada, were found upside down. Paleontologists have puzzled over this belly-up death pose since the 1930s. Now a new theory presented in 2017 at the Society of Vertebrate Paleontology meeting in Calgary, Alberta suggests this happened because of a flood mechanism. “We used computer modeling to show that ankylosaurs likely flipped over due to a phenomenon called ‘bloat-and-float,’ where the gases that accumulate in the bloating belly of the carcass cause the animal to flip over while suspended in water,” states researcher Jordan Mallon, a paleobiologist at the Canadian Museum of Nature in Ontario, Canada in an interview with Live Science. The computer model showed that when an ankylosaur’s center of gravity (a downward force) didn’t match its center of buoyancy (an upward force), a disturbance such as a breeze, current or wave could cause the rotund, bloated animal to turn upside down. After some time it would sink that way and get buried and fossilized.
There is a general agreement that most of the fossilized dinosaurs perished in a watery, muddy flow. “Recreating the spectacular pose many dinosaurs adopted in death might involve following the simplest of instructions: just add water.” (Switek, Brian, “Is water the secret of the dinosaur death pose?,” New Scientist, Nov. 23, 2011.) This scenario really is essential to rapidly bury them and preserve their remains. Those who don’t believe in the Genesis Flood will postulate local catastrophes like mudslides, swollen rivers, or flash floods. But if these are the mechanisms that formed the vast dinosaur graveyards, how come today’s rivers and local floods don’t ever form huge deposits of fossils like what we see in the dinosaur fossil beds? |
In this first chapter, we’ll start by establishing a common language for models and taking a deep view of the predictive modeling process. Much of predictive modeling draws on the key concepts of statistics and machine learning, and this chapter will provide a brief tour of the core distinctions of these fields that are essential knowledge for a predictive modeler. In particular, we’ll emphasize the importance of knowing how to evaluate a model in a way that is appropriate to the type of problem we are trying to solve. Finally, we will showcase our first model, the k-nearest neighbors model, as well as caret, a very useful R package for predictive modelers.
Models are at the heart of predictive analytics and for this reason, we’ll begin our journey by talking about models and what they look like. In simple terms, a model is a representation of a state, process, or system that we want to understand and reason about. We make models so that we can draw inferences from them and, more importantly for us in this book, make predictions about the world. Models come in a multitude of different formats and flavors, and we will explore some of this diversity in this book. Models can be equations linking quantities that we can observe or measure; they can also be a set of rules. A simple model with which most of us are familiar from school is Newton’s Second Law of Motion. This states that the net sum of the forces acting on an object causes the object to accelerate in the direction of the net force applied and at a rate proportional to the magnitude of that force and inversely proportional to the object’s mass.
We often summarize this information via an equation using the letters F, m, and a for the quantities involved. We also use the capital Greek letter sigma (Σ) to indicate that we are summing over the forces, and arrows above the letters that represent vector quantities (that is, quantities that have both magnitude and direction):

\sum \vec{F} = m\vec{a}
This simple but powerful model allows us to make some predictions about the world. For example, if we apply a known force to an object with a known mass, we can use the model to predict how much it will accelerate. Like most models, this model makes some assumptions and generalizations. For example, it assumes that the color of the object, the temperature of the environment it is in, and its precise coordinates in space are all irrelevant to how the three quantities specified by the model interact with each other. Thus, models abstract away the myriad of details of a specific instance of a process or system in question, in this case, the particular object in whose motion we are interested, and limit our focus only to properties that matter.
Newton’s Second Law is not the only possible model to describe the motion of objects. Students of physics soon discover other more complex models, such as those taking into account relativistic mass. In general, models are considered more complex if they take a larger number of quantities into account or if their structure is more complex. Nonlinear models are generally more complex than linear models for example. Determining which model to use in practice isn’t as simple as picking a more complex model over a simpler model. In fact, this is a central theme that we will revisit time and again as we progress through the many different models in this book. To build our intuition as to why this is so, consider the case where our instruments that measure the mass of the object and the applied force are very noisy. Under these circumstances, it might not make sense to invest in using a more complicated model, as we know that the additional accuracy in the prediction won’t make a difference because of the noise in the inputs. Another situation where we may want to use the simpler model is if in our application we simply don’t need the extra accuracy. A third situation arises where a more complex model involves a quantity that we have no way of measuring. Finally, we might not want to use a more complex model if it turns out that it takes too long to train or make a prediction because of its complexity.
In this book, the models we will study have two important and defining characteristics. The first of these is that we will not use mathematical reasoning or logical induction to produce a model from known facts, nor will we build models from technical specifications or business rules; instead, the field of predictive analytics builds models from data. More specifically, we will assume that for any given predictive task that we want to accomplish, we will start with some data that is in some way related to or derived from the task at hand. For example, if we want to build a model to predict annual rainfall in various parts of a country, we might have collected (or have the means to collect) data on rainfall at different locations, while measuring potential quantities of interest, such as the height above sea level, latitude, and longitude. The power of building a model to perform our predictive task stems from the fact that we will use examples of rainfall measurements at a finite list of locations to predict the rainfall in places where we did not collect any data.
The second important characteristic of the problems for which we will build models is that during the process of building a model from some data to describe a particular phenomenon, we are bound to encounter some source of randomness. We will refer to this as the stochastic or nondeterministic component of the model. It may be the case that the system itself that we are trying to model doesn’t have any inherent randomness in it, but it is the data that contains a random component. A good example of a source of randomness in data is the measurement of the errors from the readings taken for quantities such as temperature. A model that contains no inherent stochastic component is known as a deterministic model; Newton’s Second Law is a good example of this. A stochastic model is one that assumes that there is an intrinsic source of randomness to the process being modeled. Sometimes, the source of this randomness arises from the fact that it is impossible to measure all the variables that are most likely impacting a system, and we simply choose to model this using probability. A well-known example of a purely stochastic model is rolling an unbiased six-sided die. Recall that in probability, we use the term random variable to describe the value of a particular outcome of an experiment or of a random process. In our die example, we can define the random variable, Y, as the number of dots on the side that lands face up after a single roll of the die, resulting in the following model:

P(Y = y) = \frac{1}{6}, \quad y \in \{1, 2, 3, 4, 5, 6\}
This model tells us that the probability of rolling a particular digit, say, three is one in six. Notice that we are not making a definite prediction on the outcome of a particular roll of the die; instead, we are saying that each outcome is equally likely.
Probability is a term that is commonly used in everyday speech, but at the same time, sometimes results in confusion with regard to its actual interpretation. It turns out that there are a number of different ways of interpreting probability. Two commonly cited interpretations are the Frequentist probability and the Bayesian probability. Frequentist probability is associated with repeatable experiments, such as rolling a six-sided die. In this case, the probability of seeing the digit three is just the relative proportion of the digit three coming up if this experiment were to be repeated an infinite number of times. Bayesian probability is associated with a subjective degree of belief or surprise in seeing a particular outcome and can, therefore, be used to give meaning to one-off events, such as the probability of a presidential candidate winning an election. In our die rolling experiment, we are equally surprised to see the number three come up as with any other number. Note that in both cases, we are still talking about the same probability numerically (1/6), only the interpretation differs.
In the case of the die model, there aren’t any variables that we have to measure. In most cases, however, we’ll be looking at predictive models that involve a number of independent variables that are measured, and these will be used to predict a dependent variable. Predictive modeling draws on many diverse fields and as a result, depending on the particular literature you consult, you will often find different names for these. Let’s load a dataset into R before we expand on this point. R comes with a number of commonly cited data sets already loaded, and we’ll pick what is probably the most famous of all, the iris data set:
To see what other data sets come bundled with R, we can use the data() command to obtain a list of data sets along with a short description of each. If we modify the data from a data set, we can reload it by providing the name of the data set in question as an input parameter to the data() command, for example, data(iris) reloads the iris data set.
head(iris, n = 3)
  Sepal.Length Sepal.Width Petal.Length Petal.Width Species
1          5.1         3.5          1.4         0.2  setosa
2          4.9         3.0          1.4         0.2  setosa
3          4.7         3.2          1.3         0.2  setosa
The iris data set consists of measurements made on a total of 150 flower samples of three different species of iris. In the preceding code, we can see that there are four measurements made on each sample, namely the lengths and widths of the flower petals and sepals. The iris data set is often used as a typical benchmark for different models that can predict the species of an iris flower sample, given the four previously mentioned measurements. Collectively, the sepal length, sepal width, petal length, and petal width are referred to as features, attributes, predictors, dimensions, or independent variables in literature. In this book, we prefer to use the word feature, but other terms are equally valid. Similarly, the species column in the data frame is what we are trying to predict with our model, and so it is referred to as the dependent variable, output, or target. Again, in this book, we will prefer one form for consistency, and will use output. Each row in the data frame corresponding to a single data point is referred to as an observation, though it typically involves observing the values of a number of features.
As we will be using data sets, such as the iris data described earlier, to build our predictive models, it also helps to establish some symbol conventions. Here, the conventions are quite common in most of the literature. We’ll use the capital letter, Y, to refer to the output variable, and subscripted capital letter, Xi, to denote the ith feature. For example, in our iris data set, we have four features that we could refer to as X1 through X4. We will use lower case letters for individual observations so that x1 corresponds to the first observation. Note that x1 itself is a vector of feature components, xij, so that x12 refers to the value of the second feature in the first observation. We’ll try to use double suffixes sparingly and we won’t use arrows or any other form of vector notation for simplicity. Most often, we will be discussing either observations or features and so the case of the variable will make it clear to the reader which of these two is being referenced.
When thinking about a predictive model using a data set, we are generally making the assumption that for a model with n features, there is a true or ideal function, f, that maps the features to the output:
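Y = f(X1, X2, …, Xn)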
We’ll refer to this function as our target function. In practice, as we train our model using the data available to us, we will produce our own function that we hope is a good estimate for the target function. We can represent this by using a caret on top of the symbol f to denote our predicted function, and also for the output, Y, since the output of our predicted function is the predicted output. Our predicted output will, unfortunately, not always agree with the actual output for all observations (in our data or in general):
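Ŷ = f̂(X1, X2, …, Xn)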
Given this, we can essentially summarize the process of predictive modeling as a process that produces a function to predict a quantity, while minimizing the error it makes compared to the target function. A good question we can ask at this point is, where does the error come from? Put differently, why are we generally not able to exactly reproduce the underlying target function by analyzing a data set?
The answer to this question is that in reality there are several potential sources of error that we must deal with. Remember that each observation in our data set contains values for n features, and so we can think about our observations geometrically as points in an n-dimensional feature space. In this space, our underlying target function should pass through these points by the very definition of the target function. If we now think about this general problem of fitting a function to a finite set of points, we will quickly realize that there are actually infinitely many functions that could pass through the same set of points. The process of predictive modeling involves making a choice in the type of model that we will use for the data, thereby constraining the range of possible target functions to which we can fit our data. At the same time, the data’s inherent randomness cannot be removed no matter what model we select. These ideas lead us to an important distinction in the types of error that we encounter during modeling, namely the reducible error and the irreducible error, respectively.
The reducible error essentially refers to the error that we as predictive modelers can minimize by selecting a model structure that makes valid assumptions about the process being modeled and whose predicted function takes the same form as the underlying target function. For example, as we shall see in the next chapter, a linear model imposes a linear relationship between the features and the output. This restrictive assumption means that no matter what training method we use, how much data we have, and how much computational power we throw at the problem, if the features aren’t linearly related to the output in the real world, then our model will necessarily produce an error for at least some possible observations. By contrast, an example of an irreducible error arises when trying to build a model with an insufficient feature set. This is typically the norm and not the exception. Often, discovering what features to use is one of the most time-consuming activities of building an accurate model.
Sometimes, we may not be able to directly measure a feature that we know is important. At other times, collecting the data for too many features may simply be impractical or too costly. Furthermore, the solution to this problem is not simply a matter of adding as many features as possible. Adding more features to a model makes it more complex, and we run the risk of adding a feature that is unrelated to the output, thus introducing noise into our model. This also means that our model function will have more inputs and will, therefore, be a function in a higher-dimensional space. Some of the potential practical consequences of adding more features to a model include increasing the time it will take to train the model, making convergence on a final solution harder, and actually reducing model accuracy under certain circumstances, such as with highly correlated features. Finally, another source of irreducible error that we must live with is the error in measuring our features, so that the data itself may be noisy.
Reducible errors can be minimized not only through selecting the right model but also by ensuring that the model is trained correctly. Thus, reducible errors can also come from not finding the right specific function to use, given the model assumptions. For example, even when we have correctly chosen to train a linear model, there are infinitely many linear combinations of the features that we could use. Choosing the model parameters correctly, which in this case would be the coefficients of the linear model, is also an aspect of minimizing the reducible error. Of course, a large part of training a model correctly involves using a good optimization procedure to fit the model. In this book, we will at least give a high-level intuition of how each model that we study is trained. We generally avoid delving deep into the mathematics of how optimization procedures work but we do give pointers to the relevant literature for the interested reader to find out more.
So far we’ve established some central notions behind models and a common language to talk about data. In this section, we’ll look at what the core components of a statistical model are. The primary components are typically: a set of equations with parameters that need to be tuned, some data that are representative of a real-world system or process, and a concept of goodness of fit describing how well the model matches that data, together with a procedure for updating the parameters to improve it.
As we’ll see in this book, most models, such as neural networks, linear regression, and support vector machines have certain parameterized equations that describe them. Let’s look at a linear model attempting to predict the output, Y, from three input features, which we will call X1, X2, and X3:
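Y = β0 + β1X1 + β2X2 + β3X3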
This model has exactly one equation describing it and this equation provides the linear structure of the model. The equation is parameterized by four parameters, known as coefficients in this case, and they are the four β parameters. In the next chapter, we will see exactly what roles these play, but for this discussion, it is important to note that a linear model is an example of a parameterized model. The set of parameters is typically much smaller than the amount of data available.
Given a set of equations and some data, we then talk about training the model. This involves assigning values to the model’s parameters so that the model describes the data more accurately. We typically employ certain standard measures that describe a model’s goodness of fit to the data, which is how well the model describes the training data. The training process is usually an iterative procedure that involves performing computations on the data so that new values for the parameters can be computed in order to increase the model’s goodness of fit. For example, a model can have an objective or error function. By differentiating this and setting it to zero, we can find the combination of parameters that give us the minimum error. Once we finish this process, we refer to the model as a trained model and say that the model has learned from the data. These terms are derived from the machine learning literature, although there is often a parallel made with statistics, a field that has its own nomenclature for this process. We will mostly use the terms from machine learning in this book.
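As a concrete illustration (a minimal sketch, not code from this text), base R’s lm() function performs exactly this kind of training for linear models; here we arbitrarily treat Petal.Width from the iris data as the output:

fit <- lm(Petal.Width ~ Sepal.Length + Sepal.Width + Petal.Length, data = iris)
coef(fit)   # the trained model's parameters: the fitted coefficients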
Our first model: k-nearest neighbors
In order to put some of the ideas in this chapter into perspective, we will present our first model for this book, k-nearest neighbors, which is commonly abbreviated as kNN. In a nutshell, this simple approach actually avoids building an explicit model to describe how the features in our data combine to produce a target function. Instead, it relies on the notion that if we are trying to make a prediction on a data point that we have never seen before, we will look inside our original training data and find the k observations that are most similar to our new data point. We can then use some kind of averaging technique on the known value of the target function for these k neighbors to compute a prediction. Let’s use our iris data set to understand this by way of an example. Suppose that we collect a new unidentified sample of an iris flower with the following measurements:
Sepal.Length Sepal.Width Petal.Length Petal.Width
4.8 2.9 3.7 1.7
We would like to use the kNN algorithm in order to predict which species of flower we should use to identify our new sample. The first step in using the kNN algorithm is to determine the k-nearest neighbors of our new sample. In order to do this, we will have to give a more precise definition of what it means for two observations to be similar to each other. A common approach is to compute a numerical distance between two observations in the feature space. The intuition is that two observations that are similar will be close to each other in the feature space and therefore, the distance between them will be small. To compute the distance between two observations in the feature space, we often use the Euclidean distance, which is the length of a straight line between two points. The Euclidean distance between two observations, x1 and x2, is computed as follows:
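d(x1, x2) = √( Σj (x1j − x2j)² )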
Recall that the second suffix, j, in the preceding formula corresponds to the jth feature. So, what this formula is essentially telling us is that for every feature, take the square of the difference in values of the two observations, sum up all these squared differences, and then take the square root of the result. There are many other possible definitions of distance, but this is one of the most frequently encountered in the kNN setting. We’ll see more distance metrics in Chapter 11, Recommendation Systems.
In order to find the nearest neighbors of our new sample iris flower, we’ll have to compute the distance to every point in the iris data set and then sort the results. First, we’ll begin by subsetting the iris data frame to include only our features, thus excluding the species column, which is what we are trying to predict. We’ll then define our own function to compute the Euclidean distance. Next, we’ll use this to compute the distance to every iris observation in our data frame using the apply() function. Finally, we’ll use R’s sort() function with the index.return parameter set to TRUE, so that we also get back the indexes of the row numbers in our iris data frame corresponding to each distance computed:
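A minimal sketch of these steps might look like the following (the variable names are illustrative, not taken from the original code):

iris_features <- iris[1:4]                        # keep only the four feature columns
new_sample <- c(4.8, 2.9, 3.7, 1.7)               # the new, unidentified flower
euclidean_distance <- function(x1, x2) sqrt(sum((x1 - x2) ^ 2))
distances <- apply(iris_features, 1, function(row) euclidean_distance(row, new_sample))
distances_sorted <- sort(unname(distances), index.return = TRUE)  # drop row names, keep indexes
str(distances_sorted)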
List of 2
$ x : num [1:150] 0.574 0.9 0.9 0.949 0.954 …
$ ix: int [1:150] 60 65 107 90 58 89 85 94 95 99 …
The $x attribute contains the actual values of the distances computed between our sample iris flower and the observations in the iris data frame. The $ix attribute contains the row numbers of the corresponding observations. If we want to find the five nearest neighbors, we can subset our original iris data frame using the first five entries from the $ix attribute as the row numbers:
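Continuing the sketch above, this subsetting could be done as follows:

iris[distances_sorted$ix[1:5], ]   # the five observations closest to the new sample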
Sepal.Length Sepal.Width Petal.Length Petal.Width Species
60 5.2 2.7 3.9 1.4 versicolor
65 5.6 2.9 3.6 1.3 versicolor
107 4.9 2.5 4.5 1.7 virginica
90 5.5 2.5 4.0 1.3 versicolor
58 4.9 2.4 3.3 1.0 versicolor
As we can see, four of the five nearest neighbors to our sample are of the versicolor species, while the remaining one is of the virginica species. For this type of problem, where we are picking a class label, we can use a majority vote as our averaging technique to make our final prediction. Consequently, we would label our new sample as belonging to the versicolor species. Notice that setting the value of k to an odd number is a good idea because it makes it less likely that we will have to contend with tie votes (and completely eliminates ties when the number of output labels is two). In the case of a tie, the convention is usually to just resolve it by randomly picking among the tied labels. Notice that nowhere in this process have we made any attempt to describe how our four features are related to our output. As a result, we often refer to the kNN model as a lazy learner because essentially, all it has done is memorize the training data and use it directly during a prediction. We’ll have more to say about our kNN model, but first, we’ll return to our general discussion on models and discuss different ways to classify them.
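As a rough sketch (again reusing the objects defined in the earlier example), the majority vote could be computed like this:

nearest_species <- iris$Species[distances_sorted$ix[1:5]]   # labels of the 5 nearest neighbors
table(nearest_species)                                      # tally the votes per species
names(which.max(table(nearest_species)))                    # the winning label: "versicolor"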
With a broad idea of the basic components of a model, we are ready to explore some of the common distinctions that modelers use to categorize different models.
We’ve already looked at the iris data set, which consisted of four features and one output variable, namely the species variable. Having the output variable available for all the observations in the training data is the defining characteristic of the supervised learning setting, which represents the most frequent scenario encountered. In a nutshell, the advantage of training a model under the supervised learning setting is that we have the correct answer that we should be predicting for the data points in our training data. As we saw in the previous section, kNN is a model that uses supervised learning, because the model makes its prediction for an input point by combining the values of the output variable for a small number of neighbors to that point. In this book, we will primarily focus on supervised learning.
Using the availability of the value of the output variable as a way to discriminate between different models, we can also envisage a second scenario in which the output variable is not specified. This is known as the unsupervised learning setting. An unsupervised version of the iris data set would consist of only the four features. If we don’t have the species output variable available to us, then we clearly have no idea as to which species each observation refers. Indeed, we won’t know how many species of flower are represented in the dataset, or how many observations belong to each species. At first glance, it would seem that without this information, no useful predictive task could be carried out. In fact, what we can do is examine the data and create groups of observations based on how similar they are to each other, using the four features available to us. This process is known as clustering. One benefit of clustering is that we can discover natural groups of data points in our data; for example, we might be able to discover that the flower samples in an unsupervised version of our iris set form three distinct groups which correspond to three different species.
Between unsupervised and supervised methods, which are two absolutes in terms of the availability of the output variable, reside the semi-supervised and reinforcement learning settings. Semi-supervised models are built using data for which a (typically quite small) fraction contains the values for the output variable, while the rest of the data is completely unlabeled. Many such models first use the labeled portion of the data set in order to train the model coarsely, then incorporate the unlabeled data by projecting labels predicted by the model trained up to this point.
In a reinforcement learning setting, the output variable is not available, but other information that is directly linked with the output variable is provided. One example is predicting the next best move to win a chess game, based on data from complete chess games. Individual chess moves do not have output values in the training data, but for every game, the collective sequence of moves for each player resulted in either a win or a loss. Due to space constraints, semi-supervised and reinforcement settings aren’t covered in this book.
In a previous section, we noted how most of the models we will encounter are parametric models, and we saw an example of a simple linear model. Parametric models have the characteristic that they tend to define a functional form. This means that they reduce the problem of selecting between all possible functions for the target function to a particular family of functions that is defined by a set of parameters. Selecting the specific function that will define the model essentially involves selecting precise values for the parameters. So, returning to our example of a three-feature linear model, we can see that we have the two following possible choices of parameters (the choices are infinite, of course; here we just demonstrate two specific ones):
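Y1 = 2 + 3X1 − X2 + 4X3
Y2 = 1.5 + 2X1 + 2X2 − X3
(The particular coefficient values shown here are purely illustrative; any two distinct choices would serve to make the point.)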
Here, we have used a subscript on the output Y variable to denote the two different possible models. Which of these might be a better choice? The answer is that it depends on the data. If we apply each of our models on the observations in our data set, we will get the predicted output for every observation. With supervised learning, every observation in our training data is labeled with the correct value of the output variable. To assess our model’s goodness of fit, we can define an error function that measures the degree to which our predicted outputs differ from the correct outputs. We then use this to pick between our two candidate models in this case, but more generally to iteratively improve a model by moving through a sequence of progressively better candidate models.
Some parametric models are more flexible than linear models, meaning that they can be used to capture a greater variety of possible functions. Linear models, which require that the output be a linearly weighted combination of the input features, are considered strict. We can intuitively see that a more flexible model is more likely to allow us to approximate our input data with greater accuracy; however, when we look at overfitting, we’ll see that this is not always a good thing. Models that are more flexible also tend to be more complex and, thus, training them often proves to be harder than training less flexible models.
Models are not necessarily parameterized; in fact, the class of models that have no parameters is known (unsurprisingly) as nonparametric models. Nonparametric models generally make no assumptions on the particular form of the output function. There are different ways of constructing a target function without parameters. Splines are a common example of a nonparametric model. The key idea behind splines is that we envisage the output function, whose form is unknown to us, as being defined exactly at the points that correspond to all the observations in our training data. Between the points, the function is locally interpolated using smooth polynomial functions. Essentially, the output function is built in a piecewise manner in the space between the points in our training data. Note that, unlike most models, splines will guarantee 100 percent accuracy on the training data, even though it is perfectly normal for training data to contain some errors (noise). Another good example of a nonparametric model is the k-nearest neighbor algorithm that we’ve already seen.
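A minimal sketch of this idea, using base R’s splinefun() on a few made-up points (the data here are purely illustrative):

x <- c(1, 2, 4, 7)
y <- c(2.1, 3.9, 4.2, 8.0)
f_hat <- splinefun(x, y)   # interpolating spline; passes exactly through every training point
f_hat(3)                   # predicted value between the observed points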
The distinction between regression and classification models has to do with the type of output we are trying to predict, and is generally relevant to supervised learning. Regression models try to predict a numerical or quantitative value, such as the stock market index, the amount of rainfall, or the cost of a project. Classification models try to predict a value from a finite (though still possibly large) set of classes or categories. Examples of this include predicting the topic of a website, the next word that will be typed by a user, a person’s gender, or whether a patient has a particular disease given a series of symptoms. The majority of models that we will study in this book fall quite neatly into one of these two categories, although a few, such as neural networks can be adapted to solve both types of problems. It is important to stress here that the distinction made is on the output only, and not on whether the feature values that are used to predict the output are quantitative or qualitative themselves. In general, features can be encoded in a way that allows both qualitative and quantitative features to be used in regression and classification models alike. Earlier, when we built a kNN model to predict the species of iris based on measurements of flower samples, we were solving a classification problem as our species output variable could take only one of three distinct labels. The kNN approach can also be used in a regression setting; in this case, the model combines the numerical values of the output variable for the selected nearest neighbors by taking the mean or median in order to make its final prediction. Thus, kNN is also a model that can be used in both regression and classification settings.
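As a rough sketch of the regression variant (reusing the euclidean_distance helper defined earlier, and arbitrarily treating Petal.Width as the numerical output so that it is excluded from the distance computation):

reg_features <- iris[c("Sepal.Length", "Sepal.Width", "Petal.Length")]
new_sample_3 <- c(4.8, 2.9, 3.7)
reg_distances <- apply(reg_features, 1, function(row) euclidean_distance(row, new_sample_3))
reg_sorted <- sort(unname(reg_distances), index.return = TRUE)
mean(iris$Petal.Width[reg_sorted$ix[1:5]])   # average the outputs of the 5 nearest neighbors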
R may be classified as a complete analytical environment for the following reasons.
Multiple platforms and interfaces to input commands: R has multiple interfaces ranging from command line to numerous specialized graphical user interfaces (GUIs) (Chap. 2) for working on desktops. For clusters, cloud computing, and remote server environments, R now has extensive packages including SNOW, RApache, RMpi, R Web, and Rserve.
Software compatibility: Official commercial interfaces to R have been developed by numerous commercial vendors including software makers who had previously thought of R as a challenger in the analytical space (Chap. 4). Oracle, ODBC, Microsoft Excel, PostgreSQL, MySQL, SPSS, Oracle Data Miner, SAS/IML, JMP, Pentaho Kettle, and Jaspersoft BI are just a few examples of commercial software that are compatible with R usage. In terms of the basic SAS language, a WPS software reseller offers a separate add-on called the Bridge to R. Revolution Analytics offers primarily analytical products licensed in the R language, but other small companies have built successful R packages and applications commercially.
Interoperability of data: Data from various file formats as well as various databases can be used directly in R, connected via a package, or reduced to an intermediate format for importing into R (Chap. 2).
Extensive data visualization capabilities: These include much better animation and graphing than other software (Chap. 5).
Largest and fastest growing open source statistical library: The current number of statistical packages and the rate of growth at which new packages continue to be upgraded ensures the continuity of R as a long-term solution to analytical problems.
A wide range of solutions from the R package library for statistical, analytical, data mining, dashboard, data visualization, and online applications make it the broadest analytical platform in the field.
So what all is extra in R? The list below shows some of the additional features in R that make it superior to other analytical software.
R’s source code is designed to ensure complete custom solutions and embedding for a particular application. Open source code has the advantage of being extensively peer-reviewed in journals and the scientific literature. This means bugs will be found, information about them shared, and solutions delivered transparently.
A wide range of training material in the form of books is available for the R analytical platform (Chap. 12).
R offers the best data visualization tools in analytical software (apart from Tableau Software’s latest version). The extensive data visualization available in R comprises a wide variety of customizable graphics as well as animation. The principal reason why third-party software initially started creating interfaces to R is that the graphical library of packages in R was more advanced and was acquiring more features by the day.
An R license is free for academics and thus budget friendly for small and large analytical teams.
R offers flexible programming for your data environment. This includes packages that ensure compatibility with Java, Python, and C.
It is relatively easy for a non-R platform user to migrate to the R platform, and there is no danger of vendor lock-in, thanks to the GPL nature of the source code and the open community; the GPL can be seen at HTTP://WWW.GNU.ORG/COPYLEFT/GPL.HTML.
The latest and broadest range of statistical algorithms are available in R. This is due to R’s package structure in which it is rather easier for developers to create new packages than in any other comparable analytics platform.
Sometimes the distinction between statistical computing and analytics does come up. While statistics is a tool- and technique-based approach, analytics is more concerned with business objectives. Statistics are basically numbers that inform (descriptive), advise (prescriptive), or forecast (predictive). Analytics is a decision-making-assistance tool. Analytics on which no decision is to be made or is being considered can be classified as purely statistical and nonanalytical. Thus the ease with which a correct decision can be made separates a good analytical platform from a not-so-good one. The distinction is likely to be disputed by people of either background, and business analysis requires more emphasis on how practical or actionable the results are and less emphasis on the statistical metrics in a particular data analysis task. I believe one way in which business analytics differs from statistical analysis is the cost of perfect information (data costs in the real world) and the opportunity cost of delayed and distorted decision making.
The only cost of using R is the time spent learning it. The lack of a package or application marketplace in which developers can be rewarded for creating new packages hinders the professional mainstream programmer’s interest in R, to the degree that several other platforms like iOS, Android, and Salesforce offer better commercial opportunities to coding professionals. However, given the existing enthusiasm and engagement of the vast numbers of mostly academia-supported R developers, the number of R packages has grown exponentially over the past several years. The following lists enumerate the advantages of R for business analytics, data mining, and business intelligence/data visualization, as these are three different domains in the data sciences.
1. R is available for free download.
2. R is one of the few analytical platforms that work on Mac OS.
3. Its results have been established in journals like the Journal of Statistical Software, in places such as LinkedIn and Google, and by Facebook’s analytical teams.
4. It has open source code for customization as per the GPL and adequate intellectual property protection for developers wanting to create commercial packages.
5. It also has a flexible option for enterprise users from commercial vendors like Revolution Analytics (who support 64-bit Windows and now Linux) as well as big data processing through its RevoScaleR package.
6. It has interfaces from almost all other analytical software, including SAS, SPSS, JMP, Oracle Data Mining, and RapidMiner. A huge library of packages is available for regression, time series, finance, and modeling.
7. High-quality data visualization packages are available for use with R.
As a computing platform, R is better suited to the needs of data mining for the following reasons.
1. R has a vast array of packages covering standard regression, decision trees, association rules, cluster analysis, machine learning, neural networks, and exotic specialized algorithms like those based on chaos models.
2. R provides flexibility in tweaking a standard algorithm by allowing one to see the source code.
3. The Rattle GUI remains the standard GUI for data miners using R. This GUI offers easy access to a wide variety of data mining techniques. It was created and developed in Australia by Prof. Graham Williams. Rattle offers a very powerful and convenient free and open source alternative to data mining software.
Business dashboards and reporting are an essential piece of business intelligence and decision making systems in organizations.
1. R offers data visualization through ggplot2, and GUIs such as Deducer, GrapheR, and Red-R can help even business analysts who know little or no R to create a metrics dashboard.
2. For online dashboards, R has packages like Rweb, Rserve, and RApache that, in combination with data visualization packages, offer powerful dashboard capabilities. Well-known examples of these will be shown later.
3. R can also be combined with Microsoft Excel using the RExcel package, enabling R capabilities from within Excel. Thus an Excel user with no knowledge of R can use the GUI within the RExcel plug-in to take advantage of R’s powerful graphical and statistical capabilities.
4. R has extensive capabilities to interact with and pull data from databases, including those by Oracle, MySQL, PostgreSQL, and Hadoop-based data stores. This ability to connect to databases enables R to pull data and summarize it for processing in the previsualization stage.
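As a minimal illustrative sketch (the connection details, table, and column names below are hypothetical, not drawn from this text), the DBI interface can be used to pull and summarize data from a database such as MySQL:

library(DBI)
# hypothetical connection details for illustration only
con <- dbConnect(RMySQL::MySQL(), dbname = "sales", host = "db.example.com",
                 user = "analyst", password = "secret")
monthly_totals <- dbGetQuery(con,
  "SELECT month, SUM(amount) AS total FROM orders GROUP BY month")
dbDisconnect(con)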
What follows is a brief collection of resources that describe how to use SAS Institute products and R: Base SAS, SAS/Stat, SAS/Graph.
An indicator of the long way R has come from being a niche player to a broadly accepted statistical computing platform is the SAS Institute’s acceptance of R as a complementary language. What follows is a brief extract from a February 2012 interview with researcher Kelci Miclaus from the JMP division at SAS Institute that includes a case study on how adding R can help analytics organizations even more.
How has JMP been integrating with R? What has been the feedback from customers so far? Is there a single case study you can point to where the combination of JMP and R was better than either one of them alone?
Feedback from customers has been very positive. Some customers use JMP to foster collaboration between SAS and R modelers within their organizations. Many use JMP’s interactive visualization to complement their use of R. Many SAS and JMP users use JMP’s integration with R to experiment with more bleeding-edge methods not yet available in commercial software. It can be used simply to smooth the transition with regard to sending data between the two tools or to build complete custom applications that take advantage of both JMP and R.
One customer has been using JMP and R together for Bayesian analysis. He uses R to create MCMC chains and has found that JMP is a great tool for preparing data for analysis and for displaying the results of the MCMC simulation. For example, the control chart and bubble plot platforms in JMP can be used to quickly verify convergence of an algorithm. The use of both tools together can increase productivity since the results of an analysis can be achieved faster than through scripting and static graphics alone.
I, along with a few other JMP developers, have written applications that use JMP scripting to call out to R packages and perform analysis like multidimensional scaling, bootstrapping, support vector machines, and modern variable selection methods. These really show the benefit of interactive visual analysis coupled with modern statistical algorithms. We’ve packaged these scripts as JMP add-ins and made them freely available on our JMP User Community file exchange. Customers can download them and employ these methods as they would a regular JMP platform. We hope that our customers familiar with scripting will also begin to contribute their own add-ins so a wider audience can take advantage of these new tools (see HTTP://WWW.DECISIONSTATS.COM/JMP-AND-R-RSTATS/).
How is R a complementary fit to JMP’s technical capabilities?
R has an incredible breadth of capabilities. JMP has extensive interactive, dynamic visualization intrinsic to its largely visual analysis paradigm, in addition to a strong core of statistical platforms. Since our brains are designed to visually process pictures and animated graphics more efficiently than numbers and text, this environment is all about supporting faster discovery. Of course, JMP also has a scripting language (JSL) that allows you to incorporate SAS code and R code and to build analytical applications for others to leverage SAS, R, and other applications for users who don’t code or who don’t want to code. JSL is a powerful scripting language on its own.
It can be used for dialog creation, automation of JMP statistical platforms, and custom graphic scripting. In other ways, JSL is very similar to the R language. It can also be used for data and matrix manipulation and to create new analysis functions. With the scripting capabilities of JMP, you can create custom applications that provide both a user interface and an interactive visual backend to R functionality. Alternatively, you could create a dashboard using statistical or graphical platforms in JMP to explore the data and, with the click of a button, send a portion of the data to R for further analysis.
Another JMP feature that complements R is the add-in architecture, which is similar to how R packages work. If you’ve written a cool script or analysis workflow, you can package it into a JMP add-in file and send it to your colleagues so they can easily use it.
What is the official view of R at your organization? Do you think it is a threat or a complementary product or statistical platform that coexists with your offerings?
Most definitely, we view R as complementary. R contributors provide a tremendous service to practitioners, allowing them to try a wide variety of methods in the pursuit of more insight and better results. The R community as a whole provides a valued role to the greater analytical community by focusing attention on newer methods that hold the most promise in so many application areas. Data analysts should be encouraged to use the tools available to them in order to drive discovery, and JMP can help with that by providing an analytic hub that supports both SAS and R integration.
Since you do use R, are there any plans to give back something to the R community in terms of your involvement and participation (say at use R events) or sponsoring contests?
We are certainly open to participating in R user groups. At Predictive Analytics World in New York last October, they didn’t have a local R user group, but they did have a Predictive Analytics meet-up group comprised of many R users. We were happy to sponsor this. Some of us within the JMP division have joined local R user groups, myself included. Given that some local R user groups have entertained topics like Excel and R, Python and R, and databases and R, we would be happy to participate more fully here. I also hope to attend the useR annual meeting later this year to gain more insight on how we can continue to provide tools to help both the JMP and R communities with their work. We are also exploring options to sponsor contests and would invite participants to use their favorite tools, languages, etc.
This blog explores health informatics—a collaborative activity connecting people, process, and technologies to produce trusted data for better decision-making.
By Clarice Smith, RHIA, CHP
Techopedia defines facial recognition as “a biometric software application capable of uniquely identifying or verifying a person by comparing and analyzing patterns based on the person’s facial contours.”
Facial recognition applications continue to expand into different aspects of our lives. For example, facial recognition technology can now be used instead of a password to unlock a user’s iPhone. Biometrics, including facial recognition, can be used to validate a user when making online purchases. This method is much more secure and convenient for the user than remembering user IDs and passwords. Facebook has developed facial recognition to identify and tag people in photos posted on the website. Facebook will even reach out to the person and ask “is this you?” If the person responds in the affirmative, the website has validated that instance of facial recognition for that person.
Some facial recognition programs work without obtaining consent from the person. The software, using artificial intelligence, captures the person’s face from a distance and matches it against a database.
There are numerous applications being developed for use in healthcare using facial recognition. Several examples are below.
Facial recognition has the potential to revolutionize identity management. Once the patient’s identity has been validated and entered into the system, they could potentially “register” by presenting themselves at a kiosk, logging into the system using facial recognition, and then signing forms and other documents without human intervention.
Patients can use facial recognition to verify they are taking their medication as prescribed by:
- Logging into the system using their mobile device
- Having the device record the patient’s face, the medication, and the patient taking the medication
Facial recognition can be used to scan a patient’s face to determine the patient’s level of pain in order to manage chronic pain and medication usage.
Computerized personal assistants (social robots) will need facial recognition to interpret the emotional state of the patient in order to assist the patient appropriately.
Certain genetic diseases can be diagnosed using facial recognition.
Facial recognition can be used to identify staff as well as patients, making healthcare safer and more efficient.
As with any new applications, healthcare organizations must carefully consider all the privacy, security, and legal ramifications of implementation as well as patient “push back.” What safeguards and assurances will be put into place to assure the patient that their information will not be shared and will be secure? The answers to these questions will be addressed as the applications are deployed and will provide a new challenge to patient privacy and security.
Clarice Smith is director of HIM at AnMed Health.
Vitamin A (Retinol, Beta-Carotene) | Deficiencies, Excesses and Recommendations
Vitamin A is involved in the regulation, growth and differentiation of cells.
Cell differentiation is the process by which cells become “specialized” in order to perform specific functions within the body (e.g. nerve cells, muscle cells, fat cells).
Vitamin A is fat soluble, meaning it dissolves in fat. Animal sources of this nutrient contain the active, or “preformed,” form of Vitamin A, while plant sources contain the “pro-vitamin” form, meaning that it gets converted to the active vitamin by biological processes in the body.
This essential nutrient has important roles in embryonic development and pregnancy, while too much Vitamin A can cause harm to a developing baby.
Vitamin A is needed for normal immune function and vision. In fact, Vitamin A deficiency is a major cause of preventable blindness in the world. [R]
What Could Happen If I Don’t Get Enough Vitamin A?
- Dry skin
- Dry eyes
- Night blindness
- Poor coordination
- Compromised immunity to various infections including acne breakouts and respiratory infections (e.g. common cold).
These symptoms can also be signs of other vitamin deficiencies or health issues. Please talk to a nutrition-focused physician prior to experimenting.
- According to Medical News Today, chronic Vitamin A deficiency could be a factor in the development of age-related diseases such as Type II Diabetes and even Alzheimer’s disease.
- In children of developing countries, where Vitamin A deficiency is more common, it may increase the risk of developing respiratory and diarrhea infections, decrease growth rate, slow bone development, and lessen their likelihood for survival of serious illness.
Vitamin A deficiency isn’t always the result of not getting enough through dietary sources. A deficiency can also be due to other causes, such as:
- Iron deficiency
- Zinc deficiency
- Weight loss surgery
- Malabsorption (e.g. liver or gallbladder disease)
- Inadequate fat intake (fat soluble vitamins require fat for proper absorption)
What Could Happen If I Get Too Much Vitamin A?
- Birth defects
- Liver damage
- Extremely high doses can result in coma or death.
Toxicity (symptoms from getting too much Vitamin A) can happen from taking extreme doses over a short period of time, or from getting too much daily Vitamin A over an extended period of time. [R]
Plant sources of Vitamin A (e.g. beta carotene) do not typically cause toxicity.
Eat generous portions of carrots, sweet potatoes, spinach and kale, as these are great plant sources of beta-carotene.
Potential Life Enhancements
- May reduce risk for some types of cancer
- May help slow the progression of retinitis pigmentosa, a hereditary cause of blindness
A new technique, discovered by accident a few years ago, can speed up coral growth by 25 times by breaking them into tiny little pieces. Discovered accidentally the process could now help replenish coral reefs and protect them from degradation. But how does it work? And why is it so important?
Dr David Vaughan was close to retirement several years ago before discovering a revolutionary new technique for speeding up the rate of growth in endangered corals. Now, along with his team at the Mote Marine Laboratory in Florida, he is on the front line of protecting coral reefs from the effects of climate change. Up to half of the coral on reefs in Florida and the Caribbean is believed to have been lost to bleaching and other diseases in the last few decades. Unfortunately, corals are naturally very slow growing and are unable to recover faster than they are destroyed. But Dr Vaughan and his team are now turning this around by breaking corals into tiny little pieces. The technique, known as ‘microfragmentation’, can help corals grow 25 times faster, and the aim is now to use it to grow one million corals to replant back on Florida’s reefs.
One of the main things coral reefs are running out of, apart from an abundance of coral, is time. Rapid ecological change brought about by rising sea temperatures, ocean acidification and other human-caused stressors is massively reducing coral coverage on reefs around the world. The problem is they are so slow growing that they cannot recover faster than they are destroyed. This is especially true of ‘massive’ species such as brain, star, boulder and mounding corals, which can be centuries old. These types will only grow a couple of centimetres a year, and a colony will take decades to form. This has earned them the nickname ‘living rocks’ and is the reason why they are most at risk from things like coral bleaching. Some faster-growing branching species such as staghorn can be regrown relatively quickly in nurseries and introduced back onto the reefs. But until recently this has not been possible for the ‘massive’ corals because it takes too long to grow them.
A happy accident
Dr David Vaughan is a highly experienced coral reef scientist and leads the coral restoration programme at the Mote Marine Laboratory research station in Florida. He accidentally stumbled onto a new technique of rapidly growing massive corals whilst moving some in a tank. He was frustrated by the slow growth of some samples of Elkhorn coral and decided to move them to a different area of the tank. As he picked it up and moved the sample some of the polyps (individual units of coral) broke off and fell to the bottom of the tank. He deemed them to be as good as dead and left them there claiming they would ‘be toast’. But when he returned a couple of weeks later what he discovered shocked and inspired him. The polyps had multiplied and grown to the size of the original sample which had previously taken over two years to grow. Dr Vaughan was close to retirement at the time of his discovery but claims “once we saw there was this technology for restoration, I had to stay”.
Read the full story at Marine Madness
After Lincoln's election, many Southern states, fearing Republican control in the government, seceded from the Union. Lincoln faced the greatest internal crisis of any U.S. President. After the fall of Ft. Sumter, Lincoln raised an army and decided to fight to save the Union from falling apart. He saw the war through despite enormous pressures: loss of life, battlefield setbacks, generals who weren't ready to fight, assassination threats, and more.
On January 1, 1863, Lincoln issued the Emancipation Proclamation, his declaration of freedom for all slaves in the areas of the Confederacy not under Union control. Also, on November 19, 1863, Lincoln gave his famous Gettysburg Address, which dedicated the battlefield there to the soldiers who had perished. He called on the living to finish the task the dead soldiers had begun.
JOHNSON’S SEAGRASS (Halophila johnsonii)
DESCRIPTION: Johnson’s seagrass is a small seagrass with short, hairless, elliptical leaves that have smooth edges. These leaves grow up to an inch long, occur in pairs, and have pointed tips. The plant spreads via its unbranched roots and horizontal, subterranean plant stem. Johnson’s seagrass can be identified by its female flowers and long-necked fruits. Male flowers are unknown.
HABITAT: Johnson’s seagrass grows in shallow waters of coastal lagoons in the intertidal zone. Found to depths of approximately six feet, the species occurs deeper than many other seagrasses. Johnson’s seagrass is also more tolerant of varied salinity and temperature ranges. This plant prefers coarse sand and muddy substrates in areas of turbid waters and high tidal currents.
RANGE: Johnson’s seagrass has very limited distribution. It has fragmented distribution along 125 miles of coastline in southeastern Florida from Sebastian Inlet to Biscayne Bay. The largest populations have been documented in Lake Worth Lagoon.
BREEDING: Reproduction of Johnson’s seagrass is by asexual branching and clonal reproduction. Female flowers have been observed, but even with decade-long studies, neither male flowers nor seeds have ever been recorded.
LIFE CYCLE: A perennial plant without a strong seasonal pattern, Johnson’s seagrass generally exhibits some winter decline.
THREATS: Johnson’s seagrass is imperiled by degraded water quality from agricultural and urban runoff, the dredging and filling of waterways, destruction of lagoon substrate by boating activity, trampling, and increased severity of hurricanes and storms driven by global warming.
POPULATION TREND: Because it seems to rely entirely on asexual reproduction and is dependent on substrate stability, Johnson’s seagrass is extremely vulnerable to human-caused disturbances. Information on this seagrass is difficult to come by, though one study found that all of the seagrass species in the Florida region have declined by 16 percent since 1986. Longer-term regional losses are thought to be nearly 50 percent since the 1970s. Johnson’s seagrass is known to be the least abundant seagrass within its range. Research is ongoing on ways to grow the species in captivity and successfully transplant it to suitable locations.
Photo by Lori Morris, NOAA
posted on 07 April 2017
from The Conversation
The periodic table is one of those classic images that you find in many science labs and classrooms. It’s an image almost everyone has seen at some time in their life.
Who can forget the periodic table put to music by the American Tom Lehrer, who taught mathematics at Harvard and was also a singer/songwriter and satirist? His song, The Elements, includes all the elements that were known at the time of writing in 1959.
But what exactly does the periodic table show?
In brief, it is an attempt to organise the collection of the elements - all of the known pure substances made from a single type of atom.
There are two ways to look at how the periodic table is constructed, based on either the observed properties of the elements contained within it, or on the subatomic construction of the atoms that form each element.
The basic modern periodic table. Shutterstock/duntaro
When scientists began collecting elements in the 1700s and 1800s, slowly identifying new ones over decades of research, they began to notice patterns and similarities in their physical properties. Some were gases, some were shiny metals, some reacted violently with water, and so on.
At the time when elements were first being discovered, the structure of atoms was not known. Scientists began to look at ways to arrange them systematically so that similar properties could be grouped together, just as someone collecting seashells might try to organise them by shape or colour.
The task was made more difficult because not all of the elements were known. This left gaps, which made deciphering patterns a bit like trying to assemble a jigsaw puzzle with missing pieces.
Different scientists came up with different types of tables. The first version of the current table is generally attributed to Russian chemistry professor Dmitri Mendeleev in 1869, with an updated version in 1871.
Mendeleev’s periodic table is first published outside Russia in Zeitschrift für Chemie (1869, pages 405-6). Wikimedia/Dimitri Mendeleev
Importantly, Mendeleev left gaps in the table where he thought missing elements should be placed. Over time, these gaps were filled in and the final version as we know it today emerged.
To really understand the final structure of the periodic table, we need to understand a bit about atoms and how they are constructed. Atoms have a central core (the nucleus) made up of smaller particles called protons and neutrons.
It is the number of protons that gives an element its atomic number - the number generally found in the top left corner of each box in the periodic table.
The properties of hydrogen as marked on the periodic table. Shutterstock/duntaro
The periodic table is arranged in order of increasing atomic number (left to right, top to bottom). It ranges from element 1 (hydrogen H) in the top left, to the newly approved element 118 (oganesson Og) in the bottom right.
The number of neutrons in the nucleus can vary. This gives rise to different isotopes for every element.
But why is there a separate box of elements below the main table, and why is the main table an odd shape, with a bite taken out of the top? That comes down to how the other component of the atom - the electrons - are arranged.
We tend to think of atoms as built a bit like onions, with seven layers of electrons called “shells", labelled K, L, M, N, O, P, and Q, surrounding the core nucleus.
Think of the atom with a central nucleus that contains all the protons and neutrons, surrounded by a series of shells that contain the electrons. The Conversation, CC BY-ND
Each row in the periodic table sort of corresponds to filling up one of these shells with electrons. Each shell has subshells, and the order in which the shells/subshells get filled is based on the energy required, although it’s a complicated process. We’ll come back to these later.
In simple terms, the first element in each row starts a new shell containing one electron, while the last element in each row has two (or one for the first row) of the subshells in the outer shell fully occupied. These differences in electrons also account for some of the similarities in properties between elements.
With the one or two subshells in the outer layer full of electrons, the last elements of each row are quite unreactive, as there are no holes or gaps in the outer shell to interact with other atoms.
This is why elements in the last column, such as helium (He), neon (Ne), argon (Ar) and so on, are called the noble gases (or inert gases). They are all gases and they are “noble” because they rarely associate with other elements.
In contrast, the elements of the first column, with the exception of hydrogen (just like English grammar, there’s always an exception!), are called alkali metals. The first-column elements are metal-like in character, but with only one electron in the outer shell, they are very reactive as this lone electron is very easy to engage in chemical bonding. When added to water, they quickly react to form an alkaline (basic) solution.
Each shell can accommodate an increasing number of electrons. The first shell (K) only fits two, so the first row of the periodic table has only two elements: hydrogen (H) with one electron, and helium (He) with two.
The second shell (L) fits eight electrons. Thus the second row of the periodic table contains eight elements, with a gap left between hydrogen and helium to accommodate the extra six.
The third shell (M) fits 18 electrons, but the third row still only has eight elements. This is because the extra ten electrons don’t get added to this layer until after the first two electrons are added to the fourth shell (N) (we’ll get to why, later).
So the gap is expanded in the fourth row to accommodate the additional ten elements, leading to the “bite” out of the top of the table. The extra ten elements in the middle section are called the transition metals.
The fourth shell holds 32 electrons, but again the extra electrons are not added to this shell until some have also been added to the fifth (O) and sixth (P) shells, meaning that both the fourth and fifth rows hold 18 elements.
For the next two rows (sixth and seventh), rather than further expanding the table sideways to include these extra 14 elements, which would make it too wide to easily read, they have been inserted as a block of two rows, called the lanthanoids (elements 57 to 71) and actinoids (elements 89 to 103), below the main table.
The periodic table would look very different if the lanthanoids and actinoids were inserted within the table. The Conversation, CC BY
You can see where they would fit in if the periodic table was widened, if you look at the bottom two squares in the third column of the table above.
Across the columns
There is another complicating factor leading to the final shape of the table. As mentioned earlier, as the electrons are added to each layer they go into different subshells (or orbitals), which describes locations around the nucleus where they are most likely to be found. These are known by the letters s, p, d and f.
The letters used for the orbitals are actually derived from descriptions of the emission or absorption of light due to electrons moving between the orbitals: sharp, principal, diffuse and fundamental.
Each shell has its own configuration of subshells named from 1s through to 7p, which gives the total number of electrons in each shell as we progress through the periodic table.
The Conversation, CC BY-ND
As mentioned earlier the order in which the subshells fill with electrons is not so straightforward. You can see the order in which they fill from the image below, just follow the order as you would read down from left to right.
The Conversation, CC BY-SA
There is an interactive periodic table that also illustrates the filling sequence well if you click through the atoms.
Elements within a column generally have similar properties, but in some places elements side by side can also be similar. For example, in the transition metals the cluster of precious metals around copper (Cu), silver (Ag), gold (Au), palladium (Pd) and platinum (Pt) are quite alike.
Most of the existing elements with high atomic numbers, including the four superheavy elements added last year, are very unstable and have never been detected in, or isolated from, nature.
Instead, they are created and analysed in minute quantities under highly artificial conditions. Theoretically, there could be further elements beyond the 118 now known (there are additional g, h and i suborbitals), but we don’t know yet if any of these would be stable enough to be isolated.
A classic design
The periodic table has seen many colourful and informative versions created over the years.
One of my favourites is an artistic version with original artworks for each element commissioned by the Royal Australian Chemical Institute to celebrate the International Year of Chemistry in 2011.
Another favourite is an interactive version with pictures of the elements. The creators of this site have also published a coffee table book called The Elements and an Apple app with videos of each element.
The classic design of the periodic table can be used to play a version of the Battleship game.
Playing battleships with the periodic table at the first World Science Festival Brisbane in 2016. The Conversation, CC BY-NC-ND
As for Tom Lehrer’s The Elements, the song has yet to be updated to include all the elements known today but it has been covered by other people over the years.
Actor Daniel Radcliffe, of Harry Potter fame, performed a version during a guest appearance on the BBC’s Graham Norton Show.
There are other musical versions of the elements but they too have yet to be updated to include all entries of the periodic table.
In summary, the periodic table is the chemist’s taxonomy of all elements. Its triumph is that it is still highly relevant to scientists, while also becoming embedded in popular culture.
Are you trying to figure out how you can divide numbers in Microsoft Excel, but are having trouble nailing down exactly how that works? Well, you’ve come to the right place. We’ll show you very easily how you can divide two numbers or cells together in an Excel sheet. It’s worth noting that this works for Google Sheets as well. Follow along below!
Microsoft Excel Formulas
Microsoft Excel itself doesn’t have a specific function for division. Instead, it’s a special character that you have to add into a column with a formula. One important thing to remember before we start is that all Microsoft Excel formulas start with the “=” sign. So, if the formula we create in just a moment isn’t working for you, make sure that it begins with the equal sign! You’ll also want to make sure that you’re using the division symbol. That character would be “/” on your keyboard.
So, here’s how this works:
- First, type a number into cell A2.
- Next, type a number into cell B2.
Now that we’ve entered the data, we’re ready to divide. Use this formula in cell C1 to divide the value of A2 by the value of B2:
- In Cell C1, type in =A2/B2
Congratulations, you’ve now divided the values of two cells. Alternatively, you could simply use one cell to divide two numbers directly. Try it for yourself by typing this into cell A2:
- In Cell A2, type in =25/5
The value of 25 divided by 5 would show up in cell A2!
As you can see, it’s very easy to divide in Microsoft Excel, especially if you’re dealing exclusively with basic math. It can get a lot more complex than how we showed you as well, such as in calculating percentiles. It’s worth noting that these characters, commands, and formulas will work in Google Sheets as well. |
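If you ever want to set up the same division formula from a script rather than typing it by hand, here is a minimal sketch using the third-party openpyxl package for Python (the package choice, file name and cell layout are illustrative assumptions, not something the steps above require):

# Illustrative sketch: write the same A2/B2 division formula into a new
# workbook from Python using openpyxl (pip install openpyxl).
from openpyxl import Workbook

wb = Workbook()
ws = wb.active

ws["A2"] = 25          # dividend
ws["B2"] = 5           # divisor
ws["C1"] = "=A2/B2"    # the formula text, exactly as you would type it

wb.save("division_example.xlsx")   # hypothetical file name

Note that openpyxl only writes the formula text; Excel (or Google Sheets, after importing the file) calculates the result of 5 when the workbook is opened.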
Longterm sea level rise estimated at 2.3 metres for every degree Celsius of global warming
"Continuous sea-level rise is something we cannot avoid unless global temperatures go down again," said climate scientist Anders Levermann from the Postdam Institute for Climate Impact Research. "Thus we can be absolutely certain that we need to adapt. Sea-level rise might be slow on time scales on which we elect governments, but it is inevitable and therefore highly relevant for almost everything we build along our coastlines, for many generations to come."
Anders Levermann was talking about the implications of a new study he co-authored, The multi-millennial sea-level commitment of global warming, published online in the Proceedings of the National Academy of Sciences in July 2013. It is one of the first studies to combine analyses of four major contributors to potential sea level rise into a collective estimate, and to compare it with evidence of past sea-level responses to global temperature changes.
The result is a longterm estimate that global sea levels will rise about 2.3 meters, or more than seven feet, over the next two thousand years for every degree (Celsius) the planet warms. "The total sea-level commitment after 2,000 y is quasi-linear, with a sensitivity of 2.3 m/°C," the study reports.
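To make that figure concrete, here is a tiny, purely illustrative calculation (a back-of-the-envelope sketch, not taken from the paper itself) applying the quasi-linear 2.3 m/°C commitment to a few example warming levels; remember that this is an amount locked in over roughly two thousand years, not a rate of rise per year.

# Illustrative only: the study's multi-millennial sea-level "commitment",
# treated as quasi-linear at about 2.3 m per degree Celsius of sustained
# warming above pre-industrial.
SENSITIVITY_M_PER_DEGC = 2.3

def committed_rise_m(warming_degC):
    return SENSITIVITY_M_PER_DEGC * warming_degC

for warming in (1.0, 2.0, 4.0):   # example warming levels in degrees C (assumed)
    print(f"{warming:.0f} °C sustained warming -> "
          f"~{committed_rise_m(warming):.1f} m of committed sea-level rise")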
"The study did not seek to estimate how much the planet will warm, or how rapidly sea levels will rise," noted Professor Peter Clark, an Oregon State University paleo-climatologist and co-author on the PNAS article. "Instead, we were trying to pin down the 'sea-level commitment' of global warming on a multi-millennial time scale. In other words, how much would sea levels rise over long periods of time for each degree the planet warms and holds that warmth?"
"The simulations of future scenarios we ran from physical models were fairly consistent with evidence of sea-level rise from the past," Clark added. "Some 120,000 years ago, for example, it was 1-2 degrees warmer than it is now and sea levels were about five to nine meters higher. This is consistent with what our models say may happen in the future."
Our civilisations have been established on the basis of a stable sea level. We have built our ports, and many of our cities, on the coastal plains adjacent to the oceans of the world. Now they will all be under threat from the encroachment of rising seas. Initially the damage will come from storm surges on the rising oceans. Houses with coastal views, like the ones pictured at Cabbage Tree Bay on the central coast north of Sydney, Australia, may find themselves slipping into the ocean as the cliffs behind the beach erode.
Sea level rise will have a personal cost to those whose homes are affected, but it will also have a huge social and economic cost to all economies as they struggle to adapt the built environment to the inexorably rising sea levels. After Hurricane Sandy, New York has already committed to a US$20 billion plan to adapt New York City to defend it against rising seas and future storms. But even this costly defensive adaptation will prove only temporary, useful only this century.
Primary contributors to sea level rise: Thermal Expansion, Mountain glaciers, Greenland and Antarctic Ice sheets
Sea level rise occurs through four major contributions. Thermal expansion of the global oceans and melting of mountain glaciers are the most prominent factors contributing to sea level rise at the moment. As global warming continues, the Greenland and Antarctic ice sheets will become the dominant contributors. We are already seeing accelerating ice loss from Greenland, while Antarctica is presently contributing less than 10 percent. Eventually half the contribution to sea level rise may come from Antarctica.
The mining of groundwater over recent decades for drinking water, industrial use and agriculture has also been shown to make a small but significant contribution to sea level rise. It is unlikely to be significant in the very long term, however, and does not affect the results of this study.
"CO2, once emitted by burning fossil fuels, stays an awful long time in the atmosphere," said Anders Levermann, lead author of the study and research domain co-chair at the Potsdam Institute for Climate Impact Research. "Consequently, the warming it causes also persists."
There is tremendous inertia in the oceans and ice sheets resulting in a slow initial response to changing temperatures and climate. "The problem is: once heated out of balance, they simply don't stop," said Levermann. "We're confident that our estimate is robust because of the combination of physics and data that we use."
The international team of scientists used data from sediments from the bottom of the sea and ancient raised shorelines found on various coastlines around the world. Computer simulation models were calibrated against observational paleo-climate data.
The hundreds of simulations across the four major contributors to sea level rise produced mostly linear results, with sea level rise commensurate with the amount of warming. Greenland was an exception, with results indicating a threshold beyond which the response to warming is amplified. Indeed, a study published in March 2012 warned that the global warming threshold for Greenland Ice Sheet collapse had been reduced to 1.6 degrees C (Robinson et al 2012).
"As the ice sheet in Greenland melts over thousands of years and becomes lower, the temperature will increase because of the elevation loss," Clark said. "For every 1,000 meters of elevation loss, it warms about six degrees (Celsius). That elevation loss would accelerate the melting of the Greenland ice sheet."
Antarctic geography provides a different response. The Antarctic ice sheet is so cold that elevation loss won't affect it the same way. Continuing research shows that Antarctic ice sheet disintegration comes primarily from warming Southern Ocean waters melting ice shelves, resulting in retreat of grounding lines and an increase in the flow of ice streams and glaciers, which discharge icebergs. West Antarctica is particularly unstable as much of the ice sheet sits in a deep depression below sea level.
"The Antarctic computer simulations were able to simulate the past five million years of ice history, and the other two ice models were directly calibrated against observational data - which in combination makes the scientists confident that these models are correctly estimating the future evolution of long-term sea-level rise," said co-author professor Peter Clark from Oregon State University.
Confusing the Rate of Sea level rise with the sensitivity of sea level to temperature
While this study says the long-term sensitivity of sea level to warming is likely to be 2.3 metres per degree Celsius, realised over roughly two thousand years, it explicitly doesn't tell us what short-term rates of sea level rise might be.
According to Andy Revkin, an initial report by Reuters botched the reporting of this study by warning of rapid sea level rise. The story headline as published in the Sydney Morning Herald on July 14 said "Models point to rapid sea-level rise from climate change", which is at odds with the content of the story and the press releases issued by the Potsdam Institute for Climate Impact Research and Oregon State University.
Global sea level is rising 60% faster than IPCC projections, according to a 2012 study comparing the actual rise in CO2 concentration, global temperature and sea level with past projections made by the IPCC (Stefan Rahmstorf et al 2012).
The researchers said "the observed rate of sea-level rise on multi-decadal timescales over the past 130 years shows a highly significant correlation with global temperature (Vermeer and Rahmstorf 2009) by which the increase in rate over the past three decades is linked to the warming since 1980, which is very unlikely to be a chance coincidence." Rahmstorf stressed that "the new findings highlight that the IPCC is far from being alarmist and in fact in some cases rather underestimates possible risks."
Other studies have also investigated projected rates of sea level rise. James Hansen predicted in a 2007 interview on the ABC 7.30 Report that the earth will pass a tipping point resulting in sea level rise of up to a metre every 20 years, based upon previous rates of paleoclimate sea level rise. Hansen posits in a December 2012 discussion paper that an exponential rate of ice sheet mass loss, and multi-metre sea level rise, is possible later this century.
Professor Elco Rohling was co-author of a study published in November 2012 which identified a climate change connection between global temperatures, ice volume and sea level. According to the study, sea level rise reached speeds of "at least 1.2 metres per century during all major episodes of ice-volume reduction" in the last 150,000 years (KM Grant et al 2012).
Some scientists think we may have a 20 metre sea level rise already in the pipeline, based upon the geological record, the current level of warming (0.8 degrees C) and the warming inertia already built into the system (at least 1 degree C more), without considering our present emissions trajectory. There have also been scientific projections of sea level rise for the next 500 years, which say that "Most rise is expected after stabilization of forcing, due to the long response time of sea level. For all scenarios the rate of sea level rise would be positive for many centuries, requiring 200-400 years to drop to the 1.8 mm/yr 20th century average."
Many of these studies identified a relationship between global temperature rise and sea level. The current study demonstrates a linear relationship between the amount of global warming and the amount of sea level rise averaged out over thousands of years, supported by observational data on past temperature change and sea level rise.
The last word comes from Professor Clark, who says: "Keep in mind that the sea level rise projected by these models of 2.3 meters per degree of warming is over thousands of years. If it warms a degree in the next two years, sea levels won't necessarily rise immediately. The Earth has to warm and hold that increased temperature over time."
"However, carbon dioxide has a very long time scale and the amounts we've emitted into the atmosphere will stay up there for thousands of years," he added. "Even if we were to reduce emissions, the sea-level commitment of global warming will be significant."
Fig. 1. Sea-level commitment per degree of warming as obtained from physical model simulations of (A) ocean warming, (B) mountain glaciers and ice caps, and (C) the Greenland and (D) the Antarctic Ice Sheets. (E) The corresponding total sea-level commitment, which is consistent with paleo-estimates from past warm periods (PI, pre-industrial; Plio, mid-Pliocene). Temperatures are relative to pre-industrial. Dashed lines and large dots provide linear approximations: (A) sea-level rise for a spatially homogeneous increase in ocean temperature; (A, D, E) constant slopes of 0.42, 1.2, 1.8 and 2.3 m/°C. Shading as well as boxes represent the uncertainty range as discussed in the text. (A–C) Thin lines provide the individual simulation results from different models (A and B) or different parameter combinations (C). The small black dots in D represent 1,000-y averages of the 5-million-year simulation of Antarctica. Source: Levermann, Clark et al, 2013
Ornamental magnolia, March 20, 2012, in the New York Hudson Valley town of Nyack. Like many other trees and plants, it is flowering far earlier than in past decades. (Neil Pederson/Lamont-Doherty Earth Observatory)
In an effort to understand how plants around the world will act in a warming climate, researchers have relied increasingly on experiments that measure how they respond to artificial warming. But a new study says that such experiments are underestimating potential advances in the timing of flowering and leafing four to eightfold, when compared with natural observations. As a result, species could change far more quickly than the experiments suggest, with major implications for water supplies, pollination of crops and ecosystems. The comparison, done by an interdisciplinary team from some 20 institutions in North America and Europe, appears this week in the leading journal Nature.
“Up to now, it’s been assumed that experimental systems will respond the same as natural systems respond—but they don’t,” said coauthor Benjamin Cook, a climate modeler at the NASA Goddard Institute for Space Studies and Columbia University’s Lamont-Doherty Earth Observatory. Elizabeth Wolkovich, who led the team as a postdoctoral fellow at the University of California, San Diego, said, “This suggests that predicted ecosystem changes—including continuing advances in the start of spring across much of the globe—may be far greater than current estimates based on data from experiments.”
The timing of annual plant and animal life events—the study of which is known as phenology--has emerged as perhaps the most consistent and visible gauge of nature’s response to rising temperature. Globally over the past century, land surfaces have warmed an average of about half a degree Celsius (1.25 degrees Fahrenheit), but some places, such as Alaska, are warming much more rapidly (there, about 1.8 degrees C, or over 3 degrees F). As a result, long-term historical records show that many plant species are flowering and leafing out days, or even weeks, earlier over recent decades. For instance, the meticulously recorded and celebrated blooming of Washington D.C.’s cherry blossoms has advanced about a week since the 1970s; if the trend continues, some recent projections say that by 2080 they will be coming out in February. Animals are reacting in turn, with robins showing up a month earlier in the Colorado Rockies compared to the early 1970s.
Interest in tracking phenology has grown, with the founding of organizations like the USA National Phenology Network, which uses citizen volunteers to contribute observations to studies. But because historical records are not available in many places and the future may bring ever-higher temperatures, many scientists are also trying to project by doing experiments in which they heat small field plots and measure the responses.
The researchers in the Nature study created new global databases of plant phenology, pitting calculations from experiments against those from long-term monitoring of natural records. They included data from 50 different studies covering 1,643 species on four continents. Their analysis showed that experiments predicted every degree Celsius of warming would advance plants’ flowering and leafing by half a day to 1.6 days. But in looking at actual observations in nature, they found advances four times faster for leafing—and over eight times faster for flowering. In sum, the natural records showed phenological events advancing, on average, five to six days per degree Celsius. The finding was strikingly consistent across species and datasets. Wolkovich said this suggests that long-term records “are converging on a consistent average response,” and that future plant and ecosystem responses to climate change may be much higher than estimated from experimental data alone.
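As a rough back-of-the-envelope illustration (not a calculation from the study itself), combining the observed sensitivity of five to six days per degree with the roughly 1.8 degrees Celsius of warming reported above for Alaska suggests an advance of about nine to eleven days:

# Back-of-the-envelope illustration only: multiply the observed phenological
# sensitivity (days of advance per degree C) by a regional warming amount.
observed_days_per_degC = (5.0, 6.0)   # range reported from long-term records
alaska_warming_degC = 1.8             # approximate warming cited for Alaska

low = observed_days_per_degC[0] * alaska_warming_degC
high = observed_days_per_degC[1] * alaska_warming_degC
print(f"Expected advance: roughly {low:.0f} to {high:.0f} days earlier")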
A number of factors could explain the discrepancies, said the researchers. These could include effects of longer-term climate change, including shifts in plants’ genes as they adjust to warming, which would not be mirrored by shorter-term experiments. Or, it could be specific aspects of the experiments themselves, such as exactly how researchers manipulate temperatures and how accurately they measure them, they said. For instance, experimenters have used a variety of methods to increase temperatures, including cables buried in the soil, small greenhouse-like structures and heat sources placed above plants. “Some experiments get closer to nature than others,” said Cook. “We need to address this by improving experiments. In the meantime, we should pay more attention to nature, because it’s giving us critical information. For effective policy and conservation plans, we really need to have accurate predictions for which species will respond, and how much.”
David Inouye, a University of Maryland biologist who studies ecological responses to climate change, but was not involved in the study, said, “Phenology is one of the best ways to measure the impact of changing climate. The value of this study is that it makes sense of diverse data sets, and points out the value of long-term observations of natural ecosystems.”
The study was supported by the University of California’s National Center for Ecological Analysis and Synthesis. Additional support came from the U.S. National Science Foundation; University of California, Santa Barbara; and the USA National Phenology Network. |
Published on March 21, 2009
HOW IMPORTANT IS THE CONCEPT OF DEIXIS TO LANGUAGE WRITTEN FOR PERFORMANCE? By Yusuf Kurniawan I. Introduction Language is one of the most fundamental aspects of human’s life. Without language one can not communicate properly. Since the antiquity or prehistoric time people had used language for communication. However, the form of the language is of course different from what we recognise today. Every language has been developing from time to time. They underwent evolution that people never realised. Even we could not imagine how languages become so complex as we speak today. Every tribe, nation and country has their own languages. Let alone, there are also a lot of vernaculars in every country that make languages become more varied and complex. We probably could not trace back how the languages exist at present were previously formed and shaped. Besides the rapid progression of languages in the world, communication devices like telegraph and telephone also have been diffusing so swiftly. People now can communicate very easily and quickly. However, the essence of the communication is actually not the devices but the message or information delivered through language. In practice, language is used by people ‘to refer to persons and things, directly or indirectly’. The first is called direct reference and the latter is called indirect reference (Mey,1996:89). Such references are used both in spoken and written language. II. Background to the Essay Problem
2 Spoken and written language sometimes can be ambiguous to a hearer or an addressee. He/she might misinterpret what a speaker says. Such case is often related with deixis. The use of reference in an utterance that is not clear or lacking of description often makes a hearer confused or even not understand. Why? Because unclear reference can cause the utterance delivered by the speaker ambiguous. In English there are finite and infinite nouns that both can serve as references. Sometimes it is easy to understand a context of discourse if the reference is clear, but sometimes it is difficult to comprehend because of the lack of description in the reference. Besides, a hearer occasionally is not familiar with the term, word or expression used by a speaker. And one might ignore the use of finite nouns instead of the infinite ones. Moreover, one often forgets especially in spoken language he/ she has a partner, that is the hearer. Or if it is written, the writer has readers who read his writing. So, sometimes the speaker has considered that the hearer has enough knowledge background and reference about what he is talking about or what he is writing. If the hearer does not have enough background of knowledge toward what is talked about, then it makes the language uttered more difficult to understand. However, in language written for performance, it is still not clear if the concept of deixis is important or not, influential or not. So that in this essay I am going to analyse how important the concept of deixis to language written for performance is. I shall use other language, besides English, i.e. Indonesian as comparative examples. III. Analysis ‘Deixis means different things to different people’ (Cruse,2000:319). Or according to Davis (2000) ‘deixis is equivalent to pointing. It derives from the same Greek
root that occurs in Digit, Index, Indexical, namely expressions whose reference is a function of the context of their utterance'. Some deictic forms like here, now, you, this and that are considered some of the most obvious linguistic elements which require contextual information for their interpretation (Brown & Yule, 2000:27). In his Meaning in Language (2000), Alan Cruse identifies five main types of deixis: person deixis, spatial deixis, temporal deixis, social deixis, and discourse deixis. In order to be able to interpret elements of discourse such as deixis, it is important to know who the speaker and the hearer are, and also the time and place or location of the making of the discourse (Brown & Yule, 2000:27). So in the rest of the analysis I am going to explain further about deixis, especially the main types of deixis, in relation to language written for performance.

III.1 Person deixis

Person deixis refers to the use of pronouns of the first, second and third person. The first person is the speaker, and the second person is the addressee or the hearer, while the third person is neither the speaker nor the hearer (Cruse, 2000:319). In this case we might look at the table below, which shows the comparison between personal pronouns in English and in Indonesian:

English
            Singular               Plural
1st person  I / me                 we / us
2nd person  you                    you
3rd person  he/him, she/her, it    they / them

Indonesian
            Singular               Plural
1st person  saya / aku             kami / kita
2nd person  kau / kamu             kalian
3rd person  dia                    mereka

(Cruse, 2000:320 & the writer's data, 2000)

The first-person personal pronouns in English and Indonesian are of the same type. However, the Indonesian personal pronouns do not have a special form for the object pronouns. So 'saya' and 'aku', which mean 'I', keep the same form whether they are used as a subject or an object, and either can substitute for the other. Likewise, 'kami' and 'kita', which mean 'we/us', do not change form when they are used in a sentence as a subject or an object. The second-person personal pronouns in English and Indonesian also share the same form, in the sense that they can be used as a subject or an object without alteration. The third-person personal pronouns in English and Indonesian, however, differ significantly. English third-person personal pronouns have clear genders when they occupy the subject or object position: she/her, he/him, and it. So it is quite clear and easy to find out what the personal pronoun refers to in a discourse. But in Indonesian there is no such distinction in the third-person personal pronoun. 'Dia' is neutral; it can be used to refer to he, she, or it, and when it becomes a subject or an object in a sentence it will not change. For more details let us look at some examples in the following sentences: a. - I gave her a gift on her birthday last year. - Saya memberi dia hadiah pada hari ulang tahunnya tahun lalu. b. - He lent me some books for his lecture. - Dia meminjami saya beberapa buku untuk mata kuliahnya. c. - Mom left us frozen outside the house.
5 - Ibu membiarkan kami/kita kedinginan di luar rumah. d. - We could not go to the party because of the rain. - Kami/kita tidak jadi pergi ke pesta karena hujan. e. - You never tell me about your girlfriend. - Kamu tidak pernah memberitahu saya tentang pacarmu. f. -I will tell you when we will leave. - Saya/aku akan memberitahu kamu kapan kita akan berangkat. g. - She kissed me and then cried. - Dia mencium saya dan kemudian menangis. h. - I kissed her and then left immediately. - Saya mencium dia dan kemudian segera pergi. (The writer’s data, 2000). In the examples a and b there is no difference between the first person personal pronoun in Indonesian when it is as a subject or an object. Also in the first person plural in c and d, kami/kita is similar in use while it is as a subject or object in a sentence. In the examples e and f it is the same of using the second person personal pronoun in English as well as in Indonesian. However, in the third person personal pronoun, the use of he, she, it, compared to the use of dia in Indonesian differs very much. From the examples given, it is clear that there is no difference in the use of the first and second-person personal pronouns in English and Indonesian. The difference is in using Indonesian third-person personal pronoun, since it merely has one form namely ‘dia’ for any gender and thing. Moreover, all of the Indonesian personal pronouns can be used as a subject and an object in a sentence. Therefore, it is quite difficult in Indonesian to trace the reference of the
6 personal pronouns especially in a long discourse. Moreover, what makes the Indonesian personal pronouns more difficult is the possessive pronoun for the third person. The suffix ‘-nya’ that is embedded in a noun is sometimes difficult to know, to what person or pronoun it refers to. Unlike the English possessive pronouns, for example ‘he’ will change into ‘his’, ‘she’ becomes ‘her’ and so on. To make it clearer we can look at the sentences below: (i) Amanda came to my flat but her book was left. (ii) Amanda datang ke flat saya tetapi bukunya tertinggal. (The writer’s data, 2000). The suffix ‘nya’ that shows the possessive pronoun of Amanda is not very clear since it has only one form, whatever the subject is. In the sentences above it is still clear that the suffix refers to Amanda. However, in a long text or discourse that involves more subjects and objects it will be sometimes difficult to know the suffix ‘-nya’ refers to. Now, look at the other example: (i) Baru-baru ini para ahli sosiologi menemukan sebuah fenomena baru bahwa banyak gadis Indonesia dibawah umur yang ‘menjual tubuhnya’ demi sesuap nasi. Santi dan Nela mengaku dia melakukan itu karena terpaksa, disebabkan perekonomian keluarganya yang sangat pas- pasan. Mereka menduga bahwa hal ini disebabkan oleh krisis ekonomi yang berkepanjangan di negeri ini. Dampaknya sangat terasa dalam kehidupannya. (ii) Recently, some sociologists discovered a new phenomenon that there are a lot of Indonesian adolescent girls who practise prostitution for the sake of some food. Santi and Nela confessed that they had to do that because of their family’s bad economy. They thought that it is caused by the
7 endless economic crisis in this country. Its impact is badly felt in their life. (The writer’s data, 2000) Note: ‘menjual tubuhnya’ = ‘selling their bodies’ = practising prostitution Suppose that the example given is a TV or radio news, there is a little bit confusion in the use of ’mereka’ as the plural personal pronoun that may refer to the sociologists or the girls. And the use of possessive adjective –nya embedded at the word keluarganya, dampaknya, kehidupannya can be ambiguous because the deictics is not quite clear. However, the ambiguity can be overcome if the hearer knows the context and has background knowledge of the topic being spoken. III.2 Spatial deixis According to AlanCruse (2000), spatial deixis refers to locative adverbs like here and there and demonstrative adjective like this and that. The use of such space markers is sometimes not clear, especially if the hearer is not directly involved face to face with the speaker during the conversation happen. In performance the use of adverbs and demonstrative adjectives can be ambiguous if it is not used appropriately. Usually, here, there, this and that can cause ambiguity when they are used in a long discourse. Moreover, if the hearer does not have enough background of reference about what is being talked about may think that the discourse ambiguous. The following examples give clearance that spatial deixis might cause ambiguity: I want to buy this, this and that. Even though the sentence is short, but it consists of deictic words. Somehow, a hearer will not
8 know what this and that really refer to if he is not at the same place when the speaker speaks. We just recognise that the speaker wants to buy something. And he could be anywhere when he utters such sentence, maybe in a butcher shop, in a market etc. III.3 Temporal deixis ‘Temporal deictics function to locate points or intervals on the time axis, using the moment of utterance as a reference point’ (Cruse,2000:321). Furthermore, Cruse says that the time axis is divided into three major groups, namely: a) before the moment of utterance, b) at the time of utterance, and c) after the time of utterance (2000:321). When we are talking about time it is something that we have to pay attention very well. Because if we do not clearly mention the time markers in a discourse, written or spoken, then it might be ambiguous to the hearer. The most common temporal deixis in English is now and then (Cruse, 2000). These two words are very flexible to use in a discourse. And they are very relative. For example, when I say : Now I am pursuing MA of Communications Studies in the University of Leeds. ‘Now’ refers to a certain length of time when I start the program until I hopefully can finish my program. ‘Now’ is not merely interpreted as at that moment when I uttered the sentence. However, let us compare with the following sentence: I am reading a book now. ‘Now’ in the sentence just refers to a point of time that is relatively short. If it is compared to Indonesian, there is not much difference in meaning. As is the case with ‘then’. It is a relative time marker in a discourse. Sometimes, in a certain context ‘then’ does not point to a definite time that make the hearer exactly sure. For instance, when I say: Ok, I will see you then. How could we interpret the word then
9 exactly? In this case the meaning of then is quite relative. One may interpret it as still on the same day but on the other occasion, or one may interpret it as on the different day, say the next day, on the same occasion. However, now and then do not cause much ambiguity in a discourse as long as the addressee know the context. The tenses in English can also be considered deictic (Cruse,2000). Especially when an utterance is not accompanied with definite time markers. For example in the sentence: She was very beatiful. ‘Was’ refers to the past time which the hearer does not exactly know when it was. It is different from when I say: When she was 20s she was very beautiful, it is much clearer. Another example: Last year my friend graduated from Leeds. Even though it can be quite clear to the hearer, and I think the hearer will not ask when it was exactly, there is an unclear point of time here. Last year might be inferred to the month on which the speaker says to the month in year before, or it can be interpreted on any month in the year before. III.4 Social deixis Social deixis according to Cruse (2000:322) ‘is exemplified by certain uses of the so-called TV (tul vous) pronouns in many languages.’ It is used to point out to a reference based on the class of the speaker and the person that he refers to. Such deixis is not too significant in English since it does not have distinct differentiation of referring to other person who has higher social level. But in Indonesian, social deixis plays an important role in indexing. Compare the following sentences:
10 (i) President Suharto founded the foundation in 1982. He resigned from presidency in May 1997. (ii) Presiden Suharto mendirikan yayasan itu pada tahun 1982. Beliau mengundurkan diri dari jabatan kepresidenan pada bulan Mei 1997. (The writer’s data,2000) The use of he that refers to President Suharto in (i) is not a problem in English. There is no indication of being impolite or improper to refer to such high social level of person. However, in Indonesian such use of pronoun does matter. Therefore, in sentence (ii) I use ‘beliau’ instead of ‘dia’. The word ’beliau’ in the sentence is equal to dia or he but it has higher social level. It will be improper or impolite if I use dia to address someone who has a higher social level. Or it can also be used to show a politeness of a speaker to someone he addresses, directly or indirectly. III.5 Discourse deixis Discourse deixis deals with the use of some words like ‘this to point to future discourse elements, that is, things which are about to be said’ (Cruse,2000:323). For example in ‘Watch this!’ And the word that to point to past discourse elements (Cruse,2000:323), such as in ‘That is not a good idea.’ Moreover, Cruse (2000) says that the sentence adverbs like therefore and furthermore can also generate ambiguous meaning. It means that if a hearer does not follow a discourse from the beginning, then if he finds such adverbs, he might find them deictic in the discourse. IV. Conclusion
In conclusion, based on the analysis, the concept of deixis is very important to language written for performance. One must consider what type of performance the deixis would be used in. If the performance is intended to deliver information, like TV or radio news, the written language should not use deixis that is too complicated. The description of the reference should be clear, because the aim of such a performance is not to entertain listeners but to present information. Therefore, the language used must be clear and straightforward so that listeners are not misled and do not misunderstand. However, if the performance has the purpose of entertaining, like a film, a theatre piece or an opera, then the language used may be combined with deictic words, because in a fiction or story deixis can generate thrill and curiosity in the readers or listeners. This can even be exploited as a way of attracting listeners to keep watching a film or reading a novel. One thing that differentiates spoken from written language is that we cannot replay what a speaker says. It is possible to request that a speaker repeat what he said in a conversation, but we cannot ask a TV broadcaster to repeat what she said on air, and it is impossible to request that an actor in an opera or film repeat his lines. In written language, by contrast, we can review the discourse as many times as we want in order to understand what the writer says.
REFERENCES

Brown, Gillian & Yule, George (1983). Discourse Analysis. Cambridge: Cambridge University Press.

Cruse, Alan (2000). Meaning in Language: An Introduction to Semantics and Pragmatics. New York: Oxford University Press.

Davis, B (2000). Discourse Analysis Lecture: Lecture 3 – Deixis and Reference. Lectured in October 2000.

Mey, Jacob L (1996). Pragmatics: An Introduction. Oxford: Blackwell Publishers Ltd.
Electrical engineers at the California Institute of Technology (Caltech) have developed a remarkably small and inexpensive silicon microchip that generates and radiates terahertz (THz) waves in a relatively unexplored region of the electromagnetic spectrum — between microwaves and far-infrared radiation — and can penetrate various materials without the ionizing damage of X-rays.
Electromagnetic waves in the range from 0.3 to 3 THz can easily penetrate materials and not only render image details in high resolution but also detect the chemical fingerprints of pharmaceutical drugs, biological weapons or illegal drugs or explosives, for instance. Existing terahertz systems, however, are bulky, expensive and may require demanding operating environments (e.g., very low temperature). So the Caltech team set out to explore whether novel techniques could be used to push low-cost integrated silicon technology into the terahertz frequency range.
Caltech’s proof-of-concept terahertz imager chip demonstrates a new and efficient way to generate power at frequencies beyond what is traditionally known as the cut-off frequency of the technology. “The chip encompasses the whole system, including the onchip radiators, which send out the signals directly from the chip,” explains Kaushik Sengupta, PhD, a post-doctoral scholar in the Electrical Engineering Department at Caltech. “Traditionally, integrating antennas inside silicon has been difficult. We not only overcame that challenge, but also integrated an array of 2D elements that can electronically beam-scan.” Electronic beam-scanning is very fast, as it removes the necessity to mechanically move the transmitter towards different parts of the scene.
According to the scientist, who will join the electrical engineering faculty at Princeton University in February 2013, having solved the challenges of signal generation with enough power, radiation and beam-control, the chip could enable “development of terahertz technology for short-range ultrafast wireless communication, see-through imaging for homeland security and contraband detection, bio-molecular spectroscopy, noninvasive quality control and possibly medical imaging.”
Caltech’s innovation constitutes the world's first integrated terahertz scanning arrays. Says Sengupta: “The most exciting part is that we have demonstrated a methodology that has shrunk a system that can take up a small benchtop to a chip that is only 2.7mm in length and breadth, while dissipating an order of magnitude lesser power.”
Designing single integrated antennas that are efficient yet small enough to fit into a microchip has been a technological bottleneck for some time. The Caltech team’s paradigm-shifting approach was to deconstruct traditional small single antenna systems and recombine multiples of circuits, electromagnetics and antennas to invent a structure — dubbed “distributed active radiator” — that generates THz waves at the desired power. “When we combine multiples of such elements, placed rightly in the silicon chip, the entire system radiates out very efficiently at the desired directions,” Sengupta says. “This is just a starting point. We plan to take this forward to realize its full potential and also invent new technologies that can further technology development in this area.”
Together with Ali Hajimiri, the Thomas G. Myers Professor of Electrical Engineering at Caltech, Sengupta co-authored the paper “A 0.28 THz Power-Generation and Beam-Steering Array in CMOS based on Distributed Active Radiators”, published in the IEEE Journal of Solid-State Circuits.
Image: Kaushik Sengupta, left, and Ali Hajimiri, California Institute of Technology
Written by Sandra Henderson, Research Editor, Novus Light Technologies Today |
Melbourne researchers are doing rocket science with clay.
They have developed a cheaper and more efficient way of making the complex, heat-resistant, ceramic parts needed to build tomorrow’s rockets and hypersonic airliners.
Using clever chemistry to modify a standard method of casting ceramics in a mould, they have developed an alternative to the traditional technique of forming these ceramics as blocks at high temperatures and pressures. Their new method, a form of slip casting, allows them to generate ultra-high-temperature ceramic components at lower temperatures and pressures, and the resulting components do not require extensive machining, saving time and energy.
“The ceramic pieces we have made are stronger and will survive to higher temperatures than those used on the Space Shuttle,” says Dr Carolina Tallon, who is developing the processing techniques with Prof George Franks of the Department of Chemical and Biomolecular Engineering at the University of Melbourne. Their work is part of the propulsion program of the Defence Materials Technology Centre to develop manufacturing capabilities of advanced materials within Australia.
Hypersonic flight will allow passengers to travel at up to five times the speed of sound (Mach 5). A flight between Melbourne and London would take about two hours. Jets have already been built that can achieve these speeds for a few seconds, but maintaining those conditions for an entire flight remains a challenge, and it is partly a materials challenge.
“In order to lengthen the duration of the hypersonic flight, we need to find a perfect match between aerodynamic design and the materials able to survive what that design entails,” Carolina says. “At Mach 5, for instance, several of the components of the vehicle will be at temperatures of above 3000 °C. At these temperatures, most of the materials typically used in the aerospace industry will already have melted or if they have not, their properties will be severely damaged and they will not perform correctly. “
Ultra-high-temperature ceramics are a potential solution since they can survive such extreme conditions. But finding the right material is not the end of the story. The actual components to be used in the vehicle have to be formed into complex shapes.
With traditional processing techniques, the nose of a rocket, for example, would be manufactured using very high temperatures and pressures to produce a very simple geometry such as a solid cylinder, which would then require extensive and costly machining. The new slip casting technique simply requires a mould into which a low viscosity slurry of a particular chemistry can be poured. The ceramic particles then pack efficiently into the required shape as the solvent is removed.
“Using these techniques, I can manufacture components that already resemble their final shape without machining. This can all be done at lower temperatures and pressures—and the end products are stronger than those made in the traditional manner. The preliminary tests showed that the components we made were able to survive temperatures above 3400 °C while keeping their shape and mechanical integrity,” Carolina says. “This technique is so versatile that we can fabricate anything from hip replacements to turbine rotors.”
This Australian technology is the result of extensive collaboration between researchers at universities, national laboratories and industry within the Defence Materials Technology Centre.
Carolina Tallon is one of 12 early career scientists unveiling their research to the public for the first time thanks to Fresh Science, a national program sponsored by the Australian Government. She is available for photos in her laboratory preparing pieces and operating the high temperature furnaces where the samples are manufactured.
Hypersonic flight video simulation available, explaining the extreme conditions and challenges of the different components prepared by Prof. Michael Smart (University of Queensland):
- Dr Carolina Tallon, [email protected]
- AJ Epstein, Science in Public, 0433 339 141, [email protected]
- Niall Byrne, Science in Public, 0417 131 977, [email protected]
- For University of Melbourne, contact Anne Rahilly, Media Officer, [email protected]
- For the Defence Materials Technology Centre (DMTC), contact Heidi Garth, Media Officer, 03 9214 4447 |
By: scribe Valdemir Mota de Menezes
In ancient Greece and Rome
During the Medieval Age and the Renaissance
During the Enlightenment
In the republican revolutions of the 18th century
19th to mid-20th century
In later times
Comparable ideas in non-Western societies
- Friendliness is a pro-social set of behaviors seen in people who are pleasant, agreeable, interested in others, genial, empathetic, considerate, and helpful. Not all civil behaviors are friendly. For example, duelling in response to an intolerable insult has been considered a civil behavior in many cultures, but it is not a friendly action.
- Politeness focuses on the application of good manners or etiquette. Because politeness is informed by cultural values, there is substantial overlap between what is polite and what is civil. However, if the action in question is not related to civic virtues, then it may be polite or rude, without strictly being considered civil or incivil.
- Social graces
- The social graces include deportment, poise, and fashion, which are unrelated to civility.
- Incivility is a general term for social behavior lacking in civic virtue or good manners, on a scale from rudeness or lack of respect for elders, to vandalism and hooliganism, through public drunkenness and threatening behavior. The word incivility is derived from the Latin incivilis, meaning "not of a citizen."
- The distinction between plain rudeness and perceived incivility as a threat depends on some notion of "civility" as structural to society; treating incivility as anything more ominous than bad manners therefore depends on an appeal to notions such as its antagonism to the complex concepts of civic virtue or civil society. It has become a contemporary political issue in a number of countries.
Bundala National Park is situated 245 kilometers (152 miles) southeast of Colombo. It was declared a wildlife sanctuary on 5 December 1969 and was upgraded to a national park on 4 January 1993. In 1991, Bundala became the first wetland in Sri Lanka to be designated a Ramsar Wetland, and in 2005 it was declared a Man and Biosphere Reserve by UNESCO, the fourth biosphere reserve in Sri Lanka.
Bundala National Park is an outstanding Important Bird Area (IBA) among the South Indian and Sri Lankan wetlands. 324 species of vertebrates have been recorded, including 32 species of fish, 15 species of amphibians, 48 species of reptiles, 197 species of birds and 32 species of mammals. 52 species of butterflies are among the invertebrates. Of the 197 species of birds, 58 are migratory. The National Bird Ringing Programme (NBRP) was launched in Bundala in 2005 as a collaboration between the Department of Wildlife Conservation and the Field Ornithology Group of Sri Lanka.
The national park is surrounded by five lagoons, namely the Bundala, Embilikala, Malala, Koholankala and Mahalewaya lagoons. A total of 383 plant species belonging to 90 families have been recorded from the park. The phytoplankton in all the lagoons is dominated by blue-green algae, including species such as Microcystis, Nostoc and Oscillatoria. Hydrilla is abundant in lagoons such as Embilikala and Malala. Water hyacinth, water lilies and Typha angustifolia reed beds are found in the marshes and streams. The vegetation mainly consists of Acacia scrub including Dichrostachys cinerea, Randia dumetorum, Ziziphus sp., Gymnosporia emarginata, Carissa spinarum, Capparis zeylanica and Cassia spp. The trees of the forest are Bauhinia racemosa, Salvadora persica, Drypetes sepiaria, Manilkara hexandra (Palu in Sinhalese), and the less common Chloroxylon swietenia.
The unique and complex wetland system attracts wintering birds that fly in to escape freezing climates and to rest their weather-beaten bodies. Bundala is therefore an ideal ground for observing colourful migratory water birds alongside the resident species, among which the greater flamingo takes the spotlight.
The greater flamingo (Phoenicopterus roseus), which visits in large flocks of over 1,000 individuals from the Rann of Kutch in India, is the highlight. Waterfowl such as the lesser whistling duck (Dendrocygna javanica) and garganey (Anas querquedula), cormorants such as the little cormorant (Phalacrocorax niger) and Indian cormorant (P. fuscicollis), large water birds such as the grey heron (Ardea cinerea), black-headed ibis (Threskiornis melanocephalus), Eurasian spoonbill (Platalea leucorodia), Asian openbill (Anastomus oscitans) and painted stork (Mycteria leucocephala), medium-sized waders such as Tringa and small waders such as Charadrius are the other avifaunal species present in large flocks. The black-necked stork (Ephippiorhynchus asiaticus), lesser adjutant (Leptoptilos javanicus) and Eurasian coot (Fulica atra) are rare birds that inhabit the national park.
A few Asian elephants still inhabit the forests of Bundala. Other mammals seen in the park are toque macaque, common langur, jackal, leopard, fishing cat, rusty-spotted cat, mongoose, wild boar, mouse deer, Indian muntjac, spotted deer, sambar, black-naped hare, Indian pangolin and porcupine.
Bundala harbors various forms of fish including salt water dispersants, marine forms, brackish water forms and freshwater forms. Bundala's herpetofauna includes two endemic species, a toad and a snake, Bufo atukoralei and Xenochrophis asperrimus. Among the reptiles are the mugger crocodile Crocodylus palustris, estuarine crocodile Crocodylus porosus, common monitor Varanus bengalensis, star tortoise Geochelone elegans, python Python molurus, rat snake Pytas mucosus, endemic flying snake Chrysopelea taprobana, cat snakes Boiga spp. and whip snakes Dryophis spp. The adjacent seashore of Bundala is a breeding ground for all five species of globally endangered sea turtles that migrate to Sri Lanka. Bundala is also one of only three such wetlands on the island, so if you're lucky enough, you will get a chance to behold a saltwater crocodile.
Cumans (also known as Kipchaks in the East, Kuns or Comani in the West, and Половці; Polovtsi in Ukraine). Turkic nomadic tribes racially related to the Pechenegs. At the turn of the 10th century the Cumans inhabited the southern part of Central Asia as far east as the upper Irtysh River. After forcing out the Torks, the Cumans migrated in the mid-11th century through the Black Sea steppes as far as the lower Danube River. In Eastern sources this territory was known as Dasht-i-Kipchak (the Cuman Steppe), while in Rus’ sources only its western part was called the Land of the Polovtsians. The western Cuman tribes were in constant contact with Kyivan Rus’, Byzantium, Hungary, and Bulgaria.
According to the chronicles, the first encounter between Rus’ and the Cumans took place in 1055 and resulted in a peace agreement. In 1061, however, the Cumans invaded the Pereiaslav principality and devastated it. In 1068, at the Alta River the Cumans crushed the combined forces of the three sons of Yaroslav the Wise—Iziaslav Yaroslavych, Sviatoslav II Yaroslavych, and Vsevolod Yaroslavych. From then on the Cumans repeatedly invaded Ukraine, devastated the land, and took captives whom they either kept as slaves or sold at slave markets in the south. Pereiaslav principality, Novhorod-Siverskyi principality, and Chernihiv principality were the most exposed regions. The Cumans inflicted the gravest losses on Ukraine at the end of the 11th century under the leadership of Khan Boniak, who was represented as a sorcerer in the Rus’ folklore of the time. The divided Ukrainian princes could not organize a common defense against the invader. Some of them, for example, Oleh (Mykhail) Sviatoslavych of Chernihiv, even sought aid from the Cumans in their internal squabbles. Only Volodymyr Monomakh, who at first ruled the Pereiaslav principality and then Kyiv principality, began to organize a common Rus’ coalition against the Cumans. As a result of an agreement reached at the Dolobske council of princes, several joint campaigns into the Cuman steppes took place, in 1103, 1109, and 1111. The nomads were defeated and pushed back to the Volga and Subcaucasia. After the death of Monomakh's son Mstyslav I Volodymyrovych (1132), who had successfully continued the policies of his father, the Cumans again became a threat to the Kyivan Rus’ principalities, although not as severe a threat as before. The Cuman encampments again moved up to the borders of the principalities, and Cuman incursions reached a peak in the 1180s under the leadership of Khan Konchak. The Ukrainian princes responded with joint campaigns in 1184–94. Although the separate action of Prince Ihor Sviatoslavych in 1185, which is described in the epic Slovo o polku Ihorevi, was a setback, the general outcome of the campaigns was favorable to the princes. Prince Roman Mstyslavych scored decisive victories over the Cumans in 1202 and 1204, which permitted Ukrainian colonization to expand about 100 km southward.
During the century and a half that the Cumans harassed Ukraine they did not form a state or even a common alliance of tribes. The basic unit of their society was the family, which consisted of blood relatives. Related families formed clans, which lived together in movable settlements called ‘Cuman towers’ by Kyivan Rus’ chroniclers. The tribes were larger social units that were led by khans. Each tribe had its own name, and their names—Toksobychi, Burchevychi, Yeltunovychi, Yetebychi, etc—are often mentioned in the Ukrainian chronicles. The various tribes were also distinguished by the territory they controlled. Thus, the seashore Cumans lived in the steppes between the mouths of the Dnieper River and the Dnister River; the coastal Cumans, on the coast of the Sea of Azov; the Dnieper Cumans, on both banks of the bend in the Dnieper Valley; and the Don Cumans, in the Don River Valley.
Animal husbandry was the main occupation of the Cumans. They raised horses, sheep, goats, camels, and cattle. In summer they moved north with their herds; in winter, south. A few of the Cumans also engaged in farming and trading and led a semisettled life. The main exports of the Cumans were animals, particularly horses, and animal products. The Cumans also played the role of middlemen in the trade between Byzantium and the East, which passed through the Cuman-controlled ports of Surozh, Oziv, and Saksyn. Several land routes between Europe and the Near East ran through Cuman territories: the Zaloznyi route, the Solianyi route, and the Varangian route. Cuman towns—Sharukan, Suhrov, and Balin—appeared in the Donets River Basin; they were inhabited, however, by other peoples besides the Cumans. Crafts were poorly developed among the Cumans and served only daily needs. Primitive stone figures called Stone babas (see Stone baba), which are found throughout southern Ukraine, were closely connected with the Cuman religious cult of shamanism. Like other Turkic tribes, the Cumans tolerated all religions; hence, Islam and Christianity spread quickly among them. As a result of their proximity to the Kyivan Rus’ principalities, the Cuman khans and prominent families began to Slavicize their names, for example, Yaroslav Tomzakovych, Hlib Tyriievych, Yurii Konchakovych, and Danylo Kobiakovych. Ukrainian princely families were often connected by marriage with Cuman khans, and this tended to dampen political conflicts. Sometimes the princes and khans waged joint campaigns; for example, in 1221 they attacked the trading town of Sudak on the Black Sea, which was held by the Seljuk Turks and which interfered with Rus’-Cuman trade.
Mongol forces led by the warlords Subutai and Jebe crossed the Caucasus in pursuit of Muhammad II, the shah of Khorezm, and defeated the Cumans in Subcaucasia in 1220. The Cuman khans Danylo Kobiakovych and Yurii Konchakovych fell in battle, while the others, led by Kotian Sutoiovych (Mstyslav Mstyslavych’s father-in-law), obtained aid from the Rus’ princes. The Rus’-Cuman forces were defeated, however, by the Mongols at the Kalka River in 1223. During the second Mongol invasion of Eastern Europe in 1237 the Cumans were defeated again. Most of them surrendered to the Mongols, while the others followed Kotian to Hungary and Bulgaria, where they assimilated into the local population. The Tatars became masters of the steppes of Ukraine. Although the Cumans were crushed, their cultural heritage passed on to the Tatars. The Mongol ruling circles, being a minority, adopted much of the Cuman language, traits, and customs, and the two peoples finally became assimilated through intermarriage. In the second half of the 13th and the first half of the 14th century the Cumans, together with the Tatars, adopted Islam.
[This article originally appeared in the Encyclopedia of Ukraine, vol. 1 (1984).]